Friday, June 26, 2009
Tips for Driving the Appropriate Use of Learning Technologies through Practical ISD
Like many powerful tools, learning technology can help or hurt – it depends on how it is used. Getting others to use these technologies appropriately can be a challenge. Those of you around when eLearning was young were able to observe mismatches between learning technologies and the solutions they were meant to deliver. eLearning often failed to match the needs and job context of the learner, especially when the requirement included some form of behavior or skill acquisition that had to be applied on the job. This problem is equally, if not more, prevalent today with the new social media and mobile learning technologies available. Many are building wikis, writing blogs, and syndicating podcasts while grouping them under the Learning 2.0 umbrella, but does their content really support learning, or is it a form of communication that simply provides access to information? If we are going to use or evangelize any learning technology, we need to get back to the basics – instructional systems design. If other learning professionals lack this basic competency, we need to help them.
Ideally, learning professionals have some training and background in instructional systems design (ISD), but we do not always see that in the real world. Even instructional design degree and certification programs tend to focus on the academic over the practical, leaving graduates unprepared to apply what they learned once they land a job. Anecdotally, how many folks do you know with degrees in an area of expertise who lack the real-world experience required to apply it effectively? The old theory versus reality problem… At the same time, there is a solid basis for the application of ISD, and that's the trick: applying ISD at practical and appropriate levels.
As learning technologists, practical ISD at its simplest tells us we need to help stakeholders and other learning professionals focus on alignment with business goals, business tools, and the work environment. Then we need to identify the needs of the learner by role and what must be accomplished on the job in the ideal world. We need to determine whether a training or learning solution is required at all (it is not when the problem is systemic or motivational), create a profile of the learner that includes what they do in the real world, and map all of that to the type of content that will be required (auditory, visual, kinesthetic, blended). Factor in budget, obstacles, and time-to-proficiency requirements, and we can prescribe a set of learning technologies that meets those needs. As learning technologists, we need to push this approach.
If a learning technologist is not available, job aids like tool matrices or decision workflows can help stakeholders and development teams make decisions where learning technology is not a core competency. Later on, though, experts in the required technology need to be involved – especially in the analysis, design, and development phases. Learning technologists or experts will need to hold stakeholders' hands when they are too far outside their comfort zones.
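To make the tool-matrix idea concrete, here is a rough sketch in Python of what such a job aid might look like. The criteria, weights, and technology profiles are purely illustrative assumptions on my part, not a prescription – a real matrix would be built from your own analysis.

```python
# Hypothetical tool-selection job aid: score each learning technology against
# the needs surfaced during analysis. Criteria, weights, and profiles are
# illustrative assumptions only.

NEEDS = {                  # weight of each need from the (hypothetical) analysis
    "skill_practice": 3,   # behavior/skill must transfer to the job
    "just_in_time": 2,     # learners need answers in the flow of work
    "collaboration": 1,    # peers must share and refine knowledge
}

TECHNOLOGIES = {           # how well each option serves each need (0-3)
    "self-paced eLearning": {"skill_practice": 2, "just_in_time": 1, "collaboration": 0},
    "wiki":                 {"skill_practice": 0, "just_in_time": 3, "collaboration": 3},
    "simulation":           {"skill_practice": 3, "just_in_time": 0, "collaboration": 0},
}

def rank(needs, technologies):
    """Return technologies ordered by weighted fit to the stated needs."""
    scores = {
        name: sum(weight * profile.get(need, 0) for need, weight in needs.items())
        for name, profile in technologies.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for name, score in rank(NEEDS, TECHNOLOGIES):
    print(f"{name}: {score}")
```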
If starting fresh or embarking on bleeding-edge technologies, do some research to benchmark what has been done successfully by others. When you have identified which technology will meet your needs, incubate and pilot. Start with small groups and the low-hanging fruit during the initial test phases, then focus on the larger wins with larger groups as you proceed. Over-communicate your wins, but document your mistakes and don't forget the lessons learned – they drive efficiency and cost savings later on.
Lastly, ensure you communicate the purpose and value of the learning technologies up and down the food chain, as this will drive adoption with your stakeholders and learners, along with the prescriptive usage that the learning professionals at your organization will need to drive.
Labels: ADDIE, analysis, blogs, design, development, implementation, instructional systems design, ISD, learner, learning context, learning technologies, needs, objectives, organizational, role, technology, wikis
Thursday, May 21, 2009
Instructional Design and Technology: Where’s the Beef?

"Where's the beef?"
Gagne actually pioneered this mode of instruction in the mid-1960s. For those unfamiliar, programmed instruction (PI) models assess a learner's needs through some form of testing and then loop back, branch forward, or follow multiple paths based on performance against pass/fail criteria. More robust PI models incorporate both pretesting and posttesting. The biggest advantage of this design model is that delivery of content is customized to the learner's needs, because only filtered content is delivered (based on what the learner did not pass). In essence, a form of needs assessment is built into course delivery and the user experience. How cool is that?
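For readers who have never built one of these, here is a minimal sketch in Python of the branching logic described above – the module names, the 80% cut-off, and the simulated test scores are all assumptions for illustration, not pulled from any particular tool.

```python
import random

# Programmed-instruction branching, sketched: a pretest filters which modules
# get delivered, and a posttest loops the learner back into any module that
# was not passed. All names, scores, and thresholds here are hypothetical.

PASS_THRESHOLD = 0.80
MODULES = ["conflict_basics", "active_listening", "negotiation"]

def take_test(module):
    """Stand-in for a real pretest/posttest; returns a score from 0.0 to 1.0."""
    return random.random()

def run_course(modules):
    # Pretest filter: skip any module the learner already masters.
    to_deliver = [m for m in modules if take_test(m) < PASS_THRESHOLD]
    for module in to_deliver:
        while True:
            print(f"Delivering {module}")
            if take_test(module) >= PASS_THRESHOLD:  # posttest gate
                break                                # passed - move on
            # failed - loop back into the module

run_course(MODULES)
```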
Development tools like Authorware or KnowledgeTRACK were great at facilitating this mode of design. Unfortunately, these tools are no longer available (and in all honesty, they were cumbersome to use). I certainly don't see this approach in most of the Articulate or Captivate content I've come across lately.
Meanwhile, social and Learning 2.0 suites are starting to bake PI functionality into learning paths, where not only can a test assess performance, but a virtual instructor or coach can pass or fail the learner. Others even advance the learner automatically if they simply complete a task as instructed. More of these suites are capable of housing SCORM modules or acting as a friendlier front end to what is typically presented in the learning path of a traditional learning management system (LMS). For an example, check out the one by Q2 Learning at http://www.q2learning.com/. Nonetheless, this is a methodology we should revive. Take another look at the process illustrated above and decide for yourself.
Have a comment or question? Leave one and I will get back to you.
Monday, April 6, 2009
Evaluation: Best Practices in Implementation

Capturing data with this type of design overcomes extraneous variables such as:
- History - how the timing of external events (incentives, change management, market forces, etc.) affects training measures.
- Maturation - how the simple passing of time allows participants to learn what is required and change behavior and results without training.
- Selection - how individual differences between groups affect measures.
Along with the reporting of descriptive data, an analysis of variance (ANOVA) to measure group differences and an analysis of covariance (ANCOVA) to control for pretest differences and sharpen comparisons across testing intervals should be performed. These tests measure the changes and effect sizes within and between groups across the points at which the treatment (in our case, the training event) was administered. If consistent and measurable changes are noted between the pretesting conditions and the posttests for each group, the results support attributing changes in behavior and/or performance to the training rather than to other factors.
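Here is a minimal sketch of how that analysis might look in Python, assuming a pretest/posttest design with a trained group and an untrained control group. The column names and toy scores are hypothetical – swap in your own exported data.

```python
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical pretest/posttest scores for a trained group and a control group.
df = pd.DataFrame({
    "group":    ["trained"] * 4 + ["control"] * 4,
    "pretest":  [52, 48, 55, 50, 51, 49, 53, 50],
    "posttest": [78, 74, 81, 76, 55, 52, 58, 54],
})

# One-way ANOVA on posttest scores: is there a difference between the groups?
trained = df.loc[df.group == "trained", "posttest"]
control = df.loc[df.group == "control", "posttest"]
f_stat, p_value = stats.f_oneway(trained, control)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# ANCOVA: posttest by group, with the pretest as a covariate to control for
# baseline differences between the groups.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```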
Risks if not conducted or conducted improperly:
- Learning events that remain stale, stagnant, and out of date due to ignorance of the evolving needs of the course participants.
- Training products that result in participant dissatisfaction due to unidentified problems with the course content, materials, delivery medium, learning environment and other factors relating to participant reactions to the course.
- Learning events that produce no transfer and retention of knowledge and skill.
- Training products that result in no significant impact on on-the-job behaviors and performance outcomes.
- A training product that produces no Learning ROI.
- Results (good or bad) that are falsely attributed to the learning events when events other than training are responsible.
- Attributing degradation of course effectiveness to the learning event delivered as opposed to the changing needs of the course participants.
Thursday, April 2, 2009
Evaluation: Level 5 - Return on Investment (ROI)
Answers the question – What is the return-on-investment (ROI) for the stakeholder based on the outcome measures identified during organizational analysis? These are the cost-returns attributed to performance. Metrics can include profit gains and cost reductions such as savings in travel, time saved on task, reduction in errors, reduction in resource usage, decreases in downtime, quality in production and customer service, etc… Data from reaction, learning, behavior, and/or results evaluations can be mapped back into an LMS, a best-practices library, and/or the design to continuously improve the learning value of the training product and to adapt to the ever-changing needs of the stakeholder in a competitive global economy.
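The arithmetic itself is simple; the hard part is monetizing the benefits. Here is a minimal sketch using the standard benefit-cost ratio and ROI formulas – the cost and benefit figures are hypothetical placeholders.

```python
# Level 5 ROI, sketched. In practice the benefit figure comes from monetizing
# the outcome measures identified during organizational analysis (time saved,
# error reduction, travel avoided, etc.). All numbers below are hypothetical.

program_costs = 120_000        # analysis, design, development, delivery, learner time
monetized_benefits = 300_000   # e.g., annualized cost savings plus added revenue

benefit_cost_ratio = monetized_benefits / program_costs
roi_percent = (monetized_benefits - program_costs) / program_costs * 100

print(f"BCR: {benefit_cost_ratio:.2f}")   # 2.50
print(f"ROI: {roi_percent:.0f}%")         # 150%
```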
Sunday, March 29, 2009
Evaluation: Level 4 - Results/Performance
Answers the question – Did participants perform better, and by how much? While the previously described behavior measures assess how and what gets done, performance measures assess the outcomes. Think of the difference between how a baseball player swings a bat (a behavior-based skill – what he does) and whether he hits the 'sweet spot' or 'knocks one over the fence' (performance outcomes). Performance measures tend to be taken at the individual or departmental level. Measured on an interval basis (by day, week, month, quarter, year, etc.), common examples can include increases in:
- Production
- Bookings
- Revenue
- Margin
- Number of customer interactions
- Average time spent with each customer
- Closures to sale
- Customer satisfaction
- Percentage of total tickets closed
- Project milestones hit
Or decreases in:
- Product defects
- Over expenditure
- Time spent on task
- Customer complaints
- Issues/tickets opened
- Project milestones missed
Tuesday, March 24, 2009
Evaluation: Level 3 - Application/Behavior
Answers the questions – Do participants apply what they learned after training, back on the job? How are behavior-based learning objectives applied in the work environment? Do participants perform targeted skill sets better as a result of the training than if they had not taken the training at all? Behavior-based evaluation is a measure of efficacy, practicality, and utility based on transfer of skills to the job.
Evaluation of application can be achieved through baselining at pretest and/or posttest and follow-up tracking of on-the-job behaviors – measurable changes in behavior consistent with the skill-based learning objectives of a course support the effects of training on the job. Common methods for tracking on-the-job behaviors include trained judges using Behavior Observation Scales (BOS) or the use of performance support systems. Sound methodology overcomes extraneous factors so that training is supported as a root cause of behavioral improvement. This post-testing methodology also has the added value of reinforcing training based on the principles of organizational behavior modification (OBM).
Examples can include:
- Any skill measured at Level 2 immediately after the training event, measured again at Level 3 as a follow-up back on the job.
- Surveying learners and managers to capture their perception of applied skills in on-the-job settings.
- A manager targets a required behavior such as conflict management based on an observed and justified need, documents it in a performance management system, and assigns it as a development goal > the employee takes the assigned course(s) on conflict management > the manager observes behavior/progress, provides feedback, and documents it in the performance management system.
This level of assessment requires methodological rigor across the training and follow-up cycle and is typically implemented as a high-end solution.
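As a small illustration of the BOS approach, here is a sketch in Python of comparing baseline and follow-up ratings for the conflict-management example – the behaviors, the 1-5 scale, and the ratings are all hypothetical.

```python
from statistics import mean

# Hypothetical Behavior Observation Scale (BOS): trained judges rate a few
# observable conflict-management behaviors (1-5) before training and again
# during on-the-job follow-up.
BOS_ITEMS = [
    "acknowledges other viewpoints",
    "stays calm under pushback",
    "proposes a concrete resolution",
]

baseline_ratings = {"acknowledges other viewpoints": 2,
                    "stays calm under pushback": 3,
                    "proposes a concrete resolution": 2}
followup_ratings = {"acknowledges other viewpoints": 4,
                    "stays calm under pushback": 4,
                    "proposes a concrete resolution": 3}

baseline_score = mean(baseline_ratings[item] for item in BOS_ITEMS)
followup_score = mean(followup_ratings[item] for item in BOS_ITEMS)

print(f"Baseline BOS average:  {baseline_score:.1f}")
print(f"Follow-up BOS average: {followup_score:.1f}")
print(f"Change: {followup_score - baseline_score:+.1f}")  # positive change supports transfer
```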
Friday, March 20, 2009
Evaluation: Level 2 - Learning
Answers the question - Do participants know more and can they do more after the learning event than before? This evaluation is an assessment of immediate transfer of knowledge and skill. Although it is often conducted at the posttest level only, it should be conducted by baselining through a pretest and measuring change through a posttest using module-level quizzes or course-level exams. A typical example might have a learner take a pretest before a course or each module > run through the course or module > then take a posttest to measure knowledge or skill improvement. Minimal and optimal criteria levels should be identified during Analysis and specified in the Design so thresholds (cut-offs) for score attribution can be applied to measured changes when this level of evaluation is Developed as part of the training solution.
Other examples might include:
- Having an instructor evaluate (observe and rate) transfer in class.
- Watch-me > Try-me > Test-me approaches created with simulation tools and LMS tracking (SCORM) of outcomes.
- Problem-resolution scenarios built into a course – a problem is presented > the learner selects a solution > responses and performance are tracked through the LMS (SCORM).
- Creating broken environments in certification scenarios and having learners/candidates fix them, with an instructor or remote proctor verifying success.
Gagne's model for implementing posttests as part of a course loops the learner back into a module or course if they do not pass the test's threshold and is the basis for posttest delivery in eLearning today. More robust eLearning design also incorporates the pretest component as a filtering mechanism, prescribing only the content the learner did not pass. This starts to look like the programmed instruction model, which augments the training event with the benefits of a custom learning experience and a feedback mechanism.
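Here is a minimal sketch of the Level 2 scoring logic in Python – baseline with a pretest, measure the change with a posttest, and apply the minimal/optimal criteria set during Design. The thresholds and scores are hypothetical.

```python
# Level 2 scoring, sketched. The minimal/optimal cut-offs would come from the
# Analysis and Design phases; the values here are illustrative assumptions.

MINIMAL_CRITERION = 0.70   # cut-off for crediting the module
OPTIMAL_CRITERION = 0.90   # target for full mastery

def evaluate_learning(pretest, posttest):
    """Classify a learner's result and report the measured gain."""
    gain = posttest - pretest
    if posttest >= OPTIMAL_CRITERION:
        status = "mastered"
    elif posttest >= MINIMAL_CRITERION:
        status = "passed"
    else:
        status = "needs remediation"   # loop the learner back, per the PI model
    return status, gain

status, gain = evaluate_learning(pretest=0.55, posttest=0.85)
print(f"Status: {status}, gain: {gain:+.0%}")   # Status: passed, gain: +30%
```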
Monday, March 16, 2009
Evaluation: Level 1 - Reaction
Answers the questions – What did participants like about the learning event? How do participants react to the content, materials, instructor, medium, environment, etc.? These are satisfaction and quality measures. These measures are best obtained through the electronic delivery of questionnaires and should either be built into the courseware solution or supported and tracked through a Learning Management System (LMS) so they can be linked, filtered, and reported according to the course and learner demographics. Typical reaction-level evaluations assess quality of content, delivery, learning environment, and operational support. Reaction measures are the most commonly used and relied-upon level of evaluation in the industry. They are highly useful when the learner is a customer, as higher levels of customer satisfaction predict the likelihood that they will return for additional training and thus drive revenue. Although highly relied upon, 35 years of research has noted no relationship whatsoever between a learner's satisfaction with the training experience and his or her behavior and performance on the job.
Thursday, March 12, 2009
Evaluation: Level 0 - Participation
Answers the question - Who did what, broken down by medium type, subject matter, population, or other variables, along with the percentage completed? Participation measures usage broken down by the demographics of the learner base and the parameters of the learning solution. These variables need to be identified in advance, tracked through data stores and learning management systems, and are usually reported on an interval basis. Examples include the number of learners trained and the number of courses delivered, broken down by medium or subject by month, quarter, or year.
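A minimal sketch of what that rollup might look like in Python, assuming completion records exported from an LMS – the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical LMS export: who did what, by medium and subject, per quarter.
completions = pd.DataFrame({
    "quarter": ["Q1", "Q1", "Q1", "Q2"],
    "medium":  ["eLearning", "instructor-led", "eLearning", "eLearning"],
    "subject": ["compliance", "sales", "sales", "compliance"],
    "learners_trained": [420, 35, 180, 510],
    "pct_completed":    [88.0, 100.0, 76.0, 91.0],
})

# Level 0 participation rollup on an interval (quarterly) basis.
report = completions.groupby(["quarter", "medium", "subject"]).agg(
    learners_trained=("learners_trained", "sum"),
    pct_completed=("pct_completed", "mean"),
)
print(report)
```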
Sunday, March 8, 2009
Evaluation
“Knowledge is power…”
– Sir Francis Bacon – Founder of the scientific method
Why should anyone know how effective their training efforts were? Why should anyone take the time to evaluate learning initiatives? The answers are simple… So stakeholders can:
- Baseline current performance
- Assess what employees, partners, and customers are achieving
- Set attainable goals based on data
- Track their investment in learning
- Continuously improve what they do by asking the right questions
Levels 0 through 5 (Kirkpatrick and Phillips):
Level 0 - Participation
Level 1 - Reaction
Level 2 - Learning
Level 3 - Application
Level 4 - Results
Level 5 - ROI
A description of each level will follow shortly...