Son-of-Fire's Learning Landscape

Showing posts with label evaluation.

Monday, April 6, 2009

Evaluation: Best Practices in Implementation

A scientific approach to evaluation must overcome extraneous variables: things other than the learning event that may have caused a change in behavior or performance. One sound approach combines learning measures with application and/or performance measures in a Stepped Within-Groups Design, which includes secondary (and possibly tertiary) posttesting on an interval basis. This is easier to set up than it sounds.

Capturing data with this type of design overcomes extraneous variables such as history, where the timing of external events (incentives, change management, market forces, etc.) affects training measures; maturation, where the passing of time allows participants to learn what is required and affect behavior and results without training; and selection, where individual differences between groups affect measures.

Along with the reporting of descriptive data, an analysis of variance (ANOVA) should be performed to measure group differences, and an analysis of covariance (ANCOVA) to control for baseline (pretest) differences and add statistical power across testing intervals. These tests measure the changes and effect sizes within each group and between the times the treatment (Tx; here, the training event) was administered. If consistent and measurable changes are noted between the pretest conditions and the posttests for each group, the results support attributing changes in behavior and/or performance to the training and not to other factors.
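
To make this concrete, here is a minimal sketch of the analysis step, assuming pretest and posttest scores have already been collected for a trained group and a comparison group. The column names, the scores, and the use of pandas/statsmodels are illustrative assumptions rather than a prescribed toolchain, and a real stepped design would also include the interval posttests described above.

```python
# Minimal sketch: comparing posttest scores across a trained and a
# comparison group. Column names and scores are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["trained"] * 4 + ["comparison"] * 4,
    "pretest":  [62, 58, 70, 65, 61, 59, 68, 66],
    "posttest": [81, 74, 88, 79, 63, 60, 70, 67],
})

# ANOVA: do posttest scores differ between the groups?
anova = smf.ols("posttest ~ C(group)", data=df).fit()
print(sm.stats.anova_lm(anova, typ=2))

# ANCOVA: same comparison while controlling for pretest scores,
# which removes baseline differences and adds statistical power.
ancova = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```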

Risks if not conducted or conducted improperly:
  • Learning events that remain stale, stagnant, and out of date due to ignorance of the evolving needs of the course participants.
  • Training products that result in participant dissatisfaction due to unidentified problems with the course content, materials, delivery medium, learning environment and other factors relating to participant reactions to the course.
  • Learning events that produce no transfer and retention of knowledge and skill.
  • Training products that have no impact on, or produce no significant increase in, on-the-job behaviors and performance outcomes.
  • A training product that produces no Learning ROI.
  • Results (good or bad) that are falsely attributed to the learning events when events other than training are responsible.
  • Attributing degradation of course effectiveness to the learning event delivered as opposed to the changing needs of the course participants.

Thursday, April 2, 2009

Evaluation: Level 5 - Return on Investment (ROI)

Answers the question – What is the return on investment (ROI) for the stakeholder, based on the outcome measures identified during organizational analysis? These are the cost returns attributed to performance. Metrics can include profit gains and cost reductions such as savings in travel, time saved on task, reduction in errors, reduction in resource usage, decreases in downtime, improved production quality and customer service, etc. Data from reaction, learning, behavior, and/or results evaluations can be mapped back into an LMS, a best-practices library, and/or the design to continuously improve the learning value of the training product and to adapt to the ever-changing needs of the stakeholder in a competitive global economy.
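
As a simple illustration, the sketch below applies the commonly cited training ROI formula (net program benefits divided by fully loaded program costs, expressed as a percentage). The dollar figures are hypothetical placeholders.

```python
# Sketch of the standard training ROI calculation:
# ROI (%) = (net program benefits / program costs) * 100.
# The benefit and cost figures are hypothetical placeholders.

def training_roi(total_benefits: float, total_costs: float) -> float:
    """Return ROI as a percentage of fully loaded program costs."""
    net_benefits = total_benefits - total_costs
    return net_benefits / total_costs * 100

# Example: $240,000 in monetized benefits (time saved, fewer errors,
# reduced travel, etc.) against $150,000 in program costs.
print(f"ROI: {training_roi(240_000, 150_000):.0f}%")  # ROI: 60%
```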

Sunday, March 29, 2009

Evaluation: Level 4 - Results/Performance

Answers the questions – Did participants perform better, and by how much? While the previously described behavior measures assess how and what gets done, performance measures assess the outcomes. Think of the difference between how a baseball player swings a bat, a behavior-based skill (what he does), and whether he hit the 'sweet spot' or 'knocked one over the fence' (performance outcomes). Performance measures tend to be taken at the individual or departmental level. Measured on an interval basis (by day, week, month, quarter, year, etc.), common examples include increases in:
  • Production
  • Bookings
  • Revenue
  • Margin
  • Number of customer interactions
  • Average time spent with each customer
  • Closures to sale
  • Customer satisfaction
  • Percentage of total tickets closed
  • Project milestones hit
Or decreases in:
  • Product defects
  • Over expenditure
  • Time spent on task
  • Customer complaints
  • Issues/tickets opened
  • Project milestones missed
Again, based on the nature of these measures, sound methodology includes pretest and posttest measures while accounting for extraneous variables.
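
For illustration only, here is one way such interval metrics might be summarized before and after training; the grouping, column names, and numbers are hypothetical assumptions rather than a prescribed report.

```python
# Sketch: summarizing a performance metric before and after training
# per group. Group labels, the metric, and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "group":    ["trained", "trained", "comparison", "comparison"] * 2,
    "period":   ["pre", "post"] * 4,
    "bookings": [40, 55, 42, 44, 38, 52, 41, 43],
})

summary = records.pivot_table(index="group", columns="period",
                              values="bookings", aggfunc="mean")
summary["change"] = summary["post"] - summary["pre"]
print(summary)
```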

Tuesday, March 24, 2009

Evaluation: Level 3 - Application/Behavior

Answers the questions – Do participants apply what they learned back on the job after training? How are behavior-based learning objectives applied in the work environment? Do participants perform targeted skill sets better as a result of the training than they would have without it? Behavior-based evaluation is a measure of efficacy, practicality, and utility based on transfer of skills to the job. Evaluation of application can be achieved through a baseline at pretest and/or posttest plus follow-up tracking of on-the-job behaviors – measurable changes in behavior consistent with a course's skill-based learning objectives support the effects of training on the job. Common methods for tracking on-the-job behaviors include trained judges using Behavior Observation Scales (BOS) or the use of performance support systems. Sound methodology overcomes extraneous factors so that training is supported as a root cause of behavioral improvement. This post-testing methodology also has the added value of reinforcing training based on the principles of organizational behavior modification (OBM). Examples include:
  • Any skill measured at Level 2 immediately after the training event, re-measured for Level 3 as a follow-up back on the job.
  • Surveying learners and managers to capture their perception of applied skills in on-the-job settings.
  • A manager who targets a required behavior such as conflict management based on an observed and justified need > documents it in a performance management system and assigns it as a development goal > the employee takes the assigned course(s) on conflict management > the manager observes behavior and progress, provides feedback, and documents it in the performance management system.
This level of assessment requires methodological rigor across the training and follow-up cycle and is typically implemented as a high-end solution.
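
As a rough sketch of how BOS data could be tallied, the snippet below records hypothetical baseline and follow-up ratings and averages them per phase; the behaviors, the 1-5 scale, and the data are assumptions for illustration only.

```python
# Sketch of tallying Behavior Observation Scale (BOS) ratings captured
# on the job. Behaviors, the 1-5 scale, and ratings are hypothetical.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    learner: str
    behavior: str   # skill-based learning objective being observed
    rating: int     # e.g., 1 (almost never) to 5 (almost always)
    phase: str      # "baseline" or "follow-up"

observations = [
    Observation("j.doe", "acknowledges other viewpoints", 2, "baseline"),
    Observation("j.doe", "acknowledges other viewpoints", 4, "follow-up"),
    Observation("j.doe", "proposes a workable compromise", 1, "baseline"),
    Observation("j.doe", "proposes a workable compromise", 3, "follow-up"),
]

def average_rating(obs, phase):
    """Mean rating across all observed behaviors for one phase."""
    return mean(o.rating for o in obs if o.phase == phase)

print("baseline: ", average_rating(observations, "baseline"))   # 1.5
print("follow-up:", average_rating(observations, "follow-up"))  # 3.5
```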

Friday, March 20, 2009

Evaluation: Level 2 - Learning

Answers the question - Do participants know more, and can they do more, after the learning event than before? This evaluation is an assessment of immediate transfer of knowledge and skill. Although it is often conducted at the posttest level only, it should be conducted by baselining with a pretest and measuring change with a posttest using module-level quizzes or course-level exams. A typical example might have a learner take a pretest before a course or each module > run through the course or module > then take a posttest to measure knowledge or skill improvement. Minimal and optimal criteria levels should be identified during Analysis and specified in the Design so thresholds (cut-offs) can be applied to measured changes while this level of evaluation is Developed as part of the training solution. Other examples include:
  • Having an instructor evaluate (observe and rate) transfer in class.
  • Watch-me > Try-me > Test-me approaches created with simulation tools and LMS tracking (SCORM) of outcomes.
  • Problem-resolution scenarios built into a course – a problem is presented > the learner selects a solution > responses and performance are tracked through the LMS (SCORM).
  • Broken environments in certification scenarios that learners/candidates must fix, with an instructor or remote proctor verifying success.
Gagné's model for implementing posttests as part of a course loops the learner back into a module or course if they do not pass the test's threshold and is the basis for posttest delivery in eLearning today. More robust eLearning design also incorporates the pretest as a filtering mechanism, prescribing only the content the learner did not pass. This scenario resembles the programmed instruction model, which augments the training event with the benefits of a custom learning experience and a feedback mechanism.
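
The prescriptive-pretest and loop-back logic described above might be sketched as follows; the module names, scores, and the 0.80 cut-off are hypothetical, and in practice results would be reported to the LMS (for example via SCORM) rather than printed.

```python
# Sketch of the prescriptive pretest and loop-back logic.
# Module names, scores, and the 0.80 cut-off are hypothetical.

PASSING_SCORE = 0.80

def prescribe_modules(pretest_scores):
    """Return only the modules whose pretest score fell below the cut-off."""
    return [m for m, s in pretest_scores.items() if s < PASSING_SCORE]

def run_module(module, take_posttest):
    """Loop the learner through a module until its posttest is passed."""
    attempts = 0
    while True:
        attempts += 1
        if take_posttest(module) >= PASSING_SCORE:
            return attempts  # threshold met; move to the next module

pretest = {"module 1": 0.92, "module 2": 0.60, "module 3": 0.75}
print(prescribe_modules(pretest))        # ['module 2', 'module 3']

scores = iter([0.70, 0.85])              # first attempt fails, second passes
print(run_module("module 2", lambda m: next(scores)))  # 2
```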

Monday, March 16, 2009

Evaluation: Level 1 - Reaction

Answers the questions – What did participants like about the learning event? How do participants react to the content, materials, instructor, medium, environment, etc.? These are satisfaction and quality measures. They are best obtained through electronically delivered questionnaires and should either be built into the courseware solution or supported and tracked through a Learning Management System (LMS) so they can be linked, filtered, and reported according to course and learner demographics. Typical reaction-level evaluations assess quality of content, delivery, learning environment, and operational support. Reaction measures are the most commonly used and relied-upon level of evaluation in the industry. They are highly useful when the learner is a customer, as higher customer satisfaction predicts the likelihood of returning for additional training and thus drives revenue. Although heavily relied upon, 35 years of research has found no relationship whatsoever between a learner's satisfaction with the training experience and his or her behavior and performance on the job.
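
As one possible illustration, reaction scores could be rolled up by course and learner role along these lines; the courses, roles, rating categories, and values are hypothetical.

```python
# Sketch: rolling up reaction-survey ratings by course and learner role.
# Course names, roles, rating categories, and scores are hypothetical.
import pandas as pd

responses = pd.DataFrame({
    "course":   ["Selling 101", "Selling 101", "Selling 101", "Safety 200"],
    "role":     ["sales", "sales", "support", "support"],
    "content":  [4, 5, 3, 4],   # 1-5 satisfaction ratings
    "delivery": [5, 4, 3, 5],
})

report = responses.groupby(["course", "role"])[["content", "delivery"]].mean()
print(report)
```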

Thursday, March 12, 2009

Evaluation: Level 0 - Participation

Answers the question - Who did what, broken down by medium type, subject matter, population, or other variables, along with the percentage completed? Participation measures usage broken down by the demographics of the learner base and the parameters of the learning solution. These variables need to be identified in advance and tracked through data stores and learning management systems, usually on an interval basis. Examples include the number of learners trained and the number of courses delivered, broken down by medium or subject by month, quarter, or year.
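
A minimal sketch of such a breakdown, assuming a list of hypothetical completion records tagged with delivery medium and month:

```python
# Sketch: counting course completions by delivery medium and month.
# The completion records are hypothetical placeholders.
from collections import Counter

completions = [
    ("eLearning", "2009-03"), ("eLearning", "2009-03"),
    ("instructor-led", "2009-03"), ("eLearning", "2009-04"),
]

counts = Counter(completions)
for (medium, month), n in sorted(counts.items()):
    print(f"{month}  {medium:<15} {n}")
```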

Sunday, March 8, 2009

Evaluation

“Knowledge is power…”
– Sir Francis Bacon – Founder of the scientific method

Why should anyone know how effective their training efforts were? Why should anyone take the time to evaluate learning initiatives? The answers are simple… So stakeholders can:
  • Baseline current performance
  • Assess what employees, partners, and customers are achieving
  • Set attainable goals based on data
  • Track their investment in learning
  • Continuously improve what they do by asking the right questions
The most effective way to improve the development and delivery of courseware is through Evaluation. The goals are to assess the value of the course to the stakeholder and to continuously improve that value based on valid metrics. Valid metrics include stakeholder feedback received during the development cycle, while success measures should be based on the required skill sets, performance criteria, and delivery modalities identified during the Analysis phase. Because those skill sets and performance criteria are translated into the learning objectives that drive Design and Development, evaluative elements built into the design and developed as part of the course should measure whether those learning objectives and success measures are met.

It is important to note that which levels of evaluation get measured (or not) is often determined by stakeholder budget, resource availability, and statements of work. When evaluation is not performed, it is often attributed to fear of reprisal based on misuse and misinterpretation of measured outcomes rather than a drive to improve the process and the overall learning solution. In the long term, such misunderstanding hurts everyone by depriving the process of a feedback loop that adds value for the learner and saves margin or increases profit for the stakeholders.

The four levels of training evaluation have evolved into six and are listed in order of robustness below:

Levels 0 through 5 (Kirkpatrick and Phillips):

Level 0 - Participation
Level 1 - Reaction
Level 2 - Learning
Level 3 - Application
Level 4 - Results
Level 5 - ROI

A description of each level will follow shortly...

Wednesday, November 26, 2008

ISD and the ADDIE Model

I have done a lot of talking about learning technologies, but I think now is a really good time to get down to basics: specifically, how any learning solution should be built, as in "instructional design." ADDIE is a useful model of Instructional Systems Design (ISD) based on a five-phase approach to courseware development... ADDIE is an acronym that stands for:

A - Analysis
D - Design
D - Development
I - Implementation
E - Evaluation

Execution of each phase depends on how the development team and stakeholders agree on the project's approach, typically specified within a statement of work or project charter. As a system of checks and balances, each phase and milestone requires stakeholder approval to ensure that what is delivered at project completion is what the stakeholder approved from the beginning.

When little or no preexisting courseware exists, the most critical phase of a project is Analysis. Each phase follows sequentially, with the exception of Evaluation, which, if conducted properly, occurs continuously throughout the process. Details regarding each phase will follow in subsequent blog entries. However, for a high-level explanation of how this should work, see the graphic below.

It is fair to say that many in the business of course development skip phases within this model, and some even ignore the model completely. This is a mistake, but please do not take that to mean the process cannot be performed efficiently or within short time frames. Those of us who work in internal training departments or on the customer business side need only perform an analysis on a periodic basis for each department or customer base we serve.

Also, variations of this model exist, along with compressed methodologies and adaptations based on stakeholder needs. How this can be accomplished will be addressed in subsequent entries. Stay tuned for details on each phase of the ADDIE model.