Son-of-Fire's Learning Landscape

Sunday, March 29, 2009

Evaluation: Level 4 - Results/Performance

Answers the questions – Did participants perform better, and by how much? While the previously described behavior measures assess how and what gets done, performance measures assess the outcomes. Think of the difference between how a baseball player swings a bat (a behavior-based skill – what he does) and whether he hits the 'sweet spot' or 'knocks one over the fence' (performance outcomes). Performance measures tend to be taken at the individual or departmental level. Measured on an interval basis (by day, week, month, quarter, year, etc.), common examples include increases in:
  • Production
  • Bookings
  • Revenue
  • Margin
  • Number of customer interactions
  • Average time spent with each customer
  • Closures to sale
  • Customer satisfaction
  • Percentage of total tickets closed
  • Project milestones hit
Or decreases in:
  • Product defects
  • Overexpenditure
  • Time spent on task
  • Customer complaints
  • Issues/tickets opened
  • Project milestones missed
Again, based on the nature of these measures, sound methodology includes pretest and posttest measures while accounting for extraneous variables.
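The following is a minimal sketch, not from the original post, of how such a pretest/posttest comparison of a Level 4 metric might be computed; the metric (monthly closures to sale), field names, and numbers are hypothetical.

```python
# A minimal sketch of a pretest/posttest comparison for a Level 4 performance
# metric, e.g. monthly closures-to-sale for one department.
# The metric, values, and interval are hypothetical.
from statistics import mean

def performance_change(pre_values, post_values):
    """Return absolute and percentage change in the mean of a metric."""
    pre_avg, post_avg = mean(pre_values), mean(post_values)
    delta = post_avg - pre_avg
    pct = (delta / pre_avg) * 100 if pre_avg else float("nan")
    return delta, pct

# Three months before and three months after the training event
# (illustrative numbers only).
pre_training  = [42, 45, 40]
post_training = [48, 52, 50]

delta, pct = performance_change(pre_training, post_training)
print(f"Change: {delta:+.1f} closures/month ({pct:+.1f}%)")
# A real study would also account for extraneous variables
# (seasonality, staffing changes, market shifts), e.g. with a comparison group.
```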

Tuesday, March 24, 2009

Evaluation: Level 3 - Application/Behavior

Answers the questions – Do participants apply what they learned back on the job after training? How are behavior-based learning objectives applied in the work environment? Do participants perform targeted skill sets better as a result of the training than if they had not taken the training at all? Behavior-based evaluation is a measure of efficacy, practicality, and utility based on transfer of skills to the job. Evaluation of application can be achieved by establishing a baseline at pretest and/or posttest and then tracking on-the-job behaviors at follow-up – measurable changes in behavior consistent with a course's skill-based learning objectives support the effects of training on the job. Common methods for tracking on-the-job behaviors include trained judges using Behavior Observation Scales (BOS) or the use of performance support systems. Sound methodology accounts for extraneous factors so that training is supported as a root cause of behavioral improvement. This follow-up methodology also has the added value of reinforcing training based on the principles of organizational behavior modification (OBM). Examples include:
  • Any skill measured at Level 2 immediately after the training event, remeasured for Level 3 as a follow-up back on the job
  • Surveying learners and managers to capture perceptions of applied skills in on-the-job settings
  • A manager targets a required behavior such as conflict management based on observed and justified need, documents it in a performance management system, and assigns it as a development goal > the employee takes the assigned course(s) on conflict management > the manager observes behavior/progress, provides feedback, and documents it in the performance management system
This level of assessment requires methodological rigor across the training and follow-up cycle and is typically implemented as a high-end solution.
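Below is a minimal sketch, not from the original post, of how Behavior Observation Scale (BOS) ratings might be rolled up to compare a baseline with an on-the-job follow-up; the behaviors, rating scale, and scores are hypothetical.

```python
# A minimal sketch of scoring a Behavior Observation Scale (BOS): a trained
# judge rates how often each targeted behavior is observed on the job
# (1 = almost never ... 5 = almost always). Behaviors and ratings are hypothetical.

CONFLICT_MANAGEMENT_BOS = [
    "Acknowledges the other party's position before responding",
    "Keeps the discussion focused on the issue, not the person",
    "Proposes at least one mutually acceptable option",
]

def bos_score(ratings):
    """Average the 1-5 frequency ratings across all observed behaviors."""
    return sum(ratings) / len(ratings)

baseline_ratings  = [2, 3, 2]   # observed before training
follow_up_ratings = [4, 4, 3]   # observed on the job at follow-up

print(f"Baseline BOS:  {bos_score(baseline_ratings):.2f}")
print(f"Follow-up BOS: {bos_score(follow_up_ratings):.2f}")
# A positive shift, replicated across raters and with extraneous factors
# accounted for, supports training as a root cause of the behavioral change.
```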

Friday, March 20, 2009

Evaluation: Level 2 - Learning

Answers the question – Do participants know more and can they do more after the learning event than before? This evaluation is an assessment of immediate transfer of knowledge and skill. Although it is often conducted at the posttest level only, it should be conducted by baselining through a pretest and measuring change through a posttest using module-level quizzes or course-level exams. A typical example might have a learner take a pretest before a course or each module > run through the course or module > and then take a posttest to measure knowledge or skill improvement. Minimal and optimal criteria levels should be identified during Analysis and specified in the Design so that thresholds (cut-offs) for scoring can be applied to measured changes when this level of evaluation is Developed as part of the training solution. Other examples include:
  • Having an instructor evaluate (observe and rate) transfer in class
  • Watch-me > Try-me > Test-me approaches created with simulation tools and LMS tracking (SCORM) of outcomes
  • Problem-resolution scenarios built into a course – a problem is presented > the learner selects a solution > responses and performance are tracked through the LMS (SCORM)
  • Broken environments in certification scenarios that learners/candidates must fix, with an instructor or remote proctor verifying success
Gagne’s model for implementing posttests as part of a course loops the learner back into a module or course if they do not pass the test's threshold, and it is the basis for posttest delivery in eLearning today. More robust eLearning design also incorporates the pretest as a filtering mechanism, prescribing only the content the learner did not pass. This scenario resembles the programmed instruction model, which augments the training event with a custom learning experience and a feedback mechanism.
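The pretest-prescription and posttest-threshold loop described above can be sketched in a few lines; the module names, scores, and the 80% cut-off below are hypothetical – in practice the thresholds come from Analysis and the scores from quizzes tracked via SCORM in the LMS.

```python
# A minimal sketch of pretest-based content prescription plus a posttest
# remediation loop. Modules, scores, and the cut-off are hypothetical.

PASS_THRESHOLD = 0.80

def prescribe_modules(pretest_scores):
    """Prescribe only the modules whose pretest score falls below the cut-off."""
    return [m for m, score in pretest_scores.items() if score < PASS_THRESHOLD]

def needs_remediation(posttest_scores):
    """Loop the learner back into any module not passed on the posttest."""
    return [m for m, score in posttest_scores.items() if score < PASS_THRESHOLD]

pretest  = {"module_1": 0.90, "module_2": 0.55, "module_3": 0.70}
posttest = {"module_2": 0.85, "module_3": 0.75}

print("Prescribed after pretest:", prescribe_modules(pretest))   # modules 2 and 3
print("Repeat after posttest:   ", needs_remediation(posttest))  # module 3 only
```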

Monday, March 16, 2009

Evaluation: Level 1 - Reaction

Answers the questions – What did participants like about the learning event? How do participants react to the content, materials, instructor, medium, environment, etc.? These are satisfaction and quality measures. They are best obtained through electronically delivered questionnaires and should either be built into the courseware solution or supported and tracked through a Learning Management System (LMS) so they can be linked, filtered, and reported according to course and learner demographics. Typical reaction-level evaluations assess the quality of content, delivery, learning environment, and operational support. Reaction measures are the most commonly used and relied-upon level of evaluation in the industry. They are highly useful when the learner is a customer, as higher customer satisfaction predicts the likelihood of returning for additional training and thus drives revenue. Although heavily relied upon, 35 years of research has found no relationship whatsoever between a learner's satisfaction with the training experience and his or her behavior and performance on the job.
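The sketch below, not from the original post, shows one way reaction-survey responses could be linked, filtered, and reported by course and learner demographic; the field names, courses, regions, and ratings are hypothetical, and in practice the responses would come from the LMS.

```python
# A minimal sketch of rolling up Level 1 reaction-survey (Likert) results
# by course and learner demographic. All records are hypothetical.
from collections import defaultdict
from statistics import mean

responses = [
    {"course": "Conflict Mgmt 101", "region": "EMEA", "content": 4, "instructor": 5},
    {"course": "Conflict Mgmt 101", "region": "AMER", "content": 3, "instructor": 4},
    {"course": "SCORM Basics",      "region": "EMEA", "content": 5, "instructor": 4},
]

by_group = defaultdict(list)
for r in responses:
    by_group[(r["course"], r["region"])].append(r)

for (course, region), group in sorted(by_group.items()):
    print(f"{course} / {region}: "
          f"content={mean(r['content'] for r in group):.1f}, "
          f"instructor={mean(r['instructor'] for r in group):.1f}")
```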

Thursday, March 12, 2009

Evaluation: Level 0 - Participation

Answers the question – Who did what, broken down by medium type, subject matter, population, or other variables, and what percentage was completed? Participation measures usage broken down by the demographics of the learner base and the parameters of the learning solution. These variables need to be identified in advance and tracked through data stores and learning management systems, usually on an interval basis. Examples include the number of learners trained and the number of courses delivered, broken down by medium or subject by month, quarter, or year.
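As a minimal sketch, not from the original post, a participation roll-up by medium and month might look like the following; the records and field names are hypothetical and would normally be pulled from the LMS or data store.

```python
# A minimal sketch of a Level 0 participation roll-up: learners enrolled and
# completion rate, broken down by delivery medium and month. Hypothetical data.
from collections import defaultdict

records = [
    {"learner": "a01", "medium": "ILT",        "month": "2009-03", "completed": True},
    {"learner": "a02", "medium": "Self-Paced", "month": "2009-03", "completed": False},
    {"learner": "a03", "medium": "Self-Paced", "month": "2009-03", "completed": True},
]

buckets = defaultdict(lambda: {"enrolled": 0, "completed": 0})
for r in records:
    key = (r["medium"], r["month"])
    buckets[key]["enrolled"] += 1
    buckets[key]["completed"] += int(r["completed"])

for (medium, month), counts in sorted(buckets.items()):
    pct = 100 * counts["completed"] / counts["enrolled"]
    print(f"{month} {medium}: {counts['enrolled']} enrolled, {pct:.0f}% completed")
```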

Sunday, March 8, 2009

Evaluation

“Knowledge is power…”
– Sir Francis Bacon – Founder of the scientific method

Why should anyone know how effective their training efforts were? Why should anyone take the time to evaluate learning initiatives? The answers are simple… So stakeholders can:
  • Baseline current performance
  • Assess what employees, partners, and customers are achieving
  • Set attainable goals based on data
  • Track their investment in learning
  • Continuously improve what they do by asking the right questions
The most effective way to improve the development and delivery of courseware is through Evaluation. The goals are to assess the value of the course to the stakeholder and to continuously improve that value based on valid metrics. Valid metrics include stakeholder feedback received during the development cycle, while success measures should be based on the required skill sets, performance criteria, and delivery modalities identified during the Analysis phase. Because the required skill sets and performance criteria should be translated into the learning objectives that drive Design and how the course is Developed, evaluative elements built into the design and developed as part of the course should measure whether those learning objectives and success measures are met. It is important to note that the levels of evaluation that are measured (or not) are often determined by stakeholder budget, resource availability, and statements of work. When evaluation is not conducted, this is often attributable to fear of reprisal based on misuse and misinterpretation of measured outcomes, rather than the drive to improve the process and the overall learning solution. In the long term, such misunderstanding harms everyone by depriving the process of a feedback loop that adds value for the learner and saves margin or increases profit for the stakeholders. The four levels of training evaluation have evolved into six and are listed in order of robustness below:

Levels 0 through 5 (Kirkpatrick and Phillips):

Level 0 - Participation
Level 1 - Reaction
Level 2 - Learning
Level 3 - Application
Level 4 - Results
Level 5 - ROI
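As a small illustrative aside, not from the original post, the six levels can be represented as an ordered enumeration when tagging evaluation artifacts or reports by the level they measure; the class and comments below are hypothetical.

```python
# A minimal sketch representing the six evaluation levels as an ordered enum.
from enum import IntEnum

class EvaluationLevel(IntEnum):
    PARTICIPATION = 0   # who took what, completion rates
    REACTION      = 1   # satisfaction and quality surveys
    LEARNING      = 2   # pretest/posttest knowledge and skill change
    APPLICATION   = 3   # behavior transferred to the job
    RESULTS       = 4   # individual/departmental performance outcomes
    ROI           = 5   # return on the training investment

for level in EvaluationLevel:
    print(f"Level {level.value}: {level.name.title()}")
```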

A description of each level will follow shortly...

Wednesday, March 4, 2009

Implementation (Delivery)

Courseware is typically deployed and delivered in the classroom, onsite, and/or through computer- and Web-based media during Implementation. Goals during this phase should include maximizing usability, satisfaction, transfer, behavior change, and impact, and these should therefore be measured. Implementation requirements are identified during Analysis based on learner profiles and are designed and developed based on technical and media specifications. Requirements for delivery through an instructor, content server, or Learning Management System (LMS) should be identified in advance during Analysis so that content is designed and developed in alignment with delivery requirements, standards, and goals.
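One way to capture such delivery requirements during Analysis is sketched below; this is not from the original post, and the class name, fields, and values are hypothetical placeholders for whatever the learner-profile and media-specification documents actually call for.

```python
# A minimal sketch of a delivery specification captured during Analysis so that
# design and development stay aligned with delivery requirements. Hypothetical fields.
from dataclasses import dataclass

@dataclass
class DeliverySpec:
    course: str
    media: list             # e.g. ["ILT", "Self-Paced", "Virtual Lab"]
    lms_tracking: str        # e.g. "SCORM 1.2" or "SCORM 2004"
    audience: str            # from the learner profile
    bandwidth_floor_kbps: int = 256   # technical constraint from the media spec

spec = DeliverySpec(
    course="Conflict Management Fundamentals",
    media=["ILT", "Self-Paced"],
    lms_tracking="SCORM 2004",
    audience="first-line managers",
)
print(spec)
```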

Typical delivery media include:
  • Instructor-Led Training (ILT) > in a classroom, lab, or onsite
  • Virtual Classroom > synchronous eLearning delivered via live Web connection; Web-based instructor-led learning
  • Virtual Lab > live and synchronous virtual machine or a self-paced sandbox
  • Self-Paced Training > asynchronous eLearning delivered hosted, on CD, or as a file on the local machine
  • Mobile Learning > mLearning for MP3/MP4 players, PDAs, and smartphones
  • On-the-Job Training (OJT)
  • Job Aids > cards, posters, ePaper
  • Collaborative and Social Learning > includes collaboration suites, coaching, mentoring, discussion threads, learning Wikis, expert-produced blogs, and relevant RSS feeds
  • Learning Management System (LMS) and supports
  • Evaluation Systems > reaction/satisfaction, knowledge/skill transfer, application on the job, on-the-job performance improvement, return on the training investment, and predictability of future success
  • Blended media delivery combining any or all of the above as indicated by the learner and media specifications analyses and design documents
Risks if not conducted properly:
  • Extended deployment resulting in missed delivery dates and increased costs
  • Project failure at final implementation resulting in lost and dissatisfied stakeholders and learners