
Showing posts with label ISD.

Friday, September 24, 2010

Is mLearning Like eLearning?

Is mLearning Like eLearning? This is a question I recently responded to, as posed by Janet Clarey, Technology Editor of ELearning! magazine, in a LinkedIn group she moderates (The ELearning! Magazine Network). In one of her comments, Janet suggests the key is in the "m" - I agree. It's an important question, and because we've discussed mLearning several times in this blog, I thought I would share my response...


Great question, and to Janet's point, the "m" is very relevant right now. To answer directly, mLearning is a form of eLearning in that it's electronic, but it should otherwise be treated as a very different medium. At the same time, some are making many of the same mistakes made when eLearning first came out...

 

For instance, efforts and tools designed to convert and deliver PowerPoint and traditional eLearning into mLearning eerily parallel the efforts to convert traditional classroom training, and later PowerPoint decks, straight to eLearning. We all know how that turned out... Why did they do it that way? Because it was new, shiny, and at least in the beginning, eLearning differentiated itself from classroom training as the latest and greatest... (and probably also because true instructional designers were not used to develop it). However, it also resulted in very expensive and time-consuming development cycles along with content that often did not address the needs of its learners. Yet now, we are seeing new mobile products designed to develop and deliver "rich mobile media" that allow conversion of PowerPoints or include interfaces that are unintuitive or hard to use because they were really designed for a standard PC, Mac, or Linux browser. Some have come full circle...

 

For mLearning, as with any medium, instructional designers and course developers need to ensure the medium will meet the needs of the organization's business context and learner. Mobile is useful when it is short, simple, and can be delivered agnostically to many devices. Think 2 minutes with content that's light to stream or download. Think MP3 for audio, MP4 for video, HTML or PDF for talking points, process diagrams, checklists, sales positioning, etc... As opposed to large PowerPoint decks or movie files that take too long to download or stream, run too long to hold an attention span on a smartphone, and are just plain not appropriate for the mobile learner. Mobile is meant to be augmentative and add value as part of a larger, blended learning path. It's for the learner on a plane, train, or automobile; for the salesperson on the way to propose a new product; for the field tech making an on-site repair. Mobile is for the learner on the go. It needs to be designed, developed, and delivered that way.

 

So in the end, mLearning can be like eLearning if we make the same mistakes we did 10 years ago, or we can make it different... ;]


Friday, September 18, 2009

How to Play MP3s and/or MP4s on a Mobile Device for mLearning: Part 3 - mLearning Playback from a CD

In Part 2, we reviewed how to transfer and play audio files and videos on MP3/MP4 players and smartphones. With updates of the Android platform, iPhone, iPod Touch, and Zune out, that was pretty relevant and timely material. For Part 3, however, “relative” is the key word…

During a recent needs assessment I was conducting to discover learner profiles and required mLearning media types, I was surprised to identify a need for a transfer and route-to-play on, of all things, a compact disc. We assumed we would be focused on the latest and greatest - playing mLearning on BlackBerries, iPods, iPhones, off the cloud… - but I discovered that some, and some in very high places (including an EVP of Sales who reported to the CEO), wanted to be able to take a CD with them so they could play it in their car stereo, laptop, or other CD-capable player. My first impulse was “wow, we need to upgrade some technology around here and train folks on how to use it,” but the reality is, that is what those learners needed, and quite frankly, a little bit of training or job-aid assistance makes those baby-boomer execs equally capable of accessing the same mLearning media the typical millennial would access through more modern means. Bottom line: know who your learners are, know how they learn best, and know their work environments and the tools they use to do their jobs so that the solution you provide is relevant to the learner. The only way you can know these things and save time, effort, and pain is through a needs assessment. I love instructional systems design!…

That all being said, if a CD player is all you have handy and that is your preferred mode of learning, this tutorial is for you.

Part 3: Play mLearning on a CD Player

Play MP3 > Play on CD

The instructions that follow assume you have a CD or DVD burner and Windows Media Player installed as the default media application for audio. The instructions are similar for iTunes, or other CD burning software. See the instructions specific to your software or preferred application if you are not using Windows Media Player.
  1. First, save the MP3 or MP4 files to your laptop or workstation. Save them as you would any file; if you are accessing them from a web page or email, click the hyperlink provided (assuming you trust the source).
  2. If prompted to display non-secure items, click Yes.
  3. Click on the title-link you want to save.
  4. Select Save Target As…
  5. Navigate to the directory of your choosing from the Save As window.
  6. Open My Computer.
  7. Click and drag the MP3/MP4 file(s) from the directory to the CD/DVD drive in My Computer.
  8. Right click on the CD/DVD drive.
  9. Select Write these files to CD. The CD Writing Wizard opens.
  10. Name the CD (optional).
  11. Click Next.
  12. Select Make an audio CD.
  13. Click Next. Windows Media Player opens.
  14. Click Start Burn (at the lower right).
  15. Once the disc has finished burning, you can play it in any disc player that supports the MP3 and/or MP4 audio and video codecs.
In Part 4, we’ll review routes-to-play mLearning on the BlackBerry.

Friday, June 26, 2009

Tips for Driving the Appropriate Use of Learning Technologies through Practical ISD

Like many powerful tools, learning technology can help or hurt – it depends on how it is used. Getting others to use these technologies appropriately can be a challenge. Those of you around when eLearning was young were able to observe mismatches between learning technology and solution. In that case, eLearning often mismatched the needs and job context of the learner, especially when the requirement included some form of behavior or skill acquisition that needed to be applied on the job. This problem is equally if not more prevalent today with new social media and mobile learning technologies available. Many are building wikis, writing blogs, and syndicating podcasts while grouping them under the Learning 2.0 umbrella, but does their content really support learning, or is it a form of communication that provides access to information? If we are going to use or evangelize any learning technology, we need to get back to the basics – instructional systems design. If other learning professionals lack this basic competency, we need to help them.

Ideally, learning professionals have some training and background in instructional systems design (ISD), but we do not always see that in the real world. Even instructional design degree and certification programs tend to focus on the academic over the practical, leaving graduates underprepared to apply what they have learned once they get a job. Anecdotally, how many folks do you know with degrees in an area of expertise who lack the real-world experience required to apply it effectively? The old theory-versus-reality problem… At the same time, there is a basis for the application of ISD, and that’s the trick: applying ISD at practical and appropriate levels.

At a simple level, practical ISD tells us, as learning technologists, that we need to help stakeholders and other learning professionals focus on alignment with business goals, business tools, and the work environment. Then we need to identify the needs of the learner by role and what must be accomplished on the job in the ideal world. We need to determine whether a training or learning solution is required (it’s not when the problem is systemic or motivational), create a profile of the learner that includes what they do in the real world, and map all that to the type of content that will be required (auditory, visual, kinesthetic, blended). Factor in budget, obstacles, and time-to-proficiency requirements, and we can prescribe a set of learning technologies that meet those needs. As learning technologists, we need to push this approach.

If a learning technologist is not available, job aids like tool matrices or decision workflows can help stakeholders and development teams make decisions where learning technology is not a core competency, but later on, experts in the required technology need to be involved - especially in the analysis, design, and development phases. Learning technologists or experts will need to hold hands when their stakeholders are too far from their comfort zones.
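
To make the idea of a tool-matrix job aid a bit more concrete, here is a minimal Python sketch; the need categories and technology recommendations are hypothetical placeholders, not a prescriptive mapping, and a real matrix would come out of your own analysis data.

```python
# Hypothetical tool-matrix job aid: every category and recommendation below
# is illustrative only and should be replaced with your own analysis data.
TOOL_MATRIX = {
    ("knowledge", "dispersed"):  ["self-paced eLearning", "podcast", "mobile talking points"],
    ("knowledge", "co-located"): ["classroom session", "wiki reference"],
    ("skill", "dispersed"):      ["virtual class with practice labs", "simulation"],
    ("skill", "co-located"):     ["instructor-led workshop", "OJT with coaching"],
}

def recommend(outcome_type, audience):
    """Return candidate learning technologies for a simple need profile."""
    return TOOL_MATRIX.get((outcome_type, audience),
                           ["consult a learning technologist"])

print(recommend("skill", "dispersed"))
# -> ['virtual class with practice labs', 'simulation']
```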

If starting fresh or embarking on bleeding-edge technologies, do some research to benchmark what’s been done successfully by others. When you have identified which technology will meet your needs, incubate and pilot. Start with small groups and the low-hanging fruit during the initial test phases, and then focus on the larger wins with larger groups as you proceed. Over-communicate your wins, but document your mistakes and don't forget the lessons learned - they drive efficiency and cost savings later on.

Lastly, ensure you communicate the purpose and value of the learning technologies up and down the food chain, as this will drive adoption with your stakeholders and learners, along with the prescriptive usage that learning professionals at your organization will need to drive.

Monday, April 6, 2009

Evaluation: Best Practices in Implementation

A scientific approach to evaluation must overcome extraneous variables, which are those things other than the learning event that may have caused a change in behavior or performance. One sound approach combines learning measures with application and/or performance measures in a Stepped Within-Groups Design, which includes secondary (and possibly tertiary) posttesting on an interval basis. This is easier to set up than it sounds.

Capturing data with this type of design overcomes extraneous variables such as history, which has to do with how the timing of external events affects training measures - timing of incentives, change management, market forces, etc.; maturation, which has to do with how the passing of time allows participants to learn what is required and affects behavior and results without training; and selection, which is how individual differences between groups affect measures.

Along with the reporting of descriptive data, an analysis of variance (ANOVA) to measure group differences and an analysis of covariance (ANCOVA) to control for baseline (pretest) differences and increase statistical power between testing intervals should be performed. These tests measure the changes and effect sizes within each group and between the times the treatment (Tx - in our case, the training event) was administered. If consistent and measurable changes are noted between the pretesting conditions and the posttests for each group, the results support attributing the changes in behavior and/or performance to the training and not to other factors.
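
For readers who want to see what the basic statistics might look like in practice, here is a minimal Python sketch using scipy and statsmodels; the DataFrame, group labels, and scores are entirely hypothetical, and a real stepped within-groups design would add the secondary and tertiary posttest intervals described above.

```python
# Minimal sketch only: hypothetical pretest/posttest data for two groups.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group":    ["A"] * 4 + ["B"] * 4,
    "pretest":  [62, 58, 65, 60, 61, 59, 63, 64],
    "posttest": [81, 75, 84, 78, 66, 62, 70, 69],
})

# ANOVA: do posttest scores differ between the groups?
f_stat, p_value = stats.f_oneway(
    df.loc[df.group == "A", "posttest"],
    df.loc[df.group == "B", "posttest"],
)
print(f"ANOVA on posttest: F = {f_stat:.2f}, p = {p_value:.3f}")

# ANCOVA: compare groups on posttest while controlling for pretest scores.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```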

Risks if not conducted or conducted improperly:
  • Learning events that remain stale, stagnant, and out of date due to ignorance of the evolving needs of the course participants.
  • Training products that result in participant dissatisfaction due to unidentified problems with the course content, materials, delivery medium, learning environment and other factors relating to participant reactions to the course.
  • Learning events that produce no transfer and retention of knowledge and skill.
  • Training products that result in no impact on, and no significant increase in, on-the-job behaviors and performance outcomes.
  • A training product that produces no Learning ROI.
  • Results (good or bad) that are falsely attributed to the learning events when events other than training are responsible.
  • Attributing degradation of course effectiveness to the learning event delivered as opposed to the changing needs of the course participants.

Thursday, April 2, 2009

Evaluation: Level 5 - Return on Investment (ROI)

Answers the question – What is the return on investment (ROI) for the stakeholder based on the outcome measures identified during organizational analysis? These are the cost returns attributed to performance. Metrics can include profit gains and cost reductions such as cost savings in travel, time saved on task, reduction in error, reduction in resource usage, decreases in downtime, quality in production and customer service, etc… Data from reaction, learning, behavior, and/or results evaluations can be mapped back into an LMS, a best practices library, and/or design to continuously improve the learning value of the training product and to adapt to the ever-changing needs of the stakeholder in a competitive global economy.
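
As a minimal sketch of the arithmetic (using the commonly cited ROI formula of net program benefits divided by program costs), here is a Python example; the benefit and cost figures are hypothetical placeholders, not benchmarks.

```python
# Minimal sketch of a Level 5 ROI calculation; all figures are hypothetical.
def learning_roi(program_benefits, program_costs):
    """ROI (%) = (net program benefits / program costs) x 100."""
    net_benefits = program_benefits - program_costs
    return (net_benefits / program_costs) * 100

benefits = 150_000  # e.g., travel savings, time saved on task, error reduction
costs = 60_000      # e.g., development, delivery, and learner time
print(f"Learning ROI: {learning_roi(benefits, costs):.0f}%")  # -> 150%
```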

Sunday, March 29, 2009

Evaluation: Level 4 - Results/Performance

Answers the question – Did participants perform better, and by how much? While the previously described behavior measures assess how and what gets done, performance measures assess the outcomes. Think of the difference between how a baseball player swings a bat, a behavior-based skill – what he does; and whether he hit the 'sweet spot' or 'knocked one over the fence' – performance outcomes. Performance measures tend to be based at the individual or departmental level. Measured on an interval basis (by day, week, month, quarter, year, etc.), common examples can include increases in:
  • Production
  • Bookings
  • Revenue
  • Margin
  • Number of customer interactions
  • Average time spent with each customer
  • Closures to sale
  • Customer satisfaction
  • Percentage of total tickets closed
  • Project milestones hit
Or decreases in:
  • Product defects
  • Over expenditure
  • Time spent on task
  • Customer complaints
  • Issues/tickets opened
  • Project milestones missed
Again, based on the nature of these measures, sound methodology includes pretest and posttest measures while accounting for extraneous variables.
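
As a small illustration of tracking one of these measures on an interval basis, here is a minimal Python sketch; the metric, interval labels, and values are hypothetical.

```python
# Minimal sketch: percentage change in a hypothetical performance measure.
def percent_change(baseline, current):
    return (current - baseline) / baseline * 100

bookings = {"pre-training month": 120, "post-training month": 138}
delta = percent_change(bookings["pre-training month"],
                       bookings["post-training month"])
print(f"Bookings changed by {delta:.1f}% after training")  # -> 15.0%
```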

Tuesday, March 24, 2009

Evaluation: Level 3 - Application/Behavior

Answers the questions – Do participants apply what they learned after training and on the job? How are behavior-based learning objectives applied to the work environment? Do the participants perform targeted skill sets better as a result of the training than if they had not taken the training at all? Behavior-based evaluation is a measure of efficacy, practicality, and utility based on transfer of skills to the job. Evaluation of application can be achieved through a baseline at pretest and/or posttest and follow-up tracking of on-the-job behaviors – measurable changes in behavior consistent with the skill-based learning objectives from a course support the effects of training on the job. Common methods for tracking on-the-job behaviors include the use of trained judges applying Behavior Observation Scales (BOS) or the use of performance support systems. Sound methodology overcomes extraneous factors so that training is supported as a root cause of behavioral improvement. This post-testing methodology also has the added value of reinforcing training based on the principles of organizational behavior modification (OBM).

Examples can include any skill measured at Level 2 immediately after the training event, but measured again at Level 3 as a follow-up back on the job; surveying learners and managers to capture perceptions of applied skills in on-the-job settings; or a manager who targets a required behavior such as conflict management based on observed and justified need - the manager documents it in a performance management system and assigns it as a development goal > the employee takes the assigned course(s) on conflict management > the manager observes behavior/progress, provides feedback, and documents it in the performance management system. This level of assessment requires methodological rigor across the training and follow-up cycle and is typically implemented as a high-end solution.
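
To illustrate how Behavior Observation Scale ratings might be aggregated, here is a minimal Python sketch; the observed behaviors, judges, and 1-5 ratings are hypothetical and tied loosely to the conflict-management example above.

```python
# Minimal sketch: averaging hypothetical BOS ratings from trained judges.
from statistics import mean

bos_ratings = {
    "acknowledges the other party's position": {"judge_1": 4, "judge_2": 5},
    "proposes a concrete next step":           {"judge_1": 3, "judge_2": 4},
    "escalates only when criteria are met":    {"judge_1": 5, "judge_2": 4},
}

behavior_scores = {behavior: mean(ratings.values())
                   for behavior, ratings in bos_ratings.items()}
overall = mean(behavior_scores.values())

print(behavior_scores)
print(f"Overall observed-behavior score: {overall:.2f} / 5")
```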

Friday, March 20, 2009

Evaluation: Level 2 - Learning

Answers the question - Do participants know more, and can they do more, after the learning event than before? This evaluation is an assessment of immediate transfer of knowledge and skill. Although it is often conducted at the posttest level only, it should be conducted by base-lining through a pretest and measuring change through a posttest using module-level quizzes or course-level exams. A typical example might have a learner take a pretest before a course or each module > then run through the course or module > and then take a posttest to measure knowledge or skill improvement. Minimal and optimal criteria levels should be identified during Analysis and specified in the Design so that thresholds (cut-offs) for score attribution can be applied to measured changes when this level of evaluation is Developed as part of the training solution.

Other examples might include having an instructor evaluate (observe and rate) transfer in class; Watch-me > Try-me > Test-me approaches created with simulation tools and LMS tracking (SCORM) of outcomes; the use of problem-resolution scenarios built into a course – a problem is presented > the learner selects a solution > responses and performance are tracked through the LMS (SCORM); or creating broken environments in certification scenarios and having learners/candidates fix them with an instructor or remote proctor verifying success. Gagne’s model for implementing posttests as part of a course loops the learner back into a module or course if they do not pass the test’s threshold and is the basis for posttest delivery in eLearning today. More robust eLearning design also incorporates the pretest component as a filtering mechanism for the learner by prescribing only the content that was not passed. This scenario might look like the programmed instruction model, which augments the training event with the benefits of a custom learning experience and a feedback mechanism.
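
As a minimal sketch of the pretest-filtering and threshold logic described above (the module names, scores, and cut-off are hypothetical), consider the following Python example.

```python
# Minimal sketch: prescribe only the modules not passed at pretest,
# then measure the pre/post gain. The 80% threshold is a placeholder
# for the criterion identified during Analysis and Design.
PASS_THRESHOLD = 0.80

pretest_scores = {"module_1": 0.92, "module_2": 0.55, "module_3": 0.78}

prescribed = [m for m, score in pretest_scores.items() if score < PASS_THRESHOLD]
print("Prescribed modules:", prescribed)  # -> ['module_2', 'module_3']

def learning_gain(pre, post):
    """Simple pre/post change for a prescribed module."""
    return post - pre

print(f"module_2 gain: {learning_gain(0.55, 0.88):+.2f}")  # -> +0.33
```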

Monday, March 16, 2009

Evaluation: Level 1 - Reaction

Answers the questions – What did participants like about the learning event? How do participants react to the content, materials, instructor, medium, environment, etc.? These are satisfaction and quality measures. They are best obtained through the electronic delivery of questionnaires and should either be built into the courseware solution or supported and tracked through a Learning Management System (LMS) so they can be linked, filtered, and reported according to the course and learner demographics. Typical reaction-level evaluations assess quality of content, delivery, learning environment, and operational support. Reaction measures are the most commonly used and relied-upon level of evaluation in the industry. They are highly useful when the learner is a customer, as higher levels of customer satisfaction predict the likelihood they will return for additional training and thus drive revenue. Although highly relied upon, 35 years of research has noted no relationship whatsoever between a learner’s satisfaction with the training experience and his or her behavior and performance on the job.

Thursday, March 12, 2009

Evaluation: Level 0 - Participation

Answers the question - Who did what as broken down by medium-type, subject-matter, population, or other variables along with the percentage completed? Participation measures usage broken down by the demographics of the learner-base and the parameters of the learning solution. These variables need to be identified in advance and tracked through data stores and learning management systems and are usually tracked on an interval basis. Examples include: number of learners trained and number of courses delivered as broken down by medium or subject by month, quarter, or year.
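
As a minimal sketch of how participation might be broken down with a few lines of Python (the completion records, media, and dates are hypothetical):

```python
# Minimal sketch: Level 0 participation counts from hypothetical LMS records.
import pandas as pd

completions = pd.DataFrame({
    "learner": ["a01", "a02", "a03", "a01", "a04"],
    "medium":  ["eLearning", "eLearning", "podcast", "virtual class", "podcast"],
    "subject": ["sales", "sales", "product", "product", "sales"],
    "month":   ["2009-01", "2009-01", "2009-01", "2009-02", "2009-02"],
})

# Completions broken down by month and medium.
print(completions.groupby(["month", "medium"]).size())

# Unique learners trained per month.
print(completions.groupby("month")["learner"].nunique())
```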

Sunday, March 8, 2009

Evaluation

“Knowledge is power…”
– Sir Francis Bacon – Founder of the scientific method

Why should anyone know how effective their training efforts were? Why should anyone take the time to evaluate learning initiatives? The answers are simple… So stakeholders can:
  • Baseline current performance
  • Assess what employees, partners, and customers are achieving
  • Set attainable goals based on data
  • Track their investment in learning
  • Continuously improve what they do by asking the right questions
The most effective way to improve development and delivery of courseware is through Evaluation. The goals are to assess the value of the course to the stakeholder and to continuously improve that value based on valid metrics. Valid metrics include stakeholder feedback received during the development cycle, while success measures should be based on the required skill sets, performance criteria, and delivery modalities identified during the Analysis phase. As the required skill sets and performance criteria should be translated into the learning objectives that drive Design and how the course should be Developed, evaluative elements built into the design and developed as part of the course should measure whether those learning objectives and success measures are met, or not. It is important to note that the levels of evaluation that are measured (or not) are often determined by stakeholder budget, resource availability, and statements of work. When evaluation is not conducted, this is often attributed to fear of reprisal based on misuse and misinterpretation of measured outcomes rather than a drive to improve the process and the overall learning solution. In the long term, such misunderstanding hurts everyone by depriving the process of a feedback loop that adds value for the learner and saves margin or increases profit for the stakeholders. The four levels of training evaluation have evolved into six and are listed in order of robustness below:

Levels 0 through 5 (Kirkpatrick and Phillips):

Level 0 - Participation
Level 1 - Reaction
Level 2 - Learning
Level 3 - Application
Level 4 - Results
Level 5 - ROI

A description of each level will follow shortly...

Wednesday, February 18, 2009

More on Design - PI


It's important to address the science behind instructional design. For reasons unknown, programmed instruction (PI) in eLearning seems to be all but abandoned in much of the learning content I review. In my opinion, this is a flaw in instructional design and, worse, one reinforced by popular eLearning development tools because they omit this capability.

Gagne actually pioneered this mode of instruction in the mid-1960s. For those unfamiliar, programmed instruction models assess a learner's needs through some form of testing and loop back, branch forward, or multi-path based on performance against pass/fail criteria. More robust PI models incorporate both pretesting and posttesting. The biggest advantage of this design model is that delivery of content is customized to a learner's needs because only filtered content is delivered (based on what the learner did not pass). In essence, a form of needs assessment is built into course delivery and the user experience. How cool is that?
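
To make the loop-back/branch-forward idea concrete, here is a minimal Python sketch; the module name and pass mark are hypothetical, and this is not meant to represent how any particular authoring tool implements PI.

```python
# Minimal sketch of programmed-instruction branching on a pass/fail criterion.
def next_step(module, score, pass_mark=0.8):
    """Loop back on failure, branch forward on success."""
    if score >= pass_mark:
        return f"branch forward: skip remediation and continue past {module}"
    return f"loop back: re-deliver {module} with added practice, then retest"

print(next_step("lesson_3_quiz", 0.65))  # below the criterion -> loop back
print(next_step("lesson_3_quiz", 0.90))  # meets the criterion -> branch forward
```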

Development tools like Macromedia Authorware or CA KnowledgeTRACK were great at facilitating this mode of design. Unfortunately, these tools are no longer available (and in all honesty, they were cumbersome to use). Meanwhile, social and Learning 2.0 suites like Q2 Learning's eCampus currently bake PI functionality into a learning path (check it out at: http://www.q2learning.com/). Nonetheless, this is a methodology we should revive. Take another look at the process illustrated above and decide for yourself.

Friday, January 16, 2009

Design

We specify how to build a course during the Design stage. When a house is to be built by a construction crew, an architect creates a blueprint that specifies the layout and measurement of materials for the contractors and construction crews. In similar fashion, instructional designers frame out the architecture and content flow of the courseware solution in a set of design documents and/or storyboards. A robust design leverages the data gathered during the analysis stage, which results in a record that outlines the learning objectives and instructional supports through appropriate delivery media. A valid design must be derived from accurate analysis data in order to develop learning objectives that can predict success through performance on the job. Learning objects based on the task-KSA analysis and support components should be designed as modular so they can be easily grouped, sequenced, updated, and reused. Where practical, design document templates should be as media-agnostic as possible so they can be efficiently converted to any blended media with or without an LCMS (Learning Content Management System).

At the curriculum level and based on learner analysis data, blended learning paths can be designed for any combination of instructor-led (classroom-based), Web instructor-led (virtual class), or self-paced media (Web-based or computer-based). Based on media and technical specifications, instructional supports such as graphics, animations, video, audio, simulations, classroom-based images, virtual labs, pretest and posttest objects, or any required interactive elements should be indicated. Blended designs and learning paths can also include job aids, on-the-job training (OJT), coaching, mentoring, and collaboration through social media. Design documents should be submitted to both subject matter experts (SMEs) and stakeholders for validation and approval prior to development.

Design specifications include:
- Development of learning objectives that map to on the job requirements
- Section titles and topologies that indicate breakdown and sequence by module, lesson, and topic, which is also known as MLT
- Specification of instructional supports
- Targeting of critical learning objects for test, interactive, and certification development
- Specification of learning group differences and roles where applicable (multi-user design)
- Use of interactive exercises and practicums that map to job tasks
- Media types
- Timing estimates
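
To show how these specifications might be captured as a modular, media-agnostic design record, here is a minimal Python sketch; the fields, product name, and sample values are hypothetical, not a standard schema.

```python
# Minimal sketch of a modular design record (MLT breakdown); fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class LearningObject:
    objective: str                      # maps to an on-the-job requirement
    module: str
    lesson: str
    topic: str
    instructional_supports: list = field(default_factory=list)
    media_types: list = field(default_factory=list)
    timing_minutes: int = 0
    certification_target: bool = False  # targeted for test/certification development

lo = LearningObject(
    objective="Configure the hypothetical AcmeRouter for a branch office",
    module="Installation", lesson="Initial configuration", topic="Branch setup",
    instructional_supports=["simulation", "virtual lab", "posttest"],
    media_types=["self-paced Web-based", "virtual class"],
    timing_minutes=20,
    certification_target=True,
)
print(f"{lo.module} / {lo.lesson} / {lo.topic} - {lo.timing_minutes} min")
```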

Practical design structure at the module or lesson levels:
- Tell'em
> Learning objectives
> Explain what the learner will learn and why
- Show'em
> Demonstrate the task or concept
- Walk'em through it
> Guide and coach through the task
> Provide feedback as prompting
- Have'em try it
> Provide the learner with an opportunity to experience the task with minimal feedback
- Have'em apply it
> Provide a real-world problem scenario where the learner can use the new skill
> Test the learner in a situation he or she will likely encounter on the job
- Tell'em again
> Reiterate the learning objective and explain what was accomplished

The structure above can be modified by using the parts that are appropriate. What is important is the progression from what is explained or demonstrated - to the transfer of learning.

Risks if not conducted or conducted improperly:
- Misdirected development efforts due to a lack of, or inaccurate, architectural specifications
- (Think of trying to get from point A to point B in unfamiliar territory without a map)
- Extended development timelines and cost due to design error and misdirection
- A course that does not meet the required needs of the participants resulting in project failure and a dissatisfied stakeholder

Wednesday, November 26, 2008

ISD and the ADDIE Model

I have done a lot of talking about learning technologies, but I think now is a really good time to get down to basics, specifically, how any learning solution should be built, as in "instructional design." ADDIE is a useful model of Instructional Systems Design (ISD) and is based on a 5-phased approach to courseware development... ADDIE is an acronym that stands for:

A - Analysis
D - Design
D - Development
I - Implementation
E - Evaluation

Execution of each phase depends on how the development team and stakeholders agree on the project's approach, typically specified within a statement of work or project charter. As a system of checks and balances, each phase and milestone requires stakeholder approval to assure that what is delivered at project completion is what the stakeholder approved from the beginning.

When little or no preexisting courseware exists, the most critical phase of a project is Analysis. Each phase follows sequentially with the exception of Evaluation, which, if conducted properly, occurs continuously throughout the process. Details regarding each phase will follow in subsequent blog entries. However, for a high-level explanation of how this should work, see the graphic below.

It is fair to say that many in the business of course development skip phases within this model and some even ignore the model completely. This is a mistake, but please do not misinterpret that to mean that this process cannot be performed efficiently or within short time-frames. Those of us who work in internal training departments or on the customer business side need only to perform an analysis on a periodic basis for each department or customer-base we serve.

Also, variations in this model exist along with compressed methodologies and adaptations based on stakeholder needs. How this can be accomplished will be addressed in subsequent entries. Stay tuned for details on each phase of the ADDIE model.