How Human Performance Technology And Learning Experience Design Are Integrated

Learning Experience Design (LXD) is broad: it encompasses the entire learning ecosystem at both the macro and the micro scale. A trend is occurring in the eLearning space where ISD models have taken precedence over HPT, and we miss out on the broader context of performance solutions.

Instructional Design Using The ADDIE Model

You already know that online courses are changing the way businesses train their employees. What are the benefits of creating online courses? Fewer overhead costs, a quicker Return On Investment, and the easy scalability of training. But do you know how to create an online course?

Agile Is Important For eLearning Development Today, Isn’t It?

Agile has been around in the industry for quite some time, but it is fairly new in eLearning development. In this article, we will see how the Agile methodology improves on the traditional ADDIE methodology in every respect.

Agile production – online learning needs to get its skates on

The article is somewhat dashed off: yes, the development of online learning programs, insofar as it follows classic project management (ADDIE), is often laborious and expensive. And, yes, it should go faster. Donald Clark mentions a few pointers as to what that could look like: "1. Assess and edit content, 2. Avoid SME iterations, 3. AI generated online learning, 4. Review actual content (online)."

Yet the arguments are overshadowed by simple calculations of "cost savings", which reminds me of the old "e-learning is cheaper than classroom training" debates. And Donald Clark reduces e-learning to content development and delivery. But that was already the problem with ADDIE.

"Business managers are often surprised when their request for online training will take many months not days, at 15-25k per hour of learning – oh and you’ll not be able to evaluate much, as the data is unlikely to tell you much (we have a thing called SCORM)."
Donald Clark, Donald Clark Plan B, 20 August 2018

Image source: Goh Rhy Yan (Unsplash)

Evaluating the development and impact of an eLearning platform: the case of the Switzerland Travel Academy

An e-learning case study from the tourism industry, set against the backdrop of current changes ("eTourism") and the resulting need for travel agents to stay up to date. Against this background, the authors examined the introduction and use of a course offering, using a new evaluation framework of their own design that combines ADDIE and Kirkpatrick and, in their view, better matches the dynamic, cyclical development steps of an e-learning project. Questions remain, for example how to apply the framework in projects of different sizes.

"A new evaluation framework was proposed combining and adapting the ADDIE model with Kirkpatrick’s model, according to which evaluation of an eLearning platform should occur on two levels: internally and externally."
Elide Garbani-Nerini, Nadzeya Kalbaska, and Lorenzo Cantoni, in: Stangl, B., Pesonen, J. (Eds.), Information and Communication Technologies in Tourism. Vienna, New York: Springer 2018, pp. 450-462 (via ResearchGate)

Image source: Tommaso Pecchioli (Unsplash)

Designing Instruction For Authentic Learning

An applied approach to Instructional Design in which the Backwards Design model is integrated into the design phase of ADDIE to create Authentic Learning. The aim is to increase the rate of knowledge transfer by turning students into active learners instead of passive receivers of information.

A Practical Check-In With 4 Of The Most Popular Instructional Design Models

When choosing which Instructional Design model to apply in a learning course, it's often helpful to determine which methodologies are most applicable to the needs of your learners. The following article highlights 4 of the most popular Instructional Design models and how best to utilize them in training.

ADDIE Vs. Backward Design: Which One, When, And Why?

Opinion is divided on which of these two popular Instructional Design models is ideal for developing an online course. Both have their pros and cons, and each may well be the better choice in a specific situation. This article explores the ADDIE vs. Backward Design dilemma.

Getting To Know ADDIE: Part 5 – Evaluation

We started our journey by studying the target audience, formulating the learning goals, and performing technical analysis. We then proceeded to choose the format of the course and develop the educational strategy. The next step was creating a prototype and getting down to developing the course itself. In the previous installment, we spoke about preparing the teachers, the learners, and the environment.

Let us take a look at the individual steps comprising the final stage of the ADDIE framework, Evaluation.

Formative Evaluation

Formative evaluation runs in parallel with the learning process and is meant to evaluate the quality of the learning materials and their reception by the students. It can be separated into the following categories:

  1. One-to-One Evaluation.
  2. Small Group Evaluation.
  3. Field Trial.

1. One-to-One Evaluation. 

Imagine that you are running a training course that teaches medical students to use an X-ray machine. You play a video explaining the basics of operating the device. One-to-one evaluation means sitting down with individual learners to gauge the effectiveness of the video, taking into account the age and skill set of the target audience. It is necessary to evaluate the following aspects of the video:

  • Clarity.
    Was the main idea of the video well understood?
  • Usefulness.
    Did the video help in achieving the goals that were set?
  • Relevancy.
    Does the video work well in practice, given its place in the curriculum and the material being studied in parallel?

It is important to keep evaluation questions clear, concise, and to the point.
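
As a concrete illustration, here is a minimal sketch of how the answers to these three questions could be recorded and screened per learner during one-to-one sessions; the data structure, learner names, and sample notes are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class OneToOneResult:
    """One learner's answers on the three aspects of the video."""
    learner: str
    clarity: bool      # Was the main idea of the video well understood?
    usefulness: bool   # Did the video help achieve the goals that were set?
    relevancy: bool    # Does the video fit the curriculum and parallel material?
    notes: str = ""

results = [
    OneToOneResult("learner-1", True, True, False, "Move later in the module"),
    OneToOneResult("learner-2", True, False, True, "Needs a worked example"),
]

# Flag any aspect that at least one learner answered negatively.
for aspect in ("clarity", "usefulness", "relevancy"):
    misses = [r.learner for r in results if not getattr(r, aspect)]
    if misses:
        print(f"{aspect}: revisit with {', '.join(misses)}")
```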

2. Small Group Evaluation. 

This type of evaluation is meant to show how well the activities included in the course work in a group setting. Form a small group, preferably consisting of representatives of the various subgroups that make up the course’s target audience.

When doing the small group evaluation, you should ask the following questions:

  • Was learning fun and engaging?
  • Do you understand the goal of the course?
  • Do you feel that the teaching materials were relevant to the course’s goals?
  • Were there enough practical exercises?
  • Do you feel that the tests checked the knowledge that is relevant to the course’s goals?
  • Did you receive enough feedback?

3. Field Trial. 

Once the small group evaluation is complete, it is recommended to run one more trial, this time under conditions as close as possible to the actual environment in which learning will take place. This “field trial” will help you evaluate the efficacy of learning in a specific environment and under specific conditions.

Summative Evaluation

The main goal of summative evaluation is to prove, once the course is finished, that the training had a positive effect. For that, we use Donald Kirkpatrick’s training evaluation model, which long ago became the standard for evaluating the effectiveness of training.

Summative evaluation helps us find answers to the following questions:

  • Is continuing the learning program worthwhile?
  • How can the learning program be improved?
  • How can the effectiveness of training be improved?
  • How can we make sure that the training corresponds to the learning strategy?
  • How can the value of the training be demonstrated?

Donald Kirkpatrick divided his model into 4 levels:

  • Level 1: Reaction.
  • Level 2: Learning.
  • Level 3: Behavior.
  • Level 4: Results.

Let us examine them in more detail.

Level 1: Reaction. 

The first thing to be analyzed once the training is complete is how the students reacted to the course and to the instructor (if applicable). Usually, the data is obtained with the help of a questionnaire containing a number of statements about the course, which students rate from one to five depending on how strongly they agree or disagree with each statement. These questionnaires are usually called “smile sheets”.
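
As a minimal sketch of how smile-sheet data could be aggregated, the snippet below averages the one-to-five ratings per statement and flags low scorers; the statement texts, sample ratings, and the 4.0 review threshold are all hypothetical.

```python
from statistics import mean

# Hypothetical smile-sheet responses: each learner rates every statement
# from 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "The goals of the course were clear": [5, 4, 4, 5, 3],
    "The material was relevant to my work": [4, 4, 3, 5, 4],
    "The instructor was well prepared": [5, 5, 4, 4, 5],
}

for statement, ratings in responses.items():
    avg = mean(ratings)
    flag = "  <- review" if avg < 4.0 else ""
    print(f"{statement}: {avg:.1f}{flag}")
```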

Level 2: Learning. 

On this level we test the knowledge and skills acquired during the training. The evaluation can take place right after the training is concluded or after some time has passed. Tests and surveys are used to evaluate the training results and to assign a measurable value to them. Another option is to have the learners who completed the training train other employees, give a presentation to colleagues from other branches, or help adapt and train new hires. Besides helping internalize the acquired knowledge, this has the additional benefit of speeding up knowledge transfer within the company.
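
To make “a measurable value” concrete, here is a minimal sketch comparing test scores before and after the training; the learner names and scores are hypothetical.

```python
# Hypothetical pre-/post-test scores (percent correct) per learner.
pre_scores = {"alice": 55, "bob": 60, "carol": 48}
post_scores = {"alice": 82, "bob": 74, "carol": 79}

# The per-learner gain assigns a measurable value to the learning outcome.
gains = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
avg_gain = sum(gains.values()) / len(gains)

for name, gain in gains.items():
    print(f"{name}: {gain:+d} points")
print(f"Average gain: {avg_gain:.1f} points")
```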

Level 3: Behavior. 

According to Donald Kirkpatrick, this evaluation level is the hardest to implement. It involves analyzing changes in the learners’ behavior as a result of the training, and also understanding how well and how often the acquired knowledge and skills are employed in the workplace. In most cases, the latter reflects both the relevancy of the knowledge delivered by the training and the motivation it may have instilled to apply that knowledge. For this level, the best evaluation tools are observing the learners’ behavior in the workplace and running focus groups.

Level 4: Results. 

Finally, the fourth level deals with analyzing the financial results of the training: whether the delivered results matched the goals that had been set, whether the company’s financial indicators (sales volume, expenses, total profit, etc.) improved as a result of the training, and so on. Other factors that can be taken into account include increases in productivity, improvements in quality, decreases in workplace accidents, and decreases in turnover.

For this reason, it is important to determine beforehand which factors will be used to judge the effectiveness of the training, and to measure them both before and after the training is conducted.

Evaluation on this level is both difficult and expensive. To obtain results that are as accurate as possible, it is recommended to use one or more of the following methods (a brief sketch combining the first and third follows the list):

  • Using a control group (consisting of employees that have not participated in the training).
  • Performing the evaluation after some time has passed since the completion of the training, so that the results would be more pronounced.
  • Performing the evaluation both before and after conducting the training.
  • Conducting the evaluation a number of times during the course of the training.
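
As a minimal sketch of how the first and third methods combine, the snippet below compares the average change in a business metric for the trained group against a control group; the metric (monthly sales per employee) and all numbers are hypothetical.

```python
# Hypothetical before/after values of monthly sales per employee.
trained_before = [12.0, 9.5, 11.0, 10.5]
trained_after = [14.5, 12.0, 13.5, 12.5]
control_before = [11.5, 10.0, 12.0, 9.0]
control_after = [12.0, 10.5, 12.0, 9.5]

def avg_change(before, after):
    """Average per-employee change from before to after."""
    return sum(a - b for a, b in zip(after, before)) / len(before)

trained_gain = avg_change(trained_before, trained_after)
control_gain = avg_change(control_before, control_after)

# The control group's change approximates what would have happened without
# the training; the difference is the effect attributable to the training.
print(f"Trained group gain: {trained_gain:+.2f}")
print(f"Control group gain: {control_gain:+.2f}")
print(f"Training effect:    {trained_gain - control_gain:+.2f}")
```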

Is It All Worth It?

Carrying out evaluation following the Kirkpatrick model is time-consuming and not always cheap, but it provides valuable insight into whether it is worthwhile to continue a training program and whether it will deliver the expected results and earn back the money spent on it. In addition, the model helps gauge the effectiveness of the training department and its alignment with the organization’s goals. Some companies neglect third- and fourth-level evaluation, contenting themselves with analysis at the basic reaction level; however, this denies them a clear understanding of the effectiveness and usefulness of the training they conducted. Summative evaluation helps you get on the right track even if the training turns out to have been substandard: it enables you to correct past mistakes and improve the training so that it better benefits the next group of students.

Evaluation As The Final ADDIE Stage 

Although evaluation is the final stage of the ADDIE methodology, it should be considered not as the conclusion of a long process but as the starting point for the next iteration of the ADDIE cycle. Diligent evaluation will enable you to review and improve the educational program. Instructional Design is an iterative process, and evaluation should be carried out on a regular basis. Also keep in mind that, for best results, you should keep an eye on the quality of the course throughout the ADDIE development process, not only at its conclusion.

Have fun building, and best of luck to you!
