Getting To Know ADDIE: Evaluation
We started our journey by studying the target audience, formulating the learning goals, and performing a technical analysis. We then chose the format of the course and developed the educational strategy, after which we created a prototype and got busy developing the course itself. In the previous installment, we spoke about preparing the teachers, the learners, and the environment.
Let us take a look at the individual steps comprising the final stage of the ADDIE framework, Evaluation.
Formative evaluation runs in parallel with the learning process and is meant to assess the quality of the learning materials and their reception by the students. Formative evaluation can be separated into the following categories:
- One-to-One Evaluation.
- Small Group Evaluation.
- Field Trial.
1. One-to-One Evaluation.
Imagine that you are conducting a training session that teaches medical students to use an X-ray machine. You play a video explaining the basics of operating the device. One-to-one evaluation involves gauging the effectiveness of the video while taking into account the age and skill set of the target audience. It is necessary to evaluate the following aspects of the video:
- Was the main idea of the video well understood?
- Did the video help in achieving the goals that were set?
- Can the video be used to good practical effect, given its place in the curriculum and the material being studied in parallel?
It is important to keep evaluation questions clear, concise, and to the point.
2. Small Group Evaluation.
This type of evaluation is meant to show how well the activities included in the course work in a group setting. Form a small group, preferably one consisting of representatives of the various subgroups that make up the course’s target audience.
When doing the small group evaluation, you should ask the following questions:
- Was learning fun and engaging?
- Do you understand the goal of the course?
- Do you feel that the teaching materials were relevant to the course’s goals?
- Were there enough practical exercises?
- Do you feel that the tests checked the knowledge that is relevant to the course’s goals?
- Did you receive enough feedback?
3. Field Trial.
Once the small group evaluation is complete, it is recommended to do one more trial, this time under conditions as similar as possible to the actual environments that will be used in the learning process. This “field trial” will help you evaluate the efficacy of learning in a specific environment and under specific conditions.
The main goal of summative evaluation is to prove, once the course is finished, that the training had a positive effect. For that, we use the Donald Kirkpatrick training evaluation model, which long ago became the standard for evaluating the effectiveness of training.
Summative evaluation helps us find answers to the following questions:
- Is continuing the learning program worthwhile?
- How can the learning program be improved?
- How can the effectiveness of training be improved?
- How can we make sure that the training corresponds to the learning strategy?
- How can the value of the training be demonstrated?
Donald Kirkpatrick divided his model into 4 levels:
- Level 1: Reaction.
- Level 2: Learning.
- Level 3: Behavior.
- Level 4: Results.
Let us examine them in more detail.
Level 1: Reaction.
The first thing to be analyzed once the training is complete is how the students reacted to the course and the instructor (if applicable). Usually, the data is obtained with the help of a questionnaire containing a number of statements about the course that students rate from one to five, depending on how strongly they agree or disagree with each statement. These questionnaires are usually called “smile sheets”.
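Tallying smile-sheet responses is simple arithmetic. Below is a minimal sketch in Python; the statements, ratings, and the 3.5 review threshold are all hypothetical, stand-ins for whatever your actual survey produces.

```python
# A minimal sketch of summarizing "smile sheet" responses.
# All statements and ratings below are hypothetical example data.
from statistics import mean

# Each learner rates each statement from 1 (strongly disagree) to 5 (strongly agree).
responses = {
    "The course goals were clear": [5, 4, 4, 5, 3],
    "The instructor was engaging": [4, 4, 5, 5, 4],
    "The pace of the course was right": [3, 2, 4, 3, 3],
}

for statement, ratings in responses.items():
    avg = mean(ratings)
    # Flag statements whose average falls below an (arbitrary) threshold for review.
    flag = "  <- review" if avg < 3.5 else ""
    print(f"{statement}: {avg:.2f}{flag}")
```

Averages like these make it easy to spot which aspects of the course drew a weak reaction, even before the deeper Kirkpatrick levels are evaluated.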
Level 2: Learning.
On this level, we test the knowledge and skills acquired during the training. This evaluation can take place right after the training is concluded, or after some time has passed. Tests and surveys are used to evaluate the training results and to assign a measurable value to them. Another option is to have the learners who have completed the training train other employees, give a presentation to colleagues from different branches, or help adapt and train new hires. Besides helping internalize the acquired knowledge, this has the additional benefit of speeding up the knowledge transfer process within the company.
Level 3: Behavior.
According to Donald Kirkpatrick, this evaluation level is the hardest to implement. It involves analyzing the changes in the learners’ behavior that result from the training, as well as understanding how well and how often the acquired knowledge and skills are being employed in the workplace. In most cases, the latter reflects the relevance of the knowledge delivered via the training, as well as the motivation to use it that the training may have imparted. For this level, the best evaluation tools are observing the learners’ behavior in the workplace and focus group testing.
Level 4: Results.
Finally, the fourth level deals with analyzing the financial results of the training: whether the delivered results matched the goals that had been set, whether the company’s financial indicators (sales volume, expenses, total profit, etc.) improved as a result of the training, and so on. Other factors that can be taken into account include increases in productivity, improvements in quality, decreases in workplace accidents, and decreases in turnover.
For this reason, it is important to identify beforehand the factors that will be used to determine the effectiveness of the training, and to measure them both before and after the training is conducted.
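One common way to express a level-four financial result is a simple return-on-investment figure: the net monetary benefit of the training relative to its cost. The source does not prescribe a specific formula, so the function and figures below are a hypothetical sketch, and they assume you can already express the measured improvement in monetary terms.

```python
# A hypothetical sketch of a simple training ROI calculation.
# Assumes the measured improvement (e.g., added sales, reduced waste)
# has already been converted into a monetary benefit.
def training_roi(monetary_benefit: float, training_cost: float) -> float:
    """Return ROI as a percentage: net benefit relative to cost."""
    return (monetary_benefit - training_cost) / training_cost * 100

# E.g., the training cost $20,000 and the attributable benefit is $50,000:
print(f"ROI: {training_roi(50_000, 20_000):.0f}%")  # -> ROI: 150%
```

The hard part, of course, is not the arithmetic but attributing the benefit to the training, which is exactly why the methods below are recommended.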
Evaluation on this level is both difficult and expensive. To obtain results that are as accurate as possible, it is recommended to use one of the following methods:
- Using a control group (consisting of employees that have not participated in the training).
- Performing the evaluation after some time has passed since the completion of the training, so that the results would be more pronounced.
- Performing the evaluation both before and after conducting the training.
- Conducting the evaluation a number of times during the course of the training.
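Two of the methods above, using a control group and measuring before and after the training, can be combined into a single before/after comparison. The sketch below illustrates the idea with hypothetical scores; substitute whatever metric you decided to track beforehand.

```python
# A sketch of a before/after comparison with a control group.
# All scores are hypothetical example data.
from statistics import mean

trained_before = [62, 58, 70, 65, 60]
trained_after = [78, 75, 82, 80, 74]
control_before = [61, 59, 68, 64, 63]
control_after = [63, 60, 69, 66, 64]

trained_gain = mean(trained_after) - mean(trained_before)
control_gain = mean(control_after) - mean(control_before)

# Subtracting the control group's gain separates the effect of the training
# from changes that would have happened anyway (practice, seasonality, etc.).
training_effect = trained_gain - control_gain
print(f"Trained group gain: {trained_gain:.1f}")
print(f"Control group gain: {control_gain:.1f}")
print(f"Estimated training effect: {training_effect:.1f}")
```

With real data you would also want enough participants in each group for the difference to be meaningful, which is part of why level-four evaluation is expensive.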
Is It All Worth It?
Carrying out evaluation following the Kirkpatrick model is time-consuming and not always cheap, but it provides valuable insight into whether a training program is worth continuing, whether it will deliver the expected results, and whether it will earn back the money spent on it, so that you can make the correct choice. In addition, this model helps gauge the effectiveness of the training department and its alignment with the organization’s goals. Some companies neglect to perform third- and fourth-level evaluation, contenting themselves with analysis at the basic reaction level. However, this denies them a clear understanding of the effectiveness and usefulness of the training. Summative evaluation helps you get on the right track even if the training is found to have been of substandard quality: it enables you to correct past mistakes and improve the course so that it may better benefit the next group of students.
Evaluation As The Final ADDIE Stage
Although evaluation is the final stage of the ADDIE methodology, it should be seen not as the conclusion of a long process, but as the starting point for the next iteration of the ADDIE cycle. Diligent evaluation will enable you to review and improve the educational program. Instructional Design is an iterative process, and evaluation should be carried out regularly. Also keep in mind that, to achieve the best results, it is recommended to keep an eye on the quality of the course throughout the ADDIE development process, not only at its conclusion.
Have fun building, and best of luck to you!
This post was first published on eLearning Industry.