10 Practical Tips To Motivate People To Learn

A few months ago I posted a topic called “How do you motivate people to learn?” in the Learning, Education, and Training Professionals group. The thread attracted over a hundred comments, so I thought it would be worthwhile to highlight and summarize the most useful tips.

This post was first published on eLearning Industry.

Getting Started With Knowledge Management

In this article you will find a list of basic recommendations that can be used to establish knowledge management in an organization. If your company treats its employees’ expertise with the respect it deserves, you can use the information in this article to preserve it and make it work for you.

Learning Trends In 2016 – Learning Paradigm Shift

Deloitte has published their latest report, Global Human Capital Trends 2016, in which it shared the findings about the main trends in HR, and a whole section of the report is dedicated to the subject of learning. In this article, we aim to summarize the key ideas and trends the report touches upon, as well as the learning paradigm shift it suggests.

Changing Behavioral Patterns Through Education

Behavioral patterns are useful in our everyday lives, but changing them is hard. Let's look at the stages a person passes through when their behavior model changes. Understanding these stages will help us grasp students’ psychology and plan the syllabus so that it reinforces new behavioral patterns in their minds.

Knowledge Centered Support Methodology Part 2: Implementation

Before we begin talking about adapting and implementing Knowledge Centered Support (KCS), let us consider one of the key concepts the methodology deals with. Namely, what is “knowledge”? We do not think twice before uttering the word in professional communication, but chances are, being asked to define it will give you pause.

Using Virtual Reality In Education

The beginning of the Sixties, besides the Cuban Missile Crisis and the rise of the hippie movement, is notable for giving birth to Sensorama, one of the earliest prototypes of modern virtual reality equipment. It allowed viewing stereoscopic 3D images accompanied by stereo sound, smells, and wind effects. The device was revolutionary for its time, and it spearheaded the development of what Jaron Lanier dubbed “Virtual Reality” (VR) in 1989. Having obtained a catchy name, the new technology has been gathering adherents ever since.

Knowledge Centered Support Methodology: Getting Started

When an eLearning professional encounters the word “methodology”, they are likely to think of ADDIE, or maybe SAM. However, today you’re in for a surprise, for I’m not talking about a course authoring methodology. Instead, I’d like to tell you about a knowledge management methodology called KCS, which stands for “Knowledge Centered Support”.

Evaluating Training Effectiveness And ROI

Every year, companies all over the world create hundreds of thousands of eLearning courses and conduct just as many training sessions. Courses are created to train both the company’s own employees and unaffiliated personnel, such as clients or other third parties. Creating courses and using them in the education process is, in most cases, quite costly. When training company employees, it is also important to consider that acquiring new knowledge and skills distracts employees from their duties and costs the company time. To evaluate the effectiveness of training and its financial viability, it is necessary to calculate the expenses of creating and conducting the training, gauge the results achieved by the employees who took part in it, and decide whether the increase in employee efficiency and company profits was sufficient to recoup the associated costs.

But how does one calculate training effectiveness? Luckily, there is an all-purpose tool widely used by managers responsible for internal training processes: Donald Kirkpatrick’s Learning Evaluation Model. The model is relatively time-consuming to implement, but the accuracy with which it helps you understand whether your training program should be continued, and how it can be improved, is well worth the effort.

Kirkpatrick’s Learning Evaluation Model consists of four levels:

  • Level 1: Reaction.
  • Level 2: Learning.
  • Level 3: Behavior.
  • Level 4: Results.

You can read about these levels in depth in my previous article, Getting To Know ADDIE: Evaluation. In this article, I would like to focus on the fifth level, which Jack Phillips suggested adding. It is this fifth level that helps assess the financial viability of training, its costs, and its benefits.

Level 5: Return On Investment (ROI) 

When evaluating training effectiveness, it is customary to consider an additional level of Kirkpatrick’s model: the ROI methodology, developed by Jack Phillips in 1991. This methodology enables one to express the evaluation data obtained on the fourth level in terms of money, and then compare the estimated profit with the expenses the training program incurred.

The head of the company will require information about the projected costs of a training program before giving it the green light, especially if the budget is tight. In most cases, it is the management that insists on using the ROI methodology for assessing the results of training and personnel development, which makes its use more or less a given whenever training is conducted.

The ROI methodology is often used to estimate the potential profit from conducting a training program, and to make sure that the projected costs would fit the budget. The ROI coefficient takes the form of a percentage, expressing the relationship between the projected profit and the projected costs of a training program, calculated according to the following formula:

ROI = [(projected profit - projected costs) / projected costs] x 100%
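For illustration, the formula can be expressed as a short Python function; the dollar figures in the example are invented:

```python
def roi_percent(projected_profit: float, projected_costs: float) -> float:
    """Return the ROI of a training program as a percentage.

    A negative result means the program is projected to cost more
    than the profit it generates.
    """
    return (projected_profit - projected_costs) / projected_costs * 100

# Hypothetical program: $40,000 in projected costs, $50,000 in projected profit.
print(roi_percent(50_000, 40_000))  # 25.0
```

An ROI of 25% means that every dollar invested in the program is projected to return the dollar itself plus 25 cents of profit.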

The fifth level of evaluation, described by the Phillips methodology, makes it possible to:

  • Estimate the cost of a training program and make a prediction regarding whether conducting the program will be cost-effective.
  • Demonstrate a direct relationship between the company’s productiveness and the training of employees.
  • Evaluate a training program as a business tool.

Is Using The Fifth Level Of Evaluation Always Necessary? 

Considering that implementing the fourth and fifth levels of evaluation according to the Kirkpatrick model is costly in terms of both time and money, it is important to understand whether such in-depth evaluation is pertinent in your specific situation. The ROI evaluation is usually conducted sparingly, for no more than 5-10% of the total number of training programs.

The fourth and fifth levels of evaluation are usually employed only to validate training programs concerning the company’s strategic interests, as such programs demand significant investment and are closely monitored by the company’s management. This does not mean that training programs of lesser importance should not be evaluated at all; it is just that the first three levels of the evaluation model are usually sufficient for them. Here are the general guidelines for evaluating your training programs using the Kirkpatrick model:

  • Ideally, every program should be evaluated at least on the first level (Reaction).
  • Most programs should be evaluated on the second level (Learning) regularly, and only periodically on the third (Behavior).
  • A few programs, those largest in scope and with the greatest impact, should be evaluated on the third (Behavior), fourth (Results), and fifth (ROI) levels.

Once you’ve decided to evaluate a training program on the fifth level, it is vital to calculate the data carefully rather than resort to guesswork. Should the resulting ROI prove negative, diligently calculated data will help pinpoint the weak links in the training program.

Using The ROI Forecast In Planning

Company managers usually need to know the estimated ROI of a training course long before it is developed or implemented. For that reason, many successful companies use an ROI forecast to assist managers in decision making. If any deficiencies are discovered at this stage, corresponding changes are made to ensure that the training program is financially viable.

In Conclusion (Plus One More Bonus Of The ROI Model) 

Another significant advantage of using the ROI model on a regular basis is that it changes the attitude that both the managers of other branches and top company management have towards training. Regularly evaluating training programs and demonstrating their impact in hard numbers helps promote the role of training in the development of the company’s employees, as well as the company itself. Calculating ROI when planning and evaluating training programs keeps those responsible for their creation focused on the company’s business goals, and improves the design, development, and delivery of training. Thus, besides improving the effectiveness of training programs, using the ROI model changes how company management and those in charge of approving training programs view training as a whole.

Getting To Know ADDIE: Part 5 – Evaluation

We started our journey by studying the target audience, formulating the learning goals, and performing technical analysis. We then proceeded to choosing the format of the course and developing the educational strategy. The next step was creating a prototype and getting busy developing the course itself. In the previous installment we spoke about preparing the teachers, learners, and the environment.

Let us take a look at the individual steps comprising the final stage of the ADDIE framework, Evaluation.

Formative Evaluation

Formative evaluation runs parallel to the learning process and is meant to evaluate the quality of the learning materials and their reception by the students. Formative evaluation can be separated into the following categories:

  1. One-to-One Evaluation.
  2. Small Group Evaluation.
  3. Field Trial.

1. One-to-One Evaluation. 

Imagine that you are conducting a training session that teaches medical students to use an X-ray machine. You play a video explaining the basics of operating the device. One-to-one evaluation involves gauging the effectiveness of the video, taking into account the age and skillset of the target audience. It is necessary to evaluate the following aspects of the video:

  • Clarity.
    Was the main idea of the video well understood?
  • Usefulness.
    Did the video help in achieving the goals that were set?
  • Relevancy.
    Is the video practically useful, given its place in the curriculum and the material being studied in parallel?

It is important to keep evaluation questions clear, concise, and to the point.

2. Small Group Evaluation. 

This type of evaluation is meant to determine how well the activities included in the course work in a group setting. Form a small group, preferably consisting of representatives of the various subgroups that make up the course’s target audience.

When doing the small group evaluation, you should ask the following questions:

  • Was learning fun and engaging?
  • Do you understand the goal of the course?
  • Do you feel that the teaching materials were relevant to the course’s goals?
  • Were there enough practical exercises?
  • Do you feel that the tests checked the knowledge that is relevant to the course’s goals?
  • Did you receive enough feedback?

3. Field Trial. 

Once the small group evaluation is complete, it is recommended to do one more trial, this time under conditions as similar as possible to the actual environments that will be used in the learning process. This “field trial” will help you evaluate the efficacy of learning in a specific environment and under specific conditions.

Summative Evaluation

The main goal of summative evaluation is to prove, once the course is finished, that the training had a positive effect. For that, we use the Donald Kirkpatrick training evaluation model, which long ago became the standard for evaluating training effectiveness.

Summative evaluation helps us find answers to the following questions:

  • Is continuing the learning program worthwhile?
  • How can the learning program be improved?
  • How can the effectiveness of training be improved?
  • How can we make sure that the training corresponds to the learning strategy?
  • How can the value of the training be demonstrated?

Donald Kirkpatrick divided his model into 4 levels:

  • Level 1: Reaction.
  • Level 2: Learning.
  • Level 3: Behavior.
  • Level 4: Results.

Let us examine them in more detail.

Level 1: Reaction. 

The first thing to be analyzed once the training is complete is how the students reacted to the course and the instructor (if applicable). Usually, the data is obtained with the help of a questionnaire containing a number of statements about the course, which students rate from one to five depending on whether they agree or disagree with a particular statement. These questionnaires are usually called “smile sheets”.
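As a minimal sketch, processing such a questionnaire amounts to averaging the ratings per statement; the response data below is invented for illustration:

```python
from statistics import mean

# Hypothetical "smile sheet" responses: each inner list is one learner's
# ratings (1-5) of five statements about the course.
responses = [
    [5, 4, 4, 5, 3],
    [4, 4, 5, 5, 2],
    [5, 5, 4, 4, 3],
]

# Average rating per statement across all learners;
# zip(*responses) groups the ratings column by column.
per_statement = [mean(scores) for scores in zip(*responses)]
print(per_statement)
```

Statements with noticeably lower averages (here, the last one) point to the parts of the course that drew the weakest reaction and deserve a closer look.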

Level 2: Learning. 

On this level we test the knowledge and skills acquired during the training. This evaluation can take place right after the training concludes, or after some time has passed. Tests and surveys are used to evaluate the training results and assign them a measurable value. Another option is to have the learners who completed the training train other employees, conduct a presentation for colleagues from different branches, or help in onboarding and training new hires. Besides helping internalize the acquired knowledge, this has the additional benefit of speeding up knowledge transfer within the company.

Level 3: Behavior. 

According to Donald Kirkpatrick, this evaluation level is the hardest to implement. It involves analyzing the changes in the learners’ behavior as a result of participating in training, and also understanding how well and how often the acquired knowledge and skills are being employed in the workplace. In most cases, the latter reflects the relevancy of the knowledge delivered via the training, as well as the motivation to use the newly acquired knowledge the training may have imparted. For this level, the best evaluation tools are observing the learners’ behavior in the workplace and focus group testing.

Level 4: Results. 

Finally, the fourth level deals with analyzing the financial results of the training. Namely, whether the delivered results matched the goals that had been set, whether the company’s financial indicators (sales volume, decrease in expenses, total profit, etc.) improved as a result of the training, and so on. Other factors that can be taken into account include increases in productivity, improvements in quality, decreases in workplace accidents, increases in the number of sales, and decreases in turnover.

For this reason, it is important to identify beforehand the factors that will be used to gauge the effectiveness of the training, and to measure them before and after the training is conducted.

Evaluation on this level is both difficult and expensive. To obtain results that are as accurate as possible, it is recommended to use one of the following methods:

  • Using a control group (consisting of employees that have not participated in the training).
  • Performing the evaluation after some time has passed since the completion of the training, so that the results would be more pronounced.
  • Performing the evaluation both before and after conducting the training.
  • Conducting the evaluation a number of times during the course of the training.
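The control-group and before/after methods above can be combined into a simple difference-in-gains comparison; the sales figures below are hypothetical:

```python
from statistics import mean

# Hypothetical monthly sales per employee, measured before and after training.
trained_before = [18, 22, 20, 19]
trained_after = [24, 27, 25, 23]

# Control group: comparable employees who did not participate in the training.
control_before = [19, 21, 20, 18]
control_after = [20, 22, 20, 19]

# Effect attributable to training: the trained group's improvement
# minus the control group's improvement over the same period.
trained_gain = mean(trained_after) - mean(trained_before)
control_gain = mean(control_after) - mean(control_before)
print(trained_gain - control_gain)
```

Subtracting the control group's gain filters out improvements that would have happened anyway (seasonal demand, market growth), leaving a cleaner estimate of the training's own effect.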

Is It All Worth It?

Carrying out evaluation following the Kirkpatrick model is time-consuming and not always cheap, but it provides valuable insight into whether it is worthwhile to continue a training program and whether it will deliver the expected results and earn back the money spent on it, so that you can make the correct choice. In addition, this model helps gauge the effectiveness of the training department, and its alignment with the organization’s goals. Some companies neglect to perform third and fourth level evaluation, contenting themselves with analysis on the basic reaction level. However, this denies them the benefits of a clear understanding of the effectiveness and usefulness of the conducted training. Summative evaluation helps in getting on the right track, even if the conducted training is found to have been of substandard quality. It enables you to correct past mistakes and improve the training, so that it may better benefit the next group of students.

Evaluation As The Final ADDIE Stage 

Although evaluation is the final stage of the ADDIE methodology, it should be considered not the conclusion of a long process, but a starting point for the next iteration of the ADDIE cycle. Diligent evaluation will enable you to review and improve the educational program. Instructional Design is an iterative process, and evaluation should be carried out regularly. Also, keep in mind that to achieve the best results, it is recommended to keep an eye on the quality of the course throughout the ADDIE development process, not only at its conclusion.

Have fun building, and best of luck to you!
