Testing for Instructional Designers — A Common Mistake

Somebody sent me a link to a YouTube video today -- a video created to explain to laypeople what instructional design is. Most of it was reasonable, until it gave the following example, with this narration:

"... and testing is created to clear up confusion and make sure learners got it right."

[Image: the video's example test question]

Something is obviously wrong here -- something an instructional designer ought to know. What is it?

Before you scroll down, come up with your own answer...

Then scroll down for the answer...

.

.

.

.

.

.

.

.

.

.

.

.

Answer: 

The test question is devoid of real-world context. Instead of asking a text-based question, we could provide an image and ask learners to point to the access panel.
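
For illustration, here's a minimal sketch of how such an image-based (hotspot) item might be scored -- my own example with placeholder coordinates, not anything from the video:

```python
def hit_access_panel(click_x: float, click_y: float,
                     panel=(120, 80, 220, 160)) -> bool:
    """Return True if a click lands inside the access panel's bounding box.

    `panel` is (left, top, right, bottom) in image pixel coordinates;
    these numbers are placeholders for wherever the panel sits in the image.
    """
    left, top, right, bottom = panel
    return left <= click_x <= right and top <= click_y <= bottom

print(hit_access_panel(150, 100))  # True: inside the panel region
print(hit_access_panel(10, 10))    # False: outside it
```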

Better yet, we could have them work on a simulated real-world task that they can complete only by using the access panel.

Better yet, we could have them work on an actual real-world task... et cetera...

Better yet, we might first ask ourselves whether anybody really needs to "LEARN" where the access panel is -- or would they just find it on their own without being trained or tested on it?

Better yet, we might first ask ourselves whether we really need a course in the first place. Maybe we'd be better off creating a performance-support tool that walks them through the troubleshooting steps -- with little or no training required.

Better yet, we might first ask ourselves whether we could design our equipment so that technicians don't need training or performance support.

.

.

.

Or we could ask ourselves existential questions about the meaning and potency of instructional design, about whether a career devoted to helping people learn work skills is worthy of being our life's work...

Or we could just get back to work and crank out that test...

SMILE...

Training Maximizers

A few years ago, I created a simple model for training effectiveness, based on the scientific research on learning combined with some practical considerations (to make the model's recommendations leverageable for learning professionals). People keep asking me about the model, so I'm going to briefly describe it here. My original YouTube video about the model goes into more depth -- you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here's a graphic depiction:

[Graphic: the Training Maximizers model, A through G]

Most training today is pretty good at A, B, and C, but it fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can't make decisions (D), can't remember what they've learned (E), can't apply what they've learned (F), or can't persevere in their own learning (G) are learners who simply haven't received leverageable benefits.

When we train or teach only to A, B, and C, we aren't really helping our learners, we aren't providing a return on the learning investment, and we haven't done enough to support our learners' future performance.
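To make the gap concrete, here's a minimal sketch -- my own illustration, not part of the model itself -- of auditing a course against the seven maximizers:

```python
# The seven maximizers, A through G, come from the model;
# this audit helper is just an illustrative sketch.
MAXIMIZERS = {
    "A": "Valid Credible Content",
    "B": "Engaging Learning Events",
    "C": "Support for Basic Understanding",
    "D": "Support for Decision-Making Competence",
    "E": "Support for Long-Term Remembering",
    "F": "Support for Application of Learning",
    "G": "Support for Perseverance in Learning",
}

def audit(covered: set) -> None:
    """Print which maximizers a course covers and which it still lacks."""
    for key, name in MAXIMIZERS.items():
        status = "covered" if key in covered else "MISSING"
        print(f"{key}. {name}: {status}")

# A typical course that stops at A, B, and C:
audit({"A", "B", "C"})
```
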
Spacing Learning Over Time — Research Report

The spacing effect is one of the most potent learning factors there is -- because it helps minimize forgetting.

Here's a research-to-practice report on the subject, backed by over 100 research studies from refereed scientific journals, plus examples. The report was originally published in 2006, but its recommendations are still valid today.

Click to download the research-to-practice report on spacing. It's a classic!
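
To show what spacing can look like in practice, here's a minimal sketch of an expanding-interval review schedule. The rule and the numbers are my illustrative assumptions -- the report reviews the research; it doesn't prescribe any single formula:

```python
from datetime import date, timedelta

def review_schedule(start: date, first_gap_days: int = 1,
                    multiplier: float = 2.0, reviews: int = 5):
    """Return review dates with expanding gaps (1, 2, 4, 8, ... days)."""
    dates = []
    gap = float(first_gap_days)
    current = start
    for _ in range(reviews):
        current += timedelta(days=round(gap))
        dates.append(current)
        gap *= multiplier
    return dates

# Five spaced reviews of material first studied on January 1:
print(review_schedule(date(2024, 1, 1)))
# -> reviews on Jan 2, Jan 4, Jan 8, Jan 16, and Feb 1
```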

 

And here's some more recent research and exploration.

Learning Benefits of Questions — Research-to-Practice Report Reissued

The "Learning Benefits of Questions" is a research-to-practice report on how to use questions to boost learning results. First published back in 2003, and partially funded by Questionmark (to whom I am still grateful), the Learning Benefits of Questions was inspired by fundamental learning research, provided a practical perspective, and even provided a diagnostic to help readers determine how well they understood questions for learning.

Because I'm still getting requests for this report and still seeing people refer to it, I've decided to reissue it with a few minor improvements. You can download the report using the following link:

Download Learning Benefits of Questions 2014 v2.0
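
As a small illustration of the report's theme -- questions with corrective feedback prompt learners to correct their errors and practice again -- here's a minimal sketch. The question content and data structure are my own hypothetical example, not taken from the report:

```python
# A hypothetical multiple-choice item (not from the report).
QUESTION = {
    "prompt": "Which practice best supports long-term remembering?",
    "options": ["Spacing repetitions over time",
                "Re-reading the material once",
                "Adding more content to the course"],
    "answer": 0,
    "feedback": "Spaced repetitions minimize forgetting over time.",
}

def score_answer(question: dict, choice: int) -> bool:
    """Score one response; give corrective feedback on a wrong answer."""
    if choice == question["answer"]:
        print("Correct!")
        return True
    print("Not quite. " + question["feedback"])
    return False

score_answer(QUESTION, 1)  # prints the corrective feedback
```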

You can also see our other papers, articles, and job aids at the Work-Learning Research catalog.

MOOCs are Ineffective — Except for One Thing!

MOOCs don't have to suck. Their 4% to 10% completion rates may be the most obvious problem, but too many MOOCs simply don't use good learning design. They don't give learners enough realistic practice, they don't set work in realistic contexts, and they don't space repetitions over time.

But after reading this article from Thomas Friedman in the New York Times, you will see that there is one thing that MOOCs do really well. They get learning content to learners.

Really, go ahead. Read the article...

 

Why is "Exposure" one of the Decisive Dozen learning factors?

Many people have wondered why I included "Exposure" as one of the most important learning factors. Why would merely exposing learners to learning content rank as so important? Friedman's article makes it clear in one example -- and there are billions of learners just waiting for the advantages of learning.

I got the idea of the importance of exposing learners to valid content by noticing, in many scientific experiments, that learners in the control group often improved tremendously -- even though they were almost always outclassed by those in the treatment groups.

By formalizing Exposure as one of the top 12 learning factors, we send the message that while learning design matters, giving learners valid content probably matters more.

And yes, that last sentence is as epically important as it sounds...

It also should give us learning experts a big dose of humility...

 

MOOCs will get better...

Most MOOCs aren't very well designed, but over time, they'll get better.

 

 

Learners Often Use Poor Learning Strategies — From a Research Review

I just read the following research article and found within it a great mini-review of some essential research.

  • Hagemans, M. G., van der Meij, H., & de Jong, T. (2013). The effects of a concept map-based support tool on simulation-based inquiry learning. Journal of Educational Psychology, 105(1), 1-24. doi:10.1037/a0029433

Experiment-Specific Findings:

The article shows that simulations -- the kind that ask learners to navigate on their own -- are more beneficial when learners are supported as they work through the simulation. Specifically, learners given an optimal learning route did better than those given a sub-optimal route. Concept maps also helped learners by supporting their comprehension. And learners who got feedback on the correctness of their practice attempts were motivated to correct their errors, thus providing themselves with additional practice.

Researchers’ Review of Learners’ Poor Learning Strategies

The research Hagemans, van der Meij, and de Jong did is good, but what struck me as even more relevant for you as a learning professional is their mini-review of research showing that learners are NOT very good stewards of their own learning. Here is what their mini-review said (from Hagemans, van der Meij, and de Jong, 2013, p. 2):

  • Despite the importance of planning for learning, few students engage spontaneously in planning activities (Manlove & Lazonder, 2004).  
  • Novices are especially prone to failure to engage in planning prior to their efforts to learn (Zimmerman, 2002).  
  • When students do engage in planning their learning, they often experience difficulty in adequately performing the activities involved (de Jong & Van Joolingen, 1998; Quintana et al., 2004). For example, they do not thoroughly analyze the task or problem they need to solve (Chi, Feltovich, & Glaser, 1981; Veenman, Elshout, & Meijer, 1997) and tend to act immediately (Ge & Land, 2003; Veenman et al., 1997), even when a more thorough analysis would actually help them to build a detailed plan for learning (Veenman, Elshout, & Busato, 1994).  
  • The learning goals they set are often of low quality, tending to be nonspecific and distal (Zimmerman, 1998).
  • In addition, many students fail to set up a detailed plan for learning, whereas if they do create a plan, it is often poorly constructed (Manlove et al., 2007). That is, students often plan their learning in a nonsystematic way, which may cause them to start floundering (de Jong & Van Joolingen, 1998), or they plan on the basis of what they must do next as they proceed, which leads to the creation of ad hoc plans in which they respond to the realization of a current need (Manlove & Lazonder, 2004).  
  • The lack of proper planning for learning may cause students to miss out on experiencing critical moments of inquiry, and their investigations may lack systematicity.
  • Many students also have problems with monitoring their progress, in that they have difficulty in reflecting on what has already been done (de Jong & Van Joolingen, 1998).
  • Regarding monitoring of understanding, students often do not know when they have comprehended the subject matter material adequately (Ertmer & Newby, 1996; Thiede, Anderson, & Therriault, 2003) and have difficulty recognizing breakdowns in their understanding (Ertmer & Newby, 1996).
  • If students do recognize deficits in their understanding, they have difficulty in expressing explicitly what they do not understand (Manlove & Lazonder, 2004).
  • One consequence is that students tend to overestimate their level of success, which may result in “misplaced optimism, substantial understudying, and, ultimately, low test scores” (Zimmerman, 1998, p. 9).

The research article is available by clicking here.

Final Thoughts

This research, and other research I have studied over the years, shows that we CANNOT ALWAYS TRUST THAT OUR LEARNERS WILL KNOW HOW TO LEARN. We as instructional designers have to design learning environments that support learners in learning. We need to know the kinds of learning situations where our learners are likely to succeed and those where they are likely to fail without additional scaffolding.

The research also shows, more specifically, that inquiry-based simulation environments can be powerful learning tools, but ONLY if we provide learners with guidance and/or scaffolding that enables them to succeed. Certainly, a few may succeed without support, but most will act suboptimally.

We have a responsibility to help our learners. We can't always put it on them...