New Meta-Analysis on Debunking — Still No Clear Path to Potency

A new meta-analysis on debunking was released last week, and I was hoping to get clear guidelines on how to debunk misinformation. Unfortunately, the science still seems somewhat equivocal about how to debunk. Either that, or there's just no magic bullet.


Let's break this down. We all know misinformation exists. People lie, people get confused and share bad information, people don't vet their sources, incorrect information is easily spread, et cetera. Debunking is the act of providing information or inducing interactions intended to correct misinformation.

Misinformation is a huge problem in the world today, especially in our political systems. Democracy is difficult if political debate and citizen conversations are infused with bad information. Misinformation is also a huge problem for citizens themselves and for organizations. People who hear false health-related information can make themselves sick. Organizations whose employees make decisions based on bad information can hurt the bottom line.

In the workplace learning field, there's a ton of misinformation that has incredibly damaging effects. People believe in the witchcraft of learning styles, neuroscience snake oil, traditional smile sheets, and all kinds of bogus information.

It would be nice if misinformation could be easily thwarted, but too often it lingers. For example, the idea that people remember 10% of what they read, 20% of what they hear, 30% of what they see, etc., has been around since 1913 if not before, but it still gets passed around every year on bastardized versions of Dale's Cone.

A meta-analysis is a scientific study that compiles many other scientific studies using advanced statistical procedures to enable overall conclusions to be drawn. The study I reviewed (the one that was made available online last week) is:

Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracin, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, advance online publication. Available (with journal access) at http://journals.sagepub.com/doi/10.1177/0956797617714579
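
To give a feel for the "advanced statistical procedures" involved, here is a minimal sketch (in Python) of the core arithmetic behind fixed-effect meta-analytic pooling: each study's effect size is weighted by the inverse of its variance, so larger, more precise studies count for more. The numbers are invented for illustration; they are not values from Chan and colleagues, whose actual models are more sophisticated.

    # Minimal sketch of fixed-effect meta-analytic pooling.
    # Effect sizes and variances are invented for illustration;
    # they are NOT values from Chan et al. (2017).
    studies = [
        (0.45, 0.04),  # (effect size d, variance of d) for study 1
        (0.80, 0.09),  # study 2
        (0.30, 0.02),  # study 3
    ]

    # Weight each study by the inverse of its variance,
    # so more precise studies count for more.
    weights = [1.0 / variance for _, variance in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

    print(f"Pooled effect size: {pooled:.2f}")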

This study compiled scientific studies that:

  1. First presented people with misinformation (except a control group that got no misinformation).
  2. Then presented them with a debunking procedure.
  3. Then looked at what effect the debunking procedure had on people's beliefs.

There are three types of effects examined in the study (illustrated in the brief sketch after this list):

  1. Misinformation effect = Difference between the group that just got misinformation and a control group that didn't get misinformation. This determined how much the misinformation hurt.

  2. Debunking effect = Difference between the group that just got misinformation and a group that got misinformation and later debunking. This determined how much debunking could lessen the effects of the misinformation.

  3. Misinformation-Persistence effect = Difference between the group that got misinformation-and-debunking and the control group that didn't get misinformation. This determined how much debunking could fully reverse the effects of the misinformation.
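
Here is that sketch: a minimal illustration (in Python) of the three comparisons, using hypothetical belief ratings that I made up; they are not data from the meta-analysis.

    # Hypothetical mean belief-in-misinformation ratings (0 = no belief,
    # 10 = full belief). The numbers are invented for illustration only.
    control = 2.0            # control group: no misinformation presented
    misinfo_only = 7.0       # got misinformation, no debunking
    misinfo_debunked = 4.5   # got misinformation, then debunking

    misinformation_effect = misinfo_only - control        # 5.0: harm done by misinformation
    debunking_effect = misinfo_only - misinfo_debunked    # 2.5: how much debunking lessened it
    persistence_effect = misinfo_debunked - control       # 2.5: what debunking failed to reverse

    # If debunking fully reversed the misinformation,
    # persistence_effect would be near zero.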

They looked at three sets of factors.

First, the study examined what happens when people encounter misinformation. They found that the more people thought of explanations for the false information, the more they would believe this misinformation later, even in the face of debunking. From a practical standpoint then, if people are receiving misinformation, we should hope they don't think too deeply about it. Of course, this is largely out of our control as learning practitioners, because people come to us after they've gotten misinformation. On the other hand, it may provide hints for us as we use knowledge management or social media. The research findings suggest that we might need to intervene immediately when bad information is encountered to prevent people from elaborating on the misinformation.

Second, the meta-analysis examined whether debunking messages that included procedures to induce people to make counter-arguments to the misinformation would outperform debunking messages that did not include such procedures (or that included less potent counter-argument-inducing procedures). They found consistent benefits: the counter-argument-inducing procedures helped reduce misinformation. This strongly suggests that debunking should induce counter-arguments to the misinformation. And though specific mechanisms for doing this may be difficult to design, it is probably not enough to present the counter-arguments ourselves; we must get our learners to process the counter-arguments themselves, to a sufficient level of mathemagenic (learning-producing) processing.

Third, the meta-analysis looked at whether debunking messages that included explanatory information for why the misinformation was wrong would outperform debunking messages that included only contradictory claims (for example, statements to the effect that the misinformation was wrong). They found mixed results here. Debunking messages with explanatory information were more effective in debunking misinformation (moving people from being misinformed to being less misinformed), but these more explanatory messages were actually less effective in fully ridding people of the misinformation. Given this conflict, it's not clear whether greater explanations make a difference, or how they might be designed to make a difference. One wild conjecture: perhaps where explanations can induce relevant counter-arguments to the misinformation, they will be effective.

Overall, I came away disappointed that we haven't been able to learn more about how to debunk. This is NOT these researchers' fault. The data is the data. Rather, the research community as a whole has to double down on debunking and persuasion and figure out what works.

People certainly change their minds on heartfelt issues. Just think about the acceptance of gays and lesbians over the last twenty years. Dramatic changes! Many people are much more open and embracing. Well, how the hell did this happen? Some people died out, but many other people's minds were changed.

My point is that misinformation cannot possibly be a permanent condition and it behooves the world to focus resources on fixing this problem -- because it's freakin' huge!

------------

Note that a review of this research in the New York Times painted this in a more optimistic light.

------------

Some additional thoughts (added one day after original post).

To do a thorough job of analyzing any research paradigm, we should, of course, go beyond meta-analyses to the original studies being meta-analyzed. Most of us don't have time for that, so we often take the short-cut of just reading the meta-analysis or just reading research reviews, etc. This is generally okay, but there is a caveat that we might be missing something important.

One thing that struck me in reading the meta-analysis is that the authors commented on the typical experimental paradigm used in the research. It appeared that the actual experiments might have lasted 30 minutes or less, maybe 60 minutes at most. This includes reading (learning) the misinformation, completing a ten-minute distractor task, receiving the treatment manipulations (that is, the debunking methods), and answering questions to assess their final state of belief. To ensure I wasn't misinterpreting the authors' message that the experiments were short, I looked at several of the studies compiled in the meta-analysis. The research I looked at used very short experimental sessions. Here is one of the treatments the experimental participants received (it includes both misinformation and a corrective, so it is one of the longer treatments):

Health Care Reform and Death Panels: Setting the Record Straight

By JONATHAN G. PRATT
Published: November 15, 2009

WASHINGTON, DC – With health care reform in full swing, politicians and citizen groups are taking a close look at the provisions in the Affordable Health Care for America Act (H.R. 3962) and the accompanying Medicare Physician Payment Reform Act (H.R. 3961).

Discussion has focused on whether Congress intends to establish “death panels” to determine whether or not seniors can get access to end-of-life medical care. Some have speculated that these panels will force the elderly and ailing into accepting minimal end-of-life care to reduce health care costs. Concerns have been raised that hospitals will be forced to withhold treatments simply because they are costly, even if they extend the life of the patient. Now talking heads and politicians are getting into the act.

Betsy McCaughey, the former Lieutenant Governor of New York State has warned that the bills contain provisions that would make it mandatory that “people in Medicare have a required counseling session that will tell them how to end their life sooner.”

Iowa Senator Chuck Grassley, the ranking Republican member of the Senate Finance Committee, chimed into the debate as well at a town-hall meeting, telling a questioner, “You have every right to fear…[You] should not have a government-run plan to decide when to pull the plug on Grandma.”

However, a close examination of the bill by non-partisan organizations reveals that the controversial proposals are not death panels at all. They are nothing more than a provision that allows Medicare to pay for voluntary counseling.

The American Medical Association and the National Hospice and Palliative Care Organization support the provision. For years, federal laws and policies have encouraged Americans to think ahead about end-of-life decisions.

The bills allow Medicare to pay doctors to provide information about living wills, pain medication, and hospice care. John Rother, executive vice president of AARP, the seniors’ lobby, repeatedly has declared the “death panel” rumors false.

The new provision is similar to a proposal in the last Congress to cover an end-of-life planning consultation. That bill was co-sponsored by three Republicans, including John Isakson, a Republican Senator from Georgia.

Speaking about the end of life provisions, Senator Isakson has said, “It's voluntary. Every state in America has an end of life directive or durable power of attorney provision… someone said Sarah Palin's web site had talked about the House bill having death panels on it where people would be euthanized. How someone could take an end of life directive or a living will as that is nuts.”

That's it. That's the experimental treatment.

Are we truly to believe that such short exposures are representative of real-world debunking? Surely not! In the real world, people who get misinformation often hold that misinformation over months or years while occasionally thinking about the misinformation again or encountering additional supportive misinformation or non-supportive information that may modify their initial beliefs in the misinformation. This all happens and then we try our debunking treatments.

Finally, it should be emphasized that the meta-analysis compiled only eight research articles, many using the same (or similar) experimental paradigm. This is further inducement to skepticism, and it reinforces my plea above for more study of debunking -- especially in more ecologically valid situations!


It’s Behavior Change, Stupid! Transfer That!

Those of us in the learning professions are naturally enamored with the power of learning. This is all fine and good--learning is necessary for human survival and for our most enlightened achievements--but too narrow a focus on learning misses a key responsibility. Indeed, learning without sustained behavior change is like feeding a man who's planning to jump off a bridge. Nice, but largely beside the point.

The bottom line is that we learning professionals must look not only to the science of learning, but also to the science of behavior change.

My friend and colleague Julie Dirksen has been thinking about behavior change for years. Here's a recent article she wrote:

Here is another recent resource on behavior change:

It's good to keep this all in perspective. Science often moves slowly and in fits and starts. There is great promise in the many and varied research areas under study. You can see this most fully in the health-behavior-change field, where a ton of research is being done. Here's a quick list of the behavior-change topics under study:

  • Cardiac health
  • Obesity
  • Clean cooking
  • Asthma
  • HIV and STD prevention
  • College-student drinking
  • Cancer prevention
  • Child survival
  • Recycling
  • Use of hotel towels
  • Young-driver distraction
  • Hand washing
  • Encouraging walking and cycling
  • Smoking cessation
  • Healthy pregnancy behaviors
  • Promoting physical activity

Okay, the list is almost endless.

One of the findings is not surprising: lasting behavior change is very difficult. Think how hard it is to lose weight and keep it off, or to stop an internet addiction. So it's great that researchers are looking into this.

In the learning field, we have our own version of behavior-change research. It's called transfer. We've already learned a lot about how to get people to transfer what they've learned back to their jobs or into their lives. We're not done learning, of course.

One thing we do know is that training by itself is rarely sufficient to produce lasting change. Sometimes our learners will take what they've learned, put it immediately into practice, deepen their own learning, and continue to learn and engage and use what they've learned over time. But too often, they forget, they get distracted, they get no support.


Summary

My four messages to you are these:

  1. Keep your eyes open for Behavior Change Research.
  2. Keep your eyes open for Transfer-of-Learning Research.
  3. Don't be a fool in thinking that training/learning is enough.
  4. We learning professionals have a responsibility to enable usable behavior change.


Wisdom from John Medina — and Lunch

John Medina, author of Brain Rules and developmental molecular biologist at the University of Washington and Seattle Pacific University, was today's keynote speaker at PCMA's Education Conference in Fort Lauderdale, Florida.

[Photo: John Medina]

He did a great job in the keynote, well organized and with oodles of humor, but what struck me was that even though the guy is a real neuroscientist, he is very clear in stating the limitations of our understanding of the brain. Here are some direct quotes from his keynote, as I recorded them in my notes:

"I don't think brain science has anything to say for business practice."

"We still don't really know how the brain works."

"The state of our knowledge [of the brain] is childlike."

"The human brain was not built to learn. It was built to survive."

Very refreshing! Especially in an era where conference sessions, white papers, and trade-industry publications are oozing with brain science bromides, neuroscience snake oil, and unrepentant con artists who, in the interest of taking money from fools, corral the sheep of the learning profession into all manner of poor purchasing decisions. 

The Debunker Club is working on a resource page to combat the learning myth, "Neuroscience (Brain Science) Trumps Other Sources of Knowledge about Learning," and John Medina gives us more ammunition against the silliness.

In addition to John's keynote, I enjoyed eating lunch with him. He's a fascinating man, wicked knowledgeable about a range of topics, funny, and kind to all (as I found out when he developed a deep repartee with the guy who served our food). Thanks, John, for a great time at lunch!

One of the topics we talked about was the poor record researchers have in getting their wisdom shared with real citizens. John believes researchers, who often get research funding from taxpayer money, have a moral obligation to share what they've learned with the public.

I shared my belief that one of the problems is that there is no funding stream for research translators. The academy often frowns on professors who attempt to share their knowledge with lay audiences. Calls of "selling out" are rampant. You can read my full thoughts on the need for research translators at a blog post I wrote early this year.

Later in the day at the conference, John was interviewed in a session by Adrian Segar, an expert on conference and meeting design. Again, John shined as a deep and thoughtful thinker -- and refreshingly, as a guy who is more than willing to admit when he doesn't know and/or when the science is not clear.

To check out or buy the latest version of Brain Rules, click on the image below:


Training Maximizers

A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model's recommendations leverageable for learning professionals). People keep asking me about the model, so I'm going to briefly describe it here. If you want to look at my original YouTube video about the model -- which goes into more depth -- you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here's a graphic depiction:

[Image: Training Maximizers model diagram]

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can't make decisions (D), learners who can't remember what they've learned (E), learners who can't apply what they've learned (F), and learners who can't persevere in their own learning (G) are learners who simply haven't received leverageable benefits.

When we train or teach only to A, B, and C, we aren't really helping our learners, we aren't providing a return on the learning investment, and we haven't done enough to support our learners' future performance.
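
One way to put the model to work (my own illustration here, not part of the original model) is to treat the seven maximizers as a simple audit checklist for any course you're reviewing. A minimal sketch, with hypothetical ratings:

    # Hypothetical audit of one course against the seven Training Maximizers.
    # The True/False ratings are invented for illustration.
    maximizers = {
        "A. Valid Credible Content": True,
        "B. Engaging Learning Events": True,
        "C. Support for Basic Understanding": True,
        "D. Support for Decision-Making Competence": False,
        "E. Support for Long-Term Remembering": False,
        "F. Support for Application of Learning": False,
        "G. Support for Perseverance in Learning": False,
    }

    gaps = [name for name, supported in maximizers.items() if not supported]
    print("Supports still missing:", "; ".join(gaps))
    # A course strong only on A, B, and C -- the typical pattern described
    # above -- leaves D through G unsupported.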


Will Thalheimer Interviewed by Brent Schlenker

I had the great pleasure of being interviewed recently by Brent Schlenker, long-time elearning advocate. We not only had a ton of fun talking, but Brent steered us into some interesting discussions.

----------

He's created a three-part video series of our discussion:

----------

Brent is a great interviewer--and he gets some top-notch folks to join him. Check out his blog.


How the Science of Learning Explains Superbowl Victory

Sports is sometimes a great crucible for life lessons. Players learn teamwork, the benefits of hard work and practice, and how to act in times of success and failure.

Learning professionals can learn a lot from sports as well. The 2015 Superbowl is a case in point.

[Image: The interception]

With 27 seconds to go, the Seattle Seahawks were on the New England Patriots' one-yard line. Only one more yard to go for victory. They called a pass play, rather controversial in the court of public opinion, but not a bad call according to statisticians.

The Seahawks quarterback, Russell Wilson, thought he had a touchdown. “I thought it was going to be a touchdown when I threw it.” Unfortunately for Wilson and the Seahawks, Malcolm Butler, a Patriots rookie cornerback, was prepared.

This is where the science of learning comes in. Butler was prepared for a number of reasons--many having to do with the science of learning. For an explanation of the 12 most important learning factors, you can review my work on the Decisive Dozen.

  1. Butler, despite being a rookie, had played a lot of football before. He had a lot of prior knowledge, which enabled him to quickly learn what to do.
  2. He was given tools and resources to help him learn. He got a playbook, he was able to view videotape of Seahawks' plays, he was surrounded by experienced players and coaches, he was motivated and encouraged.
  3. He was given feedback on his performance--but not just general feedback, very specific feedback on what to do.
  4. He got many practice opportunities to refine his knowledge and performance.
  5. Perhaps most importantly, Butler was prompted to make a link between a particular situation and a particular action to take.

Here's the formation prior to the interception. Notice on the bottom of the image that the receivers for Seattle are "stacked" two deep--that is, one is lined up on the line of scrimmage, one is behind the other.

[Image: Pre-snap formation showing Seattle's stacked receivers]


Here is what Butler saw just as the play was getting started.

[Image: What Butler saw as the play started]

Here's what Butler said:

“I saw Wilson looking over there. He kept his head still and just looked over there, so that gave me a clue. And the stacked receivers; I just knew they were going to throw. I don’t know how I knew. I just knew. I just beat him to the point and caught the ball.”

In a separate interview he restated what he saw:

“I remembered the formation they were in, two receivers stacked, I just knew they were going to [a] pick route.”

From a science of learning perspective, what Butler did was link a particular SITUATION (two receivers stacked) with a particular ACTION he was supposed to take (move first to where the ball would be thrown). It's this cognitive linking that was so crucial to the Superbowl victory--and to human performance more generally.

While we human beings like to think of ourselves as proactive--and we are sometimes--most of our moment-to-moment actions are triggered by environmental cues. SITUATION-ACTION! It's the way the world works. When we are served food on smaller plates, we eat less--because the small plates make the food look bigger, triggering us to feel full more quickly. When we drive on a narrow street, we drive more slowly. When we see someone dressed in a suit, we think more highly of that person than if they were dressed shabbily. We can't help ourselves. What's more, these reactions are largely automatic, unintended, subconsciously triggered. Indeed, notice Butler's first quote. He wasn't sure what made him react as he did.

In the Decisive Dozen, I refer to this phenomenon as "Context Alignment." The notion is that the learning situation ought to mimic or simulate the anticipated performance situation. Others have similar notions about the importance of context, including Bransford's transfer-appropriate processing, Tulving's encoding-specificity, and Smith's context-dependent memory.

Indeed, recently a meta-analysis (a study of many studies) by Gollwitzer and Sheeran found that "implementation intentions"--what I prefer to call "triggers"--had profound effects, often improving performance compliance by twice as much as having people set a goal to accomplish something. That is, creating a cognitive link between SITUATION and ACTION was often twice as potent as prompting people to have a goal to take a particular action.
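
In software terms, an implementation intention behaves something like a lookup table: perceive the situation, fire the linked action, with no slow deliberation in between. Here is a toy sketch of Butler's trigger (my illustration only; the cues are paraphrased from his quotes):

    # Toy model of a SITUATION -> ACTION trigger ("implementation intention").
    # The cues and the action are paraphrased from Butler's quotes;
    # the code itself is purely illustrative.
    triggers = {
        ("stacked receivers", "quarterback staring at the stack"):
            "break to the throw point",
    }

    def react(perceived_cues):
        """Fire the rehearsed action if the cues match a trained trigger."""
        return triggers.get(perceived_cues,
                            "read and respond (slower, deliberate)")

    print(react(("stacked receivers", "quarterback staring at the stack")))
    # -> break to the throw point  (fast and automatic, because the
    #    situation-action link was rehearsed in practice)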

Butler was successful because he had a trigger linking the SITUATION of stacked receivers with the ACTION of bolting to the point where the ball would be thrown.

[Image: Situation-Action diagram]

Listen to football players talk and you'll know that the best teams understand this phenomenon deeply. They talk about "picking up the keys," which is really a way of saying they notice what situation they're in on the field. Once they understand the situation, they know what action to take. Moreover, if they can automate the situation-action link--through repeated practice--they can take actions more quickly, which can make all the difference!!

Here's how Butler talks about his preparation. When asked in an interview, "You said you knew that play was coming. How did you know that play was coming?" Butler said:

"Preparation in the [fan?] room, looking over my play book, looking over their plays, studying my opponent. I got beat on it at practice ... last week, and Bill [Coach of New England Patriots] told me I got to be on it. And what I did wrong at practice I gave ground instead of just planting and going. And during game time I just put my foot in the ground, broke on the ball and beat him to the point."

Those of us working in the learning field should use this truth about human cognitive architecture to design our learning programs.

  1. Don't just teach content.
  2. Give them tools to help them link situations and actions.
  3. Give your learners realistic practice--that is, practice set in real-world situations.
  4. Give them feedback, then give them additional practice.
  5. Continually emphasize the noticing of situations, and the actions to be taken.
  6. Provide varied practice situations, without hints, to simulate real-world conditions.
  7. For critical situations, give additional practice to automate your learners' responses.
  8. Collect Lombardi Trophy or similar...

As a resident of New England, I have to add one more nugget of wisdom...


Go Patriots!


Sources of Football Information, Images, Videos:

  1. Reuters
  2. NBC
  3. Boston Globe
  4. New York Times
  5. http://nflbreakdowns.com/malcolm-butlers-interception-wilson-superbowl/
  6. http://www.nfl.com/videos/new-england-patriots/0ap3000000467843/Butler-and-Edelman-go-to-Disneyland


Spacing Learning Over Time — Research Report

The spacing effect is one of the most potent learning factors there is--because it helps minimize forgetting.
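
To make the idea concrete, here is a minimal sketch of an expanding review schedule. The specific intervals are arbitrary examples I chose for illustration, not prescriptions from the report:

    from datetime import date, timedelta

    # Minimal sketch of spacing repetitions over time.
    # The interval lengths are arbitrary examples, not prescriptions.
    def review_dates(first_exposure, intervals_in_days=(1, 3, 7, 21, 60)):
        """Return review dates at increasing delays after initial learning."""
        return [first_exposure + timedelta(days=d) for d in intervals_in_days]

    for review_day in review_dates(date(2006, 1, 9)):
        print(review_day)  # each repetition lands after a longer gap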

Here's a research-to-practice report on the subject, backed by over 100 research studies from scientific refereed journals, plus examples. The report was originally published in 2006, but its recommendations are still valid today.

Click to download the research-to-practice report on spacing. It's a classic!


And here's some more recent research and exploration.

Air Force Finds Interesting Results with Ability Grouping — Plus Other Research on Ability Grouping

NPR's Morning Edition produced a five-minute radio piece on the U.S. Air Force Academy's attempt at improving learning results by modifying the ability grouping of its cadets.

[Photo: Shankar Vedantam]

According to the piece, reported by Shankar Vedantam, based on research by Dartmouth researcher Bruce Sacerdote and colleagues:

  • Weaker students did better when in squadrons with stronger students (but note caveats below).
  • However, when researchers intentionally created squadrons with only the strongest and weakest students (that is, the middle students were removed), the weaker students did worse than they otherwise would have. The researchers argue that this was caused by the splintering of the squadron into groups of strong students and groups of weak students.
  • Middle students did better when they didn't have weaker and stronger students in their squadrons.
  • It appears that the middle students acted as a glue in the mixed-ability squadrons--and specifically, they helped the squadron to avoid splitting into groups.

Of course, one study should not be accepted without some skepticism. Indeed, there is a long history of research on academic ability grouping. For example, see this review article:

Schofield, J. W. (2010). International evidence on ability grouping with curriculum differentiation and the achievement gap in secondary schools. Teachers College Record, 112(5), 1492-1528.

As Schofield reports:

International research supports the conclusion that having high-ability/high-achieving schoolmates/classmates is associated with increased achievement. It also suggests that ability grouping with curriculum differentiation increases the achievement gap. For example, attending a high-tier school in a tiered system is linked with increased achievement, whereas attending a low-tier school is linked with decreased achievement, controlling for initial achievement. Furthermore, there is a stronger link between students’ social backgrounds and their achievement in educational systems with more curriculum differentiation and in those with earlier placement in differentiated educational programs as compared with others.

But she also warns:

However, numerous methodological issues remain in this research, which suggests both the need for caution in interpreting such relationships and the value of additional research on mechanisms that may account for such relationships.

In addition, social effects are probably not the only effects in play. For example, the research tells us that learners do better when they are presented with information and given instructional supports targeted specifically to their cognitive needs. This could be why the middle-ability students did better when they were grouped together.

Also interesting is that neither the NPR piece nor Schofield's abstract reports specifically on how the mixed groupings affected the stronger learners.

Indeed, other researchers have argued that gifted students should not be ignored. See, for example, the following review article:

Subotnik, R. F., Olszewski-Kubilius, P., & Worrell, F. C. (2012). A proposed direction forward for gifted education based on psychological science. Gifted Child Quarterly, 56(4), 176-188.

Here's what these authors recommend:

In spite of concerns for the future of innovation in the United States, the education research and policy communities have been generally resistant to addressing academic giftedness in research, policy, and practice. The resistance is derived from the assumption that academically gifted children will be successful no matter what educational environment they are placed in, and because their families are believed to be more highly educated and hold above-average access to human capital wealth. These arguments run counter to psychological science indicating the need for all students to be challenged in their schoolwork and that effort and appropriate educational programing, training and support are required to develop a student’s talents and abilities.

MOOCs Are Ineffective — Except for One Thing!

MOOCs don't have to suck. The 4% to 10% completion rates may be the most obvious problem, but too many MOOCs simply don't use good learning design. They don't give learners enough realistic practice, they don't set work in realistic contexts, and they don't space repetitions over time.

But after reading this article from Thomas Friedman in the New York Times, you will see that there is one thing that MOOCs do really well: they get learning content to learners.

Really, go ahead. Read the article...


Why is "Exposure" one of the Decisive Dozen learning factors?

Many people have wondered why I included "Exposure" as one of the most important learning factors. Why would exposing learners to learning content rank as so important? Friedman's article makes it clear in one example, but there are billions of learners just waiting for the advantage of learning.

I got the idea of the importance of exposing learners to valid content by noticing in many a scientific experiment that learners in the control group often improved tremendously--even though they were almost always outclassed by those who were in the treatment groups.

By formalizing Exposure as one of the top 12 learning factors, we send the message that while learning design matters, giving learners valid content probably matters more.

And yes, that last sentence is as epically important as it sounds...

It also should give us learning experts a big dose of humility...


MOOCs will get better...

Most MOOCs aren't very well designed, but over time, they'll get better.


Learners Often Use Poor Learning Strategies — From a Research Review

I just read the following research article, and found a great mini-review of some essential research.

  • Hagemans, M. G., van der Meij, H., & de Jong, T. (2013). The effects of a concept map-based support tool on simulation-based inquiry learning. Journal of Educational Psychology, 105(1), 1-24. doi:10.1037/a0029433

Experiment-Specific Findings:

The article shows that simulations—the kind that ask learners to navigate through the simulation on their own—are more beneficial when learners are supported in their simulation play. Specifically, learners given the optimal learning route did better than those supplied with a sub-optimal learning route. Concept maps helped the learners by supporting their comprehension. And learners who got feedback on the correctness of their practice attempts were motivated to correct their errors, thus providing themselves with additional practice.

Researchers’ Review of Learners’ Poor Learning Strategies

The research Hagemans, van der Meij, and de Jong did is good, but what struck me as even more relevant for you as a learning professional is their mini-review of research showing that learners are NOT very good stewards of their own learning. Here is what their mini-review said (from Hagemans, van der Meij, & de Jong, 2013, p. 2):

  • Despite the importance of planning for learning, few students engage spontaneously in planning activities (Manlove & Lazonder, 2004).  
  • Novices are especially prone to failure to engage in planning prior to their efforts to learn (Zimmerman, 2002).  
  • When students do engage in planning their learning, they often experience difficulty in adequately performing the activities involved (de Jong & Van Joolingen, 1998; Quintana et al., 2004). For example, they do not thoroughly analyze the task or problem they need to solve (Chi, Feltovich, & Glaser, 1981; Veenman, Elshout, & Meijer, 1997) and tend to act immediately (Ge & Land, 2003; Veenman et al., 1997), even when a more thorough analysis would actually help them to build a detailed plan for learning (Veenman, Elshout, & Busato, 1994).  
  • The learning goals they set are often of low quality, tending to be nonspecific and distal (Zimmerman, 1998).
  • In addition, many students fail to set up a detailed plan for learning, whereas if they do create a plan, it is often poorly constructed (Manlove et al., 2007). That is, students often plan their learning in a nonsystematic way, which may cause them to start floundering (de Jong & Van Joolingen, 1998), or they plan on the basis of what they must do next as they proceed, which leads to the creation of ad hoc plans in which they respond to the realization of a current need (Manlove & Lazonder, 2004).  
  • The lack of proper planning for learning may cause students to miss out on experiencing critical moments of inquiry, and their investigations may lack systematicity.
  • Many students also have problems with monitoring their progress, in that they have difficulty in reflecting on what has already been done (de Jong & Van Joolingen, 1998).
  • Regarding monitoring of understanding, students often do not know when they have comprehended the subject matter material adequately (Ertmer & Newby, 1996; Thiede, Anderson, & Therriault, 2003) and have difficulty recognizing breakdowns in their understanding (Ertmer & Newby, 1996).
  • If students do recognize deficits in their understanding, they have difficulty in expressing explicitly what they do not understand (Manlove & Lazonder, 2004).
  • One consequence is that students tend to overestimate their level of success, which may result in “misplaced optimism, substantial understudying, and, ultimately, low test scores” (Zimmerman, 1998, p. 9).

The research article is available by clicking here.

Final Thoughts

This research, and other research I have studied over the years, shows that we CANNOT ALWAYS TRUST THAT OUR LEARNERS WILL KNOW HOW TO LEARN. We as instructional designers have to design learning environments that support learners in learning. We need to know the kinds of learning situations where our learners are likely to succeed and those where they are likely to fail without additional scaffolding.

The research also shows, more specifically, that inquiry-based simulation environments can be powerful learning tools, but ONLY if we provide the learners with guidance and/or scaffolding that enables them to be successful. Certainly, some few may succeed without support, but most will act suboptimally.

We have a responsibility to help our learners. We can't always put it on them...