Adapting a MOOC for Research

Written by my colleague, Rachael Hodge, this article is a summary of our experience in identifying and developing research activities within the University of Warwick’s MOOC Literature and Mental Health.

The University of Warwick’s FutureLearn MOOC Literature and Mental Health: Reading for Wellbeing, which began its first presentation in February 2016, was identified as an opportunity to conduct research into the course’s subject area, ‘reading for wellbeing’ or ‘bibliotherapy’. Since 2013, a substantial body of literature has emerged in the field of MOOC-related research, with the MOOC becoming both the subject of and a vehicle for research. The research approach adopted in Literature and Mental Health was influenced by other recent research studies conducted within MOOCs, and particularly by the first presentation of Monash University’s Mindfulness for Wellbeing and Peak Performance FutureLearn MOOC, which distributed a stress survey to its learners in the first and final weeks of the course to assess the efficacy of the course’s mindfulness practices.

A number of reasons for trialling the use of this MOOC as a research tool were identified at the project’s outset. MOOCs give researchers access to large numbers of possible research participants, making MOOC research an attractive prospect, while the opportunity to gather valuable, potentially publishable data from free online courses may help to justify the time and resources expended during the production of new MOOCs. Several additional benefits of in-MOOC research were discovered during the process, including the potential for research activities to enrich the learner experience. However, a number of challenges and limitations were also encountered during the development of the study; the inevitable self-selection bias among MOOC learners, and the difficulty of establishing a control group within the MOOC activities, posed impediments to the gathering of useful, publishable data. 

Although we were aware of other MOOCs which had been used as vehicles for research, the process of adapting Literature and Mental Health for this research study was nonetheless an illuminating and instructive experience. The purpose of this paper is to reflect on that experience, and to consider the lessons learned during the process which may be useful in informing future research studies conducted via Massive Open Online Courses.

Reference
Hodge, R. (2016). Adapting a MOOC for Research: Lessons Learned from the First Presentation of Literature and Mental Health: Reading for Wellbeing. Journal of Interactive Media in Education, 2016(1), p. 19. DOI: http://doi.org/10.5334/jime.428


Research Trends in Massive Open Online Course (MOOC) Theses and Dissertations: Surfing the Tsunami Wave

The authors examined 51 theses and dissertations from 2008–2015 dealing with MOOCs, looking for the research trends in this still-young field. They found that most of the works came from the field of education, that these were predominantly qualitative studies (49%), and that in recent years research interest has shifted from cMOOCs to xMOOCs. In their introduction, the authors also map the development of MOOCs onto the Gartner Hype Cycle, which makes for an interesting visualization and leads them to conclude: “MOOCs are at the verge of Plateau of Productivity which means that there will increasingly be a diversity in MOOC applications in the future.”
Aras Bozkurt, Nilgun Ozdamar Keskin, and Inge de Waard, Open Praxis, Vol. 8, Issue 3, July–September 2016, pp. 203-221 (via Academia.edu)


Can Instructor Attractiveness Lead to Higher Smile-Sheet Ratings? More Learning? A Research Brief.

In a recent research article, Tobias Wolbring and Patrick Riordan report the results of a study looking into the effects of instructor "beauty" on college course evaluations. What they found might surprise you -- or worry you -- depending on your views on the vagaries of fairness in life.

Before I reveal the results, let me say that this is one study (two experiments), and that the findings were very weak in the sense that the effects were small.

Their first study used a large data set involving university students. Given that the data was previously collected through routine evaluation procedures, the researchers could not be sure of the quality of the actual teaching, nor of the true "beauty" of the instructors (they had to rely on online images).

The second study was a laboratory study in which they could precisely vary the instructor's level of beauty and gender while keeping the actual instructional materials consistent. Unfortunately, "the instruction" consisted of an 11-minute audio lecture taught by relatively young instructors (young adults), so it's not clear whether the results would generalize to more realistic instructional situations.

In both studies they relied on beauty as represented by facial beauty. While previous research shows that facial beauty is the primary way we rate each other on attractiveness, body beauty has also been found to have effects.

Their most compelling results:

1. They found that ratings of attractiveness are very consistent across raters. People seem to know who is attractive and who is not. This confirms the findings of many studies.

2. Instructors who are more attractive get better smile-sheet ratings. Note that the effect was very small in both experiments. The study confirms what many other research studies have found, although its results were generally weaker than those of previous studies -- probably due to the better controls utilized.

3. They found that instructors who are better looking engender less absenteeism. That is, students were more likely to show up for class when their instructor was attractive.

4. They found that the genders of the raters and instructors made no difference. The researchers hypothesized that female raters might respond differently to male and female instructors, and that male raters would do the same, but this was not found. Previous studies have shown mixed results.

5. In the second experiment, where they actually gave learners a test of what they'd learned, attractive instructors engendered higher scores on a difficult test, but not on an easy test. The researchers hypothesize that learners engage more fully when their instructors are attractive.

6. In the second experiment, they asked learners either to (a) take a test first and then evaluate the course, or (b) do the evaluation first and then take the test. Did it matter? Yes! The researchers hypothesized that highly attractive instructors would be penalized more than their less attractive colleagues for giving a hard test. This prediction was confirmed: when the difficult test came before the evaluation, better-looking instructors were rated more poorly than less attractive instructors. Not much difference was found for the easy test.

Ramifications for Learning Professionals

First, let me caveat these thoughts with the reminder that this is just one study! Second, the study's effects were relatively weak. Third, their results -- even if valid -- might not be relevant to your learners, your instructors, your organization, your situation, et cetera!

  1. If you're a trainer, instructor, teacher, professor -- get beautiful! Obviously, you can't change your bone structure or symmetry, but you can do some things to make yourself more attractive. I drink raw spinach smoothies and climb telephone poles with my bare hands to strengthen my shoulders and give me that upside-down-triangle attractiveness, while wearing the most expensive suits I can afford -- $199 at Men's Wearhouse -- all with the purpose of pushing myself above the threshold of ... I can't even say the word. You'll have to find what works for you.
  2. If you refuse to sell your soul or put in time at the gym, you can always become a behind-the-scenes instructional designer or a research translator. As Clint said, "A man's got to know his limitations."
  3. Okay, I'll be serious. We shouldn't discount attractiveness entirely. It may make a small difference. On the other hand, we have more important, more leverageable actions we can take. I like the research-based finding that we all get judged primarily on two dimensions: warmth/trust and competence. Be personable, be authentically trustworthy, and work hard to do good work.
  4. The finding from the second experiment that better looking instructors might prompt more engagement and more learning -- that I find intriguing. It may suggest, more generally, that the likability/attractiveness of our instructors or elearning narrators may be important in keeping our learners engaged. The research isn't a slam dunk, but it may be suggestive.
  5. In terms of learning measurement, the results may suggest that evaluations should come before difficult performance tests. I don't know, though, how this relates to adults in workplace learning. They might be more thankful for instructional rigor if it helps them perform better in their jobs.
  6. More research is needed!

Research Reviewed

Wolbring, T., & Riordan, P. (2016). How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching. Social Science Research, 57, 253-272.

Practice Firms — Giving People Real-World Experience

Today's New York Times has a fascinating article on the mostly European concept of practice firms. As the name implies, practice firms give people practice in doing work.

This seems to align well with the research on learning that suggests that learning in a realistic context, getting lots of retrieval practice and feedback, and many repetitions spaced over time can be the most effective way to learn. Of course, the context and practice and feedback have to be well-designed and aligned with the future work of the learner.

Interestingly, there is an organization solely devoted to the concept. EUROPEN-PEN International is the worldwide practice enterprise network. The network consists of over 7,500 Practice Enterprises in more than 40 countries. It has a Facebook page and a website.

I did a quick search to see if there was any scientific research on the use of practice firms, but I didn't uncover anything definitive. If you know of scientific research, or other rigorous evidence, let me know.

Research on Mathematics Education for U.S. First Graders

A recent research review (by Paul L. Morgan, George Farkas, and Steve Maczuga) finds that teacher-directed mathematics instruction in first grade is superior to other methods for students with "math difficulties." Specifically, routine practice and drill was more effective than the use of manipulatives, calculators, music, or movement for students with math difficulties.

For students without math difficulties, teacher-directed and student-centered approaches performed about the same.

In the words of the researchers:

In sum, teacher-directed activities were associated with greater achievement by both MD and non-MD students, and student-centered activities were associated with greater achievement only by non-MD students. Activities emphasizing manipulatives/calculators or movement/music to learn mathematics had no observed positive association with mathematics achievement.

For students without MD, more frequent use of either teacher-directed or student-centered instructional practices was associated with achievement gains. In contrast, more frequent use of manipulatives/calculator or movement/music activities was not associated with significant gains for any of the groups.

Interestingly, classes with higher proportions of students with math difficulties were actually less likely to be taught with teacher-directed methods -- the very methods that would be most helpful!

 

Will's Reflection (for both Education and Training)

These findings fit in with a substantial body of research that shows that learners who are novices in a topic area will benefit most from highly-directed instructional activities. They will NOT benefit from discovery learning, problem-based learning, and similar non-directive learning events.

See for example:

  • Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75-86.
  • Mayer, R. E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? American Psychologist, 59(1), 14-19.

As a research translator, I look for ways to make complicated research findings usable for practitioners. One model that seems to be helpful is to divide learning activities into two phases:

  1. Early in Learning (When learners are new to a topic, or the topic is very complex)
    The goal here is to help the learners UNDERSTAND the content. Here we provide lots of learning support, including repetitions, useful metaphors, worked examples, and immediate feedback.
  2. Later in Learning (When learners are experienced with a topic, or when the topic is simple)
    The goal here is to help the learners REMEMBER the content or DEEPEN their learning. To support remembering, we provide lots of retrieval practice, preferably set in realistic situations the learners will likely encounter -- where they can use what they learned. We provide delayed feedback. We space repetitions over time, varying the background context while keeping the learning nugget the same. To deepen learning, we engage contingencies, we enable learners to explore the topic space on their own, and we add additional knowledge.

What Elementary Mathematics Teachers Should Stop Doing

Elementary-school teachers should stop assuming that drill-and-practice is counterproductive. They should create lesson plans that guide their learners in understanding the concepts to be learned. They should limit the use of manipulatives, calculators, music, and movement. Ideas about "arts integration" should be pushed to the back burner. This doesn't mean that teachers should NEVER use these other methods, but they should be used to create occasional, short, and rare moments of variety. Spending hours using manipulatives, for example, is certainly harmful in comparison with more teacher-directed activities.

 

Training Maximizers

A few years ago, I created a simple model for training effectiveness based on the scientific research on learning in conjunction with some practical considerations (to make the model's recommendations leverageable for learning professionals). People keep asking me about the model, so I'm going to briefly describe it here. If you want to look at my original YouTube video about the model -- which goes into more depth -- you can view that here. You can also see me in my bald phase.

The Training Maximizers Model includes 7 requirements for ensuring our training or teaching will achieve maximum results.

  • A. Valid Credible Content
  • B. Engaging Learning Events
  • C. Support for Basic Understanding
  • D. Support for Decision-Making Competence
  • E. Support for Long-Term Remembering
  • F. Support for Application of Learning
  • G. Support for Perseverance in Learning

Here's a graphic depiction:

[Graphic: the Training Maximizers model]

Most training today is pretty good at A, B, and C but fails to provide the other supports that learning requires. This is a MAJOR PROBLEM because learners who can't make decisions (D), learners who can't remember what they've learned (E), learners who can't apply what they've learned (F), and learners who can't persevere in their own learning (G) are learners who simply haven't received leverageable benefits.

When we train or teach only to A, B, and C, we aren't really helping our learners, we aren't providing a return on the learning investment, and we haven't done enough to support our learners' future performance.

Rebooting MOOC Research

This article by Justin Reich, a Harvard researcher, is currently making the rounds. It is an appeal to MOOC research not merely to analyze participants' clicks but to take up the question of how and where learning actually takes place. That, says Justin Reich, is of course a question of perspective; but it also means paying greater attention to assessments in order to get at learning processes; it means analyzing data not just from single courses but across multiple courses; and it means developing more complex research designs that yield results transferable to other fields, topics, and courses. “Raising the bar” is his demand. Whether there is already a MOOC research scene here in Germany that needs rebooting may be doubted. But that is an opportunity, too.

“In the years since MOOCs first attracted widespread attention, new lines of research have begun, but findings from these efforts have had few implications for teaching and learning. Big datasets do not, by virtue of their size, inherently possess answers to interesting questions. …
We have terabytes of data about what students clicked and very little understanding of what changed in their heads.”

Justin Reich, Science Magazine, January 2, 2015 (via Phil Hill)

Mythical Retention Data & The Corrupted Cone

The Danger

Have you ever seen the following “research” presented to demonstrate some truth about human learning?

[Images: several widely circulated variants of the retention chart and “cone of learning” appeared here.]

Unfortunately, all of the above diagrams are evangelizing misleading information. Worse, these fabrications have been rampant over the last two or three decades—and seem to have accelerated during the age of the internet. Indeed, a Google image search for “Dale’s Cone” produces about 80% misleading information, as you can see below from a recent search.

[Image: results of a recent Google image search for “Dale’s Cone”.]
This proliferation is a truly dangerous and heinous result of incompetence, deceit, confirmatory bias, greed, and other nefarious human tendencies.

It is also hurting learners throughout the world—and it must be stopped. Each of us has a responsibility in this regard.

 

New Research

Fortunately, a group of tireless researchers—with whom I’ve had the honor of collaborating—has put a wooden stake through the dark heart of this demon. In the most recent issue of the scientific journal Educational Technology, Deepak Subramony, Michael Molenda, Anthony Betrus, and I (my contribution was small) produced four articles on the dangers of this misinformation and its genesis. After working separately over the years to debunk this bit of mythology, the four of us have come together in a joint effort to rally the troops—people like you, dedicated professionals who want to create the best outcomes for your learners.

Here are the citations for the four articles. Later, I will have a synopsis of each article.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-34.

Many thanks to Lawrence Lipsitz, the editor of Educational Technology, for his support, encouragement, and efforts in making this possible!

To get a copy of the “Special Issue” or to subscribe to Educational Technology, go to this website.

  

The Background

There are two separate memes we are debunking, which we’ve labeled (1) the mythical retention chart and (2) the corruption of Dale’s Cone of Experience. As you will see—or might have noticed in the images I previously shared—the two have often been commingled.

Here is an example of the mythical retention chart:

[Image: a typical mythical retention chart.]

Oftentimes, though, this is presented in text:

“People Remember:

  • 10 percent of what they read;
  • 20 percent of what they hear;
  • 30 percent of what they see;
  • 50 percent of what they see and hear;
  • 70 percent of what they say; and
  • 90 percent of what they do and say.”

Note that the numbers proffered are not always the same, nor are the factors alleged to spur learning. So, for example, you can see that on the graphic, people are said to remember 30 percent of what they hear, but in the text, the percentage is 20 percent. In the graphic, people remember 80 percent when they are collaborating, but in the text they remember 70 percent of what they SAY. I’ve looked at hundreds of examples, and the variety is staggering.

Most importantly, the numbers do NOT provide good guidance for learning design, as I will detail later.

Here is a photocopied image of the original Dale’s Cone:

[Image: Dale’s Cone of Experience, photocopied from the 1969 edition of Audio-visual methods in teaching.]

Edgar Dale (1900-1985) was an American educator who is best known for developing “Dale’s Cone of Experience” (the cone above) and for his work on how to incorporate audio-visual materials into the classroom learning experience. The image above was photocopied directly from his book, Audio-visual methods in teaching (from the 1969 edition).

You’ll note that Dale included no numbers in his cone. He also warned his readers not to take the cone too literally.

Unfortunately, someone somewhere decided to add the misleading numbers. Here are two more examples:

[Images: two nearly identical corrupted cones, each paired with a purported Confucius quote.]
I include these two examples to make two points. First, note how one person clearly stole from the other. Second, note how sloppy these fabricators are. They include a Confucius quote that directly contradicts what the numbers say: on the left side of the visuals, Confucius is purported to say that hearing is better than seeing, while the numbers on the right of the visuals say that seeing is better than hearing. And, by the way, Confucius did not actually say what he is alleged to have said! What seems clear from looking at these and other examples is that people don’t do their due diligence—their ends seem to justify their means—and they are damn sloppy, suggesting that they don’t think their audiences will examine their arguments closely.

By the way, these deceptions are not restricted to the English-speaking world:

[Images: non-English versions of the corrupted cone.]

Intro to the Special Issue of Educational Technology

As Deepak Subramony and Michael Molenda say in the introduction to the Special Issue of Educational Technology, the four articles presented seek to provide a “comprehensive and complete analysis of the issues surrounding these tortured constructs.” They also provide “extensive supporting material necessary to present a comprehensive refutation of the aforementioned attempts to corrupt Dale’s original model.”

In the concluding notes to the introduction, Subramony and Molenda leave us with a somewhat dystopian view of information trajectory in the internet age. “In today’s Information Age it is immensely difficult, if not practically impossible, to contain the spread of bad ideas within cyberspace. As we speak, the corrupted cone and its attendant “data” are akin to a living organism—a virtual 21st century plague—that continues to spread and mutate all over the World Wide Web, most recently to China. It therefore seems logical—and responsible—on our part that we would ourselves endeavor to continue our efforts to combat this vexing misinformation on the Web as well.”

Later, I will provide a section on what we can all do to help debunk the myths and inaccuracies embedded in these fabrications.

Now, I provide a synopsis of each article in the Special Issue.

 
Synopsis of First Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Mythical Retention Chart and the Corruption of Dale’s Cone of Experience. Educational Technology, Nov/Dec 2014, 54(6), 6-16.

The authors point out that, “Learners—both face-to-face and distant—in classrooms, training centers, or homes are being subjected to lessons designed according to principles that are both unreliable and invalid. In any profession this would be called malpractice.” (p. 6).

The article makes four claims.

Claim 1: The Data in the Retention Chart is Not Credible

First, there is no body of research that supports the data presented in the many forms of the retention chart. That is, there is no scientific data—or other data—that supports the claim that people remember some fixed percentage of what they learned. Interestingly, although people have cited research from 1943, 1947, 1963, and 1967 as the defining source of the data, the numbers—10%, 20%, 30%, and so on—actually appeared as early as 1914 and 1922, when they were presented as information long known. A few years ago, I compiled research on actual percentages of remembering. You can access it here.

Second, the fact that the numbers are all divisible by 5 or 10 makes it obvious to anyone who has done research that they were not derived from actual studies; human variability precludes such round numbers. In addition, as Dwyer pointed out as early as 1978, there is the question of how the data were derived—what were learners actually asked to do? Note, for example, that the retention chart always claims to measure—among other things—how much people remember by reading, hearing, and seeing. How people could read without seeing is an obvious confusion. And what are people doing when they only see but don’t read or listen? Also problematic is how you’d create a fair test to compare situations where learners listened to something with situations where they watched something. Are learners given different tests (one for seeing and one for listening), which seems to invite bias, or the same test, in which case one group would be at a disadvantage because they aren’t taking the test in the same context in which they learned?

Third, the data portrayed don’t relate to any other research in the scientific literature on learning. As the authors write, “There is within educational psychology a voluminous literature on remembering and learning from various mediated experiences. Nowhere in this literature is there any summary of findings that remotely resembles the fictitious retention chart.” (p. 8)

Finally, as the authors say, “Making sense of the retention chart is made nearly impossible by the varying presentations of the data, the numbers in the chart being a moving target, altered by the users to fit their individual biases about desirable training methods.” (p. 9).

Claim 2: Dale’s Cone is Misused.

Dale’s Cone of Experience is a visual depiction that portrays more concrete learning experiences at the bottom of the cone and more abstract experiences at the top of the cone. As the authors write, “The cone shape was meant to convey the gradual loss of sensory information” (p. 9) in the learning experiences as one moved from lower to higher levels on the cone.

“The root of all the perversions of the Cone is the assumption that the Cone is meant to be a prescriptive guide. Dale definitely intended the Cone to be descriptive—a classification system, not a road map for lesson planning.” (p. 10)

Claim 3: Combining the Retention Chart Data with Dale’s Cone

“The mythical retention data and the concrete-to-abstract cone evolved separately throughout the 1900’s, as illustrated in [the fourth article] ‘Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone.’ At some point, probably around 1970, some errant soul—or perhaps more than one person—had the regrettable idea of overlaying the dubious retention data on top of Dale’s Cone of Experience.” (p. 11). We call this concoction the corrupted cone.

“What we do know is that over the succeeding years [after the original corruption] the corrupted cone spread widely from one source to another, not in scholarly publications—where someone might have asked hard questions about sources—but in ephemeral materials, such as handouts and slides used in teaching or manuals used in military or corporate training.” (p. 11-12).

“With the growth of the Internet, the World Wide Web, after 1993 this attractive nuisance spread rapidly, even virally. Imagine the retention data as a rapidly mutating virus and Dale’s Cone as a host; then imagine the World Wide Web as a bathhouse. Imagine the variety of mutations and their resistance to antiviral treatment. A Google Search in 2014 revealed 11,000 hits for ‘Dale’s Cone,’ 14,500 for ‘Cone of Learning,’ and 176,000 for ‘Cone of Experience.’ And virtually all of them are corrupted or fallacious representations of the original Dale’s cone. It just might be the most widespread pedagogical myth in the history of Western civilization!” (p. 11).

Claim 4: Murky Provenance

People who present the fallacious retention data and/or the corrupted cone often cite other sources that might seem authoritative. Dozens of attributions have been made over the years, but several sources appear over and over, including the following:

  • Edgar Dale
  • Wiman & Meierhenry
  • Bruce Nyland
  • Various oil companies (Mobil, Standard Oil, Socony-Vacuum Oil, etc.)
  • NTL Institute
  • William Glasser
  • British Audio-Visual Society
  • Chi, Bassok, Lewis, Reimann, & Glaser (1989).

Unfortunately, none of these sources is a real source; the attributions are false.

Conclusion:

“The retention chart cannot be supported in terms of scientific validity or logical interpretability. The Cone of Experience, created by Edgar Dale in 1946, makes no claim of scientific grounding, and its utility as a prescriptive theory is thoroughly unjustified.” (p. 15)

“No qualified scholar would endorse the use of this mish-mash as a guide to either research or design of learning environments. Nevertheless, [the corrupted cone] obviously has an allure that surpasses logical considerations. Clearly, it says something that many people want to hear. It reduces the complexity of media and method selection to a simple and easy to remember formula. It can thus be used to support a bias toward whatever learning methodology might be in vogue. Users seem to employ it as pseudo-scientific justification for their own preferences about media and methods.” (p. 15)

 
Synopsis of Second Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Previous Attempts to Debunk the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 17-21.

The authors point to earlier attempts to debunk the mythical retention data and the corrupted cone. “Critics have been attempting to debunk the mythical retention chart at least since 1971. The earliest critics, David Curl and Frank Dwyer, were addressing just the retention data. Beginning around 2002, a new generation of critics has taken on the illegitimate combination of the retention chart and Edgar Dale’s Cone of Experience – the corrupted cone.” (p. 17).

Interestingly, we only found two people who attempted to debunk the retention “data” before 2000. This could be because we failed to find other examples that existed, or it might just be because there weren’t that many examples of people sharing the bad information.

Starting in about 2002, we noticed many sources of refutation. I suspect this has to do with two things. First, it is easier to search human activity quickly in the internet age, which gives an advantage in seeking examples. Second, the internet also makes it easier for people to post erroneous information and share it with a universal audience.

The bottom line is that there have been a handful of people—in addition to the four authors—who have attempted to debunk the bogus information.

 
Synopsis of Third Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). The Good, the Bad, and the Ugly: A Bibliographic Essay on the Corrupted Cone. Educational Technology, Nov/Dec 2014, 54(6), 22-31.

The authors of the article provide a series of brief synopses of the major players who have been cited as sources of the bogus data and corrupted visualizations. The goal here is to give you—the reader—additional information so you can make your own assessment of the credibility of the research sources provided.

Most people—I suspect—will skim through this article with a modest twinge of voyeuristic pleasure. I did.

 
Synopsis of Fourth Article:

Citation:
Subramony, D., Molenda, M., Betrus, A., and Thalheimer, W. (2014). Timeline of the Mythical Retention Chart and Corrupted Dale’s Cone. Educational Technology, Nov/Dec 2014, 54(6), 31-34.

The authors present a decade-by-decade outline of examples of the reporting of the bogus information—From 1900 to the 2000s. The outline represents great detective work by my co-authors, who have spent years and years searching databases, reading articles, and reaching out to individuals and institutions in search of the genesis and rebirth of the bogus information. I’m in continual awe of their exhaustive efforts!

The timeline includes scholarly work such as the “Journal of Education,” numerous books, academic courses, corporate training, government publications, military guidelines, etc.

The breadth and depth of examples demonstrates clearly that no area of the learning profession has been immune to the disease of poor information.

 
Synopsis of the Exhibits:

The authors catalog 16 different examples of the visuals that have been used to convey the mythical retention data and/or the corrupted cone. They also present about 25 text examples.

The visual examples are black-and-white canonical versions and, given these limitations, can’t convey the wild variety of examples available now on the internet. Still, they show in their variety just how often people have modified Dale’s Cone to support their own objectives.

 
My Conclusions, Warnings, and Recommendations

The four articles in the special issue of Educational Technology represent a watershed moment in the history of misinformation in the learning profession. The articles utilize two examples—the mythical retention data (“People remember 10%, 20%, 30%...”) and the numerical corruptions of Dale’s Cone—and demonstrate the following:

  1. There are definitively bogus data sources floating around the learning profession.
  2. These bogus information sources damage the effectiveness of learning and hurt learners.
  3. Authors of these bogus examples do not do their due diligence in confirming the validity of their research sources. They blithely reproduce sources or augment them before conveying them to others.
  4. Consumers of these bogus information sources do not do their due diligence in being skeptical, in expecting and demanding validated scientific information, in pushing back against those who convey weak information.
  5. Those who stand up publicly to debunk such misinformation—though nobly fighting a good fight—do not seem to be winning the war against this misinformation.
  6. More must be done if we are to limit the damage.

Some of you may chafe at my tone here, and if I had more time I might have been able to be more careful in my wording. But still, this stuff matters! Moreover, these articles focus on only two examples of bogus memes in the learning field. There are many more! Learning styles, anyone?

Here is what you can do to help:

  1. Be skeptical.
  2. When conveying or consuming research-based information, check the actual source. Does it say what it is purported to say? Is it a scientifically-validated source? Are there corroborating sources?
  3. Gently—perhaps privately—let conveyors of bogus information know that they are conveying bogus information. Show them your sources so they can investigate for themselves.
  4. When you catch someone conveying bogus information, make note that they may be the kind of person who is lazy or corrupt in the information they convey or use in their decision making.
  5. Punish, sanction, or reprimand those in your sphere of influence who convey bogus information. Be fair and don’t be an ass about it.
  6. Make or take opportunities to convey warnings about the bogus information.
  7. Seek out scientifically-validated information and the people and institutions who tend to convey this information.
  8. Document more examples.

To this end, Anthony Betrus—on behalf of the four authors—has established www.coneofexperience.com. The purpose of this website is to provide a place for further exploration of the issues raised in the four articles. It provides the following:

  • A series of timelines
  • Links to other debunking attempts
  • A place for people to share stories about their experiences with the bogus data and visuals

The learning industry also has responsibilities.

  1. Educational institutions must ensure that validated information is more likely to be conveyed to their students, within the bounds of academic freedom, of course.
  2. Educational institutions must teach their students how to be good consumers of “research,” “data,” and information (more generally).
  3. Trade organizations must provide better introductory education for their members; more myth-busting articles, blog posts, videos, etc.; and push a stronger evidence-based-practice agenda.
  4. Researchers have to partner with research translators more often to get research-based information to real-world practitioners.


The dissemination of research in online learning: a lesson from the EDEN Research Workshop

Tony Bates laments that while there is now a growing body of valid, empirically grounded research findings on online learning, these findings are not reaching practitioners. In short: “Houston, we have a problem: no-one reads our research”. Among the reasons he identifies: the complexity of “learning” as a research field, a lack of attention to online learning, and journals with small circulations. I would add that existing findings and experience are often simply ignored for economic reasons. Tony Bates doesn’t go that far. But he leaves himself and the community with this admonition:

“I can suggest a number of ways in which research dissemination can be done, but what is needed is a conversation about
(a) how best to identify the key research findings on online learning around which most experienced practitioners and researchers can agree
(b) the best means to get these messages out to the various stakeholders.”

Tony Bates, e-learning and distance education resources, November 5, 2014