What Do You Do With Your Evaluation Data?

Donald Kirkpatrick created the four-level model for training evaluation, which most organisations claim to cherish. For those unfamiliar, the four levels are as follows.

  1. Reaction – this answers the question: what did the learners think about the training? We measure reaction through surveys conducted towards the end of training (sometimes called smile sheets).
  2. Learning – this answers the question: what did the learners learn during or immediately following the instruction? We measure learning most often through a quiz or a skills demonstration.
  3. Behaviour – this answers the question: did the learners implement their new knowledge or skills back on the job?
  4. Results – this answers the question: what impact did the training have on the organisation? We measure results most often with financial reports. However, results can also be things like customer satisfaction.

In my last full-time job (before I became a freelance designer/developer), the facilitator or designer/developer would review his or her level 1 evaluations and retain this data for their semi-annual review. Occasionally the team manager would look at them, but more often than not, the team administrator would stuff them in a file cabinet, never to be seen again.

As a designer, I would look at the occasional results from our level 2 evaluation reports. Unfortunately, our LMS wasn’t sophisticated enough to tell me which questions were proving difficult for my students. Had I known those results, I would have looked more closely at, first, the course content behind those problem questions and, second, the questions themselves. Was each question written in a way that made it difficult for students to answer correctly?

I’m afraid to say that in my previous organisation we didn’t perform any level 3 or level 4 evaluations at all. There was just no demand for this information and very little time to conduct the research needed to get these results. Instead, our executive was more concerned with completion reports.

When I started working alongside Adobe, they granted me a complimentary licence for Adobe Captivate Prime for a period. I was impressed with the simple yet effective level 3 evaluation tools built into the LMS. Each time an employee completes online training in Adobe Captivate Prime, the employee’s manager receives a notification a little later asking them to evaluate the employee’s on-the-job performance. Level 1 and 2 evaluations are great, but what matters are behaviour and results. If you can combine the level 3 results provided by this LMS with your company’s financial reports, you could say without too much uncertainty whether your company’s learning strategy is effective.

Shortly after I trialled Adobe Captivate Prime, I created the following video. It’s a couple of years old now, but I think it’s still an accurate assessment of Adobe’s LMS product and how effective your learning can be with it.

When PowerPoint goes bad

What are your pet peeves about using PowerPoint? Is it the tool itself or how people use it?

I use PowerPoint and think it is a good way to engage students and staff; it can spur enjoyment, engagement and interest in your subject. But that’s more about how the tool is used than about the tool itself. So, here are some observations I’ve made over the years about PowerPoint, and how people use it ‘badly’:

  • Font – Inconsistent use of fonts across the slide deck, or even on the same slide. Using fonts that really don’t work on screen (like Times New Roman), or using Comic Sans. Please. Don’t.
  • Images – So you found Google Images or another such image search. You’ve copied the image to your slide and it looks good. It doesn’t. That small image might look OK on your screen, but test it in a classroom or lecture theatre: stretched that far, it’s so pixelated it’s almost unrecognisable.
  • Words – Writing your whole lesson in PowerPoint and spending half the lesson with your back to the class so you can read from the projector screen. The same goes if you stand behind the lectern PC and read off that screen instead.
  • Bullet points – PowerPoint makes it too easy to use them, but that doesn’t mean you should (yes, I can see the irony as I’m using them here too).
  • Colour / Templates – Just because you can use lots of colour or the standard PowerPoint templates doesn’t mean you should. Keep it simple so your key message shines through – more colour and clutter on the slide will only distract from or hide your content.
  • Charts / Tables – Do you really need that chart or table that shows 50 different points of information?
  • Animation – I’ve never found animated stars or arrows to help the presentation. If the slide is structured properly you shouldn’t need them.
  • Clipart – Please. Don’t.
  • Volume – You may feel that your one-hour presentation needs 100 slides. I’m pretty sure your audience/class doesn’t.

If in doubt about any aspect of your use of PowerPoint, the best time to find out how you’re doing is now, while you’ve time to go and check it all out, not halfway through the most important presentation of your career. Would you rather have a slightly awkward conversation in private now, or suddenly realise the conference venue has emptied for lunch 45 minutes early, just after you start your 16th of 135 slides?

Go find your friendly learning technologist (yes, we are friendly!), ask us to look over it and tell you what we think. We will be honest and critical but, most importantly, constructive. We will offer support and suggestions, we will give you pointers on how to cut the information on the slides (and how to deliver it too, if you want), and we will be there to help you feel comfortable creating slide decks in future and delivering them. Every learning technologist I’ve ever met will do this, without question and without judgement; we’re just happy we can offer our expertise and make your job easier (and more successful).

There are plenty of online tutorials and help websites if you want to find out yourself about using PowerPoint ‘well’. Try sites like this and this and this.

If in doubt this video – Life after death by PowerPoint – will help you see the error of your ways.

Image source: EU PVSEC (CC BY-NC-ND 2.0)

Reading: Lurkers as invisible learners

I’ve always been annoyed at being called a ‘lurker’; it’s a term that has a different meaning for me when talking about the engagement, or not, of students in an online class – read my post ‘Listener or Lurker?’ from 2013. In this instance the paper ‘Learners on the Periphery: Lurkers as Invisible Learners’, by Sarah Honeychurch and colleagues, defines a ‘lurker’ or ‘silent learner’ or ‘legitimate peripheral participant’ as:

“… hard to track in a course because of their near invisibility. We decided to address this issue and to examine the perceptions that lurkers have of their behaviour by looking at one specific online learning course: CLMOOC. In order to do this, we used a mixed methods approach and collected our data via social network analysis, online questionnaires, and observations, including definitions from the lurkers of what they thought lurking was … [our] research findings revealed that lurking is a complex behaviour, or set of behaviours, and there isn’t one sole reason why lurkers act the ways that they do in their respective communities. We concluded that for a more participatory community the more active, experienced or visible community members could develop strategies to encourage lurkers to become more active and to make the journey from the periphery to the core of the community.”

I’m far more comfortable with the terms used here, and with the reasons why students don’t engage, perhaps, as we’d like them to, or indeed in the way we designed the course. We need to accept and address that not everyone taking online learning, whether it’s a free MOOC, a paid-for CPD course or a fully online degree, wants to be social, vocal, or indeed visible in the online environment. We can provide the base materials and ask the students to go off and read around the subjects, and we can offer opportunities to engage and ‘test’ themselves on different types of course activities. Usually, the only way we know the students are engaging with the subject and materials is if we assign marks or grades to the activities, especially if those marks carry weight in the course’s final grade.

Reference

Honeychurch, S., Bozkurt, A., Singh, L., and Koutropoulos, A. (2017). Learners on the Periphery: Lurkers as Invisible Learners. European Journal of Open, Distance and E-Learning. [online] Available at: http://www.eurodl.org/?p=current&sp=full&article=752 [Accessed 21 Jun. 2017].

With what shall I remember the future?

I read an article this morning from Martin King, which got me thinking.

I feel quite reflective these days, so Martin’s question resonated with me:

“I wonder what the world will be like 50 years from now – what will life be like in 2067! I’d like to record what I think life will be like in 2067 but how should I do this?”

Reflecting on my photographic history: I got something very like the Kodak 500 for my 7th or 8th birthday (late 1980s). A film cartridge was something like £2-5, developing slightly more, so not something I wasted shots on. A film of 24 photos would usually last a year, or a holiday/event. At the time my dad had a Kodak SLR and had his photographs developed as slides, and we used to have family slideshows. Over the years I got a bigger and better camera and, in time for my wedding, got a more up-to-date camera, a Canon IXUS II using the APS film type. This was great: the ability to change the photo size (classic, HDTV and panoramic) and a reasonable zoom, for the time.

Again, the stumbling block here was the cost of the film and developing, but digital was already on the way. At this time the cost of digital was higher than film-based cameras, and the availability of good-quality home printing or online services was extremely limited.

Fast forward to spring 2017 and I’ve progressed through a couple of digital cameras and now have a good quality digital camera in the shape of a Sony HX90V, and my phone (iPhone 6S+). I back all photos and video up to Dropbox and because of the size of my collection, I pay for the larger 1TB storage. 

But of course, this isn’t ‘remembering’ as such; it’s collecting or collating the past in photographic form. How will I remember the future? I was impressed with the concept of Google Glass, but the technology seems to have lost its way – all we have to show for it now is the Snapchat equivalent, Spectacles. We’ve lost the innovation around augmented in-time content from the Google offering and developed it into something quite immature and superficial. Yes, we can take and share photos instantly (but not on Instagram, for me at any rate), but we should have, and expect, so much more. Are developments in augmented reality/AR going to shape our future? If ‘Ready Player One’ (a book I’ve just started reading) is any indication, it’ll be a smartphone killer if it has the same catch-all purpose (and financial incentive from the likes of the OASIS developer James Halliday in the book).

Wearable technology will soon become more personal and ‘invasive’, actually part of us, and us of it. We already have smart watches and rings and fitness trackers. We even have the promise (?) of connected contact lenses (again from Google) for diabetes care. So when will memories be recordable and transferable? Will advances in medicine enable the fabled Star Trek Tricorder, a non-invasive medical instrument? That is the future I am waiting for. Maybe not enthusiastically, but certainly waiting for.

Interesting further reading, if you want to:

Image source: James Lee (CC BY 2.0)

Dear Twitter. It’s not me, it’s you

Here’s a confession … I’m not as enamoured of Twitter as I used to be. Unlike a traditional break-up argument (is this the case? I don’t know) where one party says to the other “it isn’t you, it’s me”, I am most definitely saying “it’s not me, it’s you [Twitter]”.

Twitter, at its core, is something that merely reflects us, either individually or culturally. It’s a free tool and subject to very few rules and regulations. And I don’t like what I see there these days. A year ago, I wouldn’t have thought I would be in a position where I would be called, or call myself, anything other than an Avid Twitter User (ATU), but today I find myself a Reluctant Twitter User (RTU). I still use Twitter because I have made some amazing friends and contacts there, and I have some fabulous conversations and networking, and the like. I’ve had ideas, shared them, allowed them to grow, and collected and collated articles and books, all from Twitter. And I want to continue that. For the most part my use of Twitter hasn’t changed in the last year. But the way other people use Twitter has. Let me explain.

I have never used the ‘trending’ or ‘moments’ features of Twitter. I’m not interested in the latest celebrity news, I don’t care what who said to whom, or which talentless so-called celebrity is on the cover of some over-priced glam-mag, or whatever they’re called. And don’t get me started on the ads … all I’ve learned from Twitter ads is that interacting with them (either blocking the accounts or clicking the ‘dismiss’ option) just means you get more. The last time I tried dismissing or blocking the ads I ended up with an ad every 5th or 6th tweet in the iOS app. Now I ignore them, just gloss over them, and I get far, far fewer! Annoying, oh yes, but fewer of them.

No, these are mere annoyances. What is causing me to think twice about Twitter is, as I said earlier, the way it reflects ‘us’ and how others are using it. In the last year the world has changed; it’s quite difficult not to have noticed. For my UK and European friends, it’s been Brexit. For the US and, frankly, the rest of the world, it’s Trump. My Twitter feed is now full of political commentary and all sorts of negative content that wasn’t there before. Don’t get me wrong, and I’m not making a political statement here: the world feels like it’s on the edge of a very precarious precipice, and I feel like we’re toppling into an abyss from which we may never recover. But that’s not the Twitter I want, or rather not what I look to Twitter for … this is why I ignore the ‘trending’ and ‘moments’ features; they don’t represent the Twitter (and my network) I want.

I admire those who are vocal and active in bringing the ‘new world’ to our attention and in taking the elite few to task on behalf of the masses who are not as able or represented (freedom of the press is powerful and ultimately the only thing capable of bringing balance to current affairs, by holding those in power to account for their actions), but I want to read and hear about it when I choose, not in the place I go to learn about my work, my network, my interests and passions, etc. Twitter has always been, for me, about learning, learning technology, etc., because those I choose to interact with and follow are also tweeting about that. The world has changed, and all of us with it.

So, here’s what I need from Twitter, in this new world – I don’t want my Twitter timeline/stream to be controlled by algorithms, but I do want more control (note: I want the control, not for it to be done for me) over the kind of tweets that fill my timeline. If the 1,300 or so people I follow on Twitter want to share and discuss current affairs and Brexit and the like, then I am happy for them and don’t want to stop them, or unfollow them either. I just want some way to filter those out, until I want to read them. Twitter is acting against the rise (and rise) of trolls and the nasty side of the internet (some say too late).

Some might say I shouldn’t blame Twitter – it’s merely holding the mirror up to reflect society as it changes, and it’s that reflection I don’t like – but Twitter has changed: not just how it’s being used but also how it’s allowing itself to be used. Twitter, I believe, has a responsibility to balance how it is used. An analogy: we don’t blame a car manufacturer for the people its drivers kill in accidents, but we do hold it responsible for falsely or misleadingly advertising features or safety the cars don’t have, as well as for the safety features they ought to have (so your car can go 200mph … how good are the brakes? Good enough, or the best they can possibly be?). So, Twitter needs to hold itself to account and deal with trolls, deal with the abuse of the verified icon, deal with the abuse of the global audience every tweet can have (whether it’s from someone with 3 followers or 3,000,000 followers), deal with (deliberate) misinformation from those in a position to affect so many, etc. Twitter has a responsibility. I don’t know how it can do any of this, but hiding from it or ignoring it isn’t going to make it go away. Inaction on these problems is, by association, the same as allowing them to happen – almost to the level of making it approved behaviour, almost encouraging it.

Am I breaking up with Twitter? No. Or rather, not yet. But I am very conscious of trying hard to separate the wheat from the chaff. Oh yes, Facebook. Don’t get me started on Facebook …

Image source: “Twitter” by Pete Simon (CC BY 2.0)

Jeopardy Jedi: return of the branching menu monster

In a previous post on the Adobe Help Forums, I outlined a challenge that I was facing by trying to execute a branching menu interface which would allow a user to choose from 3 paths each containing a related question pool. In the spirit of “flexible learning” the general idea was that none of the paths were meant to be sequential, i.e. the learner was not expected to complete set 1 before attempting set 2 or set 3. What that meant though, was that a path should only be available to a learner if he had not already visited it and completed the related question set.

Using question slides, I also intended to provide unique feedback to the learner based on his performance at the end of each path. Captivate, as I soon learnt, would by default view my output as a single quiz with a single system-generated results slide at the end. I created customized “checkpoint” results slides at the end of each path so that the learner could have some feedback on his performance regardless of whether all 3 groups had been completed.

Just when everything looked set – (i) my user variables were running accurately and generating the right calculations, (ii) states were changing as they ought to and (iii) all other cosmetic details were in place – I noticed that the caption at the bottom of the question slide on the final set of questions (whichever set was the last to be completed) kept reading “Question 4 of 8”, when truly this could not be the case. This caused the final results slide to return null values as well, since the real values from the second set were never being stored or recorded.

I immediately reached out to Lilybiri, Ron Ward and Allen Partridge, as I am teaching myself Captivate. I experimented with the quiz scope. I experimented with “branch aware” vs. not “branch aware”, but alas, it didn’t quite fit what I thought my finished product should do. For example, I included the menu in the quiz scope since I had users jumping back and forth there (remember the 3 paths?), but instead this increased the ‘perceived’ number of questions in the output: Question 4 of 16. Though it appeared as a grey, barely visible caption that no learner would probably take seriously, I was annoyed and took it seriously because it was “not what I wanted”. In the interest of time, and at my wit’s end, I opted to give the learner a linear experience and finished 2016 feeling defeated.

The Study Tour Project


In early 2017, I developed a content-planning template for the creation of a Jeopardy game. I had briefly glimpsed the learning interaction available in the Captivate software and initially thought that it would suffice for what I wanted. However, from the point of view of an artist, the published interaction was not at all engaging. I would have to design my own thing. Unfortunately, the branching menu conundrum was returning – perhaps in a less complex way, but nevertheless it was re-surfacing and I hadn’t worked out a solution for it. This case study is about how I did just that. I would like to share my process with other newcomers like myself.

The Jeopardy Project

Project details: Length 41.5 secs (1,245 frames); Resolution 1024 x 627; Slides: 13; Responsive project; Size: 4.3 MB

I will describe the lessons learned based on individual slide designs and the reasoning behind each in the Jeopardy Game project. The slide categories in this case study are: (i) Title Slide, (ii) Instructions Slide, (iii) Menu Slide, (iv) Question Slide and (v) Results Slide.

Personal Planning Tips:

Tip #1. I usually refer to the eLearning Brothers website by default. One of the bonuses of working in Captivate is that you are automatically linked to their much-coveted assets library. The cut-outs are hands-down every elearning designer’s joy. Unfortunately, the game templates aren’t quite as attractive, at least to me. I didn’t like any ready-made option that I saw, and the Game Show Quiz Challenge wasn’t even working as it should. This proved useful in showing me what NOT to do for my particular needs.

Tip #2. I put aside the artist in me – at least at the very start – and became an amateur logician. It is always more important to know that it will work right, i.e. the logical flow, the definition of user variables, the selection of system variables, and the writing and execution of advanced actions: shared, single or conditional. There was no need to build out the entire thing as some creative masterpiece; I focused only on the baseline structure. What were the critical/parent actions? I began with those. Which of these have children or will be duplicated/repeated? How often? I used some basic shapes and buttons and ensured that everything was operating as it should. The idea is to embellish with transitions, animation, sounds, etc. afterwards.

Tip #3. I have a habit of designing my graphics (in as much detail as possible) in an external programme and simply importing them as static background graphics into Captivate. I have found that this reduces the number of objects on my stage that add only cosmetic value and nothing else. That was the approach used for the look and feel of the Jeopardy game.

Title Slide

As a launch slide, its purpose is to gain attention (Gagne, 1965). This can be achieved by adding music, transitions, animations, etc. I included an exit button as an option for the learner. Adding audio to the timeline tends to extend the frame count. In the past, I have been advised by Lilybiri to keep this to a minimum, so instead I used the On Enter option in the Slide Properties to “Play Audio”, with the “continue movie at the end of audio” option selected. When it was not selected, the published project appeared white and nothing happened. Additionally, I discovered that for most, if not all, of my buttons I would allow users only a single attempt. What that meant, however, is that if an advanced action was linked to a single-attempt button, I would have to specify the same advanced action script again for the “on last attempt” dialogue.

Instructions Slide

Conveying expectations to learners/users is good instructional design practice. Even if users only glance over instructions, they should be there as a precaution. While I’m sure everyone gets the idea of Jeopardy, I wanted to briefly indicate how performance would be assessed.

I have also used the instructions slide to achieve something that I had never done before in my previous projects. I executed an advanced action on entering that has absolutely nothing to do with the current slide! The advanced action Start_Game instead sets up the next slide, by resetting any and all variables or changed states to their default or normal values. By handling the reset one slide ahead, I was free to write another advanced action, and perhaps a more pertinent one, for the coming slide. This was a breakthrough for me, as in the past I often tried to do too much on a single slide with a single advanced action at run-time.
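Captivate advanced actions are assembled in a dialog rather than written as code, but for fellow newcomers the reset-one-slide-ahead idea may be easier to see spelled out. Below is a rough Python sketch of that logic only; everything here (the dictionary stand-ins for slide state, the variable names, which anticipate the menu slide described later) is an illustrative assumption, not Captivate's actual API, except the action name Start_Game.

```python
# Default values the menu slide expects to find. The Q1..Q9 names follow the
# post's user variables (0 = question not yet attempted); the dicts are
# hypothetical stand-ins for Captivate's slide state.
DEFAULTS = {"Q%d" % i: 0 for i in range(1, 10)}

def start_game(q_vars, buttons):
    """Rough equivalent of the Start_Game advanced action, executed On Enter
    of the *instructions* slide: reset the NEXT slide's variables and button
    states to their defaults, leaving the menu slide's own On Enter action
    free to do more pertinent work."""
    q_vars.update(DEFAULTS)
    for btn in buttons.values():
        btn["state"] = "Normal"
        btn["enabled"] = True

# Example: a replay where question 2 was previously attempted.
q_vars = {"Q%d" % i: 0 for i in range(1, 10)}
q_vars["Q2"] = 1
buttons = {"Question%d_Button" % i: {"state": "Attempted", "enabled": False}
           for i in range(1, 3)}
start_game(q_vars, buttons)
```

The point of the pattern is simply that the reset happens one slide early, so each slide's On Enter action has a single job.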

Menu Slide

There are a couple of design tricks here. I used shapes which I turned into buttons and positioned against the background graphic. Because my graphic used different-coloured crowns as visual cues for the 3 levels/categories of questions, I had to reduce the opacity of the shape to 0%. At first when I did this, the four customized states that I had assigned to each button also automatically changed to 0% – causing my heart to sink. However, I later realized that I could customize the states individually and leave the normal state at 0%. This was neat – and is a lot like the “hidden” option in Articulate Storyline.

There are 9 hidden user variables on the menu slide. These count, or check, whether a learner/user/player has previously accessed one of the questions from the menu. These variables are important because this is a branching menu where the learner is free to choose which Jeopardy question he wants to attempt. I don’t want him repeating any questions. I don’t just want them disabled; I want to give him an indication that he has already tried, and that, right or wrong, that question is now no longer available to him. Isn’t that how Jeopardy works?

Using those variables helped to set up a conditional action called “Menu_Checks”, which I executed On Enter. What it does is check Q1 to Q9 using “Any of the conditions true”: if, say, Q1 is equal to 1, it changes the state of Question1_Button to Attempted and disables Question1_Button, and so on. As a non-programmer, I don’t profess to know much about variables except what I have gleaned from Lilybiri’s tutorials on Adobe – I am sure there are other ways to do this more efficiently.

I used the system variable cpQuizInfoPointsScored to display the Game Participant’s earnings at the start and during the game.
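For readers who, like me, think more easily in code than in dialogs, the menu slide's On Enter behaviour might be sketched like this in Python. The Q variables, button names and cpQuizInfoPointsScored follow the post (cpQuizInfoPointsScored is a real Captivate system variable); the dictionary stand-ins for slide state are my own illustrative assumptions.

```python
# Hypothetical stand-ins for Captivate's slide state: user variables Q1..Q9
# (1 = question already attempted) and the nine menu buttons.
q_vars = {"Q%d" % i: 0 for i in range(1, 10)}
buttons = {"Question%d_Button" % i: {"state": "Normal", "enabled": True}
           for i in range(1, 10)}
cpQuizInfoPointsScored = 0  # system variable: the player's earnings so far

def menu_checks(q_vars, buttons):
    """Rough equivalent of the 'Menu_Checks' conditional action, run On Enter:
    for every question already attempted, show its Attempted state and
    disable the button so the learner cannot repeat it."""
    for i in range(1, 10):
        if q_vars["Q%d" % i] == 1:          # "Any of the conditions true"
            btn = buttons["Question%d_Button" % i]
            btn["state"] = "Attempted"
            btn["enabled"] = False

# Example: the learner has attempted question 3 (its slide set Q3 = 1
# On Enter), then jumps back to the menu.
q_vars["Q3"] = 1
menu_checks(q_vars, buttons)
# Question3_Button is now disabled and shows "Attempted";
# the other eight buttons remain available.
```

In Captivate itself this is nine conditions in one dialog rather than a loop, but the effect on the menu is the same.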

Question Slide

As you may guess, for the branching menu to work accurately I had to ensure that the Q1 through Q9 variables were assigned the value of 1 On Enter for each respective question slide. This is perhaps the subtle difference between this Jeopardy project and the Study Tour project: since here a single slide was assigned to a single variable, once it was counted, it triggered the right change.

Question slides, in addition to having slide properties where advanced actions can be executed, also have quiz properties where more advanced actions can be added. Each question in the Jeopardy Game has a conditional action specific to that question. The conditional action that I have used takes advantage of the system variable cpQuizInfoAnswerChoice. I disabled shuffling of answers for this to work appropriately. Basically, the command checks for the right answer (designated by the literal A/B/C/D that I indicated), plays a success or failure audio clip, and jumps back to the menu. It was the first time my if/else conditional action worked! I was only too pleased to duplicate it for the nine questions. I am a bit curious, though: if I did want to enable shuffling of answers, would I have had to specify the correct phrase (the full sentence related to the right A/B/C/D) as the literal value instead?
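The if/else logic of that per-question conditional action, again sketched as hedged Python rather than Captivate's dialog: cpQuizInfoAnswerChoice is a real Captivate system variable, but the action strings below are hypothetical placeholders for the configured behaviours (play audio, jump to slide).

```python
def check_answer(cpQuizInfoAnswerChoice, correct_literal):
    """Rough sketch of the per-question conditional action: compare the
    learner's choice with the literal answer (A/B/C/D), queue a success or
    failure audio clip, and jump back to the menu either way."""
    actions = []
    if cpQuizInfoAnswerChoice == correct_literal:   # IF branch
        actions.append("play_audio:success.mp3")
    else:                                           # ELSE branch
        actions.append("play_audio:failure.mp3")
    actions.append("jump_to_slide:Menu")            # both branches return
    return actions

# Example: a question whose correct answer is the literal "C".
check_answer("C", "C")  # success clip, then back to the menu
check_answer("A", "C")  # failure clip, then back to the menu
```

This comparison is exactly why shuffling was disabled: with shuffling on, the letter labels would no longer be stable, so the literal being compared would presumably have to be the answer text itself.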

Results Slide

This slide, though system-generated, has some changes. My objective here was to personalize the quiz results using an avatar. To achieve this effect I used the opacity-at-0% trick described earlier for the normal inbuilt states. Under Project Info, in the Pass/Fail option, I added a “Change State” of the avatar to “Happy” / “Disappointed” and adjusted the default pass and failure messages.

Conclusion


Thinking it through – My Storyboards

Building the Jeopardy Game was a giant leap in my self-directed learning with Captivate because I was able to move faster and apply lessons from previous projects to make it come together. For Adobe Captivate beginners who, like me, may be overly zealous and ambitious in their creations, it is a worthwhile challenge if you’re up for the task.

Providing Feedback on Quiz Questions — Yes or No?

I was asked the following question today by a learning professional in a large company:

It will come as no surprise that we create a great deal of mandatory/regulatory-required eLearning here. All of these eLearning interventions have a final assessment that the learner must pass at 80% to be marked as completed, in addition to viewing all the course content. The question is around feedback for those assessment questions.

  • One faction says no feedback at all, just a score at the end and the opportunity to revisit any section of the course before retaking the assessment.

  • Another faction says to tell them correct or incorrect after they submit their answer for each question.

  • And a third faction argues that we should give them detailed feedback beyond just correct/incorrect for each question. 

Which approach do you recommend? 


Here is what I wrote in response:

It all depends on what you’re trying to accomplish…

If this is a high-stakes assessment you may want to protect the integrity of your questions. In such a case, you’d have a large pool of questions and you’d protect the answer choices by not divulging them. You may even have proctored assessments, for example, having the respondent turn on their web camera and submit their video image along with the test results. Also, you wouldn’t give feedback because you’d be concerned that students would share the questions and answers.

If this is largely a test to give feedback to the learners—and to support them in remembering and performance—you’d not only give them detailed feedback, but you’d retest them after a few days or more to reinforce their learning. You might even follow up to see how well they’ve been able to apply what they’ve learned on the job.

We can imagine a continuum between these two points where you might seek a balance between a focus on learning and a focus on assessment.

This may be a question for the lawyers, not just for us as learning professionals. If these courses are being provided to meet certain legal requirements, it may be most important to consider what might happen in the legal domain. Personally, I think the law may be behind learning science. Based on talking with clients over many years, it seems that lawyers and regulators often recommend learning designs and assessments that do NOT make sense from a learning standpoint. For example, lawyers tell companies that teaching a compliance topic once a year will be sufficient -- when we know that people forget and may need to be reminded.

In the learning-assessment domain, lawyers and regulators may say that it is acceptable to provide a quiz with no feedback. They are focused on having a defensible assessment. This may be the advice you should follow given current laws and regulations. However, this seems ultimately indefensible from a learning standpoint. Couldn't a litigant argue that the organization did NOT do everything they could to support the employee in learning -- if the organization didn't provide feedback on quiz questions? This seems a pretty straightforward argument -- and one that I would testify to in a court of law (if I was asked).

By the way, how do you know 80% is the right cutoff point? Most people use an arbitrary cutoff point, but then you don’t really know what it means.

Also, are your questions good questions? Do they ask people to make decisions set in realistic scenarios? Do they provide plausible answer choices (even for incorrect choices)? Are they focused on high-priority information?

Do the questions and the cutoff point truly differentiate between competence and lack of competence?

Are the questions asked after a substantial delay -- so that you know you are measuring the learners' ability to remember?

Bottom line: Decision-making around learning assessments is more complicated than it looks.

Note: I am available to help organizations sort this out... yet, as one may ascertain from my answer here, there are no clear recipes. It comes down to judgment and goals.

If your goal is learning, you probably should provide feedback and provide a delayed follow-up test. You should also use realistic scenario-based questions, not low-level knowledge questions.

If your goal is assessment, you probably should create a large pool of questions, proctor the testing, and withhold feedback.
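
A minimal sketch of the question-pool idea (pool contents and names are hypothetical, for illustration only): drawing each learner's quiz from a large pool means repeat test-takers can't simply memorize one fixed form.

```python
import random

# Hypothetical pool of scenario-based items; a real pool would hold
# full question objects, not just identifiers.
QUESTION_POOL = [f"scenario_item_{i:03d}" for i in range(1, 201)]  # 200 items

def build_quiz_form(learner_id, n_items=20):
    """Draw a reproducible but learner-specific subset of the pool."""
    # Seeding with the learner id keeps the form stable for retake audits
    # while varying it across learners.
    rng = random.Random(learner_id)
    return rng.sample(QUESTION_POOL, n_items)

form = build_quiz_form("learner-0042")
print(len(form))  # 20 items drawn from the pool
```

Seeding per learner is just one design choice; a high-stakes program might instead log each generated form for later audit.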

 

Interview with Karl Kapp on Games, Gamification, and LEARNING!

Dr. Karl Kapp is one of the learning field's best research-to-practice gurus! Legendary for his generosity and indefatigable energy, it is my pleasure to interview him for his wisdom on games, gamification, and their intersection.

His books on games and gamification are wonderful; you can find them on Amazon.com.

 
 

The following is a master class on games and learning:

Will (Question 1):

Karl, you’ve written a definitive exploration of Gamification in your book, The Gamification of Learning and Instruction: Game-Based Methods and Strategies for Training and Education. As I read your book I was struck by your insistence that Gamification “is not the superficial addition of points, rewards, and badges to learning experiences.” What the heck are you talking about? Everybody knows that gamification is all about leaderboards, or so the marketplace would make us believe… [WINK, WINK] What are you getting at in your repeated warning that gamification is more complex than we might think?

Karl:

If you examine why people play games, the reasons are many, but often players talk about the sense of mastery, the enjoyment of overcoming a challenge, the thrill of winning and the joy of exploring the environment. They talk about how they moved from one level to another or how they encountered a “boss level” and defeated the boss after several attempts or how they strategized a certain way to accomplish the goal of winning the game. Or they describe how they allocated resources so they could defeat a difficult opponent. Rarely, if ever, do people who play games talk about the thrill of earning a point or the joy of being number seven on the leaderboard or the excitement of earning a badge just for showing up.

The elements of points, badges and leaderboards (PBLs) are the least exciting and enticing elements of playing games, so there is no way we should lead with those items when gamifying instruction. Sure, PBLs play a role in making a game more understandable or in determining how far away a player is from the “best,” but they do little to internally motivate players by themselves. Relying solely on the PBL elements of games to drive learner engagement is not sustainable, and those elements are not even what makes games motivational or engaging. It’s the wrong approach to learning and motivation. It’s superficial; it’s not deep enough to have lasting meaning.

Instead, we need to look at the more intrinsically motivating and deeper elements of games such as challenge, mystery, story, constructive feedback (meaningful consequences), strategy, socialization and other elements that make games inherently engaging. We miss a large opportunity when we limit our “game thinking” to points, badges and leaderboards. We need to expand our thinking to include elements that truly engage players and draw them into a game. These are the things that make games fun and frustrating and worth our investment in time.

 

Will (Question 2):

You wrote that “too many elements of reality and the game ceases to be engaging”—and I’m probably taking this out of context—but I wonder if that is true in all cases? For example, I can imagine a realistic flight simulator for fighter pilots that creates an almost perfect replica of the cockpit, g-forces, and more, that would be highly engaging… On the other hand, my 13-year-old daughter got me hooked on Tanki, an online tank shoot-’em-up game, and there are very few elements of reality in the game—and I, unfortunately, find it very engaging. Is it different for novices and experts? Are the recommendations for perceptual fidelity different for different topic areas, different learning goals, et cetera?

Karl:

A while ago, I read a fake advertisement for a military game. It was a parody. The fake game description described how the “ultra-realistic” military game would be hours of fun because it was just like actually being in the military. The description told the player that he or she would have hours of fun walking to the mess hall, maintaining equipment, getting gasoline for the jeep, washing boots, patrolling and cleaning latrines. None of these things is really fun; in fact, they are boring, but they are part of the life of being in the military. Military games don’t include these mundane activities. Instead, you are always battling an enemy or strategizing what to do next. The actions that a military force performs 95% of the time are not included in the game because they are too boring.

If games were 100% realistic, they would not be fun. So, instead, games are an abstraction of reality: they focus on the things within reality that can be made engaging or interesting. If a game reflected reality 100%, the game play would be boring. Now certainly, games can be designed to “improve” reality and make it more fun. In the game The Sims, you wake up, get dressed and go to work, which all seems pretty mundane. However, these realistic activities in The Sims are an abstraction of the tasks you actually perform. The layer of abstraction makes the game more exciting, engaging and fun. But in either the military game case or The Sims, too much reality is not fun.

The flight simulator needs to be 100% realistic because it’s not really a game (although people do play it as a game); the real purpose of a simulation is training and the perfection of skills. A flight simulator can be fun for some people to “play,” but in a 100% realistic simulator, if you don’t know what you are doing, it’s boring because you keep crashing. That’s true for someone who doesn’t know how to fly, like me. If you made a World War II air-battle game with 100% realistic controls for my airplane, it wouldn’t be fun. In game design, we need to balance elements of reality with the learning goal and the element of engagement.

For some people, a simulator can be highly engaging because the learner is performing the task she would do on the job. So there needs to be a balance in games and simulations to have the right amount of reality for the goals you are trying to achieve.

 

Will (Question 3):

In developing a learning game, what should come first, the game or the goals (of learning)?

Karl:

Learning goals must come first and must remain at the forefront of the game design process. Too often I see the mistake of a design team becoming too focused on game elements and losing sight of the learning goals. In our field, we are paid to help people learn, not to entertain them. Learning first.

Having said that, you can’t ignore the game elements or treat them as second-class citizens; you can’t bolt on a points system and think you have now developed a fun game—you haven’t. The best process involves simultaneously integrating game mechanics and learning elements. It’s tricky, and not a lot of instructional designers have experience or training in this area, but it’s critical to integrate the game and learning elements; the two need to be designed together. Neither can be an afterthought.

 

Will (Question 4):

Later we’ll talk about the research you’ve uncovered about the effectiveness of games. As I peruse the literature on games, the focus is mostly on the potential benefits of games. But what about drawbacks? I, for one, “waste” a ton of time playing games. Opportunity costs are certainly one issue, but maybe there are other drawbacks as well, including addiction to the endorphins and adrenaline, or a heightened state of engagement during gaming that may make other aspects of living – or learning – seem less interesting and engaging. What about learning bad ideas, being desensitized to violence, sexual predation, or other anti-social behaviors? Are there downsides to games? And, in your opinion, has the research to date done enough to examine negative consequences of games?

Karl:

Yes, games can have horrible, anti-social content. They can also have wonderful, pro-social content. In fact, a growing area of game research focuses on the possible pro-social aspects of games. The answer really is the content. A “game” per se is neither pro- nor anti-social, just like any other instructional medium. Look at speeches. Stalin gave speeches filled with horrible content and Martin Luther King, Jr. gave speeches filled with inspiring content. Yet we never seem to ask the question “Are speeches inherently good or bad?”

Games, like other instructional media, have caveats that good instructional designers need to factor in when deciding if a game is the right instructional intervention. Certainly time is a big factor. It takes time both to develop a game and to play a game, so this is a huge downside. You need to weigh the impact you think the game will have on learner retention or knowledge against another instructional intervention. Although, I can tell you there are at least two meta-analyses that indicate that games are more effective for learning than traditional, lecture-based instruction. But the point is not to blindly choose a game over a lecture or discussion. The decision regarding the right instructional design needs to be thoughtful. Knowing the caveats should factor into the final design decision.

Another caveat is that games should not be “stand-alone.” It’s far better for a learning game to be included as part of a larger curriculum rather than developed without any sense of how it fits into the larger picture. Designers need to make sure they don’t lose sight of the learning objective. If you are considering deploying a game within your organization, you have to make sure it’s appropriate for your culture. Another big factor to consider is how the losers are handled in the game. If a person is not successful at a game, what are the repercussions? What if she gets mad and shuts down? What if he walks away halfway through the experience because he is so frustrated? These types of contingencies need to be considered when developing a game. So, yes, there are downsides to games, as there are downsides to other types of instruction. Our job, as instructional designers, is to understand as many downsides and upsides as possible for many different design possibilities and make an informed, evidence-based decision.

 

Will (Question 5):

As you found in your research review, feedback is a critical element in gaming. I’ve anointed “feedback” as one of the most important learning factors in my Decisive Dozen – as feedback is critical in all learning. The feedback research doesn’t seem definitive in recommending immediate versus delayed feedback, but the wisdom I take from the research suggests that delayed feedback is beneficial in supporting long-term remembering, whereas immediate feedback is beneficial in helping people “get” or comprehend key learning points or contingencies. In some sense, learners have to build correct mental models before they can (or should) reinforce those understandings through repetitions, reinforcement, and retrieval practice.

Am I right that most games provide immediate feedback? If not, when is immediate feedback common in games, when is delayed feedback common? What missed opportunities are there in feedback design?

Karl:

You are right; most games provide immediate, corrective feedback. You know right away if you are performing the right action and, if not, the consequences of performing the wrong action. A number of games also provide delayed feedback in the form of after-action reviews. These are often seen in games using branching. At the end of the game, the player is given a description of the choices she made versus the correct choices. So, delayed feedback is common in some types of games. As for what is missing, I think that most learning games do a poor job of layering feedback. In well-designed video games, at the first level of help, a player receives a vague clue. If this doesn’t work or too much time passes, the game provides a more explicit clue and finally, if that doesn’t work, the player receives step-by-step instructions. Most learning games are too blunt. They tend to give the player the answer right away rather than layering clues or escalating the help. I think that is a huge missed opportunity.
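
The layered help described here (vague clue, then explicit clue, then step-by-step) can be sketched as a simple escalation ladder. All class names and hint text below are hypothetical, for illustration only:

```python
# A minimal sketch of layered ("escalating") feedback: each failed attempt
# moves the learner one rung up the ladder instead of revealing the answer.
HINTS = [
    "Vague clue: think about what the customer actually asked for.",
    "Explicit clue: re-read the refund policy covered in step 2.",
    "Step-by-step: open the policy, find the 30-day clause, apply it here.",
]

class HintLadder:
    def __init__(self, hints):
        self.hints = hints
        self.failures = 0

    def record_failure(self):
        self.failures += 1

    def next_hint(self):
        # Cap at the most explicit hint once the ladder is exhausted.
        level = min(self.failures, len(self.hints) - 1)
        return self.hints[level]

ladder = HintLadder(HINTS)
print(ladder.next_hint())   # before any failure: the vague clue
ladder.record_failure()
ladder.record_failure()
ladder.record_failure()
print(ladder.next_hint())   # after repeated failures: the step-by-step help
```

A real game might also escalate on elapsed time, not just failed attempts, as the video-game example suggests.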

 

Will (Question 6):

By the way, your book does a really nice job in describing the complexity and subtlety of feedback, including Robin Hunicke’s formulation for what makes feedback “juicy.” What subtleties around feedback do most of us instructional designers or instructors tend to miss?

Karl:

Our feedback in learning games and even elearning modules is just too blunt. We need more subtlety. Hunicke describes the need for feedback to have many different attributes, including the need for the feedback to be tactile and coherent. She refers to tactile feedback as creating an experience where the player can feel the feedback as it is occurring on screen, so that it’s not forced or unnatural within the game play. Instructional designers typically don’t create feedback the player or learner feels; instead, they create feedback that is “in your face,” such as “Nice job!” or “Sorry, try again.” She describes coherent feedback as feedback that stays within the context of the game. It is congruent with on-screen actions and activities as well as with the storyline unfolding as the interactions occur. Our learning games seem to fail at including both of these elements in our feedback. In general, our field needs to focus on feedback that is more naturally occurring and within the flow of the learning.

 

Will (Question 7):

Do learners have to enjoy the game to learn from it? What are the benefits of game pleasure? Are there drawbacks at all?

Karl:

Actually, research by Traci Sitzmann (2011) indicates that a learner doesn’t have to report being “entertained” to learn from a serious game, so fun should not be the standard by which we measure the success of a game. Instead, she found that what makes a game effective for learning is the level of engagement. Engagement should be the goal when designing a learning game. There are a number of studies indicating that games are motivational, although one meta-analysis on games indicated that motivation was not a factor. So, I am not sure if pleasure is a necessary factor for learning. Instead, I tend to focus more on building engagement and having learners make meaningful decisions, and less on learner enjoyment and fun. This tends to run counter to why most people want a learning game, but the reason we should want learning games is to encourage engagement and higher-order thinking, not simply to make boring learning fun. Engagement, mastery and tough decision making might not always be fun but, as you indicated in your question about simulations, they can be engaging, and learning results from engagement and from understanding the consequences of actions taken during that engagement.

 

Will (Question 8):

As I was perusing research on games, one of my surprises was that games seemed to be used for health-behavior change at least as much as for learning. What the heck’s going on?

Karl:

Games are great tools for promoting health. We all know that we should focus on health and wellness, but we often let other life elements get in the way. Making staying healthy a game provides, in many cases, that little bit of extra motivation to make you stay on course. I think games for health work so well because they capitalize on our already existing knowledge that we need to stay healthy and then provide tracking of progress, earning of points and other incentives that give us the extra boost to take the extra 100 steps needed to get our 10,000 for the day. Ironically, I find games used in many life-and-death situations.

 

Will (Question 9):

In your book you have a whole chapter devoted to research on games. I really like your review. Of course, with all the recent research, maybe we've learned even more. Indeed, I just did a search of PsycINFO (a database of scientific research in the social sciences). When I searched for "games" in the title, I found 110 articles in peer-reviewed journals in this year (2016) alone. That's a ton of research on games!!

Let's start with the finding in your book that the research methodology of much of the research is not very rigorous. You found that concern from more than one reviewer. Is that still true today (in 2016)? If the research base is not yet solid, what does that mean for us as practitioners? Should we trust the research results or should we be highly skeptical -- OR, where in-between these extremes should we be?

Karl:

The short answer, as with any body of research, is to be skeptical but not paralyzed. The research on games is continually evolving, so waiting for a definitive verdict is futile. Research results are rarely a definitive answer; they only give us guidance. I am sure you remember when “research” indicated that eggs were horrible for you and then “research” revealed that eggs were the ultimate health food. We need to know that research evolves and is not static. And, we need to keep in mind that some research indicated that smoking had health benefits, so I am always somewhat skeptical. Having said that, I don’t let skepticism stop me from doing something. If the research seems to be pointing in a direction but I don’t have all the answers, I’ll still “try it out” to see for myself.

That said, the research on games, even research done today, could be much more rigorous. There are many flaws, which include small sample sizes, no universal definition of games and too much focus on comparing the outcomes of games with the outcomes of traditional instruction. One would think that argument would be pretty much over, but decade after decade we continue to compare “traditional instruction” with radio, television, games and now mobile devices. After decades of research the findings are almost always the same: good design, regardless of the delivery medium, is the most crucial aspect for learning. Where the research really needs to go, and it’s starting to go in that direction, is toward comparing elements of games to see which elements lead to the most effective and deep learning outcomes. So, for example, is the use of a narrative more effective in a learning game than the use of a leaderboard, or is the use of characters more critical for learning than the use of a strategy-based design? I think the blanket comparisons are bad and, in many cases, misleading. For example, Tic-Tac-Toe is a game, but so is Assassin’s Creed IV. So to say that all games teach pattern recognition because Tic-Tac-Toe teaches pattern recognition is not sound. As Clark Aldrich stated years ago, the research community needs some sort of taxonomy to help identify different genres of games and then research the learning impact of those genres.

So, I am always skeptical of game research and try to carefully describe limitations of the research I conduct and to carefully review research that has been conducted by others. I tend to like meta-analysis studies which are one method of looking at the body of research in the field and then drawing conclusions but even those aren’t perfect as you have arguments about what studies were included and what studies were not included.

At this point I think we have some general guidelines about the use of games in learning. We know that games are most effective in a curriculum when they are introduced and described to the learners, then the learners play the game and then there is a debrief. I would like to focus more on what we know from the research on games and how to implement games effectively rather than the continuous, and in my opinion, pointless comparison of games to traditional instruction. Let’s just focus on what works when games do provide positive learning outcomes.

 

Will (Question 10):

A recent review of serious games (Tsekleves, Cosmas, & Aggoun, 2016) concluded that their benefits were still not fully supported: “Despite the increased use of computer games and serious games in education, there are still very few empirical studies with conclusive results on the effectiveness of serious games in education.” This seems a bit strong given other findings from recent meta-analyses, for example the moderate effect sizes found in a meta-analysis from Wouters, van Nimwegen, van Oostendorp, & van der Spek (2013).

Can you give us a sense of the research? Are serious games generally better, sometimes better, or rarely better than conventional instruction? Or, are they better in some circumstances, for some learners, for some topics – rather than others? How should we as practitioners think about the research findings?

Karl:

Wouters et al. (2013) found that games are more effective than traditional instruction, as did Sitzmann (2011). But, as you indicated, other meta-analyses have not come to that conclusion. So, again, I think the real issue is that the term “games” is way too broad for easy comparisons, and we need to focus more on the elements of games and how the individual elements intermingle and combine to cause learning to occur. One major problem with research in the field of games is that to conduct effective and definitive research we often want to isolate one variable and then keep all other variables the same. That process is extremely difficult to do with games. New research methods might need to be invented to effectively discover how game variables interact with one another. I even saw an article that declared that all games are situational learning and should be studied in that context rather than in an experimental context. I don’t know the answer, but there are few simple solutions to game-based research and definitive declarations of the effectiveness of games.

However, having said all that, here are some things we do know from the research related to using games for learning:

  • Games should be embedded in instructional programs. The best learning outcomes from using a game in the classroom occur when a three-step embedding process is followed. The teacher should first introduce the game and explain its learning objectives to the students. Then the students play the game. Finally, after the game is played, the teacher and students should debrief one another on what was learned and how the events of the game support the instructional objectives. This process helps ensure that learning occurs from playing the game (Hays, 2005; Sitzmann, 2011).

  • Ensure game objectives align with curriculum objectives. Ke (2009) found that the learning outcomes achieved through computer games depend largely on how educators align learning (i.e., learning subject areas and learning purposes), learner characteristics, and game-based pedagogy with the design of an instructional game. In other words, if the game objectives match the curriculum objectives, disjunctions are avoided between the game design and curricular goals (Schifter, 2013). The more closely curriculum goals and game goals are aligned, the more likely the learning outcomes of the game will match the desired learning outcomes for the student.

  • Games need to include instructional support. In games without instructional support, such as elaborative feedback, pedagogical agents, and multimodal information presentations, students tend to learn how to play the game rather than the domain-specific knowledge embedded in the game (Hays, 2005; Ke, 2009; Wouters et al., 2013). Instructional support that helps learners understand how to use the game increases the effectiveness of the game by enabling learners to focus on its content rather than its operational rules.

  • Games do not need to be perceived as being “entertaining” to be educationally effective. Although we may hope that Maria finds the game entertaining, research indicates that a student does not need to perceive a game as entertaining to still receive learning benefits. In a meta-analysis of 65 game studies, Sitzmann (2011) found that although “most simulation game models and review articles propose that the entertainment value of the instruction is a key feature that influences instructional effectiveness, entertainment is not a prerequisite for learning” (see also Garris et al., 2002; Tennyson & Jorczak, 2008; Wilson et al., 2009). Furthermore, what is entertaining to one student may not be entertaining to another. The fundamental criterion in selecting or creating a game should be the learner’s active engagement with the content rather than simply being entertained (Dondlinger, 2007; Sitzmann, 2011).

 

Will (Question 11):

If the research results are still tentative, or are only strong in certain areas, how should we as learning designers think about serious games? Is there overall advice you would recommend?

Karl:

First of all, I’d like to point to the existing research indicating that lectures are not as effective for learning as some believe. Practitioners, faculty members and others have defaulted to lectures and held them up as the “holy grail” of learning experiences, when the literature clearly doesn’t back up the use of lectures as the best method for teaching higher-level thinking skills. If one wants to be skeptical of learning designs, start with the lecture.

Second, I think the guidelines outlined above are a good start. We are literally learning more all the time, so keep checking to see the latest. I try to publish research on my blog (karlkapp.com) and at the ATD Science of Learning blog; and, of course, the Will at Work Learning blog is a good place to look for all things learning research.

Third, we need to take more chances. Don’t be paralyzed waiting for research to tell you what to do. Try something; if you fail, try something else. Sure, you can spend your career creating safe PowerPoint-based slide shows where you hit Next to continue, but that doesn’t really move your career or the field forward. Take what is known from reading books and from vetted and trusted internet sources and make professionally informed decisions.

 

Will (Question 12):

Finally, if we decide to go ahead and develop or purchase a serious game, what are the five most important things people should know?

Karl:

  1. First, clearly define your goals. Why are you designing or purchasing a serious game and what do you expect as the outcome? After the learners play the game, what should they be able to do? How should they think? What result do you desire? Without a clearly defined outcome, you will run into problems.

  2. Determine how the game fits into your overall learning curriculum. Games should not be stand-alone; they really should be an integral part of a larger instructional plan. Determine where the serious game fits into the bigger picture.

  3. Consider your corporate culture. Some cultures will allow a fanciful game with zombies or strange characters and some will not. Know what your culture will tolerate in terms of game look and feel and then work within those parameters.

  4. If the game is electronic, get your information technology (IT) folks involved early. You’ll need to look at things like download speed, access, browser compatibility and a host of other technical issues.

  5. Think carefully and deeply before you decide to develop a game internally. Developing good, effective serious games is tough. It’s not a two-week project. Partner with a vendor to obtain the desired result.

  6. (A bonus) Don’t neglect the power of card games or board games for teaching. If you have the opportunity to bring learners together, consider low-tech game solutions. Sometimes those are the most impactful.

 

Will (Question 13):

One of your key pieces of advice is for folks to play games to learn about their power and potential. What kind of games should we choose to play? How should we prioritize our game playing? What kind of games should we avoid because they’ll just be a waste of time or might give us bad ideas about games for learning?

Karl:

I think you should play all types of games. First, pick different types of games from a delivery perspective: some card games, board games, casual games on your smartphone and video games on a game console. Mix it up. Then play different genres, such as role-play games, cooperative games, matching games, racing games and games where you collect items (like Pokémon Go). The trick is to not just play games that you like but to play a variety of games. You want to build a “vocabulary” of game knowledge. Once you’ve built a vocabulary, you will have a formidable knowledge base on which to draw when you want to create a new learning game.

Also, you can’t just play the games. You need to play and critically evaluate them. Pay attention to what is engaging about the game, what is confusing, how the rules are crafted, what game mechanics are being employed, and so on. Play games with a critical eye. Of course, you run the danger of ruining the fun of games because you will dissect any game you are playing to determine what about the game is good and what is bad but, that’s OK, you need that skill to help you design games. You want to think like a game designer because when you create a serious game, you are a game designer. Therefore, the greater the variety of games you play and dissect, the better game designer you will become.

 

Will (Question 14):

If folks are interested, where can they get your book?

Karl:

Amazon.com is a great place to purchase my book, as is the ATD web site. Also, if people have access to Lynda.com, I have several courses on Lynda, including “The Gamification of Learning.” And I have a new book coming out in January, co-authored by my friend Sharon Boller, called “Play to Learn,” in which we walk readers through the entire serious game design process from conceptualization to implementation. We are really excited about that book because we think it will be very helpful for people who want to create learning games.

 


 

Research

Sitzmann, T. (2011). A meta-analytic examination of the instructional effectiveness of computer-based simulation games. Personnel Psychology, 64(2), 489–528.

Tsekleves, E., Cosmas, J., & Aggoun, A. (2016). Benefits, barriers and guideline recommendations for the implementation of serious games in education for stakeholders and policymakers. British Journal of Educational Technology, 47(1), 164–183. Available at: http://onlinelibrary.wiley.com/doi/10.1111/bjet.12223/pdf

Wouters, P., van Nimwegen, C., van Oostendorp, H., & van der Spek, E. D. (2013). A meta-analysis of the cognitive and motivational effects of serious games. Journal of Educational Psychology, 105(2), 249–265. http://dx.doi.org/10.1037/a0031311

Can Instructor Attractiveness Lead to Higher Smile-Sheet Ratings? More Learning? A Research Brief.

In a recent research article, Tobias Wolbring and Patrick Riordan report the results of a study looking into the effects of instructor "beauty" on college course evaluations. What they found might surprise you -- or worry you -- depending on your view of the vagaries of fairness in life.

Before I reveal the results, let me say that this is just one study (two experiments), and that the effects it found were small.

Their first study used a large data set involving university students. Because the data had been collected previously through routine evaluation procedures, the researchers could not verify the quality of the actual teaching, nor the true "beauty" of the instructors (they had to rely on online images).

The second study was a laboratory study where they could precisely vary the level of beauty of the instructor and their gender, while keeping the actual instructional materials consistent. Unfortunately, "the instruction" consisted of an 11-minute audio lecture taught by relatively young instructors (young adults), so it's not clear whether their results would generalize to more realistic instructional situations.

In both studies they relied on facial beauty as the measure of attractiveness. While previous research shows that facial beauty is the primary way we rate each other's attractiveness, body attractiveness has also been found to have effects.

Their most compelling results:

1. Ratings of attractiveness are very consistent across raters. People seem to agree on who is attractive and who is not, confirming the findings of many previous studies.

2. Instructors who are more attractive get better smile-sheet ratings. Note that the effect was very small in both experiments. This confirms what many other studies have found, although the results here were generally weaker than in previous work -- probably because of the better controls used.

3. Better-looking instructors engender less absenteeism. That is, students were more likely to show up for class when their instructor was attractive.

4. The genders of the raters and instructors made no difference. The researchers hypothesized that female raters might respond differently to male and female instructors, and vice versa, but this was not found. Previous studies have shown mixed results.

5. In the second experiment, where learners actually took a test on what they'd learned, attractive instructors engendered higher scores on a difficult test, but not on an easy test. The researchers hypothesize that learners engage more fully when their instructors are attractive.

6. In the second experiment, learners either (a) took a test first and then evaluated the course, or (b) completed the evaluation first and then took the test. Did it matter? Yes! The researchers hypothesized that highly attractive instructors would be penalized more than their less attractive colleagues for giving a hard test. This prediction was confirmed: when the difficult test came before the evaluation, better-looking instructors were rated more poorly than less attractive ones. Not much difference was found for the easy test.

Ramifications for Learning Professionals

First, let me caveat these thoughts with the reminder that this is just one study! Second, the study's effects were relatively weak. Third, its results -- even if valid -- might not apply to your learners, your instructors, your organization, or your situation!

  1. If you're a trainer, instructor, teacher, or professor -- get beautiful! Obviously, you can't change your bone structure or symmetry, but you can do some things to make yourself more attractive. I drink raw spinach smoothies and climb telephone poles with my bare hands to strengthen my shoulders and give me that upside-down-triangle attractiveness, all while wearing the most expensive suits I can afford -- $199 at Men's Wearhouse -- with the purpose of pushing myself above the threshold of ... I can't even say the word. You'll have to find what works for you.
  2. If you refuse to sell your soul or put in time at the gym, you can always become a behind-the-scenes instructional designer or a research translator. As Clint said, "A man's got to know his limitations."
  3. Okay, I'll be serious. We shouldn't discount attractiveness entirely; it may make a small difference. On the other hand, we have more important, more leverageable actions we can take. I like the research-based finding that we all get judged primarily on two dimensions: warmth/trust and competence. Be personable, be authentically trustworthy, and work hard to do good work.
  4. The finding from the second experiment -- that better-looking instructors might prompt more engagement and more learning -- is the one I find intriguing. It may suggest, more generally, that the likability or attractiveness of our instructors or elearning narrators is important in keeping our learners engaged. The research isn't a slam dunk, but it is suggestive.
  5. In terms of learning measurement, the results may suggest that evaluations should come before difficult performance tests. I don't know, though, how this applies to adults in workplace learning; they might be more thankful for instructional rigor if it helps them perform better in their jobs.
  6. More research is needed!

Research Reviewed

Wolbring, T., & Riordan, P. (2016). How beauty works. Theoretical mechanisms and two empirical applications on students' evaluation of teaching. Social Science Research, 57, 253–272.