FDAnews
www.fdanews.com/articles/169541-creating-effective-computer-based-assessment

Creating Effective Computer-Based Assessment

January 14, 2015

Computer-based quizzes are fairly inflexible. If you want to administer and grade an assessment on a computer, you are essentially limited to the multiple-choice format. Your eLearning authoring software may dress it up in many different guises, but all the question formats are fundamentally the same (even pictorial ones): choose the correct response(s) from among one or more incorrect ones.

This seems pretty straightforward, but how can you be sure you are evaluating the performance metric you really want to test?

It's very easy to get stuck when writing a multiple-choice test question. First, you come up with the question, which is easy because you know what's important about the subject matter. Logically, the correct response comes next, though you probably had it in mind before writing the question. Job done? Not quite: you still need to add some wrong answers. Depending on the subject matter, this may be easy, particularly if you want to test the learner's grasp of technical data, where all the answers can be very similar. For example:

Q: How many seconds does it take a 2003 Lamborghini Gallardo to reach 60 mph?
a) 5.4 seconds
b) 4.8 seconds
c) 4.0 seconds
d) 6.2 seconds

This is a good test of the learner's retention of numerical data regarding a specific vehicle. However, we often want to assess a learner's behavioral understanding, and it becomes much more difficult to offer up wrong answers that are credible. In fact, it can be downright frustrating. Consider this example:

Q: Why should you not open the oven while baking a cake?
a) it will ruin the cake
b) you might get burned
c) it could fall out
d) a bird might fly in

You want answer a) to be the correct response. The problem is that answer b) is also possible, and while your other answers are clearly more ridiculous, even they aren't impossible. What should be a question designed to assess our would-be baker's understanding of a process has become bogged down with ambiguity.
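Under the hood, the format described above boils down to a very simple structure: a stem, a list of options, and a single correct index. A minimal sketch in Python (the class and field names are illustrative, not taken from any particular authoring tool, and which option is "correct" here is assumed for the example):

```python
from dataclasses import dataclass

@dataclass
class MultipleChoiceQuestion:
    """One stem, one correct option, and several distractors."""
    stem: str
    options: list[str]
    correct_index: int

    def grade(self, chosen_index: int) -> bool:
        """A response is simply right or wrong; no partial credit."""
        return chosen_index == self.correct_index

# The Lamborghini example, assuming option c) is the keyed answer:
q = MultipleChoiceQuestion(
    stem="How many seconds does it take a 2003 Lamborghini "
         "Gallardo to reach 60 mph?",
    options=["5.4 seconds", "4.8 seconds", "4.0 seconds", "6.2 seconds"],
    correct_index=2,
)
print(q.grade(2))  # True
print(q.grade(0))  # False
```

Seeing the structure laid bare makes the authoring problem obvious: the stem and `correct_index` are easy, but every entry in `options` other than the key must be a plausible distractor, and that is exactly where behavioral questions break down.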

Use Short Story Questions

One alternative to multiple choice is to write narrative questions followed by a simple Yes/No or True/False decision. Presenting the question as a story places it in a real-world situation and provides valuable context. For example:

Q: You pour the cake batter into the pan and place it in the oven. After 10 minutes, you wonder how it is doing but cannot see properly through the oven door. Would you open the oven to see how your cake is doing?
a) Yes
b) No

By reframing the question, you no longer need to invent a list of incorrect answers, and you're assessing the right metric: the learner's understanding of the proper behavior.

Provide Feedback

In our short story question, we've stripped down our answers so much that they no longer supply any context. You could make them more elaborate, but beware of falling back into the problem of making credible wrong responses. What we can – and should – do is provide feedback to reinforce the point. Instead of responding with a simple “correct” or “incorrect,” tell the learner why the answer is right or wrong. In our cake-baking example, effective feedback could be:

That’s right (or wrong)! Opening the oven door alters the baking temperature and affects the cake’s ability to rise.
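In data terms, this just means attaching a feedback string to each answer rather than a bare "correct"/"incorrect" flag. A small sketch of the cake-baking question with per-answer feedback (the dictionary shape and function name are illustrative assumptions, not a real tool's API):

```python
# Shared explanation, reused for both the right and the wrong answer,
# so the learner always hears *why* the behavior matters.
EXPLANATION = ("Opening the oven door alters the baking temperature "
               "and affects the cake's ability to rise.")

question = {
    "stem": ("You pour the cake batter into the pan and place it in "
             "the oven. After 10 minutes, you wonder how it is doing "
             "but cannot see properly through the oven door. Would "
             "you open the oven to see how your cake is doing?"),
    "correct": "No",
    "feedback": {
        "Yes": "That's wrong! " + EXPLANATION,
        "No": "That's right! " + EXPLANATION,
    },
}

def respond(q: dict, answer: str) -> str:
    """Return the reinforcing feedback for the learner's answer."""
    return q["feedback"][answer]

print(respond(question, "No"))
# That's right! Opening the oven door alters the baking temperature
# and affects the cake's ability to rise.
```

The design point is that the explanation is written once and delivered on every path, so even a learner who answers correctly gets the reasoning reinforced.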

Bring the Message Home

Most assessments relate directly to training content that the learner has been through prior to taking the test, and while the feedback can be a reminder of that content, you can take it further. For example, in a recent question I wrote about sterile medicine manufacturing, I included in the feedback a tragic news story about premature babies who had died due to non-sterile drips. My intention was to remind the learner that the training wasn't just a box-ticking exercise, but that there are real-world implications to working in sterile manufacturing.

Contact information:
Leigh Heath
Learning Developer
MVI/MicronTraining
leighheath@mvitraining.com
www.mvitraining.com