Evaluating learning programs is a continuous challenge for instructional designers and L&D specialists. There seems to be a consensus that the higher you go on the Kirkpatrick model, the better. Yet most organizations admit that they stop somewhere around the second level of the model.
Read more: Measuring training effectiveness — the Kirkpatrick model
Since this is the case, instead of insisting on going further, I will focus on the types of questions already being used and how to make them more effective for your purpose.
The point of evaluating a learning intervention is to assess the quality of a course. It also helps reinforce information retention and reveals which areas learners may have missed.
This is the main issue with testing: it often fails to provide genuine results because the questions simply test short-term memory. Most quizzes are taken at the end of a module or course, which is too soon to determine what learners will remember and use in the long run.
That’s why the assessment phase should be postponed until at least a couple of days after the module has ended, and the questions need to focus more on workplace application than on simple recollection of information. For example, instead of having a question like:
What does the T in SMART (objectives) stand for?
you can go for:
“The objective should be reached by the end of the second quarter.” Which part of the SMART description does this match?
While testing the same thing, the second question takes the learner into a work-like frame of mind and encourages them to think both about the theory they have learned and an actual situation that can come up at work.
When dealing with multiple-choice questions, the correct answer is often longer and worded differently from the other options. This makes it strikingly obvious, and even those who have done something else entirely throughout the course will get it right without much effort.
While this may look good in the initial results, it does nothing for learning transfer or for a genuine assessment of the course. Here’s an example of such a transparent item:
What is assertive communication?
The difference in length and the more academic wording in option a) make it stand out. And while we're on the subject of don’ts: if you have items in which the same words show up at the start of every choice ("A way of" in this case), it’s best to move them into the stem and go with “Assertive communication is a way of:”.
Both multiple-choice and fill-in-the-blank questions work well in assessments, and it’s perfectly fine to have them together in the same quiz. It’s not such a great idea to blend them in a single item, like in the following example:
……………………………… leadership style focuses on culling opinions from all employees to make a decision that reflects the majority’s opinion and desires.
It’s not just about the structure of the questions. When you ask something and provide options, the learner only needs to recognize the right answer among the choices. Fill-in-the-blank questions turn this around and require genuine recall of the content.
If you want to include fill-in-the-blank questions in your online assessments, it’s best to find a way for the learner to actually type in that information, not pick it from a list.
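If your assessment tool supports custom grading, typed answers can still be checked automatically. Here is a minimal sketch in Python, assuming simple normalization of the learner's input; the function name and the accepted answer set are illustrative, not part of any particular authoring tool. The expected answer follows the leadership-style example above.

```python
def check_blank(response: str, accepted: set[str]) -> bool:
    """Check a typed fill-in-the-blank answer against accepted variants,
    ignoring case and surrounding whitespace."""
    return response.strip().lower() in {a.lower() for a in accepted}

# Accept common phrasings of the expected answer for the
# "leadership style" blank in the example item above.
accepted_answers = {"democratic", "participative"}
print(check_blank("  Democratic ", accepted_answers))  # True
print(check_blank("autocratic", accepted_answers))     # False
```

Accepting a small set of synonyms keeps the item a recall exercise without penalizing learners for reasonable variations in wording.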
Evaluating corporate learning programs should be more than checking a box on the instructional designer’s to-do list. The results need to be relevant and processed into better ideas. Instructors need a more accurate picture of how a training program is doing. All of this starts with asking better questions.