
Building efficient questions for learning program evaluations

Evaluating learning programs is a continuous challenge for instructional designers and L&D specialists. There seems to be a consensus that the higher you go on the Kirkpatrick model, the better. Yet most organizations admit that they stop somewhere around the model’s second level.


Read more: Measuring training effectiveness — the Kirkpatrick model


Since this is the case, instead of insisting on going further, I will focus on the types of questions already being used and how to make them more efficient for your purpose.

The point of evaluating a learning intervention is to assess the quality of a course. It also helps reinforce information retention and reveals which areas learners may have missed.

Questions need to move past simple recall

This is the main issue with testing: it fails to provide genuine results because the questions are aimed simply at testing short-term memory. Most quizzes are taken at the end of a module or course. That’s too soon to tell what learners will remember and use in the long run.

That’s why the assessment phase should be postponed until at least a couple of days after the module has ended, and the questions need to focus more on workplace application than on simple recollection of information. For example, instead of having a question like:

What does the T in SMART (objectives) stand for?

  a. True
  b. Thorough
  c. Tailored
  d. Time-oriented

you can go for:

“The objective should be reached by the end of the second quarter.” Which part of the SMART acronym does this statement match?

  a. Attainable
  b. Time-oriented
  c. Measurable
  d. Specific

While testing the same thing, the second question puts the learner into a work-like frame of mind and encourages them to think both about the theory they have learned and about an actual situation that can come up at work.

Length and vocabulary are important

When dealing with multiple-choice questions, the correct answer is often longer and worded differently than the other options. This makes it strikingly obvious, and even those who have done something else entirely throughout the course will get it right without much effort.

While this may look good in the initial results, it does nothing for learning transfer or for a genuine assessment of the course. Here’s an example of such a transparent item:

What is assertive communication?

  a. A way of speaking that clearly states your wants and needs while maintaining respect for yourself and the person you are speaking to.
  b. A way of getting your point across in a conversation.
  c. A way of telling the other person what they want to hear.
  d. A way of focusing the conversation on relevant subjects.

The difference in length and the more academic wording of option a) make it stand out. And while we’re on the DON’Ts: if the same words show up at the start of every choice (“A way of” in this case), it’s best to move them into the stem and go with “Assertive communication is a way of:”.

Don’t mix multiple-choice with fill-in-the-blanks

Both types of questions are great for assessments, and it’s perfectly fine to have them together in the same quiz. It’s not such a great idea to blend them, like in the following example:

……………………………… leadership style focuses on culling opinions from all employees to make a decision that reflects the majority’s opinion and desires.

  a. The effective
  b. The authoritarian
  c. The participative
  d. The transformational

It’s not just about the structure of the questions. When you ask something and provide options, the learner tries to answer from their own perspective and understanding of the material. Fill-in-the-blank questions with ready-made options turn this around and make it about recognizing the content instead.

If you want to include fill-in-the-blank questions in your online assessments, it’s best to find a way for the learner to actually type in that information, not pick it from a list.
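
If your quiz runs through a custom form or script rather than a packaged LMS, grading a typed blank only takes a forgiving comparison against the accepted answers. Here is a minimal Python sketch; check_blank and the accepted-variants set are illustrative assumptions, not a feature of any particular LMS:

```python
def check_blank(response: str, accepted: set[str]) -> bool:
    """Return True if a typed answer matches one of the accepted
    variants, ignoring case and surrounding whitespace."""
    return response.strip().lower() in accepted

# For the leadership-style blank above (hypothetical accepted variants):
print(check_blank(" Participative ", {"participative", "the participative"}))  # True
print(check_blank("authoritarian", {"participative", "the participative"}))    # False
```

Accepting a few variants keeps the question about recall without penalizing learners for articles or capitalization.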

Other helpful tips for crafting relevant testing questions

  • Avoid throwing in answers that are obviously wrong or nonsensical just for fun; they may get a smile, but they also diminish the relevance of the results.
  • Spread the correct answers across the letter options (a, b, c and d); it’s fine to write the correct answer first while drafting, just don’t leave it in the same position every time. If your LMS randomizes the answers anyway, you don’t need to worry about this (see the sketch after this list).
  • Make sure there is only one correct answer and no room for interpretation.
  • Put the fill-in-the-blank questions towards the end of the assessment because they take more effort; if you start with them, odds are people will skip them, move to the multiple-choice ones, and forget to come back.
  • When testing online, use the diminishing response technique: at the end of the test, the learner can see which questions they got wrong. Instead of showing the correct answer, give them the option to go back and try again after a short revisit of the learning material (also illustrated below).
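
Two of these tips lend themselves to a quick illustration. The Python sketch below shuffles each question’s options so the correct answer’s position varies, and implements a simple version of the diminishing response technique: missed questions are flagged and re-asked after the learner revisits the material, without the key ever being shown. The Question class and the function names are hypothetical, not any LMS’s API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Question:
    stem: str
    correct: str            # text of the correct answer
    distractors: list[str]  # texts of the wrong answers
    options: list[str] = field(default_factory=list)

    def shuffle_options(self) -> None:
        """Randomize option order so the correct answer moves around."""
        self.options = [self.correct] + self.distractors
        random.shuffle(self.options)

def ask(question: Question) -> bool:
    """Present one question with lettered options; return True if correct."""
    question.shuffle_options()
    print(question.stem)
    letters = "abcd"[: len(question.options)]
    for letter, option in zip(letters, question.options):
        print(f"  {letter}) {option}")
    choice = input("Your answer: ").strip().lower()
    if len(choice) != 1 or choice not in letters:
        return False
    return question.options[letters.index(choice)] == question.correct

def run_quiz(questions: list[Question]) -> None:
    """Diminishing response: re-ask missed questions after a revisit,
    never revealing the correct answers."""
    missed = [q for q in questions if not ask(q)]
    while missed:
        print(f"\nYou missed {len(missed)} question(s). "
              "Revisit the related material, then try again.")
        input("Press Enter when ready...")
        missed = [q for q in missed if not ask(q)]
    print("All questions answered correctly.")

quiz = [Question(
    stem="“The objective should be reached by the end of the second quarter.” "
         "Which part of the SMART acronym does this statement match?",
    correct="Time-oriented",
    distractors=["Attainable", "Measurable", "Specific"],
)]
run_quiz(quiz)
```

Because the options are re-shuffled on every retry, learners can’t simply remember a position; they have to re-engage with the material.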

Closing thoughts

Evaluating corporate learning programs should be more than checking a box on the instructional designer’s to-do list. The results need to be relevant and ready to analyze so they can generate better ideas. Instructors need a more accurate picture of how a training program is doing. All of this starts with asking better questions.
