
Best practices on measuring the impact of organizational learning

Learning measurement is one of those topics that seems to need constant revisiting. Organizations are transforming, and so are their demands of L&D: programs have to evolve, innovate, and be tailored to the specific needs of an ever-changing audience.

The Kirkpatrick model still stands as a beacon in this sea of continuous renewal, but there is a pressing need for a different approach to measuring everything from engagement to the impact of training programs. Much of corporate learning has moved online; even something as personal as one-on-one coaching is often done via a communication app between people in different geographical areas.


Read more: How many types of mentoring are there?


As a result, the assessment of L&D interventions should also tap into the potential that digitization offers and make things simpler.

Learning evaluation needs to be simpler

It’s ironic, to some extent, that for all the developments in e-learning (bite-sized units, AR, VR, xAPI, LMSs), what learning measurement really needs is to become simpler: a natural process carried out over a longer period of time, instead of a one-time calculation that produces numerous charts, graphs, feedback samples and compared scores that few actually know how to read and piece together into a clear view.

The steps taken in the evaluation process should be logical, repeatable and sustainable in the long run. Ultimately, this will lead to a learning culture that is data-driven. Making things simpler does not mean that the results will be shallow, unreliable or downright invalid. The point is to make it easier for those who need the data to feel confident using it.


Read more: 4 Great tips for developing a learning culture


Evaluations have to be optimized

The first step in making learning measurement simpler yet more relevant is to optimize evaluations. Going through the questions and keeping only those that yield important insights is highly advisable.

The questionnaire should comprise ten items or fewer, focusing on value and usefulness rather than on training quality. A helpful tip is to keep these evaluations standard, with as many questions as possible applying to all learning interventions and only a few left for customized inquiry.

Having standardized questions leads to coherent results that are easily compared and turned into relevant reports. It also establishes a common language among everyone directly involved, helping avoid misunderstandings and long meetings aimed at getting everybody on the same page.

Fewer questions also increase the chances that respondents answer everything truthfully instead of just checking boxes and putting down words to finish sooner.
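
To make this concrete, here is a minimal sketch of what such a standardized form could look like as a data structure. The question texts, the five-point scale and the ten-item cap check are illustrative assumptions, not a prescribed format:

```python
# A minimal sketch of a standardized evaluation form: a shared core of
# questions reused across all programs, plus a small slot for custom items.
# Question texts and the 5-point scale are illustrative assumptions.

STANDARD_QUESTIONS = [
    "I can apply what I learned to my day-to-day work.",
    "The program was a good use of my time.",
    "I would recommend this program to a colleague.",
    "The content was relevant to my role.",
]

def build_evaluation(program_name, custom_questions=()):
    """Combine the shared core with at most a few program-specific items."""
    questions = STANDARD_QUESTIONS + list(custom_questions)
    if len(questions) > 10:
        raise ValueError("Keep the form at ten items or fewer.")
    return {
        "program": program_name,
        "scale": "1 (strongly disagree) to 5 (strongly agree)",
        "questions": questions,
    }

# Example: one customized item on top of the standard core.
form = build_evaluation(
    "New manager onboarding",
    custom_questions=["I feel more confident running one-on-one meetings."],
)
```

Keeping the core in one place means every program's results line up question by question, which is exactly what makes the reports comparable.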

Conditional questions bring valuable data

Once the standardized form is done, one to three conditional questions should be built in to get relevant information about key programs. These quantify impact by targeting specific attitudes and behaviors and measuring the extent to which they have changed.

If, for instance, the company has set an objective concerning employee satisfaction and engagement, adding evaluation items that ask how committed and happy employees are after participating in a specific learning intervention will give a good appraisal of those states.
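
As a sketch of how that conditional logic could work, extra items can be attached only to programs tagged with the relevant objective. The tags and wording below are assumptions for illustration:

```python
# Hedged sketch: attach one to three conditional items only to key programs.
# The program tags and question texts are illustrative assumptions.

CONDITIONAL_ITEMS = {
    "engagement": [
        "Since completing this program, I feel more engaged at work.",
        "I feel more committed to staying with the company.",
    ],
}

def questions_for(program_tags):
    """Return the extra items triggered by a program's tags (max three)."""
    extra = [q for tag in program_tags for q in CONDITIONAL_ITEMS.get(tag, [])]
    return extra[:3]

print(questions_for(["engagement"]))
```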

It works even better if there is an LMS with an xAPI extension. This way, further data can be gathered by looking at the overall user experience. Correlating this insight with the evaluation answers results in a very clear and vivid picture of precisely the aspects that interest stakeholders.
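
Here is a minimal sketch of what gathering that data could look like, assuming a standard xAPI Learning Record Store (LRS) and the Python requests library; the LRS URL, credentials and activity ID are placeholders, and counting statements per learner is just one rough activity signal that could then be correlated with the evaluation scores:

```python
# Hedged sketch: pull statements from an LRS via the standard xAPI
# statements endpoint and aggregate a simple per-learner activity signal.
# The LRS URL, credentials, and activity ID are placeholders.

import requests

LRS_ENDPOINT = "https://lrs.example.com/xapi"  # placeholder URL
AUTH = ("username", "password")                # placeholder credentials

def fetch_statements(activity_id):
    """Query the LRS for statements about one learning activity."""
    response = requests.get(
        f"{LRS_ENDPOINT}/statements",
        params={"activity": activity_id},
        headers={"X-Experience-API-Version": "1.0.3"},
        auth=AUTH,
    )
    response.raise_for_status()
    return response.json()["statements"]

def activity_by_learner(statements):
    """Count statements per learner as a rough engagement signal."""
    counts = {}
    for s in statements:
        actor = s["actor"].get("mbox", "unknown")
        counts[actor] = counts.get(actor, 0) + 1
    return counts
```

Joining these counts with the evaluation answers per learner is what produces the combined picture described above.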


Read more: Building a learning ecosystem that works with xAPI


It’s important to make results accessible to the stakeholders

On the subject of stakeholders, there is a trend in organizations to make reports and presentations that look and sound almost as if they were taken out of a sci-fi movie. They may very well make sense to the L&D people (though I am one and can say in all honesty that some slides have left me in utter darkness), but for senior managers with packed schedules and very little time, these intricate workings just don’t cut it.

Making it all simpler by selecting a few basic metrics, each showing a specific result compared against an explicit goal and the clear gap between the two, will be a lot more appreciated.

Color coding is also a powerful tool, as it helps visually, making it very clear whether a metric meets, exceeds or falls short of the objective. A visual of the trend, making it easily apparent whether it has been improving or declining, could also prove very useful.
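
As a sketch of how each metric could be boiled down to exactly that, value versus goal, a status color and a trend, the thresholds and sample data below are assumptions for illustration:

```python
# Hedged sketch: reduce each metric to value vs. goal, a status color,
# and a trend direction. Thresholds and sample data are assumptions.

def metric_summary(name, value, goal, previous):
    if value >= goal:
        status = "green"   # meets or exceeds the objective
    elif value >= 0.9 * goal:
        status = "yellow"  # close: within 10% of the objective
    else:
        status = "red"     # falls short
    trend = "up" if value > previous else "down" if value < previous else "flat"
    return {"metric": name, "value": value, "goal": goal,
            "status": status, "trend": trend}

# Example: completion rate against an 85% goal, up from last quarter's 81%.
print(metric_summary("Completion rate (%)", 88, 85, 81))
```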

All in all

Measuring the impact of organizational learning is important. It shows L&D specialists what they are doing right and where programs fall short, and it gives stakeholders a sense of how much progress is being made and in what areas. However, more time should be invested in optimizing the process than in reporting. Trimming down the dozens of reports and questions will lead to a much simpler yet highly relevant way of scaling organizational learning.
