In my last post in this series, I wrote about trust (or a lack thereof) as a motivation for organizations producing and/or requiring measurements of training based on learner knowledge or volume of completions. In this post, we’ll take a look at the evolution of measurement systems and how it has led to our current state.

Evolution of Measurement

We are measured our entire lives, starting before we are even born. Height, weight, volume, girth, and length are all metrics that doctors and our parents use to label us as “normal” when compared to a set of standards. For the most part, all of these measurements are well and good, and they can serve as indicators of our health.

Eventually, we get bundled up and sent off to school, where all of a sudden the measurements aren’t necessarily about our health but rather serve as a comparative ranking, against a set of standards, of our ability to retain and occasionally apply knowledge. These rankings go down on our “permanent record” and follow us as indicators of readiness and aptitude. For better or for worse, this measurement system is used throughout the duration of our education and is sometimes a factor in deciding whether or not we get a job.

And then a lot of it stops.

Corporations have little use for ranking the knowledge or knowledge capacity of the people who work there. People are brought in to do a job and achieve something that contributes to that company reaching its business objectives—making money.

What workers know is secondary to what they do.

The application of that knowledge to achieve real world results is what really counts.

However, no one really thinks that workers come ready-made with all the knowledge or skills they will ever need. So there has to be some kind of mechanism to supply that knowledge when it is missing. That missing knowledge is what we fondly call a “learning gap.” Of course, personal and professional development is recognized as an irrefutable need, since there’s a high correlation between personal development and the likelihood of people being exemplary producers. When we find a learning gap, our knee-jerk reaction is to fill that gap with training and assume that knowing will equate to doing.

Filling the Learning Gap vs. Measuring Performance

The metaphorical issue with the term “learning gap” is that it describes an opportunity or need as a hole or chasm that needs to be crossed. Metaphorically, there are three ways to deal with a hole or chasm: fill it, build a bridge over it, or go around it. In a performance-focused sense, none of these metaphorical solutions is the right answer to the problem. We don’t want to go over, around, or through; we want a behavior that clearly demonstrates that the opportunity or need no longer exists.

How do you measure something that doesn’t exist?

It’s much easier to measure how deep a hole is or how far it is across, so those are the kinds of systems we have developed to measure corporate learning. Since 1975 (or 1959, depending on how you measure it), the Kirkpatrick model has been the most widely accepted standard for measuring the effectiveness of these efforts, with its four levels of measurement:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

However, there has recently been a groundswell of rejection of the Kirkpatrick model as a sole measurement methodology because it presumes a learning event as its starting point. These grumblings were heard recently at both the CLO Symposium and Learning 2011 conferences, and in the writings of thought leader Dan Pontefract, who wrote what I consider the defining article on the Kirkpatrick model problem in the February 2011 issue of Chief Learning Officer magazine, a stance he further qualified on his blog a short time later. The basic premise is that effective learning is not an event and cannot be disconnected from on-the-job performance; therefore, it cannot be measured on its own, outside of a performance system.

That’s not to say that the model has never had value. Level 4 of the model, the Results level, clearly links performance to learned behavior, but it ties those results and behavior to a measured learning event rather than to the culmination of an experience, which should include the influence of factors beyond the learning event itself. Even if we did apply the model to a grouping of formal learning events, it would do very little to help us evaluate the effectiveness of the individual pieces or of the informal learning that takes place, regardless of whether informal learning was a planned part of the experience. There are just too many other factors, in addition to learning, that contribute to an individual’s ability to achieve something of value to a business or an organization.

It would be easy at this point to raise a rallying cry for new measurement standards, ones that are true indicators of performance, but most organizations already have ways of measuring how they are performing; they just need to find ways to apply those measurements to individual contributors and tie doing things to measurable performance.

There are a select few legitimate needs to measure the delivery of training linked to legal requirements or legal exposure, which organizations often refer to as compliance training. However, it’s easy to fall into the trap of imagined compliance. In the next installment in this series on measurement, we’ll explore legitimate versus imagined compliance and how to differentiate between them.