Organization Horsepower

Thinking Like a Motorcycle Racing Team


Measure Twice, Cut Once

Cigar box guitar

My biggest hobby outside of work is building musical instruments. I don’t have a woodworking or lutherie background. In fact, I’ve never taken a single class on either, and my dad is more shade-tree mechanic than woodworker. So my instruments are generally pretty primitive.

I started building instruments out of cigar boxes. The practice of building instruments from found objects is nothing new; in fact, there is a long history of improvising to create something you couldn’t otherwise afford, for less than the cost of acquiring it. If you don’t have anything good to work from, use what you have and improvise.

While this is a rewarding hobby for me, and it keeps me sharp in a lot of areas of my business life, my HR clients are not dealing with “found objects.” They don’t need to, and can’t afford to, improvise on their talent to meet the needs of their companies. They are more mature in the practice of HR than a proverbial cigar box guitar.

The thing about learning slowly, through experience alone, is that while it takes a while, the journey is fairly rewarding. But the costs of getting that experience are astronomical. Part of that cost is time, some of it is materials, tools have been a major expense, and some of it is lost to mistakes. As my skills have improved, the instruments I build take more time, and the cost of the materials I use has escalated rapidly.

Maple guitar

Measure twice, cut once is an over-used cliché, especially when we are talking about ruining a 99 cent 2×4 that is part of the unseen interior hidden by drywall. But when I’m using a $100 set of spalted curly maple (as seen in the picture to the right), that over-used cliché suddenly means something. I measure multiple times from multiple directions because there are real consequences if I make a mistake. But it’s more than the cost of wood. That piece of wood directly determines how the instrument sounds, how it plays, and how attractive the end result is. It also determines, if I choose to sell it, how much I can sell that finished instrument for.

Companies spend more on their people than on any other line item, yet too many of them treat people like a 99 cent 2×4 and not like a beautiful, one-of-a-kind set of wood that directly affects their profitability. That’s not to say they treat those employees poorly; it’s that they fail to measure, let alone twice, that person’s real value to the organization.

This is also more than a “cut” metaphor; this isn’t about “staff reduction” as much as it’s about being smart about how people are applied to the end result.

I want to build better guitars. If you want a better HR function, measurement that means something needs to be part of your approach. You will never improve without it.

 

Why Measure HR?

Think about this for a second. Are you measuring HR? What are the measurements you are tracking?

If you aren’t measuring HR, why would you want to start?

Many companies are trying to track employee satisfaction with HR services or transactions. Even more are capturing service volumes. Those are excellent measures if you are trying to diagnose the efficiency of a specific function or process for targeted improvement, but do you send those numbers to your boss? Do those numbers reach the C-suite?

The brutal truth is that the only reason you would ever send volume or satisfaction numbers up the line is to justify your own existence, to “prove” you are doing work of value. The problem with that approach is that it is very transparent in an “emperor has no clothes” sort of way. If you are fighting to justify yourself, it casts a shadow of doubt on your numbers. It does nothing to show the value HR brings to the business. It doesn’t matter how insightful your take on the numbers is, the business has no reason to trust you.

If you are going to measure HR, and you really should, you need to pick metrics that speak directly to the function of the business. Or at bare minimum, ones that can be directly correlated to a business measure. An employee being happy with a process is valuable to that process, but the business wants to know if that happiness made the company more profitable.
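To make that correlation idea concrete, here is a minimal sketch in Python. It computes the Pearson correlation between a hypothetical HR metric (quarterly retention rate) and a hypothetical business measure (revenue per employee). Every number and name here is invented for illustration; the point is simply that an HR number becomes interesting to the business only when you can line it up against a business number.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical quarterly figures: an HR metric (retention rate, %)
# alongside a business measure (revenue per employee, $K).
retention = [88, 90, 85, 92, 91, 87]
revenue_per_employee = [210, 225, 200, 240, 232, 205]

r = pearson(retention, revenue_per_employee)
print(f"correlation: {r:.2f}")
```

A strong positive correlation doesn’t prove causation, but it gives the C-suite a reason to care about the HR number at all.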

If you’re measuring the wrong things for the wrong reasons, stop. You are part of the problem by adding costs (labor) to something that undermines your credibility and at the end of the day isn’t helping your company be better.

Measurement, Part III: Measurement as Evidence

The Legitimate Need for Measurement as Evidence

I keep coming back to a quote from President Clinton at Learning 2011:

“If you already know the truth, you don’t need the evidence.”

He was using that in context of a political topic, but I really think it’s applicable to measurement. If trust isn’t the real issue, and if performance and business results are the “truth” we are seeking, and if we can prove those business-related results, do we really need “evidence” that training—either as learning events or as a continuous and integrated process—got us there?

If the purpose of our occupation is to make our companies better, to improve performance, then the primary measurement should be whether or not our businesses are in fact becoming better. In either case, our evidence should ultimately be based in the “proof” that our business objectives are being met.

While this is true in most situations, there is the possible exception of compliance training, where there is a legitimate need to prove learner participation and present it as evidence. However, there is a real danger in perpetuating what I call pseudo-compliance courses, where compliance is mandated but not linked to any regulatory need or to real business drivers or goals.

Compliance vs. Pseudo-Compliance: What’s the Difference?

“But…,” you say. “I have course XYZ that I HAVE to make sure everyone takes.”

This is the classic compliance model. The notion here is:

  1. There are organizations that are legally mandated to provide a training event and must prove that employees completed it.
  2. There are organizations whose legal exposure would be unreasonably high if they could not prove that their employees completed a training event.
  3. There is a strong feeling that training creates a real, actionable alignment between a body of knowledge and the day-to-day behavior of employees.

Clearly, items 1 and 2 happen in real life and pass as legitimate reasons for measuring compliance. Item 3, however, falls short, since it is not linked to a measurable business goal or driver. It’s that simple. That doesn’t mean the training isn’t important or that you shouldn’t do it, but you may not need the evidence to back it up.

The Danger of Pseudo-Compliance

The biggest danger in measuring compliance, or gathering evidence of compliance, comes from tracking things as “compliance” that do not meet the criteria. It’s easy to incorrectly identify a training event as being either legally necessary or subject to unreasonable legal exposure. These pseudo-compliance courses or events, if allowed to persist, will:

  • waste your time and resources
  • perpetuate poor impressions of formal training
  • provide cost justifications for systems and processes that do not contribute to your company’s business objectives

It’s perfectly reasonable to set an expectation that employees participate in a pseudo-compliance course, but there are generally ZERO measurable returns on that activity or event. Measuring compliance does, however, have a measurable cost in terms of systems and labor.

The most common pseudo-compliance courses I see are built around philosophical topics. Sure, there are ethical issues with concrete actions and legal repercussions that are legitimate candidates for measuring compliance, but I’m talking about philosophy here in the sense of asking or expecting an employee to believe or think a certain way: topics like integrity or honesty. You can give examples of someone acting the way you want your employees to act, but that isn’t measurable in the business. Lack of compliance with a mandate for honesty or integrity is typically grounds for dismissal. What does it matter that you have evidence of the training event when this type of mandate is violated?

Legal Compliance

Assuming that the training you wish to track is legally required, or implied as such, it’s reasonable to assume that the legislation defining the requirement is strongly linked to the financial, personal, or civil liberties of persons who work with or for, or come into contact with, your corporation. The premise is that it is in the best interest of your company and the public to comply with the legislation. The rebel in me would love to argue against the idea that all legislated training is needed, but the fact remains that it is a reality of business that there are legal requirements that make compliance necessary.

Assuming for a second that legislation is good and there is a public interest or common good in our compliance, isn’t that something we should want to do regardless? After all, aren’t we as individuals party to the laws of our land? Therefore, the training we do should be such that we not only comply with the law, but also ensure that our behavior is such that we never violate the intent of the law or requirement.

It’s easy enough to leverage an LMS to prove 100% compliance in the eyes of a legal requirement, but the true measure of success is that we have zero violations in our business practice. Thus, our performance measurement is zero or our compliance measurement is 100%. Which measurement is more important?
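As a back-of-the-envelope illustration of the two measurements being contrasted here, this Python sketch computes a compliance rate from hypothetical LMS completion records and a violation count from hypothetical incident records. All names and data are invented for illustration:

```python
# Hypothetical roster and records (all names are invented).
employees = ["ana", "ben", "carla", "dev"]
completed_training = {"ana", "ben", "carla", "dev"}  # LMS completion records
violations = []                                      # incidents logged this period

# Compliance measurement: share of the roster with a completion record.
compliance_rate = len(completed_training & set(employees)) / len(employees)

# Performance measurement: how many violations actually occurred.
violation_count = len(violations)

print(f"compliance: {compliance_rate:.0%}")  # easy to prove via the LMS
print(f"violations: {violation_count}")      # the measure that actually matters
```

Both numbers are trivial to compute; the argument above is about which one you treat as success.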

By definition, legally mandated training is a cost center. We have a responsibility to manage expenditures and be efficient with our company’s spending, but that should never interfere with our performance measurement of meeting our legal obligations.

Legally-Compelled Compliance

Now I’d like to move on to the scenario in which we are legally compelled to provide a compliance measurement, but it is not legislatively mandated. This is a cost-avoidance mechanism. We are, in principle, agreeing to invest in learning in exchange for a reduced or minimized cost should legal action occur at some unknown future date. But let’s be honest with ourselves: legal liability arises from a tort. Under tort law, companies are held liable for the actions someone commits while acting as a representative of the company. The grounds for a tort action vary, but can generally be attributed to:

  1. Negligence—lack of knowledge or insight that an action performed on the part of the company could cause damage to another party
  2. Intent—purposeful gains realized by a person, persons, or the company itself at the expense of another party

Much like legally required compliance training, legally-compelled training may have a compliance measurement that could be used in defense of legal action, but the true measure of success is once again that zero actionable behaviors are committed by individuals acting on behalf of our company.

To avoid negligence, we must make sure that people know better, and more importantly, that their actions and behavior reflect that knowledge. When you know better and act in defiance of, or without deference to, that knowledge, that is the definition of intent. In either case, to truly realize the cost avoidance, you must have evidence of compliance, yet your obligation doesn’t stop there. Ultimately, performance is the real measure of success, not compliance.

What Not to Measure

The problem with cost avoidance as a measuring stick is that there is no guarantee that the expense you try to avoid would ever have materialized had you taken no action at all. It’s just a possible expense you may have incurred down the road. There is no direct link to sustainable profitability unless you can say with certainty that you had a consistent, if not fixed, expense that you incurred at a defined level that will no longer be incurred or, at least, will now be incurred at a reduced level. There is no real ROI—only an imagined or implied ROI.

Looking at compliance training as a whole, there is a real business requirement, if not a legal requirement, to measure compliance with prescribed formal training events. But that shouldn’t be our justification for creating, maintaining, or supporting those formal training events. And by no means should training compliance itself be a measure of effectiveness upstream in your organization.

At the end of the day, quarter, and fiscal year, the list of training events that we gather evidence of participation on should be as small as possible. This evidence has value, but only as a vague measurement of possible cost avoidance. If we want to actually measure the effectiveness of that training, then the measuring stick needs to be performance based and evidenced by a LACK of adverse occurrences.

 

Measurement, Part II: The Evolution of Systems

In my last post in this series, I wrote about trust (or a lack thereof) as a motivation for organizations producing and/or requiring measurements of training based on learner knowledge or volume of completions. In this post, we’ll take a look at the evolution of measurement systems and how it has led to our current state.

Evolution of Measurement

We are measured our entire lives, starting before we are even born. Height, weight, volume, girth, and length are all used as metrics for doctors and our parents to label us “normal” against a set of standards. For the most part, these measurements are well and good, and can serve as indicators of our health.

Eventually, we get bundled up and sent off to school, where all of a sudden the measurements aren’t about our health but are instead a comparative ranking of our ability to retain, and occasionally apply, knowledge against a set of standards. These rankings go down on our “permanent record” and follow us as indicators of readiness and aptitude. For better or worse, this measurement system is used throughout our education and is sometimes a factor in deciding whether or not we get a job.

And then a lot of it stops.

Corporations have little use for ranking the knowledge or knowledge capacity of the people who work there. People are brought in to do a job and achieve something that contributes to that company reaching its business objectives—making money.

What workers know is secondary to what they do.

The application of that knowledge to achieve real world results is what really counts.

However, no one really thinks that workers come ready-made with all the knowledge or skills they will ever need, so there has to be some mechanism to supply that knowledge when it is missing. That’s what we fondly call a “learning gap.” Of course, personal and professional development is recognized as an irrefutable need, since there’s a high correlation between personal development and the likelihood that people become exemplary producers. When we find a learning gap, our knee-jerk reaction is to fill it with training and assume that knowing will equate to doing.

Filling the Learning Gap vs. Measuring Performance

The metaphorical issue with the term “learning gap” is that it describes an opportunity or need as a hole or chasm that needs to be crossed. Metaphorically, there are three ways to deal with a hole or chasm: fill it, build a bridge over it, or go around it. In a performance focused sense, none of the metaphorical solutions are the right answer to the problem. We don’t want to go over, around, or through; we want a behavior that clearly demonstrates that the opportunity or need no longer exists.

How do you measure something that doesn’t exist?

It’s much easier to measure how deep a hole is, or how far it is across, so those are the kinds of systems we have developed to measure corporate learning. Since 1975 (or 1959, depending on how you measure it), the Kirkpatrick model has been the most accepted standard for measuring the effectiveness of these efforts, with its four levels of measurement:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

However, there has recently been a groundswell toward rejecting the Kirkpatrick model as a sole methodology for measurement, because it often presumes a learning event as the starting point. These grumblings were heard recently at both the CLO Symposium and Learning 2011 conferences, and in the writings of thought leader Dan Pontefract, who wrote what I consider the defining article on the Kirkpatrick model problem in the February 2011 Chief Learning Officer Magazine—a stance he further qualified on his blog a short time later. The basic premise is that effective learning is not an event and cannot be disconnected from on-the-job performance; therefore, it cannot be measured on its own, outside of a performance system.

That’s not to say the model has never had value. Level 4—the Results level—clearly links performance to learned behavior, but it ties those results and behaviors to a measured learning event, not to the culmination of an experience, which should include the influence of factors beyond the learning event itself. Even if we applied the model to a grouping of formal learning events, it would do little to help us evaluate the effectiveness of individual pieces, or of the informal learning that takes place whether or not it was a planned part of the experience. There are simply too many factors, beyond learning, that contribute to an individual’s ability to achieve something of value for a business or organization.

It would be easy at this point to form a rally cry for new measurement standards—ones that are a true indicator of performance—but most organizations already have ways of measuring how they are performing; they just need to find ways to apply those measurements to individual contributors and tie doing things to measurable performance.

There are a select few legitimate needs to measure the delivery of training linked to legal requirements or legal exposure—what organizations often call compliance training. However, it’s easy to fall into the trap of imagined compliance. In the next installment in this series on measurement, we’ll explore legitimate versus imagined compliance and how to differentiate between them.

Measurement, Part I: Trust

Every conference I’ve been to in the past year… scratch that. Every conference I’ve EVER been to has had a major focus on measurement. There have been various measurement trends through the years, but recently I’ve seen some shifts that make me hopeful that corporations may actually make some progress in making and taking measurements that actually matter.

This will be the first in a series of blog posts exploring different aspects of measurement—including the importance of trust, motivation, compliance, shifting to business-based measurement, individual measurement, and measurement and its role in budget negotiations.

First up: let’s talk about the importance of trust.


Far be it from me to hold back on how I really feel about something. So, here goes:

Measuring training as a justification for training is an utter waste of time.

It’s like giving style points to the 50-yard dash. It may be interesting, but the only thing that matters is who crossed the finish line first. In other words, the performance or result mattered; the style in which it was achieved is barely noteworthy. Yet, when you measure training in and of itself, that’s exactly what is happening.

I think Charles H. Green hits it on the head with this quote from his blog:

“The ubiquity of measurement inexorably leads people to mistake the measures themselves for the things they were intended to measure.”

Why do we keep using measures instead of actual performance as justification to ourselves and our organizations? The answer to that question in many cases is rooted in why we are asked to measure training in the first place… that is, to prove that it has some kind of meaningful, measurable impact on the organization’s results.

Many of our organizations do not believe that training, as it is currently defined, has a positive impact, or they do not trust that you or your immediate organization can execute learning in an impactful way. The requirement for measurement comes from a place of distrust, not from a defined need to measure results. Consequently, measurement is demanded to “prove” training works. Trust is not improved by this exercise, but regardless, time and effort are spent generating measurements that don’t really tell us anything about the business.

It is not my intent to write a primer on the effects of trust in business; Stephen M.R. Covey has done a good job of that in his book The Speed of Trust and the follow-up Smart Trust. The point is that a lack of trust strains our relationships and results in demands for volume-based measurements intended to justify the existence of training in an organization. It’s a closed loop with no obvious business value. That’s why old-school training departments are usually viewed as cost centers, not as strategic business partners.

So how do we as learning and performance improvement professionals earn trust and show that learning systems are effective and worthwhile without volume (i.e. number of butts in seats) or knowledge-based metrics?

Before we go there, we need to understand how measurement evolved to this state, and how the systems we maintain perpetuate meaningless measurements. I’ll leave that for the next blog post, so stay tuned.