Organization Horsepower

Thinking Like a Motorcycle Racing Team


Measurement, Part IV: Four Characteristics of Measurable Performance Improvement

In my last blog post, Measurement as Evidence, we looked at when it is necessary to measure training, and the dangers of creating pseudo-compliance courses that take our time and attention away from actual performance. Fortunately, the vast majority of learning that happens and needs to happen in our organizations is not subject to legal requirement, nor are we legally compelled to track compliance with individual events. Since measures of compliance don’t help us determine business results, we can stop chasing measurement based on the volume of training delivered.

“What’s that? You mean we don’t have to track training numbers anymore?”

No.

So where does that leave us on measurement? How do we measure performance improvement for the organization? Before we are able to measure our efforts toward improvement, we need to make sure those efforts embrace these four core characteristics:

  1. Aligned — We start by aligning ourselves to our business. Too many people in HR and training see what they do as a cost center disconnected from the day-to-day operations of the business. The Kirkpatrick model lets us perpetuate that separation by giving us a measurement system that treats training as something disconnected, when in fact it is usually only one of several factors that lead to meaningful performance. The only measures we should use are the same measures we use to determine whether the business is successful. That means, first and foremost, profitability. As part of an overall solution mix, learning systems can help build real performance improvement once learning objectives are linked with performance objectives that have a direct “line of sight” to business performance measures (a toy sketch of this chain follows the list). This means we have to start with defined business metrics and make sure we have provided a performance environment that maximizes each person’s ability to meet those needs. This type of total alignment helps further align our performance management and career development processes toward performance improvement.
  2. Identified Performance Improvement Factors — The role of learning in organizations is drastically changing. It’s no longer our job to simply pick out knowledge gaps and develop content that will fill those gaps. In an aligned state, we look at places we want the business to improve; we identify the performance factors and curate the solution.
  3. Integrated, Continuous, and Connected Experiences — When it becomes clear to us that training events are not the panacea (they never were) and that content in context is where we add value, we can create contexts that are meaningful to individuals in focused ways; we can then build environments that enable performance improvement. This includes solutions like cohort systems and portals that are about more than just learning.
  4. Agile — We need to embrace the need for constant change. Business needs continually change, so our new role is to stay aligned and be agile enough to change with them. The days of multi-month engagements to create large, formal training offerings are gone, or at least greatly reduced.
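
To make the “line of sight” idea in item 1 concrete, here is a minimal TypeScript sketch. It is not a prescribed model, and every name in it is hypothetical; the point is simply that a learning objective with no traceable path to a business metric should raise a flag.

```typescript
// Hypothetical types illustrating the "line of sight" chain:
// learning objective -> performance objective -> business metric.

interface BusinessMetric {
  name: string;          // e.g., "quarterly gross margin"
  current: number;
  target: number;
}

interface PerformanceObjective {
  behavior: string;       // observable, on-the-job behavior
  moves: BusinessMetric;  // the business measure that behavior affects
}

interface LearningObjective {
  description: string;
  supports?: PerformanceObjective; // undefined = no line of sight
}

// A learning objective we cannot trace to a business metric is a
// candidate for pseudo-compliance and deserves to be questioned.
function lineOfSight(lo: LearningObjective): string {
  if (!lo.supports) {
    return `${lo.description} -> (no performance objective; why does this exist?)`;
  }
  return `${lo.description} -> ${lo.supports.behavior} -> ${lo.supports.moves.name}`;
}
```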

If we are aligned with the business and can accurately identify the performance factors that contribute to business goals, our efforts to improve performance can be integrated, continuous, and connected.

If we live up to these core principles, then the evidence we need to measure those efforts is the same set of measurements we use to gauge the success of the business as a whole. Profitability, as measured by the business, is a very good and accurate measure of the relative success of integrated efforts toward performance improvement.

Proper execution of these four characteristics also has a dramatic and positive impact on trust, because it allows others in your organization to directly witness behaviors that confirm you are all paddling the same tributary in the same direction.

Five Attributes Your Mobile Sales App Must Demonstrate

It’s been a little while since I’ve written about mobile, but I committed this week to doing a talk at the eLearning Guild mLearnCon, June 19-21 in San Jose, so it’s top of mind. My talk is going to be about the design process we used in an actual development scenario for a client of ours. That particular application never saw the light of day, but it got me thinking about its components and why we were so passionate about including them.

Specific to the sales audience, we believe the following five attributes are critical to the successful adoption of a sales-focused mobile app. Your sales app must be:

  1. Relevant every day – On the surface, this means the content must be kept up to date. That is critical, but dig a little deeper and it also means the content must be applicable every day. It has to be more than learning about the future, tomorrow’s product, or the sale they might someday have. It has to help them do their job today.
  2. Focused on improving some aspect of the sales process – Sales has always been, and always will be, about more and faster. If your mobile sales app doesn’t help salespeople increase or accelerate the sale by letting them access more information faster from different places, or work with other people in your company more efficiently, they just plain won’t use it. In fact, they will find some other mobile app that will help them, or they will spend that time on some other activity, even if that’s Angry Birds.
  3. Integrated across sites and systems – In most companies, there is no lack of systems or internal sites designed to support the sales team. Even if mobile devices can get behind your firewall, chances are those systems and sites aren’t very usable on a mobile device, and you can’t expect your team to ferret them out and adapt while on the go. Your mobile app needs to be a combined interface to the sites and systems that are most critical to the sale (see the sketch after this list).
  4. Socially connected – Mobile technology at its core is about real-time communication. If your app doesn’t take advantage of this, you are missing something really important. Imagine situations where your app can help salespeople get answers in seconds, when it used to take hours.
  5. Intuitive – The reason Apple’s products are so pervasive is that Apple always puts the user experience first, even if that means a product doesn’t do everything you imagined. This isn’t a ringing endorsement of Apple by any stretch, especially for the enterprise, but the point is that it has to be better than easy. It has to be intuitive to the point where it feels automatic to the end user. Mobile apps do not have, nor should they need, training programs. They just work. You are better off not including a piece of functionality than including one that needs to be explained.
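
As a concrete illustration of attribute 3, here is a small TypeScript sketch of a single mobile-facing view assembled from several internal systems. Every endpoint and field name here is invented; the takeaway is the shape of the solution (one combined interface), not any specific API.

```typescript
// Hypothetical facade: one mobile call fans out to the internal systems a
// seller already depends on, and returns a single combined view.

interface AccountSnapshot {
  account: string;
  openOpportunities: number;  // from the CRM
  openSupportTickets: number; // from the support system
  currentQuote: string;       // from the pricing/quoting system
}

async function getAccountSnapshot(accountId: string): Promise<AccountSnapshot> {
  // Fetch from all three systems in parallel so the seller isn't waiting
  // on sequential round trips over a mobile connection.
  const [crm, support, pricing] = await Promise.all([
    fetch(`https://crm.example.internal/accounts/${accountId}`).then(r => r.json()),
    fetch(`https://support.example.internal/tickets?account=${accountId}`).then(r => r.json()),
    fetch(`https://pricing.example.internal/quotes/${accountId}`).then(r => r.json()),
  ]);

  return {
    account: crm.name,
    openOpportunities: crm.openOpportunities,
    openSupportTickets: support.openCount,
    currentQuote: pricing.currentQuote,
  };
}
```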

This isn’t intended to be a comprehensive design guide; just five things you need to consider. Of course, there are big differences between organizations and the processes that they use. Device strategy, even if it’s “Bring Your Own Device” or BYOD, also factors into design. However, if your app does these five things successfully, it will make a difference in the performance of your sales team.

Key Learning from the Gartner Portal, Content, and Collaboration Summit

Having just returned from the Gartner Summit, I thought a quick recap was in order. Besides, I need a break from my measurement series (Part I, Part II, Part III).

Things got started with Gartner VP and analyst Whit Andrews taking the stage carrying a shovel. My first thought was, here we go with another speech about breaking new ground, but I should have known better. His comment was, “A Gartner guy with a shovel, never a good thing.” It turns out the shovel was a prop for talking about Minecraft, an online game both he and his son play on a regular basis. Here’s a YouTube video of him previewing the shovel theme. It set the tone for talking about gamification, but the rest of the conference was decidedly higher level, focused on building and maintaining higher orders and evolutions of portals and content management.

Side note: While researching Whit Andrews, I found this hilarious text-to-speech YouTube diatribe.

Day 1: Portals & UXP

All of the sessions I attended were geared toward the portal end of the spectrum. Within the realm of portals, the major focus areas were user experience (UX) and mobile. Of course, there is a ton of subtlety and plenty of secondary topics; I won’t cover every session here, just the key sessions and perspectives.

My first session was Gene Phifer talking about User Experience Platforms or UXP. Any reader of the Media 1 blog knows this is a huge focus for us right now. The UXP definitely has a place in the corporate landscape and is a critical piece for creating alignment between people, process, and technology. Gene’s original article on UXP is a great overview of the session. For more of Media 1’s vision for how UXP can drive performance, see Chris Willis’s blog post “Social, Mobile, Integrated…UXP and Your Future Workforce.”

“Using Generation 7 Portals to Attract and Engage Customers” with Jim Murphy was up next. While this session was geared somewhat toward customer-facing portals, the same principles apply to employee or internally focused portals. With a lot of people I talk to, “portal” has become a bad word. In a lot of folks’ experience, portal technology has been purchased, implemented, and subsequently fallen short of expectations. What we have to realize is that portals have evolved and continue to evolve. The biggest shift I see is that portals are moving away from being company, department, or role specific and are getting personal. When you put the individual at the center of the design and surround that person with filtered and specific options, portals get a lot more compelling—and that’s the root of why Media 1 is bullish on UXP.

Jim Murphy describes the characteristics of Generation 7 portals as featuring:

  • Analytics
  • Portal-less Portals
  • Context aware
  • Portal ubiquity
  • Emerging UXP
  • Widgets dominate
  • Mobile dominates

Jim and Gene teamed up on another session later that afternoon called “Employee Portals: The Revenge of the Intranet” to address the employee-specific side of the portal equation, and aptly drew the connection that the employee portal is the heart of the new intranet. The changing ways that we work have driven the corporate intranet from information portal to knowledge portal, then to process portal, and now to the latest generation of intranet portal that includes social functionality and mash-ups of combined applications, and has a mobile-enabled interface. Social is the connective tissue that ties people to process and information—or people, process, and technology in the Media 1 vernacular. More than social, the new portal-based intranet (SharePoint portals included) also enables information management and process management, driving alignment to corporate goals. Gartner defines the must-have characteristics of the next-generation intranet as:

Current

  • Social
  • Business process enabled
  • Analytics and optimization

Near future

  • Mobile
  • Context aware
  • Gamified

Wish list

  • Enable user content contribution
  • “App store” model

Day 2: Gamification & More

Day 2 of the conference started out with Jane McGonigal’s compelling presentation “Reality is Broken,” on the role of game play in our society and, consequently, how we can use those principles when addressing the needs of our organizations. Jane’s presentation was based on her book Reality Is Broken: Why Games Make Us Better and How They Can Change the World. While I’m not always a fan of gamified approaches to development, I must admit this presentation has me rethinking some of my perspective. One of the key advantages of gaming is the resiliency gained from repeated trial and error—and sometimes subsequent failure. I think that’s why a lot of people find golf compelling, and certainly the incredible volume of Angry Birds play would seem to reinforce that supposition. It seems to me that many corporate gamified approaches take a “failure is not an option” position, and with the absence of failure, or even the possibility of failure, you miss out on the resiliency benefit and ultimately the engagement that the “game” was intended to create. I’m going to do a future blog post on this topic alone. There are some very compelling distinctions between playing, competing, achievement, failure, and losing that deserve to be explored.

I spent the afternoon of Day 2 exploring sessions on UI/UX, people-centered strategy, Yammer, and mobile. Not that these sessions weren’t valuable or intriguing, but I have to assume you have something else planned for your day besides reading my blog.

I finished up the conference with a case study from GlaxoSmithKline about their cloud-based employee portal solution, which happened to be a SharePoint 2007 site. While many of their experiences and concerns aligned with my other clients’ SharePoint installations, the fact that the SharePoint farm existed in the cloud created very few, if any, new concerns. Instead, it appeared to deliver considerable value to GlaxoSmithKline, along the lines of the value proposition of other cloud services.

The final keynote of the conference was from Seth Godin. It was engaging and inspirational, to say the least. While many of his examples were about how to create compelling messaging, or “purple cows” as he calls them, it’s not much of a stretch to see how that relates to how we market performance improvement to our own organizations. It’s more than positioning; it’s the stories we tell.

In summary, the conference served as excellent confirmation that I’m talking about the right things with my clients, and in the right priorities. It also taught me that I have a different perspective to bring to gamification, and a new understanding of what it takes to make these games compelling. And finally, not that I needed any more convincing: we need to find ways of engaging employees on mobile platforms in meaningful ways. Our obstacles are just that, obstacles; mobility is critical to the new work environment.

Measurement, Part III: Measurement as Evidence

The Legitimate Need for Measurement as Evidence

I keep coming back to a quote from President Clinton at Learning 2011:

“If you already know the truth, you don’t need the evidence.”

He was using that in the context of a political topic, but I really think it’s applicable to measurement. If trust isn’t the real issue, and if performance and business results are the “truth” we are seeking, and if we can prove those business-related results, do we really need “evidence” that training—either as learning events or as a continuous and integrated process—got us there?

If the purpose of our occupation is to make our companies better, to improve performance, then the primary measurement should be whether or not our businesses are in fact becoming better. In either case, our evidence should ultimately be based on “proof” that our business objectives are being met.

While this is true in most situations, there is the possible exception of compliance training, in which there is a legitimate need to prove learner participation and present it as evidence. However, there is a real danger in perpetuating what I call pseudo-compliance courses, where compliance is mandated but linked to neither a regulatory need nor a real business driver or goal.

Compliance vs. Pseudo-Compliance: What’s the Difference?

“But…,” you say. “I have course XYZ that I HAVE to make sure everyone takes.”

This is the classic compliance model. The notion here is:

  1. There are organizations that are legally mandated to provide a training event and must prove that employees did observe the event.
  2. There are organizations whose legal exposure will be unreasonably high if they cannot prove that their employees observed a training event.
  3. There is a strong feeling that training creates a real, actionable alignment between a body of knowledge and the day-to-day behavior of employees.

Clearly, items 1 and 2 happen in real life and pass as legitimate reasons for measuring compliance. However, item 3 falls short, since it is not linked to a measurable business goal or driver. It’s that simple. That doesn’t mean it’s not important or that you shouldn’t do it, but you may not need the evidence to back it up.

The Danger of Pseudo-Compliance

The biggest danger in measuring compliance, or gathering evidence of compliance, comes from tracking things as “compliance” that do not meet the criteria. It’s really easy to incorrectly identify a training event as being either legally necessary or subject to unreasonable legal exposure. These pseudo-compliance courses or events, if allowed to persist, will:

  • waste your time and resources
  • perpetuate poor impressions of formal training
  • provide cost justifications for systems and processes that do not contribute to your company’s business objectives

It’s perfectly reasonable to set an expectation that employees participate in a pseudo-compliance course, but there are generally ZERO measurable returns on that activity or event. Measuring compliance does, however, have a measurable cost in terms of systems and labor.

The most common pseudo-compliance courses I see are built around philosophical topics. Sure, there are ethical issues with concrete actions and legal repercussions that are legitimate candidates for measuring compliance, but I’m talking about philosophy here in terms of asking or expecting an employee to believe or think a certain way. Topics like integrity or honesty. You can give examples of someone acting the way you want your employees to act, but it’s not measurable in the business. Lack of compliance with a mandate for honesty or integrity is typically grounds for dismissal. What does it matter whether you have evidence of the training event when this type of mandate is violated?

Legal Compliance

Assuming that the training you wish to track is legally required, or implied as such, it’s reasonable to assume that the legislation defining the requirement is strongly linked to the financial, personal, or civil liberties of people who work with or for, or come into contact with, your corporation. The premise is that it is in the best interest of your company and the public to comply with the legislation. The rebel in me would love to argue against the idea that all legislated training is needed, but the fact remains that legal requirements making compliance necessary are a reality of business.

Assuming for a second that legislation is good and there is a public interest or common good in our compliance, isn’t that something we should want to do regardless? After all, aren’t we as individuals party to the laws of our land? Therefore, the training we do should be such that we not only comply with the law, but also ensure that our behavior is such that we never violate the intent of the law or requirement.

It’s easy enough to leverage an LMS to prove 100% compliance in the eyes of a legal requirement, but the true measure of success is that we have zero violations in our business practice. Our performance measure is zero violations; our compliance measure is 100% completion. Which measurement is more important?
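
To put numbers on that contrast, here is a toy calculation in TypeScript (the figures are invented): the LMS can report the first measure all day long, but only the second one tells you whether the training protected the business.

```typescript
// Invented figures contrasting the two yardsticks discussed above.
const completions = 500; // employees who finished the mandated course
const headcount = 500;
const violations = 0;    // adverse events in actual business practice

// Easy to report from an LMS: 100% compliance.
const complianceRate = (completions / headcount) * 100;

// The measure that matters: zero violations in practice.
console.log(`Compliance: ${complianceRate}%, violations: ${violations}`);
```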

By definition, legally mandated training is a cost center. We have a responsibility to manage expenditures and be efficient with our company’s spending, but that should never interfere with our performance measurement of meeting our legal obligations.

Legally-Compelled Compliance

Now I’d like to move on to the scenario in which we are legally compelled to provide a compliance measurement, but it is not legislatively mandated. This is a cost-avoidance mechanism: we are, in principle, agreeing to invest in learning in exchange for a reduced or minimized cost should legal action occur at an unknown future date. But let’s be honest with ourselves; legal liability occurs as a result of a tort. Under tort law, companies are held liable for the behavior or actions someone commits while acting as a representative of the company. The reasons for a tort action vary, but can generally be attributed to:

  1. Negligence—lack of knowledge or insight that an action performed on the part of the company could cause damage to another party
  2. Intent—purposeful gains realized by a person, persons, or the company itself at the expense of another party

Much like legally required compliance training, legally-compelled training may have a compliance measurement that could be used in defense of legal action, but the true measure of success is once again that zero actionable behaviors are committed by individuals acting on behalf of our company.

To avoid negligence, we must make sure that people know better, but more importantly, that their actions and behavior reflect that knowledge. When you know better and act in defiance of, or without deference to, that knowledge, that is the definition of intent. In either case, to truly realize the cost avoidance, you must have evidence of compliance, yet your obligation doesn’t stop there. Ultimately, performance is the real measure of success, not compliance.

What Not to Measure

The problem with cost avoidance as a measuring stick is that there is no guarantee that the expense you try to avoid would ever have materialized had you taken no action at all. It’s just a possible expense you may have incurred down the road. There is no direct link to sustainable profitability unless you can say with certainty that you had a consistent, if not fixed, expense that you incurred at a defined level that will no longer be incurred or, at least, will now be incurred at a reduced level. There is no real ROI—only an imagined or implied ROI.

Looking at compliance training as a whole, there is a real business requirement, if not a legal requirement, to measure compliance with prescribed formal training events. But that shouldn’t be our justification for creating, maintaining, or supporting those events. And by no means should training compliance itself be a measure of effectiveness upstream in your organization.

At the end of the day, quarter, and fiscal year, the list of training events for which we gather evidence of participation should be as small as possible. This evidence has value, but only as a vague measurement of possible cost avoidance. If we want to actually measure the effectiveness of that training, then the measuring stick needs to be performance based and evidenced by a LACK of adverse occurrences.


Measurement, Part II: The Evolution of Systems

In my last post in this series, I wrote about trust (or a lack thereof) as a motivation for organizations producing and/or requiring measurements of training based on learner knowledge or volume of completions. In this post, we’ll take a look at the evolution of measurement systems and how it has led to our current state.

Evolution of Measurement

We are measured our entire lives, starting before we are even born. Height, weight, volume, girth, and length are all used as metrics for doctors and our parents to label us “normal” when compared to a set of standards. For the most part, all of these measurements are well and good, and can serve as indicators of our health.

Eventually, we get bundled up and sent off to school, where all of a sudden the measurements aren’t necessarily about our health, but rather a comparative ranking of our ability to retain and occasionally apply knowledge, measured against a set of standards. These rankings go down on our “permanent record” and follow us as indicators of readiness and aptitude. For better or worse, this measurement system is used throughout our education and is sometimes a factor in deciding whether or not we get a job.

And then a lot of it stops.

Corporations have little use for ranking the knowledge or knowledge capacity of the people who work there. People are brought in to do a job and achieve something that contributes to that company reaching its business objectives—making money.

What workers know is secondary to what they do.

The application of that knowledge to achieve real world results is what really counts.

However, no one really thinks that workers come ready-made with all the knowledge or skills they will ever need, so there has to be some kind of mechanism to build that knowledge when it is missing. That missing piece is what we fondly call a “learning gap.” Of course, personal and professional development is recognized as an irrefutable need, since there’s a high correlation between personal development and the likelihood of people being exemplary producers. But when we find a learning gap, our knee-jerk reaction is to fill it with training and assume that knowing will equate to doing.

Filling the Learning Gap vs. Measuring Performance

The metaphorical issue with the term “learning gap” is that it describes an opportunity or need as a hole or chasm that needs to be crossed. Metaphorically, there are three ways to deal with a hole or chasm: fill it, build a bridge over it, or go around it. In a performance-focused sense, none of these metaphorical solutions is the right answer. We don’t want to fill, bridge, or go around the gap; we want a behavior that clearly demonstrates that the opportunity or need no longer exists.

How do you measure something that doesn’t exist?

It’s much easier to measure how deep a hole is, or how far it is across, so those are the kinds of systems we have developed to measure corporate learning. Since 1975 (or 1959, depending on how you count), the Kirkpatrick model has been the most accepted standard for measuring the effectiveness of these efforts, with its four levels of measurement:

  1. Reaction
  2. Learning
  3. Behavior
  4. Results

However, recently there has been a groundswell toward rejecting the Kirkpatrick model as the sole methodology for measurement, because it presumes a learning event as the starting point. These grumblings were heard recently at both the CLO Symposium and Learning 2011 conferences, and in the writings of thought leader Dan Pontefract, who wrote what I consider the defining article on the Kirkpatrick model problem in the February 2011 Chief Learning Officer Magazine—a stance he further qualified in his blog a short time later. The basic premise is that effective learning is not an event and cannot be disconnected from on-the-job performance; therefore, it cannot be measured on its own, outside of a performance system.

That’s not to say that the model has never had value. Level 4 of the model—the Results level—clearly links performance to learned behavior, but it ties those results and behaviors to a measured learning event, not to the culmination of an experience, which should include the influence of factors beyond just the learning event. Even if we applied the model to a grouping of formal learning events, it would do very little to help us evaluate the effectiveness of individual pieces, or of the informal learning that takes place whether or not it was a planned part of the experience. There are just too many other factors, in addition to learning, that contribute to an individual’s ability to achieve something of value to a business or an organization.

It would be easy at this point to form a rallying cry for new measurement standards—ones that are a true indicator of performance—but most organizations already have ways of measuring how they are performing; they just need to find ways to apply those measurements to individual contributors and tie doing things to measurable performance.

There are a select few legitimate needs to measure the delivery of training linked to legal requirements or legal exposure, which organizations often refer to as compliance training. However, it’s easy to fall into the trap of imagined compliance. In the next installment in this series on measurement, we’ll explore legitimate versus imagined compliance and how to differentiate between them.

On Onboarding: An Excerpt from Thoughts from Learning 2011

The following is an excerpt on onboarding from Thoughts from Learning 2011, originally published November 18, 2011.

In the several sessions that I attended about onboarding at Learning 2011, I was pleased to see a real recognition and connection between the onboarding experience and long-term retention of employees. There are a few companies that are recognizing the needs of their newest employees, but there are still far too many people who treat onboarding like an event that is completed in short order. Orientation is an event that is part of the learning experience that is onboarding.

Part of the problem with onboarding as practiced now is how it is measured. In a lot of cases, onboarding is measured as a compliance issue—as in, we achieved 100% compliance and everyone has been through onboarding. The problem lies in the fact that it’s really easy (with an LMS) to track compliance—i.e., whether a person sat in a chair or watched a computer-based piece of content—but it’s exceptionally difficult to track whether they engaged in an experience. In response, many companies turn to a survey so they can ask employees how they “felt” about their onboarding experience. The problem here is that a feeling doesn’t translate into knowledge, practice, or behavior; and it certainly doesn’t address on-the-job performance.

In order to measure the true effectiveness of an onboarding experience, you have to measure whether or not the participant is actually performing at the level you expected, and whether the realization of that performance has had a tangible effect on the business. Assuming you have an effective workforce and are profitable (and that may be a big assumption), then you can move on to measurements that relate to degrees of better, faster, and, my least favorite, cost avoidance. I’m going to save more musings on measurement for a future blog post, but there is another reason to ask how you can justify designing an integrated onboarding experience. In the words of keynote speaker President Bill Clinton, “if you already have the truth, the evidence doesn’t matter.” Good luck selling that up your management chain.

Measurement, Part I: Trust

Every conference I’ve been to in the past year… scratch that. Every conference I’ve EVER been to has had a major focus on measurement. There have been various measurement trends through the years, but recently I’ve seen some shifts that make me hopeful that corporations may actually make some progress in making and taking measurements that actually matter.

This will be the first in a series of blog posts exploring different aspects of measurement—including the importance of trust, motivation, compliance, shifting to business-based measurement, individual measurement, and measurement and its role in budget negotiations.

First up: let’s talk about the importance of trust.

Far be it from me to hold back on how I really feel about something. So, here goes:

Measuring training as a justification for training is an utter waste of time.

It’s like giving style points in the 50-yard dash. It may be interesting, but the only thing that matters is who crossed the finish line first. In other words, the performance, or result, mattered; the style in which it was achieved is barely noteworthy. Yet when you measure training in and of itself, that’s exactly what is happening.

I think Charles H. Green hits it on the head with this quote from his blog:

“The ubiquity of measurement inexorably leads people to mistake the measures themselves for the things they were intended to measure.”

Why do we keep using measures instead of actual performance as justification to ourselves and our organizations? The answer to that question in many cases is rooted in why we are asked to measure training in the first place… that is, to prove that it has some kind of meaningful, measurable impact on the organization’s results.

Many of our organizations do not believe that training as it is currently defined has a positive impact, or they do not trust that you or your immediate organization can execute learning in an impactful way. The requirement for measurement comes from a place of distrust, not from a defined need to measure results. Consequently, measurement is demanded to “prove” training works. Trust is not impacted or improved through this exercise, but regardless, time and effort are spent generating measurements that don’t really tell us anything about the business.

It is not my intent to write a primer on the effects of trust in business; I think Stephen M.R. Covey has done a good job of that in his book The Speed of Trust and the follow-up Smart Trust. The point is that a lack of trust affects our relationships and results in demands for volume-based measurements intended to justify the existence of training in an organization. It’s a closed loop with no obvious business value. That’s why old-school training departments are usually viewed as cost centers, not as strategic business partners.

So how do we as learning and performance improvement professionals earn trust and show that learning systems are effective and worthwhile without volume (i.e. number of butts in seats) or knowledge-based metrics?

Before we go there, we need to understand how measurement evolved to this state, and how the systems we maintain perpetuate meaningless measurements. I’ll leave that for the next blog post, so stay tuned.

Harrison’s Application for TEDx Grand Rapids

I hate being late.

I am the first guy at a party because when something starts at a certain time, I just can’t reconcile in my head this concept of being fashionably late. I also suffer from anxiety when other people don’t show up on time. I have no idea where this great sense of punctuality comes from. My brother certainly doesn’t share my views on punctuality, so I’m not sure I can blame my parents.

However I got here, I’m kind of upset that I’m late to the TED party. I guess I got a little bitter and a little bent from other groundswell business-talk “trends” over the years, and I mistook TED for something it wasn’t (a platform for an agenda) instead of what it is (a platform for ideas).

When you get that many ideas in one place and you leave the conclusions and actions up to the viewer, it seems to me that what you have is most likely the purest source for informal learning we have ever known.

But now I’m on the outside looking in. It’s that Labor Day party your friend invites you to every year, but you never seem to have time to make. This, evidently, is the year you don’t automatically get an invitation. The Grand Rapids TEDx event is happening on May 10th, and since I didn’t attend in years past, I’ve got to fill out an application to explain why I’m worthy of an invite.

When you think about it, this is incredibly smart. Even if there was no judgment placed on the merits of the application (which evidently there is), the act of applying shows a level of commitment and engagement. By denying general admission, you ensure that people really WANT to be there. That’s something worth thinking about when it comes to development activities and even the communities of practice we build for our organizations.

At any rate, I thought I would share some of my application in this blog, if for no other reason than to put a little more of myself out there.

How will you contribute and what unique perspectives and characteristics will you bring to TEDxGrandRapids?

I am constantly seeking new inspiration—and I know I inspire others—but it takes real work, real thought, and real attention.

I live my life as an open book. My business life and my personal life are one and the same, and give each other real value. The things we care about personally are therefore inseparable from the things that our businesses care about.

I will contribute to the TEDx conversation through my writing—not only as a reporter of what I see, hear and feel, but also as an interpreter of how I can apply what I have learned and how others may do so too.

I will use the inspiration I gather to help formulate my own ideas, and I will share those ideas for the greater good.

What inspires you? Tell us about yourself.

I get charged up about very small details—a riff in a song, the way the grain converges in a chunk of curly maple, the rusty piece of junk in a photograph—and I want to take those little details and think of how I can reapply them, use them differently, put my own stamp on them.

I run a company. There are plenty of places to apply those details, but I am just as proud of the music I play, the guitars that I build, the last run I took in the skate park, the puck I stopped as a goalie.

The trick is to take one of those little things and grow it into something that combines some of those passions in life into something greater than the sum of its parts. The combination guitar/skateboard didn’t stay in tune so well, but I’m sure the next big idea is just around the corner.

Now what if you take those two simple questions and apply them to your job? Would you answer the same way?

How do we get there?

I, for one, am shopping for a few ideas and hope I find them at my local TEDx event. That is, if I get invited to the party.

Thoughts from Learning 2011

It’s been a week since I returned from Learning 2011, so I really needed to sit down and get some of my thoughts down before they were lost forever. But as I sat down to write this, I noticed a major shift in how I’m referencing my experience at the event.

I didn’t reach for my notebook; I launched my Twitter account.

I’m finding that the things I tweeted were the things that struck me the most, and the things of mine that other people retweeted were the things that resonated with them the most, so this would seem to be a solid strategy. Let me know how it worked out by leaving me a comment and following me on Twitter @harrisonwithers.

Overall, the major themes of the conference were the importance of storytelling and the implementation of social and mobile learning. But there were also great sessions from Dean Kamen on innovation, and I attended a number of sessions about onboarding. The opening keynote took the traditional approach of presenting a “state of the industry” look at where we are, and there were no surprises here. The stand-alone, disconnected LMS by itself does not help us create competence or performance, and it really doesn’t provide a service to our learners.

Think about that; what do learners get out of the LMS experience?

Searching? There are better ways to search.

Tracking? Is that really for them or for you?

At any rate, the LMS conversations led to a great quote on Twitter from Dave Halverson of Target (@halvorsd):

“ah, LMS. Like the worst girlfriend I ever had. Testy, hard to understand, and rarely delivered on promises…”

Having sufficiently bagged on the LMS, we moved on to how social-based computing can add relevance and context to the learning experience. I was about to shout “Amen,” but in the next breath Elliott Masie (@emasie) decried SharePoint™, saying it “sucked” as a social platform. Elliott, we’ve known each other for a long time—and I love you—but saying SharePoint sucks as a social platform is like saying a jump rope sucks because I failed to hop when it got to my feet.

I don’t want to come off as a SharePoint fanboy here. It certainly has its problems, and Microsoft could be a lot more helpful in making it better suited as a social platform. But in many cases, it’s what we have; it’s already installed, and it represents an opportunity to align our organizations from a technical perspective—which goes part and parcel with aligning on people and process. I too have seen implementations that suck, but give me 20 minutes of your time and I’ll show you a couple that don’t. You don’t have to take my word for it; talk to some people who aren’t my clients, like Telus, United Healthcare, Diebold, or Xerox. All of these companies have social-enabled SharePoint implementations that don’t suck.

Moving on to mobile, there wasn’t a lot of new talk here and I’ve done plenty of writing on the topic in the past. However, I will reinforce a couple of long-held beliefs:

  1. Tablets are a much better platform than phones for almost every type of content.
  2. Mobile content does not mean a course in the traditional sense; think performance support.

It’s also really interesting that the term “tablet” is almost becoming synonymous with the Apple iPad. Everyone was talking about content for the iPad and how to sell the cost of iPads to management. I love iPads; my wife has one. But mark my words: the availability of sub-$200 Android devices (like the Amazon Kindle Fire I received yesterday) will open the door to real and affordable tablet-based mobile applications. In fact, we’re already working on different ways to leverage and integrate tablet-based applications with social-based cohorts. Stay tuned!

In the several sessions that I attended about onboarding, I was pleased to see a real recognition and connection between the onboarding experience and long-term retention of employees. There are a few companies that are recognizing the needs of their newest employees, but there are still far too many people who treat onboarding like an event that is completed in short order. Orientation is an event that is part of the learning experience that is onboarding.

Part of the problem with onboarding as practiced now is how it is measured. In a lot of cases, onboarding is measured as a compliance issue—as in, we achieved 100% compliance and everyone has been through onboarding. The problem lies in the fact that it’s really easy (with an LMS) to track compliance—i.e., whether a person sat in a chair or watched a computer-based piece of content—but it’s exceptionally difficult to track whether they engaged in an experience. In response, many companies turn to a survey so they can ask employees how they “felt” about their onboarding experience. The problem here is that a feeling doesn’t translate into knowledge, practice, or behavior; and it certainly doesn’t address on-the-job performance.
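
To make the gap concrete, here is a hypothetical TypeScript sketch (all field names invented) of the record an LMS captures easily, next to the evidence that onboarding effectiveness actually requires. Nothing in the first structure predicts anything in the second.

```typescript
// What the LMS captures easily: proof of attendance.
interface CompletionRecord {
  employeeId: string;
  courseId: string;
  completedOn: Date;
  surveyScore?: number; // how they "felt", at best
}

// What effectiveness actually requires: on-the-job evidence,
// gathered well after orientation ends.
interface PerformanceEvidence {
  employeeId: string;
  expectedLevel: string;  // e.g., "handles renewals unassisted"
  observedLevel: string;  // from manager review or operational data
  stillEmployedAt12Months: boolean; // the long-term retention connection
}

// Compliance reporting can only ever answer the first question.
function completedOnboarding(records: CompletionRecord[], required: string[]): boolean {
  return required.every(id => records.some(rec => rec.courseId === id));
}
```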

In order to measure the true effectiveness of an onboarding experience, you have to measure whether or not the participant is actually performing at the level you expected, and whether the realization of that performance has had a tangible effect on the business. Assuming you have an effective workforce and are profitable (and that may be a big assumption), then you can move on to measurements that relate to degrees of better, faster, and, my least favorite, cost avoidance. I’m going to save more musings on measurement for a future blog post, but there is another reason to ask how you can justify designing an integrated onboarding experience. In the words of keynote speaker President Bill Clinton, “if you already have the truth, the evidence doesn’t matter.” Good luck selling that up your management chain.

No matter where you sit on the political spectrum, I couldn’t possibly recap the Learning 2011 experience without mentioning the keynote by President Bill Clinton. Articulate and comfortable, he spoke for over an hour with no teleprompter and no stumbles. He had notes and wore his reading glasses, but I don’t think he looked at them a single time. He is an amazing orator; with the intention of keeping this blog non-political, I’ll leave it at that.

Which leaves us with the concept of storytelling. I could do another blog post on this topic alone, and I think I will. There were at least three exceptional storytellers at this conference, and long after the details of learning theory collapse and fade from my memory, I will remember the stories.

The story of Dean Kamen sending his parents on a trip so he could add on to their basement, without permission, to make more room for his machine shop. The story of the military leaders who asked him to invent a prosthetic that could do three simple things we take for granted: pick up a raisin or grape off a table, put it in the wearer’s mouth without smashing it, and know the difference without looking. And there was Bill Clinton, telling a story about growing up poor and deciding whether he wanted to be a politician or a musician. Not to go unmentioned, John Lithgow’s story of reading to his ailing father and recognizing the moment when his father turned for the better.

It’s the stories that we remember. And what is a story but a container for learning? It’s a package we can use to bring real sustainable change in our lives and at our companies.

Themes from the Fall CLO Symposium: Game-Changers and the New Normal

It’s hard to believe that it’s been almost two full weeks since I returned from the scenic vistas of Laguna Niguel and the intellectual stimulation of the Fall CLO Symposium. As lovely as the scenery was, I will try to keep my focus on the intellectual pursuits. But allow me this one intrusion:

Laguna Niguel Resort

Now that I have that out of my system, the theme of the conference was “Game-Changing Learning: Development for the New Normal.” And while there were definitely some sessions focused on the old normal—how do I leverage my LMS and measure how many butts in seats I pushed through last year—there were also some refreshing perspectives that really align with our commitment to integrated learning.

The conference kicked off with Stephen M.R. Covey and his keynote based on The Speed of Trust. The Media 1 management staff has all been through this program, and it has taught us quite a bit about how we work with each other. Reviewing this material reminded me how critical trust is when we talk about justifying our efforts, not only to our clients, but also as we carry the vision up to senior leadership at their companies. In retrospect, this also speaks to another major theme of the conference: measurement. It occurs to me that there is sometimes a difference between justification and measurement, and I wonder what it says about our trust levels when we use measurements of volume as justification for effectiveness. I would propose that trust, when properly established, allows us to let go of soft numbers linked to volume of effort and places the focus on the hard numbers that reflect moving our businesses forward.

Covey’s keynote was soon followed by a panel of thought leaders including my friends from the Internet Time Alliance—Jay Cross (@jaycross), Clark Quinn (@Quinnovator), and Jane Hart (@C4LTP). In this somewhat controversial segment, several issues were explored, including whether or not the ADDIE model (Analyze, Design, Develop, Implement, Evaluate) is really still relevant in a day and age where user-generated and informal content has become more prominent. That prompted my thought, and subsequent tweet:

Does every problem we need to face need to be analyzed? Does every analysis have a solution that needs to be designed, and does every design need to be developed?

I would argue that the role of corporate training and learning professionals is transforming: we are evolving away from being ADDIE-driven content factories and aligning closer to being curators. This also led to a discussion on whether or not the Kirkpatrick model is still valid for measuring training—see a pattern yet? It’s not that the Kirkpatrick model is inherently bad; it’s just that many of our interpretations fall short of being valuable. If we only make it through the first two levels—that is, whether or not people are happy and whether or not they remember—that may be enough to justify our efforts, but does it really help us evaluate whether we are having an impact on the business?

Another highlight was Bob Mosher (@bmosh) from LearningGuide Solutions on becoming an agile learning organization. The premise of Bob’s talk was his experience guiding his clients toward learning that is more strategic and aligned with the strategy of the business. Bob believes informal and social learning opportunities offer performance support that is both effective and practical. Bob also echoed my sentiment that we need to look at the role of the corporate training department differently: it’s not as much about creating content as it is about creating usable context and integration.

By far, my favorite session was with Dan Pontefract (@dpontefract) from Telus, talking about “The Rise of Collaboration in Learning, Leadership and 2.0 Technologies.” Dan is an open and frank speaker who pulls no punches in his evaluation of traditional learning functions in companies. Dan, perhaps more than any other learning professional I’ve talked to lately, has a clear vision for shifting learning from a series of disconnected events to an INTEGRATED, connected, continuous, and collaborative process. He also has the technical infrastructure to prove it—built in SharePoint™, to boot! I left Dan’s session charged up that someone else gets our vision and has come up with solutions that in many ways echo the types of solutions we are building for our clients. Check out Dan’s blog, Trainingwreck; you’ll be glad you did.

All in all, of course we hear what we want to hear, and I heard people starting to talk about the sorts of things that my team and I are passionate about—and I am excited. I feel energized and eager to help my clients carve out their own little piece of the transformation we are in the middle of. Sure, I heard a lot of the old normal too—like how do we stop our employees from saying stuff we don’t agree with, and new ways to spin all the old numbers. But there is enough evidence of the “new normal” taking hold to make me believe that we really can change the game.
