Is learning ROI fool’s gold?

Imagine you’re the head of commercial L&D for a pharmaceutical company.  Prescriptions are flat for one of your key products.  To figure out why, the Sales VPs decide to shadow the sales reps.  They find that reps struggle to properly respond to three objections often raised by physicians.  Sales leaders feel this could be affecting prescribing behavior, so they ask L&D to help fix things.

Being the good L&D leader that you are, you have your team train the reps on the objection handling procedures developed by Marketing.  After training, a level two evaluation shows beyond a shadow of a doubt that sales reps are capable of properly responding to the three objections.  Job well done!

When the reps return to the field, assessments are done at one, three, and six months out.  In each case, prescribing behavior remains unchanged.

Does this mean the training failed?  Did L&D fail to do its job?

L&D teams need answers

Questions like these vex many L&D professionals.  One of the refrains I hear from learning leaders across all industries is the constant need to justify their budgets and demonstrate their value.  For completely understandable reasons, many organizations are keen to know whether their training investments are being put to good use.  Every L&D team, whether or not it feels such pressure, would do well to have a good answer to these questions.  But is the prevailing approach to evaluation capable of providing answers to the business?

Without “why” there is no “what”

In our scenario, training sales reps did not lead to an improvement in sales.  This much is obvious.  If you were to end the investigation here and conclude it was not worth training the sales reps, you could justify that decision.  What you could not do is claim that the training was ineffective or that L&D failed in its responsibilities.  There’s nowhere near enough information available at this point to make such claims.

Maybe this seems counterintuitive.  One moment sales are flat.  Sales reps receive training.  Afterward sales are still flat.  Seems straightforward.  It’s not.  The fact is that many other things could have prevented the training from affecting prescriptions.  It could be that the objection handling procedures developed by Marketing were ineffective.  Even if they were effective, it’s possible objection handling alone isn’t important enough to change prescribing behavior.  Maybe the sales reps handled the objections exceptionally well after training but routinely messed up other parts of their sales calls.  It could even be that the product is just not that great.  And that is just the short list.

Unless you rule out every other factor as a possible cause of an unchanged business result, you cannot know whether training was ineffective.  A true evaluation of training and L&D’s contribution requires that you control for all of these factors.  In theory, this is possible.  But in practice, it is not for three primary reasons.

Problem one: level three leakage

The evaluation in our scenario is a Kirkpatrick level four evaluation since its goal was to find out whether training impacted the business. The evaluation revealed that prescriptions did not increase.  But it did not tell us why.  It would be entirely possible for L&D to create the best possible training and still not improve business outcomes.  If we want to determine if ineffectual training was the issue, we would need to know, among other things, whether sales reps used their new skills properly during their sales calls.  This would be assessed with a level three evaluation.

A level three evaluation is straightforward enough in concept.  You find out if learners are using their new skills on the job.  In our scenario, we might have done this by observing sales reps in the field, preferably using a formal evaluation rubric.  This would take some time and effort but would have been within the realm of reason practically speaking.

If we had found that sales reps were not using their new objection handling skills during sales calls, that alone would not mean the training failed.  Just because someone can do something does not mean they will.  Failure to behave a certain way can be influenced by how someone is managed, peer and social influence, incentive systems, personal goals and priorities, and a long list of other things.  Human behavior is highly complex.  Teasing out its actual causes is never as simple as it seems.

This is the fatal weakness of the level three evaluation.  As with the level four evaluation, it can tell you what happened but not why.  Without the why, its utility as a training evaluation tool is almost completely negated.

Problem two: distributed accountability

Let’s pretend for a moment that level three evaluations are methodologically sound.  Say that our pharma company conducted a level three evaluation and found that reps were using their new objection handling skills, yet prescribing behavior still did not change.  Can we then say in this instance that the training failed?

No, we cannot, for at least two reasons.  First, there is no evidence that objection handling affects prescribing behavior.  Sales leadership guessed that proper objection handling might increase prescriptions, but they did so using their judgement and a limited pool of data that had never been correlated with prescribing outcomes.  Second, the objection handling processes developed by Sales and Marketing could be suboptimal.  For either reason, L&D could create amazing training, only on the wrong procedures.

In a case like this, you could not blame L&D.  L&D does not determine the priorities of the functional units it serves (ideally, it participates in or even facilitates that process), nor does it define the business processes it trains on.  Nor should it.  If the business holds Sales accountable for increasing prescriptions, Sales should have the authority to operate in the ways it believes will enable it to do this.

When functional units are required to accomplish a broader mission, they must collaborate to get the work done.  That requires each unit to place a certain degree of trust in the others to hold up their end of the bargain.  These are basic truths about virtually all organizations, like immutable laws of organizational physics.  But they also make it nearly impossible to determine who is accountable when a learning program does not produce the desired business outcome.  Can it be done?  Maybe in some cases.  Is it practical?  Almost never.

Problem three: impracticality 

This brings us to the third problem with the prevailing ways of thinking about evaluation.  To determine whether a training program failed to produce a business outcome, you need to control for a long list of confounding variables, many of which are hard to measure.  Gathering this data will almost always take time and money that dwarf what was spent on the original training.  Further, it will require suspending normal operations for long stretches.  That is going to be very hard for a modern organization to stomach.

Organizations operate in competitive environments that put pressure on us to move fast.  Evidence-based approaches are appealing to those looking for greater confidence in their decisions, yet much of the time they are in direct conflict with our need to be agile.  Working in a modern organization means operating with incomplete data and relying on the judgement of experienced professionals.  We need to be decisive, act, learn fast, and adjust as needed.  Today’s organizations cannot afford to slow down in order to definitively answer most training evaluation questions.  And given the choice between thoroughly evaluating one training program and delivering five or more additional ones, the calculus is going to lead to the same answer the vast majority of the time.

The danger of going halfway 

OK, so we can’t get the perfect answer using perfect data.  But isn’t some evaluation data better than none?  Sure, if you don’t mind people in the organization drawing the wrong conclusions and making important business decisions based on those conclusions.

Suppose a group of senior executives at our fictional pharma company sees evaluation data showing that reps were trained on objection handling, yet sales did not increase.  Is there a chance they will draw the wrong conclusions from that data?  Might they incorrectly conclude that L&D failed to do its job?  And if they do, what decisions might they make as a result?  It doesn’t take much imagination to recognize the dire consequences for L&D and the broader organization that could arise from acting on an incomplete and largely invalid set of data.

In life and work, much of the time having some data is better than having none.  But when the cost of the data and the risk of misjudgment outweigh its functional utility, it is truly better to have nothing at all. That does not mean, however, that there is no data we can use to evaluate the value that L&D brings to an organization.  There is.

How L&D should be measured

It may seem as though I’m saying there is no way to measure the value that training brings to an organization, but I’m not.  There is a perfectly valid and low-cost way to measure the value of training and L&D in general.  In fact, it’s been staring us in the face this whole time: level two evaluations.  We can measure the extent to which people have learned what they were supposed to have learned.

Level two evaluations are not only valid, they align with the responsibilities and expertise of the typical L&D team.  L&D professionals are experts in learning.  That is what the business asks of them.  What they focus on is largely a matter for the broader organization (though L&D should have input and ideally helps to facilitate the process).  Whether people learn is the one thing wholly within the authority of L&D.  If L&D is asked to help people develop a new skill and they do not develop it, that is usually a failure of L&D.

I am not suggesting that just throwing a multiple-choice knowledge test at your learners is always going to be sufficient.  A quality level two evaluation is not necessarily easy to carry out (though sometimes it is).  A good level two evaluation will ascertain a person’s ability to perform a skill in an actual work setting.  Creating one takes considerable thought and care.  But compared to teasing out the true causes of a business outcome in a complex social environment, it is an exceptionally doable thing.

How you deal with the business

OK so perhaps this is all true, but what do we say to the business?  Even if you agree with the arguments I’ve made here (and as always, you are welcome to decide for yourself), I understand that they give little help to those trying to answer hard questions from the business.  What do we say to the senior executives who want us to justify our existence?

If they are asking this question, there is no short response that will assuage their concerns.  In the end, the job here is about helping the business reorient its thinking when it comes to its investments in training.  Obviously, this is going to take time and may not take hold right away.

As a start, we can encourage the business to view level three and four evaluations not as evaluations of L&D alone but as evaluations of L&D and its functional area partners together.  Since accountability for constructing the right training program is spread across those partners, it follows that they should be assessed as a unit.  This would have the added benefit of creating a greater sense of teamwork between L&D and its internal customers.

Over time, the business should reframe the way it thinks about evaluation and L&D overall.  Instead of asking whether the investment in training is producing results, it should ask whether it is using L&D in the optimal way.  Are the functional units properly leveraging L&D to move the business forward?  If not, how can that be improved?  L&D is a tool of the business and must be directed in the proper way.  When it is, L&D can be a big part of a company’s success.  But it takes the entire enterprise to ensure that this happens.
