Want to Make an Impact with Your Training Impact Study? Avoid These 4 Pitfalls.
Training impact studies are quite common within the L&D community. Reserved for the most strategic or visible programs, these studies aim to show that a training program has produced new behaviors on the job and that those behaviors have led to a business impact. Business impacts are measures critical to business leaders, such as increased sales, improved productivity or greater employee retention.
Unfortunately, too many impact studies are, well, not very impactful. They fail to produce meaningful action, process improvements or even acknowledgement that the findings have merit. This is a shame, not to mention a waste of time, money and energy. After all, if your impact study doesn’t effect change, of what use is it? And after investing all those resources, who wants to see their work sit on a virtual shelf?
Why might your impact study fall victim to this fate? Here are four pitfalls you should avoid to ensure you get maximum impact from your impact study.
- No ownership for action
- Your stakeholders didn't like or anticipate the results
- The users of the study question your evaluation methods or the credibility of your data
- The final report wasn't action-oriented
Let's explore each of these pitfalls and discuss what you can do proactively to avoid them in the future.
1. No ownership for action
If your impact study is going to make an impact, then somebody (or
somebodies) needs to be accountable for acting on the recommendations.
Unfortunately, ownership for action is often not defined when the study begins,
and no one knows ‘who’s on first.’ Is the training program manager accountable?
Is there an identified business leader who can improve how managers support
their trainees? When you lack clarity on ownership, no one
feels they are on the hook to follow through.
What to do proactively:
When you launch the study, identify who cares about the results and where and
how to involve them. Have them sign off on your approach, your assumptions and
hypotheses. Finally, get them to agree on the role they will play at the
study’s conclusion and what actions they are prepared to take.
2. Your stakeholders didn’t like or anticipate the results
Stakeholders don’t like surprises. An unanticipated or
negative result will rarely be well received. When this happens, the tendency
is to question everything about the study, which in turn creates a reluctance
to act (see Pitfall #3).
What to do proactively:
Michael Quinn Patton wrote an invaluable book, “Essentials of
Utilization-Focused Evaluation.” In it, he suggests simulating the use of the findings. The simulation engages
stakeholders to explore the range of possible results and their underlying root
causes. What should you investigate if you find that the program is highly
successful, but only with a subset of the population? What further data should
you consider if you find that the program was well received, but fizzled when
employees tried to apply the concepts in their work? The simulation process not
only prepares stakeholders for possible negative results, but also helps you
identify, in advance, how to uncover root causes.
3. The users of the study question your evaluation methods or the credibility of your data
Related to pitfall #2, there is nothing quite like presenting
the results of a months-long impact study only to have someone question your
measurement approach or take pot shots at your data. A skeptic
can undercut your findings and leave you exposed and vulnerable.
What to do proactively:
When
you are engaging your stakeholders, get their input about the project as well as how you will assess impact and what data you will collect. Do they
trust self-report data or consider it useless? Do they question the quality of
the business data because no one has updated the demographics to reflect organizational
changes? If you have skeptics in your midst (and who doesn’t?), identify them
early. Talk to them earnestly about data
integrity issues and seek their ideas on how to mitigate the risks. Most often,
these same skeptics can suggest complementary methods that will make them feel
more at ease and will improve the quality of your study.
4. The final report wasn’t action-oriented
In telecommunications, there is a phrase, “the last mile
problem.” It refers to building out infrastructure
without considering the “last mile” that connects it to the end consumer. In evaluation, we have a serious last-mile
problem of our own. How many reports have you read that are filled with statistical jargon,
detailed tables or poor visualizations that provide no insight into
what should be done differently? Audiences
sit through these presentations but often leave with no clue about what actions
to take.
What to do proactively:
In Patton’s book, he cites a rule of thumb from the Canadian Health Services Research Foundation.
The format is 1:3:25 and works like this: one
page for the main messages and relevant conclusions, three pages for an executive summary of the main findings, and twenty-five pages for a comprehensive,
plain-language report. Keep your findings and recommendations succinct. Eliminate
jargon. Be explicit about what should happen next and who owns the action.
If you have experienced one of these pitfalls, or others not mentioned here, I'd love to hear from you. Follow me on Twitter @peggyparskey or connect with me on LinkedIn at www.linkedin.com/in/peggy-parskey.