Sunday, October 9, 2016


Want to Shift from being a Data Monitor to being a Data User? Here's How.


A couple in a TV commercial calls a termite inspector to determine if their house has termites.  The inspector confirms that the house does indeed have a termite problem when he observes the couple’s daughter fall through a termite-damaged stair.  Much to the dismay of the couple, the inspector says he is only a pest monitor and they need to call someone to exterminate the termites.

In this ad from LifeLock [1], the company's point is that it not only monitors problems, but also fixes them when it finds them. While I scoff at this commercial, it struck me recently that L&D reporting shares a lot of similarities with that termite pest monitor. We spend too much time monitoring our results rather than using them.

In an admittedly unscientific poll I conducted this month on LinkedIn (n=30), 87% of respondents said they used less than half of the data in their reports for decision making. I suspect the people who generate these reports believe their output is useful to someone; otherwise, they wouldn't generate it. But the data (and my client experiences) suggest that we spend a lot of time monitoring data rather than using it.

Why We Become Data Monitors

I doubt anyone aspires to be a data monitor. You probably feel a tad guilty about ignoring that 20-page slide deck that shows up regularly in your email. In the midst of all those urgent and critical to-do’s, the data in those reports likely feels irrelevant and not particularly helpful. To protect what little discretionary time you have, you tell yourself: “If there is something really important in that report, I’m certain someone will let me know.”

We often unwittingly become data monitors because of systemic organizational problems:
  • Leaders unrealistically expect us to manage or oversee a large number of measures
  • Most data in the reports isn’t pertinent to our role or accountabilities (as my little poll suggests)
  • The important insights are not immediately visible
  • The data is static and often backward looking    

How to Shift from Being a Data Monitor to a Data User

In David Ulrich’s excellent 2015 book, “The Leadership Capital Index” [2], he writes that organizations need to do two important things with their information (see the infographic below):
  • Manage the information, specifically its flow, transparency and quality
  • Use the information not only to make decisions but also to solve puzzles and uncover mysteries. 

I’m not going to discuss the management of information in this blog other than to state that it is the foundation required for effective data use. Instead, I will focus on the use of the information.  


Use Information to Solve Puzzles

In an article by Millward Brown [3], Phillip Herr wrote, “When we solve a puzzle, we end up with a specific answer — one that is quantifiable, comparable to answers to other puzzles treated in a similar manner, and invariably a go/no-go decision.”

In L&D, we solve puzzles frequently. We may ask, “Did the senior directors get more value from the leadership course than the executive directors?” or “Were employees with high manager support more likely to improve their performance than employees with low manager support?” Through basic statistical analysis, we can get straightforward answers to these puzzles. We may need another level of analysis to uncover why senior directors get more value from the course, but we now have direction and a means to understand both the “what” and the “why”. Our answers to these puzzles should help inform our decisions and possible actions.
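As an illustration of what that basic analysis can look like, here is a minimal sketch in Python of the manager-support question above. The file name and the columns manager_support and performance_improvement are hypothetical placeholders for whatever your evaluation data actually contains.

```python
# A hedged sketch, not a prescribed method: compare performance improvement
# for employees with high vs. low manager support using a simple t-test.
# "post_program_survey.csv", "manager_support" and "performance_improvement"
# are assumed names; substitute your own data source and fields.
import pandas as pd
from scipy import stats

df = pd.read_csv("post_program_survey.csv")

high = df.loc[df["manager_support"] == "high", "performance_improvement"]
low = df.loc[df["manager_support"] == "low", "performance_improvement"]

# Welch's t-test: is the difference in group means likely more than noise?
t_stat, p_value = stats.ttest_ind(high, low, equal_var=False)

print(f"High-support mean: {high.mean():.2f}  Low-support mean: {low.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

A small p-value here only answers the “what” (the groups really do differ); interviews or a deeper cut of the data are still needed for the “why.”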

L&D organizations with whom I work do a solid job of asking questions and solving puzzles.  However, if you suspect your organization is more prone to monitor rather than use data, consider these approaches:
  • Explore where you exceed your goals as well as where you fall short.  Ask questions that uncover the factors that drive successful outcomes.
  • Identify how results vary by key demographics such as geography, course, delivery vendor or location.  When you segment the data this way, you can uncover specific areas that warrant further investigation, or even solve the puzzle outright.
  • Look for correlations among key variables. Do employees derive more value from their community of practice when the content is more readily accessible? Is value more highly correlated with the number of opportunities to interact with other community members? Correlation data can help you explore relationships that are not visible through trend or comparison analyses (a minimal example follows this list).
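To make the correlation bullet concrete, here is a minimal sketch, again with hypothetical file and column names, of checking whether perceived value from a community of practice tracks content accessibility or the number of member interactions, and of segmenting the result by geography.

```python
# Illustrative only: the survey file and the columns "perceived_value",
# "content_accessibility", "interaction_count" and "geography" are assumed names.
import pandas as pd

df = pd.read_csv("community_of_practice_survey.csv")

# Pairwise Pearson correlations between perceived value and candidate drivers
drivers = ["content_accessibility", "interaction_count"]
correlations = df[drivers + ["perceived_value"]].corr()["perceived_value"].drop("perceived_value")
print(correlations.sort_values(ascending=False))

# Segment value by a demographic (e.g., geography) to see where results diverge
print(df.groupby("geography")["perceived_value"].agg(["mean", "count"]))
```

Keep in mind that a correlation only points to a relationship worth investigating; it does not establish cause.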

Use Information to Make Decisions

In almost everything we read about data (big or small), we are reminded that we should use information for decision-making. The challenge is that most reports give us information but insufficient insight.

If “information over insight” describes your organization, you should consider how to restructure your reports to “speed time to insight” and, ultimately, decision-making. Here are a few approaches:
  • Ensure reports are role relevant.  If you manage L&D leadership programs, the reports should highlight the data relevant to those programs you specifically manage. You should not have to hunt and dig to find what is specific to your programs. If you receive data via a dashboard, when you click to the site or app, it should know who you are and display your content on the first screen.
  • Display all results against a goal. Your performance against goal will inform your decisions and actions. The comparison (with appropriate color-coding or symbols) makes it easy to highlight large variances that require attention (a simple sketch follows this list).
  • Annotate your reports. Context is critical; add comments to explain the cause of a deviation from the trend or goal.  
  • Use appropriate data visualization methods. Learn how to display data; avoid chart junk or difficult-to-read displays to ensure the message is clear.
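As a small illustration of the “display all results against a goal” point, the sketch below compares each measure to its goal and assigns a traffic-light status. The measures, values and the five-percent tolerance are illustrative placeholders, not recommendations.

```python
# Illustrative sketch: compare actuals to goals and assign a traffic-light flag.
import pandas as pd

results = pd.DataFrame({
    "measure": ["Application rate", "Manager support", "Net promoter score"],
    "actual":  [0.62, 0.48, 31],
    "goal":    [0.70, 0.50, 35],
})

results["variance_pct"] = (results["actual"] - results["goal"]) / results["goal"] * 100

def status(variance_pct, tolerance=5.0):
    """Green when at or above goal, yellow within tolerance, red otherwise."""
    if variance_pct >= 0:
        return "green"
    return "yellow" if abs(variance_pct) <= tolerance else "red"

results["status"] = results["variance_pct"].apply(status)
print(results)
```

On a dashboard, the same logic would drive the color-coding or symbols; the point is that every number arrives with an explicit comparison built in.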

Use Information to Uncover Mysteries

I commonly see leaders use information to solve existing problems and make decisions. What Ulrich makes clear is that problem solving and decision-making are not enough. Competitive advantage springs from seeing patterns and themes that highlight emerging problems or opportunities.  This is the territory of "Big Data" and where you need to uncover mysteries.

Unlike puzzle solving, “when dealing with a mystery, no one can address the problem directly; someone has to discover the pattern first. The problem with mysteries is that sometimes leaders don’t even know the right questions to ask to explore them.” [4]

How do we identify the mysteries we should solve? First, connect the dots across multiple data sets.  For example, what is the relationship between engagement and the use of social/informal learning resources? Second, identify how to mine unstructured data such as emails or online discussions. Third, stay attuned to market and industry shifts and use your own data to assess if your organization is keeping pace.  

Here are specific actions you can take to identify important mysteries to solve:
  • Identify sources of information and listening posts such as discussions between performance consultants and their business clients as well as presentations or blogs by industry leaders or researchers.
  • Quarterly, identify themes from your listening posts. Discuss articles about shifts in the industry or a success story from a company you admire. Develop hypotheses and then identify how to confirm or refute them in your own organization.
  • Incorporate insights into the annual planning cycle. Where should you allocate funds to conduct a proof of concept of a new process or method?  Where do you need further insights about how your employees learn or consume content?


Who owns the shift from monitoring to using data?

If the L&D measurement and reporting team were solely responsible for moving the organization from being data monitors to data users, you could simply pass along this post and ask them to make it happen. But, like that termite problem, it’s not just about the termite exterminator. The homeowner has some responsibility as well. 

Let’s explore how those responsibilities sort out (also see graphic).
  • Measurement and Reporting (M&R): To shift the organization from monitoring to using reports, the M&R function needs to rethink its approach to data and reporting. 
    • Eliminate one-size-fits-all reporting 
    • Enable users to interact directly with the data to ask and answer questions, drill down and explore the data beyond the surface
    • Guide L&D to focus on the critical few measures to manage the business. Help L&D align its measures to its business priorities
    • Coach L&D to use a balanced suite of measures. Include a mix of both leading and lagging indicators. Report not just on efficiency measures but on effectiveness and outcomes as well.  A balanced suite of measures gives report users a holistic view of their performance. 
  • Report users:
    • Develop goals for every measure you actively manage. (I can't repeat this enough.) Goals define what ‘success looks like’ and help focus your attention on those areas that need course correction. 
    • Schedule time to reflect on the data, ask questions and explore insights. Confirm or disprove rumors or ‘conventional wisdom’ through your analysis.
    • Establish a process to synthesize and interpret unstructured information. Qualitative and unstructured data has an important role to play.  Ensure you bake it into your data use regime.


Summary

Being a data user is a lot more rewarding than being a data monitor.  Make data use a priority by reviewing your reports regularly with your immediate team. Discuss insights and assign follow-up. With the right data, the right type of report and thoughtful review, you can hone your analytical thinking skills and improve how you lead and manage your function.




[1] I do not use the LifeLock product or invest in the company
[2] "The Leadership Capital Index", Dave Ulrich, Berrett-Koehler Publishers, 2015, http://amzn.to/2dKATQH
[3] "Solving Puzzles Delivers Answers; Solving Mysteries Delivers Insights", Millward Brown, 2011, http://bit.ly/2dQ9kmb
[4] "The Leadership Capital Index", Dave Ulrich (see source above)

Sunday, September 4, 2016

Four Approaches to Break the "No One Is Asking" Cycle


In the last 18 months, I have noticed a concerning shift in the sentiment of learning professionals about ramping up the quality of their measurement and in particular their reporting. This sentiment had been quite prevalent 10-12 years ago, but abated for a while. Now it's back with a vengeance. 

The essence of the sentiment is “Business leaders aren’t asking for this information, so there is no need for us to provide it.”  Most L&D practitioners don’t state it quite like that of course. They say, “Our business leaders only want business impact data." Sometimes the stated reason is that business leaders consider L&D self-report data to be useless. Regardless of how they say it, the message is clear: “Business leaders aren’t asking for it (or worse, don’t want it), so we are not going to bother.”

I'm not imagining this shift. Brandon Hall Group recently published a study entitled, "Learning Measurement 2016: Little Linkage to Performance." In their study, they found that the pressure to measure learning is coming mostly from within the learning function itself. In addition, of the 367 companies studied, 13% said there was no pressure at all to measure learning. 

What’s concerning about this situation?

L&D has been saying for years that it wants to be viewed as a strategic partner and get a seat at the proverbial table.  Yet, the sentiment that “no one is asking for measurement data” is equivalent to saying, “Hey, I’m just an order taker. No orders, no product or service.”   If we are going to break free of an order-taking mentality, then we need to think strategically about measurement and reporting, not just the solutions L&D creates.

The sad reality is that often business leaders do not ask for L&D data because they don’t know what data is available or they don’t understand its value.  Many business leaders assume L&D is only capable of reporting activity or reaction (L1) data. Many still pejoratively refer to L&D evaluations as Happy Sheets or Smile Sheets.  If they believe that is all L&D can do, then of course they won’t ask or exert any pressure to do more.

Equally importantly, innovation would come to a standstill if organizations only produced products and services that resulted from a client request. Henry Ford famously said, “If I had asked people what they wanted, they would have said faster horses.”  Steve Jobs talked about the dangers of group-think as a product development strategy when he said, “It's really hard to design products by focus groups. A lot of times, people don't know what they want until you show it to them.”  Simply because your internal client cannot envision what you can provide does not mean you shouldn’t provide it and then iterate to get it right.  


The big problem with a “no one is asking” mentality is that it is a self-fulfilling prophecy and a downward spiral. At some point, someone needs to break the cycle. It might as well be L&D. (See the Infographic) 

What can you do differently right now?

Fortunately, as an individual learning practitioner, you can break this cycle. Here are four approaches you should consider, singly or in combination:
  1.  Reframe the problem: If truly “no one is asking”, stop and ask yourself, “Why aren't leaders asking? Is it because this information has no value or because they don't understand the value? Is it because they don’t know what L&D can provide?” Based on your answers to these questions, develop a game plan to demonstrate the value of the data you can provide (and then provide it to them). 
  2. Find a friendly: If you want to break the ‘negative reinforcing loop’, find a business leader who is willing to work with you, a ‘friendly.’  You can spot a friendly fairly easily. She is interested in how L&D can add value to her business. He views you as a trusted advisor, brainstorming how to build new skills and capabilities within his team. She is innovative and willing to try new approaches even if they might not succeed the first time out. If you have one or two such business leaders with whom you work, seek them out and discuss what data you could provide to answer their questions. 
  3. Report on data that matters to the business leader: From a measurement and reporting standpoint, L&D still puts too much of its energy into gathering and reporting data that matters to L&D but is not compelling to the business.  The number of employees who attended courses or the results of your Level 1 evaluation are simply not important to the business. Look beyond your current measures and educate yourself on best practices in L&D measurement.  Integrate questions about learning application and support into your evaluation instruments. This is data business leaders will care about if you show them how it affects them and their organization. 
  4. Tell a compelling story: Do you remember the Magic Eye picture within a picture phenomenon? If you held the picture up to your nose, you might see the constellation Orion buried inside the picture. (I never saw anything.)  If you believe your data is meaningful and can help the business, don't use the Magic Eye approach. Don't expect your business partner to find the meaning in the data. Rather, tell the story behind the charts and graphs through dialogue. Help your business partner connect the dots; help her understand the consequences of not acting on the data and the benefits if she does. 

A real-life example

The ability of employees to apply training in the workplace depends on several conditions, many of which are outside the control of the L&D department. Factors such as the motivation of the employee, the opportunity to apply the training to real work and the reinforcement of the direct manager all affect the extent to which training is applied.

A few years ago, I worked with a company that sold complex financial software. They re-engineered their implementation process to simplify the client experience, reduce implementation time and accelerate revenue recognition. The business leaders identified project management (PM) skills as critical to the success of this new approach. 

The Process Transformation Team identified an initial group of employees to attend the PM training and pilot the new approach with several clients. When they reviewed the pilot results, they were disappointed. Implementation time had not declined appreciably and clients felt that the process was more complex than expected. The team leaders investigated and found that the employees' managers were not reinforcing the training or directing them to support resources when they struggled to apply the PM methodology in a real-life setting. They also found that the job aids L&D had created were too cumbersome and not designed to be used in a dynamic client setting.

Imagine you are the L&D partner of the business leader driving this transformation. What data could you have provided to demonstrate that his people were not building and honing skills in this critical discipline?  

This leader needed a regular stream of L&D data on actual application on the job and the barriers to application.  He needed data on employees' perceptions of how the training would affect the targeted business outcomes: a simpler client experience, shorter implementation time and faster revenue recognition. He needed to understand whether the business or L&D was accountable for addressing each barrier. What he did not need was incontrovertible proof that the training was improving business outcomes.

Data from L&D was essential for this leader to take action and address issues that affected his ability to successfully transform a key client process.  Unfortunately, the business leader didn't realize that L&D could help him get ahead of this issue and didn't think to ask. Once L&D and the business leader started talking and sharing data and insights, the leader not only acted but also worked with L&D to develop regular business-oriented reports.

Final thoughts

As an L&D practitioner, you can break the negative reinforcing cycle. Why not regularly provide this type of data and use it to create a dialogue about what other insights you can provide?  Why not take the first step to dispel the belief that L&D has no useful data or insights to offer?  It's in your hands.

Have you had a "No One is Asking" moment? I would love to hear from you. Follow me on Twitter @peggyparskey or connect with me on LinkedIn at https://www.linkedin.com/in/peggy-parskey-11634

Sunday, August 7, 2016

It's Time to Solve “The Last Mile Problem” in L&D


The telecommunications industry has had to confront a challenge referred to as “The Last Mile Problem.” This phrase describes the disproportionate effort and cost required to connect the broader infrastructure and communications network to the customer who resides in the “last mile.”  


Learning Measurement also has a “Last Mile” problem. We have a wealth of data, automated reporting and data analysts who attempt to make sense of the data. Yet, as I observe and work with L&D organizations, I continually witness “Last Mile Problems” where the data doesn’t create new insights, enable wise decisions or drive meaningful action. If we want our investments in measurement to pay off, we need first, to recognize that we have a problem and second, to fix it. 


Do you have a “Last Mile” problem?  Here are six indicators:
1.    One-size-fits-all reporting
2.    Mismatch between reporting and decision-making cadence
3.    Lack of context for assessing performance
4.    Poor data visualizations
5.    Little attention to the variability of the data
6.    Insufficient insight that answers the “so what?” question

Let’s explore each indicator with a brief discussion of the problem and the consequences for learning measurement.

1.    One-size-fits-all reporting

While L&D organizations generate a plethora of reports, not enough of them tailor their reports to the needs of their various audiences. Each audience, from senior leaders to program managers to instructors, has a unique perspective and requires different data to inform decisions. 

In an age of personally targeted ads on Facebook (creepy as they may be), we each want a customized, “made for me” experience. A single report that attempts to address the needs of all users will prove frustrating to everyone and meet the needs of no one. Ultimately, the one-size-fits-all approach will lead users to request ad hoc reports, create their own shadow process or simply resort to gut feel, since the data provided isn’t very useful.  

2.    Mismatch between Reporting and Decision Making Cadence

Every organization has a cadence for making decisions, and that cadence will vary based on the stakeholder and the decisions he/she will make. Instructors and course owners need data as soon as a course has been completed. Course owners will also need monthly data to compare performance across the courses in their portfolio. The CLO will need data monthly but may also want a report when a particular measure is above or below a specific threshold.

In a world of high velocity decision making, decision makers need the right information at the right time for them. When the reporting cycle is mismatched with the timing of decision-making, the reports become ineffective as decision-making tools. The result? See #1: shadow processes or ad hoc reporting to address decision needs. 

3.    Lack of context to assess performance

Many L&D reports present data without any context for what ‘good’ looks like.  The reports display the data, perhaps comparing across business units or showing historical performance. However, too many reports do not include a goal, a performance threshold or a benchmark.

Leaders don’t manage to trends nor do they manage to comparative data. Without specific performance goals or thresholds, L&D leaders lose the ability to motivate performance on the front end and have no basis for corrective action on the back end. These reports may be interesting, but ultimately produce little action.

4.    Poor data visualization

Data visualization is a hot topic and there is no shortage of books, blogs and training courses available. Despite the abundance of information, L&D’s charts and graphs appear not to have received the memo on the importance of following data display best practices.

Look at the reports you receive (or create). Do your visuals include unnecessary flourishes (also known as chart junk)? Do they contain 3-D charts? Do they present pie charts with more than five slices? Do your annotations simply describe the data on the chart or graph? If you answered "yes" to any of these questions, then you are making the last mile rockier and more challenging for your clients than it needs to be.

No one wants to work hard to figure out what your chart means. They don’t want to struggle to discern if that data point on the 3D chart is hovering near 3.2 (which is how it looks) or if the value is really 3.0.  They won’t pull out a magnifying glass to see which slice of the pie represents their Business Unit.
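By way of contrast, here is a minimal sketch of the kind of display this section argues for: a flat, sorted bar chart with direct labels instead of a 3-D chart or a many-slice pie. The business-unit names and scores are illustrative placeholders.

```python
# Illustrative sketch: a sorted horizontal bar chart with direct value labels.
import matplotlib.pyplot as plt

units = ["EMEA", "APAC", "Americas", "Public Sector", "Enterprise", "SMB"]
scores = [3.9, 4.1, 4.3, 3.6, 4.0, 3.8]

# Sort so the ranking is obvious at a glance
scores_sorted, units_sorted = zip(*sorted(zip(scores, units)))

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(units_sorted, scores_sorted, color="steelblue")
for y, value in enumerate(scores_sorted):
    ax.text(value + 0.05, y, f"{value:.1f}", va="center")  # label each bar directly
ax.set_xlabel("Average effectiveness score (1-5)")
ax.set_xlim(0, 5)
for side in ("top", "right"):  # strip unnecessary borders (chart junk)
    ax.spines[side].set_visible(False)
plt.tight_layout()
plt.show()
```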

Poor visualizations fail to illuminate the story.  You have made it more difficult for your users to gain new insights or make decisions.  Over time, your readers shut down and opt to get their information elsewhere.

5.    Little attention to the variability of the data

As the L&D community increasingly embraces measurement and evaluation, it has also ratcheted up its reporting. Most L&D organizations publish reports with activity data, effectiveness scores, test results, cost data and often business results.

"The Signal and the Noise"

What we rarely consider, however, is the underlying variability of the data.  Without quantifying the variance, we may over-react to changes that are noise or under-react to changes that look like noise but are in fact signals.  We do our users a disservice when we don’t reveal the normal variability of the data that guides their decisions.
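One simple way to quantify that variability is to compute control limits from the measure's own history and flag only the points that fall outside them. The sketch below uses the common plus-or-minus three standard deviations convention on illustrative monthly scores; the same calculation is easy to reproduce in Excel, as noted in the recommendations later in this post.

```python
# Illustrative sketch: separate signal from noise with simple control limits.
import statistics

monthly_scores = [4.1, 4.0, 4.2, 3.9, 4.1, 4.3, 4.0, 3.6, 4.1, 4.2]

mean = statistics.mean(monthly_scores)
sd = statistics.stdev(monthly_scores)
upper, lower = mean + 3 * sd, mean - 3 * sd  # common +/- 3 sigma control limits

for month, score in enumerate(monthly_scores, start=1):
    flag = "signal" if (score > upper or score < lower) else "noise"
    print(f"Month {month:2d}: {score:.2f} -> {flag}  (limits {lower:.2f} to {upper:.2f})")
```

In this made-up series, even the dip to 3.6 stays inside the limits, which is exactly the kind of change a report reader might otherwise over-react to.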

6.    No answers to the “so what?” question

This last mile problem is perhaps the most frustrating. Assume you have addressed the other five issues. You have role-relevant reports, reports provided based on decision cycles, goals and exception thresholds based on data variability and great visualizations. 

Unfortunately, we all too often present data “as is” without any insights, context, or meaningful recommendations. In an attempt to add something that looks intelligent, we occasionally add text that describes the graph or chart (e.g. “This chart shows that our learning performance declined in Q2.”). Perhaps we think our audience knows more than we do about what is driving the result. Often, we are rushed to publish the report and don’t have time to investigate underlying causes. We rationalize that it’s better to get the data out there as is than publish it late or not at all.  

Amanda Cox, the New York Times graphics editor, once said: “Nothing really important is headlined, ‘Here is some data. Hope you find something interesting.’” 

If we are going to shorten the last mile, then we have an obligation to highlight the insights for our users. Otherwise, we have left them standing on a lonely road hoping to hitch a ride home.

Recommendations to solve the "Last Mile" problems

There is a lot you can do to address the “Last Mile Problem" in L&D. Here are six suggestions that can help:
  • Create a reporting strategy. Consider your audiences, the decisions they need to make and how to present the data to speed time to insight. Include in your reporting strategy the frequency of reports and data to match the decision-making cadence of each role.
  • Identify a performance threshold for every measure on your report or dashboard that you intend to actively manage.  In the beginning, use historical data to set a goal or use benchmarks if they are available. Over time, set stretch goals to incent your team to improve its performance.
  • Learn about data visualization best practices and apply them.  Start with Stephen Few’s books. They are fun to read and even if you only pick up a few tips, you will see substantive improvements in your graphs and charts.
  • Periodically review the variability of your data to see if its behavior has changed. Use control charts (they are easy to create in Excel) for highly variable data to discern when your data has moved above or below your control limits.
  • Add meaningful annotations to your graphs and charts. Do your homework before you present your data. If a result looks odd, follow up and try to uncover the cause of an anomaly.
  • For senior leaders, in particular, discuss the data in a meeting (virtual or face-to-face). Make the meeting about exploring and understanding the findings and agreeing on actions and accountability for follow up. Use these meetings to create a deeper understanding of how you can continually improve the quality of your reporting.

If you start taking these recommendations to heart, your last mile could well shrink to a short, manageable distance.

Have you experienced a "Last Mile" problem? I'd love to hear from you. Follow me on Twitter @peggyparskey or connect with me on LinkedIn at www.linkedin.com/in/peggy-parskey

Saturday, July 23, 2016

Want to Make an Impact with Your Training Impact Study? Avoid these 4 Pitfalls.


Training impact studies are quite common within the L&D community. Reserved for the most strategic or visible programs, these studies aim to show that the training program has resulted in new behaviors on the job and that these behaviors led to a business impact. The business impacts are measures critical to business leaders such as increased sales, improved productivity or greater employee retention.

A well-executed training impact study can be revealing and not only provide evidence of training impact, but also demonstrate to business leaders where improvements are warranted.  This is where the value lies: specific recommendations on how to improve the training. If you do it right, you can also identify how to get even more benefit from employees who have already been trained.


Unfortunately, too many impact studies are, well, not very impactful. They fail to produce meaningful action, process improvements or even acknowledgement that the findings have merit.  This is a shame, not to mention a waste of time, funds and energy.  After all, if your impact study doesn’t effect change, of what use is it? And after having spent time, resources and money, who wants to have their work sit on a virtual shelf?

Why might your impact study fall victim to this fate? Here are four pitfalls you should avoid to ensure you get maximum impact from your impact study.  
  1. No ownership for action
  2. Your stakeholders didn't like or anticipate the results
  3. The users of the study question your evaluation methods or the quality of your data
  4. The final report wasn't action oriented.
Let's explore each of these pitfalls and discuss what you can do proactively to avoid them in the future.

1. No ownership for action

If your impact study is going to make an impact, then somebody (or several somebodies) needs to be accountable for acting on the recommendations. Unfortunately, ownership for action is often not defined when the study begins and no one knows ‘who’s on first.’ Is the training program manager accountable? Is there an identified business leader who can improve how managers support their trainees? When you lack clarity on ownership responsibilities, no one feels that he/she is on the hook to follow through.

What to do proactively: When you launch the study, identify who cares about the results and where and how to involve them. Have them sign off on your approach, your assumptions and hypotheses. Finally, get them to agree on the role they will play at the study’s conclusion and what actions they are prepared to take.

2. Your stakeholders didn’t like or anticipate the results 

Stakeholders don’t like surprises. Getting an unanticipated or negative result will rarely be well received. When this happens, the tendency is to question everything about the study, which in turn creates a reluctance to act (see Pitfall #3).

What to do proactively: Michael Quinn Patton wrote an invaluable book called “The Essentials of Utilization-Focused Evaluation”. In the book, he suggests simulating the use of the findings. The simulation engages stakeholders to explore the range of possible results and the underlying root causes. What should you investigate if you find that the program is highly successful, but only with a subset of the population? What further data should you consider if you find that the program was well received, but fizzled when employees tried to apply the concepts in their work? The simulation process not only prepares stakeholders for possible negative results, but also helps to identify, in advance, how to uncover root causes. 

3. The users of the study question your evaluation methods or the credibility of your data.

Related to pitfall #2, there is nothing quite like presenting the results of a months-long impact study only to have someone question your measurement approach or take pot shots at your data. A skeptic about your methods can undercut the findings and leave you exposed and vulnerable.

What to do proactively: When you are engaging your stakeholders, get their input about the project as well as how you will assess impact and what data you will collect. Do they trust self-report data or consider it useless? Do they question the quality of the business data because no one has updated the demographics to reflect organizational changes? If you have skeptics in your midst (and who doesn’t?), identify them early.  Talk to them earnestly about data integrity issues and seek their ideas on how to mitigate the risks. Most often, these same skeptics can suggest complementary methods that will make them feel more at ease and will improve the quality of your study.

4. The final report wasn’t action oriented

In telecommunications, there is a phrase, “The Last Mile Problem.” This expression refers to the problem of implementing infrastructure without considering the “last mile” of connecting it to the end consumer.  In evaluation, we have a serious “last-mile problem”. How many reports do you read that are filled with statistical jargon, detailed tables or poor visualizations that don’t provide insights or suggest what should be done differently? Audiences listen to these presentations but often have no clue as to what they should do differently.

What to do proactively: In Patton’s book, he cites a rule of thumb from the Canadian Health Services Research Foundation. The format is 1:3:25 and works like this: one page for main messages and relevant conclusions, three pages for an executive summary of the main findings, and twenty-five pages for a comprehensive, plain-language report. Keep your findings and recommendations succinct. Eliminate jargon. Be explicit about what should happen next and who owns the action. 


In summary, begin with the end in mind before you launch your study. Think about who cares about the training program, how to get them on board and how to involve them. Identify the accountable parties before you start. Set stakeholder expectations for the possible findings of your study. Gain their support for your evaluation approach and the type of data you will collect. Finally, make your recommendations action-focused. By engaging the right people throughout the process, your training impact study has a good chance of making a meaningful impact.

If you have experienced one of these pitfalls, or others not mentioned here, I'd love to hear from you. Follow me on Twitter @peggyparskey or connect with me on LinkedIn at www.linkedin.com/in/peggy-parskey