How to Measure the Effectiveness of Your Leadership Development Program
A primary goal of any leadership development program is tight alignment with business objectives. Measuring the effectiveness of leadership training is critical to proving the program's value to the organization, but measuring learning outcomes can be a challenge. This measurement of learning effectiveness is known as the learning return on investment, or learning ROI. Measuring the learning ROI of a leadership development program should involve both qualitative and quantitative metrics that link, directly or indirectly, to business objectives.
In their Handbook of Training Evaluation and Measurement Methods, Jack and Patti Phillips of the ROI Institute identify three phases of learning analytics:
- The first is assessment. What are the business needs, the current performance gaps, and the skill requirements?
- The second is measurement. What are the performance metrics? The first step in performing learning analytics on a leadership development program is to clarify the intended business outcomes of the program and identify the most appropriate ways to measure them. Unfortunately, all too often, those responsible for designing leadership development programs don't think carefully about these measures until after the fact. Selecting the right measures, ones that link to the business's key performance indicators and drive business value, is crucial.
- The third is evaluation. To what extent did the learning achieve its intended purposes?
How to assess your leadership development training
The four levels of the Kirkpatrick Model – reaction, learning, behavior, and results – provide a useful framework for objectively assessing a leadership development program.
Level 1: Reaction
The focus here is on learner satisfaction and general reaction to the training. Program participants rate the relevance of the program to their leadership role: whether it was a good use of their time, whether they would recommend it to others, and so forth. They also answer questions about whether the training met the learning objectives and their learning needs. This can be administered as a simple questionnaire using an off-the-shelf cloud-based survey tool.
Level 2: Learning
The goal here is to determine whether learners gained knowledge and skills from the leadership development program; it's an assessment of whether the learned information can be applied. The program designer should track course usage, completion rates, and course pass rates. Another useful measure is an application outcome survey, a follow-up questionnaire in which respondents report how frequently they use the skills and competencies they learned. Crafting questions that accurately reflect the application of learned competencies and knowledge is the key to making this survey meaningful.
Level 3: Behavior
The objective of behavior change measurement is to understand whether training actually transferred to on-the-job behaviors. It's designed to measure learner competency and the extent of behavioral improvement. The best way to conduct this kind of assessment is to administer multi-rater pulse surveys to a leader's direct reports. The accumulated results of a multi-rater questionnaire are an excellent source of data for determining behavioral learning outcomes: an objective assessment of program participants and their improvement (or lack thereof) in leadership behaviors, according to those who matter most, their direct reports.
Here's Wharton Professor Peter Cappelli discussing the benefits of running pulse surveys:
[Embedded video: Wharton Professor Peter Cappelli on pulse surveys]
Level 4: Results
The goal here is to investigate whether training had an impact on the bottom line. Do the benefits exceed the costs? Is the return on investment favorable? The data for this calculation is best acquired by correlating the leader performance data from Level 3 analysis with data on each leader's direct reports, including measures of their retention, engagement, and productivity. The objective is to quantify the hard dollar value associated with improvement in retention, engagement, productivity, and other indicators. The calculation itself is straightforward: divide the dollar value of the benefit by the dollars invested in the program.
The hardest part is quantifying the benefit, which often requires making some assumptions.
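For illustration, take some hypothetical numbers: suppose a program costs $200,000 to design and deliver, and the quantified benefits from improved retention, engagement, and productivity are estimated at $300,000. Dividing the benefit by the investment gives $300,000 / $200,000 = 1.5, meaning the program returns $1.50 for every dollar invested; any ratio above 1.0 indicates the benefits exceed the costs.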
Sometimes program directors attempt to quantify the benefits of a specific leadership development program through a direct causal link to broader business outcomes such as increased revenue, cost savings, or improved customer satisfaction. The issue is that many other variables would need to be accounted for in a statistical model to validate causation. It's not impossible, though. Causation, in this case, is best established through an experimental design involving multiple training groups, non-training control groups, random assignment to conditions, enough participants to yield an adequate sample size, and quantitative measures taken both before and after training. However, while possible, these kinds of experiments can be complex to set up and execute.
In summary, leadership development programs require evaluation at all four Kirkpatrick levels, and it's essential to plan for that evaluation before rolling out a program.
Heide Abelli is the SVP of Content Product Management at Skillsoft.