Trainer Mistake 4 – Poor Use of Evaluations
By Bill Wilder, Life Cycle Institute
Many subject matter experts and managers are asked or required to deliver training. Most have little training experience, so they make mistakes. At the 2015 Association for Talent Development conference, Bob Pike, a widely respected learning authority, presented The 7 Greatest Mistakes Trainers Make and How to Avoid Them.
1 – No transfer strategy
2 – Too much content
3 – Failure to chunk content
4 – Poor use of evaluations
5 – Managing questions in the classroom
6 – Lack of planned and impromptu closers, openers, revisits, and energizers
7 – Not available before and after formal class
Mistake #4 is the poor use of evaluations.
We have all been in training that ends with the hurried handout of a short evaluation form, or a request to go online to complete an evaluation. By this point, most participants are done and ready to move on.
Bob Pike believes this is a poor use of evaluations.
Let’s step back for a brief intro to commonly accepted training evaluation processes. In the learning discipline, Donald Kirkpatrick’s four levels have emerged as the de facto standard. In his model we evaluate four levels of learning.
1 - Reaction
2 - Learning
3 - Application
4 - Results
The evaluation referred to in the first paragraph is a level one evaluation. We are simply asking the student about their reaction. This is often referred to as the “smiley sheet.” It is easy, cheap… and valuable, when well applied. This is why nearly all training programs employ them.
The other levels are more difficult. Levels three and four cannot be completed until well after the training event. They require observing students and gathering data on how they apply what they have learned. I’ve seen numbers suggesting that less than five percent of training programs receive this level of scrutiny. It is a big investment that should only be undertaken for large training programs designed to drive short-term behavior change that is mission-critical to the organization.
Level two evaluations are typically executed through tests, demonstrations, or presentations during the training event. They are becoming much more common and are often a requirement of self-paced learning. While more common, they do require effort and money to execute.
Level one evaluations are simple, maybe too simple. This leads to some carelessness. Bob Pike suggests that a few little changes will make a big difference in the value of the data.
Instead of waiting until the end, when everyone is rushed and likely to have forgotten noteworthy feedback, hand out the forms early. I have seen them distributed at the beginning of a program, with break time allowed throughout to write comments. At a minimum, forms should be distributed well before the class is over, perhaps at the beginning of the last break. Also, collect them prior to the class’s conclusion.
Ask for quantitative and qualitative feedback on the facilitator, content, materials, and participant. Participant? Yes, participant. To what extent did they engage? What was their contribution to the collective learning? How did they prepare? What are their application goals?
I will add another recommendation that comes from Donald Kirkpatrick: Set targets and communicate the results. Use the data to develop your facilitators, processes, and your courses.
Bill Wilder, M.Ed is the founder and director of Life Cycle Institute, the learning, leadership and change management practice at Life Cycle Engineering. The Institute integrates the science of learning and the science of change management to help organizations produce results through behavior change. You can reach Bill at bwilder@LCE.com.
© Life Cycle Engineering
For More Information
843.744.7110 | info@LCE.com