Almost there, and this is the last active step you’ll take when it comes to rolling out your health and safety training. Today we’re talking evaluation. And why is it so important?
Because it’s easy to assume that once you’ve provided training to workers, you can pat yourself on the back and call it done. But if you do, you’ve put the cart before the horse. If your goal is to deliver effective training that changes your workers’ behaviour and skills on the job, then you need to confirm that the training actually was effective. The standard way to do this is to conduct a post-training evaluation.
Did the employees like the training? Did they feel they learned? You can find this out with post-training surveys: hand out paper-based surveys after the session if you like, but you may get better results if the survey is anonymous.
The assessments you conducted during the training should evaluate the employees’ actual learning against the objectives. These might include simple tests for knowledge, or case studies, job simulations, and hands-on exercises for skills and attitudes.
Are the workers taking the new knowledge, skills, and attitudes from training and applying them at work, where it counts? Observing employees’ on-the-job behaviour will show this, as will other performance-based metrics.
Did the training help reach the desired business goal (e.g. were workplace incidents reduced)?
So where do you start?
There is one dominant model for evaluating training effectiveness: the Kirkpatrick Model. It is built around a four-step process in which each step (or level) adds precision, but also requires more time-consuming analysis and greater cost.
The following is a brief overview of each step:
Level 1: Evaluating reactions
This level measures how your participants valued the training: were they engaged, and do they believe they can apply what they learned? Evaluation tools include end-of-course surveys that collect feedback on whether participants are satisfied with the training and believe it was effective.
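Once the surveys come back, summarising them can be as simple as averaging the responses per question. Here is a minimal sketch, assuming a hypothetical 5-point agreement scale (1 = strongly disagree, 5 = strongly agree); the question wording, scores, and the 3.5 review threshold are all illustrative, not taken from any real survey tool.

```python
# Summarise end-of-course survey responses on an assumed 5-point scale.
from statistics import mean

responses = {
    "I was engaged during the training": [4, 5, 3, 4, 5],
    "I can apply what I learned on the job": [3, 4, 4, 2, 4],
}

for question, scores in responses.items():
    avg = mean(scores)
    # Flag any question whose average falls below an (arbitrary) 3.5 threshold.
    flag = "  <- review" if avg < 3.5 else ""
    print(f"{question}: {avg:.1f}/5{flag}")
```

A question that averages below your chosen threshold is a prompt to dig deeper, not proof the training failed.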
Level 2: Evaluating learning
Level two measures whether participants actually learned what the training set out to teach.
At this level, your evaluation tools will include knowledge tests, case studies, job simulations, and hands-on exercises.
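A common way to read knowledge-test results at this level is to compare pre- and post-training scores. The sketch below assumes hypothetical scores out of 100 and an illustrative 80% pass mark; worker names and figures are made up.

```python
# Compare pre- and post-training test scores (all figures illustrative).
pre_scores = {"Ana": 55, "Ben": 70, "Caro": 60}
post_scores = {"Ana": 85, "Ben": 90, "Caro": 75}

PASS_MARK = 80  # assumed pass threshold

for worker, pre in pre_scores.items():
    post = post_scores[worker]
    status = "pass" if post >= PASS_MARK else "needs retraining"
    print(f"{worker}: {pre} -> {post} (gain {post - pre:+d}), {status}")

# Share of workers at or above the pass mark after training.
pass_rate = sum(s >= PASS_MARK for s in post_scores.values()) / len(post_scores)
print(f"Post-test pass rate: {pass_rate:.0%}")
```

The gain over the pre-test matters as much as the final score: a high post-test score with no gain may mean the test measured what workers already knew.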
Level 3: Evaluating behaviour
Here the measures assess whether training has had a positive effect on job performance (i.e. whether the learning transfers to the job). This level requires a cost-benefit decision, because it can be resource-intensive to evaluate and the analysis is more time-consuming. Consider a level-three evaluation for safety skills with a high consequence of error, where you want to make sure skills and performance transfer to the job.
Your evaluation tools will include on-the-job observation and other performance-based metrics.
Level 4: Evaluating results
At this level, you will measure whether the training is achieving business results. To do this, ask yourself questions such as: have workplace incidents been reduced?
The challenge here is that many factors influence performance, so it is difficult to attribute improvements to the training alone.
Here your evaluation tools will include incident and other business-performance metrics tracked against the goal you set for the training.
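For the "were incidents reduced?" question, one widely used metric is an OSHA-style recordable-incident rate, normalised per 200,000 hours worked (roughly 100 full-time workers for a year). The sketch below is a minimal illustration; the incident counts and hours are invented, and, as noted above, a drop in the rate cannot be attributed to training alone.

```python
# Compare an OSHA-style recordable-incident rate before and after training.
# Rate = recordable incidents x 200,000 / hours worked
# (200,000 h ~ 100 full-time workers for one year). Figures are illustrative.

def incident_rate(recordables: int, hours_worked: float) -> float:
    return recordables * 200_000 / hours_worked

before = incident_rate(recordables=6, hours_worked=400_000)
after = incident_rate(recordables=3, hours_worked=400_000)

print(f"Rate before training: {before:.1f}")
print(f"Rate after training:  {after:.1f}")
print(f"Change: {(after - before) / before:.0%}")
```

Tracking the rate over several periods, rather than a single before/after pair, makes it easier to separate a real trend from normal variation.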
Sources:
https://ohsonline.com/Articles/2003/06/Evaluating-the-Effectiveness-of-Safety-Training.aspx
http://www.thegreentie.org/voices/measuring-the-effectiveness-of-your-training-program
https://www.onepetro.org/conference-paper/SPE-178399-MS
http://www.aidic.it/cet/14/36/012.pdf
http://ehstoday.com/news/ehs_imp_32629
http://blog.convergencetraining.com/how-to-create-an-effective-training-program-8-steps-to-success
http://www.acc.co.nz/PRD_EXT_CSMP/groups/external_ip/documents/publications_promotion/wcm000924.pdf
http://www.startupdonut.co.uk/startup/employees/people-management/how-to-identify-training-needs