SafetyCloud Blog

Evaluating your HSE training’s effectiveness

Written by NOSA | Jul 13, 2017 9:00:00 AM

You’re almost there: this is the last active step you’ll take in rolling out your health and safety training. Today we’re talking about evaluation. Why is it so important?

 

Because it’s easy to assume that once you’ve provided training to workers, you can pat yourself on the back and consider the job done. But if you do, you’ve put the cart before the horse. If your goal is to deliver effective training that changes your workers’ behaviour and skills on the job, then you need to confirm that the training was effective. The standard way to do this is to conduct a post-training evaluation.

 

This is done at four different levels:

Evaluate your employees’ reaction to training

Did the employees like the training? Did they feel like they learned? You can find this out by:

  • observing the employees during training
  • asking their opinions
  • handing out surveys.

 

You can hand out paper-based surveys after training, but you may get more honest answers if the survey is anonymous.

 

Evaluate your employees’ actual learning

The assessments you conducted during the training should evaluate how well employees actually learned the objectives. These might include simple written tests for knowledge, and case studies, job simulations or hands-on exercises for skills and attitudes.

 

Evaluate your employees’ post-training job behaviour

Are the workers taking the new knowledge, skills and attitudes from training and applying them at work, where it counts? Observing employees’ on-the-job behaviour will tell you, as will other performance-based metrics.

 

Evaluate quantifiable business results

Did the training help you reach the desired business goal (e.g. were workplace incidents reduced)?

 

So where do you start?

There is one dominant model for evaluating training effectiveness: the Kirkpatrick Model. It is built around a four-step process in which each step (or level) adds precision, but also requires more time-consuming analysis and greater cost.

 

 

The following is a brief overview of each step:

Level 1: Evaluating reactions

This level measures how participants valued the training: whether they were engaged, and whether they believe they can apply what they learned. Evaluation tools include end-of-course surveys that collect feedback on whether participants are satisfied with the training and whether they believe it was effective.
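For instance, here is a minimal sketch (in Python, using made-up questions and 1–5 ratings rather than real survey data) of how anonymous end-of-course ratings might be summarised to flag topics that need follow-up:

# Summarise anonymous end-of-course survey ratings (hypothetical data).
from statistics import mean

responses = {
    "The training was relevant to my job": [5, 4, 4, 3, 5],
    "I feel able to apply what I learned": [4, 4, 3, 4, 5],
    "The trainer kept me engaged": [5, 5, 4, 4, 4],
}

for question, ratings in responses.items():
    avg = mean(ratings)
    # Flag any question averaging below 4 out of 5 for follow-up.
    flag = " <-- review" if avg < 4.0 else ""
    print(f"{question}: {avg:.1f}/5{flag}")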

 

Level 2: Evaluating learning

Level two measures whether participants actually learned what the training set out to teach them.

At this level, your evaluation tools will include:

  • pre- and post-tests, and quizzes (a simple scoring sketch follows this list)
  • observation of learners (e.g. did an employee execute a particular skill effectively?)
  • successful completion of activities.
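As an illustration, here is a minimal sketch comparing hypothetical pre- and post-test scores against an assumed pass mark (the names, scores and threshold are placeholders, not real assessment data):

PASS_MARK = 80  # assumed pass mark; adjust to your own assessment criteria

# Hypothetical pre- and post-test scores out of 100 for each learner.
scores = {
    "Learner A": (55, 90),
    "Learner B": (60, 75),
    "Learner C": (70, 85),
}

for name, (pre, post) in scores.items():
    gain = post - pre
    status = "passed" if post >= PASS_MARK else "needs refresher training"
    print(f"{name}: {pre} -> {post} (gain {gain:+d}) - {status}")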

 

Level 3: Evaluating behaviour

Here the measures assess whether training has had a positive effect on job performance, i.e. whether the learning transferred to the job. This level requires a cost-benefit decision, because it can be resource-intensive and time-consuming to evaluate. You may want to reserve a ‘level three’ evaluation for safety skills with a high consequence of error, where you want to make sure that safety skills and performance transfer to the job.

Your evaluation tools will include:

  • work observation
  • focus groups
  • interviews with workers and management.

 

Level 4: Evaluating results

At this level, you will measure whether the training is achieving results. To do this, ask yourself such questions as:

  • Is the training improving safety performance?
  • Has training resulted in better quality?
  • Is there increased productivity?
  • Have sales increased?
  • Has customer service improved?

The challenge here is that many factors influence performance, so it is difficult to attribute improved performance to training alone.

Here your evaluation tools will include:

  • measuring the reduction in the number or severity of incidents or accidents compared to the organisation’s past performance (or contract goals)
  • measuring the reduction in total recordable cases (TRC)
  • measuring the reduction in the DART rate (days away, restricted or transferred); a calculation sketch follows this list.
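Both TRC and DART figures are commonly expressed as OSHA-style incidence rates per 200,000 hours worked (roughly 100 full-time employees for a year). Here is a minimal sketch, using made-up case counts and hours, of how you might compare the rates before and after training:

def incidence_rate(cases, hours_worked):
    # Cases per 200,000 employee-hours (the standard OSHA basis).
    return cases * 200_000 / hours_worked

# Hypothetical case counts and hours for the year before and after training.
periods = {
    "before training": {"trc": 14, "dart": 6, "hours": 480_000},
    "after training": {"trc": 9, "dart": 3, "hours": 500_000},
}

for label, data in periods.items():
    trc_rate = incidence_rate(data["trc"], data["hours"])
    dart_rate = incidence_rate(data["dart"], data["hours"])
    print(f"{label}: TRC rate {trc_rate:.2f}, DART rate {dart_rate:.2f}")

A falling rate across comparable periods is a stronger signal than raw case counts, since it controls for changes in hours worked.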

 
