Safety Metrics: Are They Measuring Up?

Posted by Tanya Hewitt on Thursday June 8th, 2017

The founder of modern management practices, Peter Drucker, said “What gets measured, gets improved”[1]. The Plan-Do-Check-Act cycle attributed to Deming and Shewhart[2] is at the core of improvement initiatives, and makes measurement an inseparable part of them. All in all, measurement is important.

Many industries do ample measurement and use such measurements in their routine practices. The process industry is likely the leader in this domain, having very extensive metrics and formulae to calculate them. Quoting from [3] (emphasis added):

An employee injury that occurs at a process location, but in which the process plays no direct part, is not reportable as a [Process Safety Incident] (though it could be an OSHA or other agency reportable injury). The intent of this criterion is to identify those incidents that are related to process safety as distinguished from personnel safety incidents that are not process-related. For example, a fall from a ladder resulting in a lost workday injury is not reportable simply because it occurred at a process unit. However, if the fall resulted from a chemical release, then the incident is reportable.
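To make that distinction concrete, here is a minimal sketch, in Python, of the reportability logic the quoted criterion implies. The function and argument names are hypothetical, not an actual CCPS tool, and the real guidance walks the analyst through many more levels of possibilities.

```python
# Minimal sketch of the reportability logic in the quoted criterion.
# The function and argument names are hypothetical (not from CCPS);
# the actual guidance covers many more levels of possibilities.

def is_process_safety_incident(occurred_at_process_location: bool,
                               process_played_direct_part: bool) -> bool:
    """An injury counts as a process safety incident only if the process
    itself played a direct part, not merely because it happened at a
    process unit."""
    return occurred_at_process_location and process_played_direct_part


# A fall from a ladder at a process unit, with no process involvement:
print(is_process_safety_incident(True, False))  # False -> not a PSI (may still be OSHA-reportable)

# The same fall, but caused by a chemical release:
print(is_process_safety_incident(True, True))   # True -> reportable PSI
```

Note that each boolean input to this toy function is itself a judgement someone has to make.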

We see that even seemingly objective metrics involve some form of judgement. There is in fact a decision tree to help the analyst through the many levels of possibilities in reporting a process safety incident. Once that decision is made, the most frequent words found in the formulae are “incidents” and “severity”. Looking at industries such as construction[4], healthcare[5], shipping[6], aviation[7], and oil and gas[8], other frequently used metrics employ language such as “number of accidents”, “lost time”, “mortality/morbidity”, “wait times”, “readmission rate”, “cost”, “failure”, “deficiency”, and “number of reported events”. All of these metrics (or indicators) are lagging, but they also share other properties.

Negative - They are all based on an understanding of safety that “safe = absence of bad”. This premise leads those charged with a safety role to “look for bad”. This has some unintended consequences, such as Hollnagel’s WYLFIWYF (What You Look For Is What You Find)[9], and it poses a data challenge for ultra-safe industries, where there is little “bad” left to count.

Quantitative - They are all numeric. In our Western society, we have an inherent trust in number-driven data over qualitative data, notwithstanding that people respond best to stories to which they can relate. More worryingly, safety metrics are often presented in aggregate, and decisions are made on these numbers without any understanding of how those numbers were generated[10]; a short sketch after this list illustrates the point.

Historical - They are (by definition) based on the past. This is premised on the assumption that past performance predicts future performance, which we know to be blatantly untrue.

Deceptive - A blind trust in this type of data was a key factor in what led to the Deepwater Horizon offshore oil rig disaster[11].

Game-able - Once attaining the metric becomes the goal, behaviour can be driven in undesirable directions. The Mad TV video is a good depiction of this, as is the first part of Sidney Dekker’s talk (both linked below).
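Returning to the aggregation concern raised under Quantitative, here is a small sketch with entirely hypothetical numbers (and a conventional 200,000-hour normalization, which is an assumption rather than anything prescribed in the references) showing how an aggregate rate can hide very different underlying patterns.

```python
# Two hypothetical units with identical annual incident rates but opposite trends.

NORMALIZATION_HOURS = 200_000  # a common convention: roughly 100 full-time workers for a year

def rate(incidents: int, hours: float) -> float:
    """Incidents per NORMALIZATION_HOURS of exposure."""
    return incidents * NORMALIZATION_HOURS / hours

# Unit A: many events early in the year, steadily improving quarter over quarter.
unit_a = {"incidents": [6, 4, 2, 0], "hours": [100_000] * 4}
# Unit B: a clean start, but a sharply worsening trend.
unit_b = {"incidents": [0, 2, 4, 6], "hours": [100_000] * 4}

for name, unit in (("A", unit_a), ("B", unit_b)):
    quarterly = [rate(i, h) for i, h in zip(unit["incidents"], unit["hours"])]
    annual = rate(sum(unit["incidents"]), sum(unit["hours"]))
    print(f"Unit {name}: quarterly rates {quarterly}, annual rate {annual:.1f}")
```

Both units report the same annual figure (6.0 per 200,000 hours), yet a board looking only at that aggregate would miss that one unit is improving while the other is deteriorating.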

It is worth remembering that many of the indicators we use are proxy indicators, representing concepts that are difficult to measure directly[12]. This leads us to measure indicators that are more accessible and more convenient to us. In short, we measure what we CAN, not what we SHOULD.

While there are many lagging indicators (of arguably dubious value), there are few current or leading indicators. Leading indicators are intended to help predict the future. They are not well established, but some novel indicators might include: training on how to deal with a workplace bully (and how successful that training is), pre-job briefs that are done in situ in advance of the work, and appreciating the larger context of why any work needs to be done. Looking again at the industries noted above, the process industry[3] uses language such as the number of inspections completed on time and the length of time to effect corrective actions, while the construction industry[4] proposes the percentage of workers trained and the hours per worker invested in accident prevention. These are all quantitative metrics, although the shipping industry[6] proposes the phrase “ability to …”, which is a novel leading indicator.

Other indicators could be more qualitatively focussed, such as a patient safety story opening a hospital’s board meeting, ensuring that a diversity of voices is heard as input to decisions that affect the organization, or defining value and qualitatively reporting against that definition.

Overall, good measurement is difficult, and one must resist the temptation to prioritize convenience over measurement worth. We need to expand our toolbox to include more than just lagging indicators and more than just quantitative indicators, and we need to be acutely aware of the shortcomings of the metrics we do use. The ideal safety metrics are likely a blend of lagging, current and leading indicators, both qualitative and quantitative, all thoughtfully employed.
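As a closing sketch of what such a blend might look like, the structure below tags each indicator by timing and by type; the indicators themselves are illustrative examples drawn loosely from this post, not an established taxonomy.

```python
# Illustrative blended indicator set: each indicator is tagged by timing
# (lagging / current / leading) and kind (quantitative / qualitative).
# The specific indicators are examples from this post, not a standard.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    timing: str  # "lagging", "current", or "leading"
    kind: str    # "quantitative" or "qualitative"

dashboard = [
    Indicator("Lost-time injury rate", "lagging", "quantitative"),
    Indicator("Inspections completed on time", "current", "quantitative"),
    Indicator("Hours per worker invested in accident prevention", "leading", "quantitative"),
    Indicator("Patient safety story opening the board meeting", "leading", "qualitative"),
    Indicator("Diversity of voices heard before major decisions", "leading", "qualitative"),
]

# A quick check that the set is actually blended, not all lagging and numeric:
print("Timings covered:", sorted({i.timing for i in dashboard}))
print("Kinds covered:", sorted({i.kind for i in dashboard}))
```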

Videos:

Mad TV video: https://www.youtube.com/watch?v=rK8UIGkzsf8 (0:00 – 2:35)

Dekker video: https://www.youtube.com/watch?v=bIygMgyxaHA (5:58 – 6:42, 8:05 – 10:52)

References:

[1] Principle #4 on https://www.entrepreneur.com/article/23748484

[2] https://en.wikipedia.org/wiki/W._Edwards_Deming and https://en.wikipedia.org/wiki/Walter_A._Shewhart

[3] CCPS. (2011). Process Safety Leading and Lagging Metrics, (January 2011), 44. https://www.aiche.org/sites/default/files/docs/pages/CCPS_ProcessSafety_Lagging_2011_2-24.pdf

[4] López-Arquillos, A., & Rubio-Romero, J. C. (2015). Proposed indicators of prevention through design in construction projects. Revista de La Construccion, 14(2), 58–64. http://doi.org/10.4067/S0718-915X2015000200008

[5] CIHI. (2016). CIHI Indicator Library. Retrieved from http://indicatorlibrary.cihi.ca/display/HSPIL/Indicator+Library?desktop

[6] Rialland, A., Nesheim, D. A., Norbeck, J. A., & Rødseth, Ø. J. (2014). Performance-based ship management contracts using the Shipping KPI standard. WMU Journal of Maritime Affairs, 13(2), 191–206. http://doi.org/10.1007/s13437-014-0058-9

[7] ICAO. (2014). Safety report. Retrieved from http://www.icao.int/safety/Documents/ICAO_2014 Safety Report_final_02042014_web.pdf

[8] VisualBi. (2016). Environment, Health and Safety Dashboard. Retrieved from http://visualbi.com/analytics/oil-and-gas/upstream-ep

[9] Cedergren, A., & Petersen, K. (2011). Prerequisites for learning from accident investigations – A cross-country comparison of national accident investigation boards. Safety Science, 49(8–9), 1238–1245. http://doi.org/10.1016/j.ssci.2011.04.005

[10] Hewitt, T. A., Chreim, S., Forster, A. J., Vanderloo, S., & Backman, C. (2015). Fix and forget or fix and report: a qualitative study of tensions at the front line of incident reporting. BMJ Quality & Safety, (March), 1–8. http://doi.org/10.1136/bmjqs-2014-003279

[11] BP “missed big hazards” before Gulf oil spill. (2012). Retrieved from https://www.theguardian.com/business/2012/jul/24/bp-missed-hazards-deepwater-horizon

[12] Hollnagel, E. (2011). Meaningful patient safety indicators? Retrieved from http://static.sdu.dk/mediafiles//4/7/3/%7B473E3AA0-38F6-4E6C-928E-07E7C76175B9%7DInauguralSeminarEH.pdf

[13] Six Sigma Daily. (2016). Run Chart with a Shift. Retrieved from http://www.sixsigmadaily.com/run-chart-shifts/


Tanya Hewitt is a PhD candidate in the Population Health Programme at the University of Ottawa. She is passionate about safety science, and her qualitative study is focused on incident reporting systems in hospitals. Tanya worked for 10 years as a licensing assessment officer and inspector, primarily of cancer clinics, for the Canadian Nuclear Safety Commission. She holds a graduate certificate in risk management and an MSc in medical physics, and has enthusiastically taken many (free) courses, from Social Psychology to the Science of Safety in Healthcare.
