Over the past two decades, safety resourcing, focus, and intervention have been subordinated to performance management dogma, which contends that “safety” can only be present as an expression of its measurement within a zero-harm paradigm. Within this paradigm, and from my observations across numerous organisations and industries, I can only conclude that organisations behave as though the effort invested in measuring safety data correlates directly with the amount of “safety” an organisation possesses. I have observed organisations that produce more than 70 pages of dashboard data for their leadership teams. I am not sure how they meaningfully interpret that volume of data.
When reviewing safety performance metrics, I am struck by the absence of poor performance data: compliance audits routinely return 95-100% compliance. Similarly, culture surveys reflect high levels of engagement, particularly following the latest initiative to engage the workforce. LTIFR and TRIFR metrics celebrate the virtual extinction of incidents, and operational risk assessments imply that operational risk has been treated and mitigated.
It is an interesting feature of these safety performance metrics that dashboard indicators overwhelmingly reinforce the great state of the respective organisation’s safety performance. In such conditions, significant consequence events arrive as a “surprise” aberration and are then attributed to an unauthorised deviation by a non-conforming individual or team. Such a performance system fails to acknowledge that these variations (deviations) are a constant and accepted norm in the delivery of work, where conditions and constraints always vary.
When we examine the other performance metrics organisations use, such as those for production or financial measurement, their characteristics are fundamentally different from those of safety metrics. Financial performance data contains a variety of measures that produce consistent output in which both good and poor performance is illustrated and expected. This data accurately conveys performance, allowing the management team to adjust strategy, resourcing, and intervention, and to avoid unexpected “surprise” conditions and events.
In large and complex organisations, there is a consistent trend associated with fatal and catastrophic events: they are often experienced at facilities delivering industry-best safety performance on incident rates. Notable examples include Macondo, Texas City, and Esso Longford. Dekker and Pitzer (1) propose that the more attention flows to keeping TRIFR down, the more a climate and culture of risk secrecy may be created. This, in turn, puts downward pressure on honesty, openness, and sharing, erodes a culture of trust and learning, and opens an organisation up to the risk of a safety disaster.
The current safety performance paradigm consequently appeases the safety anxiety of the leaders we have charged with due diligence responsibility. In the phraseology of Sidney Dekker, we produce LGIs (Looking Good Indexes). These dashboard safety indicators convey the desired safety result rather than reflecting the true operational condition. We design and operate metrics to produce the desired result.
The misdirection of safety performance measurement can be illustrated by research in healthcare. In our hospital system, hand hygiene is a primary strategy to prevent hospital-acquired infection. In most cases, when a patient is admitted to hospital, the greatest risk to the patient is not the disease or condition they arrive with, but potential infection from those who treat them. A recent study in the MJA (2) reported that there are 165,000 cases of hospital-acquired infection in Australia each year, and 6,000 deaths from sepsis in Australian and New Zealand ICUs each year; hence the interest of clinical governance in hand hygiene compliance rates. To assess compliance, hospitals routinely have infection control staff perform observational compliance assessments. These assessments routinely report compliance rates in the high 90s. This compliance rate features in clinical governance dashboards to provide assurance of the state of hand hygiene.
A recent study of hand hygiene compliance in BMJ Quality &amp; Safety (3) examined the accuracy of this compliance assessment method. The authors used a real-time location system (RTLS) to record all uses of alcohol-based hand rub and soap for 8 months in two units of an academic acute care hospital. The RTLS also tracked the movement of the hospital’s hand hygiene auditors. Rates of hand hygiene events per dispenser per hour, as measured by the RTLS, were compared between dispensers within sight of auditors and those not exposed to auditors.
The study found that hand hygiene event rates were approximately threefold higher in hallways within eyesight of an auditor than when no auditor was visible, and that the increase occurred after the auditors’ arrival. The system feeding the dashboard was delivering the desired result, not the actual condition.
If we were to challenge safety compliance metrics, I suggest we would find a similar result with much of the audit, compliance, and competence testing we do. I know of organisations that require online induction training to be completed before a worker can enter the worksite. Ingenious workplaces have assigned the job of completing the induction on behalf of other workers to a specialist who can finish the test quickly and efficiently. The resulting dashboard then reports 100% compliance for all approved workers.
I propose that if our cars’ dashboards were built under the current paradigm for safety performance dashboards, the result would be as follows:
- Your fuel gauge would always read full, regardless of how much fuel was actually in the tank.
- The oil pressure would always be optimum.
- Your speed would always indicate the correct speed for the zone you were driving in.
As the driver of this vehicle, you would inevitably be shocked when the fuel tank ran dry and the engine sputtered to a halt without warning. You would be flummoxed when your licence was unexpectedly revoked for multiple speeding infractions in one day. You would be panic-stricken when the engine unexpectedly seized on a rail crossing, the car having lost its engine oil from a leak. I can only hope that safety specialists do not build my car dashboard.
The question I pose to our safety profession is this: Is the purpose of measurement to prove the good safety work of leadership and safety teams, or to engage management with real knowledge of operational conditions that truly reflects how work is done?
- Dekker, S. W. A., & Pitzer, C. (2016). Examining the asymptote in safety progress: A literature review. International Journal of Occupational Safety and Ergonomics, 22(1), 57-65.
- Heldens, M., Schout, M., Hammond, N. E., Bass, F., Delaney, A., & Finfer, S. R. (2018). Sepsis incidence and mortality are underestimated in Australian intensive care unit administrative data. Medical Journal of Australia, 209(6), 255-260.
- Srigley, J. A., et al. (2014). Quantification of the Hawthorne effect in hand hygiene compliance monitoring using an electronic monitoring system: A retrospective cohort study. BMJ Quality & Safety, 23, 974-980.