In the weeks after the great Bunnings sausage saga of 2018, the outcry about just how silly some safety decisions can be brought a wry smile to my face, and it has given many a humorous analogy for why we should focus on high-consequence risk. Although I have seen a hot works permit and a safe work method statement for the site barbecue, thankfully almost all top-tier construction companies have a focus on high-consequence risk, be that Global Minimum Requirements, Fatal and Severe Risks or Critical Risk programs, established in both safety I and safety II frameworks. In the safety I world of retributive justice, if you break one of these sacred rules you can expect to be given the next available window seat on the flight home, after you are ‘just cultured’ first, of course. In the safety II version they no longer use Golden Rules, preferring critical control management with a level of freedom within the risk management framework and a more restorative justice approach. It also acknowledges that innovation and building organisational resilience (apologies for the buzzword) require trial and error, but that the consequences of some errors are simply too great to tolerate. In safety II, people are the solution, but it seems that when it comes to critical risk management that’s not always the case. Like fighting tribes, religions or family members, the sooner safety I and safety II stop fighting over the moral high ground and remember that we are all ultimately after the same thing, the better. One thing you’ll get almost no argument about from either camp is that safety is an ethical responsibility.
Ethics is the topic of many studies, and one of the most widely used analogies within law enforcement is the “Dirty Harry Problem”. Traditionally, a Dirty Harry Problem is a genuine moral dilemma in which a person faces the choice to break the rules to achieve an urgent and unquestionably good outcome. People encounter these moral dilemmas in the workplace, such as finding a critical control not implemented and deciding whether to act upon it. They may use ‘people are the solution’ as the excuse for not acting, which is fundamentally against everything safety I and safety II are about and a shorthand interpretation of the safety II principles. In doing so they are choosing not to act upon the organisation’s edict to stop work, and instead apply the ‘people are the solution’ principle in isolation. Similarly, Dirty Harry applied his own brand of justice in killing (an absolute grub of a human) with total disregard for the rule of law and the justice system, considering himself the solution to the problem of an ineffective justice system. Paraphrasing Klockars (1980) and Albanese (2008), the wrongness associated with dirty means is not in question, but rather the shorthand interpretation of the principles that concludes that the “ends justify the means”, suggesting that any available course of action is acceptable so long as it results in greater total happiness. This in effect amounts to noble cause corruption. There is a clear choice – a moral dilemma – and ‘I don’t want to stop the work because it will make someone unhappy’ is not an ethical decision; it’s a cop out.
Do the ends justify the means?
(It doesn’t matter how we get there, as long as we get there)
Does it matter if we don’t stop work when we find a critical control missing, if nobody gets hurt? According to what is happening anecdotally on sites, organisational morals and a utilitarian lens applied to the question result in the work continuing. Typically we hear that people are adaptive and will come up with a solution. Ultimately, we achieved greater total happiness and nobody got hurt – but what if someone did get hurt? If a control is missing and someone suffers a minor injury, or we have a near miss, is that still okay? What happens when we seriously injure or kill someone after ignoring a critical control because we didn’t want to stop the work or make someone unhappy? Before you start thinking about pointing fingers and self-preservation, hold that thought…
Let’s flip that for a minute – do the means justify the ends?
(It doesn’t matter what the result is as long as we go to work the way we planned)
If you take the view that the result will be a by-product of how we go to work, does it matter if we don’t achieve production rates, so long as we still achieve what we set out to do safely? If we plan to go to work and undertake a task with reference to our critical controls, and plan to work within those boundaries with the right resources and people, will we encounter a missing critical control? In theory no, but “something always pops up or circumstances change”, according to the people who do the work. If you disregard the ends for a moment and think about how people go to work rather than the end goal, any moral dilemmas can be tackled from a different perspective. If you apply ‘people are the solution’ in this framework, adaptive people are able to come up with a way to work as planned. This view assumes that the process is what is important and that ethical dilemmas only arise if people see a need to stray from work as planned.
So what happens when we ignore that missing control? That really depends on the maturity of the organisation and the types of justice sought. In an underdeveloped organisation that seeks retributive justice, faced with scenario one, ‘people are the solution’ quickly goes out the window in favour of blame and self-preservation; in scenario two, the dilemma is overcome, although accountability may come for other, non-safety failures – cost, time and so on. In a more developed organisation that is open to restorative justice, in both scenarios the interest isn’t in who was right or wrong but in what the organisation needs to heal and how it can learn from the ethical dilemmas encountered. Where we currently fall down in each of these theories of justice is the lack of an independent judge. We typically don’t have independence in determining retributive, substantive, procedural or restorative justice. Whatever action is taken or learning occurs is undertaken by those directly involved.
There are different and more complex lenses through which to view ethics and morality in safety than just the utilitarian and deontological examples here, where one focusses on the collective and the end result but disregards the individual, and the other focusses on how work is undertaken and the safety of the individual without regard for the end result. This is a binary way of thinking: you are either good or evil. We know the world and ethical thinking are more complicated than good versus evil, but our application of safety can be very binary because we have been programmed to think this way. Legislation asks us to control risk to a level that is reasonably practicable; where we sometimes go wrong is the framework in which we apply the reasonably practicable test. People and businesses have latched onto a cost-benefit framework, and our decisions are being made through a cost-benefit / production lens as opposed to an ethical lens. I am not saying that the traditional cost-benefit framework is wrong – after all, business is about making money, and people within businesses are generally not purposefully ignoring ethical dilemmas in favour of dollars. People and businesses gravitate to what they know, and the cost-benefit framework is an underlying principle of business. Safety dilemmas within a cost-benefit framework are a challenge to the safety II principles and lead to people being the solution to a production problem. The reasonably practicable test overlaid on an ethical framework results in people being the solution to an ethical dilemma; the method chosen is driven by people outcomes instead of production impacts, which is not always seen as conducive to good business. What a dilemma.
If we really expect our leaders to make ethical choices that are good for people and business in different circumstances, how do we as safety professionals, and particularly safety differently practitioners, support our leaders to live into the ‘safety is an ethical responsibility’ principle? How can we:
- Develop a greater appreciation for the different philosophical theories that inform our ethical decisions – it is remiss of us to rely solely on our own moral potency in ethical dilemmas rather than study ethics in greater detail; and
- Help our leaders navigate the different thinking paradigms and ethical dilemmas with more than ‘just culture’ decision trees; and
- Build emotional intelligence in ourselves and others so we can apply the different schools of ethical thought to different scenarios and achieve safe outcomes for people; and
- Understand that our own morals may conflict with what may be an ethical decision for an organisation or group, and have the personal resilience to be okay with that and learn from it; and
- Understand that safe outcomes for people can also be smart business decisions and provide evidence to support this.
To help people understand the ethical complexity of different scenarios, I have often used the trolley problem – first introduced by Philippa Foot in 1967 – as a theoretical example to pull apart the dilemmas associated with different schools of ethical thinking. It’s not a comfortable example for some, as it makes them squirm wondering whether they would commit murder to save their loved ones, or make a decision that is not in their interest but is for the greater good. It forces them to question their personal values at a deeper level, exposing the complexity of work and the over-simplified ‘just culture’ frameworks they previously relied upon to fix things after an incident. This is only one way of exploring ethics, but I wonder how often we have these types of discussions with our leaders, and if ethics is an underlying principle of safety II, are we giving it the attention it needs?
The safety II principles ‘people are the solution’ and ‘safety is an ethical responsibility’ are intertwined; they are not mutually exclusive; they do not work when applied in isolation of each other. Similarly, safety I or safety II practiced in isolation of each other won’t work as well as it could, and critical control management is not the future of safety practice if used in isolation of ethics – it simply becomes the new golden rule. There are emerging practices such as Human and Organisational Performance (HOP) learning teams, with true independence, that aim simply to better understand the nuances of organisational challenges. These HOP teams are yet to be critically evaluated for safety benefits by academia. They appear promising, but in the meantime people are the solution to our ethical dilemmas, and Dirty Harry safety – using ‘people are the solution’ as an excuse for inaction – and practicing safety without a greater appreciation of different ethical thinking, or without independent review, leave the effectiveness of safety management more up to luck. And you’ve got to ask yourself one question. Do I feel lucky? Well, do ya, punk?