The Enlightenment once suggested that if we are smart, if we think harder about a problem with those minds we can trust, then we can make the world a better place, we can constantly improve it. Modernism says that technical-scientific rationality can create that better, safer, more predictable, more controllable world for us. We might achieve workplaces without injuries, incidents or accidents. If, for example, we plan the work carefully, if we design well and train, discipline, supervise and monitor the people who are going to execute the work (just as Frederick Taylor recommended), we can eventually live in a world without human error. This ideology of constant improvement, and the vision of an immaculate “city on a hill,” is deeply embedded in the zero visions of many industries and organizations around the world, from road traffic to construction. Networks and forums for vision zero exist in countries around the world. Membership in such networks, and the commitment it implies, is supposed to get organizations to realize safety improvements, because they need to back up their commitment with resources. But the member companies were already very safe and committed; being a high achiever partly explains one’s membership in such a group. Very little is typically known, however, about the exact activities and mechanisms that lie underneath the reductions in harm that committed companies have witnessed, and little research has been conducted into them.
One important reason for this is that the goal, the zero vision, was never driven by safety theory or research. It has grown out of a practical commitment and a faith in its morality. It is defined by its dependent variable, not by its manipulated variables. In typical scientific work the experimenter gets to manipulate one or a number of variables (called the independent or manipulated variables). These are in turn presumed to have an effect on one or a number of dependent variables. In this, safety is always the dependent variable—it is influenced by a lot of other things (the independent or manipulated variables). Increases in production pressure and resource shortages (independent variables), for example, push the operating state closer to the marginal boundary, leading to a reduction in safety margins (the dependent variable). A decrease in the transparency of interactions and interconnections (the independent variable) can increase the likelihood of a systems accident (the dependent variable). Structural secrecy and communication failures associated with bureaucratic organization (independent variables) can drive the accumulation of unnoticed safety problems (the dependent variable). Managerial visibility on work sites (an independent variable) can have an impact on worker procedural compliance rates (the dependent variable).
Zero vision gets this upside down. It tells managers to manipulate a dependent variable. Safety research, in contrast, is mostly about manipulated variables, even though it also considers which dependent variables to look for (e.g. are incident counts meaningful dependent variables to measure? Can we develop new indicators of resilience?). Mostly, though, theories specify the kinds of things that engineers, experts, managers, directors, supervisors and workers need to do to organize work, communicate about it, and write standards for it. What they need to manipulate, in other words. Outcomes (measured in terms of incidents or accidents, or in terms of indicators of resilience) then are what they are. Zero vision turns all of this on its head. Managers are expected to manipulate a dependent variable—a contradiction in terms. Manipulating a dependent variable is something that science considers to be either experimentally impossible or professionally unethical. And the latter is what zero vision can become as well. With a focus on the dependent variable—in terms of how bonuses are paid, contracts are awarded, promotions are earned—fraudulent manipulation of the dependent variable (which is, after all, a variable that literally depends on a lot of things not under one’s control) becomes a logical response.
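The distinction can be made concrete with a toy simulation. Everything in this sketch is a hypothetical illustration (the function name, the coefficients, the Poisson model of incident counts are all made up, not drawn from any actual safety research): it only shows that managers can set the manipulated variables, while the dependent variable remains a draw they do not control.

```python
import math
import random

def incident_count(production_pressure, training_hours, seed=0):
    """Toy model: incidents (the dependent variable) emerge stochastically
    from manipulable inputs. Coefficients are illustrative only."""
    rng = random.Random(seed)
    # Hypothetical incident rate: rises with pressure, falls with training.
    rate = max(0.05, 1.0 + 0.5 * production_pressure - 0.1 * training_hours)
    # Sample a Poisson-distributed outcome (Knuth's algorithm): the result
    # is random even when every manipulated variable is held fixed.
    L = math.exp(-rate)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

# Identical manipulated variables, different draws: the outcome
# "is what it is".
outcomes = [incident_count(2.0, 4.0, seed=s) for s in range(5)]
```

Manipulating the inputs shifts the *distribution* of outcomes (more training and less pressure lower the expected count), but no setting of the inputs pins the dependent variable to an exact value such as zero.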
Not surprisingly, there is no evidence that zero vision has an impact on safety that is any greater than the next safety intervention. This may not matter, however, because zero visions are a strong instrument of what is known as bureaucratic entrepreneurialism. They allow people involved in safety to say two things simultaneously: that great things have been accomplished already because of their work, and that more work is necessary because zero has not yet been reached. And because it never will be, or because the organizational fear of backsliding away from zero can be maintained, safety people will stay relevant, employed, contracted, funded. Whether people in these positions genuinely believe that injuries and accidents can be fully expunged is hard to know. But they have to be seen to believe it—in order to attract investments, work, federal grants, contracts, regulatory approval, and affordable insurance.
Does a zero vision have practical benefits, though? Defining a goal by its dependent variable tends to leave organizations in the dark about what to do (which variables to manipulate) to get to that goal. Workers, too, can become skeptical about zero sloganeering without evidence of tangible change in local resources or practices. It is easily seen as leadership double-speak. Not only is the vision itself unable to practically engage workers, there is nothing actionable (no manipulable variables) in a mere call to zero that they can identify and work with. A zero vision also tends to stigmatize workers involved in an incident. One of the most deeply rooted instances of this can be found in medicine, which has had its own version of vision zero handed down through decades, centuries even. Many there are still battling the pretense that errors do not occur. They are faced daily with a world where errors are considered to be shameful lapses, moral failures, or failures of character in a practice that should aim to be perfect. Errors are not seen as the systematic byproduct of the complexity and organization and machinery of care, but as caused by human ineptitude; as a result of some people lacking the “strength of character to be virtuous”. The conviction is that if we all pay attention and apply our human reasoning like our Enlightenment forebears, we too can make the world a better place. The 2000 Institute of Medicine report was accompanied by a political call to action to obtain a 50% reduction in medical mistakes over five years. This was not quite a zero vision, but halfway there. And commit to it we must: it would essentially be our moral duty as reasonable humans. It may have exacerbated, in medicine and elsewhere, feelings of shame and guilt when failures do happen, and led to underreporting, fudged numbers and stifled learning.
It is quite befuddling, then, that many industries in Australia and elsewhere are moving in exactly the opposite direction (by basically declaring that they want zero injuries or incidents) from where many safety and human factors people want medicine to go (acknowledging that errors and failures are a normal, though undesirable, part of being in that business).
Investigative resources are easily wasted too: if zero is assumed to be achievable, then everything is preventable. And if everything is preventable, everything needs to be investigated, including minor sprains and papercuts. And if an organization doesn’t investigate, this can even have direct legal implications. A documented organizational commitment to zero harm can lead a prosecutor to claim that if the organization and its managers and directors really believed that all harm was preventable, then such prevention was reasonably practicable. They are liable if harm occurs, after all, since they or their workers must have failed to take all reasonably practicable steps to prevent it. Accidents are evidence that managerial control was lost; that a particular risk was not managed well enough. Such failures of risk management open the door to looking for somebody who was responsible, to whose account we can put the failure—including managers and directors. The 2011 harmonized OHS legislation gives prosecutors precisely that power (even though it has not yet been tested in court).
A zero vision is a commitment. It is a modernist commitment, inspired by Enlightenment thinking, that is driven by the moral appeal of not wanting to do harm and making the world a better place. It is also driven by the modernist belief that progress is always possible, that we can continually improve, always make things better. Past successes of modernism are taken as a reason for such confidence in progress. After all, modernism has helped us achieve remarkable increases in life expectancy, create fantastic technologies, and reduce all kinds of injuries and illnesses. With even more of the same efforts and commitments, we should be able to achieve more of the same results, ever better! But a commitment should never be mistaken for a statistical probability. The statistical probability of failure in a complex, resource-constrained world—both empirically, and in terms of the predictions made by the theory—simply rules out zero. In fact, safety theorizing of almost any pedigree is too pessimistic to allow for an incident- and accident-free organization. Look at man-made disaster theory, for example. On the basis of empirical research on a number of high-visibility disasters, it has concluded that “despite the best intentions of all involved, the objective of safely operating technological systems could be subverted by some very familiar and ‘normal’ processes of organizational life”. Such “subversion” occurs through usual organizational phenomena such as information not being fully appreciated, information not correctly assembled, or information conflicting with prior understandings of risk. Barry Turner, father of man-made disaster theory, noted that people were prone to discount, neglect or not take into discussion relevant information. 
So no matter what vision managers, directors, workers or other organization members commit to, there will always be erroneous assumptions and misunderstandings, rigidities of human belief and perception, disregard of complaints or warning signals from outsiders, and a reluctance to imagine worst outcomes—as the normal products of bureaucratically organizing work.
Not much later, Perrow suggested in his work on Normal Accidents Theory that accident risk is a structural property of the systems we build and operate. The extent of their interactive complexity and coupling is directly related to the possibility of a systems accident. Interactive complexity makes it difficult for humans to trace and understand how failures propagate, proliferate and interact, and tight coupling means that the effects of single failures reverberate through a system—sometimes so rapidly or on such a massive scale that intervention is impossible, too late, or futile. The only way to achieve a zero vision in such a system is to dismantle it and not use it at all. Which is what Perrow essentially recommended societies do with nuclear power generation. Some would argue that Perrow’s prediction has not been borne out quantitatively since the theory was first published in 1984. Perrow’s epitome of extremely complex and tightly coupled systems—nuclear power generation—has produced only a few accidents, after all. Yet the 2011 earthquake-related disaster at Fukushima closely followed a Perrowian script. The resulting tsunami flooded low-lying rooms at the Japanese nuclear plant, which contained its emergency generators. This cut power to the coolant water pumps, resulting in reactor overheating, hydrogen-air chemical explosions and the spread of radiation. Increasingly coupled and complex systems like military operations, spaceflight and air traffic control have also produced Perrowian accidents since 1984. Zero seems out of the question.
Diane Vaughan’s analysis of the 1986 Space Shuttle Challenger launch decision reified what is known as the banality-of-accidents thesis. Similar to man-made disaster theory, it says that the potential for having an accident grows as a normal by-product of doing business under normal pressures of resource scarcity and competition. Telling people not to have accidents, to try to get them to behave in ways that make having one less likely, is not a very promising remedy. The potential for mistake and disaster is socially organized: it comes from the very structures and processes that organizations implement to make accidents less likely. Through cultures of production, through the structural secrecy associated with bureaucratic organizations, and through a gradual acceptance of risk as bad consequences are kept at bay, the potential for an accident actually grows underneath the very activities an organization engages in to model risk and get it under control. Even high-reliability organization (HRO) theory is so ambitious in its requirements for leadership and organizational design that a reduction of accidents to zero is all but out of reach. Leadership safety objectives, maintenance of relatively closed operational systems, functional decentralization, the creation of a safety culture, redundancy of equipment and personnel, and systematic learning are all on the required menu for achieving HRO status. While some organizations may hew more closely to some of these ideals than others, none has closed the gap perfectly, and there are no guarantees that manipulating and tweaking these attributes will bring an organization to zero or keep it there.
The call to industry should be this—don’t worry about the dependent variable. It is what it is. Worry instead about the manipulable variables, and proudly talk about those. Compare yourselves on what you do, not on what the results are.
See also: Donaldson, C. (2013). Zero harm: Infallible or ineffectual? OHS Professional, 22-27. Melbourne: Safety Institute of Australia.