The case for small data

It is reasonable for management to want evidence of a problem before they put resources into fixing it, and we would like to be evidence-driven ourselves. All too often, though, we assume that having good data means having more data. We live in the age of big data, where the value of some companies is measured by the size of the data sets they hold. Our phones collect and report our habits, our movements, and our interests so that processing clusters can predict our future behaviour. It is tempting to think that to understand safety better, and to do safety better, we need more numbers.

Unless we work as data analysts, most of us are limited to ‘small data’ – to observing or listening to individual, unique experiences. What good is that? Can you really ‘speak truth to power’ when your information comes from whimsical stories, told by people with their own agendas, biased by the desire to tell a compelling story about themselves? Against tens of thousands of data points it may seem like a futile exercise. But individual experiences are data too – and they have a power to engage and inspire change that numbers do not.

Consider what a supervisor once shared:

I had FedEx, the other coordinator had StarTrack. Both had half a crew each. FedEx made deadline by 30 seconds. It was just a mad mad rush, push, sweat, scream just to get the goods out. Severely undermanned. Not enough leading hands, only one leading hand on shift every night.

When we listen to the experiences of the end-users of the many structures, processes, and systems put in place to govern what should happen at work, a different picture emerges. There are, in all likelihood, no procedures instructing people to scream, to sweat, and to madly rush work, but sometimes that is what work is like. These stories draw attention to things that are entangled and emergent. They bring out the messy details and other real aspects that shape how work is carried out. Relationships, feelings, mess, smells, perspiration, sounds, screaming, anxiety, pain, joy, pride, and camaraderie are all highly unlikely to be captured by big data. Yet all of these aspects affect work, and safety. Unless you have ways of learning how such aspects are involved in work, you will never know about them. Engaging with the unique experience is pretty much the only way to get there. And when you start with this experience, work-as-done comes to the fore and engages tellers and listeners alike in sensing how things actually come together, rather than what should happen.

And consider this mechanic’s experience:

There are so many tools and equipment problems. One simple thing is a thermometer. Just a normal thermometer, just to measure the local temperature, which is a very small thing but is very important for cable tension. We cannot sign the paperwork, we cannot finish the work if we don’t have the thermometer with us because cable tension varies with the temperature and we have to get that temperature to set up how much tension we will put. They don’t have a thermometer. I’ve been looking and asking around for it for the last five days. Couldn’t find it. I eventually used the iPhone to read the temperature of where we are.

This story does not explain why the thermometer is missing. But it does hint at the impact. The obvious impact – the loss of precision from using an iPhone reading, and the increased risk that comes with it – is in there, but so is a hint of the frustration of the person who spent five days looking for the thermometer.

Stories such as the ones above are subversive in the way they embrace the tension between plans and the unexpected. After hearing what it is like to work within an imperfect system, it can make a lot of sense to go about work in a particular, less-than-optimal way. Such stories convince not by their objective truth, but by their ‘aesthetics’ or emotional appeal to the listener or observer (surprising, touching, humorous, upsetting). By listening to end-users’ unique experiences, we enter a perspective in which we can see normal people doing normal work, trying to create success amid scarcity and imperfect, conflicting setups. Work governance, and safety ambitions, are then more likely to turn into a need to support and provide, rather than to constrain and enforce. Can big data do that?

The stories people tell about work often also have an ethical dimension – they tell about what is right and what is wrong, what a good workplace should look like, and what it is like to operate within, or outside, that ethical space. A welder once shared:

One of the days I did do a Take 5 and I was doing hot work and the Take 5 didn’t help me. I ended up getting burnt. I ended up in the hospital. The first thing the corporate guy did was ask if I had a Take 5, and that’s all they wanted to know: if I had a Take 5 done.

Then I said yes, in my shirt which is all the same. Well do you mind if I have a look? Go for your life I don’t give a shit at this point.

But that’s what the standard paper trail is… so if anything comes down to it… if you don’t have one it’s got nothing to fall back on them. It’s all you.

Compatible or incompatible values become visible in the clashes or meetings between different mentalities. These meetings or clashes are matters of the heart, and they are critically important for how we create workplaces and collaborate. I don’t think that big data can ever capture them. I don’t think big data can capture what is ethical at all. What is right and wrong is not something that can be proven – not in numbers, nor in cause-and-effect relationships. But these issues are as real as productivity rates or the number of checks performed. In the current drive for large truths about work, discussions about ethics and morality, about what is right and wrong, risk being (further) marginalised.

While numbers can give valuable indications and insights, reliance on a calculative approach to understanding work risks shifting attention away from the things that are real to the people who do the work. To counter this, I see no option other than to emphasise the importance and potential of using descriptions of what goes on at work. Such descriptions do not come with cause and effect perfectly outlined, allowing precise interventions, and they may be ambiguous and open-ended. But that is the beauty of them. They are as difficult and messy as work often is. And more and more people can be invited to interpret them and contribute to increasingly large conversations about work. By engaging with ‘small data’, organisations stand a better chance of understanding and engaging the heart of what happens at work.

Note: Thanks to Ron Gantt and Drew Rae for insightful discussions and contributions to this text!

5 thoughts on “The case for small data”

  1. Great article Daniel. I agree there has been an overemphasis on quantitative safety data over employees’ qualitative accounts. I would probably give a word of caution around the analysis and “deep dives” into the stories. Our biases, perspectives and world views naturally shape our interpretation, and without proper instruction or training around interpreting the stories, critical cues or insights may be missed. This may require a shift not only from an organisational perspective but also from an OHS tertiary-education perspective.

  2. Michelle,

    That could be helpful, as long as the training is not about how to ‘properly’ interpret/analyse stories, but rather about how to invite others to interpret without locking in on one particular perspective/interpretation.

    If I show the thermometer story to a procurement manager, he will see procurement issues. A safety person is more likely to point out the increased risks. A quality professional might start thinking about non-conformances in upcoming audits; someone else might pick up on the loss of productivity, or something else entirely (sorry about the caricatures). There is no way around bias. I would rather embrace bias – even collect as many biases as I can – than look for truth. I struggle with the idea that there can be ‘a view from nowhere’ or a ‘correct’ way to understand anyway.

    When I work with examples such as the ones above, I try to ensure that multiple and diverse perspectives are involved in the reading and assessing, to enable increasingly rich interpretations.

    d

  3. Love the article. Big data by its nature is about the general, which hides much of the specific. People, however, act in a specific setting, time and place, not in a general one.

    I agree with Michelle that interpreting stories is difficult and training can be useful (the same goes for big data, by the way – there are tons of bad statistics), but I think it’s an illusion to believe we can avoid ‘missing’ cues or insights. Different experts will not see the same things, and even together they will not learn everything there is to learn from a collection of stories.
    Our biases (heuristics), perspectives and world views give us a local view, but they are what enable us to see anything at all. Without them there is no meaningful way of perceiving the world. There is no avoiding them; even computer algorithms are biased, taking some things more or less into account and processing them in a certain way.

  4. Great article, Dan. It reinforces my drumbeating to use stories to really understand what’s going on. A colleague’s quote, “Culture is the emergent set of core narratives that people live out”, accentuates the case for small data.

    The only person who can interpret a story is the storyteller. In our work the storyteller “signifies” the story and turns his/her interpretation into data points. We create heat maps or fitness landscapes to discover patterns. Essentially, we now have the ability to convert qualitative data into quantitative data.

    As the volume of stories collected increases, concerns about lies, self-serving accounts, and cognitive biases are mitigated. Such stories may appear as outliers or be washed out by the majority.

    Stories are one type of data fragment. We also collect photos and voice recordings, which can likewise be signified by the author.

    Analyzing the maps indicates where to focus the next safety intervention. We simply ask: How might we hear fewer stories here and more stories there? So instead of a disruptive safety transformation, we nudge the system to make improvements.

  5. Nice article Daniel. Any data (big, medium, small, accurate or otherwise) can be useful if applied in its correct context. That’s part of the core issue, because the common tendency is to focus on metrics of higher-level emergent regularities (and rare but noticeable irregularities) in the system. Problems arise when we use these to alter the system at a completely different scale, usually at the worker level, where many shadow processes operate to keep operations going. Unanticipated consequences soon follow.

    Practically, organizations should realize that metrics on deviations alone are insufficient to gauge safety performance. An equal investment is necessary to find and track measures of unremarkable but successful performance.
