
Understanding Human Error

Volume 12, Issue 1.

By Shawn Pruchnicki

Leverage for change, that is, real change that further reduces the dangers we face during the next call, is born from our willingness to look deeply into the system we have created. (Photo by John Cetrino.)

On July 28, 2011, a 37-year-old career captain from the Asheville (NC) Fire Department died and nine other firefighters were injured in a six-story medical building fire while searching for the seat of the fire. The medical examiner’s report states that the victim’s cause of death was smoke and fume inhalation in an oxygen-depleted environment. Postmortem toxicological testing revealed a carboxyhemoglobin saturation of 22 percent.1

As firefighters, we understand what it takes to enter a burning structure to save a life or reduce property damage. From a more humbling perspective, we also understand that despite all our training, experience, and tools, technological and otherwise, these may not be enough to keep us out of harm’s way. We know that although we have studied and experienced fire behavior countless times, there is always an element of the unknown. And yet, despite our best efforts, roofs collapse, walls cave in, and firefighters become lost and run out of air. As if this were not dangerous enough, significant heat exposure, communication problems, and interior obstacles all further challenge our capabilities. This list is not exhaustive; these factors and others ensure that any fire, no matter how seemingly routine, will be dynamic and, to some degree, unpredictable.

It is important to understand that these events are both complicated and complex in their behavior. This affects how people function within these environments and hence our eventual hindsight determination of “human error” when things go badly. We should be clear about what the words complicated and complex mean, as their meanings shape how we understand error and bad outcomes. Complicated means that there are many moving parts and many actors in these dynamic, time-sensitive operations, all trying to accomplish various goals, some shared and some not. Complex systems are those in which small changes during the event can produce unexpectedly large outcomes. It should be no surprise that these large outcomes, usually in the form of a line-of-duty death (LODD), almost always come as a great astonishment to those involved. This perception is common in complex environments that suddenly collapse.

To better understand human error, we need to be mindful during the post-incident investigation that, despite their best efforts, those experiencing the event never had every bit of information about the scenario as it unfolded. Additionally, we should be mindful that no one goes to work to die, and the decisions we make when entrenched in these time-critical and complex socio-technical systems are based on what we know at those specific moments in time, not what we know later as investigators sifting through the rubble and evidence with none of the pressures and resource limitations that plagued the individuals during the event. There is no imaginary lofty perch where alternative courses of action can be viewed or decision trajectories observed and the “right one” then chosen with 100-percent certainty of a favorable outcome.2 There is no perspective that offers a guaranteed outcome that achieves all our individual goals and those of the larger system simultaneously. This is a bedrock concept for truly understanding what hindsight is and how it affects the determination of human error.

Incident Debriefing

As the scene winds down, fireground operations will end and, once all companies are back in service, our effort turns to introspection. That is, we’ll have a chance to debrief and examine those actions that went well and those that did not. Unfortunately, debriefings tend to focus only on factors that went poorly, and many times the culture of the department, and hence the debrief, will follow suit and take on a judgmental tone. In some cases, there is simply an outright declaration that human error is to blame, case closed, and we can feel better under the illusion that we understand “who” is responsible as opposed to “what” went wrong.

From here, we traditionally follow with an implicit tone that we can now rest easy, our system is safe, and we have found the miscreant. These accusations are even more commonplace in cases of firefighter injury or worse. These debriefings unfortunately become a process in which the “culprits” are identified and, once their errors are put on display for all involved in the debriefing, they are sometimes given retraining or, in other cases, suspended or terminated. This is the time when a just culture is most needed and seldom seen in the fire service. As if this harshness weren’t already enough, the emotional response from the people accused is frequently shock, disappointment, and a severely harsh personal critique. Their self-examination can be devastating knowing that their “errors” resulted in the injury or death of a fellow firefighter. Nothing is more nightmarish for a fraternity that declares that we “go in together and we come out together.” Unfortunately, afterward, once the culprits are identified and punished by the department (not to mention their own self-imposed trauma), this process yields an organization that feels better, safer, and ready to tackle the next event, believing that the “bad apples” who acted erroneously have been corrected or removed. (2) The organization and the system within which it operates self-determine that it is once again efficient and safe. Or is it?

Imprecise Judgment

The problem is that this lofty judgmental perch extends from the comfortable, time-delayed, and all-knowing position of hindsight, one that gives us what appears to be a clear event trajectory from the first alarm all the way to the specific tragic event. Our investigation, constructed in hindsight, gives us a false belief that we now have a complete understanding of what happened and why. The problem is that these constructs fail to answer the most important question of all: why the decisions made by those involved made sense to them in the moment. What did all the other decision paths not chosen look like in real time? That is, why, in the heat of battle, didn’t these other choices make sense? We forget that the knowledge base we possess in hindsight is nowhere near what the actors had in real time as the event was unfolding. Their view of the situation, their knowledge base, and their perceptions are always smaller and impoverished compared with the operation as a whole and what we “see” afterward. We must always be mindful that they are acting in the moment on many factors such as their perceptions, their experience level, and their knowledge base, not to mention the last set of orders received from some part of the command structure (and multiple orders from different layers of command are not always congruent). We are always acting on our own unique perspective of the operation at that moment in time.

Here is the most important point: No individual anywhere in the command structure, from the new hire up to the incident commander, has the same view of the operation that we have in hindsight. It always falls short, and it always will; this is normal. In fact, this is what we always discover during the post-incident debriefing. Unfortunately, in dangerous, time-constrained, and resource-limited operations, we attribute the mistakes to human error, case closed. The operation in real time has a unique perspective, unlike the debriefing environment where we pass judgment and decide what to do next. Simply, the two positions are not the same, nor could they ever be. Regrettably, this is the lofty perch (judgmental debriefing) from which we make causality statements and recommendations for improvement and, worst of all, think we now clearly and fully understand our system.

Understanding Human Error

So how should we view “mistakes” that, from a hindsight perspective, appear to make no sense or even appear reckless, now that we understand that our perspective after the fact differs from that of those who lived it? How should we try to understand what happened when we will always know more about the event during the post-incident investigation than the actors did when moving through it in real time? Essentially, the question is, how should we view the event when we seem to have only one lens available to examine it (hindsight)?

The solution is not as easy as one would think. Even armed with an understanding of hindsight bias, we can easily fall prey to its seductive simplicity. Nevertheless, three primary considerations can put us on a path toward a better understanding of what might have happened: (1) a better understanding of complexity, (2) the search for the actors’ local rationality, and (3) an awareness of the biases that we cannot fully escape.

Complexity

As mentioned in the introduction, fireground operations are both complicated and complex, and it is important to understand that the two are not the same. Complicated means that there are many moving parts all working together to produce a predictable, measurable systemwide response. Complexity also involves many moving parts working together, but their interactions are more synergistic in nature and, as such, the systemwide results are amplified, or sometimes dampened, beyond predictability. Sudden failures and accidents that “no one saw coming” are born in this realm. In fact, in the aviation domain, essentially all modern airline accidents have this nature. No one can seem to predict their occurrence or just how the system will collapse. A confluence of events that by themselves are innocuous, even commonplace, jointly becomes sufficient for disaster.

Our task in understanding complexity is not to look for a “smoking gun” or single-cause explanation but rather to understand that these events are multifactorial and are composed of everyday actions and events. Adopting this mindset is required if we expect to be successful in truly understanding why things go wrong, as opposed to settling for labels such as “human error,” which are pointless and destructive. Once we have a grasp on the insidious nature of complexity and how systems fail, and we are committed to viewing these events through this lens, our task is to focus on the environment and on how and why the actors involved made specific decisions.

Local Rationality

If we are going to provide a reasonable and just response to these events, then our focus should center on understanding the actors’ position (perception) while they were making decisions. Some choices, when first viewed from a hindsight perspective, may very well make no sense. The investigation should not stop there with a determination of human error; rather, this discovery marks its beginning. Looking beyond human error labels opens a door to understanding what happened.

To gain significant leverage for change, we should be more focused on trying to understand what the decision path looked like moving forward, not on recreating it by looking backward. Our focus should be their “local rationality,” that is, understanding what they knew at the time (not what we know now) and thus the environment as they understood it, which served as the foundation for their choices. Human decision making is directly connected to tools, tasks, and environment; these shape how we frame, in real time, an understanding of what is happening around us and thus the decision path that will meet the objectives that challenge us (keeping in mind that in dynamic environments this can be a moving target). Developing an understanding of this context is how we move away from our previously judgmental approach; decisions that appeared nonsensical in hindsight can then emerge as reasonable paths and yield meaningful understanding. Those in high-risk, safety-sensitive positions like firefighting are very aware that the choices they make can produce very negative outcomes for themselves and others. They do not come to work to die or hurt themselves or their coworkers; they come to work trying to make the best decisions possible. The problem is that during the investigation phase after an event, we view these choices from a hindsight perspective that greatly affects our ability to understand.

The extensive collection of knowledge gathered after the event is vastly different from what the people in question had at the time when trying to make good choices under time pressure and with limited resources. Their knowledge is always incomplete compared with the knowledge captured by investigators after the event. This knowledge base, that of the people being examined, should be the focus. It was the information they used to make their choices, so it should be what you use as the investigator trying to understand paths of what we sometimes call failure. Those choices, whose outcomes previously seemed confusing, are what we are trying to understand from this more complete perspective. Viewing events only in hindsight prevents us from understanding their local rationality, and therefore any judgment that does not consider their unique perspective of the situation is neither fair nor just. We cannot understand human error without it and will not be able to contribute in a meaningful way to the investigation of the event. Such judgment does not support the notion of a just culture, one that is truly committed to improving the operation by making this unpredictable job as safe as possible.

Biases

Biases are preconceived notions and understandings we bring with us when trying to understand events in real time or when reviewing them in hindsight. Even well-trained and experienced investigators across all domains are susceptible to viewing events through this lens, which clouds understanding, judgment, and options for prevention. Although many biases can affect human judgment, hindsight and outcome biases are the two most prevalent and infectious within investigative communities. Traditionally, investigators do not spend enough time focusing on how to avoid these mental constructs, and this represents an area where the fire service can lead the way for other domains to follow in developing better and more meaningful investigative outcomes.

Therefore, the question is: How do we avoid problems such as hindsight and outcome biases? Thus far, I have built a case for understanding that hindsight does not equal foresight.3,4 When we as investigators look at the actors’ decision paths that led to the outcome, especially when the outcome is negative, we are looking rearward at an outcome that those involved could not see as they were moving through it in time. This sets us up for hindsight bias that will affect our judgment. This perspective will always, to some degree, bias us into thinking that the practitioners involved should have seen the dangerous outcome looming on the horizon. It pushes us to determine that they chose poorly and that, like us now after the fact, they should have seen the path that is now ever so clear. Both anecdotal and scientific evidence shows that no matter how aware we think we are of this bias and its effects on our judgment, we are never able to fully eliminate its influence. In fact, as if this were not bad enough, decision-making research clearly shows that the more severe the outcome, the more harshly we judge the specific actions from a hindsight perspective.5

To be successful, we should be sure that we fully understand the limits of our retrospective abilities and the normal, unconscious response we have when judging our peers after an untoward event. We have all heard the terms “armchair quarterback” and “hindsight is 20/20,” but do we fully appreciate how pervasive and compelling these biases are? The problem is that, despite our best efforts to remain objective, humans have a natural proclivity, when reviewing the events prior to a bad outcome, to believe that they could clearly have “seen it coming.” That is, they believe that if they had been in the same position as their colleagues in question, they would have seen the eventual path amid the conflicting data and unknown outcomes and would have steered clear of the bad outcome. Because of hindsight bias, we are poorly calibrated and greatly overestimate our ability to have seen the negative outcome and its severity before it arrived. Outcome bias also plays a role and causes us to judge more harshly those who made the decisions; it causes us to rationalize that the more severe the outcome, the more egregious the decisions that preceded it must have been. Both of these biases quite innocently cause us to think that looking rearward along a trajectory of events is the same as looking forward. This simply isn’t true, and there is extensive literature documenting this misunderstanding. Despite this common psychological partiality, investigations not only in the fire service but in healthcare and aviation are inundated with second opinions, official reports, and legal testimony plagued by these effects. These types of judgments do nothing to further craft a more reasonable understanding of what happened, which is the only successful lever for change in complex socio-technical systems such as fireground operations.

Now that we have a better understanding of human error and more productive ways to consider the paths chosen by those engaged in normal work, let’s take a closer look at the necessary institutional environment that fosters the ability to harness this perspective for meaningful change.

Safety Culture

Across many industries, buzzwords such as “safety culture” permeate the leading edge of organizational science. Although frequently used, definitions of what this means are quite variable and open to misinterpretation and misrepresentation. Although many definitions exist, the following is sufficient for our discussion: “shared values, beliefs, assumptions, and norms which may govern organizational decision making, as well as individual and group attitudes about safety.”6

Figure 1 shows the four components of a safety culture, and you should ask yourself if your department possesses any or all of these traits. Although they are all equally important to our discussion, here we will focus only on reporting and just cultures. For clarity, Table 1 contains a brief description of each component of a safety culture. It is important to understand that these are interrelated and support each other through numerous connections and interactions. For example, a just culture enables a reporting culture to be purposeful and safely collect great insight into both day-to-day and extreme operating conditions.

Reporting Culture

A reporting culture is one where management is invested in trying to gather as much information as possible about the true nature of the organization. Management realizes that frontline employees, such as rank-and-file firefighters, are the best source of data, as they are the day-to-day experts in how the system is functioning. They are aware of the differences between work as imagined and work as performed and, in many cases, are aware of potential solutions. Those who manage these reporting programs understand that, in most cases, more than 70 percent of reports received are “sole-source” reports, reports that contain information known only to the reporter. Simply, these reports are direct intelligence on how things are done and the challenges faced while accomplishing normal work, intelligence that would be lost without a functional reporting program. Without this type of culture, the next time some of this information might be discovered could be after an injury or LODD, far too late.

Reporting cultures have in place one or several programs designed to collect employee concerns from a protected and anonymous position. Various programs across domains have failed because the program infrastructure was not in place or was operated poorly. A significant component of management’s commitment to safety is ensuring that these data collection programs are operated with the correct amount of staffing and the expertise required to interpret the reporters’ communications. Should any of these be absent or inept, your reporting program is destined to fail. Another consideration to promote success is that, if possible, those reviewing the reports should not be a collection of senior staff who have not done the frontline job in many years. There must be a fair balance between senior or experienced staff and frontline employees who can give adequate perspective on the current challenges faced and propose meaningful and realistic solutions.

Just Culture

Although numerous definitions exist, the essential idea of a just culture is one that encourages open reporting of honest mistakes. The goal of such disclosures is that those running the organization clearly understand the nature of work and the decisions made by the workforce while facing the daily challenges of any operation. Cultures that are just could also be described as those where management teams are willing to hear the bad news of their daily operation. Being willing to hear what makes them uncomfortable gives them a more proactive stance toward safety. These cultures recognize that, to do this, they must first make frontline workers comfortable coming forward and sharing what has happened. Workers come forward because they know they will be treated fairly. The majority of these reports contain information that would not otherwise be collected; without protection, these are not the types of reports you are currently gathering. This is an environment that realizes and embraces that the only way to understand why things go wrong and the challenges faced is to hear from the people doing the work. Successful organizations achieve this because they fully recognize that punishing people for honest mistakes does nothing to prevent future occurrences with either that specific employee or others in the workforce. Make no mistake: The adage “The beatings will continue until performance improves” is alive and well in the fire service. It will never improve performance, and other domains, such as aviation, that have managed to drive their accident rates to astonishingly low levels abandoned this ideology many years ago. They subscribe to the idea that the “bad apple theory” is not helpful in correcting the systemic problems present in all our operations, no matter how well we think they are designed or how comprehensive or flexible we think our guidelines and training may be.

One of the most prevalent, and unfortunately erroneous, concepts used to justify resistance to a just culture is the fear that such a program is about skipping accountability, that is, that it promotes intentionally not following the rules, policies, and procedures. Nothing could be further from the truth. In fact, it is about encouraging accountability by increasing the sharing of information about what has happened and why. It allows the person involved to come forward and make the system safer; his report represents forward-looking (as opposed to rearward-looking) accountability that he owes the system or organization after an event. It is never about your department promoting deviation from procedures; it is very much about how your department views actions in hindsight from only a “first story,” one of failure. The first story is where we simply think that the employee was supposed to follow procedure “X” and instead followed procedure “Y”: human error. The view that the first story explains all is pointless.

A culture that does not first turn to a punitive response but instead dedicates action and resources to better understanding the actors’ local rationality is one of value. What did they know, understand, and perceive? That is where actions emerge from. It is not a “get out of jail” program; certain egregious behaviors, such as on-duty use of drugs and alcohol, are clearly communicated as not tolerated, and a just culture acts swiftly and firmly to prevent any recurrence. However, the aviation domain has programs that, if you confess abuse or addiction, will provide help and return you to work when a rehabilitation program is completed. Report for duty under the influence, though, and you’re done; a confession at that point is too late, and reporting programs cannot help. Unfortunately, most behaviors or rule violations are not as clear cut as these and represent the operational space where most work is accomplished. Normal, dynamic, complex work is anything but black and white, and any decision to “correct behavior” should be given this consideration and approached cautiously.

Professor Sidney Dekker, whom many consider the leading scholar on the subject, tells us that just cultures are about clearly communicating expectations and duties and about the willingness to learn from each other. He goes further to remind us, “If bad relationships are behind unjust responses to failure, then good relationships should be seen as a major step toward just culture.” “Good relationships are about openness and honesty but also about responsibilities for each other.”7 If management is unable to embrace these notions, your effort is over before it has started; a just culture will not be successful in your organization. However, that is not a reason not to at least try to educate your leadership. If you are a rank-and-file firefighter, educate a member of the leadership who will at least listen to your explanation of this material. Who knows, you might push discussions toward a critical mass of reasonable thought that changes their viewpoint or at least opens their minds. It does happen!

How to Implement a Just Culture

Should your organization be ready to further explore these ideas, willing to change the way it views human error, and prepared to implement change, then it is ready to begin its journey. Although there are many ways to start and different tactics to make it work in your specific organization, three main areas will give you a solid foundation on which to build: the development of a reporting program, the creation of your safety team, and the adoption of a high-reliability organization (HRO) approach to your operation.

Reporting Programs

If you truly want to make changes in your department and have a better understanding of human error, then you must be willing to hear the bad news-that is, information from the rank-and-file that may be difficult to read and at times perhaps unsettling. If you are ready to start this movement toward a just culture, then you need an anonymous reporting program. This is probably the most important component and provides a safe opportunity for employees to share with the leadership the details of what is going on during normal work in your organization. It also provides a mechanism where they can share insight into not only their actions but also any larger management issues in a safe and productive environment without fear of retribution.

When you begin to craft your reporting program, consider hiring some expert help. Many other domains have tried a “cut and paste” mentality with disastrous results. Quite simply, programs are built on the foundation of the very structure and culture of the organization trying to implement them, and you will benefit greatly from others who have already traveled this path.

There are basic requirements that must be considered for any program, including submitter protection, the ability to gather meaningful insight from the reports in the form of categories, and the creation of the safety team that will review them.

Submitter protection is probably the most important design feature of any self-reporting program. Without this protection from administrative action, however your department may define it, you simply will not get anywhere near the number of reports, or more importantly the truly meaningful and in-depth reports, you need to make changes, increase safety, and decrease the chance of an injury or LODD. A lack of submitter protection drives safety underground, and reports will not be submitted. Fear-based organizational mentalities have no role in creating safety and will only drive behavior in ways that are unproductive for system safety.

Your fire department has highly trained and invested employees who are intelligent and motivated and do not come to work to make mistakes, but when they do, will your department offer them a safe opportunity to admit it and enable management to learn as much as possible? If so, then welcome all reports and understand that not every report you receive will be that helpful or meaningful or provide any leverage for change. That’s okay and very normal for any reporting system regardless of the domain. Simply accept the report and thank the reporter for the effort to advise management about the subject at hand, then go on to other reports where meaningful safety inroads are possible. You do not want to discourage reporters from contributing these “less relevant” reports, as they may stop reporting altogether. The bottom line is that you should never put the burden of what is reportable and meaningful on the reporter but rather the onus is on the safety team that will review the reports. Your review/safety team should welcome and read them all and then decide which ones can help your organization make positive change.

Reporting Programs for Everyone?

Although probably not central to most in the firefighting world, I would be remiss not to at least briefly mention that self-reporting programs are being called into question by some researchers for certain domains; that is, there is a push in the safety business for safety-sensitive organizations to abolish all incident reporting programs. Proponents of this view believe that such programs are no longer finding the precursors to the types of accidents we are facing in some domains, such as aviation, and thus offer no safety value.8 The concern is that a robust, well-established, mature reporting program can offer a false sense of security, in that the organization now believes it understands everything that is truly going on throughout all facets of the operation.

An additional concern is that many domains have very robust reporting programs with very large data sets produced from extensive report collecting. Unfortunately, in these cases, all the program’s resources are spent collecting, organizing, categorizing, and summarizing the findings in comprehensive reports. This defeats the purpose of these programs: When all the financial and personnel resources are spent on these efforts, there is no time or talent left to make sense of what is being gathered and how it should shape the changes the organization might make in the name of being safer. Some of these groups boast of having such large data sets but seem paralyzed regarding what they can do with this repository in any meaningful way.

The problem with the notion of canceling all reporting programs is that not all domains, or organizations within those domains, are equal in their ability to gather and understand the data or even in their individual risk level. In other words, some domains and the organizations within them are riskier than others. It stands to reason that certain safe systems may have little to gain from reporting programs as currently designed, as evidence suggests they may have little bearing on accident prevention. However, this notion of no reporting programs for extremely safe systems is a very advanced system safety idea, and we are just scratching the surface of understanding what it means and whether it is true. What is most important is that, if this concept is true, it is certainly only true for what René Amalberti calls “ultra-safe systems” such as aviation and nuclear power.9 Most domains are simply nowhere near being ultra-safe, and this safety science concept has no traction there. Thus, for the rest of us, as we continue our journey to be just and safer, we can gain significant safety enhancements from our reporting programs for the time being.

Creation of Your Safety Team

Because of the dynamic and complex nature of any fireground operation, attempting to craft procedures for every situation is impossible and simply not needed. Organizations get themselves into trouble when they become consumed with deciding where to draw the line as opposed to who draws the line. Whom in your department are you going to trust with protecting the reporter’s identity and protecting the data? That is a very serious decision that will require an investment in time and resources. The remaining consideration is whether you trust that the person possesses not only the required investigative skills but also the ability to do something meaningful with the report once it is researched and understood.

The individuals entrusted with this process should not be the same people who have daily administrative control over the frontline employees. Organizations that staff their safety team with the same managers who decide human resources issues are asking for serious conflict-of-interest problems. Typically, persons in those positions are extremely busy with their normal job tasking and cannot realistically take on the significant workload of a reporting program. But maybe more important than balancing workload among your department staff are the trust issues and conflicts of interest that this type of staffing will produce. Persons in these positions of rank-and-file oversight and control (quality-of-life issues) have the very real potential to bring with them low employee trust. Reporters might feel intimidated and not report, because the very people who are going to read the reports and act on them are the same managers who can make their work life very challenging. It does not matter what promises are made beforehand or what agreements are signed; what matters is the overall perception of the employee group, because that is what they will act on. The additional concern is that these officers might be well suited for their current positions and responsibilities, but that does not automatically qualify them to serve on a safety team. Their jobs require a skill set that is in many ways very different from the skills required of a safety team member.

HRO Approach

Accidents, however you choose to define them, are rare occurrences. Thus, your investigative team will not get a lot of practice, and any post-investigation educational effort must wait until the next event to maintain that skill level. I’m willing to bet that your department is like most and has more successful fireground operations than operations where firefighters are injured or killed. Years ago, researchers became aware that, since this is the case, we should not be so focused on those times when we perform poorly but rather examine all the other times when we are successful. What are we doing well, and can it be improved any further? Thus, HRO researchers focused not only on finding which domains have safety records better than would be expected but also on the characteristics that are constant across these domains. If we can understand what similarities exist, in a retrospective view, can we apply these lessons in a prospective manner? How can we teach organizations that need safety improvement by using observations from organizations that already have impressive safety records? What are these organizational behaviors?

The researchers who proposed the HRO concept first studied naval flight operations and, later, power plants and some types of manufacturing.10 The HRO characteristics they discovered (listed in Table 2) center on the notion that risk can be controlled to a limited extent and in many cases mitigated. However, HROs also recognize that this control and this ability to prevent accidents have limits.

One of the overarching tenets of an HRO mentality is that most accidents can be prevented through good organizational design and management, in which the right people for your organization are talking to the people they need to. What does your safety organization look like? Are lines of communication open, and are people willing to both talk and listen?

HROs also see safety as an organizational objective, which may seem obvious. After all, what organization would say otherwise? But the devil is in the details. Although it might seem intuitive, HROs believe that redundancy enhances safety not only in the mechanical sense but also from a social science perspective. For example, in a mechanical sense, having backup systems such as extra valves and electrical sources increases the overall safety of a piece of equipment and how it is used on a fireground operation. Although it might come as a surprise, not every school of safety science believes this is true. For example, safety researchers who subscribe to “normal accident” theory believe that increased redundancy increases the complexity of the operation and thus can make it less safe by adding not only complexity but also a false sense of security about how dependable the operation might be.11 Although many of us do not agree with this concept, in all fairness, I mention it here because it is in direct opposition to both what HROs practice and what many practitioners might consider common sense.

As in many other safety-sensitive domains, there are very dynamic events in which no single person has complete knowledge of every facet of the operation. In many ways, this is like military operations, where the entire focus of the operation can change within minutes and individuals on the battlefield (or fireground) sometimes must make split-second decisions because there is not enough time to query those above them in rank and wait for a reply. In other words, decision making, to be as flexible and adaptive as possible, must be decentralized. HROs recognize that following the command structure is important, but they also recognize that there are times when this mechanism is not sufficient and individuals will need to make split-second decisions to operate as safely as possible. This adaptive ability is what keeps us safe in dangerous environments. Afterward, as time passes and the criticality decreases, the individual and the operation can return to a more traditional centralized decision-making process. HROs recognize the importance of this flexibility and empower their employees to follow the rules but be adaptive as the demand requires.

HROs recognize that they will not always perform perfectly, that they will make mistakes, and that sometimes events may become high profile or even result in a loss of life. But where HROs excel is that, for every event, regardless of loss of life or imperfection in the operation, they gather as much information as possible so they can learn as much as they can. More importantly, they recognize that events in which no one is hurt are essentially “free passes,” and they capitalize on these as unique learning opportunities as much as feasible. HROs always assess where the boundaries of safe operation are, probing and learning from every event that might have undesirable outcomes. From this information, they make any adjustment needed, including training, and move forward, always skeptical that they now understand all the possible risks to their operation.12

Finally, HROs are never happy with the current state of their operations with regard to safety. They are always looking for where other weaknesses might lie and, from a safety perspective, are never fully satisfied. This is not the same as being paranoid; it could better be described as a general state of unease. They know that accidents surprise even the best-run operations, and they are always looking for the hidden event before it occurs and shocks the entire operation.

Top 10 Ways to Ruin Your Just Culture

Both research and anecdotal experience reveal that there are many ways to ruin the safety culture of an organization or, if it is early in its development, prevent it from ever being successful. In Table 3, I show what could be considered the top 10 ways you can damage or destroy your just culture. Let’s briefly review each one and how it might fit into the fire service.

1. Be naïve and think that the procedures and guidelines in numerous companywide manuals are exactly how work is done in your operation.

One of the most common complaints from leadership is that employees are not following the rules and seem oblivious to the department’s standard operating procedures (SOPs). Leaders frequently struggle with why work as imagined and work as performed are less than congruent. This is known as the “procedure-practice gap” and is a very real observation when we examine our day-to-day operations. (7) When you observe it in your operation, your first reaction should not be to ask why this person is a bad employee but rather what challenges in the workplace make compliance difficult and burdensome. These realizations offer us a chance to better understand how work is performed and give us an opportunity to alter our procedures or train differently for real-world challenges that might not otherwise be recognized. Not every recognized gap should trigger a rewrite of the SOPs, but we should be mindful that people for the most part want to do a good job and that human performance is very much tied to people’s tools, tasks, and the environment in which they practice their craft. Procedures and SOPs that were written months ago at a desk may not match what is suddenly discovered on the fireground this afternoon. Remember, procedure-practice gaps represent the gulf between written manuals, drafted in a static setting, and the reality of dynamic events and daily work. When you see SOPs with which personnel routinely do not comply, ask yourself whether the procedure is written as well as possible. Remember, the “book” will never fully match large-scale dynamic events; never. Is it outdated? Has the nature of the task changed? And what are the new challenges of the work environment that employees must navigate to complete the task while meeting numerous objectives simultaneously? None of these challenges and conflicts are encapsulated in the sterile world of written SOPs and directives.

2. Develop a self-reporting program and violate the agreements of confidentiality.

The cornerstone of any self-reporting program is the trust that enables frontline workers to feel comfortable enough to submit self-reports. The research is clear: When employees know that they are protected from job sanctions or even termination for honest mistakes or required procedural deviations, they are far more likely to come forward and explain why they felt a choice was reasonable at the time and, in many cases, they will include what they learned and how they can do their job better next time. But to get these meaningful and insightful reports and see how your department is functioning, this level of trust must be present throughout the organization. In fact, I would offer that this represents the most important component of any self-reporting program, and management will be challenged with every report to maintain it. Successful programs enjoy a wealth of information about what is going wrong, in both hidden and known events, after a fireground operation. But the reports will also be revealing about everyday firehouse operations and the more typical mundane runs. Protecting the confidentiality of all reporters from the media and, most importantly, from department upper management is paramount. It should be a prime directive of the safety department.

3. Never respond to the submitter or share with the organization how much work your safety committee is doing.

If your employees are going to take the time to fill out self-reports and share their mistakes and observations with you, sometimes in painstaking detail, then they should be shown that their efforts are not in vain. It’s not that they should expect you to make sweeping changes as the result of each report; rather, they should be made aware that you have received their report, are reviewing it, and are considering what it means by itself and in relation to the entire operation. Successful reporting programs return to each submitter a short letter saying that the report has been received and will be reviewed shortly or at the next safety committee meeting. Let them know that the report did not simply go into some dark abyss, never to be seen again. It takes a lot of time to write these reports, and many people, regardless of their desire to make their workplace better and safer, simply loathe writing more reports. If they are going to take the time, then the least the organization can do is advise them that the report has been received and is being reviewed.

Once the report has been reviewed and a course of action has been decided on, the reporter should be made aware of the outcome, even in cases where no action of any form is taken.

4. Never advertise to your employees that such a program exists and that it is an important component of a just culture.

Important to the success and longevity of any self-reporting program, even an established one, is reminding personnel at regular intervals of its continued existence. It is important to sell your program! It is probably intuitive that all new hires should be exposed during an orientation briefing not only to the existence of the program but also to how to participate. Let all personnel know how well the program is working, the typical number of reports gathered, and some of the systemic issues both discovered and resolved. This important step gives frontline workers access to the program and serves not only as a reminder of its existence but, maybe more importantly, as proof that their reports are received by real people who understand the job. It also offers them a sense of ownership of the program, the department, and the collaborative effort that system safety represents.

5. Refuse to share event details from an investigation with other departments in your organization where appropriate.

Self-reporting programs offer an organization a chance to view an event, or the job itself, from an entirely different perspective, that of an individual frontline worker in your department, and from that view significant leverage for change can emerge. But the real traction comes from the chance to understand a specific event in the context of a larger picture, a systems perspective. To have significant leverage for change, reports of specific events must be understood from a systems view, one that grasps not only the specific context of the event on a local scale but also how it fits into and interacts with the other divisions of your organization.

Organizational problems and conflicts typically occur across territorial lines and blur the boundaries of ownership. This is normal for any complicated and complex organization, and any attempt to react to specific reported events should include cross-pollination with other departments as appropriate and necessary. Understandably, small departments may be so interlocked and transparent that this point is moot; extremely large departments, however, may still benefit. Complex system problems occur across predetermined boundaries, whether artificial or naturally occurring, and so should the solutions to them.

6. For submitted and accepted events, punish the reporters.

This is truly an outstanding way to destroy your just culture and reporting programs. It only takes one time, and the program is over. Punishing your employees after they have entered your program in good faith will rapidly show the rest of the employees that you do not take the program seriously, and they will stop reporting. Your program will end, and specific events, with all their valuable information, will again go underground. News like this spreads very rapidly, as has been seen many times at large companies. Can you imagine how quickly it would spread at a department the size of yours?

7. Have the head of your safety department report to a mid-level officer far from the department’s chief.

Safety programs such as self-reporting programs require resources and buy-in from the top of your organizational chain. Thus, before the lead person of any agency can claim to support a safety program, he needs to be familiar with the details of your program and receive frequent updates on its operation and on the types of problems your department is experiencing. It is not enough for your leaders to say that they support the program, only to never hear another word about its operation. They will need semi-regular updates as required. If we expect accountability from the top of our organizational structure, then those at the top require information with which to make informed decisions.

But this is not the most important reason for them to be involved and knowledgeable about the organization’s struggles. They should be aware so they can make responsible financial decisions about the best way to support your group’s overall safety effort. For them to make good decisions, they need quality information, and they can only get that by having the lead safety officer report to them directly.

8. Act on the illusion that you have now achieved a just culture and it will continue to run itself.

As with many programs in the business world, managers are accustomed to purchasing a program, offering training, and then getting back to work on the organization’s primary goal. Most business managers are taught to think this way, which causes great frustration when they encounter many safety programs, and this is especially true for attaining and maintaining a just culture. They are under the illusion, as with other types of business efforts, that it is simply a “plug and play” effort.

Nothing could be further from the truth for establishing a just culture and maintaining its functionality. A just culture is about relationships, and like any other type of relationship, personal or otherwise, it takes work. There is no end point where you can stop trying to keep the relationship moving forward in a positive and productive fashion. There will be days, weeks, or months when the relationship is better than others, but with persistent effort you will be able to keep it as productive as possible. Without these efforts, the relationship will wither on the vine and eventually die.

9. Think that if measurable indicators are decreasing, you are safe.

One of the most common mistakes we see in the private and public sectors with regard to establishing and maintaining a just culture is overreliance on measurable indicators. By this I mean days since the last accident, number of accidents over time, and the many other metrics we have burdened our staff with tracking. Part of the problem is that humans by our very nature love to measure anything we can. We inherit this Newtonian perspective, taught from a young age that if you can measure something you can predict it. This is simply not true for complex organizations. We feel great comfort in knowing “what the numbers say” and put far too much faith in their predictive value.

It is interesting to note that there is a literature base that shows that organizations with fewer incidents (nonfatal) have a greater number of fatal accidents.13 In other words, the safer your organization might look, the more likely you are to suffer a fatal loss. This has also been described for the aviation domain, supporting the notion that there is little empirical evidence for believing that the fewer incidents you have the safer you are.14 Numbers and the trends they produce do not always tell the whole story as to how safe your organization is. The primary reason is related to complexity and how complex systems like fireground operations fail.

10. Stop communicating internally, especially about events that make us uncomfortable.

The discussions most frequently avoided in open forums within organizations are those regarding bad news about the operation. Discussions about the specific details of what is going wrong can, for various reasons, quite frequently make us uncomfortable. This is not how just cultures function. Organizations with a robust just culture want to talk about these problems, they want to know as many details as possible, and they are excited at the opportunity to tackle the problem. Again, as in any relationship, we sometimes must have conversations that make us feel uncomfortable to maintain a meaningful relationship. It is this open dialogue that can help keep disaster at bay and ensure that we are doing everything we can to prevent it.

Safety Improvements

“Wringing honesty out of people in vulnerable positions is neither just nor safe. It does not bring out a story that serves the dual goal of satisfying calls for accountability and helping with learning. It really cannot contribute to a just culture.” (7) We live in a world of uncertainty-in our lives, in our work, and in our accountability expectations. Complex socio-technical domains, like fireground operations, will never be completely safe, and harm is always around the corner. Despite our best effort, when the call does come, there will always be a chance we can lose another hero.

Knowing this, we should be empowered to find a way to keep that sacrifice as infrequent as humanly possible despite the unknown challenges that lie ahead when the trucks arrive. But when things do go wrong, things that we never saw coming, we should recognize this as a call to action to look within ourselves and our organization. This is the challenge we are offered, a chance to look beyond meaningless labels such as human error. If we continue to hopelessly cling to these hollow philosophies of admonishment to increase human performance, we will continue to lose America’s finest.

Leverage for change, that is, real change that further reduces the dangers we face during the next call, is born from our willingness to look deeply into the system we have created: a system that is anything but balanced but rather is complex, with small adjustments occasionally yielding much larger events that could not possibly have been predicted. And just as this synergy can disappoint us by increasing danger, safety may also emerge from what on the surface seems to be the day’s innocuous decisions. Safety is born from the same complex brew of time constraints and limited resources, and all humans and their decisions are products of their tools, tasks, and environment. But to be successful, we must be willing to listen and be fair. Are we?

References

1. NIOSH, “Summary of a NIOSH firefighter fatality. A Career Captain Dies and 9 Fire Fighters Injured in a Multistory Medical Building Fire – North Carolina,” Fire Fighter Fatality Investigation and Prevention Program. Washington, D.C., 2012.

2. Dekker, S. The Field Guide to Understanding Human Error, Ashgate Publishing, Ltd., 2014.

3. Fischhoff, B. “Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty,” Quality and Safety in Health Care, 12(4), 304-311, 2003.

4. Henriksen, K., & Kaplan, H. “Hindsight bias, outcome knowledge and adaptive learning,” Quality and Safety in Health Care, 12(suppl 2), ii46-ii50, 2003.

5. Caplan, R. A., Posner, K. L., & Cheney, F. W. “Effect of outcome on physician judgments of appropriateness of care,” JAMA, 265(15), 1957-1960, 1991.

6. Ciavarelli, A., Figlock, R., & Sengupta, K. “Organizational factors in aviation accidents,” Proceedings of the Ninth International Symposium on Aviation Psychology (pp. 1033-1035), 1996.

7. Dekker, S. Just Culture: Balancing Safety and Accountability. Ashgate Publishing, Ltd., 2007.

8. Cook, R. Personal communication on June 9th, 2016.

9. Amalberti, R. “The paradoxes of almost totally safe transportation systems,” Safety Science, 37(2), 109-126, 2001.

10. Rochlin, G. I., La Porte, T. R., & Roberts, K. H. “The self-designing high-reliability organization: Aircraft carrier flight operations at sea,” Naval War College Review, 51(3), 97, 1998.

11. Perrow, C. Normal Accidents: Living with high risk technologies. Princeton University Press, 2011.

12. Baumann, M. R., Gohm, C. L., & Bonner, B. L. “Phased Training for High-Reliability Occupations Live-Fire Exercises for Civilian Firefighters,” Human Factors: The Journal of the Human Factors and Ergonomics Society, 53(5), 548-557, 2011.

13. Saloniemi, A., & Oksanen, H. “Accidents and fatal accidents-some paradoxes,” Safety Science, 29(1), 59-66, 1998.

14. Dekker, S., & Pitzer, C. “Examining the asymptote in safety progress: A literature review,” International Journal of Occupational Safety and Ergonomics, (just-accepted), 1-27, 2015.

Shawn Pruchnicki, MS, RPh, ATP, CFII is a faculty member at Ohio State University (OSU), where he teaches aviation safety, human factors, accident investigation, and complex aircraft operation for the Department of Aviation. He is the owner of Human Factors Investigation and Education, where he and his team perform research and help organizations better understand failure and accidents. Prior to coming to OSU, he was an independent contractor/research engineer for San Jose State University at NASA Ames, and he flew as an airline captain with Comair Airlines (Delta Connection) for 10 years. Before that he was a firefighter/paramedic with Jackson Township in Grove City, Ohio. His research focus includes the role that safety culture, human performance, and human error play in understanding accident causation. He is pursuing a PhD in cognitive engineering at OSU.