DTR016 High Reliability Organisations: A Model for Highly Effective Risk Management and Decision Making


This module will take you through High Reliability Organisations: A Model for Highly Effective Risk Management and Decision Making.


The information in this document is part of the Deltar ‘Level 4 Management Award in Advanced Risk and Crisis Management’.

High Reliability Organisations: A Model for Highly Effective Risk Management and Decision Making

In Part 1 of this series, published in the December 2015 edition of Risk UK, David Rubens discussed the advent of the concept of High Reliability Organisations. In this issue he discusses their value as a model for wider risk management practice.

It is often claimed that the causes of crisis management failures are embedded in the increasing complexity of a highly networked and interconnected world, in which it is impossible to model the cascading consequences of tightly inter-dependent systems. It might in turn be assumed that the frameworks that have been developed on a ‘steady state’ basis to manage risk, evolving and adapting in order to meet the challenges of each new generation, have been superseded by the almost inconceivably rapid development of 21st century risk environments, whether they are political, financial, social, environmental or any of the other myriad contexts within which risk management is expected to operate. Whether we are concerned with international financial systems, climate change or critical national infrastructure, in the face of such complexity we might conclude that we are no more than hostages to fortune as we await the next major incident which, if unmanaged, could lead to catastrophic systems failures with multiple trans-jurisdictional consequences beyond our capability to either model or manage.

While it is certainly true that such situations exist, it is also the case that many, if not all, of the major catastrophic events that define risk management at the cutting edge of our personal, professional and organisational capabilities are highly predictable, both in the way in which they develop, mature and are finally triggered, and in the reasons for our failure to deal with them. One only has to look at the front pages of any newspaper to see examples of risk management failures leading to major harmful impacts which, rather than having extreme and unpredictable causes, are actually the inevitable results of conscious decisions made by people who had all of the information necessary to understand the consequences of their actions.

These may be major financial institutions accepting hundreds of millions or even billions of pounds in fines as merely part of the cost of doing business; repeated flooding at unprecedented levels that continues to overwhelm the defence systems built specifically to prevent it; the collapse of social care in terms of old people’s homes, child care, mental health facilities or simply the ability to support those who have fallen through the gaps in our social support networks; or the inability of international rescue teams to operate effectively in the chaotic environments associated with emergency response. In each case, the causes of those failings lie in the management systems that are supposed to deal with exactly those challenges, rather than in the external environment over which we have neither influence nor control. However, such failures are neither inevitable nor unavoidable, and can in fact be directly linked to decisions made by those in authority concerning the development, or otherwise, of effective management procedures.

From a High Reliability Organisation (HRO) perspective, the causes of those failings are both clear and unacceptable. HROs are organisations in which failure is not an option. They include critical national infrastructure, national air traffic control systems, nuclear submarines, aircraft carriers and other similarly technical, highly-engineered systems that require the highest level of management and oversight at every stage of their operation. As with so many supposedly sophisticated systems, the fundamental beliefs that support them are extremely simple, and can be encapsulated in two succinct belief systems.

The first is that the development of an HRO is not based on highly technical manuals (though these exist) or highly detailed response options (though these also exist), but is rather the reflection of an attitude or state of mind. That state of mind is not ‘We have a programme that will allow us to succeed’, but rather ‘We will not fail’. It is the commitment to erasing the possibility of failure that distinguishes the HRO from other organisations focussed on developing the tools for success – tools which are demonstrably flawed both in their assumptions and in the organisation’s ability to implement and manage them effectively.

The second belief system that runs through every aspect of an HRO is personal responsibility. Everyone is responsible for ensuring that their part of the operation is run in such a way as to exclude the possibility of failure. What is more, part of that responsibility is the requirement to continuously pressure-test their own systems, to consciously search for potential failure points and then, having identified them, to ensure that they are monitored, controlled or eradicated. HROs are extremely sensitive to anything that can be seen as a potential problem, and all such potential problems are treated as critical issues. This is not only because they are problems in their own right, but because they are indicators of deeper management problems that need to be considered, analysed and responded to at a tactical and strategic level rather than merely as single, isolated events.

If one were to offer a third basic principle of HROs, it would be that the entire system is designed to support and encourage people to find problems. In many catastrophic failures, whether NASA space launches, the preparations for Hurricane Katrina, the Fukushima TEPCO nuclear power plant failure, or events of which readers of this article may have first-hand knowledge, the causes of the problem were widely known, but the organisational culture was one in which such subjects were not only left undiscussed, it was politically unacceptable to raise them.

One of the founding academic studies of HROs (2) identified five characteristics that differentiated them from other organisations. They were:

  • Preoccupation with failure (in which the possibility of failure is examined at every stage of an operation on a pro-active basis)
  • Reluctance to simplify interpretations (so that the inherent complexity of problems, and potential solutions, are accepted as part of the problem-solving process)
  • Sensitivity to operations (in which there is the realisation that solutions are only effective if they work within the realities of the operating environment, rather than merely existing as paper-based options)
  • Commitment to resilience (in that resilience, and the ability to adapt to the widest possible range of challenging environments, is considered as a critical function in any operational plan)
  • Under-specification of structures (which means that individuals and teams have the freedom to develop their own working relationships, rather than being forced to adhere to pre-set organisational restrictions).

It is perhaps paradoxical that it is in exactly the most highly-engineered organisations on the planet, such as nuclear power stations or nuclear submarines, that those at the bottom of the command chain are specifically empowered to take critical decisions. However, it is exactly this approach that prevents the ‘wishful thinking’ approach to managing complex operations and environments that leads to inevitable errors. Once that culture is destroyed, and ‘deference to expertise’ is replaced by political decision-making, the results are predictable: in the run-up to the Challenger Space Shuttle disaster, the attempt to maintain the illusion that everything was OK meant that ‘It did things that were actually stupid’, while at Three Mile Island, the US’s most serious nuclear accident, it was found that time and again warnings were ignored, unnecessary risks were taken, sloppy work was done, and a culture of deception and cover-up was embedded at the heart of the senior management structure.

The simple truth is that the reasons for failures are well known. Failures do not just happen: they are often the result of smart people taking bad decisions, and then maintaining those decisions over time until they become an integral part of the culture of the organisation. If high reliability values are to be introduced into organisations, then it is the responsibility of management to create a culture in which such values are not only accepted, but are considered to set the foundations for everything else that might follow.