While watching the recently released film “Deepwater Horizon” about the catastrophic well blowout in the Gulf of Mexico that caused the largest oil spill in U.S. history, I was reminded of the term “fail-dangerous,” which I first encountered in correspondence with a risk consultant for the oil and gas industry.
We’ve all heard the term “fail-safe” before. Fail-safe systems are designed to shut down benignly in case of failure. Fail-dangerous systems include airliners, which don’t merely halt in place when their engines fail, but can crash to the ground in a ball of fire.
For fail-dangerous systems, we believe either that failure is unlikely or that the redundancy we’ve built into the system will be sufficient to avert failure or at least minimize the damage. Hence the large amount of money spent on airline safety. This all seems very rational.
But in a highly complex technical society made up of highly complex subsystems such as the Deepwater Horizon offshore rig, we should not be so sanguine about our ability to judge risk. On the day the offshore rig blew up, executives from both oil giant BP and Transocean (which owned and operated the rig on behalf of BP) were aboard to celebrate seven years without a lost-time incident, an exemplary record. They assumed that this record was the product of vigilance rather than luck.
And, contrary to what the film portrays, the Deepwater Horizon disaster was years in the making. BP and Transocean had created a culture that normalized risky behaviors and decision-making, and what resulted was not an unavoidable tragedy but what sociologist Charles Perrow termed a “normal accident”: the product of normal decisions by people following accepted procedures and routines.