The Long View on Longford
1st September 2018
On 25 September 1998, a vessel in the Esso Longford Gas Plant 1 fractured, releasing hydrocarbon vapours and liquid. Explosions and a fire followed. Two employees died and eight more were injured. Pumps had tripped, halting the flow of hot oil. Cold oil and condensate continued to flow, causing the temperature in a heat exchanger to drop considerably. When hot oil was reintroduced, the heat exchanger suffered a brittle fracture. The resulting fire burned for two days, and full gas supply to the state of Victoria was not restored until 14 October.
As the 20th anniversary of the incident approaches, I spoke to Andrew Hopkins, who was an expert witness at the investigation into the disaster, to discuss the lessons learned.
TK: Longford was a landmark incident for industry in Australia. Looking back, what were the immediate impacts on industrial safety practice?
AH: I think the most important change was the introduction of a safety case regime in Victoria and then progressively around Australia. The regime was much better funded in Victoria than elsewhere, because of the impact of the Longford accident. This made it much more effective.
Because major accidents are so rare, companies tend to lose focus on them, and the regulator has a vital role to play in maintaining that focus.
One major consequence of the accident, and the report, was the widespread recognition that process safety needed to be distinguished from personal safety and managed differently.
TK: I certainly remember the regime as it was introduced. At the time, I was an engineer working at a refinery and we spent a great deal of time on the risk assessments to build the safety case. This was my first exposure to significant risk assessments, in which we assessed every unit. It was a detailed process through which I learnt a great deal about my own units as well as how they interacted with others. It made us think in a structured way and collaborate with others. I later took on a role in the Major Hazards Advisory Committee, the oversight structure implemented to work with the regulator and ensure it fulfilled its role. It was an interesting vantage point from which to see how the regulator prioritises its activities, reviews industry performance, and engages with both community and workforce representatives.
Lasting effects
TK: We saw some immediate changes as you mention. A key question is whether these have endured?
AH: The regulatory changes have endured and the establishment of centres such as the IChemE Safety Centre serves to institutionalise some of the lessons. But the problem is that companies come and go; they continually evolve as commercial circumstances change, and their personnel changes. This means that at any one point in time, some companies are simply not where they need to be with respect to process safety.
TK: That is a really interesting point. As we see turnover in organisations, or even mergers, acquisitions or organisational restructures, we can see the culture of an organisation change and it fails to maintain its knowledge. As Trevor Kletz said: “Organisations have no memory, only people have memories, and they move on”. Sometimes we assume that because something was learnt in the past we don’t need to repeat it. I think this is why we need to keep driving for improvement and remember that even though we may have learnt something some time ago, there are people coming through industry today who still need to learn it.
TK: Do you think a Longford-type event could happen again?
AH: Based on my work with companies, I see all sorts of things that fall short of good practice that make another Longford only a matter of time.
But, let me make the point a little differently. Australia has had two very high-profile accidents since Longford. In 2008 Varanus Island suffered a pipeline rupture and major explosion that took out 30% of the state’s gas supply and cost industry A$3bn (US$2.2bn). In 2009, the Montara well blew out, causing a major fire and widespread ocean pollution that spread as far as Indonesia. In neither case was there loss of life, but there could easily have been. These events show that ‘Longfords’ can indeed recur in Australia and we must be ever vigilant.
TK: What do you think needs to be done to ensure the lessons learned are better remembered through time and across industries?
AH: Tell the stories. In the airline industry people are very aware of the lessons of the most famous airline disasters – what happened and why. In other industries much less so. People at the top need to be able to stand up and present on a major accident in their industry - what the lessons were and whether they have been implemented in their organisation.
TK: Yes, storytelling is a key way to share messages. As children we learn about the world through stories, but as adults we often think we are above stories. The fact is, we need to be willing to embrace them. That is partly why we developed the ISC Case Studies: a way to tell stories that engage the audience and can be used at all levels of an organisation, including the most senior roles, to drive home the messages.
The human element
TK: In the Royal Commission investigation into Longford there was an attempt to blame the accident on human error by the operators. This was dismissed by the Commissioner. What was your reaction to this?
AH: Much time is wasted trying to establish what proportion of accidents are caused by human error. 80% is a common figure. This is totally misleading. We should start from the position that human error is almost always involved – if not errors by people at the front line, then errors by managers, designers, and so on. But identifying the human errors (or violations) is just the starting point. We must immediately ask the why question. Why did they make the error, or fail to comply with the procedure? As soon as we ask why, we move beyond individual blame into organisational causes, about which we may be able to do something. Other vital why questions are: why was it possible to make this error, or why was this error not caught before it was too late? Answering these questions really does advance our understanding. So remember, always ask: Why?
TK: What are the most common cultural errors you see repeated in the process industries?
AH: One of the most disheartening things I see is the enormous amount of effort that goes into managing the measure, rather than focussing on what might be learnt from the incident. In particular, companies seem extremely concerned to avoid classifying process safety events as T1 – the most serious category – and to find reasons to classify them as T2, or lower. It may well turn out that, from a purely technical point of view, an event is not T1 after all, but events initially classified as T2 are not subject to the same scrutiny to ensure that their seriousness has not been underestimated. The result is a bias towards underestimation.
The other disheartening thing is the suppression of bad news. This is sometimes conscious but often it results simply from the process of summarising information for transmission to a higher level in the organisation. “Summary” invariably means that the disturbing details are lost. I have found very few leaders soliciting the bad news in detail, which is the very best way to promote process safety.
TK: That certainly sounds familiar to me. I have seen a great deal of time spent on justifying why an incident should be recorded as a lower severity, and then little time spent fixing the underlying issue. This is often followed by an inordinate amount of time spent wordsmithing reports for the board, rather than using plain English to state the facts. I have heard of some organisations describing fires as “thermal events” rather than telling it like it is – there was a fire.
TK: What are the key red flags that make you concerned about a company or site’s process safety setup?
AH: Many companies have corporate technical standards but also a process by which site managers can obtain an exemption from these standards. A key red flag for me is the ease with which sites are able to obtain such waivers. The exemption from standards leads over time to the substandard practices which are often implicated in major accidents. One way to control this problem is to measure the frequency of such waivers and to try to drive this number down. I see few companies doing this.
A second red flag is the reporting arrangements for process safety personnel. Ideally, they report up a functional line that culminates with the CEO. More commonly, they report to a business unit manager at a lower level. The lower that level, the less influence they wield. The other relevant factor here is whether they have decision-making power or merely an advisory role. These considerations have an enormous impact on how well process safety is managed.
A third red flag for me is when I hear that bad news about process safety is not getting through to more senior managers. The fact is that bad news is good news, because it shows that information channels are working. The corollary is that no news is bad news. There is always bad news at the grassroots of an organisation, and no news means that this bad news is not getting through.
TK: A key indicator for me is where the safety function – both process safety and OHS – reports in an organisation. I have previously turned down roles where the safety manager reported to the HR department or to operations. Personally, I believe safety professionals can add a great deal to the overall function of a business and should be involved in business decisions. They need a direct line to a seat at the table that is sufficiently independent to enable them to give advice without fear or favour. A challenge here, though, is that safety professionals need to be able to speak the language of business if they expect to be involved in business decisions.
TK: What do you think are the key steps process safety professionals should be taking to improve process safety outcomes?
AH: Apart from what I have already said, I think process safety professionals should cultivate whatever channels are available to make contact with the highest process safety managers in their organisation. They should use these high-level professionals as a sounding board to check out difficult decisions. Alternatively, or in addition, they should take an active part in whatever “communities of practice” are available to get moral support for the difficult decisions they face. The more communication there is about all this, the better.
Know your history
TK: Finally, some of our undergraduate readers will not have been born when Longford occurred. What specific message do you have for them?
AH: The best way to maintain the requisite level of vigilance and “chronic unease” is to read accounts of major accidents and identify the causes, both technical and organisational. You should then ask yourself “could this story be repeated at my workplace?”
TK: I think it is really important to have access to good quality materials to learn from. These include the reports and videos made by the US Chemical Safety Board, as well as reports from Royal Commissions and inquiries – for example, the Longford Royal Commission, or the Cullen report into Piper Alpha. Other excellent reports include the Buncefield investigation and the Pike River Coal Mine Royal Commission. The ISC, in conjunction with the IChemE Safety and Loss Prevention Special Interest Group committee, was successful in lobbying the HSE to develop and release a free edition of the Cullen report so it can be shared and accessed by all (see p13).
Further reading
- Hopkins, A, (2000), Lessons From Longford: The Esso Gas Plant Explosion, CCH Australia, ISBN 9781864684223.
- Dawson, D and Brooks, B, (1999), The Esso Longford Gas Plant Accident: Report of the Longford Royal Commission, Melbourne: Parliament of Victoria.
- Cullen, The Hon Lord W Douglas, (1990), The Public Inquiry into the Piper Alpha Disaster, London: HMSO, ISBN 0101113102.
Article by
Trish Kerin
Director - IChemE Safety Centre