
Event Risk Scoring – How risky was that incident?

Published: 31 October 2018


Major hazard industries are often urged to learn from safety-related events, even those which cause no harm or damage. Such near misses offer “free lessons” if only we take note. In reality the situation is a bit more complicated – learning from events is not a zero-cost activity. It requires resources to collect, analyse and make sense of event data. A particular challenge for large organisations with mature reporting cultures is that they can be faced with large numbers of reported events. In those cases, how does an organisation prioritize investigation and learning effort? Can Event Risk Scoring help?

In the recently published Energy Institute guidance on Learning from incidents (LFI) [1], this prioritization is compared to a medical triage process. Recent research commissioned by the Rail Safety and Standards Board (RSSB) [2] shows that a variety of scoring and classification methods have been developed in different organisations to help triage incidents.

In this article, Edward Smith considers the pros and cons of two broad approaches:

  • Scoring the risk of an event re-occurring
  • Scoring how close an event came to an accident

Scoring the risk of an event re-occurring
A common approach to scoring reported events is to use a risk matrix such as the one below.

[Figure: example event risk matrix]

This approach has the advantage of widespread usage and hence is likely to be familiar to many users. In addition, it is generic enough to be applied to a wide range of events.
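As a concrete illustration, a matrix-based triage can be reduced to a severity × likelihood lookup. The sketch below is minimal and uses hypothetical bands, scores and priority thresholds; they are not drawn from any particular standard or from the methods discussed in this article.

```python
# Minimal sketch of a 5x5 risk matrix lookup for event triage.
# The severity/likelihood bands and priority thresholds are
# hypothetical examples, not taken from any published standard.

SEVERITY = ["negligible", "minor", "moderate", "major", "catastrophic"]    # 1..5
LIKELIHOOD = ["rare", "unlikely", "possible", "likely", "almost_certain"]  # 1..5

def risk_score(severity: str, likelihood: str) -> int:
    """Return a simple multiplicative matrix score (1..25)."""
    return (SEVERITY.index(severity) + 1) * (LIKELIHOOD.index(likelihood) + 1)

def priority(score: int) -> str:
    """Map a matrix score onto an illustrative triage band."""
    if score >= 15:
        return "high: investigate in depth"
    if score >= 8:
        return "medium: local investigation"
    return "low: record and trend only"

# Example: a "moderate" event judged "likely" to re-occur
s = risk_score("moderate", "likely")
print(s, "->", priority(s))  # 12 -> medium: local investigation
```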

However, the user may not be clear what they are being asked to score. For example, in applying the matrix, what does the likelihood scale refer to? In some applications it refers explicitly to the “likelihood of the event re-occurring”. But, interpreted literally, that same event will never happen again, in the sense of the same equipment, location, time, people, operational conditions and so on. The approach is effectively shorthand for assessing the risk of a “similar event” occurring. Without comprehensive category lists and checklists, it is difficult to ensure that everyone has the same understanding of what counts as a similar event.

Even if a consistent understanding of what constitutes a “similar event” can be achieved, the process of selecting a cell from the matrix above may be a long way from assessing the details of the specific event. Will the selection take account of, for example, specific human factors such as fatigue levels, or the prevailing environmental conditions? Rather, the process may be closer to the kind of pre-event risk assessment of broad hazards or accident categories that organisations should already have conducted. Revisiting such risk assessments following events is clearly a good thing to do, but it may not link closely with individual events.

Despite these issues, organisations using such risk matrices to score events have found them useful in improving overall LFI.

Scoring how close an event came to an accident
An alternative approach to event risk scoring and prioritization is to re-imagine the event in such a way that it escalates to an accident. Once a reasonably foreseeable accident sequence has been constructed, the user can determine how close the event came to that hypothetical accident. In determining “how close”, the user considers the various controls (or “barriers”) in the system. A score can then be associated with the remaining barriers that did not fail in the actual event.

[Figure: re-imagined escalation of an event towards a potential accident, showing failed and remaining barriers]

Several methods (e.g. ARMS [3]) have used this broad approach to develop event risk scoring based on the consequences of the potential accident outcome and the likelihood of barriers failing.
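As a sketch of how such a score might be structured, the snippet below combines the most credible escalated outcome with the effectiveness of the barriers that remained. The outcome classes, barrier bands and table values are illustrative assumptions, not the published ARMS figures.

```python
# Sketch of an escalated-event score in the spirit of barrier-based
# methods such as ARMS. The outcome classes, barrier bands and score
# table are illustrative assumptions, not the published ARMS values.

OUTCOMES = ["negligible", "minor", "major", "catastrophic"]
BARRIERS = ["effective", "limited", "minimal", "not_effective"]

# SCORE[outcome][barriers]: higher = the event came closer to a worse accident
SCORE = [
    [1,   1,   1,    1],     # negligible potential outcome
    [2,   4,   10,   20],    # minor
    [10,  20,  100,  500],   # major
    [50,  100, 500,  2500],  # catastrophic
]

def escalated_event_score(potential_outcome: str, remaining_barriers: str) -> int:
    """Score an event by its most credible accident outcome and the
    effectiveness of the barriers remaining between event and accident."""
    return SCORE[OUTCOMES.index(potential_outcome)][BARRIERS.index(remaining_barriers)]

# Example: a major potential outcome with only minimal barriers remaining
print(escalated_event_score("major", "minimal"))  # 100
```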

The potential advantages of this approach include:

  • By explicitly considering what could have happened as well as what did happen, it should help extract more learning from the event.
  • It forces the user to think about barriers that succeeded or remain, as well as barriers that failed in the actual event. It therefore aligns well with developing thinking around barrier management and bow-tie development.
  • Again, it can be applied to a wide range of event types.

However, re-imagining an event is clearly a subjective process that could lead to variation in scores between users. Chance factors, as well as formal system controls, will also play a part in determining how close an event came to an accident.

RSSB research into event risk scoring
As part of an RSSB research project, DNV analysed 26 methods for event risk scoring from rail, aviation, oil and gas, nuclear, healthcare and other safety-critical industries. In addition to the two broad approaches described above, some methods scored purely on the actual consequences of an event; others were bespoke methods tailored to particular event types. After weighing up the pros and cons of the different approaches, we developed the escalated-event approach further in the RSSB project. The project produced a scoring system that took account of the actual consequences and the potential escalated consequences, combined with a barrier effectiveness score.

[Figure: RSSB event risk scoring method]
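The actual scoring tables are defined in the RSSB report [2]; the sketch below only illustrates the general shape of such a calculation, with invented scales and weighting.

```python
# Rough sketch of the shape of a combined score: actual consequence
# plus potential escalated consequence weighted by barrier effectiveness.
# The scales and weights are invented for illustration; the real method
# and its tables are defined in the RSSB T1121 report.

def combined_event_score(actual_consequence: float,
                         potential_consequence: float,
                         barrier_effectiveness: float) -> float:
    """
    actual_consequence:    score for the harm that actually occurred
    potential_consequence: score for the credible escalated accident
    barrier_effectiveness: 0.0 (remaining barriers judged robust) to
                           1.0 (remaining barriers judged very weak)
    """
    return actual_consequence + potential_consequence * barrier_effectiveness

# Example: no actual harm (0), a severe credible escalation (100),
# with the remaining barriers judged fairly weak (0.7)
print(combined_event_score(0, 100, 0.7))  # 70.0
```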

A workshop with potential users identified several ways in which the method and its associated guidance could be improved, and suggestions to reduce score variability were incorporated into the method.

As well as enabling organisations to gain a fuller understanding of the effectiveness of their critical controls, the method can also be used to monitor trends in safety performance when scores from multiple events are combined.
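For example, once each event carries a score, a simple trend view can be produced by aggregating scores per period. The sketch below uses invented data purely to show the idea.

```python
# Sketch: combining scores from multiple events to monitor trends.
# 'events' pairs an ISO month with an event risk score (invented data).
from collections import defaultdict

events = [("2018-07", 12), ("2018-07", 100), ("2018-08", 4),
          ("2018-08", 20), ("2018-09", 500), ("2018-09", 10)]

by_month = defaultdict(list)
for month, score in events:
    by_month[month].append(score)

for month in sorted(by_month):
    scores = by_month[month]
    print(month, "events:", len(scores), "total risk:", sum(scores))
```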

Given that many organisations now make use of barrier models (e.g. bow-tie diagrams) to visualize patterns and trends when analysing multiple events, a barrier-based approach to event risk scoring would complement this process.

See also Edward’s post on “Learning from Incidents”.

[1] Energy Institute (2016) “Learning from incidents, accidents and events”, 1st edition, August 2016.

[2] RSSB (2018) “Development of a Common Event Risk Scoring Method, T1121”, Research in Brief, May 2018.

[3] ARMS Working Group (2007-2010) “The ARMS Methodology for Operational Risk Assessment in Aviation Organisations”.
