This is a blog I originally published on the World Economic Forum’s website here.
___________________________________________________________
• Safety engineering practices can be readily applied to cybersecurity.
• Developing safety ‘scenarios’ helps build a more comprehensive response to cyberthreats.
• Scenarios are also useful for communicating cybersecurity best practice to professionals outside the field.
A cybersecurity strategy informed by lessons learned in the safety engineering community will help executives and practitioners in the field reduce risks more efficiently. By considering scenarios instead of singular components of a cyberattack, as well as adopting safety engineering's methodical approach to planning, cybersecurity professionals can put a more robust approach in place.
As a byproduct of more thoughtful, scenario-focused planning, it will also be possible to communicate better with operations staff and with non-cybersecurity executives, including boards of directors, by using cybersecurity scenarios as a storytelling mechanism. Today, cybersecurity professionals speaking their own specialized language are sometimes at odds with operational staff and business leaders.
Cybersecurity strategies should be based on scenarios and include the following three key recommendations:
- Analyze scenarios instead of singular items.
- Derive scenarios from intel-driven and consequence-driven analysis.
- Prioritize and remove barriers for where cybersecurity and safety intersect.
By learning directly from the practices of safety engineering, organizations can generate insights that contribute directly to their most important functions, such as protecting human life.
1. Analyze scenarios instead of singular items
Intrusions into organizations are initiated by humans, not by malware, which is why cybersecurity analysis should not be monopolized by a singular focus on controls such as patching or anti-malware. Instead, organizations should try to gain a holistic view across the intrusion lifecycle – particularly of the steps taken by the humans behind the malware.
Take, for example, the attack on a petrochemical plant’s safety instrumented systems in Saudi Arabia in 2017 – the first cyberattack targeted directly at human life. In this scenario, a preoccupation with the malware and with the final step of the adversary’s attack that caused the safety-system disruption obscured valuable insights about the deeper risks posed by the attacker’s techniques across more than a dozen distinct steps performed over three years. The organization focused on identifying and remedying the attack by sharing technical details about the malware; while important, those details are easy for the adversary to change in any follow-up attack.
The attack had actually begun in 2014. From 2014 to 2017, the adversary compromised the organization and moved throughout its industrial networks, learning about the operations and equipment. The team behind the attack, dubbed XENOTIME, engaged in over a dozen unique steps leading up to the deployment of their malware, called TRISIS or TRITON. In other cases involving this same adversary, many of the steps remained consistent, even though the specific malware was not observed again. This is common in cybersecurity, where adversaries change capabilities but maintain a level of consistency in the style of attack.
For each action in the chain, there are multiple compensating controls against the risk the adversary poses – for example, monitoring for the way the adversary moves through the networked environment – that would inform any organization how to prepare against such attacks. Told across the full scenario, the case study presents a story of how to develop and communicate a defensive strategy that prepares organizations for any other adversary whose operations overlap with how XENOTIME operates. Sharing tools and strategies is common practice among cybercriminals, so preparing for the full scenario gives defenders an upper hand in responding.
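To make the idea concrete, here is a minimal sketch in Python of how a defender might map each step of a multi-step intrusion scenario to its compensating controls and flag the steps that currently have none. The step and control names are hypothetical, loosely inspired by the kind of multi-year industrial intrusion described above, not drawn from the actual incident.

```python
from dataclasses import dataclass, field

@dataclass
class ScenarioStep:
    """One action the adversary takes during the intrusion lifecycle."""
    name: str
    compensating_controls: list[str] = field(default_factory=list)

# Hypothetical, simplified scenario (names are illustrative only).
scenario = [
    ScenarioStep("Initial access via IT network phishing",
                 ["email filtering", "phishing awareness training"]),
    ScenarioStep("Lateral movement from IT into the industrial network",
                 ["network segmentation", "IT/OT boundary monitoring"]),
    ScenarioStep("Reconnaissance of engineering workstations", []),
    ScenarioStep("Deployment of malware to safety instrumented systems",
                 ["safety-system change detection", "key switch left in RUN mode"]),
]

# Scenario-based analysis: rather than asking "do we have anti-malware?",
# ask which steps of the whole chain we could detect or prevent.
for step in scenario:
    status = ", ".join(step.compensating_controls) or "NO CONTROL - gap to address"
    print(f"{step.name}: {status}")
```

The point of the structure is that removing or detecting any one step disrupts the whole scenario, which is why gaps in the middle of the chain matter as much as the final malware deployment.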
A scenario-based analysis makes the risk easier to understand, without a high degree of technical jargon or acumen. The longstanding practices of safety engineers provide an excellent template for this kind of analysis. For instance, a process hazards analysis (PHA) – often carried out as a hazard and operability (HAZOP) study – uses specialized personnel to develop scenarios that would result in an unsafe or hazardous condition in the design and operation of industrial systems. It is not a risk reduction strategy that simply looks at individual controls; it considers more broadly how the system works as a whole and the different scenarios that could impact it.
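Safety engineers typically capture the output of a PHA in structured scenario records. The sketch below uses hypothetical fields and values, not a real worksheet, to suggest how the same record could hold a cyber-initiated cause alongside traditional equipment failures, so cyber and safety scenarios can be reviewed side by side.

```python
from dataclasses import dataclass

@dataclass
class HazardScenario:
    """A simplified PHA-style scenario record (illustrative fields only)."""
    node: str              # part of the process under review
    deviation: str         # what goes wrong
    cause: str             # what could make it go wrong
    consequence: str       # outcome if safeguards fail
    safeguards: list[str]

scenarios = [
    HazardScenario(
        node="Reactor pressure control",
        deviation="High pressure",
        cause="Control valve fails closed",
        consequence="Vessel over-pressure, potential release",
        safeguards=["relief valve", "high-pressure alarm", "SIS trip"],
    ),
    # Same worksheet structure, but with a cyber-initiated cause.
    HazardScenario(
        node="Reactor pressure control",
        deviation="High pressure",
        cause="Adversary manipulates controller setpoint and disables SIS logic",
        consequence="Vessel over-pressure with a safety layer defeated",
        safeguards=["IT/OT segmentation", "SIS change monitoring", "mechanical relief valve"],
    ),
]
```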
2. Derive scenarios from intel-driven and consequence-driven analysis
Cybersecurity threats are the work of deliberate and thoughtful adversaries, whereas safety scenarios often result from human or system errors and failures. As a result, a safety integrity level can be measured with some confidence by failure rates, such as one failure every 10 or 100 years. In contrast, trying to assign frequencies or likelihoods to cybersecurity scenarios is highly unreliable. Instead, organizations should view protection from these risk scenarios as a binary, yes-or-no decision: either an organization wants to be prepared for that type of incident or it does not.
To create scenarios that maximize the commonalities between safety and cyber-risks, organizations should consider a two-pronged approach:
• Intelligence-driven scenarios – those based on real attacks – have the benefit of documenting precisely what happened to other organizations and led to incidents. The study of previous cyberthreats and the methods they used is an excellent teacher.
• Consequence analysis is more akin to the art-of-the-possible (i.e. thinking through a near-limitless range of possibilities) and should be conducted by a diverse team with skill sets ranging from cybersecurity to plant engineering. The team first identifies the consequences that would be most impactful to the organization or plant site, then works out how those consequences could be brought about through cyber means.
The combination of ground-truth reality and impactful art-of-the-possible scenarios will create overlapping layers of security and risk reduction that form the basis for meaningful cybersecurity strategies.
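One simple way to combine the two prongs, sketched below with hypothetical scenario names, is to tag each scenario by its source and check that the resulting portfolio contains both intelligence-driven and consequence-driven entries before it is presented as a strategy.

```python
from dataclasses import dataclass

@dataclass
class CyberScenario:
    name: str
    source: str          # "intelligence-driven" or "consequence-driven"
    consequence: str

portfolio = [
    CyberScenario("Safety-system tampering modeled on a past ICS intrusion",
                  "intelligence-driven", "loss of a safety layer"),
    CyberScenario("Remote manipulation of product quality setpoints",
                  "consequence-driven", "off-spec product shipped"),
]

# Check that the portfolio mixes ground truth with art-of-the-possible.
required = {"intelligence-driven", "consequence-driven"}
sources = {s.source for s in portfolio}
if required <= sources:
    print("Portfolio covers both prongs.")
else:
    print("Gap: add scenarios from the missing prong:", required - sources)
```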
3. Prioritize and remove barriers for where cybersecurity and safety intersect
Cybersecurity efforts that can be tied directly to safety should be prioritized and resourced in the interest of the overall organization, the safety of plant personnel, and the safety of people and environments around our plants.
In many organizations, cybersecurity is billed as an IT service provided to business units or individual plants. Most organizations, however, treat safety-related spending as a company-level expense, so it does not negatively impact plant budgets, performance bonuses, or key metrics. Not all cybersecurity efforts contribute to safety, but those that do should be prioritized and fully resourced at corporate level, not expensed to individual plants.
By understanding broader cyberattack scenarios, rather than focusing overly on any one step, organizations can craft preventive, detective and responsive controls as part of an overall cybersecurity strategy. Scenarios involving cybersecurity risks that can impact safety directly should be prime candidates for prioritization and resourcing.
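Pulling the recommendations together, the short sketch below (hypothetical scenarios and fields) prioritizes scenarios by whether they intersect safety and records a binary prepared/not-prepared decision rather than an estimated likelihood, in line with the earlier point about frequency being unreliable for cyber.

```python
from dataclasses import dataclass

@dataclass
class StrategyScenario:
    name: str
    impacts_safety: bool   # does the scenario intersect with safety?
    prepared: bool         # binary decision: are we prepared for it, yes or no?
    # Deliberately no likelihood/frequency field - the decision is binary.

scenarios = [
    StrategyScenario("Safety instrumented system tampering", True, False),
    StrategyScenario("Ransomware on the plant historian", False, True),
    StrategyScenario("Unauthorized controller logic change", True, True),
]

# Safety-intersecting scenarios we are not yet prepared for come first;
# these are the candidates for corporate-level (not plant-level) funding.
priorities = sorted(scenarios,
                    key=lambda s: (s.impacts_safety, not s.prepared),
                    reverse=True)
for s in priorities:
    print(f"{s.name}: impacts safety={s.impacts_safety}, prepared={s.prepared}")
```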