How Do You Assess Risk In Cybersecurity?

If you can’t measure it, you can’t stop it


Skanda Vivek

2 years ago | 6 min read

In June of 2017, a global cyberattack resulted in an estimated $10 billion in losses, much of it borne by major global businesses. Attributed to Russia, this cyberattack (known as NotPetya) exploited, among other issues, Windows machines that had not installed recent updates.

Earlier that year, the National Security Agency (NSA) had warned Microsoft about a vulnerability that took advantage of the Server Message Block (SMB) protocol.

As I mentioned in an earlier article, SMB is a protocol used for sharing access to files, and to devices like printers and other resources on a network. Microsoft released critical updates in March of 2017, along with details of the flaw, which was exploited by EternalBlue.

Soon after, two major cyberattacks that year took advantage of unpatched machines, resulting in tens of billions of dollars in damages for many major companies.

The first attack was the WannaCry ransomware attack, now attributed to North Korea, which targeted machines running Windows, encrypted their data, and demanded ransom payments in the form of Bitcoin.

The second was NotPetya, which used the same EternalBlue vulnerability. NotPetya initially targeted Ukraine’s critical infrastructure, including the power grid and gas stations, but spread to companies outside Ukraine through a backdoor in M.E. Doc, a commonly used tax accounting software apparently used by more than 80% of companies operating in Ukraine. Through M.E. Doc it spread to companies like Merck, FedEx, Maersk, and more.

This series of attacks illustrates the importance of risk assessment in large organizations. If companies like Maersk and Merck had done a thorough assessment of security vulnerabilities and identified unpatched Windows machines, their systems would have been less likely to be compromised.

So what are the tools for risk assessments that are available?

NIST Cybersecurity Framework

Credit: N. Hanacek/NIST

In 2013, the Obama administration issued an executive order aimed at improving critical infrastructure cybersecurity. It tasked the National Institute of Standards and Technology (NIST) with developing a cybersecurity framework.

This framework eventually evolved into the most popular cybersecurity risk assessment framework used by organizations. NIST estimates that half of all organizations use the cybersecurity framework.

The framework is divided into five basic functions critical for cybersecurity:

  • Identify: “Develop the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities.”
  • Protect: “Develop and implement the appropriate safeguards to ensure delivery of critical infrastructure services.”
  • Detect: “Develop and implement the appropriate activities to identify the occurrence of a cybersecurity event.”
  • Respond: “Develop and implement the appropriate activities to take action regarding a detected cybersecurity event.”
  • Recover: “Develop and implement the appropriate activities to maintain plans for resilience and to restore any capabilities or services that were impaired due to a cybersecurity event.”

A good acronym for remembering these five critical functions is IPDRR (if that helps). Another way is to visualize how organizations should respond to cyberattacks.
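To make the structure concrete, the five functions can be sketched as a simple mapping. The example activities below are illustrative ones drawn from this article's discussion, not text from the NIST standard itself:

```python
# Illustrative mapping of the five NIST CSF functions (IPDRR) to
# example activities. The activities are this sketch's paraphrases,
# not official NIST subcategories.
functions = {
    "Identify": "inventory assets and catalog known vulnerabilities",
    "Protect": "apply patches; restrict access to critical systems",
    "Detect": "monitor networks for intrusions, including zero-days",
    "Respond": "isolate impacted machines; notify affected customers",
    "Recover": "restore systems and data from tested offline backups",
}

# The acronym is just the first letter of each function, in order.
acronym = "".join(name[0] for name in functions)
print(acronym)  # IPDRR
```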

The best response is, of course, avoiding attacks in the first place. But the way to do this is not to simply hope for the best; it is a continual process of identifying risks and fixing them.

Once you identify risks, you need to protect your infrastructure against attacks that exploit them. You can do that by fixing the underlying issues, such as ensuring systems are up to date and patched against common vulnerabilities.

Another way is to develop protocols that minimize exposure if attackers are successful. But again, the key is to first know about vulnerabilities before protecting against attacks that leverage those flaws.

Even if you are prepared, chances are that hackers will exploit a previously unknown vulnerability. Such attacks are called zero-day exploits. In this case, you need a way to detect intruders as fast as possible in order to stop them.

But detection is not enough; you also need to figure out the optimal response once you have detected intruders. Should you shut down your entire IT network? Or just the computers that are impacted? Are there any risks to customers, and when do you inform them?

Finally, once the damage is done, how do you recover your impacted systems? In the case of Maersk, all end-user devices, including 49,000 laptops, were destroyed.

More than half the servers were destroyed. Data was preserved in backups, but the applications themselves couldn’t be restored, as they would be immediately reinfected. Luckily, Maersk was able to recover an undamaged copy of its Active Directory from an office in Ghana, where a power outage at the time of the cyberattack had left the system disconnected from the Internet.

Within a week, Maersk was able to recover most of its IT network. However, this was not a planned recovery, and it illustrates the need to be prepared for such events, including how to recover once the damage is done.

Organizations implementing the NIST framework assign scores to the various subcategories under the five functions. These scores range from 1 to 4, with the four tiers described as Partial, Risk Informed, Repeatable, and Adaptive, respectively.

Ultimately, the organization assesses itself and arrives at a final score. In addition, it defines a target cybersecurity profile that it wishes to achieve in the near future.
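The self-assessment described above can be sketched as a simple gap analysis. The scores below are hypothetical function-level examples (a real assessment scores many individual subcategories), shown only to illustrate the current-versus-target comparison:

```python
# Hypothetical self-assessment scores on the four NIST tiers (1-4).
# Real assessments are done per subcategory; this sketch scores at
# the function level for brevity.
TIER_NAMES = {1: "Partial", 2: "Risk Informed", 3: "Repeatable", 4: "Adaptive"}

current = {"Identify": 2, "Protect": 3, "Detect": 2, "Respond": 1, "Recover": 2}
target = {"Identify": 4, "Protect": 4, "Detect": 3, "Respond": 3, "Recover": 3}

# Gap analysis: how far each function is from the target profile.
for function in current:
    gap = target[function] - current[function]
    print(f"{function:<9} current={TIER_NAMES[current[function]]:<14}"
          f"target={TIER_NAMES[target[function]]:<11}gap={gap}")

overall = sum(current.values()) / len(current)
print(f"Overall current score: {overall:.1f} / 4")  # 2.0 / 4
```

The largest gaps (here, Identify and Respond) point to where remediation effort should be prioritized.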

FAIR Framework

While the NIST framework is a good starting point for organizations to take stock of their current cybersecurity state and objectives, it does not estimate exposure in the event of an attack. Ultimately, organizations care about reducing such exposures, be they financial or reputational.

The Factor Analysis of Information Risk (FAIR) model was developed to assess these risks stemming from events.

In a FAIR assessment, both the frequency and the magnitude of losses are taken into consideration, and the product of the two gives the exposure over a certain timeframe. In addition, losses are categorized as primary or secondary. Primary losses are financial losses resulting directly from the attack.

For example, a cyberattack that shuts down customer billing services means lost revenue from customers for a utility company during that period of time.

Secondary losses result from additional impacts downstream of the cyberattack. For example, if the utility company is taken to court for willfully ignoring security protocols and putting its customers at risk, it might pay fines as part of a settlement.

In a FAIR analysis, bounding values are given for the frequency and magnitude of disruptions. Estimates are expressed as PERT distributions, each defined by the minimum, maximum, and most likely values a variable can take. These distributions are fed into a Monte Carlo simulation that samples values from them to give an estimate of primary and secondary financial losses.
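The procedure above can be sketched in a few lines of Python. A PERT distribution can be sampled as a scaled Beta distribution, and the loss figures below are made-up expert estimates, purely for illustration of the frequency-times-magnitude logic:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_pert(minimum, mode, maximum, size, lam=4.0):
    """Sample a PERT(min, most likely, max) distribution via a scaled Beta."""
    a = 1 + lam * (mode - minimum) / (maximum - minimum)
    b = 1 + lam * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.beta(a, b, size)

N = 100_000  # number of Monte Carlo trials

# Hypothetical (min, most likely, max) estimates -- illustrative only.
freq = sample_pert(0.1, 0.5, 2.0, N)        # loss events per year
primary = sample_pert(5e4, 2e5, 1e6, N)     # $ per event, direct losses
secondary = sample_pert(0.0, 1e5, 2e6, N)   # $ per event, e.g. fines

# Exposure = frequency x magnitude, aggregated over primary + secondary.
annual_loss = freq * (primary + secondary)

print(f"Mean annualized loss exposure: ${annual_loss.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_loss, 95):,.0f}")
```

Reporting a percentile alongside the mean is the point of the Monte Carlo step: a single frequency-times-magnitude product hides how heavy the tail of possible losses is.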

Going Beyond Static Risk Assessments

The NIST and FAIR models are useful starting points, and in the last decade they have become commonplace in organizations. However, they have several shortcomings.

The NIST framework does not explicitly consider individual risks or how they might cause multiple disruptions that are not immediately obvious. Many organizations use it as a way to “check the box,” without going in depth into what the possible scenarios are and what their biggest strategic cybersecurity risks are.

The FAIR model does help organizations assess the financial impacts of individual risks, and thus identify which risks are of particular concern. However, this view is one-dimensional.

With the increasing connection between IT and OT (operational technology) systems, societal concerns could take precedence over financial ones: consider life-threatening risks from critical infrastructure disruptions, such as attacks on connected vehicles or public utility services.

Another criticism is that these models do not address the chain of consequences stemming from cyberattacks, i.e., how a cyberattack moves from IT networks to causing disruptions in society. A good example is the recent Colonial Pipeline incident, in which a ransomware attack caused gas shortages across the Southeastern U.S. for more than a week.

In my recent publications, we explore how complex network theory, along with novel data-driven algorithms, can be used to assess vulnerabilities in critical infrastructure networks. One example is our study on transportation disruptions stemming from cyberattacks on connected vehicles, which was highlighted by Forbes.

Transportation impacts from Hacked Connected Vehicles on the Manhattan Road Network

We published another study accounting for spatiotemporal delay propagation stemming from cyberattacks on air networks, which was presented at the Conference on Cyber Conflict (CyCon) hosted by NATO in 2021.

In the coming years, organizations will hopefully consider a variety of approaches to assessing cybersecurity risk. Broad frameworks like NIST and FAIR will continue to have impact thanks to the bird’s-eye view of cybersecurity they provide.

However, they need to be supplemented with detailed, scenario-based evaluations that illustrate the chain of consequences from specific attacks of concern. In this area, I think there is much progress to be made in understanding how cybersecurity incidents play out across multiple interconnected infrastructures.

