Vulnerability Prioritization

What is Vulnerability Prioritization?

Vulnerability prioritization is the practice of deciding which of the many unpatched vulnerabilities on the typical corporate network should be addressed first because they pose the most cyber risk to the organization. Vulnerability prioritization is also a key element of what’s known as “risk-based vulnerability management,” or RBVM, as the primary consideration when prioritizing vulnerabilities is the risk the unpatched vulnerability poses to the organization’s overall cyber risk profile.

Why is Vulnerability Prioritization Important?

Depending on the study or survey cited, between one third and more than 50% of breaches can be traced to an unpatched vulnerability. Moreover, many of the vulnerabilities to which breaches can be traced were one or two years old at the time of the exploitation, a far cry from the dramatic zero-day vulnerabilities that are sexy, but rarely responsible for breaches. Indeed, the reality of vulnerabilities and the breaches resulting from them is far more pedestrian than it is Hollywood. Because unpatched vulnerabilities are an enticing target for bad actors, it’s important to patch them as quickly as possible, something that’s easier said than done. Over 22,000 new vulnerabilities were discovered in 2022, or about 60 per day. Of course, not every organization has exposure to all 60 every day, but large organizations in particular are likely to experience a constant stream of new, relevant vulnerabilities on top of the hundreds or thousands already sitting unremediated on their networks.

Vulnerability Prioritization is a Key Element of an Effective Vulnerability Management Program

In a fantasy world, vulnerability remediation teams would patch all vulnerabilities as soon as they’re discovered and reported, rendering prioritization irrelevant. Two primary realities prevent that ideal world from coming to fruition: 1) patching requires resources that are often scarce or otherwise occupied, and 2) patching can cause disruption (although much less frequently than generally perceived, which we’ll discuss in more detail later in this piece). Much like patient triage in a busy hospital or battlefield aid station, it’s not possible to treat every vulnerability with the same urgency, so prioritization, primarily on the basis of risk to the organization, is required.

What Factors Should be Considered when Prioritizing Vulnerabilities?

There are several factors to consider when deciding which vulnerabilities should receive the attention of the vulnerability remediation team first.

The CVSS Score

Every time a vulnerability is identified and reported, a severity score for that vulnerability is generated. The CVSS (Common Vulnerability Scoring System) scores vulnerabilities on a scale from 0 to 10, with score ranges corresponding to Low, Medium, High, and Critical designations. When prioritizing vulnerabilities, the CVSS score is a good place to start, but its key deficiency is that it lacks context. That is, the CVSS score for any given vulnerability is the same irrespective of the network it’s on. A “critical” CVE (Common Vulnerabilities and Exposures entry) could be on a device with direct access to the internet on one network, and completely isolated from public internet exposure on another, rendering it much less of a risk on the latter network.
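For reference, the CVSS v3.x specification maps base scores to those qualitative ratings in fixed bands (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical). The short Python sketch below is purely illustrative, but it makes the “no context” point concrete: the same score produces the same label no matter whose network the vulnerability sits on.

```python
# Minimal sketch: mapping a CVSS v3.x base score to its qualitative
# severity rating, per the published CVSS v3.1 rating bands.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_severity(9.8))  # "Critical" -- but says nothing about *this* network
```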

The Availability of a Known Exploit

Perhaps the most important factor impacting the prioritization of any vulnerability is whether or not an exploit has been developed for it and is available in the wild. An exploit is a piece of software that takes advantage of a specific vulnerability, enabling an attacker to compromise the system on which the unremediated vulnerability resides. Exploits can be purchased on the Dark Web by threat actors who wish to engage in criminal cyber activity but have neither the technical expertise nor the time to develop an exploit on their own. Those with the technical expertise can monetize that knowledge by developing exploits and selling them, generating financial gain with much less criminal risk exposure. Clearly, an unpatched vulnerability on the network for which a known exploit exists represents a high risk to the organization, and therefore should be prioritized.
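One practical way to act on this factor is to check each CVE against a list of vulnerabilities known to be exploited in the wild, such as CISA’s Known Exploited Vulnerabilities (KEV) catalog or a commercial threat-intel feed. The sketch below assumes a locally saved JSON array of CVE IDs; the file name and format are hypothetical, not any particular feed’s schema.

```python
# Minimal sketch: flag CVEs that appear in a known-exploited list so they
# can be bumped to the top of the remediation queue.
# Assumes "known_exploited.json" is a plain JSON array of CVE IDs
# (hypothetical file; adapt to whatever feed you actually use).
import json

def load_known_exploited(path: str = "known_exploited.json") -> set[str]:
    with open(path) as f:
        return set(json.load(f))

def has_known_exploit(cve_id: str, known_exploited: set[str]) -> bool:
    return cve_id in known_exploited

known = load_known_exploited()
for cve in ["CVE-2021-44228", "CVE-2024-99999"]:  # second ID is a placeholder
    status = "exploit in the wild -- prioritize" if has_known_exploit(cve, known) else "no known exploit"
    print(f"{cve}: {status}")
```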

Location on the Network

The location on the network of the device housing the vulnerability is also an important and fairly obvious consideration. Devices with direct access to the public internet pose a higher risk to the organization than a device with the same vulnerability that is isolated from any connection to the outside world. Thus, thoughtful network segmentation design can go a long way toward mitigating the risk posed by inevitable unpatched vulnerabilities.

Criticality of the Asset

The importance of the asset to the organization is also a consideration, albeit one that can yield a false sense of security. Clearly, a business-critical asset is more worthy of attention and protection than one that can be offline with minimal immediate impact. That having been said, attackers are unlikely to limit their ill-gotten network access to just the initial compromised device, and will use a number of techniques to move about the network for as long as they can remain undetected. For this reason, the criticality of the asset to the business is a consideration that should not be overemphasized in the vulnerability prioritization formula. Note that later in this discussion, we introduce the topic of “outstanding assets” which, often by definition, would not be considered business-critical, but nonetheless can often be considered high-risk devices from the perspective of vulnerability prioritization.

Compliance Requirements

Finally, some organizations have specific compliance requirements around vulnerability remediation that may impact prioritization, but the intersection of vulnerability remediation and compliance is beyond the scope of this discussion.
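Taken together, the factors above lend themselves to a simple weighted scoring model. The sketch below is an illustrative composite only; the weights, field names, and multipliers are assumptions rather than any standard formula. It does, however, show how context can push a “Medium” CVE with a public exploit on an internet-facing device above a “Critical” CVE buried deep inside the network.

```python
# Illustrative composite risk score; weights and fields are assumptions,
# not a standard. Known-exploit status and internet exposure dominate,
# asset criticality nudges the score, and CVSS anchors the baseline.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float            # 0.0 - 10.0 base score
    exploit_known: bool    # public exploit available?
    internet_facing: bool  # device reachable from the public internet?
    asset_critical: bool   # business-critical asset?

def priority_score(f: Finding) -> float:
    score = f.cvss                               # start from severity
    score *= 2.0 if f.exploit_known else 1.0     # exploit in the wild doubles it
    score *= 1.5 if f.internet_facing else 0.75  # exposure raises, isolation lowers
    score += 1.0 if f.asset_critical else 0.0    # small bump for critical assets
    return round(score, 1)

findings = [
    Finding("CVE-A", cvss=9.8, exploit_known=False, internet_facing=False, asset_critical=False),
    Finding("CVE-B", cvss=6.5, exploit_known=True,  internet_facing=True,  asset_critical=True),
]
for f in sorted(findings, key=priority_score, reverse=True):
    print(f.cve_id, priority_score(f))  # CVE-B outranks the "Critical" CVE-A
```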

Identifying Outstanding Assets

A more obscure factor in vulnerability prioritization is something called “gold-nuggetting,” or identifying outstanding assets on a network, a process that mirrors the approach experienced pen testers (and their black-hatted counterparts in the cyber criminal community) employ. It was pioneered by AI researchers at Delve Security (now part of SecureWorks).

In short, pen testers will survey a network looking for the lowest-hanging fruit: devices that are likely to be out of place, unique in some way relative to the network’s architecture, and presumably less vigorously maintained by the IT team. Indeed, the IT team may be unaware of the asset entirely, an even better prospect for pen testers and bad actors. A quick example might be a Linux server in an area of the network mostly populated by Windows machines. A Delve Security blog explains it this way:

“Enterprise networks house thousands of devices (IoT devices, servers, laptops, etc.), some of which present particularly ripe targets for bad actors.  Over the course of years of experience, the best pen testers can very quickly find these priority assets, the ones best suited to compromise or to collect valuable information, and from which to launch a successful attack.

When first evaluating the results of a network scan, typically an Nmap scan, the experienced pen tester – or intruder – will quickly get a ‘feel’ for the type of network he is currently in. During this critical early stage of an attack, the intruder is attempting to understand the relationship between the network devices he is looking at, or the underlying structure in the data, all of which is highly context-dependent.  It is only when he understands the overall context of the network that he can start ‘digging for gold.’” 
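The “Linux server among Windows machines” example lends itself to a crude illustration: given OS fingerprints from a scan, flag any asset whose platform is rare within its own subnet. The toy sketch below invents its own data shape and threshold; it’s a stand-in for the idea, not a reconstruction of Delve’s actual models.

```python
# Toy sketch: flag assets whose OS is rare within their own subnet, a crude
# stand-in for the "outstanding asset" idea. The data shape is invented.
from collections import Counter, defaultdict

scan_results = [  # (subnet, host, os_fingerprint) -- hypothetical, Nmap-derived
    ("10.1.1.0/24", "10.1.1.10", "Windows"),
    ("10.1.1.0/24", "10.1.1.11", "Windows"),
    ("10.1.1.0/24", "10.1.1.12", "Windows"),
    ("10.1.1.0/24", "10.1.1.13", "Windows"),
    ("10.1.1.0/24", "10.1.1.40", "Linux"),    # the odd one out
    ("10.1.2.0/24", "10.1.2.5",  "Linux"),
    ("10.1.2.0/24", "10.1.2.6",  "Linux"),
]

by_subnet = defaultdict(list)
for subnet, host, os_name in scan_results:
    by_subnet[subnet].append((host, os_name))

for subnet, hosts in by_subnet.items():
    os_counts = Counter(os_name for _, os_name in hosts)
    for host, os_name in hosts:
        # "rare" = under 25% of the subnet; the threshold is arbitrary
        if os_counts[os_name] / len(hosts) < 0.25:
            print(f"Outlier candidate: {host} ({os_name}) in {subnet}")
```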

Automation in Vulnerability Prioritization

This discussion, coupled with the reality that there are hundreds, thousands, or even tens of thousands of unpatched vulnerabilities on most enterprise networks, makes it fairly obvious that attempting to manually prioritize vulnerabilities is a fool’s errand. Luckily, a number of products on the market offer automated vulnerability prioritization solutions, often leveraging machine learning, to help remediation teams identify the highest-risk vulnerabilities. A quick internet search of “automated vulnerability prioritization” will yield several options, but it’s important to understand the difference between automated vulnerability management and the specific sub-element of that function, vulnerability prioritization. As discussed previously, it’s also important to distinguish vendors that simply use the CVSS score as their primary means of prioritization from those that account for, at a minimum, the factors discussed in this blog.

What Comes Next after Vulnerability Prioritization?

The obvious answer to this question is simply to patch the unremediated vulnerabilities, starting with the highest-priority ones. That, however, as remediation teams know well, is easier said than done for the two primary reasons given by remediation team members and mentioned previously: 1) resources, and 2) fear of disruption. Since patching isn’t easy, it’s even more important to prioritize the highest-risk vulnerabilities, but that “high priority” list could include tens or hundreds of patches, each of which requires a patching strategy unique to the system being updated. At trackd, we’re providing remediation teams with real-world (and real-time) data that can greatly influence that patching strategy process. Similar to a Google review of a local business or online product, the trackd platform gathers feedback (we call it “patching telemetry”) on the patching experience of organizations using the solution, specifically, whether or not the applied patch caused a disruption. Armed with this data, remediation teams can be more (or less) confident that a given patch will cause disruption and adjust their patching strategy accordingly. The ultimate goal of the trackd platform is to facilitate greater use of auto-patching, removing as many patches as possible from the list of those that require manual attention, greatly increasing the productivity of remediation teams and reducing the organization’s cyber risk profile.

Does Patching Often Cause Disruption?

Believe it or not, no. Some private studies put the number of disruptive patches at less than 2%, yet fear of disruption continues to plague the vulnerability remediation community. There are three primary reasons for this:

  1. Bad memories from the old days. In the early days of networking, patches much more often caused disruptions. Although those days are long gone, the psychological impact still lingers.
  2. Until now (see trackd.com), there’s been no way for remediation teams to know which of the thousands of patches they’re tasked with applying fall into the 2% and which are safe. Without solid data on a patch’s deployment history, they’re effectively gambling, albeit with very good odds.
  3. Risk is defined as the likelihood of an occurrence combined with the consequence of that occurrence. Thus, even if there’s a less than 2% chance a patch will cause a disruption (likelihood), the impact of a disruption can be very painful for the organization (consequence), not to mention the career trajectories of the remediation team members, as the back-of-the-envelope example below illustrates.
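To make that third point concrete, here is a simple expected-cost calculation; all of the figures are made-up assumptions, but they show how a small probability multiplied across many patches and a painful outage cost still adds up.

```python
# Back-of-the-envelope: a small disruption probability can still carry a
# meaningful expected cost. All figures below are illustrative assumptions.
p_disruption = 0.02           # <2% of patches cause disruption (per the studies cited above)
cost_of_outage = 250_000      # hypothetical cost of one disruptive patch ($)
patches_per_quarter = 300     # hypothetical patch volume

expected_cost = p_disruption * cost_of_outage * patches_per_quarter
print(f"Expected quarterly disruption cost: ${expected_cost:,.0f}")  # $1,500,000
```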