Future Accountability: Vulnerability Management Best Practices

Estimated Reading Time: 5 minutes

There are a few key reasons organizations start taking vulnerability management (VM) seriously. You may operate in a highly regulated industry like finance or healthcare and are therefore subject to compliance requirements. Perhaps an auditor has made it clear that you need better ways to manage risk and report plans across the organization. Or you may have launched a VM program to improve visibility and harden your attack surface. Taking it seriously means going beyond scanning and assessing; it means understanding the risks involved, prioritizing effectively, and managing an effective ongoing process.

Unfortunately, risks are abundant and ever-evolving, making VM necessary for any modern SecOps effort. Program leaders are in many ways tasked with future-proofing the organization: you must anticipate risk and predict which fixes will have the greatest impact. When your program is new, or when you start working with a VM service provider, the volume can at first seem overwhelming. To get your program off on the right foot, here are a few ideas for the early stages of any new VM effort.

VM Goes Beyond Scanning, But That's Where It Starts

Vulnerability management starts with an understanding of the attack surface. While vulnerability *management* goes beyond scanning, a vulnerability assessment is where the VM program starts. Scanning tools, such as those from our partner Tenable, assess the network for relevant IT assets in your environment. Designed to identify every potential source of vulnerability risk, these tools help map out your attack surface and form the backbone of your VM effort. Once you know what assets you have, the vulnerability scanner can then tell you what vulnerabilities and misconfigurations exist across your landscape of workstations, firewalls, servers, and devices. But the tool alone won't solve the actual problems.

Scanning tools will initially return thousands of configuration issues, outdated software to be patched, and hidden vulnerabilities that must eventually be addressed. The challenge is to fine-tune the tools to reduce the number of alerts, because false positives drain resources. These adjustments can only be made by skilled analysts with experience in your unique environment and a clear understanding of the desired security outcomes. Once your scans reach this higher fidelity, you then need to evaluate the associated risks and prioritize efforts to actually patch and mitigate the vulnerabilities.
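As a toy illustration of that tuning step, the sketch below drops findings already accepted as known risk and deduplicates repeat detections before anything reaches the prioritization queue. The finding fields are hypothetical for illustration, not Tenable's actual export schema:

```python
# Hypothetical scan findings; real scanners export far richer records.
findings = [
    {"id": "F1", "plugin": "SSL cert expired", "host": "web01", "accepted_risk": False},
    {"id": "F2", "plugin": "SMBv1 enabled", "host": "file02", "accepted_risk": True},
    {"id": "F3", "plugin": "SSL cert expired", "host": "web01", "accepted_risk": False},
]

def tune(findings):
    """Drop accepted-risk items and deduplicate identical (plugin, host) pairs."""
    seen = set()
    kept = []
    for f in findings:
        key = (f["plugin"], f["host"])
        if f["accepted_risk"] or key in seen:
            continue
        seen.add(key)
        kept.append(f)
    return kept

print(len(tune(findings)))  # → 1: only one unique, actionable finding remains
```

In practice this filtering lives in the scanner's own configuration (accepted-risk rules, scan policies) rather than post-processing, but the effect is the same: fewer, higher-fidelity alerts.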

Prioritizing Across Expanding Environments

Every asset is an attack vector, but not every asset is critical to business continuity. When an executive's email is down, the pressure may be real, but it's nothing like an AWS server going down or having to patch legacy software on every desktop.

No matter what industry you're in, chances are you've seen remarkable growth in the number of assets and endpoints to contend with. Remote work, telehealth adoption, IoT, and cloud migration all leave many SecOps teams scrambling to understand where to focus detection efforts, or how to plan remediation activities such as patching.

BYOD and remote workforce policies mean contending with phones, laptops, and tablets, each running a multitude of apps, operating systems, and disparate software that must be centrally managed and secured against potential risks. As organizations move to more complex hybrid cloud environments, they place sensitive data at risk of being accessed, viewed, or mishandled. Identifying assets and categorizing them by criticality is the first step to prioritization.

The objective of VM is business risk reduction, not merely the identification of risks. To reduce business risk, teams must first identify risks that impact revenue and business continuity, then prioritize efforts where the juice is worth the squeeze, so to speak.
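One common way to frame that trade-off is to weight a vulnerability's technical severity (e.g., its CVSS score) by the criticality of the asset it affects. The sketch below is a minimal illustration; the tier names, weights, and formula are assumptions for the example, not a standard or Deepwatch scoring model:

```python
# Illustrative asset-criticality weights (assumed values, not a standard).
CRITICALITY = {"revenue-critical": 3.0, "internal": 1.5, "lab": 0.5}

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "asset_tier": "lab"},
    {"cve": "CVE-B", "cvss": 7.5, "asset_tier": "revenue-critical"},
    {"cve": "CVE-C", "cvss": 6.1, "asset_tier": "internal"},
]

def priority(v):
    # Business risk ~ technical severity weighted by how much the asset matters.
    return v["cvss"] * CRITICALITY[v["asset_tier"]]

ranked = sorted(vulns, key=priority, reverse=True)
print([v["cve"] for v in ranked])  # → ['CVE-B', 'CVE-C', 'CVE-A']
```

Note the outcome: the CVSS 9.8 finding on a lab box drops below a CVSS 7.5 finding on a revenue-critical system, which is exactly the "juice worth the squeeze" judgment made explicit.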

Consider the prioritization of patching software. When a Zero-Day is discovered and disclosed by researchers or analysts, the clock starts ticking on a rush of threat actor activity. According to Microsoft, the volume of attacks exploiting a Zero-Day escalates in the two weeks following its announcement, as threat actors feverishly take advantage of the disclosure, and typically peaks in the two months that follow. The Deepwatch Adversary Tactics and Intelligence (ATI) team calls this out in our Zero-Day advisories.

For critical systems, organizations must patch vulnerabilities almost as quickly as they are discovered, but many fail to do so. According to one report, the average organization takes over 60 days to patch standard operating systems and applications, and months or even years to patch more complex business applications and systems. 
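A time-to-patch figure like that is straightforward to track for your own program. This sketch computes mean time to remediate from hypothetical discovery and patch dates (the records are invented for illustration):

```python
from datetime import date

# Hypothetical remediation records: discovery and patch dates per finding.
records = [
    {"cve": "CVE-X", "found": date(2023, 1, 5), "patched": date(2023, 1, 20)},
    {"cve": "CVE-Y", "found": date(2023, 1, 10), "patched": date(2023, 4, 2)},
    {"cve": "CVE-Z", "found": date(2023, 2, 1), "patched": date(2023, 2, 15)},
]

def mean_time_to_remediate(records):
    """Average days from discovery to patch across closed findings."""
    days = [(r["patched"] - r["found"]).days for r in records]
    return sum(days) / len(days)

print(mean_time_to_remediate(records))  # → 37.0 days
```

Segmenting the same calculation by asset criticality (patch time on revenue-critical systems vs. lab systems) turns a volumetric number into a risk-based one.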

Establish Meaningful Management Metrics

The effectiveness of the vulnerability management program itself is often overlooked as a key success metric. Organizations typically focus on quantitative metrics that don't truly support business risk reduction. According to research by Gartner, the most commonly tracked VM metrics are not risk-based and are often derived in silos, which leads to ineffective, low-value prioritization, negative impacts, and higher costs.

The metrics captured are often purely volumetric and lack business context, offering little value to senior executives. And short-term metrics don't capture the process maturity attained through sustained effort over time.

Predict with Confidence

In the end, VM efforts require a level of creativity and overcommunication to future-proof the organization against threats. Your challenge will be to translate visibility into action, then communicate the results effectively. Whether you create an in-house VM program or work with a VM provider like Deepwatch, establish metrics that are both quantitative and qualitative.

In one engagement with a healthcare customer, Deepwatch's initial vulnerability scans exposed over 100,000 high-priority vulnerabilities that had gone unpatched. After three months of close collaboration with Deepwatch, the customer had fixed over a million vulnerabilities. Two years later, the customer has significantly narrowed its attack surface and protected its critical assets, and as a result of the program's success has renewed its partnership with Deepwatch for three additional years.

Deepwatch’s Vulnerability Management services provide a baseline for collaboration with our customers. We help identify the critical assets, threats, and vulnerabilities relevant to an organization, and provide prioritization strategies that reduce business risk. Discover today how Deepwatch provides the people, processes, and technologies to fully or partially administer vulnerability management programs for our customers.

Learn more here: https://www.deepwatch.com/resource/vulnerability-management-data-sheet/

