5 Vulnerability Management Best Practices

High-severity vulnerabilities have become more frequent. This post provides best practices to help prioritize improvements to vulnerability management programs.

Vulnerability management is the process of identifying, prioritizing, remediating, and reporting on vulnerabilities to proactively reduce your cyber risk. As organizations transition to the cloud and the number of open-source libraries grows, vulnerability management becomes more and more difficult. In 2021, more than 20,000 CVEs (common vulnerabilities and exposures) were reported, a 212% increase from 2016. And those are just the CVEs we know of. According to Palo Alto Networks' State of Exploit Development Report, 80% of public exploits are published before their corresponding CVEs, with an average gap of 23 days between when an exploit becomes publicly available in Exploit Database and when the CVE is published.

It is nearly impossible for any cyber team to completely eliminate vulnerabilities from their environment, but here are some best practices for finding the areas to prioritize.

Create and maintain an asset inventory

Simply put, you can’t protect what you don’t know exists. Creating an accurate asset inventory is an essential first step in vulnerability management. In fact, the Center for Internet Security (CIS) lists “inventory and control of enterprise assets” and “inventory and control of software assets” as numbers one and two, respectively, on its 2021 list of Critical Security Controls, the recommended set of actions for cyber defense. Your inventory should include software and hardware, including cloud services and OT devices. You want to know about all of your hosts, even the ones that don’t support installing agents, sit outside the domain, or were recently active but no longer are. This type of inventory often already exists in partial form, but it is frequently split across different systems owned by different departments, and there can be confusion over whether IT or security should manage a central asset repository.

Whichever team assumes responsibility for it, you should collect enough information not only to locate all of your assets but also to understand which security policies should apply to them. This information may include hostname, a unique identifier such as serial number or MAC address, network address, asset owner, asset location, business function, data sensitivity, OS version, and software licensing status.
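For teams that track their inventory programmatically, an asset record might look something like the sketch below. The field names are illustrative only, not a standard schema; adapt them to whatever your inventory or CMDB tooling expects.

```python
# Minimal sketch of an asset inventory record capturing the fields above.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass


@dataclass
class Asset:
    hostname: str
    serial_number: str       # or MAC address as the unique identifier
    network_address: str
    owner: str               # team or individual responsible for the asset
    location: str
    business_function: str
    data_sensitivity: str    # e.g. "public", "internal", "restricted"
    os_version: str
    licensed: bool = True    # software licensing status


# Example record for a hypothetical web server.
inventory = [
    Asset("web-01", "SN-4821", "10.0.1.15", "platform-team", "us-east-1",
          "customer portal", "restricted", "Ubuntu 22.04"),
]
```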

Vulnerability scanning can help you locate assets but might not provide all of the business context you need. If an inventory doesn’t already exist, you may need to do some legwork, talking with business leaders and reviewing technology purchases, software licenses, maintenance contracts, and infrastructure devices such as DNS servers, to get the full picture.

Post COVID-19, the popularity of work-from-home policies adds another layer of complication to your asset inventory, as you need to manage an influx of endpoints spread across numerous home Wi-Fi networks. Scanning becomes far less valuable without fingerprinting: an algorithm that maps hosts across current and past assessments and keeps scanning results consistent when IP addresses or hostnames change.
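As a rough illustration of the fingerprinting idea, the hypothetical helper below matches a scanned host to an existing inventory record using stable attributes before falling back to hostname, so a changed DHCP address doesn't create a duplicate record. Production scanners use far more sophisticated, weighted matching across many attributes.

```python
# Simplified fingerprint matching: link a newly scanned host to a known asset
# by stable attributes (MAC address, serial number) before falling back to
# hostname. Hosts and assets are represented here as plain dicts.
def match_asset(scanned_host: dict, inventory: list[dict]):
    # Prefer stable hardware identifiers that survive IP/hostname changes.
    for asset in inventory:
        if scanned_host.get("mac") and scanned_host["mac"] == asset.get("mac"):
            return asset
        if scanned_host.get("serial") and scanned_host["serial"] == asset.get("serial"):
            return asset
    # Weaker fallback: hostname match.
    for asset in inventory:
        if scanned_host.get("hostname") == asset.get("hostname"):
            return asset
    return None  # unmatched: treat as a newly discovered asset
```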

Without an accurate asset inventory, it becomes impossible to map vulnerabilities to the assets they impact, which is an essential step in accurately prioritizing remediation.

Complete vulnerability scanning at least quarterly, augmented by penetration testing

A commonly asked question in vulnerability management is how often you should perform vulnerability scans. The answer largely depends on the size and complexity of your environment, how often you make changes, and the regulations you must comply with, but at a minimum it is recommended that you perform an internal and an external scan at least once per quarter. Depending on your risk tolerance, you may opt for a monthly cadence instead. Keep in mind those 20,000 CVEs from 2021: that is more than 1,600 per month, or 5,000 per quarter. Obviously, not all CVEs will impact your environment, and if you have bottlenecks in your remediation process, scanning too frequently means rescanning the same vulnerabilities you haven’t fixed yet. But infrequent scans let vulnerabilities pile up. Your scanning frequency should strike a balance between detecting new vulnerabilities in a timely manner and not wasting resources scanning more often than your remediation capacity can support.

It is also recommended that you scan after significant network changes, such as adding new system components, changing your network structure, modifying your firewalls, or making significant authorization or access changes. If your organization frequently changes web applications or cloud assets, it is a good idea to integrate testing into your deployment pipeline via your scanning tool’s API so you can catch vulnerabilities earlier, before release. For large-scale changes, you may want to conduct penetration testing as well. Penetration testing goes a step further than scanning by attempting to exploit vulnerabilities. This helps confirm scanning results (which often include false positives), find vulnerabilities that scanners don’t commonly detect, and prioritize which vulnerabilities pose the most risk in a real-world scenario. Penetration testing is generally more costly, manual, and invasive than scanning, so it is usually conducted less frequently and with more deliberate, limited targeting.
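To make the pipeline integration concrete, the sketch below posts a scan request to a scanner API after a deployment and fails the pipeline step if high-severity findings come back. The endpoint URL, token variable, request payload, and response fields are all placeholders; consult your scanning tool's API documentation for the real interface.

```python
# Hypothetical post-deploy scan trigger for a CI/CD pipeline step.
# All URLs, environment variables, and response fields are placeholders.
import os
import sys

import requests


def scan_after_deploy(target: str) -> int:
    resp = requests.post(
        "https://scanner.example.com/api/scans",   # placeholder endpoint
        headers={"Authorization": f"Bearer {os.environ['SCANNER_TOKEN']}"},
        json={"target": target, "profile": "web-app"},
        timeout=30,
    )
    resp.raise_for_status()
    findings = resp.json().get("high_severity_findings", 0)  # assumed field
    return 1 if findings else 0   # non-zero exit code fails the pipeline step


if __name__ == "__main__":
    sys.exit(scan_after_deploy(sys.argv[1]))
```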

Prioritize vulnerability remediation based on risk

As the number of new critical vulnerabilities climbs, it has become impossible for security teams to remediate every single vulnerability they discover. Prioritization is, therefore, more important than ever, but how do you determine which vulnerabilities to target first? Many traditional vulnerability management programs have used a vulnerability’s CVSS score as the deciding factor in prioritization. While high-severity vulnerabilities should naturally be ranked higher, CVSS scores alone can lack the context necessary to truly gauge the risk a vulnerability poses to your organization.

More recently, risk-based prioritization advocates targeting the vulnerabilities that create the most risk, factoring in not only CVSS severity but also the conditions needed to exploit the flaw, where it sits in your environment, the business criticality of the impacted assets, and whether a patch is readily available. Focusing on remotely exploitable, high-severity vulnerabilities with an available patch that affect your most critical assets will generally reduce the most risk for your effort.
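To make the idea concrete, the toy function below weights a finding's CVSS score by exploitability, asset criticality, and patch availability. The weights and field names are illustrative, not a recognized scoring standard; real risk-based prioritization tools use richer threat intelligence.

```python
# Toy risk-scoring function: weight CVSS severity by exploitability,
# asset criticality, and patch availability. Weights are illustrative.
def risk_score(finding: dict) -> float:
    score = finding["cvss"]                         # base severity, 0-10
    if finding.get("remotely_exploitable"):
        score *= 1.5                                # easier to reach -> more urgent
    if finding.get("exploit_public"):
        score *= 1.5                                # public exploit code exists
    score *= finding.get("asset_criticality", 1.0)  # e.g. 0.5 dev box, 2.0 crown jewel
    if finding.get("patch_available"):
        score *= 1.2                                # quick win: fixable now
    return score


findings = [
    {"id": "finding-A", "cvss": 9.8, "remotely_exploitable": True,
     "asset_criticality": 2.0, "patch_available": True},
    {"id": "finding-B", "cvss": 7.5, "asset_criticality": 0.5},
]

# Highest-risk findings first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f["id"], round(risk_score(f), 1))
```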

Invest in tools capable of automation

Another important factor in the success of your vulnerability management program is the tools you’re using. You may not be able to manually remediate every vulnerability, but if your vulnerability management tool can initiate automated remediation workflows, you’ll have a great head start. Remediation tends to be one of the biggest bottlenecks in the vulnerability management process. You’ve performed your scanning, but if your technology won’t integrate with your threat intelligence, your asset inventory, your IT ticketing system, or the systems you use to deploy patches, there will likely be a delay between when you discover a vulnerability and when remediation is actually completed. In fact, a 2021 report by WhiteHat Security found that the average time to fix a critical vulnerability was 205 days. Compare that to the seven days the same report says it takes to weaponize a bug. The longer it takes to remediate a known vulnerability, the more likely you are to fall victim to an attack that could have been prevented. Investing in the right tools and putting the workflows needed for automation in place will speed up your time to remediation, significantly reducing your risk.
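As a sketch of what such a workflow hand-off could look like, the snippet below opens a remediation ticket for a high-risk finding via a hypothetical ticketing API. The endpoint and payload fields are stand-ins for whatever your IT service management tool actually exposes.

```python
# Hypothetical remediation hand-off: open a ticket for a new high-risk finding.
# The endpoint and payload are placeholders for your ticketing tool's real API.
import requests


def open_remediation_ticket(finding: dict, asset_owner: str) -> str:
    resp = requests.post(
        "https://ticketing.example.com/api/issues",   # placeholder endpoint
        json={
            "title": f"Remediate {finding['id']} on {finding['host']}",
            "assignee": asset_owner,                  # from the asset inventory
            "priority": "P1" if finding["cvss"] >= 9.0 else "P2",
            "description": finding.get("summary", ""),
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["ticket_id"]                   # assumed response field
```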

Integrate vulnerability management into your application development lifecycle

According to the Synopsys Cybersecurity Research Center (CyRC)’s annual Open Source Security and Risk Analysis (OSSRA) report for 2022, 78% of the code in audited codebases was open source, and 81% of those codebases contained at least one vulnerability. Using open-source codebases makes application development faster and easier, but it also opens the organization up to significant security risk. Rather than scrambling to secure applications right before release, or worse, after, organizations are “shifting security left,” a buzz phrase meaning that security moves earlier (left) in the linear workflow so it is integrated into DevOps rather than treated as a post-production afterthought.

If you are going to integrate security into DevOps, it has to be truly built in so as not to hamper the rapid CI/CD pipeline. Tacking on traditional security processes will not work and will only slow down application development. The modern application development cycle is continuous; your security monitoring must be as well. Continuous monitoring, the practice of using automated processes to detect vulnerabilities and misconfigurations at every stage of the pipeline, is an essential part of this transition to a security-as-code mindset. Once again, automation and the selection of tools with advanced capabilities are essential.

One of the challenges that organizations face when implementing continuous monitoring is analysis paralysis caused by the sheer volume of data that is collected. It becomes impossible for humans to sift through all of the events collected, which means you need a system that can aggregate logs and event data from various sources, find correlations, detect deviations, and provide meaningful information that can be acted on.
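A trivial illustration of that aggregation-and-deviation idea is sketched below: count events per source and flag any source whose daily volume deviates sharply from its historical baseline. Real continuous-monitoring and SIEM platforms perform far richer correlation than this.

```python
# Toy deviation detection: flag event sources whose daily count strays more
# than `sigma` standard deviations from their historical mean.
from statistics import mean, stdev


def flag_deviations(history: dict[str, list[int]],
                    today: dict[str, int],
                    sigma: float = 3.0) -> list[tuple[str, int, float]]:
    alerts = []
    for source, counts in history.items():
        if len(counts) < 2:
            continue                      # not enough history to baseline
        mu, sd = mean(counts), stdev(counts)
        observed = today.get(source, 0)
        if sd and abs(observed - mu) > sigma * sd:
            alerts.append((source, observed, round(mu, 1)))
    return alerts


history = {"waf": [120, 130, 118, 125], "auth": [40, 38, 45, 42]}
print(flag_deviations(history, {"waf": 560, "auth": 41}))  # flags the WAF spike
```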

You should employ a combination of static application security testing (SAST) tools to examine your code as it’s developed and dynamic application security testing (DAST) tools to run automated scans against your applications once deployed. Finally, software composition analysis (SCA) tools can detect vulnerabilities in dependencies, reducing the risk introduced by third-party code libraries.
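The snippet below sketches the software composition analysis idea in miniature: comparing pinned dependency versions against a list of known-vulnerable releases. The advisory data here is invented for illustration; real SCA tools query curated vulnerability databases and understand transitive dependencies.

```python
# Miniature SCA check: flag pinned dependencies that match a known-vulnerable
# version. The advisory data below is a made-up placeholder.
known_vulnerable = {
    ("examplelib", "1.2.0"): "CVE-XXXX-YYYY (placeholder advisory)",
}


def check_requirements(lines: list[str]) -> list[str]:
    findings = []
    for line in lines:
        if "==" not in line:
            continue                      # only handle exact pins in this sketch
        name, version = line.strip().split("==")
        advisory = known_vulnerable.get((name.lower(), version))
        if advisory:
            findings.append(f"{name} {version}: {advisory}")
    return findings


print(check_requirements(["examplelib==1.2.0", "otherlib==2.0.1"]))
```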

Are you interested in collaborating with other security professionals to improve your vulnerability management program? RH-ISAC members can join RH-ISAC’s vulnerability management working group to participate in vulnerability management discussions and exchange best practices. Learn more about RH-ISAC membership.
