Ashley Lukeeram, Country Manager – Canada, Tenable
Running an effective security program is an increasingly daunting task for cyber security practitioners. An expanded digital footprint has become a must for organizations looking to differentiate themselves through better customer service, new market penetration and higher profitability. Work-from-home policies have also kicked in to ensure business continuity and employee safety during the pandemic. This business transformation means a bigger attack surface for hackers to go after, while organizations are left with the same limited cyber security staff. I often get asked this question by security leaders across Canada:
“We are in an unbalanced battle with the bad actors. How do we stay ahead of them?”
For the purpose of this blog, I will focus on a foundational element of any cyber security program which can help address this challenge: Vulnerability Management (VM). This may not sound as exciting as the next-gen thing out there on the market. But as my colleague @James Smith puts it, “It is as important as flossing our teeth to keep our gums healthy.” We come across organizations which conduct a vulnerability assessment (VA) once or twice a year and then deliver a thick booklet of recommendations to their IT operations and leadership teams. More mature organizations scan more frequently than the annual or quarterly compliance checkmark exercise. Others build on this by putting penetration tests and incident response exercises in place. But progress is still being measured by the number of systems patched in the environment. In my opinion, this is where we are missing the target! It is time to change the game towards a risk-based paradigm for vulnerability management. The following sections outline why we need to take a different approach and what should be part of this new model.
To begin with, as an ethical hacker myself, I can tell you the first thing bad actors do is reconnaissance of the targeted environment. The main goal is to identify loopholes and build an attack plan from there. There is a plethora of exploit kits on the dark web which hackers can choose from to exfiltrate data, and these threat actors and techniques change every day. Imagine a point-in-time assessment has categorized a vulnerability as low or medium criticality. This typically puts the vulnerability at a lower patching priority. If a hacker starts actively going after that vulnerability today because an exploit kit has become available, the environment will get breached. We scan primarily because we need to minimize risk to the organization’s infrastructure and business. So the first area to consider is how to add threat intelligence to vulnerability data.
Threat intel should also matter to VM programs because the metric currently used by the industry, the Common Vulnerability Scoring System (CVSS), is not enough. CVSS measures only the technical severity of a vulnerability. In 2018, roughly 60% of all vulnerabilities (over 10,000) were rated high or critical by CVSS. If everything is important, nothing really is! IT operations teams often complain of vulnerability fatigue precisely because CVSS is being used as the only metric for patching. By the way, this is one of the most common reasons security teams limit their scanning scope and/or frequency. Is this not antagonistic to what security teams should be doing, namely seeing everything in as near real time as possible so they can take risk mitigation action? Many of these security practitioners are stuck between levels 1 and 2 of the vulnerability management maturity model.
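To make the fatigue problem concrete, here is a minimal sketch contrasting CVSS-only triage with a triage that weights active exploitation. The vulnerability IDs, scores and the 7.0 cutoff are all illustrative assumptions, not data from any real scanner or advisory:

```python
# Hypothetical sample data -- illustrative IDs and scores, not real advisories.
vulns = [
    {"id": "VULN-001", "cvss": 9.8, "exploit_available": False},
    {"id": "VULN-002", "cvss": 7.2, "exploit_available": True},
    {"id": "VULN-003", "cvss": 8.1, "exploit_available": False},
    {"id": "VULN-004", "cvss": 5.4, "exploit_available": True},
]

# CVSS-only triage: everything >= 7.0 is "urgent" -- 3 of 4 items here,
# mirroring the fatigue problem when most findings rate high or critical.
cvss_urgent = [v["id"] for v in vulns if v["cvss"] >= 7.0]

# Threat-aware triage: weight active exploitation above raw severity,
# surfacing the medium-severity VULN-004 that a CVSS cutoff would defer.
threat_urgent = [
    v["id"]
    for v in sorted(vulns, key=lambda v: v["cvss"], reverse=True)
    if v["exploit_available"]
]

print(cvss_urgent)    # ['VULN-001', 'VULN-002', 'VULN-003']
print(threat_urgent)  # ['VULN-002', 'VULN-004']
```

Note how the CVSS-only queue misses VULN-004 entirely even though it is the one being actively exploited, while it floods the patching team with two items nobody is attacking.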
From a full-visibility perspective, I also come across organizations using different tools to assess vulnerabilities in their web applications, cloud, operational technology, containers and traditional IT (servers, desktops, network). This leads to silos of data, and therefore no common taxonomy of risk for the organization. When a new vulnerability is announced in the media, the first question senior executives ask their teams is, “How secure are we against this vulnerability?” Only a single, quantitative, risk-based measure of vulnerability, irrespective of the environment, can answer that question.
Any risk practitioner would argue that the model discussed so far has one key element missing: the impact component (Risk = Threat likelihood x Impact). This is where asset criticality (business context) must be factored into a risk-based vulnerability management (RBVM) model. With such an approach, organizations can start winning by patching the most critical systems first, the ones which would hurt the business most if breached. Not all systems and data are created equal! No one has infinite resources! So why waste time with a whack-a-mole game of patching when there is an approach that prioritizes effort on the organization’s highest-risk vulnerabilities?
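As a toy illustration of the Risk = Threat likelihood x Impact idea, the sketch below folds CVSS, exploit availability and asset criticality into one score. The 1.5 exploit multiplier and the asset tiers are my own illustrative assumptions, not any vendor's actual RBVM model:

```python
def threat_likelihood(cvss: float, exploit_available: bool) -> float:
    """Normalize CVSS to 0-1 and boost it when an exploit is in the wild.

    The 1.5 boost factor is an illustrative assumption, not a standard value.
    """
    base = cvss / 10.0
    return min(1.0, base * (1.5 if exploit_available else 1.0))

# Hypothetical asset tiers capturing business context (the impact component).
ASSET_IMPACT = {"crown-jewel": 1.0, "business": 0.6, "lab": 0.2}

def risk_score(cvss: float, exploit_available: bool, asset_tier: str) -> float:
    """Risk = Threat likelihood x Impact, rounded for readability."""
    return round(threat_likelihood(cvss, exploit_available) * ASSET_IMPACT[asset_tier], 2)

# A critical CVSS on a lab box can rank below a medium CVSS with an active
# exploit on a crown-jewel system -- the whole point of risk-based patching.
print(risk_score(9.8, False, "lab"))         # 0.2
print(risk_score(5.4, True, "crown-jewel"))  # 0.81
```

The design choice here is simply that both factors multiply: a vulnerability that is unlikely to be exploited, or that sits on an asset with little business impact, decays toward zero instead of clogging the patch queue.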
To summarize, if we want to start winning the battle against hackers, organizations must practice their cyber hygiene very well. Prioritization of patching based on risk is much needed. The conversation must evolve from the number of systems patched to how much risk we are continuously reducing. Point-in-time assessments and compliance checkmarks do not equate to being secure. Technology exists today where artificial intelligence (AI) and data science models for risk-based vulnerability management (RBVM) are being implemented. But like anything, change needs to start with our mindset first!