
15 Days of Cyber Insurance: Risk quantification data

Risk quantification relies heavily on datasets from security vendors and insurance claims aggregators. The Verizon DBIR and the NetDiligence Cyber Claims Study have been cited in this Medium series before, and both provide valuable trends and aggregated data across multiple industries. They also have gaps, however, which point to areas where additional information from researchers would be valuable, or areas that insurers may want to investigate to improve their own risk quantification methods.

The most recent Verizon report was criticized for its list of the ten most common exploits, as many of those exploits were based on older signatures and together accounted for upwards of 80% of all exploited vulnerabilities observed. CVE IDs include a year indicating when the vulnerability was submitted to the numbering authority, and the project's overall goal is to catalog known vulnerabilities so they are easier to fix and patch. However, CVE covers only about half of known or disclosed vulnerabilities, because some are not of the type CVE catalogs and some are never submitted to CVE. The VulnDB report may be a more thorough accounting, but further research is needed to determine how often the additional vulnerabilities it identifies are actually exploited.
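As an aside on mechanics: the year embedded in a CVE ID can be extracted programmatically, which is how an age breakdown like the one debated in the DBIR can be reproduced from raw detection data. A minimal sketch in Python, assuming a hypothetical list of exploited CVE IDs and detection counts (the IDs and figures below are illustrative placeholders, not the DBIR's actual top ten):

```python
import re
from collections import Counter

# Hypothetical detections: (CVE ID, exploitation count).
# Placeholder data, not the DBIR's actual figures.
detections = [
    ("CVE-1999-0517", 120_000),
    ("CVE-2001-0540", 85_000),
    ("CVE-2014-6271", 40_000),
    ("CVE-2015-1635", 15_000),
]

CVE_PATTERN = re.compile(r"^CVE-(\d{4})-\d{4,}$")

def cve_year(cve_id: str) -> int:
    """Extract the year embedded in a CVE ID (the assignment year)."""
    match = CVE_PATTERN.match(cve_id)
    if match is None:
        raise ValueError(f"not a valid CVE ID: {cve_id}")
    return int(match.group(1))

# Tally exploitation volume by CVE year to see how much of the
# total is concentrated in older identifiers.
by_year = Counter()
for cve_id, count in detections:
    by_year[cve_year(cve_id)] += count

total = sum(by_year.values())
for year in sorted(by_year):
    share = by_year[year] / total
    print(f"{year}: {by_year[year]:>7} detections ({share:.0%} of total)")
```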

Another aspect of the Verizon DBIR top-ten vulnerabilities list that jumped out at researchers was how old many of the vulnerabilities were. Four of the top ten dated from 2001 or earlier, which raises the question of why they hadn't already been patched. Insurers auditing clients' security systems would do well to ensure that clients scan newly added hardware regularly rather than assuming older vulnerabilities have been eradicated from their networks. The eight scanners Verizon used to compile its data (Beyond Security, Tripwire, McAfee VM, Qualys, Tenable, BeyondTrust, Rapid7, and OpenVAS) would be a good starting point when drafting recommendations that customers scan their systems regularly.
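To make that recommendation concrete, one simple check an auditor could run over exported scan findings is to flag any host still carrying a CVE older than some cutoff. A minimal sketch, assuming a hypothetical findings map keyed by host (the field names and the ten-year cutoff are illustrative, not tied to any particular scanner's export format):

```python
from datetime import date

# Hypothetical scan export: host -> list of CVE IDs found on it.
# Real scanners (Qualys, Tenable, etc.) export richer records;
# this is a simplified stand-in for illustration.
findings = {
    "web-01": ["CVE-2014-6271", "CVE-2001-0540"],
    "db-02": ["CVE-2015-1635"],
    "legacy-09": ["CVE-1999-0517", "CVE-2001-0876"],
}

STALE_AFTER_YEARS = 10  # illustrative cutoff, not an industry standard

def is_stale(cve_id: str, as_of: date) -> bool:
    """Treat a CVE as stale if its embedded year exceeds the cutoff."""
    year = int(cve_id.split("-")[1])
    return as_of.year - year > STALE_AFTER_YEARS

today = date.today()
for host, cves in findings.items():
    stale = [c for c in cves if is_stale(c, today)]
    if stale:
        print(f"{host}: still exposed to old vulnerabilities: {', '.join(stale)}")
```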

Digging into the data at this level of granularity is interesting to some, but others may question why examining individual vulnerabilities matters when determining an overall likelihood of being breached. As with any statistical analysis, the more accurate the underlying data, the more accurate the eventual recommendation; because some risk quantification schemes rely heavily on resources like the Verizon DBIR, fully understanding where its data come from is essential to understanding the resulting risk quantification.
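One way to see why the underlying data matter is to trace how an input error propagates through a breach-likelihood estimate. A minimal sketch, assuming a toy model in which per-vulnerability exploitation probabilities are treated as independent (a simplification for illustration; real quantification schemes weight exposure, controls, and threat activity):

```python
# Toy breach-likelihood model: probability that at least one of a set
# of independent vulnerabilities is exploited in a year. Purely
# illustrative; the probabilities below are hypothetical.

def breach_likelihood(exploit_probs):
    """P(at least one exploited) = 1 - product of P(not exploited)."""
    p_none = 1.0
    for p in exploit_probs:
        p_none *= (1.0 - p)
    return 1.0 - p_none

# Hypothetical per-vulnerability annual exploitation probabilities.
estimated = [0.05, 0.02, 0.01]
print(f"estimated likelihood: {breach_likelihood(estimated):.1%}")

# If the source data overstated the first probability (say, because
# an old signature inflated its detection count), the output shifts.
corrected = [0.01, 0.02, 0.01]
print(f"corrected likelihood: {breach_likelihood(corrected):.1%}")
```

Small errors in a handful of inputs can move the headline number by several percentage points, which is why scrutinizing sources like the DBIR's scanner data is worth the effort.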

-Allison Berke
