NVD) assigns a severity rating to every vulnerability: "High", "Medium", or "Low". The rating is determined by ranges of CVSS (Common Vulnerability Scoring System) v2 scores. I've not been a big fan of CVSS: I don't think it works particularly well when applied to software that is shipped by multiple vendors, or for open source software and libraries that don't know all the possible use-cases of their software.
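The mapping from score ranges to ratings is mechanical. A minimal sketch, assuming the ranges NVD uses for CVSS v2 (Low 0.0–3.9, Medium 4.0–6.9, High 7.0–10.0; the function name is my own):

```python
def nvd_severity(score: float) -> str:
    """Map a CVSS v2 base score to NVD's severity rating.

    Assumed ranges (from NVD's CVSS v2 qualitative mapping):
    Low 0.0-3.9, Medium 4.0-6.9, High 7.0-10.0.
    """
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"
```

So a correction that drops a score from, say, 7.8 to 6.8 also moves the issue out of the "High" bucket, which is why fixing the base metrics matters for anyone consuming the ratings.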
Even though I'm not a fan, NVD publish a CVSS score for every issue, security companies are using those scores in their vulnerability feeds to customers, and people are using them for metrics. So it's important that these scores are accurate.
I decided to take a look at how accurate the CVSS scores were, so for every vulnerability we fixed in any Red Hat product in June 2007 I examined the CVSS score given by NVD. For each one I figured out whether the CVSS base metrics were correct, and where they were not, I submitted a correction back to NVD. This analysis of the vulnerabilities was based on their possible worst-case threat to all platforms (I didn't adjust the CVSS scores for how the issues affected Red Hat products specifically).
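The base metrics fully determine the base score, so getting them right is the whole game. A sketch of the CVSS v2 base equation, using the metric weights from the CVSS v2 specification (the dictionary layout and function name are my own):

```python
# CVSS v2 base metric weights, as given in the v2 specification.
ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}       # Local/Adjacent/Network
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}    # High/Medium/Low
AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}      # Multiple/Single/None
IMPACT = {"N": 0.0, "P": 0.275, "C": 0.66}               # None/Partial/Complete

def cvss2_base_score(av, ac, au, c, i, a):
    """Compute the CVSS v2 base score from the six base metrics."""
    impact = 10.41 * (1 - (1 - IMPACT[c]) * (1 - IMPACT[i]) * (1 - IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# A remotely exploitable, unauthenticated, full-compromise issue
# (AV:N/AC:L/Au:N/C:C/I:C/A:C) scores 10.0.
```

Because the impact and exploitability terms multiply through, a single wrong metric (say, "Complete" confidentiality impact where the real impact was "Partial") can swing the score by several points and change the severity bucket.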
There were 39 vulnerabilities in total, and unfortunately only 8 of the scores were accurate. I submitted corrections to NVD, and they fixed the CVSS scores on the remaining 31 vulnerabilities.
20 vulnerabilities ended up moving down in ranking, 6 moved up, and 5 stayed the same (although the CVSS score changed).
Before the corrections, 14 of the 39 issues were rated "High"; after the corrections, just 3 are.
Those corrections are now live in the NVD, and I really appreciate how quick the folks behind NVD were at checking and making the changes. I've submitted corrections to them for a couple more months too, and I'll write about those when they're complete. Unfortunately it takes a lot of time to investigate each issue and make the corrections, so that will limit how far back into 2007 we can go.