


The National Vulnerability Database (NVD) assigns a severity rating to every vulnerability: "High", "Medium", or "Low". The rating is determined by ranges of CVSS (Common Vulnerability Scoring System) v2 scores. I've not been a big fan of CVSS: I don't think it works particularly well when applied to software that is shipped by multiple vendors, or to open source software and libraries whose authors don't know all the possible use-cases of their software.

Even though I'm not a fan, NVD publishes a CVSS score for every issue, security companies are using those scores in their vulnerability feeds to customers, and people are using them for metrics. So it's important that these scores are accurate.

I decided to take a look at how accurate the CVSS scores were, so for every vulnerability we fixed in any Red Hat product during June 2007 I examined the CVSS score given by NVD, figured out whether the CVSS base metrics were correct, and where they were not, submitted a correction back to NVD. This analysis was based on the worst-case threat each vulnerability posed to all platforms (I didn't adjust the CVSS scores for how the issues affected Red Hat products specifically).
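For reference, once the base metrics are agreed, the rest is mechanical: the CVSS v2 base score is computed from the six base metrics using the weights in the specification, and NVD's rating comes from fixed score ranges (0.0-3.9 "Low", 4.0-6.9 "Medium", 7.0-10.0 "High"). Here's a rough sketch of that calculation in Python; the example vector at the end is made up, not one of the June 2007 issues:

    # Sketch: CVSS v2 base score from the six base metrics, plus NVD's
    # severity buckets.  The metric weights come from the CVSS v2 equations.
    ACCESS_VECTOR     = {"L": 0.395, "A": 0.646, "N": 1.0}
    ACCESS_COMPLEXITY = {"H": 0.35,  "M": 0.61,  "L": 0.71}
    AUTHENTICATION    = {"M": 0.45,  "S": 0.56,  "N": 0.704}
    CIA_IMPACT        = {"N": 0.0,   "P": 0.275, "C": 0.660}

    def base_score(av, ac, au, c, i, a):
        impact = 10.41 * (1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a]))
        exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
        f_impact = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    def nvd_rating(score):
        # NVD's published ranges: 0.0-3.9 Low, 4.0-6.9 Medium, 7.0-10.0 High
        if score >= 7.0:
            return "High"
        if score >= 4.0:
            return "Medium"
        return "Low"

    # Hypothetical vector: network accessible, low complexity, no auth, partial C/I/A
    score = base_score("N", "L", "N", "P", "P", "P")
    print(score, nvd_rating(score))   # 7.5 High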

There were 39 vulnerabilities in total, and unfortunately only 8 of the scores were accurate. I submitted corrections to NVD and they fixed the CVSS scores on the remaining 31 vulnerabilities.

20 vulnerabilities ended up moving down in ranking, 6 vulnerabilities moved up, and 5 stayed the same (although the CVSS score changed).

Before the corrections, 14 of the 39 issues were rated "High"; after the corrections just 3 are rated "High".

Those corrections are now live in the NVD, and I really appreciate how quick the folks behind NVD were at checking and making the changes. I've submitted corrections to them for a couple more months too, and I'll write about those when they're complete. Unfortunately it does take a lot of time to investigate each issue and do the corrections, so that will limit how far back into 2007 we can correct.


Last month I read a blog entry from hadess via Fedora Planet about hardware that lets you run homebrew applications on the Nintendo DS. There are a ton of homebrew applications available, but as yet no Jabber client.

My home automation system is all based around XMPP, with a standard Jabber server to which all the home automation systems connect to share messages. I wrote it like this so that it would be easy to take an existing Jabber client for a given platform and turn it into a nice-looking front end with minimal effort.
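To give a feel for how little each device actually has to do, here's a rough sketch of the exchange in Python rather than C (the server, account, and JIDs are made up). It uses the old plaintext jabber:iq:auth login rather than TLS/SASL, which matches the lack of encryption support mentioned below:

    # Minimal, hand-rolled XMPP exchange (sketch only: no TLS/SASL, no error
    # handling).  Server name, account, and JIDs here are made up.
    import socket

    SERVER, USER, PASSWORD = "jabber.example.home", "doorbell", "secret"

    s = socket.create_connection((SERVER, 5222))

    def xml(data, expect_reply=True):
        """Send a chunk of XML and optionally dump whatever the server says back."""
        s.sendall(data.encode("utf-8"))
        if expect_reply:
            print(s.recv(4096).decode("utf-8", "replace"))

    # Open the client-to-server stream
    xml("<stream:stream to='%s' xmlns='jabber:client' "
        "xmlns:stream='http://etherx.jabber.org/streams'>" % SERVER)

    # Legacy plaintext login (jabber:iq:auth, XEP-0078) -- no TLS or SASL
    xml("<iq type='set' id='auth'><query xmlns='jabber:iq:auth'>"
        "<username>%s</username><password>%s</password>"
        "<resource>nds</resource></query></iq>" % (USER, PASSWORD))

    # Announce presence and share a home-automation event; the server
    # doesn't acknowledge these, so don't wait for a reply
    xml("<presence/>", expect_reply=False)
    xml("<message to='display@%s' type='chat'>"
        "<body>doorbell: front door pressed</body></message>" % SERVER,
        expect_reply=False)

    xml("</stream:stream>", expect_reply=False)
    s.close()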

I found Iksemel, a portable C XML parser and protocol library that looked perfect, and it only took a couple of hours to port it to the NDS, and a couple more hours to get it working with PAlib for wifi. It's not a generic Jabber chat client, but it wouldn't take too much work to turn it into one (although I didn't bother with encryption support, so you won't be able to use it with Google Talk servers, for example). Anyway, the code might save someone a few hours, so I've made the source available.

I've included a copy of Iksemel, so if you want to build this yourself all you need is a working development environment: devkitPro and PAlib. This still needs some work: I need to integrate a library to handle displaying images from the network (when the phone rings it can pop up the caller's picture, or a streaming picture from one of the cameras when the doorbell is pushed).

NDS Ham


Although Red Hat is well known for Red Hat Enterprise Linux, we actually have a large number of other supported products, both layered on top of Enterprise Linux (like Red Hat Application Stack) and stand-alone (like Red Hat Directory Server). The majority of these products are serviced through the Red Hat Network, get our security advisories in a standard way, and are included in the Security Response Team metrics. But our analysis scripts were not particularly consistent in dealing with product names.

Common Platform Enumeration (CPE) is a naming scheme designed to combat these inconsistencies, and is part of the 'Making Security Measurable' initiative from MITRE. From today we're supporting CPE in our Security Response Team metrics: we publish a mapping of Red Hat advisories to both CVE names and CPE platforms (updated daily), and you can use CPE to filter the metrics. Some examples of CPE names:

cpe://redhat:enterprise_linux:5:server/firefox -- the Firefox browser package on Red Hat Enterprise Linux 5 server.
cpe://redhat:enterprise_linux:3 -- Red Hat Enterprise Linux 3
cpe://redhat/xpdf -- the xpdf package in any Red Hat product.
cpe://redhat:rhel_application_stack:1 -- Red Hat Application Stack version 1
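A consistent naming scheme also makes the mapping trivial to consume with a few lines of script. As a sketch (in Python; the file name and its exact layout here are assumptions, not the published format), filtering advisories by a CPE prefix might look like this:

    # Sketch: list the advisories that apply to a given platform by matching
    # on a CPE prefix.  The mapping file name and layout are assumed here to
    # be whitespace-separated "advisory CVE-list CPE-list" lines; adjust them
    # to match the real published data.
    def advisories_for(cpe_prefix, mapping_file="rhsa_cve_cpe.txt"):
        matches = []
        with open(mapping_file) as f:
            for line in f:
                fields = line.split()
                if len(fields) < 3:
                    continue
                advisory, cves, cpes = fields[0], fields[1], fields[2]
                if any(cpe.startswith(cpe_prefix) for cpe in cpes.split(",")):
                    matches.append((advisory, cves.split(",")))
        return matches

    # e.g. everything affecting Red Hat Enterprise Linux 5 server
    for advisory, cves in advisories_for("cpe://redhat:enterprise_linux:5:server"):
        print(advisory, " ".join(cves))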


For the past 12 months I've been keeping metrics on the types of issues that get reported to the private Apache Software Foundation security alert email address. Here's the summary for Jul 2006-Jun 2007 based on 154 reports:

User reports a security vulnerability (this includes things later found not to be vulnerabilities) -- 47 (30%)
User is confused because they visited a site "powered by Apache" (happens a lot when some phishing or spam points to a site that has been taken down and replaced with the default Apache httpd page) -- 39 (25%)
User asks a general product support question -- 38 (25%)
User asks a question about old security vulnerabilities -- 21 (14%)
User reports being compromised, although non-ASF software was at fault (for example through PHP, CGI, or other web applications) -- 9 (6%)

That last one is worth restating: in the last 12 months no one who contacted the ASF security team reported a compromise that was found to be caused by ASF software.


The National Vulnerability Database provides a public severity rating for all CVE named vulnerabilities, "Low", "Medium", and "High", which they generate automatically based on the CVSS score their analysts calculate for each issue. I've been interested for some time to see how well those map to the severity ratings that Red Hat gives to issues. We use the same ratings and methodology as Microsoft and others, assigning "Critical" to things that can be exploited remotely and automatically, down through "Important" and "Moderate" to "Low".

Given a thundery Sunday afternoon, I took the last 12 months of all possible vulnerabilities affecting Red Hat Enterprise Linux 4 (from 126 advisories across all components) from my metrics page and compared them to NVD using their provided XML data files. The result broke down like this:

Red Hat: 13% Critical, 24% Important, 39% Moderate, 24% Low
NVD:     30% High, 20% Moderate, 50% Low

So that looked okay on the surface; the distribution above could even suggest that all the issues Red Hat rated as Critical got mapped in NVD to High. But that's not actually the case, and when you look at the breakdown you get this result (in numbers of vulnerabilities):

NVD: High     -- Red Hat: 23 Critical, 24 Important, 35 Moderate,  8 Low
NVD: Moderate -- Red Hat:  9 Critical, 18 Important, 22 Moderate, 12 Low
NVD: Low      -- Red Hat:  7 Critical, 32 Important, 62 Moderate, 51 Low

That shows nearly half of the issues that NVD rated as High actually only affected Red Hat with Moderate or Low severity. Given that our policy is to fix Critical and Important issues the fastest (and we have a pretty impressive record for fixing critical issues), it's no wonder that recent vulnerability studies that use the NVD mapping when analysing Red Hat vulnerabilities have some significant data errors.
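If you want to reproduce this sort of comparison, the cross-tabulation itself is the easy part once you have the two severity mappings; the work is in extracting them (Red Hat severities from the metrics page, NVD severities from their XML feeds). A minimal sketch in Python, with made-up data standing in for the real parsing:

    # Sketch: cross-tabulate two CVE -> severity mappings.  The two dicts
    # below are placeholders; in practice they would be built from Red Hat's
    # published metrics and from NVD's XML data files.
    from collections import defaultdict

    redhat = {"CVE-2006-0001": "Critical", "CVE-2006-0002": "Moderate",
              "CVE-2006-0003": "Low"}                       # made-up data
    nvd    = {"CVE-2006-0001": "High",     "CVE-2006-0002": "High",
              "CVE-2006-0003": "Low"}                       # made-up data

    table = defaultdict(lambda: defaultdict(int))
    for cve, rh_severity in redhat.items():
        table[nvd.get(cve, "unrated")][rh_severity] += 1

    for nvd_severity in ("High", "Moderate", "Low", "unrated"):
        counts = table.get(nvd_severity)
        if counts:
            row = ", ".join("%d %s" % (n, sev) for sev, n in sorted(counts.items()))
            print("NVD %-8s -> Red Hat: %s" % (nvd_severity, row))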

I wasn't actually surprised that there are so many differences: my hypothesis is that many of the errors are due to the nature of how vulnerabilities affect open source software. Take for example the Apache HTTP server. Lots of companies ship Apache in their products, but they all ship different versions with different defaults on different operating systems for different architectures, compiled with different compilers using different compiler options. Many Apache vulnerabilities over the years have affected different platforms in significantly different ways. We've seen an Apache vulnerability that led to arbitrary code execution on older FreeBSD, caused a denial of service on Windows, but was unexploitable on Linux, for example. Yet it has a single CVE identifier.

So if you're using a version of the Apache web server you got with your Red Hat Enterprise Linux distribution then you need to rely on Red Hat to tell you how the issue affects the version they gave you -- in the same way you rely on them to give you an update to correct the issue.

I did also spot a few instances where the CVSS score for a given vulnerability was not correctly coded. CVSS version 2 was released last week, and once NVD is based on the new version I'll redo this analysis and spend more time submitting corrections for any obvious mistakes.

But in summary: for multi-vendor software, the severity rating for a given vulnerability may very well be different for each vendor's version. This is a level of detail that vulnerability databases such as NVD don't currently capture, so you need to be careful if you are relying on the accuracy of third-party severity ratings.


It sometimes seems like my team and I are pushing security updates every day, but actually a default installation of Enterprise Linux 4 AS was vulnerable to only 3 critical security issues in the first two years since release. But to get a picture of the risk you need to do more than count vulnerabilities. My full risk report was published today in Red Hat Magazine and reveals the state of security since the release of Red Hat Enterprise Linux 4, including metrics, key vulnerabilities, and the most common ways users were affected by security issues. It's all about transparency, highlighting the bad along with the good; and rather than just giving statistics and headlines that you could game with carefully selected initial conditions, we also make all our raw data available so we can be held accountable.


Red Hat Enterprise Linux 5 was released back in March 2007 so let's take a quick look back over the first three months of security updates to the Server distribution:

This data is interesting to get a feel for the risk of running EL5, but isn't really useful for comparisons with other versions or distributions -- for example previous versions didn't include Firefox in a default Server installation.


As part of our security measurement work, since March 2005 we've been tracking how the Red Hat Security Response Team first found out about each vulnerability we fix. This information is interesting as it can show us which relationships matter the most, and identify trends in vulnerability disclosure. So for the two years to March 2007 we get the following results (in percent):

[Chart: 20070410-relationships]

I've separated the bars into two sections; the red sections are where we get notice of a security issue in advance of it being public (where we are told about the issue 'under embargo'). The grey sections are where we are reacting to issues that are already public.

The number of issues coming through researchers and co-ordination centers may seem lower than expected; this is because in many cases the researcher will tell a group such as vendor-sec, or the upstream project directly, rather than each distributor separately.


Red Hat has shipped products with randomization, stack protection, and other security mechanisms turned on by default since 2003. Vista recently shipped with similar protections, and today I read an article about how the Microsoft Security Response Team were not treating Vista any differently when rating the severity of security issues. The Red Hat Security Response Team uses a similar guide for classification, and I thought it would be worth clarifying how we handle this very situation.

We rate the impact of individual vulnerabilities on the same four-point scale, designed to be an at-a-glance guide to how worried Red Hat is about each security issue. The scale takes into account the potential risk of a flaw based on a technical analysis of the exact flaw and its type, but not the current threat level. Therefore the rating given to an issue will not change if an exploit or worm is later released for a flaw, or if one is available before the release of a fix.

For the purpose of evaluating severities, our protection technologies fall into roughly three categories:

  1. Security innovations that completely block a particular type of security flaw. An example of this is Fortify Source. Given a particular vulnerability we can evaluate if the flaw would be caught at either compile time or run time and blocked. Because this is deterministic, we will adjust the security severity of an issue where we can prove it would not be exploitable.

  2. Security innovations that should block a particular security flaw from being remotely exploitable. Examples of this would be support for NX, randomisation, and stack canaries. Although these technologies can reduce the likelihood of exploiting certain types of vulnerabilities, we don't take them into account and don't downgrade the security severity.

  3. Security innovations that try to contain an exploit for a vulnerability. I'm thinking here of SELinux. An attacker who can exploit a flaw in any of the remotely accessible daemons protected by a default SELinux policy will find themselves tightly constrained by that policy. We do not take SELinux into account when setting the security severity.

I've not been keeping a list of vulnerabilities that are deterministically blocked, but I have a couple of examples I recall where we did alter the severity:


Enterprise Linux 5 has been consuming much of my time over the last few months: from work on the signing server and the new key policy, through testing of the new update mechanism, to continuing audits of outstanding vulnerabilities. Yesterday was release day, and we also pushed security updates for 12 packages in Enterprise Linux 5.

It may seem surprising that we release security updates for a product at the same time we release the product itself, but product development is frozen for some weeks before release to give time for testing by the various Quality Engineering teams as well as for release engineering work. During that time we want to minimise the number of changes that would invalidate the overall testing, so we instead prepare the changes as updates. Since the vulnerabilities being fixed are already public, we push the updates out as soon as we can; holding them back until some scheduled monthly date would just increase customer risk.

Security advisories for Enterprise Linux 5 are available from the usual places, on the web, sent to the enterprise-watch-list mailing list, and via OVAL definitions. Red Hat Network subscribers can also get customized mails for the subset of issues that affect the packages they actually have installed.

For me, what's going to be interesting to watch over the next few months is how specific vulnerabilities and exploits affect this platform. Red Hat Enterprise Linux 5 packages are compiled with both Fortify Source and stack-smashing protection, in addition to all the security features that were in version 4. I'll be reporting on what difference this makes through the year.


Hi! I'm Mark Cox. This blog gives my thoughts and opinions on my security work, open source, fedora, home automation, and other topics.