Between releases lots of changes are made to improve security, and I've not listed everything here; this is just a high-level overview of the changes I think are most interesting for mitigating security risk. We could go into much more detail, breaking out the number of daemons covered by the SELinux default policy, the number of binaries compiled PIE, and so on.
| |Fedora Core 1|Fedora Core 2|Fedora Core 3|Fedora Core 4|Fedora Core 5|Fedora Core 6|Fedora 7|Fedora 8|RHEL 3|RHEL 4|RHEL 5|
|---|---|---|---|---|---|---|---|---|---|---|---|
|Firewall by default|Y|Y|Y|Y|Y|Y|Y|Y|Y|Y|Y|
|Signed updates required by default|Y|Y|Y|Y|Y|Y|Y|Y|Y|Y|Y|
|NX emulation using segment limits by default|Y|Y|Y|Y|Y|Y|Y|Y|Y2|Y|Y|
|Support for Position Independent Executables (PIE)|Y|Y|Y|Y|Y|Y|Y|Y|Y2|Y|Y|
|Address Randomization (ASLR) for Stack/mmap by default3|Y|Y|Y|Y|Y|Y|Y|Y|Y2|Y|Y|
|ASLR for vDSO (if vDSO enabled)3|no vDSO|Y|Y|Y|Y|Y|Y|Y|no vDSO|Y|Y|
|Restricted access to kernel memory by default| |Y|Y|Y|Y|Y|Y|Y| |Y|Y|
|NX for supported processors/kernels by default| |Y1|Y|Y|Y|Y|Y|Y|Y2|Y|Y|
|Support for SELinux| |Y|Y|Y|Y|Y|Y|Y| |Y|Y|
|SELinux enabled with targeted policy by default| | |Y|Y|Y|Y|Y|Y| |Y|Y|
|glibc heap/memory checks by default| | |Y|Y|Y|Y|Y|Y| |Y|Y|
|Support for FORTIFY_SOURCE, used on selected packages| | |Y|Y|Y|Y|Y|Y| |Y|Y|
|All packages compiled using FORTIFY_SOURCE| | | |Y|Y|Y|Y|Y| | |Y|
|Support for ELF Data Hardening| | | |Y|Y|Y|Y|Y| |Y|Y|
|All packages compiled with stack smashing protection| | | | |Y|Y|Y|Y| | |Y|
|SELinux Executable Memory Protection| | | | | |Y|Y|Y| | |Y|
|glibc pointer encryption by default| | | | | |Y|Y|Y| | |Y|
|FORTIFY_SOURCE extensions including C++ coverage| | | | | | | |Y| | | |
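Some of the toolchain mitigations in the table can be checked directly on a binary. As a minimal sketch (assuming a Linux ELF file, using only the standard library): a PIE executable is emitted as ELF type ET_DYN rather than ET_EXEC, which can be read straight out of the ELF header.

```python
import struct

# ELF e_type values (from the ELF specification)
ET_EXEC = 2  # fixed-address executable: not PIE
ET_DYN = 3   # position-independent executable or shared object

def elf_type(header: bytes) -> int:
    """Return the e_type field of an ELF header.

    e_type is a 16-bit field at offset 16; byte 5 of e_ident
    gives the data encoding (1 = little-endian, 2 = big-endian).
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    endian = "<" if header[5] == 1 else ">"
    return struct.unpack_from(endian + "H", header, 16)[0]

def looks_pie(path: str) -> bool:
    """True if the file's ELF type is ET_DYN (as PIE binaries are)."""
    with open(path, "rb") as f:
        return elf_type(f.read(64)) == ET_DYN
```

This is the same information `readelf -h` reports in its `Type:` field. Note that shared libraries are also ET_DYN, so this is a necessary rather than sufficient signal that an executable was built PIE.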
The graph below shows the total number of security updates issued for Red Hat Enterprise Linux 5 Server up to and including the 5.1 release, broken down by severity. I've split it into two columns, one for the packages you'd get if you did a default install, and the other if you installed every single package (which is unlikely as it would involve a bit of manual effort to select every one). So, for a given installation, the number of packages and vulnerabilities will be somewhere between the two extremes.
So for all packages, from release up to and including 5.1, we shipped 94 updates to address 218 vulnerabilities. 7 advisories were rated critical, 36 were important, and the remaining 51 were moderate and low.
For a default install, from release up to and including 5.1, we shipped 60 updates to address 135 vulnerabilities. 7 advisories were rated critical, 26 were important, and the remaining 27 were moderate and low.
Red Hat Enterprise Linux 5 shipped with a number of security technologies designed to make it harder to exploit vulnerabilities and in some cases block exploits for certain flaw types completely. For the period of this study there were two flaws blocked that would otherwise have required critical updates:
This data is interesting to get a feel for the risk of running Enterprise Linux 5 Server, but isn't really useful for comparisons with other versions or distributions -- for example, a default install of Red Hat Enterprise Linux 4 AS did not include Firefox. You can reproduce the results I presented above by using our public security measurement data and tools, and run your own metrics for any given Red Hat product, package set, timescale, and severity.
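The severity breakdown above is a simple counting exercise once the advisory data is in hand. A sketch using a made-up record format (the real public data has its own format and fields, so treat this purely as an illustration of the counting):

```python
from collections import Counter

# Hypothetical advisory records: (advisory id, severity, number of CVEs fixed).
# These ids and figures are invented for illustration only.
advisories = [
    ("RHSA-2007:0001", "critical", 3),
    ("RHSA-2007:0002", "important", 2),
    ("RHSA-2007:0003", "moderate", 1),
    ("RHSA-2007:0004", "low", 1),
]

def breakdown(records):
    """Count advisories and vulnerabilities per severity level."""
    updates = Counter()
    vulns = Counter()
    for _advisory, severity, cve_count in records:
        updates[severity] += 1       # one advisory (update) per record
        vulns[severity] += cve_count  # an advisory may fix several CVEs
    return updates, vulns

updates, vulns = breakdown(advisories)
```

The same loop, pointed at a full advisory list filtered by date range and package set, yields the "N updates to address M vulnerabilities" figures quoted above.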
Common Platform Enumeration (CPE) is a naming scheme designed to combat these inconsistencies, and is part of the 'making security measurable' initiative from Mitre. From today we're supporting CPE in our Security Response Team metrics: we publish a mapping of Red Hat advisories to both CVE and CPE platforms (updated daily) and you can use CPE to filter the metrics. Some examples of CPE names:
- `cpe://redhat:enterprise_linux:5:server/firefox` -- the Firefox browser package on Red Hat Enterprise Linux 5 Server
- `cpe://redhat:enterprise_linux:3` -- Red Hat Enterprise Linux 3
- `cpe://redhat/xpdf` -- the xpdf package in any Red Hat product
- `cpe://redhat:rhel_application_stack:1` -- Red Hat Application Stack version 1
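Because the names are hierarchical, filtering by CPE is essentially prefix matching on the colon-separated platform components. A small sketch of that idea, assuming the `cpe://` forms shown above (this is illustrative, not the official CPE matching algorithm):

```python
def cpe_parts(name: str):
    """Split a cpe:// name into platform components and an optional package.

    e.g. "cpe://redhat:enterprise_linux:5:server/firefox"
         -> (["redhat", "enterprise_linux", "5", "server"], "firefox")
    """
    body = name[len("cpe://"):]
    platform, _, package = body.partition("/")
    return platform.split(":"), package or None

def cpe_matches(pattern: str, name: str) -> bool:
    """True if `name` falls under the (possibly less specific) `pattern`."""
    p_plat, p_pkg = cpe_parts(pattern)
    n_plat, n_pkg = cpe_parts(name)
    # The pattern's platform components must be a prefix of the name's.
    if n_plat[:len(p_plat)] != p_plat:
        return False
    # If the pattern names a package, it must match exactly.
    return p_pkg is None or p_pkg == n_pkg
```

For example, filtering advisories with the pattern `cpe://redhat/xpdf` would keep the xpdf entries from every Red Hat product, while `cpe://redhat:enterprise_linux:5` keeps everything for Enterprise Linux 5.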
- User reports a security vulnerability (this includes things later found not to be vulnerabilities)
- User is confused because they visited a site "powered by Apache" (happens a lot when some phishing or spam points to a site that is taken down and replaced with the default Apache httpd page)
- User asks a general product support question -- 38 (25%)
- User asks a question about old security vulnerabilities -- 21 (14%)
- User reports being compromised, although non-ASF software was at fault (for example through PHP, CGI, or other web applications)
That last one is worth restating: in the last 12 months no one who contacted the ASF security team reported a compromise that was found to be caused by ASF software.
The National Vulnerability Database provides a public severity rating, "Low", "Medium", or "High", for all CVE-named vulnerabilities, generated automatically from the CVSS score their analysts calculate for each issue. I've been interested for some time to see how well those map to the severity ratings that Red Hat gives to issues. We use the same ratings and methodology as Microsoft and others, ranging from "Critical" for flaws that can be exploited remotely and automatically, through "Important" and "Moderate", down to "Low".
Given a thundery Sunday afternoon, I took the last 12 months of all possible vulnerabilities affecting Red Hat Enterprise Linux 4 (from 126 advisories across all components) from my metrics page and compared them to NVD using their provided XML data files. The result broke down like this:
So that looked okay on the surface: the diagram above implies that all the issues Red Hat rated as Critical were mapped in NVD to High. But that's not actually the case; when you look at the breakdown you get this result (in numbers of vulnerabilities):
[Table: Red Hat severity breakdown of the vulnerabilities NVD rated "High"]
That shows nearly half of the issues that NVD rated as High actually only affected Red Hat with Moderate or Low severity. Given our policy is to fix the things that are Critical and Important the fastest (and we have a pretty impressive record for fixing critical issues), it's no wonder that recent vulnerability studies that use the NVD mapping when analysing Red Hat vulnerabilities have some significant data errors.
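The comparison itself is just a cross-tabulation of the two per-CVE ratings. A minimal sketch, with made-up ratings standing in for the real Red Hat and NVD data:

```python
from collections import Counter

# Hypothetical per-CVE severity ratings from the two sources
# (invented for illustration; not the actual 12-month figures).
redhat = {"CVE-2007-0001": "critical", "CVE-2007-0002": "moderate",
          "CVE-2007-0003": "low", "CVE-2007-0004": "important"}
nvd = {"CVE-2007-0001": "High", "CVE-2007-0002": "High",
       "CVE-2007-0003": "High", "CVE-2007-0004": "Medium"}

def crosstab(a, b):
    """Count (rating_a, rating_b) pairs over the CVEs both sources rated."""
    return Counter((a[cve], b[cve]) for cve in a.keys() & b.keys())

table = crosstab(redhat, nvd)
```

Summing the cells where NVD says "High" but the first source says "moderate" or "low" gives exactly the kind of mismatch figure discussed above.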
I wasn't actually surprised that there are so many differences: my hypothesis is that many of the errors are due to the nature of how vulnerabilities affect open source software. Take the Apache HTTP server, for example. Lots of companies ship Apache in their products, but they all ship different versions with different defaults, on different operating systems, for different architectures, compiled with different compilers using different compiler options. Many Apache vulnerabilities over the years have affected different platforms in significantly different ways: we've seen an Apache vulnerability that led to arbitrary code execution on older FreeBSD, caused a denial of service on Windows, but was unexploitable on Linux. Yet it has a single CVE identifier.
So if you're using a version of the Apache web server you got with your Red Hat Enterprise Linux distribution then you need to rely on Red Hat to tell you how the issue affects the version they gave you -- in the same way you rely on them to give you an update to correct the issue.
I did also spot a few instances where the CVSS score for a given vulnerability was not correctly coded. CVSS version 2 was released last week and once NVD is based on the new version I'll redo this analysis and spend more time submitting corrections to any obvious mistakes.
But in summary: for multi-vendor software the severity rating for a given vulnerability may very well be different for each vendor's version. This is a level of detail that vulnerability databases such as NVD don't currently capture, so you need to be careful if you are relying on the accuracy of third-party severity ratings.
I've separated the bars into two sections; the red sections are where we get notice of a security issue in advance of it being public (where we are told about the issue 'under embargo'). The grey sections are where we are reacting to issues that are already public.
The number of issues coming through researchers and co-ordination centers may seem lower than expected; this is because in many cases a researcher will tell a group such as vendor-sec, or the upstream project directly, rather than each distributor separately.
We rate the impact of individual vulnerabilities on the same four-point scale, designed to be an at-a-glance guide to how worried Red Hat is about each security issue. The scale takes into account the potential risk of a flaw based on a technical analysis of the exact flaw and its type, but not the current threat level. Therefore the rating given to an issue will not change if an exploit or worm is later released for the flaw, or if one is available before release of a fix.
For the purpose of evaluating severities, our protection technologies fall into roughly three categories:
I've not been keeping a list of the vulnerabilities that were deterministically blocked, but I do recall a couple of examples where we altered the severity: