mark :: blog
There have been quite a few stories over the last couple of weeks
about the NULL character certificate flaw, such as this
one from The Register.
The stories center around how open source software such as Firefox was
able to produce updates to correct this issue just a few days after
the Blackhat conference, while Microsoft still hasn't fixed it and is
"investigating a possible vulnerability in Windows presented during
Blackhat". But the actual timeline is missing from these stories.
The NULL character certificate flaw (CVE-2009-2408) was actually
disclosed by two researchers working independently who both happened
to present the work at the same conference, Blackhat, in July this
year. Dan Kaminsky mentioned it as part of a series of PKI
flaws he disclosed. Moxie Marlinspike had found the same flaw, but was
able to demonstrate it in practice by managing to get a
trusted Certificate Authority to sign such a malicious certificate.
The flaw was no Blackhat surprise; Dan Kaminsky actually found this
issue many months ago and responsibly reported the issues to vendors
including Red Hat, Microsoft, and Mozilla. We found out about this
issue on 25th February 2009 and worked with Dan and some of the
upstream projects on these issues in advance, so we had plenty of time
to prepare updates and this is why we were able to have them ready to
release just after the disclosure.
We keep all our friends and family contacts in a single text file in vCard
format. We sync this file to our phones (mobile and house DECT phones) and home
automation system (for caller ID and phone book). I also print out a copy to
take when travelling. Except I rarely print out an update as I've failed to
find any useful program to pretty print the contacts. Previously I used a quick
hack script in perl to convert the vCard entries to HTML, but it wasn't clever
enough to handle page breaks and needed all the margins and page sizes to be
set manually. I like to print it to fit in my paper planner, a Compact size
Franklin Covey planner system.
I've been using Scribus for a few months,
mostly for our
wedding invites and
stationery, and spotted that Scribus has a Python API. So a few hours later
out popped a Python script you can use to pretty print a vCard vcf file,
handling page breaks, images, and large margins to skip the hole punches.
Here is an extract from a sample vCard file:
ADR;TYPE=work:;;10 Downing Street;London;SW1A 2AA
TEL;TYPE=fax:+44 2079 250918
You'll need a few things:
- A sample vCard file, or your own one
- The vcf2scribus.py script (version 1.0)
- A recent version of Scribus; 1.3.5 works, but earlier ones will not
- The python vobject library, if you haven't already got it installed
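For reference, here is the shape of the data the script has to walk over. This is a minimal stdlib-only sketch of one vCard content line; the real script leans on vobject for the parsing, which also handles folded lines and escaping that this toy ignores:

```python
# Split a vCard content line like "ADR;TYPE=work:;;10 Downing Street;..."
# into its name, parameters, and value. Stdlib-only illustration; the
# actual vcf2scribus.py uses the vobject library instead.
def parse_vcard_line(line):
    name_part, _, value = line.partition(":")   # value starts after first ':'
    name, *params = name_part.split(";")        # "TEL;TYPE=fax" -> name + params
    return name, dict(p.split("=", 1) for p in params), value

name, params, value = parse_vcard_line("TEL;TYPE=fax:+44 2079 250918")
```

vobject does the same job properly, yielding component objects with attributes like `.fn` and `.adr_list` instead of raw tuples.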
Use the "Script"
"Execute Script" option, find and select vcf2scribus.py and
hopefully you'll end up with something like this:
You can then save it as a pdf or print it direct.
The script is a bit of a hack and has hard-coded page sizes, fonts, margins,
vcard sections used, and so on. But I figure it might save someone a couple of
hours and only needs a bit of modification to suit. It would be fairly easy to
extend the script to use the Scribus API to let folks select the vcard file,
page sizes, fonts, and things. Bonus points if you fix it to figure out the final sizes of the
images and right align them. This is my second ever python program so no
sniggering at the code!
I've written about a lot of technical things in my blog over the years, but not
so much that is personal. However nothing is more important to me than a
special date last month, 8th August 2009, when I got married to Tracy.
I met Tracy via ICQ over nine years ago and we've been partners for eight, so it
was about time we got around to getting married. We both wanted a castle
wedding, and as Monty Python fans we couldn't resist Doune Castle, a restored
14th century stronghold not far from Glasgow which was used in "Monty Python
and the Holy Grail". So of course we had a few Python moments, with coconuts,
the French guards door scene, and the killer rabbit of Caerbannog. Run away!
Our cake topper was commissioned from an artist in Brazil we found online.
All the artwork including invitations, day plans, and place cards was drawn and
created by ourselves using Inkscape and Scribus on Fedora, I'm going to blog
about that and share the files when I have some more time next month.
The only issues we had on the day were with the cake, part of which the supplier
(Marks and Spencers) lost and was unable to replace with the right cake, and a
poor substitute DJ (our chosen DJ ended up in hospital just before the event).
Everything else went amazingly well and to plan, and despite the poor August
weather in Scotland this year we managed great weather with only a light shower
as guests were leaving.
Red Hat Enterprise Linux 5.4 was released today, just over 7 months since the
release of 5.3 in January 2009. So let's use this opportunity to take a quick
look back over the vulnerabilities and security updates we've made in that time,
specifically for Red Hat Enterprise Linux 5 Server.
The chart below illustrates the total number of security updates issued for Red
Hat Enterprise Linux 5 Server as if you installed 5.3, up to and including the
5.4 release, broken down by severity. I've split it into two columns, one for
the packages you'd get if you did a default install, and the other if you
installed every single package (which is unlikely as it would involve a bit of
manual effort to select every one). For a given installation, the number of
package updates and vulnerabilities that affected you will depend on exactly what you
have installed or removed.
So for a default install, from release of 5.3 up to and including 5.4, we shipped 51
advisories to address 166 vulnerabilities. 8 advisories were rated critical, 18
were important, and the remaining 25 were moderate and low.
Or, for all packages, from release of 5.3 to and including 5.4, we shipped 78 advisories
to address 251 vulnerabilities. 9 advisories were rated critical, 28 were
important, and the remaining 41 were moderate and low.
The 9 critical advisories were for just 3 different packages. In all the
cases below, given the nature of the flaws, ExecShield protections in RHEL5
should make exploiting these memory flaws harder.
- Seven updates to Firefox (February, March 4th, March 27th, April 21st, April 27th, June, July)
where a malicious web site could potentially run arbitrary code as the user
- An update to kdelibs
where a malicious web site could potentially run arbitrary code as the
user running the Konqueror browser. kdelibs is not a default installation package.
- An update to the NSS library
where a service could present a malicious SSL certificate causing
a heap overflow which could potentially run arbitrary code as the user running
a browser such as Firefox.
Updates to correct all of these critical vulnerabilities were available via
Red Hat Network either the same day, or up to one calendar day after the issues
were public. In fact, for Red Hat Enterprise Linux 5 since release and to date,
every critical vulnerability has had an update available to address it from
the Red Hat Network either the same day or the next calendar day after the
issue was public.
Other significant vulnerabilities
Although not of critical severity, also of interest during this period were several NULL
pointer dereference kernel issues. NULL pointer dereference flaws in the Linux
kernel can often be easily abused by a local unprivileged user to gain root
privileges through the mapping of low memory pages and crafting them to contain
valid malicious instructions:
was public on August 24th and a working privilege escalation exploit was
published about a week later. This issue was addressed for Red Hat Enterprise
Linux 5 by
a kernel update on
was public on August 13th and a working privilege escalation exploit was
published the same day.
This issue was addressed for Red Hat Enterprise Linux 5 by
a kernel update on
was public on July 16th along with a working privilege escalation exploit. This issue
affected only beta versions of the Red Hat Enterprise Linux 5.4 kernel and
it was addressed prior to the release of Red Hat Enterprise Linux 5.4.
Red Hat Enterprise Linux since 5.2 has contained backported patches from the
upstream Linux kernel to add the ability to restrict unprivileged mapping of low
memory, designed to mitigate NULL pointer dereference flaws. However it was found that
this protection was not sufficient, as a system with SELinux enabled is more
permissive in allowing local users in the unconfined_t domain to map low memory
areas even if the mmap_min_addr restriction is enabled. This is being tracked
and will be addressed in a future kernel update.
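If you want to check whether the restriction is enabled on your own system, the sysctl is readable from /proc. A small sketch (the helper name is mine, and a real check would also need to consider the SELinux interaction described above):

```python
# Hypothetical helper: read the vm.mmap_min_addr sysctl, which controls
# the lowest address an unprivileged process may mmap. A value of 0
# means low (NULL-page) mappings are allowed, weakening this mitigation.
def read_mmap_min_addr(path="/proc/sys/vm/mmap_min_addr"):
    try:
        with open(path) as f:
            return int(f.read().strip())
    except OSError:
        return None  # sysctl not available on this system
```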
Red Hat Enterprise Linux 5 shipped with a number of security technologies
designed to make it harder to exploit vulnerabilities and in some cases block
exploits for certain flaw types completely. From 5.3 to 5.4 there
were three flaws blocked that would otherwise have required critical updates:
a stack buffer overflow flaw in dhclient.
FORTIFY_SOURCE protection detects the overflow and causes dhclient to exit with
no security consequence. No security update for users of Red Hat Enterprise
Linux 5 was needed.
a buffer overflow flaw in NTP caught by FORTIFY_SOURCE.
We issued an
update as a remote attacker could still cause a denial of service.
An uninitialized pointer free in krb5. glibc provides a hardened malloc/free
implementation which mitigates the exploitability of this flaw; however, we
issued an update as a remote attacker could still cause a denial of service.
To compare these statistics with previous update releases we need to take into
account that the time between each update is different. So looking at a default
installation and calculating the number of advisories per month gives the results
illustrated by the following chart:
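The normalisation itself is just simple division; as a sketch, using the roughly seven-month gap between 5.3 and 5.4 (an approximation, not the exact release interval):

```python
# Advisories per month between two update releases. The month figure
# is an illustrative approximation of the 5.3 -> 5.4 interval.
def advisories_per_month(advisories, months):
    return round(advisories / months, 1)

default_rate = advisories_per_month(51, 7)  # default install: ~7.3/month
all_pkgs_rate = advisories_per_month(78, 7)  # every package: ~11.1/month
```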
This data is interesting to get a feel for the risk of running Enterprise Linux
5 Server, but isn't really useful for comparisons with other versions,
distributions, or operating systems -- for example, a default install of Red Hat
Enterprise Linux 4AS did not include Firefox, but 5 Server does. You can use
our public security
measurement data and tools, and run your own custom metrics for any given
Red Hat product, package set, timescales, and severity range of interest.
See also the previous reports covering 5.2 to 5.3, 5.1 to 5.2, and 5.0 to 5.1.
In his Black Hat paper and presentation yesterday, Dan Kaminsky highlighted
some more issues he has found relating to SSL hash collisions and other PKI
flaws. The presentation is online now, so I'm sure the PDF paper will follow
shortly. Some of these issues affect open source software, and some parts have
already been addressed, so here is a quick summary of the issues, including
CVE names.
MD2 signature verification
The first issue is that many web browsers still accept certificates
with MD2 hash signatures, even though MD2 is no longer considered a
cryptographically strong algorithm. This could make it easier for an
attacker to create a malicious certificate that would be treated as
trusted by a browser. It turns out that there are not many valid MD2 hash certificates
around any more, and the main one that does exist is at the trusted
root level anyway (and there is actually no need for a crypto library
to verify the self-signature on a trusted root). So most vendors have
chosen to address this issue by disabling MD2 completely for
certificate verification. This is allocated CVE name CVE-2009-2409 (a
single name for all affected products).
OpenSSL. For upstream OpenSSL we have disabled MD2 support
completely. This was done in two stages; the first was a patch in June 2009 that
removed the redundant check of a trusted root self-signed certificate. Then in
July, MD2 was totally disabled. So this issue does not affect OpenSSL 1.0.0
beta 3 or later. Although there has not yet been an upstream release of 0.9.8
containing this fix, a future OpenSSL 0.9.8 (after 0.9.8k) will disable MD2,
probably in a few weeks.
GnuTLS. The upstream GnuTLS library has for some time been meant to disable
MD2 support, although due to a broken patch it wasn't actually disabled
correctly until January 2009. So this issue does not affect GnuTLS versions
2.6.4 and above, or GnuTLS versions 2.7.4 and above.
NSS (and hence Firefox). The upstream NSS library since version 3.12.3
(April 2009) has disabled MD2 and MD4 by default (although legacy applications
could turn it back on using an environment variable
"NSS_ALLOW_WEAK_SIGNATURE_ALG" if they need to). Mozilla Firefox since version
3.5 has used this NSS version, so MD2
is disabled. I suspect this issue will get addressed in a future Firefox 3.0
update too if they rebase to the new NSS.
There is no immediate need to treat this as a critical security
issue, as in order for it to be exploited an attacker still has to create an MD2
collision with this root certificate; something that is as of today still a
significant amount of effort.
My CVSS v2 base score for CVE-2009-2409 would be 2.6 (AV:N/AC:H/Au:N/C:N/I:P/A:N)
Differences in Common Name handling
This issue is about how Common Names are checked for validity by
applications. For example, if a server presents a certificate with two CN
entries, how does the application validate them? Does it use the first one, the
last one, or all of them?
OpenSSL. OpenSSL provides an API that allows applications to
check CN names any way they want. It turns out that, without sound guidance,
applications have tended to do things differently. A summary of the differences
is in this Red Hat bugzilla comment. But as a CA should validate all CN names in
a certificate being signed, these are really just bugs and do not have a
security consequence.
Leading 0's in Common Name handling
The second issue is all about inconsistencies in the interpretation of subject
x509 names in certificates. Specifically "issue 2b, subattack 1" is where a
malicious certificate can contain leading 0's in the OID. The idea is that an
attacker could add in some OID into a certificate that, when handled by the
Certificate Authority, would appear to be some extension and ignored, but when
handled by OpenSSL would appear to be the Common Name OID. So the attacker
would present the certificate to a client application and it might think that
the OID is actually a Common Name, and accept the certificate where it otherwise should not.
OpenSSL. This is not a security issue for OpenSSL. Steve
Henson explains: "OpenSSL does tolerate leading 0x80 but it does
_not_ recognize this as commonName because the NID code checks for a precise
match with the encoding. Attempts to print this out will never show commonName
nor will attempts to look up using NID_commonName". However this will be
addressed as a bug fix in the future.
NSS (and hence Firefox). NSS is noted in the paper as having a
similar issue, but again it's not fooled into treating the OID as a Common Name
so this is not a security issue (and therefore I didn't check if this is already
fixed in the new upstream NSS).
OID overflow in Common Name handling
"issue 2b, subattack 2" is where a malicious certificate can have a very large
integer in the OID. The idea is that an attacker could add in some OID into a
certificate that, when handled by the CA, would appear to be some extension and
ignored, but when handled by OpenSSL would overflow and appear to be the Common
Name OID. So the attacker would present the certificate to a client application
using OpenSSL and it might think that the OID is actually a Common Name, and
accept the certificate where it otherwise should not.
OpenSSL. This issue was actually fixed upstream in September 2006
in OpenSSL 0.9.8d by switching to using
the bignum library for
handling the OID. Even for older versions though it's really not a security
issue for the same reason as given earlier: the OpenSSL NID code
checks for a precise match with the encoding. So attempts to print this out
will never show it being a Common Name, nor will attempts to look it up as a
Common Name succeed.
NULL bytes in Common Name handling
"issue 2, attack 2c" is regarding NULL terminators in a Common Name field.
If an attacker is able to get a
carefully-crafted certificate signed by a Certificate Authority trusted by
a browser, the attacker could use the certificate during a man-in-the-middle
attack and potentially confuse the browser into accepting it by mistake.
My CVSS v2 base score for CVE-2009-2408 would be 4.3 (AV:N/AC:M/Au:N/C:N/I:P/A:N)
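To make the flaw concrete, here is a sketch of the truncation at work. The hostnames are made up, and the two functions stand in for a NUL-truncating client and one that honours the full length-prefixed value:

```python
# Why a NUL byte in a Common Name is dangerous: ASN.1 strings carry an
# explicit length, but a client that treats the CN like a C string stops
# at the first NUL byte. Hostnames below are purely illustrative.
cn = b"www.bank.example\x00.attacker.example"

def c_string_view(cn: bytes) -> str:
    return cn.split(b"\x00", 1)[0].decode()  # flawed, C-string style

def full_view(cn: bytes) -> str:
    return cn.decode()  # honours the full length-prefixed value
```

The flawed view matches the victim hostname exactly, so a man-in-the-middle presenting this certificate would pass the hostname check, while the full value is clearly a different name.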
OpenSSL 'compat mode' subject name injection
"issue 2d" is how the OpenSSL command line utility will output unescaped
subject X509 lines to standard output. So if some utility runs the openssl
application from the command line and parses the text output, and if an attacker
can craft a malicious certificate in such a way they fool a CA into signing it,
they could present it to the utility and possibly fool that utility into
thinking fields were different to what they actually are, perhaps allowing the
certificate to be accepted as legitimate.
This attack requires that some utility will parse the output of OpenSSL
command line using the default 'compat' mode. Applications should never do
this. Upstream OpenSSL are unlikely to address this issue directly, although in
the future the default output mode could perhaps be changed to something other
than 'compat', and it's likely a documentation update will remind users that
parsing the output of such an openssl command is not the right way to do it.
OpenSSL ASN1 printing crash
Also mentioned in the paper is a flaw in the filtering modes when a two or four
byte wide character set is asked to be filtered.
My CVSS v2 base score for CVE-2009-0590 would be 2.6 (AV:N/AC:H/Au:N/C:N/I:N/A:P)
A few years ago I automated the treadmill in our guest room as a way of motivating
Tracy and I to keep fit. The treadmill sent us emails when we used it, and the
touch panels around the house showed how much we'd used it in the last week and
month. This worked really well for some time; until the point we realised if we both
agreed to stop using it on the same day then there would be no competition, no winner, no loser,
and neither of us would feel bad.
Last winter the Red Hat video team came to my house to record some footage for both
internal and external use. On one of the internal videos they look at my home
automation system, point the camera at a wall tablet, and figure out that I'd not
used my treadmill in over two years. So there were really two options: (1) remove the
year from the display so it would never look like we were slacking for more than
a year, or (2) find a way to get motivated again.
Recently we both started using Twitter, so it seemed like a natural progression
to hook the treadmill to Twitter and have it publicly embarrass us for slacking.
So the treadmill now has its own Twitter page.
We called it 'twedmill' ('tweadmill' perhaps is more correct, but that just
sounds like a factory that weaves tweed jackets).
Here is how it works:
The treadmill itself is pretty standard; it's from Trimline and has a fancy
computer. When I looked inside and saw a PIC I was tempted to interface direct
to the computer, but didn't really have the time to get around to that.
Although the treadmill does things like have a variable incline and measure
heart rate, all I really care about is making sure we're using it, for how long,
and how far we get.
Under a cover in the base are the PWM controllers, motors, and the belt
drive to the treadmill deck. The treadmill itself measures the belt speed
by having a single magnet on the wheel and a small sensor next to it, one
revolution giving one pulse. So to keep things simple I just hot-glued a
spare reed switch I had around so the same magnet would trigger it. The reed
switch happily copes with the treadmill even on top speed, so no real need
for anything more fancy.
I didn't have anything that could accurately measure the diameter of the roller,
so by counting pulses at various speeds and comparing to the onboard
display it worked out at 8122 pulses (revolutions) per (UK) mile: that's
about 198mm of travel per pulse, making the diameter of the
roller about 63mm.
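The arithmetic checks out; as a quick sketch:

```python
import math

# Calibration figures from counting pulses against the onboard display.
MM_PER_MILE = 1_609_344       # one UK (statute) mile in millimetres
PULSES_PER_MILE = 8122        # one reed-switch pulse per roller revolution

mm_per_pulse = MM_PER_MILE / PULSES_PER_MILE   # belt travel per pulse, ~198mm
roller_diameter = mm_per_pulse / math.pi       # circumference / pi, ~63mm
```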
I use a 1-wire network in the house to measure temperatures, watch the doorbell,
and control the central heating system, so I wanted to use the same system
to deal with the treadmill. So the reed switch connects to a DS2423
counter (unfortunately it seems the DS2423 is now discontinued). The DS2423 was
only available in a surface-mount package, so I found some converters on ebay
to save having to design a PCB just for three components. The
DS2423 connects into a 1-wire hub in node0, then to a 1-wire USB adapter on our main
server, currently running Fedora 10.
The software used is based on the source code from 'digitemp'
as it includes
cnt1d.c to read the counter values. Every ten
seconds the jabber treadmill bot switches to the right network segment
on the 1-wire hub then polls the counter of the DS2423 to see
if the treadmill has moved. Once the treadmill has stopped moving for
a while the software stores the total distance travelled and time in
a database, sends an email, and uses the perl Net::Twitter module to
post a message to Twitter. (It can also draw a graph showing speed over
time, but that turned out to be not very interesting.)
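The poll-and-publish logic can be sketched as a pure function over successive counter readings. The function name, the idle threshold, and the per-pulse constant here are illustrative rather than lifted from the actual bot (which polls a DS2423 every ten seconds and is written around digitemp and perl):

```python
PULSE_MM = 198  # approximate belt travel per counter pulse (assumed calibration)

def sessions_from_readings(readings, idle_polls=3):
    """Given successive counter readings, one per poll, return the distance
    in metres of each completed treadmill session. A session ends after
    `idle_polls` consecutive polls with no movement."""
    sessions, start, idle = [], None, 0
    prev = readings[0]
    for now in readings[1:]:
        if now != prev:                # belt moved since last poll
            if start is None:
                start = prev           # session begins
            idle = 0
        elif start is not None:
            idle += 1
            if idle >= idle_polls:     # stopped long enough: session over
                sessions.append((now - start) * PULSE_MM / 1000)
                start, idle = None, 0
        prev = now
    return sessions
```

In the real system the "publish" step on session end is the database write, the email, and the Net::Twitter post.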
For the future I'd quite like to hook directly into the
treadmill computer, perhaps giving two way control of the treadmill programs, as
well as recording the incline and heart rate. Another idea has been to use the
current treadmill speed to decide which music video to play next based on bpm (the tv is
connected to an old XBOX running XBMC so could easily be remotely controlled to
switch videos). Or perhaps link it to google streets for a virtual jog through
some random town. Finally, you currently have to select who is using the
treadmill before (or very quickly after) a session via the touch panels in the
house, which seems like a good excuse to play with some RFID in our shoes, perhaps
also using that to select a playlist of music videos per person.
Tracy and I got to see a preview showing of the new Star Trek movie last night.
No spoilers here. However they were being really over the top with security
theatre given the movie isn't out here for another week. Firstly they had
employed a large number of suited security guys, most of whom looked like
they'd be more comfortable on the door of a nightclub. They made sure to
confiscate one cell phone from everyone on the way in, and more thoroughly
search those without one. Once inside, through the entire movie, the hired
goons stood at the front and side scanning us all, and another guy with a video
camera filmed a scan of the audience every five or ten minutes.
It was a little distracting as whenever I see security theatre I can't help
thinking of ways it could fail. For example anyone entering the cinema with two
cell phones would evade the more extensive searches after giving up their first
phone. Bruce Schneier calls this a security
mindset. However, it was definitely a movie worth trading some freedoms to
watch in advance, but I can't help wondering how long it will be before they try
to do this at every movie.
A few years ago I received a Mastercard with a CCV of 000. The CCV is the last
3 digits printed on the signature strip on the back, asked for by merchants to
verify you actually hold the card, as those digits are not encoded on the magstripe
(although anyone who has handled the card, or has hacked any of the online
merchants you've used it with, also knows it). It's sometimes called CVV,
CVV2, or CVC2 too.
Having a CCV of 000 seems nice and easy to remember, but actually was a bit
of a curse. To start with, companies would sometimes not believe that 000 is
your real CCV when you tell them by phone. But usually after a few attempts you
can convince them to at least try it, and then all is well.
The real problems came when using the card online as several merchants
refused to accept the card. Any programmer reading this will have guessed the
ways this could fail already. Rather than web applications checking for a
CCV of three digits, I imagine some of them stored the field as an integer and
had "0" overloaded as "didn't enter a CCV".
Scan Computers was the first casualty; my first order with them using the
card appeared to get accepted, but then got stuck and the order stalled. That
took a phone call to sort out, but at least the guy I spoke to by phone
recognised and understood the problem and I only ended up getting my stuff a
day late. It's worked okay with them since, I guess they fixed it.
Some other merchants I've been less lucky with. Some refused to accept the
CCV at the time I entered it, but at least with those you know immediately and
can use a different card. Other merchants accepted the CCV at the order time
but then later rejected the order usually without giving a reason; probably when
they did some batch processing with the stored CCV.
So you'd think there would be a lot of people with this problem: if the CCV
is generated by the issuer using some hash then it ought to be 1/1000th of the
card holding population. Perhaps some issuers deliberately avoid giving out a
000 security code, or perhaps I was just unlucky in my choice of merchants.
The experiment has sadly come to an end now as the card expired and has been
replaced by one with a different CCV. I'm hoping one day to get 999.
I've recently started using a Fedora 10 Live USB image for emergencies: with
its persistent overlay and encrypted home directory support it's just perfect
if we're out and Tracy has her small Asus eee 901 with her. Since the Asus has
an SD card slot, I bought a cheap 2GB MicroSD card and have been using that
instead of a USB stick as the Asus is happy to boot from it and it's easier to
fit inside my wallet.
I also recently bought a new phone, an HTC Touch HD, to replace my aging Mio
A701. Although it has a Windows OS, it doesn't force you to use ActiveSync and
you can set it to instead appear as a card reader when plugged in via USB.
It got me wondering if the Asus would boot from the phone. It does:
The phone comes with an 8GB microSD card so there's plenty of room for the Fedora image
without disturbing the other phone software, pictures, and so on. I just used
the Live image creator to write the image to the microSD card, made it bootable,
put it into the phone, and set the phone to card reader mode. Now every time the
phone is plugged into a PC it appears to be just a bootable USB stick with a
Fedora live image installed. All I need now is a small retractable USB cable
and then there is no need to carry around the separate MicroSD card (or a USB
stick).
From time to time I publish metrics on vulnerabilities that affect
Red Hat Enterprise Linux. One of the more interesting metrics looks at
how far in advance we know about the vulnerabilities we fix, and from where
we get that information. This post is abstracted from the upcoming "4 years of Enterprise Linux 4" report.
For every fixed vulnerability across every package and every
severity in Enterprise Linux 4 AS in the first 4 years of its life, we
determined if the flaw was something we knew about a day or more in advance of
it being publicly disclosed, and how we found out about the flaw.
For vulnerabilities which are already public when we first hear about them,
we still track the source, as it's a useful internal indicator of where we
should focus our efforts.
So from this data, Red Hat knew about 51% of the security vulnerabilities that
we fixed at least a day in advance of them being publicly disclosed. For those
issues, the average notice was 21 calendar days, although the median
was much lower, with half the private issues having advance notice of 9
days or less.
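The gap between the two figures is the usual long-tail effect: a handful of issues with very long embargoes drag the average well above the median. The day counts below are made up purely to illustrate the shape of the distribution:

```python
import statistics

# Illustrative (made-up) advance-notice periods in calendar days.
notice_days = [2, 3, 5, 7, 9, 9, 14, 30, 40, 91]

mean = statistics.mean(notice_days)      # pulled upward by the long embargoes
median = statistics.median(notice_days)  # half the issues at or below this
```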
Hi! I'm Mark Cox. This blog gives my
thoughts and opinions on my security
work, open source, fedora, home automation,
and other topics.