Heartbleed and other security holes in the OpenSSL cryptographic library have predictably reignited the perennial debate over whether open source software is more, or less, secure than closed, proprietary code.
Microsoft, which features in security bulletins more often than not thanks to the popularity of its products, was not affected by Heartbleed. Redmond's proprietary implementation of Secure Sockets Layer/Transport Layer Security (SSL/TLS) meant it was simply not exposed to the vulnerability.
Does that mean closed source is more secure?
First, even though OpenSSL rates as a popular and critical piece of internet infrastructure, it is also a special case that involves complex cryptography code worked on by a few volunteers (who are now better resourced and funded after the Heartbleed heartstopper).
Second, whether or not the source code is published is largely irrelevant when it comes to security. The proof is in the pudding: if open source were insecure by definition, there would be little point in developing things like OpenSSL or security-oriented operating systems like OpenBSD.
Likewise, if closed source were that bad, nobody would be handing over licensing fees to Microsoft et al when better alternatives were available for free.
It’s fair to say that nobody is particularly worried that going open source equates to less security, even for mission-critical software.
One good example here is the NICTA-developed seL4 microkernel, used in the high-assurance real-time operating system eChronos, which is finding its way into unmanned aerial vehicles (UAVs, a.k.a. drones) under the auspices of the US military's Defense Advanced Research Projects Agency (DARPA).
US backers General Dynamics and NICTA intend to make the "world's first operating-system kernel with an end-to-end proof of implementation correctness and security enforcement" open source in just over a month's time. The main reason? To boost its popularity.
But as Gernot Heiser of NICTA’s Trustworthy Systems put it when asked if there are any security benefits from offering seL4 to the open source community: “Nope, but those are vague at best anyway. Equally, there are certainly no security benefits from keeping it secret – it’s shocking how many in the industry believe in security by obscurity.”
Nevertheless, there are some interesting cases when irrespective of open or closed source, security can be compromised by the tools used to create software.
Anyone who has coded in C or C++ knows how easy it is to screw up and create holes big enough to drive a horse and carriage through.
Even adding sanity checks won’t always save you, thanks to a relatively new class of bugs brought on by so-called optimisation-unstable code.
This is an esoteric issue that MIT researchers have worked on in recent years and it popped up again at a recent USENIX conference in the United States.
It is caused by compilers for languages such as C and C++. Programmers make mistakes: they write redundant statements, or code whose behaviour the language standard leaves undefined. Compiler writers have therefore built in optimisation passes that delete such code. Sometimes what is removed really is dead; sometimes it is, for instance, a bounds check expressed in terms of undefined behaviour.
This makes total sense, as it helps programs run faster, at least until the compiler lops off code that was doing real work. The risk is that vital security checks are silently removed, leaving exploitable holes.
With millions of lines of code to consider, and a program that seems to work just fine after compilation, this is a difficult problem to spot, let alone solve.
Those bugs were caused by open source and closed source compilers aggressively optimising away “dead code”, leaving the systems in question vulnerable to attack if the removed statements are used for security checks.
It’s a widespread problem too: the researchers found 32 bugs in the Linux kernel alone, and almost 3,500 of the Debian Wheezy application packages contained at least one instance of unstable code.
This is a very subtle class of bugs, and it’s not the easiest to understand. Some of it is brought on by the designs of languages like C and C++, which essentially trust programmers to know what they’re doing. That's not always the case!
In other cases, however, the researchers suggest “compiler writers should rethink optimisation strategies” so that small, fast code is not generated at the expense of security.
With embedded, networked systems (the internet of things, if you must) on the rise, processing all manner of critical information and processes, we could be in for some interesting times.