One reason IT security is so difficult to get right is that we’re dealing with multiple interconnected systems, software, and hardware that interact with each other in sometimes unexpected ways.
We rarely know the full story on what is happening within a specific IT system. When even experts have to painstakingly trawl through code to figure out what it actually does and how it handles hardware - as opposed to what the developer thought it would do - you realise that there’s an awful lot taken on trust in the IT business.
One unfortunate consequence of this is that when you need to add another layer of security to a system, you end up increasing its complexity. And complexity makes everything harder to understand and less secure.
Take Transport Layer Security (TLS) authentication and encryption: on paper it’s a great idea. Only a totally careless person sends data in plain text over the internet these days, so it’s a good thing to add TLS for security, right?
If TLS works as intended, yes, you'll be more secure. But in practice, incidents over the last few years have shown that anyone deploying TLS needs to sleep with one eye open.
That can be down to bugs, or to inattention and laziness on the part of vendors: a recent audit of security products found that most shipped with some pretty serious, exploitable TLS interception flaws.
Let’s be honest: TLS and associated bits and pieces require plenty of expertise and understanding to get right (which is why you see so many mistakes).
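To make that concrete, here is a minimal sketch of client-side TLS done carefully with Python's standard library, alongside (in comments) the kind of shortcut that causes so many of those mistakes. The hostname and function names are illustrative, not from any particular product.

```python
import socket
import ssl

def make_verified_context() -> ssl.SSLContext:
    # The safe default: certificate verification and hostname
    # checking are both switched on.
    return ssl.create_default_context()

def open_tls(hostname: str, port: int = 443) -> str:
    """Open a verified TLS connection and report the negotiated version."""
    context = make_verified_context()
    # A depressingly common mistake is to silence certificate errors
    # instead, which quietly removes most of the protection TLS offers:
    #   context.check_hostname = False
    #   context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.version()  # e.g. "TLSv1.3"
```

The point of `create_default_context()` is precisely that the expertise is baked in; trouble tends to start when code overrides those defaults to make an error go away.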
Rather than learning it all yourself, outsourcing the management of TLS to specialists with that deep level of expertise and understanding can be a good idea, but even that can go wrong.
And when it does, you need a vendor that’s not only quick to react, but straight with the facts so you can assess what damage has been done.
The recent Cloudbleed scare discovered by Google’s Project Zero security squad is a good example: a code flaw at content delivery network and reverse proxy provider Cloudflare meant that, in some cases, the contents of uninitialised system memory were returned to TLS clients.
That data could have contained sensitive information such as passwords, and, to make matters worse, search engines cached some of that information.
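Cloudflare's own post-mortem attributed the leak to an end-of-buffer check in generated parser code that tested for equality with the buffer's end rather than for reaching or passing it. The toy Python simulation below illustrates that bug class only; the real bug was in generated C code, and all names and data here are hypothetical.

```python
def parse_leaky(memory: bytes, end: int) -> bytes:
    """Copy the bytes that belong to this request, up to `end` --
    but with a broken equality-based end-of-buffer check."""
    out = []
    p = 0
    while p != end and p < len(memory):  # bug: `!=` instead of `<`
        out.append(memory[p:p + 2])
        p += 2                           # cursor can step straight past `end`
    return b"".join(out)

def parse_fixed(memory: bytes, end: int) -> bytes:
    """Same loop with a proper bounds check."""
    out = []
    p = 0
    while p < end and p < len(memory):
        out.append(memory[p:min(p + 2, end)])  # never read past the boundary
        p += 2
    return b"".join(out)

# Simulated process memory: the request we meant to parse, followed by
# leftover data from another session sitting in the same buffer.
memory = b"GET /page" + b"|secret-password-from-another-request"
end = 9  # intended boundary: only the first 9 bytes belong to us

# Because 9 is odd and the cursor moves in steps of 2, p never *equals*
# `end`, so the leaky parser runs on into the adjacent "memory".
leaked = parse_leaky(memory, end)
print(b"secret-password" in leaked)  # → True
print(parse_fixed(memory, end))      # → b'GET /page'
```

A single comparison operator is the difference between parsing a request and serving up someone else's session data, which is a fair summary of how unforgiving this kind of code is.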
Cloudflare handled the situation with aplomb, and provided an honest assessment of what had happened, as well as a technical deep dive into Cloudbleed.
The flaw was fixed quickly and most of the cached search engine results were purged, but as a testament to how complex and hard to understand these things are, a number of customers were unsure whether they were affected and whether they needed to do anything.
Who can blame them, when even experts like Cloudflare’s engineers missed a case of unexpected software behaviour with potentially serious consequences?
Whilst reducing complexity in your systems might seem like the right response, it won't always fix the problem.
Ultimately, it boils down to living with the fact that the unexpected can - and will - happen, and having a plan in place when complexity bites you.
That, and hope that sometime soon we’ll figure out how to focus on the simple and elegant rather than the complex and convoluted.