It's been a long and winding road, but it would appear that many organisations are now finally heeding the advice to invest resources into assessing and securing their custom-developed applications.
While I'm relieved to see that security assessment is now beginning to encompass more custom software elements, it is discouraging to see that the skills and understanding needed to remedy any discovered flaws are generally poor. There seems to be an increasing gap between understanding the context of a particular security flaw, and the ability to remedy it.
Whenever I conduct an application assessment – whether that is against a web-based or compiled application or off-the-shelf software – it is often necessary to include sample exploit or proof-of-concept code. While the final reporting deliverables provide detailed information on the vulnerability's significance, along with the relative impact and probability of exploitation, the fix information is often fairly concise.
But, sadly, it would appear that even basic fix information must be expanded upon if a security flaw is likely to be repaired successfully.
On a recent engagement, a flaw was uncovered within a compiled application when submitting an unexpectedly long parameter. Proof-of-concept code was supplied giving a quick example of how a long string of the repeated letter A would trigger a stack-based overflow, along with an analysis of how this could be exploited remotely and turned into an attack.
The fix information included advice on how to implement bounds checking and sanitise user-submitted data before using it within the application.
As is so often the case, the developers said "Cool, so that's how you'd hack it," while the project managers said "Oh no, this is going to add weeks on to the release schedule."
After a few discussions as to what constituted bounds checking and data sanitisation procedures, a fix was made available by the developers for re-testing. The developers were adamant their changes had fixed the security flaw, and that our proof-of-concept code no longer resulted in the stack-based overflow. Unfortunately, they had taken the proof-of-concept code a little too literally, assuming that just blocking a long string of As was sufficient.
I would hope that this inadequate fix was just down to a lack of security knowledge, and an assumption that our code was the only way to exploit the flaw, but I suspect that tight release schedules and the necessity for a "clean" report were also contributing factors.
The problem goes beyond homegrown custom software, manifesting itself in many commercially available enterprise solutions – particularly mainstream databases. The necessity to get a fix out the door as quickly as possible, combined with poor security insight, often results in inadequate fixes.
My colleague, vulnerability researcher David Litchfield, could probably give a daily example. Many of the email threads with product development teams discussing their fixes make for entertaining (if scary) reading. One classic response: "Why are you passing a long string? Shouldn't you be passing the username and password?"
Whether it's poor security insight or naivety, too many developers lack the skills to provide a robust fix for the software they are responsible for. On the whole, firms that do well in fixing security flaws tend to have strong (and separate) code-testing teams, and have periodic security refreshers for developers.
I've also found that having developers participate in daily conference calls throughout the security assessment, and join in on discussions about the security findings as they crop up, tends to increase the likelihood of a good fix being developed, as well as maximising knowledge transfer.
Gunter Ollmann is director of professional services at Next Generation Security Software