There's not an office in the land that doesn't hear the klaxon of a fire alarm once a week. It's not because fires keep breaking out; it's because all safety equipment needs testing regularly. Yet, many organisations install security technology and never check that it is doing what it's supposed to.
Penetration, or 'pen', testing is becoming more important as auditors and external partners increasingly expect proof of security rather than just the installation of systems. Mastercard and Visa, for example, require all retailers using their systems to conform to the Payment Card Industry Data Security Standard.
For many years, pen testing has involved scanning a range of IP addresses to see which services are running on the target network; the testers then try to hack those services to gain access to privileged information.
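At its simplest, that scanning stage amounts to little more than attempting a TCP connection to each address and port of interest. The sketch below is purely illustrative, not any particular tester's methodology: the address range and port list are hypothetical, and a real engagement would only ever run against systems the client has explicitly authorised.

```python
import socket

# Hypothetical target range (TEST-NET-1 documentation addresses) and a
# handful of common service ports -- a real sweep would cover far more.
HOSTS = [f"192.0.2.{i}" for i in range(1, 11)]
PORTS = {21: "ftp", 22: "ssh", 25: "smtp", 80: "http", 443: "https"}

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    services = [name for port, name in PORTS.items() if port_open(host, port)]
    if services:
        print(host, "->", ", ".join(services))
```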
However, with the advent of firewalls and other security technologies, as well as the conversion of services to web-based systems, pen testing methods have had to evolve, too.
Many testing companies now offer an internal attempt to hack into clients' systems in addition to an external check of web applications.
"For the past couple of years, web attacks have been the only things that still work externally," says Richard Brain, technical director of ProCheckUp.
His company uses a combination of manual and automated techniques to find holes in web-facing applications.
Frequently, tests manage to uncover undocumented flaws in commercial software. And, in Brain's experience, SQL injection techniques account for between 45% and 60% of the weaknesses his people find.
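SQL injection works by smuggling SQL syntax through input fields that an application pastes directly into its database queries. The sketch below is not ProCheckUp's methodology, just a minimal illustration of the class of flaw, using a hypothetical users table in an in-memory SQLite database.

```python
import sqlite3

# Hypothetical schema, for illustration only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

supplied = "' OR '1'='1"   # attacker-controlled input from a web form

# Vulnerable: the input is concatenated straight into the SQL string,
# so the injected OR clause makes the WHERE condition true for every row.
query = f"SELECT * FROM users WHERE username = '{supplied}'"
print("injected:", db.execute(query).fetchall())        # returns all users

# Safer: a parameterised query treats the input as a literal value.
safe = db.execute("SELECT * FROM users WHERE username = ?", (supplied,))
print("parameterised:", safe.fetchall())                # returns nothing
```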
Testing a complicated web application properly takes about half a day, he estimates, which makes it difficult to spot every possible vulnerability.
Brain recalls his company being called in by one large US corporation after several pen testing firms had failed to explain how hackers had broken in. ProCheckUp found the flaw in that most unlikely of places, the "Contact us" page.
An internal test typically involves one of two scenarios: a hacker with no knowledge of systems; or a disgruntled employee with some degree of expertise. "Either way, we'll go in with the full toolbox," says Dave Beesley, managing director of Network Defence. "We'll have virtualised Linux running on laptops, packet analysers, network capture devices, network sniffing tools ... We'll collect data with the sniffing tools, find out what services are running and look for vulnerabilities that will enable us to gain access and escalate privileges."
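The data-collection stage Beesley describes can be as simple as listening for which hosts answer connection attempts on the local network. A rough sketch, assuming the third-party Scapy library (his team's actual toolkit isn't specified), is shown below; packet capture needs root privileges and, on a real test, the client's written authorisation.

```python
# Passive service discovery sketch using Scapy (pip install scapy).
# Hosts replying with SYN+ACK are offering a TCP service on that port.
from collections import Counter
from scapy.all import sniff, IP, TCP

talkers = Counter()

def note(pkt):
    # Record which hosts answered which ports.
    if IP in pkt and TCP in pkt and pkt[TCP].flags == "SA":
        talkers[(pkt[IP].src, pkt[TCP].sport)] += 1

sniff(prn=note, store=False, timeout=60)   # listen for one minute

for (host, port), seen in talkers.most_common():
    print(f"{host}:{port} answered {seen} connection attempt(s)")
```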
Internal tests such as these will usually reveal flaws, with Beesley estimating that nine out of ten organisations will have at least minor holes in their security. "Often though, they're low risk. For example, there may be a service open with no exploit code available." However, one in three tests tends to reveal a severe flaw.
The NCC Group, along with a few other pen testers, goes further, using social engineering techniques to gain access to systems. This may involve ringing staff while pretending to be radio producers or claiming to be telephone maintenance personnel to get physical access to the building. "We ask how far we're allowed to go beforehand, and we ensure we don't leave the client less secure than when we started," says Paul Vlissidis, the company's head of security testing.
Peace of mind - at a price
Depending on the client, a pen test may be a blanket check-up of all systems, or may focus on a specific application or group of applications. Since pen testing can run to £1,000 or more a day, and typically takes around five days for a moderately sized network, most clients go for a blanket check-up the first time, with only specific systems given regular check-ups later on. These may involve using a different pen tester, just to ensure that the original contractors didn't miss anything.
The ultimate test of pen testers themselves is whether they can find every single flaw and stop anyone breaking in. This requires a combination of resources and training. Ian Reece, S3 manager at Integralis, says his company's pen testers attend the same conferences as black-hat hackers, subscribe to security mailing lists and have access to whatever machines and systems they need.
Certification options
The Certified Ethical Hacker Certification, designed by the International Council of E-Commerce Consultants, is one way of training and certifying pen testers. It is available to organisations through companies such as The Training Camp.
However, not everyone's a fan. "All it does is certify you can do a hack," Reece says. "But you could be anybody. It's no test of identity." This proof of identity is important to many clients. Although there is frequently an image of the "poacher turned gamekeeper" attached to pen testers, few clients or testing companies are willing to trust those who were once on the wrong side of the law.
"I trust the firms that have always had a strong ethical focus," says Stuart Okin, associate partner at Accenture's security practice. "It's down to personal choice, but if a client asked me to recommend a firm, I'd go for those that have always been on the 'white-hat' side."
There is a UK certification that requires a complete background check: the Government's Communications-Electronics Security Group (CESG) CHECK scheme.
CHECK certification ranges from 'red' through to 'green' for full clearance.
All levels involve a viva and a practical demonstration of pen testing skills.
And passing once isn't enough: regular recertification of both individuals and companies is required.
In the absence of any other standard of pen-testing ability and trustworthiness, CHECK Green status has now become part of almost all pen-testing tenders.
There are, however, many pen testing companies that do not have this status at present, with even IBM only getting CHECK Red.
Uncertain future
Uptake of pen testing services in the UK is still quite small. Steven Cox, principal consultant for security management at Computer Associates, admits that none of the company's clients have ever asked for the service or even recommendations. Although it is being forced on some, most companies feel their exposure to risk doesn't warrant the costs and time a pen test will take. Those who have already been broken into often feel differently, however.
Ultimately, pen testing exists in a slightly opaque world where it's sometimes hard to separate the good, the bad and the downright unnecessary.
Add in the proposed amendment to the Computer Misuse Act (see box below), and the future for pen testing as a commercial enterprise, at least in its current form, starts to look a little uncertain. Despite this, for some CTOs there is always going to be an annoying voice that says: "How do you know your network really is secure? Have you tested it?" As long as there is doubt, there will be a desire for pen testing.
PENETRATION TESTING AND THE LAW
Proposed changes to the UK's Computer Misuse Act (CMA) have been criticised by security experts who fear the changes could make software tools used in penetration testing illegal.
An amendment put forward by Conservative peer Lord Northesk to delete section III of the act, which relates to software tools, has failed to pass committee-stage discussions. He argued that attempts by the judiciary to combat organised crime could backfire, as law enforcement agencies would fall foul of the same law, and that it could also jeopardise legitimate pursuits such as ethical hacking and penetration testing.
Commercial pen testers have also voiced concern. "Tools simply reduce the time that a hack would take; but hackers aren't short of time," said Ken Munro, MD of SecureTest. "Pen testers' time is paid for by clients, so the net effect of these CMA changes may be to increase the time a pen test takes. Imagine trying to run a large port scan by hand, instead of using NMAP." However, lawyers are less concerned about the precise wording of the act, pointing to standard UK legal practice. Simon Halberstam, partner and head of e-commerce law at Sprecher Grier & Halberstam, said: "There is a long line of case law to support the idea that possession of software tools is not illegal. You cannot simply assume intent - you would have to establish both capacity and intent to secure a conviction, even when the situation is suspicious."
The British Computer Society (BCS) takes a similar view, and said in a recent statement: "Most tools used by systems administrators and computer forensics investigators are commercially available products used in the course of load and resilience testing. The distinction between the lawful and unlawful use of such tools seems a fine one. We consider, however, that the prosecution would have to prove intent, and either the article had been specifically designed or adapted to commit the offence, or had been obtained in order to do so."
There is also scepticism about whether the changes will have any effect on the wider, global problem. Prosecuting hackers operating outside the UK would still be difficult.
However, the BCS admits that all may not be rosy: "While the intention of the clause is positive, its full impact will become clearer only when tested in a court of law ... The BCS would like clarification as to the position regarding software testing tools."