It would be easier if hackers, who say they’re acting in the public interest by releasing information on the vulnerabilities they find, just got real jobs and stopped pointing out the weaknesses in our software, right? Wrong.
As most who work in the IT security field will tell you, all the software that we use is shipped in a vulnerable state. The security holes are there from day one, and if the good guys don’t find the bugs, the bad guys will. The only way to defend an operating system or an application against a bug is to know of the existence of the bug in the first place.
Just 10 years ago, the bug-hunting community was a mish-mash of hackers, system administrators and programmers. Many were geeks seeking kudos for finding the latest "zero-day" or "fresh" vulnerability.
Since then, IT security has become a booming business and vulnerability information is worth its weight in gold. Scores, if not hundreds, of full-time bug hunters now spend their days earning hefty salaries pulling apart software and looking for bugs — a weird sort of third-party quality assurance service for software companies.
They disclose their findings to the vendor, which releases a patch, then they release information about the bug to the wider community. But what are the ethics of security research? How much information should researchers release when they find a bug?
"You talk about why people crack things; I think the benefit is that it keeps the vendors in line, it holds them accountable," says Rick Forno, the former chief security officer of Internic. "And chances are if the good guys find something, the bad guys have known about it longer than the good guys."
US-based Forno is currently studying for a PhD on vulnerability disclosure at Curtin University in Western Australia. In his role as Internic’s CSO, he was responsible for securing the Internet’s root domain name servers — the core directories responsible for matching domain names to IP addresses. In short, they’re important machines.
While Forno defends security researchers who disclose information on the vulnerabilities they uncover — even "proof of concept exploit code", the software researchers sometimes release, which allows all and sundry to use the vulnerability — he says there’s a right way to do it and a wrong way.
"Knowledge is neutral. How do you use it: to patch a system or exploit a system?" he asks. "There is a big movement now to restrict adverse information ... but where do you draw the line between where information is deemed to be adverse or helpful? Too often people err on the side of caution."
In this feature, you’ll hear from the hackers themselves. Some have disclosed information that’s led to computer worms being unleashed by unscrupulous hackers. Others have written tools the bad guys use to penetrate networks. All say they’ve acted in the public interest.
Are they mischievous characters or guardian angels? Read on and decide for yourself.
Why we need hackers
By Patrick Gray on Jan 29, 2007 2:16PM