The ethics of security

[Blog post] Where did that zero-day go?

The people who are paid to poke holes in your company’s defences lead interesting existences in an area that is, both legally and ethically, very much a shade of grey.

To be effective, they need to think and act like attackers. That can mean using techniques and methodologies that, although neutral in technology and intent, may seem borderline to outside observers.

Usually, security people like to share information as much as possible, because this makes their job easier and their clients safer, but there are situations where information sharing and transparency fall through the cracks.

Last year, it became clear that one security vendor was selling zero-day exploits to the United States National Security Agency (NSA).

French vendor VUPEN apparently earned tens of millions from the US spies and I am told it’s not alone in this practice.

That kind of commercial self-interest grinds the gears of many in the security business, as it amounts to a game of Russian roulette: unknown zero-days can spill into the wild and be abused by criminals.

It’s not just security consultants themselves who are sometimes tempted to clip the ticket both ways - that is, if a zero-day is found, to make money out of it before it is patched.

One person, who asked not to be named, told me that security advisors find themselves in a difficult spot when working with more savvy and experienced clients.

Some clients insist on long and detailed non-disclosure agreements that forbid any mention of the security consultants’ work with the client - and also ban publication of any findings that result from the testing.

In such a situation, if the security consultants find an egregious vulnerability, or a repeatable zero-day exploit that can be built and distributed, what’s to stop the client from capitalising on it?

If the client hasn’t agreed contractually on terms for what to do if a security issue is discovered, a raft of things could happen. Zero-days could be sold to criminals or spy agencies (or given away for free if the client’s particularly patriotic).

Worse, the client could be tempted to deploy the zero-day against competitors or even dabble in some cyber crime by themselves.

Far worse, the vulnerability could leak out into the underground and run rampant around the world.

Plugging such a “meta security” hole is arguably important for the security industry.

While security professionals are understandably reluctant to embrace regulation, it is not in their interests to have vulnerabilities leak out via clients.

Clearly, security consultants who uncover and disclose a vulnerability do so with the client’s authorisation. That same authorisation should offer legal protection - but being connected with a dodgy client that leaks zero-days is not a good look for any security business.

Here, the security industry could and should look at its predecessor, the anti-virus business.

The AV crowd spent many years nutting out ethics and codes of conduct. For example, the AV industry debated whether to hire virus writers: people who created the problem in the first place, but who may have valuable skills for building better products.

Computer viruses were, and still are, often destructive and dangerous. It’s not hard to see why the AV industry decided a cautious approach was the only way to keep its customers safe.

A reputable AV company does not resell threats or hide them away for later, and that’s a principle the wider security industry should adopt - or face having it forced upon it.

Juha Saarinen
Juha Saarinen has been covering the technology sector since the mid-1990s for publications around the world. He has been writing for iTnews since 2010 and also contributes to the New Zealand Herald, the Guardian and Wired's Threat Level section. He is based in Auckland, New Zealand.
Read more from this blog: SigInt
