Security research: Obama’s new Prohibition


[Blog post] Let's learn from past mistakes.

A massive void is opening up between those who think so-called “bug bounties” and ethical security research are beneficial to our industry and those who don’t.

Furthermore, this debate is no longer limited to the infosec community: it’s getting political.

Just last week we saw an excellent example of why senior executives maybe shouldn't write on corporate blogs, send tweets, or post their unfiltered thoughts on Facebook and LinkedIn - especially if they are prone to venting personal frustrations or opinions that aren’t aligned with the corporate collective.

If you don’t know what I’m talking about, click through here and you’ll see why Oracle quickly deleted its chief security officer's online tirade against the bug hunting community.

Shortly after, Oracle’s chief corporate architect Edward Screven released a statement saying, "We removed the post as it does not reflect our beliefs or our relationship with our customers."

Actions, however, speak louder than words.

Of course, Oracle CSO Mary Ann Davidson might have a valid excuse: she may have forgotten to take this year’s security awareness refresher or just skipped the section on ‘social media basics.’

She may not have realised that once something is posted on the internet, it’s likely out there for good (which would be somewhat concerning for the chief security officer of a technology company). In this case, her post remains archived for all to see.

All joking aside, what’s disappointing from this particular outburst is that she wagged an accusatory finger at diligent security researchers looking for vulnerabilities in Oracle’s codebase, suggesting they’re doing no good and are simply violating Oracle’s licence agreement.

She went as far as to say that Oracle is good enough at finding vulnerabilities without anyone else’s help. I’m sure security legends such as David Litchfield, who’s been helping to find and plug holes in Oracle’s software since late last century, might disagree.

Litchfield (now a security engineer with Google) is very active in doing exactly what Mary Ann Davidson seems to despise - helping Oracle make its codebase and architecture immeasurably more secure with his own, unpaid, private testing. 

This Oracle mis-step came to light as another troublesome trend was becoming apparent. Right across the world, politicians and governments are jumping on the bandwagon to vilify security researchers.

President Obama is now pushing for increased legislation and penalties that criminalise the activity of legitimate security researchers who are working to find the bugs before the bad guys.

Joseph Lorenzo Hall, chief technologist at the Center for Democracy & Technology, said of the new legislation: “It seems to criminalise sharing information that aids an attack.”

This means companies sharing findings related to security vulnerabilities would fall under the hammer of the Racketeer Influenced and Corrupt Organizations Act - also known as the RICO Act.

However, in stark contrast to this ill-conceived move by the government, some companies are actually increasing the cash funds available for bug bounties higher than ever before.

With the launch of Windows 10, Microsoft doubled the war chest it sets aside to pay researchers who find security flaws.

In doing so, Microsoft has effectively recruited a commission-only team of independent bug hunters, reverse engineers and forensic experts to scour its code for issues. Now that’s clever.

Worst-case scenario

So, what might happen if Obama’s plans succeed? To start with, US legislation won’t deter security researchers in China, Russia, South America and Eastern Europe from searching for vulnerabilities that can be sold on the black market.

If the open market for vulnerabilities is stifled by legislation in the US, the good guys will simply stop doing their work and the arms race will be over.

The world will become increasingly reliant on vendors finding bugs before the bad guys do. But that’s an unwinnable race: bugs are still being discovered in codebases dating back 10 or 15 years, and it’s simply impossible to expect vendors to dedicate their best coders to the task.

Even if they wanted to, how many highly paid experts would vendors need to employ full-time to hunt for an unquantifiable number of vulnerabilities? The cost overhead would push their product prices up, again stifling competition and putting many out of business.

This is a truly intractable problem, and it can only be addressed by giving the world's ethical security researchers the freedom to dismantle things as they see fit and look for the weaknesses.

This is and always will be the best and only way to address this problem. If our politicians lead us down any other path, we’ll pretty quickly find ourselves in a threat environment that is 10 times worse than it is today.

A warning from history

We should learn from historical examples where the unintended consequences of legislation meant even the simplest objectives were not met and the world became a more dangerous place.

Take the era of Prohibition, introduced in the US in the 1920s: for over ten years, the big clampdown on liquor was designed to instil a degree of temperance in the American public.

However, it fostered nothing but excess, violence and a brand new, underground liquor market. This legislative solution devised to address the societal problem of alcohol abuse served to make the problem much worse.

This is exactly what will happen if legislators outlaw the security research that our brightest and best engineers are compelled by nature to do, leaving it open to the black market to take over.

An arms race between vendors and criminals will ensue, which the vendors have no hope of winning. Police, courts and jails will be forced to deal with a brand new criminal demographic that previously wasn’t on their radar.

Prohibition was a failure. Let’s not repeat the mistakes of the past.

Tony Campbell
Tony Campbell has been a technology and security professional for over two decades, during which time he has worked on dozens of large-scale enterprise security projects, published technical books and worked as a technical editor for Apress Inc.

He was the co-founder of Digital Forensics Magazine before developing security training courses for infosec skills.

He now lives and works in Perth, where he maintains a security consulting role with Kinetic IT while continuing to develop training material and working on fiction in his limited spare time.

Read more from this blog: Unpatched
