Want an Internet of Things? Fix the internet first

By Anthony May
What if the physical world inherits the bugs of the virtual one?

The thousands of gadgets that will soon enough converge as the ‘Internet of Things’ have many of the same security vulnerabilities as their older siblings: computers, mobile devices, routers, and online/cloud services.  

Most run some flavour of Linux and use many of the same libraries that make up our online world, so their core is just as vulnerable.

In fact many of them have proven to be worse, in part because eager young developers either didn’t give security a thought, or said “she’ll be right, mate”, or put it off for “later”. Oops.

You’ve probably also read that many ‘first generation’ IoT devices offer no way, or at least no customer-friendly way, to upgrade their firmware: not to plug security holes, not to fix bugs, let alone to add new features. While that is changing, the fundamental problem remains.
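It's not as if a trustworthy upgrade path is rocket science. As a purely illustrative sketch (the function names, the Ed25519 signing scheme and the use of Python's cryptography library are my assumptions, not any vendor's actual mechanism), this is roughly all a device has to do before it flashes anything:

```python
# Purely illustrative sketch, not any vendor's actual update mechanism:
# the device ships with the vendor's public key and refuses to flash any
# image whose signature doesn't verify against it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def firmware_is_genuine(public_key, image: bytes, signature: bytes) -> bool:
    """True only if 'image' was signed by the holder of the matching private key."""
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Demo: the 'vendor' signs an image; the 'device' checks it before flashing.
vendor_key = Ed25519PrivateKey.generate()
image = b"\x7fELF...firmware-blob..."
sig = vendor_key.sign(image)

assert firmware_is_genuine(vendor_key.public_key(), image, sig)              # genuine
assert not firmware_is_genuine(vendor_key.public_key(), image + b"!", sig)   # tampered
```

Refuse anything that fails the check, keep the old image, and half of the 'we can't let customers upgrade safely' excuse evaporates.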

These IoT gadgets may not seem especially critical to our lives, so a malware compromise might not sound like a big deal, but a compromised gadget makes a great foothold from which to attack the other computers and devices on our LAN, which is an equally insidious prospect.

IoT devices have become one of the weakest links in the security chain of domestic, and even business, networks, and they will stay that way for the foreseeable future.

Worse, we’re often talking about gadgets at the cheap end of the market, where some vendors are already adopting a sell-it-but-don't-support-it approach (until you buy Version 2). We can only hope that approach won't fly once we have a more competitive market.

In response, some geeks advocate putting them on a separate or ‘guest’ LAN/WLAN, thereby sweeping the problem under a ghetto rug.
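To be fair, the idea itself is simple enough. Here's a toy sketch of the forwarding decision such a segregated setup relies on (the subnets and the function are invented for illustration; a real router expresses this as firewall rules, not Python):

```python
# Toy illustration only: the forwarding decision a 'separate IoT LAN' relies on.
# The subnets are invented for the example.
from ipaddress import ip_address, ip_network

IOT_NET = ip_network("192.168.20.0/24")    # the gadget ghetto
MAIN_NET = ip_network("192.168.1.0/24")    # laptops, phones, the NAS

def allow_forward(src: str, dst: str) -> bool:
    src_ip, dst_ip = ip_address(src), ip_address(dst)
    if src_ip in IOT_NET and dst_ip in MAIN_NET:
        return False   # a compromised gadget can't pivot onto the main LAN
    return True        # everything else, including the internet, is allowed

assert allow_forward("192.168.20.14", "203.0.113.9")       # out to the internet: fine
assert not allow_forward("192.168.20.14", "192.168.1.5")   # across to the LAN: blocked
```

It works, as far as it goes - but it treats the symptom, not the disease.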

Some vendors' architectures have all the devices communicate over some other wireless protocol (Zigbee, MiWi, 6LoWPAN, Z-Wave, etc) and rely on a central gateway connected to your LAN/WLAN to provide whatever internet access the devices need. This doesn't really avoid the problem - it just squeezes it into the central gateway, which also makes a juicier target (hack one, own all).

Any assertion that you can focus your efforts on hardening that single device's implementation is little more than bluster and ego. Either way, ordinary consumers will simply connect them to their one and only WLAN in blissful ignorance.

All these approaches perpetuate a big fat blind spot in the software development ecosystem - a lack of ubiquitous focus on security. Yeah, yeah, I hear you moan: we all know it, security is hard, it's been a long hard slog over the last 15 years with plenty more to go, and the big end of town is still crashing into gobsmacking lapses in security, as 2014 alone has shown us. What hope for the rest of us?

Recently the IEEE - the Institute of Electrical and Electronics Engineers - decided to take the lead in advocating a new focus on security in software, creating its ‘Center for Secure Design’ and releasing a ten-point guide to the common design flaws that lead to insecurity, with the promise of more to come.

"The Center intends to shift some of the focus in security from finding bugs to identifying common design flaws in the hope that software architects can learn from others' mistakes.”

It is worth checking out (see later in this article) – it is a decent list and a great starting point if your 'security thinking' hasn't gone much beyond updating to the latest and greatest of all the third-party libraries you're leveraging.

Even more seasoned developers with responsibilities in systems architecture should read it - it's easy to let one of these considerations slip, often for reasons that seemed like a good idea at the time.

The IEEE is hardly alone in advocating that we all need to pull up our security socks, so why isn't it working?  I wonder whether there is so much news about the 'bad guys' and their seemingly bottomless grab-bag of zero-day exploits that many programmers simply flick back to their IDE and continue coding that insanely great new feature to make the world a better place...

For how much longer do we keep bailing water out of our leaky boat? 

We see media coverage weekly, if not daily, of more software, sites or services being hacked and user credentials pasted publicly - plus an unknown number of penetrations that are never disclosed, or even noticed by the victim - all matched awkwardly by an institutionalised habit of releasing reactive security updates on a calendar schedule. How much further can consumer confidence in software and online security be pushed before some tipping point is reached and everyone heads for the hills, abandoning, say, online banking?

What can be done to restore faith?

I'm reminded of the 'EMC Directive' (ElectroMagnetic Compatibility) introduced by the EU in the mid-90s, which formalised limits for RF emissions and requirements for immunity to RF interference, ESD (static zap) and power-line noise, and dictated what most electronics-based products had to demonstrate before they could legally be brought to market.

It was a suite of standards somewhat more stringent than the 'FCC regulations' commonly observed up to that point, aimed at making our rapidly growing world of gadgets play nicely together. Until then, no widespread regulations had entertained the possibility that nearly everyone might be walking around with a several-watt radio transceiver clipped to their belt, which is why so many electronic devices behaved erratically when a cellphone came near them.

The world had changed, so our collective game had to step up. It took professional electronics engineers years to develop a suite of design considerations, tools, test regimes and new ranges of electronic components dedicated to the cause. It took a market of independent test labs. And most importantly, it took feedback about what would pass 'Regulation X in Country Y' and what wouldn't, before electronics engineers developed the 'eyes' to anticipate problems that can't literally be seen.

I believe 'security thinking' needs to become part of programming - just as inherent as awareness of variable type or scope. That would be a critical step in the industry's maturation: addressing this embarrassingly endless slew of exploits, heading off the looming crisis of public confidence in 'cloud' security, and stopping the Internet of Things from becoming a morass of malware.

I hate to say it, but it’s hard not to see yet more accreditation and qualification schemes popping up, or existing ones broadening.  I think we’ll see huge growth and maturation in automated code testing, system testing, and penetration testing products and services, with both private and industry organisations giving stamps of approval for the security of software.

The IEEE initiative is one part of this, and I applaud its intent, but it can't and never will be all things to all developers. It has to be *much* broader than this, and it needs to happen as soon as possible. The intent of the IEEE initiative needs to be replicated everywhere programmers turn for learning and advice - from universities to stackexchange.com, from textbooks to meetups, from industry conferences to hack-spaces - so that 'security' is no longer relegated to a mere subset of the software developer ecosystem and written off as 'someone else's problem' by the rest.

Until then, our Internet of Things is an endless trainwreck that's already begun, multiplying our 'attack surface' in ways even the 'bad guys' may not have thought of yet.

IEEE’s Guide to ‘Avoiding the Top 10 Security Flaws’:

  1. Earn or give, but never assume, trust.
  2. Use an authentication mechanism that cannot be bypassed or tampered with.
  3. Authorize after you authenticate.
  4. Strictly separate data and control instructions, and never process control instructions received from untrusted sources.
  5. Define an approach that ensures all data are explicitly validated.
  6. Use cryptography correctly.
  7. Identify sensitive data and how they should be handled.
  8. Always consider the users.
  9. Understand how integrating external components changes your attack surface.
  10. Be flexible when considering future changes to objects and actors.
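To make a few of those concrete, here's a contrived sketch of points 2, 3 and 5 applied to an imaginary smart plug (every name, token and status string here is my own invention, not from the IEEE guide): authenticate first, authorise second, validate the data explicitly, and only then act.

```python
# Contrived example of points 2, 3 and 5 above; every name here is invented.
import hmac

DEVICE_TOKEN = b"not-a-real-secret"    # assumed pre-provisioned shared secret
AUTHORISED_USERS = {"alice"}           # who may actually switch the plug

def handle_switch_request(user: str, token: bytes, state: str) -> str:
    if not hmac.compare_digest(token, DEVICE_TOKEN):   # 2. authenticate, unbypassably
        return "401 Unauthorized"
    if user not in AUTHORISED_USERS:                   # 3. authorise *after* authenticating
        return "403 Forbidden"
    if state not in ("on", "off"):                     # 5. explicitly validate all data
        return "400 Bad Request"
    # ...only now drive the relay...
    return "200 OK: plug switched " + state
```

Trivial, yes, but the whole point of the guide is that these steps keep being skipped, or done out of order.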
Anthony May
Leaving behind a 12 year diversion into small-business IT consulting with a focus on security, Anthony May has returned to his 'first love' of electronics design engineering, only to discover it's Groundhog Day and its latest incarnation - the Internet of Things - is beset with the same security nightmares he thought he was leaving behind.
