With the reports of escalating cyber-threats to companies from organised crime and government spies, it is easy to forget that a significant portion of this threat involves not strangers but insiders: the employees and businesses that the target companies are supposed to trust.
Of all the ways these insider security breaches can occur, perhaps the most insidious are breaches brought about through software written in-house or obtained from trusted suppliers.
This year, Google agreed to pay $7 million in the US for collecting unencrypted data from wireless networks as part of Street View. The company claimed that the code for collecting this data had been added by an engineer on his own initiative.
The code managed to bypass all internal checks and was used in production for some time before it came to light in 2010.
But it is not uncommon for developers to add undocumented features to software.
Often justified on the grounds that they make administration and support easier, these backdoors can later be exploited not only by their authors but also by others who come to learn of their existence.
Because these code changes are written to bypass any auditing that may accompany the software, they are usually hard for business users to detect, coming to light only after a breach or incident has occurred.
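To see why such backdoors are so hard to spot from the outside, consider that one can be as small as a single extra comparison in a login routine. The function, accounts and credentials below are entirely hypothetical — a minimal sketch of the pattern, not any real product's code:

```python
# Hypothetical sketch of a hardcoded-credential backdoor.
# From the outside, the application behaves normally; the extra
# branch is visible only to someone reading the source code.

USER_DB = {"alice": "s3cret"}  # legitimate accounts (illustrative)

def authenticate(username: str, password: str) -> bool:
    # Undocumented "support" account slipped in by a developer:
    # it bypasses the account database entirely, so disabling or
    # auditing accounts in USER_DB never touches it.
    if username == "support" and password == "letmein":
        return True
    return USER_DB.get(username) == password
```

Nothing in the application's logs or account database distinguishes the backdoor login from a legitimate one, which is why such changes tend to surface only through source review or after an incident.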
The Verizon 2013 report into corporate data breaches showed that half of insider breaches involved former employees who exploited backdoors or old accounts that had not been disabled.
In a more recent incident with parallels to Google's, users of the ESEA gaming network discovered that the ESEA client software they had installed on their computers had surreptitiously been using their high-powered graphics cards to mine the virtual currency Bitcoin.
Again, this turned out to be the work of an ESEA employee who had added the code out of interest, ostensibly never intending it to go live. In this case, it was ESEA users who spotted the activity on their computers.
Deciding which software, and which software suppliers, to trust is a difficult problem for companies.
Companies and employees agree every day to send companies such as Microsoft automated crash and bug reports that have the potential to reveal sensitive information.
We may implicitly trust a company such as Microsoft but, as the experience with Google has shown, even such companies may not always be in control of what their software is doing.
Companies like Microsoft produce millions of lines of code a year. It would be impossible for them to guarantee that they are aware of how all of it functions.
An older but probably more frightening example of a reputable company installing a backdoor in its software comes from Borland, which had included a username and password in its Interbase database software that allowed remote access over the internet.
This went undetected for seven years before coming to light when Borland open sourced the software.
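Hardcoded credentials of this kind can sometimes be found without any source code at all, simply by extracting the runs of printable text embedded in a binary — the approach of the Unix `strings` utility — and flagging anything that looks credential-like. The sketch below is a simplified, hypothetical illustration of that technique (the `politically`/`correct` pair echoes the credentials widely reported in the Interbase case, embedded here in a made-up blob):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Pull out runs of printable ASCII, like the Unix `strings` tool."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

def flag_suspicious(strings,
                    keywords=("password", "passwd", "secret", "backdoor")):
    """Keep only strings containing credential-like keywords."""
    return [s for s in strings if any(k in s.lower() for k in keywords)]

# Illustrative binary blob with an embedded credential (hypothetical):
blob = b"\x00\x01ELF\x00user=politically\x00password=correct\x00\xff\xfe"
found = flag_suspicious(extract_strings(blob))  # ["password=correct"]
```

Simple string scanning is noisy and easily defeated by even light obfuscation, but it is cheap to run over vendor binaries and has historically turned up exactly this class of embedded secret.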
And late last year, hardcoded usernames and passwords were found in a line of Telstra broadband routers that could allow attackers access to customer networks.
When it comes to the use of apps on devices such as mobile phones that employees may control, there is very little chance of being certain that the software is not acting maliciously.
This is exacerbated by the fact that much of the data for these apps is being stored in some associated proprietary cloud.
An additional problem comes from the fact that these apps appear to be endorsed by Apple by virtue of being accepted into the App Store.
Users may misplace trust in the applications as a result, despite the fact that Apple's review process is cursory at best and does not address what data is collected, how it is stored and what it is used for.
Protecting employee and corporate data from incidents involving software, whatever the source, is difficult. Software backdoors, and apps that disguise a secondary use of data as part of their legitimate functionality, are not something that traditional security software will detect.
Even with access to source code, audits may not pick up rogue functionality and of course you have to trust those conducting the investigations.
Possibly the most salient advice for avoiding data breaches brought about through software is to avoid keeping data in the first place, and to keep tabs on what is kept.