Last week’s report by the Citizen Lab researchers at the University of Toronto’s Munk School of Global Affairs is yet another timely reminder of a largely forgotten subject, namely law enforcement malware.
To recap, Citizen Lab have been tracking an Italy-based company, The Hacking Team (no really, that’s what they call themselves), which produces malware that can be used to infect and remotely control people’s devices for the purposes of surveillance and data retrieval.
“Policeware” has been with us for far longer than many people realise. At the start of the new millennium, the United States Federal Bureau of Investigation tried to squirm its way out of explaining the purpose of its “Magic Lantern” virus.
At the time, the FBI insisted there were legal processes and requirements that regulated the use of malware as a surveillance tool and prevented the technology from being abused (the idea behind Magic Lantern was to act as a keylogger, capturing keystrokes before encryption was applied and thereby defeating end-to-end encryption).
As National Security Agency contractor Edward Snowden’s leaks have shown, what rules were in place mattered little and agencies helped themselves to as much information as they wanted.
There are multiple moral and ethical problems with police use of malware, sufficient to demand some serious transparency in this area.
The Hacking Team and other companies such as Gamma Group (which markets the Finfisher spyware rumoured to be in use by the Australian Federal Police) have declined to confirm use by police forces and insist that such products are only used legitimately.
This is despite evidence that policeware has been used by despotic regimes to suppress dissent, often through very violent means.
There is little or no public oversight of how law enforcement uses malware, or how often.
The thought of coppers exploiting zero-day vulnerabilities willy-nilly should be enough to scare the living daylights out of anyone with a modicum of IT experience.
Worse yet, why is it seen as a good idea for law enforcement to keep quiet about new malware and vulnerabilities instead of reporting them?
“This is at the core of the debate,” agrees Morgan Marquis-Boire, one of the researchers at Citizen Lab. “Should the government and/or police keep zero-day vulnerabilities to themselves — which makes everyone less safe, but allows them to break into ‘bad people’s’ computers — or do they have a duty to have these problems fixed?”
The situation is made worse by the fact that the creation of malware is outsourced to private companies, again with little or no oversight and transparency.
Because private enterprise is involved, there is now an incentive to discover and stockpile vulnerabilities for profit, and that makes all of our IT, from business users to Joe and Joelene Average, less secure.
Policeware can fall into the hands of criminals easily, just like any other virus or malicious application released into the wild. Criminals don’t even have to capture the malware to make use of it; they can simply disguise their own code as the official goods.
The forty-one-nation Wassenaar Arrangement was amended in January this year to cover the trade in exploits, but it is a set of recommendations rather than a binding treaty.
Outright regulation of police use of malware may not be the right way to go. It’s unlikely that governments would want to limit their own powers, and any regulation would likely be used to silence security researchers instead.
However, just as with tasers, speed cameras, alcohol checkpoints and other law enforcement technology, we really need to know more about “policeware”. After all, the technology represents a massive invasion of privacy that can lead to serious consequences, so the time has come to shine a light on its use.