A government-funded big data research group is pitching to expand its research targets so it can use data analytics to support the Government's new effort to vet public sector staff on an ongoing basis for security purposes.
At the Security in Government conference in Canberra this week, the chairman of the Data to Decisions Cooperative Research Centre (D2D CRC), Tim Scully, said the problem of the insider threat was tailor-made for data analytics.
The previous day at the conference, Attorney-General George Brandis announced that federal agencies would now be required to screen their staff on an ongoing basis in order to protect government data against "insider threats" posed by the likes of Edward Snowden and Bradley (Chelsea) Manning.
The D2D CRC commenced operations on 1 July 2014, with $25m in funding from the federal government and another $67m from industry and academia.
Its five-year mandate is to research and develop tools that maximise the benefits of big data for Australia’s defence and national security sector.
Scully said insider threat cases - such as those of Snowden and Manning - could have been detected had the tools been available to collect, correlate, analyse and report on the technically visible footprints of the trusted insider.
While mitigating the insider threat was not a research target originally addressed by the CRC, interest in this area had increased significantly with the Snowden leaks and the new vetting requirements announced by the Attorney-General, Scully said.
He said there were many similarities between using big data to detect terrorists and using it to detect insider threats - both required smart data storage solutions to present data from internal and external sources for analysis.
The one major difference was that in security vetting, the “vettee” was a willing and usually compliant participant, Scully said.
“The vettee is not trying to hide during the process. They will willingly provide documents and give reams of detail about their friends and family in what many still consider a very intrusive process,” he said.
Big data analytics could be applied to this process “relatively easily” and provide a “one stop data shop” that would alert on triggers of unusual activity inside the network by employees who had already been cleared.
“For example, plugging a USB device into a classified network should be immediately logged and flagged as a red alert. Unusual surfing activity should be flagged," Scully said.
"That is where a person accesses material that is not relevant to their work role, particularly if that activity is frequent or prolonged. Detecting a personal log-on that doesn’t correlate with the user’s current location. There’s a whole lot of indicators."
Big data tools could automate the time-consuming verifications, checking credit, foreign contacts, police history, social media, emails, psychological profiles and more, Scully said.
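As a rough illustration of what automating those verifications might look like, the following sketch fans a set of stub checks out in parallel and consolidates the findings into one report. The check functions are placeholders standing in for the external sources Scully names, not real integrations.

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder checks standing in for the sources Scully lists (credit,
# police history, social media, ...). Each would query a real provider
# and return a list of findings; here they are empty stubs.
def check_credit(candidate): return []
def check_police_history(candidate): return []
def check_social_media(candidate): return []

CHECKS = [check_credit, check_police_history, check_social_media]

def run_vetting(candidate: str) -> dict:
    """Run the slow verifications concurrently and merge the results
    into a single consolidated report - the 'one stop data shop' idea."""
    with ThreadPoolExecutor() as pool:
        futures = {check.__name__: pool.submit(check, candidate)
                   for check in CHECKS}
    return {name: future.result() for name, future in futures.items()}
```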
The real challenge was where the vettee had settled into their job or had been there for many years with extended privileged access and knowledge.
“The present system has no effective way of warning security personnel of reportable changes that have occurred in the employee’s circumstances,” Scully said.
"We can’t continue to rely on people to continue to detect and report abnormalities in an employee’s behaviour."
He argued that big data analytics would be able to detect and alert when a trusted person breached the conditions of holding a clearance.
“The capacity to gather and prepare the information for technical analysis already exists,” he said.
“All of these clearance conditions leave a digital footprint and can be collected, analysed and reported upon.”
Scully's ideal option involved a personnel security assessment record (available only to those authorised to see it) presented in a Wikipedia-like format, containing all of the person's security details.
“Unlike Wikipedia, which relies on structured and semi-structured data, we add to the mix data fields that use in-house and external structured and unstructured data sources to provide an automatic, continuous and dynamically updated assessment of security status,” Scully said.
A person's record would be flagged if, for example, their credit rating report indicated an abnormality that met a risk reporting threshold under certain security clearance conditions, or if a note included in a police database mentioned the person’s name in connection with a criminal gang or some other important circumstance, he said.
An unscheduled variation to holiday travel that sees an employee diverting from Phuket to Hong Kong might also merit a flag, he said.
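Putting those examples together, such a continuous-assessment record might ingest updates from external feeds and flag anything that crosses a clearance-specific reporting threshold. The sketch below is an assumption-laden illustration - the source names, observation fields and the 100-point threshold are invented for this example, not the CRC's design.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityRecord:
    # Hypothetical continuously updated assessment record; the sources,
    # observation fields and thresholds below are illustrative only.
    person: str
    flags: list = field(default_factory=list)

    def ingest(self, source: str, obs: dict) -> None:
        """Apply one update from an external feed, flagging anything
        that meets a risk reporting threshold for the clearance."""
        if source == "credit" and obs.get("score_drop", 0) >= 100:
            self.flags.append("credit abnormality above reporting threshold")
        elif source == "police" and obs.get("gang_mention"):
            self.flags.append("named in police note linked to a criminal gang")
        elif source == "travel" and obs.get("destination") != obs.get("planned"):
            self.flags.append("unscheduled diversion to %s" % obs["destination"])

# e.g. record.ingest("travel", {"planned": "Phuket", "destination": "Hong Kong"})
```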
However, Scully admitted his vision could also be seen as intrusive and as raising considerable privacy issues.
“All of this is starting to sound like Big Brother rather than big data,” he said. “Just because we can do it, does not mean we should.”
To mitigate such issues, Scully suggested tying the CRC's big data research efforts to its existing law and policy research program - the research would then cover the ethical, legal and policy considerations of applying big data techniques to the detection of insider threats.