Data61 wants to 'vaccinate' machine learning algorithms against attacks

Recognising and fending off attacks.

The Commonwealth Scientific and Industrial Research Organisation’s Data61 has developed a set of techniques to protect machine learning algorithms from attacks, similar to how vaccines work in living creatures.

Machine learning techniques, now in common use across most industries and visible in something as everyday as Google's search suggestions, learn from vast amounts of data over time which features matter to the task at hand and which can be dismissed.

Data61’s machine learning group leader, Dr Richard Nock, said that by adding a layer of noise (ie, an ‘adversary’) over an image, attackers could deceive machine learning models into misclassifying the image.

“Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world,” Nock said.
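To illustrate the kind of attack Nock describes, the sketch below uses the widely known fast gradient sign method (FGSM) to add a barely visible layer of noise that pushes a classifier towards the wrong label. It is a generic, hypothetical PyTorch example, not the specific attack analysed in Data61's paper.

```python
# Illustrative sketch only: an FGSM-style adversarial perturbation,
# a standard textbook attack, not the construction from Data61's paper.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.03):
    """Return copies of `images` with a small layer of noise chosen to push
    `model` towards misclassifying them."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that most increases the model's loss.
    adversarial = images + epsilon * images.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```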

The small-scale, everyday version of an adversarial attack would be watching a YouTube video from an account with a large following of conspiracy theory enthusiasts, and before you know it every other video suggestion is promoting flat-earth pseudoscience or claiming an innocuous vegetable will cure all that ails you.

The effects of such a small-scale adversarial attack are easily shrugged off or overlooked, but as machine learning algorithms direct our parcel deliveries, eradicate invasive species and drive our cars, the damage from an attack becomes much more significant.

However, Nock said machine learning models can be trained to recognise and fight off malicious data in a process similar to vaccination.

“We implement a weak version of an adversary, such as small modifications or distortion to a collection of images, to create a more ‘difficult’ training data set.

“When the algorithm is trained on data exposed to a small dose of distortion, the resulting model is more robust and immune to adversarial attacks.”
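A minimal sketch of that 'vaccination' idea, reusing the hypothetical fgsm_perturb helper above as the weak adversary: each training step mixes lightly distorted images in with the clean ones. The actual procedure in the ICML paper is more involved; this is an assumed, simplified adversarial-training loop.

```python
# Hypothetical, simplified adversarial-training step: train on clean images
# plus a 'small dose' of distorted ones so the model becomes harder to fool.
import torch.nn.functional as F

def vaccinated_step(model, optimiser, images, labels, epsilon=0.03):
    # Weak adversary: small distortions of the current batch.
    hard_images = fgsm_perturb(model, images, labels, epsilon)
    optimiser.zero_grad()
    # Learn from the clean and the distorted images together.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(hard_images), labels))
    loss.backward()
    optimiser.step()
    return loss.item()
```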

Given that autonomous buses and taxis are already on Australian streets, inoculating the algorithms that spot humans, obstacles and road markings from attack seems like a positive step forward.

Data61’s chief executive, Adrian Turner, said these new defensive techniques could spark a new line of machine learning research and ensure the positive use of transformative AI technologies.

This plays into the CSIRO’s push towards more ethical use of artificial intelligence, especially with regard to using diverse training data to develop AI solutions that don’t encode the racial or gender bias inherent in sources like historical hiring data and criminal conviction records.

It might also mitigate instances of public embarrassment, like Microsoft’s AI bot ‘Tay’, which was turned into a misogynistic antisemite in less than 24 hours thanks to the magic of Twitter, by ensuring chatbots that learn from the general public can identify racist trolling and avoid picking up the phrases trolls use.

Data61’s research paper, Monge blunts Bayes: Hardness Results for Adversarial Training, was presented at ICML in Long Beach, California earlier this morning and can be found here (pdf).
