Google has revealed it is training a new AI bot to watch YouTube videos so it can automatically flag and remove content such as graphic violence, sexual content and hate speech - but in the same breath has admitted “machine learning is actually still really bad at determining context”.

Google’s Asia Pacific public policy head of content and AI, Jake Lucchi, this week told the Human Rights and Technology conference in Sydney that automated content moderation is still very much a work in progress.
“We felt uncomfortable using them [machine learning classifiers] in ways that would do any kind of automated content removal,” Lucchi said.
If all videos used by terrorists were automatically removed, for instance, then any news coverage using snippets of that footage would also be wiped away.
Lucchi used the very real example of a CBS News tribute to journalist James Foley, believed to have been beheaded by Islamic State in 2014. Vision from the apparent murder was subsequently uploaded to YouTube.
Google's machine learning algorithms couldn’t distinguish the intent of the tribute from that of videos depicting the event in a very different light.
One response is blanket removal of anything depicting such acts, but Lucchi said: "Even though it’s easier in a way just to remove all the content that likely violates our policies, there would be a big cost to that, in that we would likely over-censor and over-remove content."
Yet automated detection of video nasties matters to Google because it's under pressure from governments to crimp the spread of propaganda and hate speech. YouTube's global team of human moderators is therefore racing to take down offensive content before it gains traction across the internet.
YouTube's hybrid solution - with AI automatically sending contentious content to human reviewers - has already dramatically improved the speed with which inappropriate content is removed.
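As a rough illustration of how such a hybrid pipeline might be wired together - the function names, scoring field and threshold below are hypothetical, not a description of Google's actual system - a minimal sketch could look like this:

```python
# Minimal sketch of a hybrid moderation flow: a classifier scores each upload,
# and anything above a review threshold is queued for a human moderator
# rather than being removed automatically. All names and values are illustrative.

from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    classifier_score: float  # model's estimated probability of a policy violation

REVIEW_THRESHOLD = 0.7  # illustrative value, not Google's


def route_upload(video: Video, human_review_queue: list) -> str:
    """Decide what happens to a newly uploaded video."""
    if video.classifier_score >= REVIEW_THRESHOLD:
        # Never auto-remove: context (news report vs. propaganda) needs a human.
        human_review_queue.append(video)
        return "queued_for_human_review"
    return "published"


# Example usage
queue: list = []
print(route_upload(Video("abc123", 0.92), queue))  # queued_for_human_review
print(route_upload(Video("def456", 0.10), queue))  # published
```

The design choice this sketch captures is the one Lucchi describes: the classifier only prioritises content for review, while the removal decision stays with a person who can weigh context.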
Lucchi said that with this hybrid approach, 76 percent of the content removed at the end of 2017 had received zero views. At the start of the same year, the majority of removed videos had over 100 views by the time YouTube’s content moderators hit 'Delete'.
He added that of the 8.4 million videos removed in the fourth quarter of last year, 6.7 million had been flagged by machine learning classifiers.
“That allows us to get content [taken down] before it’s viewed by people, which is obviously a problem in the past when you have to rely on user-flagged content.”
“It’s not to say that humans are perfect but at least it gives us that human accountability - someone who can pass comment and look at these videos with context in mind.
“It also makes sure that the real-world impact of this kind of content is not any bigger than it needs to be.”