A radical overhaul of Australia’s analogue-era censorship and classification laws, alongside reforms to the Privacy Act to “capture technical data and other online identifiers” under the umbrella of “personal information”, has emerged as a key theme of the government’s highly anticipated response to the ACCC’s Digital Platforms Inquiry.
In dual policy drops on Wednesday and Thursday, Communications Minister Paul Fletcher confirmed that the days of Australia’s arcane system of reviewers sitting in a dark room stamping classification labels onto content before it can be pulled offline are numbered.
Instead, a heavily boosted eSafety Commissioner will be handed much of the job for faster response times, along with a decent-sized stick to whack recidivists with.
The moves are a significant loss for the US social media lobby and, if enforced, could see them regularly fronting court.
Censorship is the new black
The head and shoulders of the new regime, according to the government, is “developing a uniform classification framework across all media platforms” that would replace the inconsistent mishmash of official ratings (eg Refused Classification, X18+, R18+, MA15+ etc) and various voluntary and self-regulatory codes.
At the moment, pay and subscription television, terrestrial TV, Netflix and other on-demand services ranging from Apple to Google all have different ways of grading content, with the government having essentially opted for a self-regulatory approach due to a lack of resources.
And while there’s no major immediate rub for the tech industry in shifting to “a platform-neutral regulatory framework covering both online and offline delivery”, what is moving very quickly is the regime for take-downs, complaints and pulling the plug on digital nasties.
It takes a little unpacking, not least because there are overlapping areas following the fast introduction of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 in the wake of the Christchurch mosque attacks and Facebook’s feeble response.
ISPs firmly roped in “among others”
The targets of the new codes, the government says, are “social media services (such as Facebook, Instagram and Twitter), instant messaging services (such as Facebook Messenger, WhatsApp and Viber), interactive online games, websites, and apps, and Internet Service Providers, among others.”
The “among others” is a biggie in the new code, especially after the early 2019 introduction of the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019, which compelled ISPs, “content service providers” and “hosting service providers” to block such content if called upon to do so by the Australian Federal Police.
iTnews understands that the big three cloud operators – Microsoft, Google and Amazon Web Services – felt that they were covered by the term “hosting services providers”, and attempted to have the Amendment recognise that it is not reasonable to have them held responsible for customers’ use of their servers.
The big three were also concerned that the broad wording of the Amendment would also impact smaller Australian cloud operators and hosting companies.
Those arguments fell on deaf ears, and industry has criticised the Amendment’s unusually brief path through Parliament: it consumed fewer than 24 hours from introduction to passage.
Whether the new codes make things better or worse remains to be seen.
So what’s moving?
While the Digital Platforms Inquiry response confirms the big censorship overhaul, the real regulatory teeth will now essentially vest with the eSafety Commissioner, via a swag of new powers to swat nasties that will flow from the Classification Board taking a step back.
Put simply, for “harmful online content” – and it’s a broad definition – the eSafety Commissioner takes over as both the umpire and enforcer with a range of boosted powers we’ll get to shortly.
“The current online content scheme is not suited to the contemporary online environment and the technologies and services used by Australians every day.
“It is limited in its ability to deal with harmful content hosted overseas, the services to which it applies are out-dated, and the reliance on the assessment and classification of online content by the Classification Board imposes unreasonable delays in dealing with harmful online content,” the government said in a fact sheet issued with its eSafety policy drop on Wednesday.
A key problem to date has been that people haven’t been sure who they can complain to and who enforces action, especially under the existing system, where bad stuff had to be stamped with a classification before it could be removed.
The line between industrial-grade nasties – think ISIS promos, child exploitation and hidden cameras – and user-generated material – explicit revenge videos, non-consensual activity, bullying and bashings – can be blurry.
So, in terms of takedowns and delisting, the government has gone for the effect of the content rather than its origin.
“Seriously harmful content will be able to be reported directly to the eSafety Commissioner. The Commissioner will investigate the content and will be able to issue a takedown notice for seriously harmful content, regardless of where it is hosted, and refer it to law enforcement and international networks if it is sufficiently serious,” the government’s fact sheet says.
“Where takedown notices are not effective, the ancillary service provider notice scheme will be able to be used to request the delisting or de-ranking of material or services.”
The government will now put content into two categories, Class 1 and Class 2.
Their definitions are:
- Class 1 or seriously harmful content will include content that is illegal under the Commonwealth Criminal Code, such as child sexual abuse material, abhorrent violent material, and content that promotes, incites or instructs in serious crime.
- Class 2 content will be defined as content that would otherwise be classified as RC, X18+, R18+ and MA15+ under the National Classification Code. This ranges from high-impact material like sexually explicit content and realistically simulated violence, through to content that is unlikely to disturb most adults but is still not suitable for children, like coarse language or less explicit violence. The most appropriate response to this kind of content will depend on its nature.
In broad terms, the worst material will sit with the eSafety Commissioner, who will also get sped-up powers to have content taken down or de-listed by search engines, with the response window essentially shrinking from 48 hours to 24.
“If the industry member does not comply, the Commissioner would have a range of enforcement powers at their disposal, including civil penalties for non-compliance,” the government fact sheet says.
“For harmful material that is sufficiently serious, the eSafety Commissioner would refer matters to the Australian Federal Police and state and territory law enforcement, or international networks like INTERPOL and INHOPE, as appropriate.”
Just plain offensive
For all the other stuff – running the non-criminal gamut from Pasolini’s Salò to Rodney Rude’s displeasure with Santa Claus – which can also be covered by other codes, eSafety also gets to stick its nose in.
“eSafety would have graduated sanctions available to address breaches of industry codes under the online content scheme, including warnings, notices, undertakings, remedial directions and civil penalties,” the government fact sheet says.
Also in the mix are penalties and a takedown regime for cyber bullying directed at Australian adults, with the Office of the eSafety Commissioner saying it gets more than 40 queries per month “from adults experiencing cyber abuse”.
Reports of abuse could also trigger 24-hour takedown notices, and the regime would presumably extend to trolling, with the worst cases also referred to police.
It should be a very interesting 2020 Budget.