Immediate applications of Australia's new cyber-abuse takedown laws, which came into force on January 23, remain unclear, with parties on all sides saying it is too early to have access to meaningful data.
The "world-first scheme" gives Australia's eSafety commissioner Julie Inman Grant authority to have the 'worst of the worst' content removed from the internet, "no matter where it is hosted".
The commissioner also now has the power "to obtain identity information behind anonymous online accounts used to bully, abuse or exchange illegal content," the government said last month.
Previous legislation, such as Australia's encryption-busting laws, was passed in part due to a claimed immediate need for the powers by authorities.
If there was content, or if there were accounts, that the government wanted to immediately target with its new cyber-abuse takedown powers, these remain unclear and may not be known for some time.
An eSafety spokesperson told iTnews that "it’s too early to glean any meaningful data, particularly with regards to the new adult cyber-abuse scheme."
Head of investigations Toby Dagg told a parliamentary inquiry that the commissioner's office "began taking reports as soon as [the laws] went live."
Dagg said the eSafety commissioner had seen "big increases" generally in the numbers of reports it is receiving, compared to this time last year.
The Act requires the eSafety commissioner to publish information about removal requests and other actions it takes; it is understood these will be published in its annual report.
Service providers that receive takedown notices under the scheme are not barred from revealing they were received.
It is likely this data will be folded into providers' existing transparency reports, though it is unclear whether the data will be granular enough to identify individual uses of the cyber-abuse takedown laws.
Telcos and social media providers spoken to by iTnews were broadly supportive of the online safety laws, and said they would treat takedown requests as they did any other assistance notices from governments.
None were yet able or willing to say if they had received takedown requests in the first week of the law being in effect.
“We have rules in place to help keep our community safe and will remove any content that breaches these rules," Meta Australia, New Zealand and Pacific Islands director of public policy Mia Garlick said in an emailed statement that was indicative of others received by iTnews.
"In addition to our own policies, we’ve supported the introduction of online safety regulation, including Australia’s Online Safety Act.
"We have a long track record of a productive working relationship with the eSafety Commissioner, and we’ll continue to work constructively with her Office on important safety matters.”
Twitter Australia did not reply to requests for comment on its views on the Online Safety Act or how it will implement the law, but the company has recently expressed concern over a significant global increase in government demands to remove content.
Electronic Frontiers Australia chair Justin Warren characterised the reporting requirements in the Act as "mostly activity reporting, not effectiveness reporting requirements."
He added that “determining what information is healthy, democratic freedom of speech and what is harmful content is simply too much power for one individual to have.”