COVER STORY: Regulating the metaverse

Three big risks: tech giants, crooks and each other

Crypto volatility, a lack of regulation, cybercrime and an almost inconceivable amount of data providing intimate details on users' biology and behaviours are among the key risks associated with the metaverse.

And that's before the marketers get involved. Brands have long been in the business of manipulating the human brain, hoping the visual and auditory signals they present to consumers will trigger that small hit of dopamine. But the tools they have are still largely anchored in the world of 19th-century US department store magnate John Wanamaker.

With a century of Wanamaker shuffled home to the returns department, the tools available to the spruiking class are about to be supercharged.

Metaverse marketers will ingest vast amounts of data from the firehose of real-time biometric feedback, enhancing their ability not only to read the brain, but also to manipulate it. And as social media, and ad tech more generally, has demonstrated for two decades, marketers never met a tool they couldn't abuse.

People suck

Yet the law today barely recognises what is coming, outside of some very specific industries such as biomedical devices. That will need to change if we are to avoid all the worst kinds of behaviour we already see daily on social media, according to the growing band of practitioners now considering the implications of the next-generation internet. Meta's great gift to human ethics in the digital age is the lesson that if you give people monster-making machines, they will make monsters.

On a global scale, recent reports about the appalling behaviour coming out of Kiwi Farms presage a very ugly future. Likewise, at a local and more intimate level, so does the behaviour of students from Sydney's elite Knox Grammar on Discord. Both utilised old tech; weaponised by augmented and virtual reality, each would have been much worse. And Kiwi Farms is already very, very bad.

While enormous corporate bets are being taken today on metaverse technologies (McKinsey and Company research revealed that more than US$120 billion was poured into the metaverse in 2022), the metaverse and Web3 technologies present fundamental challenges and ethical considerations.

According to Louis Rosenberg, a pioneer and world authority in augmented and virtual reality, chief scientist and CEO of Unanimous AI, and CTO of the recently formed Responsible Metaverse Alliance, metaverses today fall into two categories. The first is a place you visit, like Decentraland or Meta’s Horizon Worlds. The second is the world you live in today, enhanced through augmented and virtual reality. Both types of metaverse come with risks.

A familiar problem, supercharged

He described the first key risk as a content moderation problem, one that exacerbates the challenges that already exist in Web2.

“Content moderation problems exist right now in social media,” said Rosenberg.

“It is a problem and it gets even harder in the metaverse because it becomes real-time. The types of harassment, the types of hate speech and all those things in the metaverse can be real-time.”

While Rosenberg highlighted regulation as necessary to combat user-on-user harassment, he acknowledged that the problem is a hard one in and of itself, especially when it comes to moderating human behaviour.

Send in the crooks

Bad actors are another key risk to companies and individuals operating in the metaverse, especially identity thieves.

Rosenberg told Digital Nation Australia, “There already is identity theft. There already are cons and scams online.

“In the metaverse, the idea of identity theft goes to direct impersonation. If I can steal your avatar or steal what you look like (because avatars will become photorealistic), if I can steal your look and your voice, I can impersonate you. And if I can impersonate you, I can be a co-worker who gets you to talk about secrets. I could be a family member who gets you to talk about your bank accounts. I could be an adult pretending to be a child. I mean, all of these things become very dangerous.”

If you can't trust the Facebooks of the world...

The amount of data that metaverse platforms will be able to collect will grow exponentially, according to Rosenberg, presenting huge challenges around privacy and the ethical use of data.

“The platforms need to know where you are, what you're doing, what your posture is, what your gait is doing, what your facial expressions are doing, vocal inflections. They'll need to know who you're with and what direction you're looking, how long your gaze is lingering,” he said.

While there are risks in both virtual and augmented metaverses, Rosenberg said that the total fabrication of experiences in the virtual version presents significant challenges.

“Those immersive experiences are basically manipulating what you see and what you hear and what you feel. And, there's a lot of opportunity or possibilities for abuse of that.”

Dr Catriona Wallace is one of the founders of the Responsible Metaverse Alliance (RMA), a group working internationally with politicians and governments to address the harms of the metaverse and advocate for the responsible development of metaverses and virtual worlds. The RMA advocates for metaverse platforms that are ethical and transparent; consistent with user expectations, user wellbeing and safety, organisational values, and diversity and inclusion; and reflective of societal laws and norms.

She told Digital Nation Australia that it is incumbent on the industry, on regulators and on the community to address issues such as the design, deployment, culture and safety of the metaverse now, before it's too late.

“How do we avoid all the challenges we've had with Web 2.0 and social media, given that metaverse is coming very quickly? We've probably got four or five years before it really takes significant hold,” said Wallace.

“Let's, this time, set it up as a responsible, ethical, diverse and inclusive environment, which it’s certainly not looking like right now, instead of trying to retrofit practices and policies and regulation after the horse has completely bolted.”

One of the challenges in regulating the metaverse is determining jurisdiction, as virtual worlds are not defined by state or national boundaries, she said.

“The return to philosophy is also something that's up now. It's like, what is a virtual world and who governs it and how do you govern it when it doesn't actually resemble anything that we're used to in the physical or the digital world?”

While many of these big questions are as yet unanswered, Wallace insists that lessons need to be learnt from the past to ensure international collaborations are taking place now.

Neurotechnology

Digital Nation Australia spoke to lawyer Dr Allan McCay, Deputy Director of The Sydney Institute of Criminology and an Academic Fellow at the University of Sydney's Law School, about the risks that neurotechnology presents in the context of the metaverse.

Neurotechnology is an emerging industry at the intersection of technology and neuroscience, in which technical components such as computers are integrated with the nervous system.

While neurotechnology can provide greater insight into brain and nervous system function, McCay said its risks largely relate to the type of neurotech that can not only read from the brain but also write to it.

“The privacy issue is the manipulation issue. Once you know something about someone you know about how to pull their levers and possibly control them in a commercial sense or in a political sense,” said McCay.

“One of the things that needs to be considered is whether consumer protection law is fit for purpose, given the likely greater capacity of marketers to use their knowledge about people to influence them.”

McCay believes there is a low level of understanding in the legal profession of the implications of neurotechnology, especially consumer neurotech.

“There's some companies that provide non-invasive brain reading devices that don't go through the TGA or FDA approval. So there might well be something coming on the horizon for that. There might be a reason to think that there might be some change to say the privacy laws, because brain data is especially sensitive.”
