How unsafe is the metaverse? And what can be done to improve it?
- A disturbing new report suggests the metaverse created by Meta is a “cesspool of toxic content”
- Another report predicts safety and data privacy issues in metaverse platforms will worsen as user engagement grows
- ABI Research analyst doesn’t expect metaverse developments or usage to slow down, despite safety concerns
- GlobalData and ABI Research suggest concrete ways to increase protective features of the new virtual platforms
The excitement and hype surrounding metaverse platforms are being blunted by a growing number of reports of disturbing and non-consensual experiences. These accounts raise doubts over the safety capabilities of such platforms and have prompted industry analysts to call for enhanced safety efforts, so that the digital services sector can avoid the harmful content failures of the social media sector.
Last year was largely dedicated to further developing the concept of a virtual environment where avatars representing real-life people can play sports, attend concerts and meet with colleagues. Many companies from different corners of the tech industry placed big bets on their own developments in an effort to be among the metaverse pioneers. The most notable move in this space was the notorious metamorphosis of Facebook, which rebranded to Meta Platforms Inc. in October 2021. (See The matter with Meta.)
While this move raised many eyebrows and even prompted ridicule, the industry paid attention and many companies quickly jumped on the bandwagon, if they weren’t already on a similar path. Examples include: Qualcomm’s $100 million investment into metaverse R&D and the partnership it struck with Microsoft for augmented reality (AR) acceleration; a $1 billion funding round raised by games developer Epic Games to fuel its metaverse ambitions; NVIDIA’s expansion of its Omniverse, which it dubs “the world’s first simulation and collaboration platform that is delivering the foundation of the metaverse”; Apple’s reported work on a mixed reality (MR) headset product and associated software; and new funding of $150 million secured by technology company Improbable for the launch of a network that promises to reach a new level of interoperability, interactivity and scalability for the metaverse. (See UK’s Improbable banks $150 million to help make the metaverse dream a reality.)
Some major mobile operators have also made strides towards building the new virtual space.
So far, so good – users are promised entertaining, educational and work-related benefits in a world where virtually everything imaginable is, in fact, feasible.
At the same time, companies that have moved quickly to shift investment into developing the products and services necessary for the metaverse to function are well aware of the enormous revenue potential this new endeavour brings. According to separate findings from research company ABI Research, the immersive collaboration market will reach $22 billion by 2030, while spending on extended reality (XR)-related hardware, software and content is expected to hit almost $5 billion in 2024.
But what happens when one finds out the metaverse is not a land of virtual smiling faces, sweet-smelling roses and cloudless skies?
A report issued in late May by non-profit advocacy organisation SumOfUs claimed one of its researchers encountered sexual harassment an hour after entering Meta Platforms-developed virtual reality (VR) social networking platform Horizon Worlds. According to the report, the act was “non-consensual” and made the researcher’s experience “‘disorienting’ and confusing”. The organisation also claimed that after complaining about the situation, the company “not only failed to take action against the aggressor, but actually blamed the beta tester for inadequate use of personal safety features.”
Describing the virtual space created by Meta as “another cesspool of toxic content”, it urged regulators to hold Meta accountable for “the harms found on its platforms, weaken its grip over technology industries and rein in its ruthless data harvesting practices.”
More findings about disturbing behaviour in the metaverse can be found in SumOfUs’s report here (we caution that it contains distressing content). Previous reports regarding unsolicited and disturbing experiences on the platform have been published by MIT Technology Review, The New York Times and blog platform Medium.
Lack of safety in the metaverse set to grow
What raises even more concerns is a prognosis that data privacy and safety issues in the metaverse will worsen as more people use such platforms. That’s the view of UK-based data analytics and consulting company GlobalData, which issued a report on the topic earlier this week.
“Current safety and data privacy issues seen in current social media platforms will likely be extended or even exacerbated in the metaverse. Not only will the platform follow a similar ad-based model, but it will be more immersive, integrated even further into most aspects of users’ lives and will be harder to regulate,” explained Rupantar Guha, Principal Analyst at the research body’s Thematic Intelligence team.
Such issues are heightened by factors including “the sheer magnitude of personal data that can be extracted,” the involvement of multiple developers in such platforms, the “disassociation” of metaverse platforms from national authorities, and the unknown reach and potential of the extended reality world.
Ways to protect the metaverse and the wider digital environment
Guha called for moderating behaviour to be prioritised by metaverse developers as “a foundational aspect” of their work, and issued a warning that companies will experience “a detrimental impact” on their metaverse ambitions and reputation if they fail to filter toxicity out of the virtual environment.
ABI Research’s principal analyst Michael Inouye, whose research focuses mainly on the metaverse, told TelecomTV that some of the methods to protect metaverse users will involve a compromise on privacy because, he noted, “identifying and stopping the bulk of these issues is extremely difficult without using some form of AI to monitor these virtual spaces.” To be able to address most safety-related cases, Inouye said, would require “active monitoring by the services, which to some (if not many) is a violation of privacy. In some regards users will have to accept some compromises in privacy for a safer virtual environment. When or if we’ll get to that point, I can’t say for sure, but the current trends don’t look positive.”
If such a level of monitoring is unacceptable, another option, “a somewhat lower goal,” would be enhanced community regulation. “A user’s digital identity, however, will need to be matchable to their real-world self. In this case, if users report bad behaviour, these transgressors could be permanently banned – the challenge here is the permanent part as we’ve seen countless times when users are banned and create new accounts to join again,” the analyst explained.
According to Inouye, regulations or legislation could bring “better parity” to online activity with real-world actions – for example, “if there were stronger legal repercussions for harassment and bad behaviour online (along with enforcement), this would at least reduce the bad actions from users who for some reason believe their actions are ‘harmless’. This could also encourage better reporting – some users may be reluctant to report troubling behaviour because they believe nothing will be done or they feel others will judge them as over-reacting (which clearly isn’t the case).”
Inouye noted, however, that the issues users and developers experience in the metaverse are no different to problems across any other digital environment. The reason lies in the potential for anonymity or varying levels of “identity obfuscation – some users will view this anonymity as an opportunity to behave and act in ways they wouldn’t in real life,” which is something that can be seen in other areas, such as multiplayer gaming, social networks and online commenting.
What makes the metaverse different, though, is the deeper level of immersion that can make negative experiences “far more impactful and troubling,” noted Inouye.
“Ultimately, there needs to be accountability for one’s actions. Until we view these horrendous online actions/behaviours on the same plane as if they were made in real life, it will be difficult to create (and enforce) stronger regulations and impose stronger consequences for bad behaviour,” the ABI Research analyst added.
So far, Inouye stated, there is no expectation that companies will stop developing metaverse-related offerings over safety concerns. And despite the potential for negative and distressing events, users are also likely to stay on such platforms because of the opportunity to engage with their friends, family, and favourite celebrities and brands.
“If the overall user experience, devices, etc. for reasons unknown do not advance at a healthy rate or fail to meet expectations, then this could curtail adoption. Similarly, if the business models fail to develop this would also reduce investments – for example, if brands view these virtual spaces as too toxic or risky to be associated with, then this would certainly impact development of the metaverse. So far, though, we haven’t seen either of these cases, but time will tell,” Inouye concluded.
In the end, digital services are being created to entertain people, enhance their lives and make them feel better, and to do things more efficiently. If a metaverse or any other form of virtual experience causes anxiety and other negative feelings, then something will have gone deeply wrong.
- Yanitsa Boyadzhieva, Deputy Editor, TelecomTV