The metaverse as Mark Zuckerberg imagines it is a seamless digital world where you can walk your avatar from the conference room to the virtual Walmart without leaving your couch. This world, Zuckerberg proposes, will allow us to do everything we love to do IRL — just, you know, virtually.
So does that mean the metaverse will make it easier to carry out acts of terrorism? A recent piece published by The Conversation treats this idea as an inevitability. Written by three professors from the University of Nebraska Omaha (Joel S. Elson, Austin C. Doctor, and Sam Hunter), the article argues that the metaverse will prove itself a hotbed of extremist and terrorist activity. “A resurrected bin Laden could meet with would-be followers in a virtual rose garden or lecture hall,” the professors write (without even an ounce of irony).
It’s a pretty alarmist conclusion, and one that rests on a fundamental misunderstanding of how the “metaverse” will change the world.
It’s nothing new — The argument that the metaverse will open up new organizing opportunities for extremists relies on a simple idea: that the proliferation of virtual worlds will provide revolutionary new means of communicating online. While this is a nice idea, early entries into the metaverse have done little to show we’re actually moving in that direction. Adding a virtual component to your meetings does not inherently improve them; if anything, it adds another layer of complication and distraction.
The idea that the metaverse will revolutionize terrorist organizing presupposes that the internet is not already an incredibly powerful mobilization tool. One need look no further than Facebook’s role in the January 6 Capitol riots to understand this. The Taliban, meanwhile, recruits and organizes via social media on a daily basis.
The metaverse doesn’t telegraph the coming of virtual extremism because that reality is already here.
But yeah, moderating it will suck — The piece’s central argument is misguided. That said, the sentiment its authors are getting at — that the metaverse’s complexity will make it exceedingly difficult to moderate — is valid.
Take Twitter, for example. Moderators struggled to police even text-based tweets; before it had figured out a workable strategy for keeping the platform safe, the company added a suite of audio features, opening up entirely new avenues for users to spread misinformation and hate.
If even Twitter and Facebook are this difficult to control, imagine the scale required to moderate the enormous complexity of a full virtual world. Automated moderation is nowhere near good enough for that, and a human moderation workforce could not handle the workload without severe consequences for its mental health.
These researchers are right to worry about the inherently unruly nature of a true metaverse. The good news is that we’re nowhere near ready for that kind of digital world to launch. When it does arrive, there will absolutely be people who attempt to use it for harm. But perhaps we’d be better served by devoting our attention to internet extremism as it exists today before worrying too much about a digital pipe dream Mark Zuckerberg hopes to turn into his personal cash cow.