Will the Metaverse be an ‘Omnidirectional Panopticon’?

A conversation with Mark Pesce about the future of virtual reality

Micah Sifry
7 min read · Dec 16, 2021
Screengrab from Mark Zuckerberg’s demo of Meta’s metaverse

Recently, I had the chance to catch up with my friend Mark Pesce, a futurist and writer living in Australia. Mark co-invented VRML (the Virtual Reality Modeling Language) back in 1994 and has been involved in every wave of tech innovation (and hype) ever since. Our friendship started in 2008, when he gave a talk at Personal Democracy Forum on “hyperpolitics,” presciently warning the audience that as the Internet gave everyone a voice, it would lead not to more democracy but to something more like a Hobbesian “war of all against all.” At the time, just about everyone in the tech and politics world was overly optimistic about the future, so I’ve been paying attention to Mark ever since. Last year he published Augmented Reality: Unboxing Tech’s Next Big Thing (which I reviewed on my Substack newsletter here), and with all the new attention going to the so-called “metaverse,” it seemed like a good time to check in. (Note to readers: Since this was a long conversation, I’m posting this interview in two parts. In part one, we focus on the metaverse as a privacy dystopia. In part two, we look at the potential for a metaverse that could serve humanity.)

Q: A year ago your book Augmented Reality came out. I thought it was a prescient and important synthesis, looking around the corner at what big tech was investing in as it hoped to conquer AR and VR [augmented and virtual reality]. It also included a somewhat hopeful vision of how we might harness the inherent value of some of these technologies. How is the future looking to you now, compared to, you know, a year ago?

A: One of the things that I pointed at but didn’t really fully understand was exactly the kind of advantage an extremely large technology company has when it comes to realizing these technologies. These technologies are fantastically hard. Once the sums are all done, it’s probably half a trillion dollars. It won’t all come from Meta and Apple and Google and Microsoft, but between the four of them you could see about half a trillion dollars in total research investment. Even the best-funded startup, Magic Leap, which got, I think, around $6 billion in investment, was unable to even really scratch the surface on VR/AR. That’s the scale of capital that’s been thrown at a very hard problem by companies that want to be the ones who own the solutions.

Q: What I remember most clearly from your book is how you described the core problem of making augmented reality work, which is essentially the continuous real-time mapping of the physical environment around a user, so that every user becomes a surveillance node, feeding the machine that is in turn feeding them what they consume. That seems like a really hard problem; let’s set it aside for a second. Is the metaverse that Mark Zuckerberg and his colleagues have now given us a little demo of the same thing? It looks more like Second Life, a souped-up video game that you walk around in. Augmented reality has these huge construction problems and costs, whereas virtual reality doesn’t seem to be as complex. Am I wrong?

A: You know, after you write a book, you come up with ideas that you wish you’d put in the book, and then what, you write another book? So, with augmented reality, I’ve got it down to a two-word phrase: I call it the omnidirectional panopticon. Augmented reality devices are not just looking out at the world to situate you in it; they’re also looking at you, and specifically at your gaze and the things that you’re looking at. When Facebook introduced Project Aria, the glasses designed to lead the way into the kinds of data they need to collect for their system, one of the features highlighted was inward-facing gaze-detection cameras.

Now, here’s the thing. When we’re talking about a metaverse or augmented reality, the edges are not clean. This is not a binary; it is not one or the other. There’s quite a bit of bleed between the two, technically and also experientially. For the metaverse [to work], in terms of you being in it, not just being fundamentally disembodied in that world but actually having presence and feeling the presence of others (which we’ll come to in a second), you will need to be scanned just as continuously as you would be in augmented reality, because the system has to track the position of your body so it can reflect your body language in the metaverse. All of that will be needed in order to provide a verisimilitude of your own experience.

I really want to point this out, because it’s something that probably everyone has danced over: there is a fundamentally hard problem that the field has struggled to solve for 60 years. You can’t get a second person in with you. The metaverse will all be pretty and shiny, and then as soon as you try to get a second person in, it doesn’t work, because our bodies encode so much rich experience. Zoom fatigue, which we’re all familiar with, is actually caused by the fact that we’re not getting as much of that rich experience as we would in the real world, and our minds are working overtime trying to fill in the gap. If you’re getting, say, a hundredth of that in Zoom, you get none of it inside the metaverse. The idea of co-presence, which is kind of fundamental to everyone’s explanation of what the metaverse is, is that it’s not just you. It’s a really hard problem, and the solutions to that problem are also hard and expensive and involve training surveillance on everyone, everywhere, all of the time, at least minimally. So we don’t actually escape the surveillance problem; we will probably only amplify it in the metaverse.

Q: This sounds like Shoshana Zuboff’s worst nightmare.

A: I’ve been in her head at night, whispering gently and she’s been having some bad nightmares. Yes.

Q: So, do we think we have a chance of heading off this nightmare? Do the technical and capital challenges themselves make it a bit hard to realize? My joke about aspirations for the metaverse, whether you’re wearing the Oculus or whatever, is that the main thing that will hinder its popularity, particularly with older people, is nausea: you just won’t really like it. I can see younger people, whose brains are still a bit more pliable, growing up liking it. So is it inevitable that we’re going to get it? Do we have to do more to prevent it? Where do you come down on that?

A: I’m glad you brought up the nausea issue, because this comes to the other side of the coin, which is the fact that we are in bodies. There is, I think, an almost utopian transhuman dream of a disembodied space where you don’t have to worry about the body. I did VR as a young man, and I’ve done VR now as a rather older man, and the kinds of things that I could tolerate 30 years ago (which was really pretty much all you could work yourself up to back then, even on a million-dollar computer) would be considered kind of bad VR right now. In fact, there’s a lot of really badly designed VR now, and I will get motion sick in ways that I never even thought about 30 years ago, because my body has changed.

And everyone’s body is quite different day to day. You may have had a bad night’s sleep, so something that you tolerate on a good day you won’t tolerate on a bad day. That’s going to be true particularly for someone who’s older, but it’s also true for someone who’s younger. There’s anecdotal evidence indicating that women are also profoundly more sensitive to motion sickness, probably having to do with a sensitivity to poisoning around pregnancy. That means that virtual worlds in a lot of senses have to be designed to accommodate and suit the person who is the most sensitive, not the least, and there’s not a lot of that design thinking going on right now. So again, you make a very realistic point.

I want to circle back to my original point about capital. The capital requirements for the metaverse are a lot lower than they are for augmented reality, because the technologies are more or less already in place. Yes, headsets and all of that stuff will get better. But you know, you’re talking to someone who invented a particular facet of this technology 30 years ago. What Meta is doing is trying to create an identification between themselves and the metaverse. They’re just saying, we are the metaverse, and don’t look at anything else. And the thing is, because they have 2.7 billion monthly users, they may very well be the metaverse. So Meta is a tactic: in augmented reality, the high barriers to entry let you control the field by spending capital, but the metaverse’s barriers are lower, so instead they simply try to suck away the possibility for other players to exist with a little bit of Jedi mind magic. You know, ‘Don’t look over there. This is the meta you’re looking for.’

In the second half of our conversation, we look at how augmented reality could be harnessed to serve global needs. Stay tuned!

Written by Micah Sifry

Co-founder Civic Hall. Publisher of The Connector newsletter (theconnector.substack.com)
