2023 could be the year of mixed reality

2023/01/03

As we kick off a new year, the public is still largely confused by one of the biggest buzzwords of last year: The metaverse. After being promised a society-changing technology by breathless media influencers, what many people actually encountered was either (a) cartoonish virtual worlds filled with creepy avatars or (b) opportunistic platforms selling “virtual real estate” through questionable NFT schemes.

To say the industry overpromised and underwhelmed in 2022 would be an understatement.

Fortunately, the metaverse really does have the potential to be a society-changing technology. But to get there, we need to push past today’s cartoonish worlds and deploy immersive experiences that are deeply realistic and wildly artistic, and that focus far more on unleashing creativity and productivity than on minting NFT landlords. In addition, the industry needs to overcome one of the biggest misconceptions about the metaverse: The flawed notion that we will live our daily lives in virtual worlds that replace our physical surroundings. This is not how the metaverse will unfold.

Don’t get me wrong: there will be popular metaverse platforms that are fully simulated worlds, but these will be temporary “escapes” that users sink into for a few hours at a time, much as we watch movies or play video games today. The real metaverse, on the other hand, the one that will shape our days from the moment we wake to the moment we go to sleep, will not remove us from our physical surroundings.

Instead, the real metaverse will mostly be a mixed reality (MR) in which immersive virtual content is seamlessly combined with the physical world, expanding and embellishing our daily lives with the power and flexibility of digital content. 

A mixed reality arms race

I know there are some who will push back on this prediction, but 2023 will prove them wrong. That’s because a new wave of products is headed our way that will bring the magic of MR to mainstream markets.

The first step in this direction was the recent release of the Meta Quest Pro, which is hardware-ready for quality mixed reality: its color passthrough cameras capture the real world and combine it with spatially registered virtual content. It’s an impressive device, but so far there is little software that showcases its mixed reality capabilities in useful and compelling ways. That said, we can expect its real potential to be unleashed during 2023 as software rolls out.

Also, in 2023, HTC is scheduled to release a headset that looks to be even more powerful than the Meta Quest Pro for mixed-reality experiences. To be unveiled at CES in January, it reportedly has color passthrough cameras of such high fidelity you can look at a real-world phone in your hand and read text messages in mixed reality. Whether consumers prefer HTC’s new hardware or Meta’s, one thing is clear: An MR arms race is underway, and it’s about to get more crowded. 

That’s because Apple is expected to launch its own MR headset in 2023. Rumored to be a premium device that ships midyear, it will likely be the most powerful mixed reality product the world has seen. There are claims it will feature quality passthrough cameras along with LiDAR sensors for profiling distances in the real world. 

If the LiDAR rumor pans out, it could mean the Apple device is the first MR/augmented reality (AR) eyewear product to enable high-precision registration of virtual content to the real world in 3D. Accurate registration is critical for suspension of disbelief, especially when enabling users to interact manually with real and virtual objects. 
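To make that registration requirement concrete, here is a minimal sketch (in Python, with entirely hypothetical names, not any vendor’s actual API) of the bookkeeping involved: the system tracks the headset’s pose every frame, holds a fixed “anchor” pose estimated once from depth or LiDAR data, and re-derives the virtual object’s pose relative to the headset so the object appears locked to the real world.

```python
import numpy as np

# Illustrative only. Poses are 4x4 rigid transforms mapping points
# from one coordinate frame into another.

def object_pose_in_headset(T_world_headset, T_world_anchor, T_anchor_object):
    """Re-derive the virtual object's pose in the headset frame.

    T_world_headset: headset pose this frame, from inside-out tracking.
    T_world_anchor:  a fixed pose on a real surface, e.g. estimated once
                     from LiDAR/depth data and then held constant.
    T_anchor_object: where the virtual object sits relative to the anchor.
    """
    T_headset_world = np.linalg.inv(T_world_headset)
    return T_headset_world @ T_world_anchor @ T_anchor_object
```

This is also why sensor quality matters so much: any error in the anchor pose or in per-frame tracking shows up directly as virtual content drifting or “swimming” against the real scene, which is exactly what breaks suspension of disbelief.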

Why so much momentum towards mixed reality?

Simple. We humans do not like being cut off from our physical surroundings. Sure, you can give someone a short demo in virtual reality (VR), and they’ll love it. But if you have that same person spend an hour in fully immersive VR, they may start to feel uneasy.

Approach two hours, and for many people (myself included), it’s too much. This phenomenon first struck me back in 1991 when I was working as a VR researcher at Stanford and NASA, studying how to improve depth perception in early vision systems. Back then, the technology was crude and uncomfortable, with low-fidelity graphics and lag so bad it could make you feel sick. Because of this, many researchers believed that the barrier to extended use was the clunky design and poor fidelity. We just needed better hardware, and people wouldn’t feel uneasy. 

I didn’t quite agree. Certainly, better hardware would help, but I was pretty sure that something else was going on, at least for me personally — a tension in my brain between the virtual world I could see and the real world I could sense (and feel) around me. It was this conflict between two opposing mental models that made me feel uneasy and made the virtual world seem less real than it should.

To address this, what I really wanted to do was take the power of VR and combine it with my physical surroundings, creating a single immersive experience in which my visual, spatial and physical senses were all perfectly aligned. My suspicion was that the mental tension would go away if we could allow users to interact with the real and the virtual as if they inhabited the same perceptual reality.    

By a stroke of luck, I had the opportunity to pitch the U.S. Air Force and was funded to build a prototype mixed reality system at Wright-Patterson Air Force Base. It was called the Virtual Fixtures platform, and it supported not just sight and sound but touch and feel (3D haptics), adding virtual objects to the physical world that felt so authentic they could help users perform manual tasks with greater speed and dexterity. The hope was that one day this new technology could support a wide range of useful activities, from assisting surgeons during delicate procedures to helping technicians repair satellites in orbit through telerobotic control.

Two worlds snapping together

Of course, that early Air Force system didn’t support surgery or satellite repair. It was developed to test whether virtual objects could be added to real-world tasks and enhance human performance. To measure this, I used a simple task that involved moving metal pegs between metal holes on a large wooden pegboard. I then wrote software to create a variety of virtual fixtures that could help you perform the task. 

The fixtures ranged from virtual surfaces to virtual cones to simulated tracks you could slide the peg along, all spatially aligned with the real pegboard by early passthrough cameras. And it worked, enabling users to perform manual tasks with significantly greater speed and precision.
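For readers curious about the mechanics, here is a minimal sketch in Python of the constraint logic behind two such fixtures, a virtual surface and a virtual track. It is illustrative only, not the original Virtual Fixtures code, and every name in it is hypothetical: each function takes the tracked peg position and returns a corrected position that respects the fixture.

```python
import numpy as np

def surface_fixture(pos, plane_point, plane_normal):
    """Virtual surface: keep the peg from passing through a virtual plane.

    If the peg has crossed to the far side, project it back onto the
    plane, mimicking a rigid wall the peg can slide along.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(pos - plane_point, n)
    if depth < 0.0:                       # peg has penetrated the surface
        return pos - depth * n            # push it back onto the plane
    return pos

def track_fixture(pos, start, end):
    """Virtual track: constrain the peg to slide along a line segment."""
    d = end - start
    t = np.dot(pos - start, d) / np.dot(d, d)
    t = np.clip(t, 0.0, 1.0)              # stay within the segment
    return start + t * d

# Example: a tracked hand position gets "snapped" by each fixture in turn.
hand = np.array([0.12, -0.03, 0.45])
pos = surface_fixture(hand, plane_point=np.zeros(3),
                      plane_normal=np.array([0.0, 1.0, 0.0]))
pos = track_fixture(pos, start=np.array([0.0, 0.0, 0.0]),
                    end=np.array([0.3, 0.0, 0.3]))
```

A haptic version of the same idea would typically also feed a restoring force proportional to the penetration depth back to the user’s hand, which is what made the fixtures feel like physical guides rather than visual overlays.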

I give this background because of the impact it had on me. I can still remember the first time I moved a real peg towards a real hole and a virtual surface automatically turned on. Although simulated, it felt genuine, allowing me to slide along its contour. At that moment, the real world and the virtual world became one reality, a unified mixed reality in which the physical and digital combined into a single perceptual experience that satisfied all of my spatial senses: visual, auditory, proprioceptive, kinesthetic, and haptic.

Of course, both worlds had to be accurately aligned in 3D, but when that was achieved, you immediately stopped thinking about which part was physical and which was simulated. 

That was the first time I had experienced a true mixed reality. It may have been the first time anyone had. I say that because once you experience the real and virtual combined into a single unified experience, all your senses aligned, the two worlds actually snap together in your mind. It’s almost like one of those visual illusions where there’s a hidden face you can’t see, and then something clicks, and it appears. That’s how a true mixed reality experience should be: A seamless merger of the real and the virtual that is so natural and authentic that you immediately realize our technological future will not be real or virtual, it will be both. One world; one reality. 

As I look ahead, I’m impressed by how far the industry has come, particularly in the last few years. The image above (on the left) shows me in 1992 in an Air Force lab working on AR/MR technology. The image on the right shows me today, wearing a Meta Quest Pro headset. 

What is not apparent in the picture are the many large computers that ran my experiments thirty years ago, the cameras mounted on the ceiling, and the huge wire harness draped behind me, with cables routed to various machines. That’s what makes this new wave of modern headsets so impressive. Everything is self-contained: the computer, the cameras, the display, the trackers. And it’s all comfortable, lightweight, and battery-powered. It’s remarkable.

And it’s just getting started. The technology of mixed reality is poised to take off, and it’s not just the impressive new headsets from Meta, HTC, and (potentially) Apple that will propel this vision forward, but eyewear and software from Magic Leap, Snap, Microsoft, Google, Lenovo, Unreal, Unity and many other major players.

At the same time, more and more developers will push the limits of creativity and artistry, unlocking what’s possible when you mix the real and the virtual, from new types of board games (Tilt Five) and powerful medical applications (MediView XR) to remarkable outdoor experiences from Niantic Labs.

This is why I am confident that the metaverse, the true metaverse, will be an amalgamation of the real and the virtual, so seamlessly combined that users will cease to think about which elements are physical and which are digital. We will simply go about our daily lives and engage in a single reality. It’s been a long time in the making, but 2023 will be the year that this future really starts to take shape.  

(Source: “2023 could be the year of mixed reality,” VentureBeat)