Presented by Avalon Holographics
The pace of AI continues to be staggering. From simple pattern recognition systems to large language models (LLMs), and now physical AI, the power of these systems continues to improve our lives. But humans always need to be in the loop.
We need to see the data, interact with it, and identify the simulation-to-reality gaps; we need to help these systems help us. Spatial computing has traditionally been in the realm of human understanding; we now share this space with AI. Understanding the different ways humans should interact with 3D data helps us choose the medium where we can get the best from AI.
1. The 2D screen: the precision desktop
The 2D screen has been the reliable workhorse since spatial computing began, and it remains the primary interface, with most professional work still happening there. For a developer training a model or a single user doing 3D modeling, the 2D screen serves the individual contributor well. However, a 2D screen forces a "3D-to-2D" mental translation: the user must hold the model in their mind while rotating, zooming, and interacting with this specific corner of the spatial world. Maintaining that mental model carries a cognitive load that forces the brain to work overtime.
2. VR: the immersive workspace
VR offers the first jump beyond 2D. By completely immersing yourself in the 3D world, you gain an accessible and effective way to work in three dimensions. When training a robotic system, standing in the place of the autonomous system and demonstrating the human movements it needs to learn, VR is the place to be. But you are inherently by yourself. Even with avatars, you've lost touch with reality; only the digital world exists.
3. AR: the expert in your ear
AR was supposed to be a potential fix, but in reality it goes down a different road. AR is the angel on your shoulder, or more specifically, in your ears and eyes, giving you helpful guidance. Turn left here. Rotate that bolt. What's the history of this castle? AR is the king of instructional guidance, always there to give you helpful tips. But it is inherently just for you; only you can see what's in your AR headset.
4. Holograms: the collaborative space
Holograms, specifically light field holograms, are the pinnacle of the visualization stack. They do what nothing else can: recreate the digital object as if it were real, visible in the real world to the naked eye, with no glasses required. Everyone in the room sees the same 3D visualization of a digital twin simultaneously. The value of holographic systems becomes compelling where shared spatial understanding materially changes outcomes.
The power of the shared physical context
The true value of the holographic display in the era of physical AI lies in its ability to solve the referential ambiguity problem. In a holographic environment, the light rays are physically reconstructed, and multiple people view the same reconstruction from their individual perspectives. If I point at a joint on a humanoid robot, a cancerous tumour, or cover versus concealment, everyone knows exactly what I am pointing at. This shared experience, where I can see you and your full reaction and you can see mine, creates a level of trust that no other medium can match.
Further, there's no onboarding friction with a hologram. No special equipment is required; simply walk into the light field and you see it. There's none of the discomfort or isolation that comes with wearable devices. Everyone sees the hologram together, immediately.
When to choose holograms
There are many situations where humans need to act with spatial intelligence. For individual work, mediums like screens and AR/VR are great solutions and should be the first choice. But when the stakes are high, when discussion, collaboration, and trust between people are paramount, nothing approaches the value that holographics bring to the table.
The first use cases are those in which the consequences of a bad decision are life-threatening, typically in medical and defence applications. The cognitive load and side effects of individual solutions are too much to accept in these situations. And physical AI is quickly encroaching on them: autonomous systems are driving our cars and running our factories, moving into high-consequence areas that already demand human-in-the-loop decisions. Holograms allow teams to use their own spatial reasoning to identify simulation-to-reality gaps that may be invisible in other mediums.
The future of the visualization stack
Looking forward, we are nearing the end of the 2D screen. As holographic light field technology matures, we will see a fundamental and inevitable shift towards holography, and the 2D monitor will eventually be relegated to the same status as the typewriter. AR and VR will likely settle into niche roles: AR for field service utility and VR for deep solitary immersion. Holographic light field displays will become our primary interface to the digital world, because seeing 3D naturally is what humans have evolved to do.
Wally Haas is president of Avalon Holographics.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
