The ability to touch and manipulate 3D images is key to the future of interactive entertainment, not to mention every other episode of Star Trek: The Next Generation. Now two UC-Santa Barbara researchers say they’ve built a prototype room-sized 3D display using projectors, a user-tracking system, and two FogScreens, which produce 2D images using microscopic water droplets and ultrasound.
To achieve the 3D effect, the same image is rendered on two overlapping screens at different depths. Because the correct 2D image on each screen depends on the viewer's position, the system tracks users' heads and computes the image alignment in real time; where the screens overlap, users see a single, fused 3D image.
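The article doesn't give the researchers' actual rendering code, but the depth-fused idea can be sketched roughly: project each scene point onto both screen planes along the tracked eye's line of sight, then split the point's luminance between the two screens according to its depth (the linear luminance-ratio model commonly used in depth-fused 3D). The function and parameter names below are illustrative assumptions, not the UCSB implementation.

```python
# Hedged sketch of depth-fused 3-D (DFD) rendering for two parallel screens.
# The linear luminance-weighting model and all names here are assumptions
# for illustration, not the actual UCSB system.

def dfd_render_point(eye, point, z_front, z_back):
    """Project a 3-D point onto two parallel screen planes (at depths
    z_front and z_back) along the eye's line of sight, and split its
    luminance so the fused percept appears at the point's depth.

    Returns ((x, y) on front screen, (x, y) on back screen,
             front luminance weight, back luminance weight).
    """
    ex, ey, ez = eye
    px, py, pz = point
    if not (z_front < z_back and z_front <= pz <= z_back):
        raise ValueError("point depth must lie between the two screens")

    def project(z_screen):
        # Intersect the eye->point ray with the plane z = z_screen.
        t = (z_screen - ez) / (pz - ez)
        return (ex + t * (px - ex), ey + t * (py - ey))

    # Linear depth weighting: a point nearer the front plane puts more
    # luminance on the front screen, shifting the fused depth forward.
    w_back = (pz - z_front) / (z_back - z_front)
    w_front = 1.0 - w_back
    return project(z_front), project(z_back), w_front, w_back
```

For an eye at the origin, screens at depths 1.0 and 2.0, and a point midway between them, the luminance splits 50/50 and the two projections differ because each screen intersects the viewing ray at a different depth, which is why head tracking is needed to keep them aligned.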
But a room-sized DFD [depth-fused 3D] display still presents technical challenges for researchers. For instance, the fog curtains from the two FogScreens can bleed into each other, air conditioners and open doors can cause turbulence that degrades image quality, and alignment and tracking errors can occur because each of a viewer's two eyes sees the screens from a slightly different position.
Possible future applications include virtual museums, surgery, and offices, not to mention virtual catch or Frisbee.
[Image: 3D teapot by Cha Lee, UCSB, IEEE]