We like to think of Building Maker as a cross between Google Maps and a gigantic bin of building blocks. Basically, you pick a building and construct a model of it using aerial photos and simple 3D shapes – both of which we provide. When you’re done, we take a look at your model. If it looks right, and if a better model doesn’t already exist, we add it to the 3D Buildings layer in Google Earth. You can make a whole building in a few minutes.
It’s entirely browser-based, too, so no compatibility problems. Of course, you don’t get the freedom of Second Life, where you can build any damned building you feel like… but then learning how to build well in SL can take weeks of practice, whereas Google have aimed to make it as easy as possible. Which is a sensible move if you want people to do work for free, I guess… [image by Visentico/Sento]
Each Lumino block has a pattern on its base that identifies its 3D shape, and the Surface table can read them using its four internal cameras that peer up at the acrylic top. That means the computer can build up a 3D picture of what lies on its surface.
The Luminos can also make themselves known to the Surface when they’re stacked up, however. They are packed with fibre-optic threads that ferry the pattern of any block placed on top of another down to the screen. So, although a second storey Lumino isn’t in direct contact with the touch screen, the computer knows it’s there.
As blocks stack up, the risk increases that the patterns from different layers of Luminos will become too jumbled for the screen to interpret. But the fibre-optic bundles are angled so that the pattern visible to the screen at the bottom of a stack includes parts of the patterns of all its blocks. That can allow the screen to recognise stacks up to 10 blocks high.
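To make the trick concrete, here’s a minimal sketch of the recognition logic as I understand it from the description above. Everything here is hypothetical: the region codes, the `PATTERN_TABLE`, and the idea that the camera image has already been segmented into one code per block. The real Surface table works on raw camera images, which this skips entirely.

```python
# Hypothetical sketch, NOT Microsoft's actual protocol: each Lumino's base
# pattern is treated as a code, and the angled fibre bundles route a distinct
# slice of every block's pattern down to the table's cameras, so the bottom
# of a stack carries a readable code for each layer.

MAX_STACK = 10  # the article says stacks up to 10 blocks can be recognised

# Hypothetical lookup: pattern code -> block shape
PATTERN_TABLE = {
    "1010": "cube",
    "1100": "arch",
    "0110": "roof",
}

def decode_stack(composite):
    """Decode the composite pattern seen at the base of a stack.

    `composite` is a list of pattern codes, bottom block first; in the real
    hardware these would be segmented out of the camera image.
    """
    if len(composite) > MAX_STACK:
        raise ValueError("stack too tall to resolve")
    return [PATTERN_TABLE[code] for code in composite]

# A three-block stack: cube on the table, arch above it, roof on top.
print(decode_stack(["1010", "1100", "0110"]))
# -> ['cube', 'arch', 'roof']
```

The interesting design point is in the hardware, not the software: by angling the fibre bundles so each layer’s contribution lands in a different part of the bottom face, the stack effectively serialises itself for the camera.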
I really want some hardware like that for use as a combined coffee-table and workbench… though I think I’ll wait until someone other than Microsoft is making them.
Apparently BSkyB plans to launch a 3D TV network in Europe next year, for which you’ll need a special display and glasses (and, no doubt, a hefty subscriber fee). That’s gonna claw viewers back from the temptations of the intertubes, for sure!
The ability to touch and manipulate 3D images is key to the future of interactive entertainment, not to mention every other episode of Star Trek: The Next Generation. Now two UC-Santa Barbara researchers say they’ve built a prototype room-sized 3D display using projectors, a user-tracking system, and two FogScreens, which produce 2D images using microscopic water droplets and ultrasound.
To achieve the 3D effect, the same image is rendered on two overlapping screens at different depths. Users’ head positions are tracked since the 2D images on each screen depend on the user’s viewing direction. The system computes the image alignment in real time, and users see a single, fused 3D image where the screens overlap.
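The core of that real-time alignment is simple ray geometry: a virtual scene point must be drawn wherever the viewer’s line of sight crosses each screen, so the two renderings fuse into one. Here’s a hedged sketch of that single step; the function name, coordinates, and screen depths are all illustrative, not taken from the researchers’ system.

```python
# Illustrative sketch of depth-fused 3D alignment: intersect the ray from the
# tracked eye position through a virtual scene point with each screen plane,
# so the same point is drawn in line-of-sight agreement on both FogScreens.

def project_to_screen(eye, point, screen_z):
    """Intersect the ray from `eye` through `point` with the plane z = screen_z."""
    ex, ey, ez = eye
    px, py, pz = point
    t = (screen_z - ez) / (pz - ez)  # parameter along the ray
    return (ex + t * (px - ex), ey + t * (py - ey))

eye = (0.0, 0.0, 0.0)        # tracked viewer (head) position
point = (1.0, 0.5, 4.0)      # a virtual point floating between the screens
front = project_to_screen(eye, point, 3.0)  # near FogScreen at z = 3 m
rear = project_to_screen(eye, point, 5.0)   # far FogScreen at z = 5 m
print(front, rear)
# -> (0.75, 0.375) (1.25, 0.625)
```

Because the intersection depends on `eye`, the drawn positions shift whenever the viewer moves, which is exactly why head tracking is baked into the system rather than bolted on.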
But a room-sized DFD [depth-fused 3D] still presents technical challenges for researchers. For instance, the fog from two FogScreens can bleed through and disrupt each other, air conditioners and open doors can cause turbulence that degrades image quality, and alignment and tracking errors can creep in because a viewer’s two eyes see the screens from slightly different positions.
Possible future applications include virtual museums, surgery, and offices, not to mention virtual catch or Frisbee.