Tag Archives: images

Streetview, art and atemporality

I’m having a great morning for internet serendipity*, and I thought this particular synchronistic pairing might float well here at Futurismic. First of all, Joanne “Tomorrow Museum” McNeil has an essay connected to the New Museum “Free” show that riffs on Google Streetview, daguerreotypes and atemporality:

Someday we will press a button to rewind and fast-forward through the history of Google Street View images. We will watch entire neighborhoods created, remade, destroyed, or left unchanged except in the subtlest ways. And in the course of it, we will find flashes of human experiences like the man standing with the shoeshiner in the Boulevard du Temple daguerreotype.

[…]

The future was once represented in fantastically romantic ways: white spacesuits, buildings infinite in height, interplanetary travel, alien interactions, an abundance of wealth, and robot servitude. Now the future is represented as something more compressed and accessible. The future is on the Internet, in those screens we glance at intermittently at all waking hours of the day. Our expectation is the “IRL” world will look not much unlike what we see today. It is a future of gradual changes, incorporating familiar aspects with new but not too crazy updated technology. What is in abundance is not wealth but information.

The idea of the future is now a distorted mirror. It is the future of screens. Like the daguerreotype, screens contain memory and reflection, as well as an unknown difference only discerning eyes can see. We are overfutured. We’ve reached the point where the past, present, and future look no different from one another.

The Eternal Electronically-Mediated Now; space and time mashed up into one seamless manipulable digital dimension.

And now see here [via BoingBoing]: Streetview-fed-through-Mapcrunch also helps corrode established visual stereotypes about what different countries look like. A sly rejoinder to those who claim that the web necessarily reinforces clichés: not so! It merely feeds them to those who wish to be fed. Novelty, difference, contrast… it’s all there for the finding for them as wants to look. Don’t like the time or place where you find yourself? Just Google yourself up a new reality; it’s all just raw data until we story it.

[ * A few days ago, a friend on Twitter lamented having to choose between her love of beards and her love of cupcakes; and lo, the internet provideth. Does its pointlessness make it any less beautiful to the right person at the right moment? ]

Large Hadron Collider up and running; world not destroyed

But then you’d have to be a staggeringly ignorant fool to believe it would have been, anyway.

Yes, just as planned, the Large Hadron Collider at CERN was activated this morning… and while it hasn’t actually started doing collision tests yet, the boffins have been revving protons around the ring and checking everything works as it’s supposed to. And apparently, it’s going better than they had hoped. Here’s a computer representation of particles produced by protons smashing into collimators*:

Large Hadron Collider proton collision graphic

The Holy Grail of the Large Hadron Collider project is a subatomic particle known as the Higgs boson, the conjectural particle thought to explain how other particles acquire mass – the missing piece of the Standard Model that physicists have been chasing after for years.

However, not everyone thinks it will be that simple – Stephen Hawking himself has a $100 bet that the Higgs will not be found. Particle physics isn’t my field (arf!), but I’d be hesitant to bet against a guy with Hawking’s track record. I guess we’ll just have to wait and see. [image courtesy CERN via New Scientist article]

* – No, I’m not entirely sure what a collimator is, either. And I’ve probably mis-termed or misdescribed at least one thing in the above post, because that’s what happens when writers try to report on Big Physics; I try my best, but I’m not on a journalist’s salary here. I’m sure some of our friendly readers in the field will correct any errors with their usual alacrity. 🙂

Immersive 3D: ‘Please touch’ coming soon?

The ability to touch and manipulate 3D images is key to the future of interactive entertainment, not to mention every other episode of Star Trek: The Next Generation. Now two UC-Santa Barbara researchers say they’ve built a prototype room-sized 3D display using projectors, a user-tracking system, and two FogScreens, which produce 2D images using microscopic water droplets and ultrasound.

To achieve the 3D effect, the same image is rendered on two overlapping screens at different depths. Users’ head positions are tracked since the 2D images on each screen depend on the user’s viewing direction. The system computes the image alignment in real time, and users see a single, fused 3D image where the screens overlap.
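
To make the idea concrete, here’s a toy sketch (emphatically not the researchers’ actual code) of the two tricks described above: each rendered point is projected onto both screens along the line of sight from the tracked viewer’s head, and its brightness is split between the screens in proportion to its depth, which is the standard depth-fused 3D luminance trick. The plane depths, function names, and linear weighting rule are all illustrative assumptions.

```python
def project_to_plane(eye, point, plane_z):
    """Intersect the eye->point sight line with the screen plane z = plane_z."""
    ex, ey, ez = eye
    px, py, pz = point
    t = (plane_z - ez) / (pz - ez)  # parametric distance along the ray
    return (ex + t * (px - ex), ey + t * (py - ey))

def depth_fused_split(eye, point, intensity, z_front, z_back):
    """Return ((x, y), brightness) draw commands for the front and back screens.

    Depth-fused 3D exploits the fact that the perceived depth of the fused
    image tracks the luminance ratio: the closer the point is to a screen,
    the larger that screen's share of its brightness.
    """
    w_back = (point[2] - z_front) / (z_back - z_front)  # 0 at front, 1 at back
    w_back = min(max(w_back, 0.0), 1.0)
    front = (project_to_plane(eye, point, z_front), intensity * (1 - w_back))
    back = (project_to_plane(eye, point, z_back), intensity * w_back)
    return front, back

# A point midway between the screens gets half its brightness on each,
# drawn wherever the viewer's sight line crosses each screen.
front_cmd, back_cmd = depth_fused_split(
    eye=(0.0, 0.0, 0.0), point=(0.0, 0.0, 1.5),
    intensity=1.0, z_front=1.0, z_back=2.0)
```

This is also why head tracking matters: if `eye` is wrong, the two projected images no longer line up along the viewer’s actual sight line and the fusion breaks, which is one of the alignment problems the researchers mention.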

But a room-sized DFD [depth-fused 3D] still presents technical challenges for researchers. For instance, the fog from the two FogScreens can bleed through and disrupt each other; air conditioners and open doors can cause turbulence that degrades image quality; and alignment and tracking errors can arise because the viewer’s two eyes see the display from slightly different positions.

Possible future applications include virtual museums, surgery, and offices, not to mention virtual catch or Frisbee.

[Image: 3D teapot by Cha Lee, UCSB, IEEE]