All posts by JustinP

Imagining the Adaptive City

In his writings on ‘cyborg urbanisation’, Prof. Matthew Gandy (UCL) has likened the relationship between the city and its inhabitants to the cyborg – an archetype familiar from science fiction. For Gandy, the cyborg can help us understand the various networks that enable bodies to function in the modern city.

So, when Dan Hill (City of Sound) posted a vision of something he describes as the Adaptive City, I found myself thinking of cyborgs … and a whole different set of neural pathways started firing:

Facilitated by networks of sensors, the data emerging from the new [urban] nervous system appears limitless: near-imperceptible variations in air quality and water quality, innumerable patterns in public and private traffic, results of restaurant inspections, voting patterns in public referenda, triggers of motion sensors, the output of heating ventilation and air conditioning systems, patterns of water usage, levels of waste recycled, genres of books returned at local libraries, location of bicycles in the city’s bike-sharing network, fluctuations in retail stock controls systems, engine data from cars and aeroplanes, collective listening habits of music fans, presence of mobile phones in vehicles enabling floating car data, digital photos and videos locked to spatial co-ordinates, live feeds from CCTV cameras, quantities of solar power generated and used by networks of lamp-posts, structural engineering data from the building information models of newly constructed architecture, complex groupings of friends perceptible in social software multiplied by location-based services, and so on. Myriad flows of data move in and around the built fabric. As many or most objects in the city become potential nodes in a wider network … this shimmering informational field provides a view of the entire city.

But while science fictional tropes see the cyborg as defined either in terms of internal implants or some kind of powered exoskeleton (both dependent on the processes and contours of the individual body), Hill’s ‘Adaptive City’ externalises the cybernetic, projecting it outwards … into the environment; the physical landscape of which the organic body is but one among many. Perhaps the ‘Adaptive City’ is a decentralised cyborg … using feedback loops to harness the power of the collective, and watching its effects as …

[t]he invisible becomes visible … [and] the impact of people on their urban environment can be understood in real-time. Citizens turn off taps earlier, watching their water use patterns improve immediately. Buildings can share resources across differing peaks in their energy and resource loading. Road systems can funnel traffic via speed limits and traffic signals in order to route around congestion. Citizens take public transport rather than private where possible, as the real-time road pricing makes the true cost of private car usage quite evident. The presence of mates in a bar nearby alerts others to their proximity, irrespective of traditional spatial boundaries. Citizens can not only explore proposed designs for their environment, but now have a shared platform for proposing their own. They can plug in their own data sources, effectively hacking the model by augmenting or processing the feeds they’re concerned with.
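The loop at the heart of that vision – measure, expose, adapt – is simple enough to sketch in a few lines. Here's a toy model (the scenario and numbers are mine, not Hill's): citizens see the city-wide average of their water usage in real time, and above-average users trim back a little each round, closing the feedback loop.

```python
def simulate_feedback(usage, rounds=20, sensitivity=0.1):
    """Toy negative-feedback loop: every citizen sees the city-wide
    average consumption in real time; anyone above it trims back a
    little. Purely illustrative -- not a model of any real system."""
    usage = list(usage)
    for _ in range(rounds):
        avg = sum(usage) / len(usage)
        # Above-average users cut towards the average; others hold steady.
        usage = [u - sensitivity * (u - avg) if u > avg else u for u in usage]
    return usage

start = [50.0, 120.0, 200.0, 80.0]
end = simulate_feedback(start)
# Visible feedback narrows the gap between heavy and light users.
print(round(max(end) - min(end), 1), "vs initial spread of", max(start) - min(start))
```

Even this crude sketch shows the point Hill is making: once the invisible is made visible, behaviour converges without anyone being centrally instructed.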

(‘The Adaptive City’ has a companion piece, ‘The street as platform’ – also at City of Sound … image by taiyofj)

Where’s my jetpack? FUSIONMAN has it.

“Where’s my jetpack?”

Three words to strike fear in the hearts of futurists and SFnal types everywhere. Fed into Google, the phrase returns 59,600 hits, including – aptly enough – this xkcd comic. A paleo-futuristic emblem of faded dreams and disappointment.

Now, finally, an answer – !!FUSIONMAN!! has it.

Last week, The Guardian reported on how FUSIONMAN (also known as Swiss aviator-inventor Yves Rossy) had been preparing for an attempted crossing of “the English Channel propelled by a jet-powered wing” with a number of test flights;

“Yves … jumped from a plane above the Swiss town of Bex and reached speeds of up to 180mph during his 12 minutes of jet-powered flight before landing at an airfield in Villeneuve. Rossy first unveiled his jet-powered wing in May with an 8-minute aerobatic display over the Alps.

“Everything went well, it was awesome,” said Rossy after the flight. “It’s my longest flight with this wing. If there are no technical problems, it’s OK for the English Channel. I can’t wait for this next challenge!”

His attempt had originally been thwarted by a collection of technical failures, including a leaking gas tank and two aborted flights during which the engines stopped within seconds of jumping from his support plane. He blamed these failures – which forced him to deploy his parachutes early – on “electronic interference problems”.

The successful flight involved him jumping out of the aircraft at 2,300m, flying horizontally under jet power from a height of 1,700m and then switching off the jet engines before deploying two parachutes at 1,500m and 1,200m.

The wing does not include moving parts such as flaps to control direction, but Rossy is able to steer by shifting his weight and moving his head.

When he reached the ground he still had 2 litres of fuel left in his wing, suggesting that he would have some margin for error during the cross-channel flight.”

The cross-channel attempt is scheduled for the 24th September (weather-permitting), and will be streamed live on the National Geographic Channel.

(image courtesy of Wikimedia Commons)

It’s alive! – BT looking to artificial life

Q: What do the Nuer, social insects, and BT have in common?

A: The first two are organised along acephalous (‘headless’) principles, while researchers working for the third have begun to hail the advantages of following suit.

At this week’s Artificial Life XI conference in Winchester, BT researchers explained how ‘[i]nsights from artificial life could soon be helping run [the firm’s] networks’

“If we look at the biological world, there is a huge amount of change, complexity, and adaptation,” said former biologist Paul Marrow who works in BT’s Broadband Applications Research Centre.

“These artificial life ideas are a very useful source of inspiration as the products and services we provide become increasingly complex and demanding in terms of resources.”

In stark contrast to the hierarchical structures of traditional network architecture,

BT hopes to tap the secrets of another of life’s defining features called self-organisation

“With self-organisation, you have very simple rules governing individual units that together perform a bigger task – a typical example is ant colonies,” said Fabrice Saffre, principal researcher at BT’s Pervasive ICT Research Centre.

The simplicity of the rules makes for less computation, and is therefore easier on the network. “It’s a very economical solution – especially for problems that are very dynamic. Anything you can do with self-organisation is basically a ‘free lunch’,” said Dr Saffre.
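To make that concrete, here's a toy pheromone-trail model in the ant-colony spirit – a sketch of the self-organising principle Saffre describes, not BT's actual routing code (the route lengths and parameters are invented for illustration). Each ‘ant’ follows one simple local rule – prefer routes with stronger pheromone – while shorter routes get reinforced faster, so the colony converges on the best path with no central controller.

```python
import random

def ant_colony(lengths, ants=100, rounds=50, evaporation=0.5, seed=1):
    """Minimal stigmergy sketch: ants pick a route with probability
    proportional to its pheromone level; deposits are inversely
    proportional to route length, so shorter routes win out.
    A toy illustration of the principle, not BT's algorithm."""
    rng = random.Random(seed)
    pheromone = [1.0] * len(lengths)
    for _ in range(rounds):
        deposits = [0.0] * len(lengths)
        for _ in range(ants):
            # One simple local rule per ant: weighted random route choice.
            i = rng.choices(range(len(lengths)), weights=pheromone)[0]
            deposits[i] += 1.0 / lengths[i]
        # Evaporation forgets stale trails; fresh deposits reinforce good ones.
        pheromone = [evaporation * p + d for p, d in zip(pheromone, deposits)]
    return pheromone

levels = ant_colony([2.0, 5.0])
print(levels[0] > levels[1])  # the shorter route accumulates more pheromone
```

No individual ant computes anything expensive – the ‘free lunch’ Saffre mentions is that the optimisation emerges from the interaction of many cheap local decisions.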

Mmm … rhizomatic! 🙂

[story via the BBC / image by kevindooley, via flickr / for more on the Nuer, see the work of anthropologist E. E. Evans-Pritchard]

Multi-touch goes “global”

Here at Futurismic, we’ve talked about multi-touch interfaces before. However, today, Microsoft researchers have revealed a development of the technology which is, well, something of a conceptual leap.

While flat-panel displays might be the current interface zeitgeist, Todd Bishop (amongst others) believes this development means “Microsoft thinks the shape of things to come might be a sphere”

Microsoft researchers are taking the wraps off a prototype that uses an internal projection and vision system to bring a spherical computer display to life. People can touch the surface with multiple fingers and hands to manipulate photos, play games, spin a virtual globe, or watch 360-degree videos …

Sphere is a cousin of the Microsoft Surface tabletop computer, already being used in retail and hospitality settings. The underlying hardware for Sphere is sold commercially by Global Imagination of Los Gatos, Calif., but Microsoft researchers made numerous enhancements and developed specialized software.

In a broader sense, the project reflects Microsoft’s belief that many more surfaces will become computer displays, with embedded microprocessors, in the years to come.

To wrap your head around some of the potential applications of this multi-touch globe, check out the video!

For me, it’s with the omnidirectional video / surveillance applications that this technology really begins to show its value …

[story via Todd Bishop’s Microsoft Blog. Image by likeyesterday, via Flickr]

23andWe – genomics goes social

Drawing on his experiences with 23andMe‘s personal genetics service, Kevin Kelly has made a couple of interesting observations. Focusing on what happens when the logic of crowdsourcing is applied to biotechnology, he comments on

how fast and how eager users have been to share their genetic data. We’ve been conditioned by anxious media reports to believe that people want to hoard their very personal genetic profile, in fear of what would happen if governments, corporations, insurance companies and the neighbors were to see it. But in fact like a lot of other things that have made it online, genetic information only increases in value when shared.

Experts thought only a fringe minority would dare share their genes, but swapping genetic info will most likely be the norm for a generation that shares everything else. Sharing your genetic info with family members, relatives, and even apparent strangers (who must be related somehow) is exciting, and certainly educational.

[Story via The Quantified Self. Image by CrashIntoTheSun]