
Why do devices still have power cables?

I mean, it’s not like we don’t have loads of other fancy and elegant options for transferring power and data to our gadgets and machines, right? But as this piece at Wired points out, power cables are cheap, versatile and simple to produce compared with all the more advanced solutions – and that’s why we still have ’em.

It’s a good concise example of something that searching out stories for Futurismic has taught me over the years: that innovative new technology may not actually be as revolutionary as it initially appears, and that the “gadget of the future” may remain a marginal gimmick long after its fanfare’d launch at some trade show or another. Pragmatism and profit margins are very important factors in forming the shape of the future.

Of course, this is one of the arguments that favour the medium-term survival of the dead-tree book, even as the ereader manufacturers shape up for a price war. For the (possibly mythical) average consumer who reads a couple of books a year and no more, buying them as paperbacks will make a lot more sense… and there are a lot more of those average consumers than there are rabid readers, I’m guessing.

Gestural interface: like a Wacom tablet, just without the plastic bits

Via SlashDot, here’s a project from Potsdam University in which the clever boffins have built a user interface that requires only hand gestures as input:

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all “feedback” takes place in the user’s imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space. The interaction is tracked by a tiny camera device clipped to the user’s clothing and pointed at the user’s hands.
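To make the interaction model concrete, here’s a minimal sketch of the coordinate mapping described above, assuming the camera tracker already delivers 2D image positions for the crook of the L and the two fingertips (the function and its inputs are my own invention, not the Potsdam team’s code):

```python
import numpy as np

def imaginary_space_coords(corner, x_tip, y_tip, pointer):
    """Map a dominant-hand pointing position into the frame
    defined by the non-dominant hand's L-shaped gesture.

    All arguments are 2D image-space positions (hypothetical
    tracker output): `corner` is the crook of the L (the origin),
    `x_tip` and `y_tip` are the fingertips defining the two axes,
    and `pointer` is the dominant hand's fingertip.
    Returns (u, v), where (1, 0) lands on x_tip and (0, 1) on y_tip.
    """
    corner = np.asarray(corner, dtype=float)
    x_axis = np.asarray(x_tip, dtype=float) - corner
    y_axis = np.asarray(y_tip, dtype=float) - corner
    # Solve corner + u*x_axis + v*y_axis = pointer for (u, v);
    # the two axes needn't be exactly perpendicular.
    basis = np.column_stack([x_axis, y_axis])
    u, v = np.linalg.solve(basis, np.asarray(pointer, dtype=float) - corner)
    return u, v

# A point halfway along one axis and a quarter of the way up the other:
print(imaginary_space_coords((0, 0), (200, 0), (0, 150), (100, 37.5)))  # ~(0.5, 0.25)
```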

A bit rough and ready, sure, but it’s early days. Bolt this onto AR (they both need similar face-mounted hardware, so convergence is pretty inevitable), and stuff gets weird real quick. Cities full of people wandering around, seemingly talking to themselves and waving their hands in gnomic gestures… it’d look like a city of mad magicians.

Or, y’know, like Burning Man or Glastonbury at 5am on a Saturday. 🙂

The multiphrenic world: Stowe Boyd strikes back on “supertasking”

… which is really a neologism for its own sake (a favourite gambit of Boyd’s, as far as I can tell). But let’s not let that distract us from his radical (and lengthy) counterblast to a New York Times piece about “gadget addiction”, which chimes with Nick Carr’s Eeyore-ish handwringing over attention spans, as mentioned t’other day:

The fear mongers will tell us that the web, our wired devices, and remaining connected are bad for us. It will break down the nuclear family, lead us away from the church, and channel our motivations in strange and unsavory ways. They will say it’s like drugs, gambling, and overeating, that it’s destructive and immoral.

But the reality is that we are undergoing a huge societal change, one that is as fundamental as the printing press or harnessing fire. Yes, human cognition will change, just as becoming literate changed us. Yes, our sense of self and our relationships to others will change, just as it did in the Renaissance. Because we are moving into a multiphrenic world — where the self is becoming a network ‘of multiple socially constructed roles shaping and adapting to diverse contexts’ — it is no surprise that we are adapting by becoming multitaskers.

The presence of supertaskers does not mean that some are inherently capable of multitasking and others are not. Like all human cognition, this is going to be a bell-curve of capability.

As always, Boyd is bullish about the upsides; personally, I think there’s a balance to be found between the two viewpoints here, but – doubtless due to my own citizenship of Multiphrenia – I’m bucking the neophobics and leaning a long way toward the positives. And that’s speaking as someone who’s well aware that he’s not a great multitasker…

But while we’re talking about the adaptivity of the human mind, MindHacks would like to point out the hollowness of one of the more popular buzzwords of the subject, namely neuroplasticity [via Technoccult, who point out that Nick Carr uses the term a fair bit]:

It’s currently popular to solemnly declare that a particular experience must be taken seriously because it ‘rewires the brain’ despite the fact that everything we experience ‘rewires the brain’.

It’s like a reporter from a crime scene saying there was ‘movement’ during the incident. We have learnt nothing we didn’t already know.

Neuroplasticity is common in popular culture at this point in time because mentioning the brain makes a claim about human nature seem more scientific, even if it is irrelevant (a tendency called ‘neuroessentialism’).

Clearly this is rubbish and every time you hear anyone, scientist or journalist, refer to neuroplasticity, ask yourself what specifically they are talking about. If they don’t specify or can’t tell you, they are blowing hot air. In fact, if we banned the word, we would be no worse off.

That’s followed by a list of the phenomena that neuroplasticity might properly be referring to, most of which are changes in the physical structure of the brain rather than cognitive changes in the mind itself. Worth taking a look at.

Maybe it doesn’t matter that the internet is “making us stupid”

High-profile internet naysayer and technology curmudgeon Nick Carr is cropping up all over the place; these things happen when one has a new book in the offing, y’know*. He’s the guy who claims that Google is making us stupid, that links embedded in HTML sap our ability to read and understand written content (cognitive penalties – a penalty that even the British can do properly, AMIRITE?), and much much more.

The conclusions of Carr’s new book, The Shallows – that, in essence, we’re acquiring a sort of attention deficit problem from being constantly immersed in a sea of bite-sized and interconnected info – have been given a few polite kickings, such as this one from Jonah Lehrer at the New York Times. I’ve not read The Shallows yet, though I plan to; nonetheless, from the quotes and reviews I’ve seen so far, it sounds to me like Carr is mapping the age-related degradation of his own mental faculties onto the world as a whole, and looking for something to blame.

I should add at this point that, although I disagree with a great number of Carr’s ideas, he’s a lucid thinker, and well worth reading. As Bruce Sterling points out, grumpy gadfly pundits like Carr are useful and necessary for a healthy scene, because the urge to prove them wrong drives further innovation, thinking, research and development. He’s at least as important and worth reading as the big-name webvangelists… who all naturally zapped back at Carr’s delinkification post with righteous wrath and snark. The joy of being a mere mortal is, surely, to watch from a safe point of vantage while the gods do battle… 😉

But back to the original point: there’s always a trade-off when we humans acquire new technologies or skills, and what’s missing from commentators decrying these apparent losses is any suggestion that we might be gaining something else – maybe something better – as part of the deal; technological symbiosis is not a zero-sum game, in other words. Peripherally illustrating the point, George Dvorsky points to some research that suggests that too good a memory is actually an evolutionary dead end, at least for foraging mammals:

These guys have created one of the first computer models to take into account a creature’s ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.

That makes sense. If you rely on the same set of fruit trees for sustenance, then you are in trouble if those trees die or are stripped by rivals. So the constant search for new sources of food pays off, even if it consumes large amounts of resources. “The model forager typically spends half of its traveling time revisiting previous places in an orderly way, an activity which is reminiscent of the travel routes used by real animals,” say Boyer and Walsh.

They conclude that memory is useful because it allows foragers to find food without the effort of searching. “But excessive memory use prevents the forager from updating its knowledge in rapidly changing environments,” they say.
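The trade-off is easy to play with in code. Here’s a toy forager of my own devising (a sketch in the spirit of the idea, not Boyer and Walsh’s actual model): with probability p_revisit it returns to a remembered food site, otherwise it tries a random one, while depleted sites slowly regrow.

```python
import random

def forage(p_revisit, steps=10_000, n_sites=50, regrow=0.01, seed=0):
    """Toy forager mixing memory-driven revisits with random exploration.

    p_revisit: probability of returning to a remembered food site
    instead of trying a random location. Depleted sites replenish
    with probability `regrow` per step, so the environment changes
    under the forager's feet. All parameters are illustrative.
    """
    rng = random.Random(seed)
    food = [True] * n_sites   # which sites currently hold food
    memory = set()            # sites where food was found before
    eaten = 0
    for _ in range(steps):
        for s in range(n_sites):               # depleted sites regrow
            if not food[s] and rng.random() < regrow:
                food[s] = True
        if memory and rng.random() < p_revisit:
            site = rng.choice(sorted(memory))  # exploit memory
        else:
            site = rng.randrange(n_sites)      # explore at random
        if food[site]:
            eaten += 1
            food[site] = False
            memory.add(site)
    return eaten

for p in (1.0, 0.5, 0.0):
    print(f"p_revisit={p}: ate {forage(p)}")
```

Where the sweet spot falls depends on how fast the environment changes under you – which is exactly the point about excessive memory in rapidly changing environments.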

This reminds me of the central idea behind Peter Watts’ Blindsight – the implication that intelligence itself, which we tend to think of as the inevitable high pinnacle of evolutionary success, is actually a hideously inefficient means to genetic survival, and that as such, we’re something of an evolutionary dead end ourselves. Which reminds me in turn of me mentioning evolutionary “arms races” the other day; perhaps, instead of being in an arms race against our own cultural and technological output as a species, we’re entering a sort of counterbalancing symbiosis with it. Should we start considering technology as a part of ourselves rather than a separate thing? Are we not merely a species of cyborgs, but a cyborg species?

[ * The irony here being that almost all the discussion and promotion of Carr’s work that does him any good occurs… guess where? Hint: not in brick’n’mortar bookstores. ]

Live action replays and analysis move from the sports field to the battlefield

The Harris Corporation supplies instant replay systems to big-brand sports teams, but they may just have cracked a whole new market… one with a budget that (inexplicably) never seems to shrink. The Pentagon has decided that the ability to collect, replay and analyse battlefield video feeds will make it easier to score touchdowns… er, instil shock and awe… that is, liberate oil (ahem, people) from oppressive regimes, and they’re working with Harris Corp toward that end:

The system, called Full-Motion Video Asset Management Engine (FAME), uses metadata tags to encode important details — time, date, camera location — into each video frame. In a football game, those tags would help broadcasters pick the best clip to re-air and explain a play. In a war-zone, they’d help analysts watch video in a richer, easier-to-grasp context. And additional tags could link a video clip to photographs, cellphone calls, databases or documents.
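The scheme is easy to picture in code. Here’s a hypothetical sketch of what a per-frame tag might look like – field names and structure are my own guesses at the general idea, not Harris Corp’s actual FAME format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FrameTag:
    """Illustrative per-frame metadata record in the spirit of FAME.

    Fields mirror the details mentioned above (time, date, camera
    location) plus links out to other intelligence assets.
    """
    timestamp: datetime
    camera_lat: float
    camera_lon: float
    frame_number: int
    linked_assets: list[str] = field(default_factory=list)  # photos, calls, documents

def tag_frame(frame_number: int, lat: float, lon: float, links=None) -> FrameTag:
    """Stamp a frame with when and where it was shot, plus any linked assets."""
    return FrameTag(
        timestamp=datetime.now(timezone.utc),
        camera_lat=lat,
        camera_lon=lon,
        frame_number=frame_number,
        linked_assets=list(links or []),
    )

# An analyst could filter frames by time and place, then follow
# linked_assets out to, say, a phone intercept:
print(tag_frame(1024, 33.31, 44.37, links=["intercept-0017.wav"]))
```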

Makes a certain amount of sense, but I suspect there’ll be a point where a greater volume of incoming data will become counterproductive, and your multiscreen generals will be so caught up looking at the trees that they forget there’s a forest… which would be business as usual, I suppose, just with more cool toys for the folk behind the front line.

And hey, here’s a potential monetization stream: edit together and sanitise the daily rushes, offer ’em as live streams to warporn fans… or sell the material and outsource the marketing to someone with more experience, like ESPN. Man, this thing’s really got legs – anyone wanna form a collective to buy up Harris Corp shares?