Tag Archives: technology

Cortical coprocessors: an outboard OS for the brain

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
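The "common interfaces plus operating system" idea the article sketches can be made concrete in a few lines of code. Here's a minimal sketch in Python; every name in it (Recorder, Stimulator, Coprocessor and the dummy devices) is hypothetical, invented for illustration rather than drawn from any real neuroengineering platform:

```python
from abc import ABC, abstractmethod

class Recorder(ABC):
    """Any neural readout technology: electrode grid, optical imaging, etc."""
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest vector of recorded signals."""

class Stimulator(ABC):
    """Any neural perturbation technology."""
    @abstractmethod
    def write(self, command: list[float]) -> None:
        """Deliver a stimulation pattern."""

class Coprocessor:
    """The 'operating system' layer: wires components together so that
    algorithm authors never touch device-level details."""
    def __init__(self, recorder: Recorder, stimulator: Stimulator, algorithm):
        self.recorder = recorder
        self.stimulator = stimulator
        self.algorithm = algorithm

    def step(self) -> None:
        # Read signals, run the algorithm, deliver the resulting command.
        self.stimulator.write(self.algorithm(self.recorder.read()))

# Swapping in a new recording technology only means implementing Recorder:
class DummyElectrodeGrid(Recorder):
    def read(self) -> list[float]:
        return [1.0, 0.5, 0.25]

class DummyStimulator(Stimulator):
    def __init__(self):
        self.last_command = None
    def write(self, command: list[float]) -> None:
        self.last_command = command

cop = Coprocessor(DummyElectrodeGrid(), DummyStimulator(),
                  algorithm=lambda signals: [2 * s for s in signals])
cop.step()
print(cop.stimulator.last_command)
```

The point of the Coprocessor layer is exactly the one the article makes: a new readout device only has to implement the Recorder interface to be coupled to existing algorithms, with no heavy readaptation of the other components.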

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches; an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match; and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…

Kevin Kelly on technological literacy

Via BoingBoing, here’s a New York Times piece by Kevin Kelly, where he discusses what he learned about technology and education while homeschooling his son for a year:

… as technology floods the rest of our lives, one of the chief habits a student needs to acquire is technological literacy — and we made sure it was part of our curriculum. By technological literacy, I mean the latest in a series of proficiencies children should accumulate in school. Students begin with mastering the alphabet and numbers, then transition into critical thinking, logic and absorption of the scientific method. Technological literacy is something different: proficiency with the larger system of our invented world. It is close to an intuitive sense of how you add up, or parse, the manufactured realm. We don’t need expertise with every invention; that is not only impossible, it’s not very useful. Rather, we need to be literate in the complexities of technology in general, as if it were a second nature.

He goes on to add some more specific aphorism-style lessons – koans for a digital world, almost:

  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

Some very sf-nal thinking in there… no surprise coming from Kelly, but even so, it reiterates something of Walter Russell Mead’s praise of the genre as the source of a useful way of looking at the world.

It’s also pleasing to see Kelly’s focus on trying to instil an appreciation of (and desire for) learning in his son. I’m far from the first person to observe that the UK education system has long favoured the retention of facts over independent analytical and critical thinking as educational goals, and I’ve seen plenty of reports that suggest the US system has a similar problem. Kelly’s aphorisms underline the point: if you make kids memorise facts, their education is obsolete as soon as it’s finished. Learning how to learn is the most important lesson of them all, and the one that seems hardest for schools and universities to deliver.

Street-level sousveillance tech

Internet serendipity strikes again… a Twitter friend mentioned their discovery of the word ‘sousveillance’ the other day, and I remarked that I’d not mentioned it here at Futurismic for some time, despite it being one of my multitudinous minor obsessions. And lo, a few days later, two state-of-the-street-art sousveillance items crop up in my daily feed trawl*!

First up is the Looxcie, a little head-mounted camera that records continuously and, at the press of a button, saves the last thirty seconds of footage [via Shira Lipkin’s Google Buzz feed]:

Loop the Looxcie over your ear and go about your day. If you see anything you think may be worth saving, hit the button and the previous 30 seconds are saved, and even uploaded to your selected social networking site to be instantly shared, or you can watch and edit the video first if you prefer. And it stores up to five hours of video!

The Looxcie is a pretty cute little gizmo (and seemingly straight out of an early cyberpunk novel), but there’s an obvious flaw that renders it less useful in certain, ah, high-tension scenarios, let’s say. But other, more robust options are available: BoingBoing points to a column at Reason that covers smartphone apps that are ideal for videoing law enforcement and/or “freelance security” types who might subsequently arrest your device and make the footage disappear while it’s in their care:

Qik and UStream, two services available for both the iPhone and Android phones, allow instant online video streaming and archiving. Once you stop recording, the video is instantly saved online. Both services also allow you to send out a mass email or notice to your Twitter followers when you have posted a new video from your phone. Not only will your video of police misconduct be preserved, but so will the video of the police officer illegally confiscating your phone (assuming you continue recording until that point).

[ Just-in-time activism! ]

Neither Qik nor UStream market themselves for this purpose, and it probably would not make good business sense for them to do so, given the risk of angering law enforcement agencies and attracting attention from regulators. But it’s hard to overstate the power of streaming and off-site archiving. Prior to this technology, prosecutors and the courts nearly always deferred to the police narrative; now that narrative has to be consistent with independently recorded evidence. And as examples of police reports contradicted by video become increasingly common, a couple of things are likely to happen: Prosecutors and courts will be less inclined to uncritically accept police testimony, even in cases where there is no video, and bad cops will be deterred by the knowledge that their misconduct is apt to be recorded.

And to those who say that we shouldn’t feel the need to video the police, I respond with the tired and logically flawed aphorism that’s supposed to make us all feel better about ubiquitous closed-circuit surveillance: if they’ve done nothing wrong, then surely they have nothing to fear, right?

[ * Coincidence? Synchronicity? The Baader-Meinhof phenomenon? Your guess is as good as mine… ]

Techlepathy: decoding words from brain signals

Another piece slots in to the mind-machine interface puzzle: via George Dvorsky comes news that University of Utah neuroboffins have decoded individual words from embedded electrode scans of brain activity.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy – temporary partial skull removal – so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals – such as those generated when the man said the words “yes” and “no” – they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

As always with this sort of story, though, it’s early days yet:

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time – better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person’s thoughts into words spoken by a computer.

“This is proof of concept,” Greger says, “We’ve proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful.”
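The gap between the pairwise and ten-way numbers is what you’d expect from any classifier: telling two alternatives apart is much easier than picking one word out of ten. A toy sketch of that effect in Python, using a nearest-template classifier on entirely synthetic “signals” (nothing here resembles the team’s actual analysis; the templates, noise level and trial counts are all made up for illustration):

```python
import random

random.seed(0)
N_WORDS, DIM, TRIALS, NOISE = 10, 20, 40, 3.0

# Each "word" gets a fixed template pattern; every trial is that
# template plus Gaussian noise -- a crude stand-in for recorded signals.
templates = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_WORDS)]

def record_trial(word):
    return [t + random.gauss(0, NOISE) for t in templates[word]]

def classify(signal, candidates):
    # Pick the candidate word whose template is nearest (squared distance).
    return min(candidates,
               key=lambda w: sum((s - t) ** 2
                                 for s, t in zip(signal, templates[w])))

def accuracy(candidates):
    hits = total = 0
    for word in candidates:
        for _ in range(TRIALS):
            hits += classify(record_trial(word), candidates) == word
            total += 1
    return hits / total

pairwise = accuracy([0, 1])               # distinguish two words
ten_way = accuracy(list(range(N_WORDS)))  # pick one word out of all ten
print(f"pairwise: {pairwise:.2f}, ten-way: {ten_way:.2f} (chance: 0.10)")
```

With these made-up numbers the pairwise score comes out well above the ten-way score, which in turn beats the 10% chance level: the same qualitative pattern as the reported results.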

So you’ll have to wait a little longer for that comfy little skull-cap that’ll read your as-yet-unwritten novel straight out of your head (worse luck). But proof-of-concept’s better than nothing, especially for a technology that – even comparatively recently – was considered to be pure science fiction.

Remember to spray on your deodorant first, yeah?

A brief “hey, look, tech!” post, simply because it seems to be everywhere at the moment, and I’d totally jump off a cliff if all my friends were doing it too*: spray-on clothing!

The spray consists of short fibres that are mixed into a solvent, allowing it to be sprayed from a can or high-pressure spray gun. The fibres are mixed with polymers that bind them together to form a fabric. The texture of the fabric can be varied by using wool, linen or acrylic fibres.

The fabric, which dries when it meets the skin, is very cold when it is sprayed on, a limitation that may frustrate hopes for spray-on trousers and other garments.

“I really wanted to make a futuristic, seamless, quick and comfortable material,” said Torres. “In my quest to produce this kind of fabric, I ended up returning to the principles of the earliest textiles such as felt, which were also produced by taking fibres and finding a way of binding them together without having to weave or stitch them.”

Apparently it takes fifteen minutes to spray a T-shirt onto a model, which (for now at least) pretty much ruins the only practical selling point of spray-on clothing, namely convenience. But Torres has other (more sensible but less headline-worthy) applications in mind, e.g. medical ones. The cynic in me wonders if he didn’t think of the medical apps first and come up with the clothing thing as an effective marketing gambit… whether he did or not, it seems to have worked.

And your sf-nal pat-ourselves-on-the-back-for-prescience moment: Technovelgy points out that good ol’ Stanisław Lem wrote about spray-on clothes back in 1961. I dare say it’s been mentioned in fiction a few times since.

[ * That particular parental rejoinder has always bothered me. I remember responding to it once with something along the lines of “if I saw a trampoline at the bottom, then yes”. I think I may have been sent to my room afterwards. ]