Teens don’t read and can hardly write, right?

Wrong… unless those 40,000 words they text out over a month don’t count [via LifeHacker; image by nate steiner].

Sure, a lot of those texts will be rote replies and simple questions, but the point stands: teenagers communicate heavily using a form of the written word. When I was a teenager in the nineties I used to write a lot of letters, but I’d be very surprised if I approached a tenth of that word count – and 40,000 words is (or at least used to be) the lower limit for a novel. Text has a lower bandwidth than face-to-face speech, but SMS messages have the advantage of asynchronicity over a regular phone call; and as gnomic as the compressed words and pseudo-1337 of text messages may be to us older folk, they have the same capacity for hidden meaning and word-play as “proper” writing.

Where am I going with this? I’m not sure, to be honest… but I’m increasingly convinced that blaming technologised teen lifestyles for their perceived lack of interest in reading is a fiction born of contempt and generational differences. The “cellphone novel” is a popular format in Japan – has anyone really tried pushing it here in the West? Or will we need to wait for that generation to grow its own stars and mavens organically, without the help of old-media gatekeepers?

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

[image: AGI milestone estimates]

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.
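I don’t have the survey’s raw numbers, but the shape they describe is easy to picture. Here’s a minimal matplotlib doodle – every estimate below is invented, purely to illustrate the kind of two-humped distribution the article is talking about:

```python
# Illustrative only: these estimates are invented, not the survey's real data.
import matplotlib.pyplot as plt

# Hypothetical "arrival year" answers for one milestone: a near-term cluster,
# a far-future cluster, and the gap in the middle the article remarks on.
estimates = [2025, 2028, 2030, 2032, 2035, 2040,   # "within our lifetimes"
             2150, 2200, 2300, 2500]               # "effectively never"

plt.hist(estimates, bins=range(2020, 2560, 40), edgecolor="black")
plt.xlabel("Estimated arrival year")
plt.ylabel("Number of respondents")
plt.title("Two-humped spread of AGI milestone estimates (invented data)")
plt.show()
```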

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?

Modular armoured wall system

File under “inventions that I’m rather surprised to find didn’t exist already”: modular military encampment armour [via BLDGBLOG; image borrowed from linked article].

… an armored wall system known as McCurdy’s Armor could have Marines rapidly erecting 6.5-foot-tall mortar-, RPG- and bullet-proof fortresses in less than an hour, saving the days it can take to fortify an area by conventional means and making forward-operating units more nimble.

Named for Ryan S. McCurdy—a Marine killed in Iraq in 2006 while hauling a wounded comrade to safety—the system is designed to offer troops increased protection and mobility when setting up outposts in hostile areas. The walls can be ferried into place in panels that are easily stackable in a truck or trailer. Once in position, four Marines can assemble a single panel in less than ten minutes without any special tools or additional equipment. The panels then snap together like bomb-proofed Legos secured with steel pins to form a blast- and bullet-proof shelter.

Neat idea. Also an easily copied technology: given its low-tech nature, budget clones (weaker armour, cheaper materials) will pop up anywhere and everywhere there’s a use for them.

It’s also easily re-used by one’s opponent, and likely to dot post-conflict landscapes for years to come, repurposed as housing material or weld-on armour for vehicles. The street – or the valley, or the high pass, or the desert – finds its own use for things. What would you use it for?

Playing Our Way To the Future: Consumer Science and Technology goes Military

Last month, I spoke at a United States Army Training and Doctrine Command event billed as a mad scientist conference. That was actually quite an honor, and I enjoyed it more than I expected to, even though it was hard to spend three days thinking about threats based on new technology. I’ve got a blog entry up at my regular site that talks more about the conference, but suffice it to say I’ve been thinking about the military and science/science fiction. In the way of all attractive coincidences, I was also recently asked to write a military science fiction story. All that, and I’m basically a pacifist!

Neurocinematics

Spare a thought for poor old Hollywood; it seems that no matter how much they spuff away on CGI budgets and celebrity actors’ fees, they just can’t keep us at a perpetual peak of emotional engagement. They could spend more money on better writers, of course… but what do they know about audience engagement and emotional response, eh? [image by serendipitys]

No, no – far better to use a technological fix to replace the flawed human responses of the traditional focus group. So how’s about we wire test viewers up to an fMRI unit and watch how their brains respond to the latest batch of daily rushes? That way we can learn how to keep the grey matter revving at the red line for 120 minutes at a time…
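How would you actually score that, second by second? The academic work that coined the term “neurocinematics” leans on inter-subject correlation: if every viewer’s brain is doing roughly the same thing at the same moment, the film presumably has them gripped. Here’s a minimal sketch of that idea – all the data below is random placeholder noise, not anyone’s actual scans:

```python
# A minimal sketch, assuming "engagement" is measured as inter-subject
# correlation: how similarly the test audience's brains respond from moment
# to moment. Everything below is random placeholder noise, not real scans.
import numpy as np

rng = np.random.default_rng(0)
n_viewers, n_seconds = 12, 7200            # a dozen subjects, a two-hour film
signals = rng.standard_normal((n_viewers, n_seconds))  # stand-in fMRI series

window = 30                                # score the film in 30-second windows
scores = []
for start in range(0, n_seconds - window + 1, window):
    chunk = signals[:, start:start + window]
    corr = np.corrcoef(chunk)              # viewer-by-viewer correlation matrix
    off_diag = corr[~np.eye(n_viewers, dtype=bool)]
    scores.append(off_diag.mean())         # mean pairwise similarity right now

dullest = int(np.argmin(scores)) * window
print(f"Least gripping stretch starts around second {dullest}")
```

Whether MindSign’s in-house metric looks anything like this is anyone’s guess, but some reduction of that sort is what “second-by-second noggin delight” has to cash out as.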

MindSign has already helped advertisers dial in their commercials’ second-by-second noggin delight and has even assisted studios in refining movie trailers and TV spots […] Now the company wants to replace that ancient analog heuristic, the dreaded focus group. Carlsen claims that focus group members not only misrepresent the likes and dislikes of the broader population — they can’t even articulate their own preferences. Often, they’ll tell a human researcher one thing while the fMRI reveals they’re feeling the opposite.

See, it’s all our fault – if we didn’t lie to Hollywood executives, they’d make better movies! They’re only trying to help!

Neurocinema helpfully speeds up a process Hollywood began years ago, namely the elimination of all subjectivity in favor of sheer push-button sensation. By quantifying which set pieces, character moments, and other modular film packets really lather up my gray matter, the adfotainment-industrial complex can quickly and efficiently deliver what I actually want. Movies won’t be “made,” they’ll be generated. Michael Bay, with access to my innermost circuitry, can really get in there and noogie the ol’ pleasure center. And here’s the best part: Once the biz knows what I want, it can give me more of the same. I’ll soon be reporting levels of consumer satisfaction previously known only to drug abusers. My moviegoing life will, literally and figuratively, be all about the next hit.

You see? You can’t ask for what you really want, because you don’t actually know what you want!

“But now movies will be more formulaic than ever!” purists whine. Au contraire, aesthete scum. “Formula” is for suckers. It implies narrative — peaks and valleys. What MindSign seems to be offering is a new model — not formulaic, but fractal. Forget ups and downs, suspense and release. What if every moment were a spike, every scene “trailer-able”? In fact, movies will become essentially a series of trailers, which, incidentally, are far better-loved than the oft-crummy features they encapsulate. Movie houses will become crack dens with cup holders, and I’ll lie there mainlining pure viewing pleasure for hours. Why not? I can’t decide what I want to watch anyway. Luckily, Hollywood is there to make those tough choices for me. And to show me the zombie shark I never even knew I was dying to see.

Looks like I have a closing argument for that debate I had last night with my girlfriend over whether we should invest in annual cinema passes. I’ve got more story stashed on a three-inch stretch of my bookcases than Hollywood has strained out in the last few years… and if I stay home, I don’t need to take out a mortgage in order to get myself a soft drink.

Sarcasm aside, the prospect of neurally optimised cinema is kind of intriguing, even though it does foreshadow a final forking of the road between spectacle and storytelling. Rather than going to see a simple tale of boy-meets-girl (or airhead-becomes-trial-lawyer, or blue-elves-get-saved-by-compassionate-corporatism), why not just plug your eyeballs into two hours of refined emotional sugar… or amphetamines, or sedatives, or a blend of all three – whatever’s your poison. (I guess the Saw franchise really is ahead of the curve after all, having exchanged the last vestiges of plot for visceral button-pushing that caters explicitly to the intensely desensitised.)

Step on a bit further, and the interactive movie concept that’s been batted around for years becomes not just possible but necessary. Once you’ve optimised for a whole demographic, the only route forward is to optimise on a per-viewer basis: spooling out selected scenes and vignettes in direct response to the individual’s brainspikes, while the viewer nudges the flow of spectacle toward whatever emotional response they feel they want from moment to moment (a sketch of that loop below). That’d be tricky to do on a big screen for a few hundred people at once, of course (though there’s probably some sort of augmented-reality cyberspex solution to that problem, if you can be bothered to look), but by the time it’s a viable commercial technology, “going to the cinema” will sound as old-fashioned as waltzing to a Wurlitzer at a tea-dance.
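Strip away the cyberspex and that loop is barely more than a greedy bandit wrapped around a response meter. A toy sketch – every name and number below is hypothetical, and measure_brainspike() stands in for neural-response hardware that doesn’t exist yet:

```python
# Toy sketch of the per-viewer feedback loop: a greedy bandit that keeps
# serving whichever flavour of scene has spiked this viewer's brain hardest.
# Every name and number here is hypothetical; measure_brainspike() stands in
# for neural-response hardware that doesn't exist yet.
import random
from collections import defaultdict

SCENE_FLAVOURS = ["spectacle", "sentiment", "dread", "comedy"]

def measure_brainspike(flavour: str) -> float:
    """Placeholder for a live neural reading; here it's just noise."""
    return random.random()

def spool_movie(minutes: int = 120, scene_len: int = 3) -> None:
    totals = defaultdict(float)
    counts = defaultdict(int)
    for _ in range(minutes // scene_len):
        # explore occasionally; otherwise exploit the best average so far
        if not counts or random.random() < 0.1:
            flavour = random.choice(SCENE_FLAVOURS)
        else:
            flavour = max(counts, key=lambda f: totals[f] / counts[f])
        response = measure_brainspike(flavour)
        totals[flavour] += response
        counts[flavour] += 1
        print(f"scene: {flavour:10s} spike: {response:.2f}")

spool_movie(minutes=12)   # a short demo run
```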

I honestly can’t bring myself to mourn that much.