Tag Archives: research

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, with the aim of discovering how long they expect we’ll have to wait for human-equivalent intelligence in machines, what level they expect AGI to top out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

[chart: AGI milestone estimates]

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?

Blue-sky bioengineering on the DARPA drawing-board

If you’re looking for the sort of bat-shit Faustian gambles that form the backbone of much military science fiction, following the news from the Pentagon’s science and tech division is like supergluing your lips to a firehose… and Wired’s DangerRoom blog is one of the better consumer-level sources to start with (if you don’t mind a bit of snark on the side).

Here’s DangerRoom‘s Katie Drummond on DARPA’s latest wheeze: immortal synthetic organisms with a built-in molecular kill-switch. SRSLY.

As part of its budget for the next year, Darpa is investing $6 million into a project called BioDesign, with the goal of eliminating “the randomness of natural evolutionary advancement.” The plan would assemble the latest bio-tech knowledge to come up with living, breathing creatures that are genetically engineered to “produce the intended biological effect.” Darpa wants the organisms to be fortified with molecules that bolster cell resistance to death, so that the lab-monsters can “ultimately be programmed to live indefinitely.”

Of course, Darpa’s got to prevent the super-species from being swayed to do enemy work — so they’ll encode loyalty right into DNA, by developing genetically programmed locks to create “tamper proof” cells. Plus, the synthetic organism will be traceable, using some kind of DNA manipulation, “similar to a serial number on a handgun.” And if that doesn’t work, don’t worry. In case Darpa’s plan somehow goes horribly awry, they’re also tossing in a last-resort, genetically-coded kill switch:

“Develop strategies to create a synthetic organism ‘self-destruct’ option to be implemented upon nefarious removal of organism.”

The project comes as Darpa also plans to throw $20 million into a new synthetic biology program, and $7.5 million into “increasing by several decades the speed with which we sequence, analyze and functionally edit cellular genomes.”
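As an aside, that “serial number” notion is essentially data-in-DNA watermarking, a trick already demonstrated in synthetic genomes. Purely as an illustration of the principle – this is my own toy sketch, nothing to do with whatever DARPA actually has in mind – here’s a text tag encoded into a base sequence at two bits per nucleotide:

```python
# Toy DNA watermarking: encode an ASCII "serial number" into a base
# sequence at two bits per nucleotide, and decode it back.
# Purely illustrative -- a real genomic watermark would also need to
# survive mutation and avoid disrupting nearby coding regions.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(serial: str) -> str:
    bits = "".join(f"{ord(ch):08b}" for ch in serial)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

tag = encode("DARPA-0001")   # hypothetical serial number
print(tag)                   # 40-base sequence, four bases per character
assert decode(tag) == "DARPA-0001"
```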

That post goes on to quote a professor of biology, who’s keen to point out that DARPA’s view of evolution as a random string of events is going to prove a major stumbling block to any attempts to “improve” the process. As to what sort of genuine advantage over extant military technologies these synthetic organisms would have, the pertinent questions are absent, as are those dealing with the moral and ethical issues surrounding military meddling with fundamental biological processes, and the unexpected ways in which they might go wrong. And to hark back to an earlier post from today: would killing a bioengineered military organism be a legitimate act of war?

Also absent (but somewhat implicit, depending on your personal politics) are any observations that the world’s biggest military budget shows no sign of helping the US gain the upper hand against a nebulous and underfunded enemy armed predominantly with a fifty-year-old machine gun design and explosives expertise that’s a short step up from the Anarchist’s Cookbook… I’m all for wild ideas and blue-sky thinking, but I’m not sure they’re much use as a military panacea any more. The days of peace through superior firepower are long gone, and the more complex you make your weapons, the more likely they are to blow up in your face.

Interpreting facts as failure: the neuroscience of science

There’s a fascinating essay at Wired UK about a guy called Kevin Dunbar, who studies the science of science. The philosophy and theory of science – the seven-step method you had drilled into you at school, for instance – is very elegant, but it doesn’t reflect the way that real science gets done, and it doesn’t take into account our innate propensity to misinterpret anomalous results, so Dunbar went out and researched the way real researchers research. The results are an interesting mix of the obvious and the counterintuitive:

The reason we’re so resistant to anomalous information — the real reason researchers automatically assume that every unexpected result is a stupid mistake — is rooted in the way the human brain works. Over the past few decades, psychologists have dismantled the myth of objectivity. The fact is, we edit our reality, searching for evidence that confirms what we already believe. Although we pretend we’re empiricists — our views dictated by nothing but the facts — we’re actually blinkered when it comes to information that contradicts our theories. The problem with science, then, isn’t that most experiments fail — it’s that most failures are ignored.
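Dunbar’s point is easy to demonstrate quantitatively. In this toy simulation (my own illustration, not anything from the article), an experimenter measuring a noisy quantity quietly discards any run that strays too far from the answer she already expects, and ends up with a tidy, confident and wrong result:

```python
import random

random.seed(42)
TRUE_VALUE = 10.0     # what nature actually does
PRIOR_BELIEF = 8.0    # what the experimenter expects to see
NOISE = 2.0

measurements = [random.gauss(TRUE_VALUE, NOISE) for _ in range(1000)]

# An "objective" scientist keeps every run.
honest = sum(measurements) / len(measurements)

# A biased one re-runs (i.e. silently drops) anything more than one
# noise-width away from the answer she already believes in.
kept = [m for m in measurements if abs(m - PRIOR_BELIEF) <= NOISE]
biased = sum(kept) / len(kept)

print(f"truth: {TRUE_VALUE}, honest estimate: {honest:.2f}, "
      f"biased estimate: {biased:.2f} (from {len(kept)} of 1000 runs)")
```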

Well worth a read, especially in light of the aspersions cast on science by the climate change debate. Individual scientists may make mistakes, but science as a system – as a communal project, as an evolving body of knowledge – turns those failures into new theories. [image by Horia Varlan]

Virtual economies, virtual reputations and virtual business suits

Once the hype over Second Life died down, virtual worlds kinda disappeared from the high-profile headlines. But there’s still plenty of stuff going on in the metaverse, not least its use as a test-bed for theories to apply in reality. [image by Ramona.Forcella]

Economics is a popular choice; we’ve reported before on the bank runs and currency collapses of EVE Online, and now Edward Castronova – author of Synthetic Worlds, which should be your first port of call if you’re even vaguely interested in metaverse economics – is leading a team who’re examining the economy of EverQuest II. [via SlashDot]

Researcher Edward Castronova, professor of telecommunications at Indiana University, said researchers can learn almost anything about human society from games, because they really are human societies.

However, unlike real society, they can be observed and tweaked.

“We can do controlled experiments in virtual worlds, but we can’t do that in reality,” said Castronova.

“Controlled experimentation is the very best way to learn about cause and effect. We are on the verge of developing that capacity for human society as a whole.”

[…]

After studying 314 million transactions within the fantasy world of Norrath in “EverQuest II,” including trading in-game goods like armor, shields, leather, herbs and food, the researchers were able to calculate the GDP of one of the game servers (the back-end computer that hosts thousands of players in one world).

As more people opened accounts and flocked to Norrath, spending money on new items, researchers saw inflation spike more than 50 percent in five months.
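The article doesn’t detail the researchers’ methodology, but the bookkeeping involved is conceptually simple: sum the value of goods traded to get a nominal GDP, and track the price of a fixed basket of goods over time to get inflation. A toy sketch with invented numbers (the item names and figures are mine, not from the study):

```python
from collections import defaultdict

# (month, item, quantity, unit_price_in_gold) -- invented sample data,
# standing in for EverQuest II's 314 million logged transactions.
transactions = [
    (1, "armor", 3, 50.0), (1, "herbs", 20, 1.0), (1, "shield", 1, 80.0),
    (5, "armor", 4, 78.0), (5, "herbs", 25, 1.6), (5, "shield", 2, 120.0),
]

# Nominal "GDP" per month: total value of goods traded.
gdp = defaultdict(float)
for month, item, qty, price in transactions:
    gdp[month] += qty * price

# Inflation: price change of a fixed basket of goods between two months.
def basket_price(month, basket=("armor", "herbs", "shield")):
    prices = {i: p for m, i, q, p in transactions if m == month}
    return sum(prices[item] for item in basket)

inflation = basket_price(5) / basket_price(1) - 1
print(f"GDP month 1: {gdp[1]:.0f}g, month 5: {gdp[5]:.0f}g")
print(f"basket inflation over five months: {inflation:.0%}")  # ~52%
```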

Game economies are, much like real economies, predicated on more than just a currency. Reputation scores are a big part of game economies (and many social networks, too), but the problem with “karma” systems is that they’re usually implemented in a way that renders them pointless, and which leads to the formation of in-game “mafias” [via BoingBoing]:

There can be no negative public karma – at least for establishing the trustworthiness of active users. A bad enough public score will simply lead to that user’s abandoning the account and starting a new one, a process we call karma bankruptcy. This setup defeats the primary goal of karma – to publicly identify bad actors. Assuming that a karma starts at zero for a brand-new user that an application has no information about, it can never go below zero, since karma bankruptcy resets it. Just look at the record of eBay sellers with more than three red stars – you’ll see that most haven’t sold anything in months or years, either because the sellers quit or they’re now doing business under different account names.
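To see why the zero floor defeats the system, consider a minimal model (my sketch, not the authors’ code): karma clamps at zero, and a “bankrupt” user simply re-registers, so after any amount of bad behaviour an abuser is indistinguishable from a genuine newcomer:

```python
class Account:
    """Public karma with a floor at zero."""
    def __init__(self):
        self.karma = 0

    def rate(self, delta):
        self.karma = max(0, self.karma + delta)

def abuser(ratings):
    """A bad actor who declares 'karma bankruptcy': whenever the
    account's score bottoms out, abandon it and register afresh."""
    account = Account()
    for delta in ratings:
        account.rate(delta)
        if delta < 0 and account.karma == 0:
            account = Account()  # clean slate, history erased
    return account.karma

newcomer = Account().karma    # brand-new honest user: karma 0
scammer = abuser([-1] * 50)   # fifty negative ratings: also karma 0
print(newcomer, scammer)      # 0 0 -- publicly indistinguishable
```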

A different (though related) kind of reputation will be bothering the business crowd, however, and analyst firm Gartner is convinced that in less than five years, 70% of businesses will have issued avatar dress codes to their employees [via SlashDot]:

“As the use of virtual environments for business purposes grows, enterprises need to understand how employees are using avatars in ways that might affect the enterprise or the enterprise’s reputation,” said James Lundy, managing vice president at Gartner, in a statement.

“We advise establishing codes of behavior that apply in any circumstance when an employee is acting as a company representative, whether in a real or virtual environment.”

This puts me in mind of a recurring motif in William Gibson’s novels, where he repeatedly makes the point that the most powerful and resource-rich virtual environments will be the ones that look subtle and understated, while the low-budget hucksters will dress to impress with excessive bling and extravagant eye-candy. The subtle grunge and mundane decay of reality is harder to simulate than grandiose overstatement; as in real life, it’ll be wise to tread lightly around the ostentatious.

Not interested in playing games or doing business in the metaverse? Well, you could always go learn to speak a dying language.

First bot with a human brain?

OK, so it won’t be a whole human brain… but two researchers at the University of Reading are preparing to upgrade their rat-neuron robot to use human brain cells instead:

To make the system a better model of human disease, a culture of human neurons will be connected to the robot once the current work with rat cells is completed. This will be the first instance of human cells being used to control a robot.

One aim is to investigate any differences in the behaviour of robots controlled by rat and human neurons. “We’ll be trying to find out if the learning aspects and memory appear to be similar,” says Warwick.
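The piece doesn’t go into the plumbing, but the closed-loop setup is straightforward in outline: sensor readings are translated into electrical stimulation of the culture through a multi-electrode array, and the culture’s recorded spiking is decoded back into motor commands. A hypothetical skeleton, with the biology stubbed out by a simulated culture:

```python
import random

class SimulatedCulture:
    """Stand-in for a neuron culture on a multi-electrode array (MEA).
    Real cultures adapt over time; this stub just spikes more on the
    side that was stimulated, plus a little noise."""
    def stimulate_and_record(self, left_input, right_input):
        spikes_left = left_input * 5 + random.random()
        spikes_right = right_input * 5 + random.random()
        return spikes_left, spikes_right

def control_step(culture, sonar_left, sonar_right):
    # Encode: the closer an obstacle (0 = touching, 1 = clear),
    # the stronger the stimulation on that side.
    stim_l, stim_r = 1.0 - sonar_left, 1.0 - sonar_right
    # Record the culture's spiking response to the stimulation...
    spikes_l, spikes_r = culture.stimulate_and_record(stim_l, stim_r)
    # ...and decode: more activity on one side steers away from it.
    return "turn right" if spikes_l > spikes_r else "turn left"

culture = SimulatedCulture()
print(control_step(culture, sonar_left=0.2, sonar_right=0.9))
# obstacle close on the left -> "turn right"
```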

And in case you were wondering about the potential ethical minefield involved with doing research on human tissue cultures… well, apparently it’s just not an issue in this case:

Warwick and colleagues can proceed as soon as they are ready, as they won’t need specific ethical approval to use a human neuron cell line. That’s because the cultures are available to buy and “the ethical side of sourcing is done by the company from whom they are purchased”, Whalley says.

I’m not sure which is more of a science-fictional kick to the mind – the fact that there’ll soon be a robot powered by human brain cells, or the fact that ethically-sourced human brain cells can just be mail-ordered like any other lab supply. [image by Khazaei]