All posts by Paul Raven

An origin story: it all started when…

I’ve been a bit lax in not mentioning this before (blame the January churn, if you like), but better late than never, eh? Over at Locus Online, Karen Burnham’s busy rebooting the Locus Roundtable blog; one presumes they decided on including some fringe amateurs to balance out the heavy hitters, because yours truly has been invited to contribute. By way of introduction, Karen’s been posting the origin stories of the contributors – how we got our start in genre fiction, basically – and my little potted history appeared there just last night. So if you’re curious as to how I ended up reviewing science fiction novels and blogging about the shape of the imminent future, by all means go and find out!

Alternatively, check out the more interesting and erudite histories of such notables as Jon Courtenay Grimwood, Gary K Wolfe, Adrienne Martini, Paul DiFilippo, Charles Tan, and many more yet to come. (I figure if one is going to indulge in a bit of Imposter Syndrome, then it might as well be generated by the company of genuinely interesting and smart people.)

There are many more conversations and debates on the cards for the months ahead, so if you’d like a side-serving of meta-discussion with your usual science fiction diet, you could do far worse than clip that RSS feed into your reader…

100% renewable energy by 2030?

“Yeah, right,” I hear you say… and that’s pretty much what I thought as well. But a new study says that, on paper at least, an all-renewable energy infrastructure could be built within just two decades of today… and built is the operative word:

Achieving 100 percent renewable energy would mean the building of about four million 5 MW wind turbines, 1.7 billion 3 kW roof-mounted solar photovoltaic systems, and around 90,000 300 MW solar power plants.

[…]

Delucchi and colleague Mark Jacobson left all fossil fuel sources of energy out of their calculations and concentrated only on wind, solar, waves and geothermal sources. Fossil fuels currently provide over 80 percent of the world’s energy supply. They also left out biomass, currently the most widely used renewable energy source, because of concerns about pollution and land-use issues. Their calculations also left out nuclear power generation, which currently supplies around six percent of the world’s electricity.

To make their vision possible, a great deal of building would need to occur. The wind turbines needed, for example, are two to three times the capacity of most of today’s wind turbines, but 5 MW offshore turbines were built in Germany in 2006, and China built its first in 2010. The solar power plants needed would be a mix of photovoltaic panel plants and concentrated solar plants that concentrate solar energy to boil water to drive generators. At present only a few dozen such utility-scale solar plants exist. Energy would also be obtained from photovoltaic panels mounted on most homes and buildings.
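Those headline figures are easy to sanity-check with some back-of-envelope arithmetic. Here's a minimal sketch; the build counts and unit capacities come from the quoted article, but the capacity factors are my own rough assumptions, not numbers from the study:

```python
# Back-of-envelope check of the study's headline numbers.
# Counts and unit capacities are from the quoted article; the
# capacity factors below are illustrative assumptions only.

MW = 1.0
kW = 0.001 * MW
TW = 1_000_000 * MW

sources = {
    # name: (count, unit capacity in MW, assumed capacity factor)
    "5 MW wind turbines":  (4_000_000,     5 * MW,   0.35),
    "3 kW rooftop solar":  (1_700_000_000, 3 * kW,   0.15),
    "300 MW solar plants": (90_000,        300 * MW, 0.25),
}

total_nameplate = 0.0
total_average = 0.0
for name, (count, capacity, cf) in sources.items():
    nameplate = count * capacity        # installed (peak) capacity
    average = nameplate * cf            # rough average delivered power
    total_nameplate += nameplate
    total_average += average
    print(f"{name}: {nameplate / TW:.1f} TW nameplate, "
          f"~{average / TW:.2f} TW average (CF {cf:.0%})")

print(f"Total: {total_nameplate / TW:.1f} TW nameplate, "
      f"~{total_average / TW:.1f} TW average output")
```

With those (debatable) capacity factors, the plan's roughly 52 TW of nameplate capacity works out to something on the order of 15 TW of average output, which is at least in the same ballpark as current world energy demand. So the quoted numbers do hang together, whatever you make of the politics.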

Of course, the technological plausibility of an all-renewable energy economy has always been theoretically understood. So why does it seem so unbelievable?

The pair say all the major resources needed are available, with the only material bottleneck being supplies of rare earth materials such as neodymium, which is often used in the manufacture of magnets. This bottleneck could be overcome if mining were increased by a factor of five and if recycling were introduced, or if technologies avoiding rare earth were developed, but the political bottlenecks may be insurmountable.

Ah, yes – the p-word. Might’ve guessed that’d crop up in there somewhere. The saddest thing of all is the lost opportunities for political solutions that pushing for even a quarter of this vision would create: massive building programs would create loads of jobs and invigorate flagging economies, at the same time as removing major sources of atmospheric pollution and the incentive to go to war over increasingly scarce fossil fuel resources. Pretty much everyone would stand to benefit… except that tiny percentage of people currently profiting from the status quo, of course.

But were I to suggest that they were involved in spending millions of dollars on obfuscatory political chicanery and misinformation campaigns to prevent the status quo from shifting, why, I’d be some sort of rabid conspiracy theorist! After all, everyone knows the real conspiracy is being masterminded by neoMarxist extremists masquerading as climate scientists, right? Right?

[ I really shouldn’t need to point out that the last few sentences there are meant to be read with a tone of extreme sarcasm, but – what with this being the internet – consider this a disclaimer to that effect. And to pre-empt the other obvious objection, I strongly suspect the 100%-by-2030 projection is ludicrously optimistic, even were global agreement and cooperation toward that aim within grasp; however, the underlying point is that the technology exists right now, and we’re not using it to even a fraction of its potential. ]

Flora Police

Via grinding.be, Policing Genes is a project by one Thomas Thwaites that looks at the potentially dystopian future of genetically modified plant-life.

Pharmaceutical companies are experimenting with pharming – genetically engineering plants to produce useful and valuable drugs. Currently undergoing field trials are tomato plants that produce a vaccine for Alzheimer’s disease and potatoes that immunise against hepatitis B. Many more plant-made-pharmaceuticals are being developed in laboratories around the world.

However, the techniques employed to insert genes into plants are within reach of the amateur…and the criminal. Policing Genes speculates that, like other technologies, genetic engineering will also find a use outside the law, with innocent-looking garden plants being modified to produce narcotics and unlicensed pharmaceuticals.

The genetics of the plants in your garden or allotment could become a police matter…

Homegrown biohacking and pharming is pretty much a given, but I think the concept of police bees is a little more marginal… that said, it’s a brilliant science fictional story hook. 🙂

How I learned to stop worrying and love the Singularity

Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour:

Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

We probably make thousands of species extinct per year through our pursuit of instrumental goals, why is it so hard to imagine that AGI could do the same to us?

In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:

My point is this: if Skynet had been debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, for reasons of necessary computing power on a first iteration super AGI), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Me thinks self-preservation should be a basic feature of any real AGI.

In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.

B-b-but, the Singulitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or a hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they do? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence); the arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂

So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?

Hackers rake off big bucks from EU carbon exchange

Another case of life imitating (very) contemporary science fiction, the book in question being Ian McDonald’s excellent The Dervish House: some sneaky shenanigans via the compromised accounts of Czech traders have allowed some black-hat hacker types to rake off millions of dollars from the EU carbon trading exchange [via SlashDot]. Worse still, this is far from the first embarrassment of this type that the exchange has suffered from