Quantum computing for dummies

Heard people talking about quantum computing, but not really sure you understand what they mean? Well, you’re far from alone (as the late great Richard Feynman once said, “anyone who claims to understand quantum physics doesn’t understand quantum physics”), but why let that stop you from trying to get a layman’s grasp of the basic ideas?

That, one assumes, is the spirit in which this brief introduction to quantum computing at Silicon.com has been written [via SlashDot]… though I’m in no position to comment on how accurate or useful it is. Input from passing physicists is, as always, more than welcome. 🙂

Hang on, what’s quantum entanglement when it’s at home?

I was afraid you were going to ask. Quantum entanglement is the point where scientists typically abandon all hope of being understood because the thing being described really does defy the classical logic we’re used to.

An object is said to become quantumly entangled when its state cannot be described without also referring to the state of another object or objects, because they have become intrinsically linked, or correlated.

No physical link is required, however – entanglement can occur between objects that are separated in space, even miles apart – prompting Albert Einstein to famously dub it “spooky action at a distance”.

The correlation between entangled objects might mean that if the spin states of two electrons are entangled, their spins will be opposites – one up, one down. Entangled photons could likewise carry opposing polarisations of their waveforms – one horizontal, the other vertical, say. Measuring one entangled object instantly determines the correlated outcome for its fellows, however far apart they are – though, importantly, that can’t be used to send information faster than light. It’s this entanglement, combined with the superposition of qubit states, that gives a quantum computer its massive parallel potential.
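For the programmers in the audience, the correlation described above can be illustrated with a toy simulation – this is my own sketch using numpy, not anything from the Silicon.com guide, and it models the anti-correlated spin pair (the singlet state) in the crudest possible way:

```python
import numpy as np

# Toy sketch: sampling measurements of the singlet state
# (|01> - |10>) / sqrt(2), where the two spins always disagree.
rng = np.random.default_rng(0)

# Amplitudes over the four basis states |00>, |01>, |10>, |11>
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)

# Outcome probabilities are the squared magnitudes of the amplitudes
probs = np.abs(singlet) ** 2  # [0, 0.5, 0.5, 0]

# Simulate 1000 joint measurements
outcomes = rng.choice(4, size=1000, p=probs)
spin_a = outcomes // 2  # first particle: 0 = up, 1 = down
spin_b = outcomes % 2   # second particle

# Each spin on its own looks like a fair coin flip, yet the
# pair is perfectly anti-correlated: one up, the other down.
print(np.all(spin_a != spin_b))  # True
```

Each individual result is random, but the pair always disagrees – which is the “spooky” bit; what the toy model can’t show is that no signal passes between them.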

Accuracy aside, what’s interesting to me is seeing this sort of bluffer’s guide in a venue like Silicon.com, which is more of a business organ than a tech one. Prepping the Valley VCs for upcoming investment decisions, perhaps?

The end of geography

Dovetailing neatly with discussions of Wikileaks and Anonymous, here’s a piece at Prospect Magazine that reads the last rites for geography as the dominant shaper of human history [via BigThink]. The West won’t be the best forever, y’know:

The west dominates the world not because its people are biologically superior, its culture better, or its leaders wiser, but simply because of geography. When the world warmed up at the end of the last ice age, making farming possible, it was towards the western end of Eurasia that plants and animals were first domesticated. Proto-westerners were no smarter or harder working than anyone else; they just lived in the region where geography had put the densest concentrations of potentially domesticable plants and animals. Another 2,000 years would pass before domestication began in other parts of the world, where resources were less abundant. Holding onto their early lead, westerners went on to be the first to build cities, create states, and conquer empires. Non-westerners followed suit everywhere from Persia to Peru, but only after further time lags.

Yet the west’s head start in agriculture some 12,000 years ago does not tell us everything we need to know. While geography does explain history’s shape, it does not do so in a straightforward way. Geography determines how societies develop; but, simultaneously, how societies develop determines what geography means.

[…]

As we can see from the past, while geography shapes the development of societies, development also shapes what geography means—and all the signs are that, in the 21st century, the meanings of geography are changing faster than ever. Geography is, we might even say, losing meaning. The world is shrinking, and the greatest challenges we face—nuclear weapons, climate change, mass migration, epidemics, food and water shortages—are all global problems. Perhaps the real lesson of history, then, is that by the time the west is no longer the best, the question may have ceased to matter very much.

Amen. It’d be nice if we could get past our current stage of global socialisation, which might be best compared to a group of people sat in a leaking boat arguing over who should do the most bailing.

Wikileaks and Anonymous

There are a few things worth noting about the latest Wikileaks document-dump, the first and most obvious being how utterly unsurprising (though still deeply saddening) the contents were; for me at least (and I suspect for many others) it’s more of a confirmation of long-held suspicions than anything else.

The second is the reaction from the US and UK governments, which have focussed on the supposed risk to military personnel that the leaks will create; we heard that warning last time, too, and it turned out to be hollow. But it’s proving a very effective distraction to career journalists and their readers, most of whom have overlooked one very telling fact – namely that the aforementioned governments have made no attempt to claim the leaked documents are false. “OK, so we lied… but we were doing it to protect you!” Oh. That has worked out well, hasn’t it?

Thirdly is an observation from Mike Masnick of TechDirt, who compares Wikileaks with everyone’s favourite internet-prankster boogiepersons, Anonymous. The common themes are that they’re both products of our newly-networked era, and that they’re both being underestimated by the very powers that they most threaten.

I’d argue that the time to take the concept of Anonymous seriously came quite some time ago, actually. Even as people dismiss the group as often immature and naive (at times, quite true), what’s impressive about it is that Anonymous is a perfect example of truly distributed, totally anonymous, ad hoc organizations. When the group puts out statements, they’re grandiose and silly, but there’s a real point buried deep within them. What the internet allows is for groups to form and do stuff in a totally anonymous and distributed manner, and there really isn’t any way to prevent that — whether you agree with the activity or not.

Some think that “a few arrests” of folks behind Anonymous would scare off others, but I doubt it. I would imagine that it would just embolden the temporary gathering of folks involved even more. Going back to the beginning of the post, if the US government really was effective in “stopping” Julian Assange, how long do you think it would take for an even more distributed group to pick up the slack? It could be Anonymous itself, who continues on the tradition of Wikileaks, or it could be some other random group of folks who believe in the importance of enabling whistleblowing.

And yes, there’s a smattering of self-aggrandisement on my part here, because I made a similar suggestion back in July:

It’ll never be a big-bucks business, I’d guess, but the accrued counter-authority power and kudos will appeal to a lot of people with axes to grind. But what if they manage to make it an open-source process, so that the same work could be done by anyone even if Wikileaks sank or blew up? An amorphous and perpetual revolving-door flashmob, like Anonymous without the LOLcats and V masks? It’s essentially just a protocol, albeit one that runs on human and electronic networks in parallel.

Nowadays I flinch from making bold statements about profound change, but I find it very hard to look at distributed post-geographical movements like Wikileaks and Anonymous and not see something without historical precedent. Whether it will last (let alone succeed in toppling the old hierarchies) is an open question that I’d not want to gamble on just yet, but what’s pretty much undeniable is that the nation-state is under attack by a virus for which its immune system has no prepared response.

Implanted obsolescence

We privileged early-adopter types are increasingly accustomed to our technology becoming obsolete… but what happens when the technology in question is actually a physically-embedded part of you? Suddenly your upgrade path is a little trickier than hopping on a Boris-Bike and going to your nearest Apple store. Tim Maly points out the risky side of early-adopter human augmentation tech:

On the ground, the realities of the only brain-mounted interface I know of – cochlear implants – are brutal. Here’s a taste: You can’t hear music. For a sense of what that’s like, try these demos. The terrifying truth is that once you’ve signed up for one kind of enhancement (say, the 16 electrode surgery) it’s very hard to upgrade, even if Moore’s law ends up applying to electrode counts and the fidelity of hearing tech.

If you are an early adopter for this kind of thing, the only thing we can say for sure about it is that it’ll be slow and out of date very soon. Unless they find a way to make easily-reversible surgery, your best strategy is to wait for the interface that’s whatever the brain-linkage equivalent is to 300dpi, full colour, high refresh screens.

[…]

Medical advancements demand sacrifices. Someone needs to wear the interim devices. Desperation is one avenue for adoption. Artificial hearts are still incomplete and dicey half-measures, keeping people alive while they wait for a transplant or their heart heals. This is where advances in transplants and prosthetics find their volunteers and their motivation for progress. It’s difficult to envision a therapeutic brain implant – they are almost by definition augmentations.

An avenue to irreversible early adoption is arenas where short term enhancement is all that’s required. The military leaps to mind. With enlistment times measured in a few short years, rapid obsolescence of implants doesn’t matter as much; they can just pull virgin recruits and give them the newest, latest. If this seems unlikely, consider that with the right mix of rhetoric about duty and financial incentives, you can get people to do almost anything including join an organization where they will be professionally shot at.

Picture burnt-out veterans of the Af-Pak drone wars haunting the shells of long-deserted strip-malls, sporting rusty cranial jacks for which no one makes the proprietary plugs or software any longer… you can probably torrent some cracked warez that’ll run on your ageing wetware, but who knows what else is gonna be zipped into that self-installing .deb?

Meanwhile, Adam Rothstein brings a bit of Marxist critique to the same issue, and points out that the same problems apply to external augmentations:

It is easy to envision these uncanny lapses between classes occurring when we start fusing bodies with machines, because to imply that our bodies can easily be obsolete machines threatens a certain humanist concept of our bodies as a unifying quality to our species. But we don’t have to start invading the body to find differences that affect our ability to stratify ourselves into classes. If the equilibriums of the relations of production can develop a rift between first and third world without personal technology, between upper class and lower class both before, and as we start to use computers to identify ourselves as class members, why would one not also occur between “cutting-edge” and “deprecated” classes as technology becomes more “personal”–magnetizing that one kernel social structure not yet susceptible to fracture and evolution? At what point will our devices themselves reinforce the equilibriums of choice they themselves provide, by being the motive force for separating individuals into groups? If not by lasting only as long as their minimal service contracts in a planned obsolescence that intensifies the slope of device turnover, then by active means? An app only for the iPhone 8, that can detect models of the iPhone 5 and below–letting you know that you’ve wandered into an area with a “less than savory technological element?” When will emergency services only guarantee that they can respond to data transponder calls, and not voice requests? The local watchman has been phased out, in favor of centrally dispatched patrols that require phones to access. Isn’t it only a matter of time before central dispatch is phased out for distributed drone network policing? The ability to use a computer is a requirement for many jobs. When will the ability to data uplink hands-free be a requirement?

Insert unevenly-distributed-future aphorism here.

Zero History, Counter(cyber)culture, Atemporality, Network Realism…

Way to make me feel out of the loop, folks! Seems like everyone’s talking about Gibson’s Zero History right now*, and yours truly still hasn’t even gotten around to reading Spook Country. *sigh*

Still, the vicarious thrill of other people’s intellectual appreciation will do for now – here’s Alex Vagenas responding to ZH, and to Adam Greenfield’s own response to such (which we mentioned here a while back):

Leaving all the references and knowingness aside, it can be read, like the rest of Gibson’s work and certainly much of the rest of the cyberpunks, as a lament for a certain counter-cultural ethos. It evinces a nostalgia for something that existed or might still exist in potentia perhaps, not fully achieved, but definitely a romantic idea of some sort of subcultural autonomy. It is a theme that can be traced from Burroughs straight down to Gibson, Sterling, Shirley and Stephenson, via Pynchon of course, and more famously theorised by Hakim Bey. In the past, subcultures were visible and exposed. They became monolithic. The web has provided ways in which subcultures can circumscribe “temporary autonomous zones” for themselves and become more diffuse on certain levels, but they still remain searchable and cannot avoid the inevitability of commodification and co-optation. Zero History describes an even more cryptic form of that, however. Gabriel Hounds is a truly secret brand. It has withdrawn into actual off-the-grid circulation. It looks like Gibson is alluding to an ideal that can be tentatively realised on those terms only.

As much as everyone seems a bit sniffy (in one way or another) about Bruce Sterling’s atemporality riff (including Vagenas, earlier in the piece linked to above), there’s a vindication of sorts in the observable phenomenon that even its detractors end up having to talk about it on its own terms. Vagenas simplifies it down to “old post-modernism in new bottles” (my paraphrase), but po-mo (to me at least) has always implied a knowing and conscious bridging of cultural time; by comparison, atemporality (altermodernism?) is instinctive, unavoidable, something we do almost in spite of ourselves.

And what better way to pretend to ourselves that we’re not doing a particular thing than giving that thing a more palatable name? “Network realism”, maybe [via TechnOcculT]?

Network Realism is writing that is of and about the network. It’s realism because it’s so close to our present reality. A realism that posits an increasingly 1:1 relationship between Fiction and the World. A realtime link. And it’s networked because it lives in a place that’s enabled by, and only recently made possible by, our technological connectedness.

Zero History is Network Realism because of the way that it talks about the world, and the way its knowledge of the world is gathered and disseminated. Gibson seems to be navigating the spider graph of current reality as wikiracing does human knowledge.

What many people—including me—have been bothered about with Zero History is its lack of futureness. Matt took Gibson’s comment that “We have too many cards in play to casually erect believable futures” to mean that “Science Fiction is losing the timeline”. Russell is depressed by the lack of future in SciFi and much else. And I wrote, reading the book, “The problem is not that we don’t have jetpacks, but that no one is writing about jetpacks.”

I think these are misreadings of Network Realism. This writing exists on a timeline, but it’s not a simple line back-to-the-past and forward-to-the-future. It’s a gathering-together of many currently possible worldlines, seen from the near-omniscient superposition of the network. The Order Flow of the Universe. Speculative Realism, Networked Fiction: Network Realism.

Even when we quite deliberately stop calling science fiction by its original association-tainted name, even when we slice it up into stylistically and/or thematically disparate (or interrelated) subsubgenres, even when its authors are interviewed in serious newspapers and never once asked about their favourite rocket ships or whether they’ve met an alien… we still can’t agree on what it actually is, or how and why it works, or indeed whether it actually works at all.

And that, I’m increasingly convinced, is the true source of science fiction’s uniqueness and longevity. If we ever manage to define sf in a way that everyone can agree on, it’ll probably ossify and die within months. And you might even argue that it follows logically (in a way that Darwin might recognise) that sf has become interested in atemporality because atemporality is the best survival strategy available to it.

Just don’t ask me which came first, all right? 😉

[ * Yes, including this very blog. Honestly, I don’t plan these things; the Zeitgeist gods of RSS and email and Twitter just dump stuff in my lap every day, and sometimes it just so happens that the batch will contain two or three shiny little nuggets that happen to be the same colour or shape or texture. When it happens, I can’t help but pluck them out, make a set from them. Guess you might call it some sort of… pattern recognition?** ]

[ ** OK, I’ll get my coat. ]