Code is law: metaverse worlds as the ultimate sovereign states

Paul Raven @ 09-11-2010

A disappointingly brief interview piece at New Scientist has Greg Lastowka talking about the subject matter of his new book, Virtual Justice. I say disappointingly because there are whole raft-loads of fascinating implications behind the bits that made the cut; I guess I’ll just have to buy the damned book (which was probably the entire point of the interview, to be fair).

Carping aside, Lastowka is talking about law and governance in virtual worlds… or rather the need for such. Thing is, it looks to me like he’s also implicitly conceding that trying to enforce such legal frameworks from without (i.e. from meatspace reality) will be, at best, an uphill battle:

NS: Surely technology has always influenced law. Are things fundamentally different today?

GL: Yes, I think so. To an extent, technology is displacing law. A virtual world owner has a choice between law and technology as tools to further their interests – and they are generally turning to technology first. In 1999, Lawrence Lessig used the phrase “code is law”, and it applies to virtual worlds today. If you control the very nature of the simulation – how gravity works, how a person walks, where they go, what they can say – then you have the power to govern the environment in a way that no sovereign in real space can.

NS: So virtual law could end up being quite powerful?

GL: The government can do a lot of things but it can’t reverse the direction of gravity. Owners of virtual worlds can do an amazing number of things with regard to surveillance and interpersonal interactions.

If they so choose… and bear in mind the market value of being one of the worlds that chooses not to.

But it’s this final line that carries a whole book’s worth of interesting implications… and probably a trilogy’s worth of post-cyberpunk plot hooks:

In a sense, technology has outpaced the law. Any owner of a technological platform essentially has the ability to regulate society.

Seriously, think about it: that last sentence there is just huge, saying so much in such a short space. Just as the geographically-defined nation-state begins the final process of withering, the non-Euclidean geography of the metaverse steps in to offer a space over which your control can be more gloriously totalitarian than the greatest despots of the world ever aspired to!

Problem is, if your citizens can emigrate by simply hitting Ctrl-Q and signing up with someone else, how do you encourage them to stick around? Godlike control over the local laws of physics and commerce sounds pretty sweet at first, but unless you want to be godking of a sandbox empire populated by the twenty-five deluded cranks who read your Randian blog back in the noughties (ahem), you’d better start figuring out a legal (and metaphysical) framework that has some sort of appeal to potential digital ex-pats. Money-laundering and tax-haven status might be a good place to start.


EU demanding your “right to be forgotten”

Paul Raven @ 05-11-2010

Shades of Eric Schmidt’s deniable childhoods in the news that the European Commission wants to enshrine the right of every citizen to be “forgotten” by the titans of the web, should they so choose. Take it away, Ars Technica:

As part of its newly outlined data protection reform strategy, the EU says it believes individuals have a “right to be forgotten.” That is, people should be able to give informed consent to every site or service that processes their data, and they should also have the right to ask for all of their data to be deleted. If companies don’t comply, the EU wants citizens to be able to sue.

“The protection of personal data is a fundamental right. We need clear and consistent data protection rules,” EU Justice Commissioner Viviane Reding said in a statement.

[…]

The new guidelines focus on more than just the right to be forgotten—the EU wants to cover most aspects of an individual’s personal data and how it can be used. For example, rules for how someone’s personal information can be used in a police or criminal justice setting will be changed, as well as how citizens can securely transfer their data to places outside the EU.

A laudable sentiment, but one which I rather suspect will be impossible to enforce in any realistic way. But hey, at least the right to sue will keep all those poor starving lawyers in work, right?


The case for cognition enhancement advocacy

Paul Raven @ 04-11-2010

It’s yet another hat-tip to George Dvorsky, this time for pointing out a paper by Gary Miller in which he lays out the obstacles in the path of supporters of cognitive enhancement pharmacology, and ways of overcoming them:

I argue that, regardless how miniscule the risks or how blatantly obvious the benefits, a majority of U.S. citizens is unlikely to support the unrestricted dissemination of cognition enhancing drugs, because each individual member of the majority will be led astray by cognitive biases and illusions, as well as logical fallacies.

If this premise is accurate, then the people of the United States may already be suffering an opportunity cost that cannot be recouped. While a minority of the U.S. electorate can challenge the constitutionality of a policy enacted by a majority, a minority cannot sue to challenge the legislature’s refusal to enact a specific policy. In other words, we in the minority have no way of claiming we were harmed by what “good” could have come—but did not come—due to the legislature’s inaction. We cannot claim the “opportunity cost to the greater good” as an injury, and we cannot compel a court to balance that opportunity cost of inaction against the individual interests that dissuaded the majority from action. Our only recourse is to compel the majority to change its stance via persuasion.

Sounds remarkably like Mike Treder’s suggestion that calm rational discussion of the pros and cons is the best way to advance the transhumanist project, no? No big surprise, I guess, given the overlap between the groups in question… though as I said before, as much as it’s the most morally sensible course of action available, I don’t know how much good calm rational advocacy will do in an irrational and sensationalist political landscape. I guess we’ll just have to wait and see.


Reasons not to worry about brain enhancement drugs

Paul Raven @ 20-08-2010

Professor Henry Greely reckons it’s high time (arf!) that we stopped trying to ban cognitive enhancement drugs and focus our attentions on developing rules governing their use [via SentientDevelopments]. It’s a pragmatic approach; as Greely points out, the current grey legality of “revision drugs” like Ritalin isn’t doing anything to stop their use, and as the pharmacological industry introduces more cognition-boosting chemicals onto the market (albeit ostensibly as treatments for various maladies of the mindmeat), that situation is unlikely to reverse itself.

Of course, lots of people are scared of the idea of brain enhancement, and there are some good reasons for that. But there are also some bad (or at least illogical) reasons. Take it away, Mr Greely:

There are at least three unsound reasons for concern: cheating, solidarity, and naturalness.

Many people find the assertion that enhancement is cheating to be convincing. Sometimes it is: If rules or laws ban an enhancement, then using it is cheating. But that does not help in situations where there are no rules or the rules are still being determined. The problem with viewing enhancements as cheating is that enhancements, broadly defined, are ubiquitous. If taking a cognitive-enhancement drug before a college entrance exam is cheating, what about taking a prep course? Using a computer program for test preparation? Reading a book about taking the test? Drinking a cup of coffee the morning of the test? Getting a good night’s sleep before the test? To say that direct brain enhancement is inherently cheating is to require a standard of what the “right” competition is. What would be the generally accepted standard in our complex and only somewhat meritocratic society?

The idea of enhancement as cheating is also related to the idea that enhancement replaces effort. Yet the plausible cognitive enhancements would not eliminate the need to study; they would just make studying more effective. In any event, we do not reward effort, we reward success. People with naturally good memories have advantages over others in organic chemistry exams, but they did not work for that good memory.

Some argue that enhancement is unnatural and threatens to take us beyond our humanity. This argument, too, suffers from a major problem. All of our civilization is unnatural. A fair speaker could not fly across a continent, take a taxi to an air-conditioned auditorium, and give a microphone-assisted PowerPoint presentation decrying enhancement as unnatural without either a sense of humor or a good argument for why these enhancements are different. Because they change our physical bodies? So do medicine, good food, clothing, and a hundred other unnatural changes. Because they change our brains? So does education. What argument justifies drawing the line here and not there? A strong naturalness argument against direct brain enhancements, in particular, has not been—and I think cannot be—made. Humans have constantly been changing our world and ourselves, sometimes for better and sometimes for worse. A golden age of unenhanced naturalness is a myth, not an argument.

I’m guessing that most readers here are open to the idea of cognitive enhancement (by whatever method)… but even so, what’s the most compelling argument you’ve heard against it?


Legislating against orbital warfare

Paul Raven @ 08-07-2010

Those of you of a certain age will remember Star Wars… not the movies (though you probably remember those pretty well, too) but the Reagan-era space weapons program that took its name from them. And maybe you remember 2008’s brief spate of chest-thumping from the US and China as they demonstrated their abilities to destroy satellites using missiles launched from Earth.

Well, the Obama administration is putting orbital warfare back on the agenda, but in a slightly more positive way – namely by reversing the Bush administration’s previous refusal to discuss potential arms control measures against the weaponisation of near-Earth space. It’s a fine gesture, but there’s a problem – in that swords and ploughshares are very hard to tell apart in this particular domain. Think of it, perhaps, as a nation-state scale version of the street finding its own use for things.

“Dual-use technology will hugely complicate the issue of agreements,” says Joan Johnson-Freese of the US Naval War College in Newport, Rhode Island. For example, missiles that can shoot down other missiles to shield a country from attack could also be used to destroy a satellite in space. Indeed, there is “no fundamental difference” between the missiles used in each application, says Ray Williamson of the Secure World Foundation (SWF) in Washington DC.

[…]

Other double-edged swords are satellites designed to autonomously navigate their way to the vicinity of another satellite in space, a technology that the US demonstrated by flying a mission called XSS-11 in 2005.

A country could use such technology to inspect and repair one of its own malfunctioning satellites or to grab it and drag it into the atmosphere to dispose of it without adding to space junk. But the technology could also be used to interfere with or damage another country’s satellite, says Brian Weeden of SWF. “If you can remove a piece of debris from orbit, then if you really wanted to you could probably remove an active satellite maliciously,” he says. “The rendezvous technology is spreading to a lot of places, because people are seeing economic incentive in on-orbit servicing.”

So, how to prevent warfare in orbit? Call in the lawyers and policy wonks!

“I think the key is in trying to constrain behaviours rather than capabilities, because the capabilities are not going to be constrained,” says Krepon. So even if missile interceptors themselves remain legal, an agreement could outlaw their use in tests that destroy satellites.

To deal with the issue of malicious satellites with autonomous rendezvous technology, spacefaring nations might agree to a code of conduct requiring a country to provide advance notice if it expects one of its satellites to closely approach one belonging to another country.

Lots of sensible and noble thinking going on there… but as with all such agreements, the end result is rather dependent on there being no nation-state (or corporation, or other entity) that’s willing to risk international opprobrium by breaking the rules (O HAI, North Korea!). It’s not too big a deal at the moment, perhaps, but if (as seems likely) we start finding good ways to get valuable resources from beyond the gravity well, the economic incentives for playing it fast and loose in Satellite Town will become a whole lot stronger. (Always assuming, of course, that more immediate and mundane economic concerns don’t distract us from peering at the stars from our vantage point in the gutter, so to speak.)

Also worth remembering that there is a genuine need for destructive intervention in orbit; remember us mentioning the rogue zombiesat that no one could switch off? Still wandering about up there, apparently.

