Apple quietly unlocks the gate in the garden wall

Paul Raven @ 10-06-2011

Well, well, well – chalk one up for market forces. Remember Apple slamming the gate on the iOS walled garden by insisting on in-app subscriptions with a 30% rake-off? Lots of sad faces among former evangelistas of the iPad-as-future-of-publishing that week.

But now, perhaps due in part to big-name venues like the Financial Times refusing to play ball and opting out of the ecosystem, or perhaps just due to a realisation that a walled garden excludes as many customers as it potentially encloses, the Cupertino crew have quietly back-pedalled on the whole idea.

And so a restrictive information-channelling business model is scaled back due to opposition from other businesses and the customer base, all without the need for any heavy-handed regulation or monopoly inquests; who’d have thought, eh? 😉


Biopharming: transgenic animals as medicine-factories

Paul Raven @ 03-02-2011

That sound you can hear is the sound of bioconservatives gnashing their teeth in horror: COSMOS Magazine has a decent long piece on transgenic animals and the role they may play in tomorrow’s pharmacology:

The greatest impact biopharming will have on the world’s medicine cabinet is one of supply – it will dramatically boost the availability of biopharmaceuticals, also known as ‘biologics’. Biologics are defined as medicinal products extracted from or produced by biological systems – many are made by genetically manipulating cells of bacterial, animal or human origin.

The majority of biologics are proteins such as hormones, enzymes, growth factors and antibodies, which can be collectively called therapeutic proteins, as well as viral proteins for use in vaccines.

This method of drug manufacture will make things cheaper… but not to the degree that you might expect:

A review of the scientific literature shows that a slew of antibody-based drugs manufactured in transgenic animals are poised to enter the market as soon as their branded competitors’ patents expire.

Traditionally, this would result in the transgenic animal-manufactured drugs being labelled as generic drugs – a non-patented, cheaper alternative to brand-name medications with the same active ingredient.

But because the antibodies that the transgenic animals produce are extremely complex monoclonal antibodies – large protein-based structures that specifically recognise one part of a target molecule – no two are alike. This means that, unlike the less complex ‘small molecule’ (non protein) structure of most drugs, they cannot technically be called generics.

“When GTC Biotherapeutics start marketing Herceptin from transgenic cows, it will be classed as a biosimilar, not a biogeneric. We may even end up having a better Herceptin, what we’d call a ‘biobetter’,” notes Heiden.

The ‘better’ refers to aspects of a drug’s profile that may be more desirable than those of its competitor, such as better efficacy or fewer side effects. These traits will affect pricing, but biosimilars will still be cheaper.

“The cost savings will be in the order of 30% – not the 80% price drops we see when a generic small molecule drug goes to market,” says Heiden. That’s because of where in the manufacturing process the savings impact. “Where we save money is at the front end. But the downstream cost of goods, which is about half of the total, is the same regardless of whether you are using cell culture or animals on a farm. You still have to extract and purify your product.”

Well, you do if you’re playing by the rules… I’ll bet there’s plenty of corners that can be cut if you’re not too bothered about meeting safety standards. Hmmm, the ideas for my genetic police procedural are all falling rapidly into place…


Code is law: metaverse worlds as the ultimate sovereign states

Paul Raven @ 09-11-2010

A disappointingly brief interview piece at New Scientist has Greg Lastowka talking about the subject matter of his new book, Virtual Justice. I say disappointingly because there’s whole raft-loads of fascinating implications behind the bits that made the cut; I guess I’ll just have to buy the damned book (which was probably the entire point of the interview, to be fair).

Carping aside, Lastowka is talking about law and governance in virtual worlds… or rather the need for such. Thing is, it looks to me like he’s also implicitly conceding that trying to enforce such legal frameworks from without (i.e. from meatspace reality) will be, at best, an uphill battle:

NS: Surely technology has always influenced law. Are things fundamentally different today?

GL: Yes, I think so. To an extent, technology is displacing law. A virtual world owner has a choice between law and technology as tools to further their interests – and they are generally turning to technology first. In 1999, Lawrence Lessig used the phrase “code is law”, and it applies to virtual worlds today. If you control the very nature of the simulation – how gravity works, how a person walks, where they go, what they can say – then you have the power to govern the environment in a way that no sovereign in real space can.

NS: So virtual law could end up being quite powerful?

GL: The government can do a lot of things but it can’t reverse the direction of gravity. Owners of virtual worlds can do an amazing number of things with regard to surveillance and interpersonal interactions.

If they so choose… and bear in mind the market value of being one of the worlds that chooses not to.

But it’s this final line that carries a whole book’s-worth of interesting implications… and probably a trilogy’s-worth of post-cyberpunk plot hooks:

In a sense, technology has outpaced the law. Any owner of a technological platform essentially has the ability to regulate society.

Seriously, think about it: that last sentence there is just huge, saying so much in such a short space. Just as the geographically-defined nation-state begins the final process of withering, the non-Euclidean geography of the metaverse steps in to offer a space over which your control can be more gloriously totalitarian than the greatest despots of the world ever aspired to!

Problem is, if your citizens can emigrate by simply hitting Ctrl-Q and signing up with someone else, how do you encourage them to stick around? Godlike control over the local laws of physics and commerce sounds pretty sweet at first, but unless you want to be godking of a sandbox empire populated by the twenty-five deluded cranks who read your Randian blog back in the noughties (ahem), you’d better start figuring out a legal (and metaphysical) framework that has some sort of appeal to potential digital ex-pats. Money-laundering and tax-haven status might be a good place to start.


The ever-more-invisible (and uncontrollably emergent) hand of the not-actually-free market

Paul Raven @ 10-05-2010

Via Chairman Bruce, the US government is getting (more) worried about automated trading in the wake of last week’s largely-unexplained and possibly emergent “stock tornado”; insert aphorism about horses and barn doors here, possibly modified to suggest that the farmer has been letting the horse run the stud for years.

Investment bankers are naturally keen to point out all the benefits of automated trading and “dark pools”:

Goldman Sachs Group Inc., the most profitable firm in Wall Street history, has shared memos with lawmakers and SEC officials that say computer-driven trading and an increase in stock transactions that occur off public exchanges has reduced consumer costs and brought more liquidity to markets.

Well, if we can’t trust Goldman Sachs, who can we trust? #scathingsarcasm


Regulating military robots

Paul Raven @ 12-03-2009

Following on neatly from Tom’s post about the Pentagon’s future war brainstorms and the US Office of Naval Research’s recent report on battlebot morality, philosopher A C Grayling takes to his soapbox at New Scientist to warn us that we need to regulate the use of robots in military and domestic policing roles now… before it’s too late.

In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?

[snip]

The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.

As has been pointed out before, the appeal of robots to the military mind seems to be that they’re a form of moral short-cut, a way to do the traditional tasks of battle and control without risking the lives of real people. But as Grayling says, that’s a short-sighted approach: it’s not a case of wondering if things will go wrong, but when… and then who will carry the can?

Call me a cynic, but I doubt the generals and politicians will be any keener to shoulder the blame for mistakes than they already are. [image by jurvetson]
