The happy demise of the rejection letter

Much of the recent debate about the future of fiction publishing has focussed on the end product and the distribution (and possible illicit duplication) thereof, but there’s another side to the story – that of the aspiring writer’s experience. Literary agent Nathan Bransford suggests that the sea change in publishing economics may do away with a much-loathed (though also much obsessed-over) artefact of the process, namely the rejection letter [via Matt Staggs]. That doesn’t mean everyone will get to be J K Rowling, though…

Clay Shirky […] notes that we’re moving from an era where we filtered and then published to one where we’ll publish and then filter. And no one would be happier than me to hand the filtering reins over to the reading public, who will surely be better at judging which books should rise to the top than the best guesses of a handful of publishing professionals.

I don’t see this transition as the demise of traditional publishing or agenting. Roles will change, but there are still some fundamental elements that will remain. There’s more that goes into a book than just writing it, and publishers will still be the best-equipped to maintain the editorial quality, production value, and marketing heft that will still be necessary for the biggest books. Authors will still need experienced advocates to navigate this landscape, place subsidiary rights (i.e. translation, film, audio, etc.), and negotiate on their behalf.

What’s changing is that the funnel is in the process of inverting – from a top down publishing process to one that’s bottom up.

Yes, many (if not most) of the books that will see publication in the new era will only be read by a handful of people. Rather than a rejection letter from an agent, authors will be met with the silence of a handful of sales. And that’s okay!! Even if a book is only purchased by a few friends and family members — what’s the harm?

Bransford is arguing in favour of the crowdsourced curation model, in other words – that what is “good” will succeed in a free market with nigh-nonexistent barriers to entry. And that’s probably true, as far as it goes, but the critic in me wonders about the definition of “good”. We’re still very hung up on the fallacious notion of popularity being an indicator of quality (probably because quality is such a hard thing to define objectively for a subjective experience like reading a story), and a theoretically flat playing field will exacerbate the problem…

… and this is where I think you can say that genres, for all their own problems of objective definition, may be a saving grace in the long run, at least for those of us who like to analyse the things we love. As culture continues to fragment, each little literary clade will construct its own canons in real-time, and each clade will consist of multiple subclades arguing for their own definitions of quality… and then it’s fractal sub-subclades all the way down to the individual. This may sound like the horrifying and centreless endgame of postmodernism to some, but I think we’ll be too busy enjoying the opportunity to exercise (and advocate) our own personal preferences to care.

Canuck filmmaker considers streaming live video from his bionic eye

Well, this sidesteps the clunky implementations of lifelogging that we’ve seen so far. Rob Spence lost the vision in his right eye in a shooting accident, and decided to replace it with a small camera unit, making it onto Time Magazine’s best inventions list for 2009 (even though they’ve only had the thing working properly for a short time).

Now Spence’s eye has a wi-fi transmitter that can stream its video output to a computer; from there, it’s a short step to making Spence’s field of vision a free-to-view live feed available to anyone with an internet connection [via SlashDot] (a rough sketch of that relay step follows after the quote). There are some minor technical issues to iron out first, though:

The prototype in the video provides low-res images, but an authentic experience of literally seeing through someone else’s perspective. The image is somewhat jerky and overhung by huge eyelashes; a blink throws everything out of whack for a half-second.

[…]

The Eyeborg prototype in the video, the third, can only work for an hour and a half on a fully charged battery. Its transmitter is quite weak, so Spence has to hold a receiving antenna to his cheek to get a clear signal. He muses that he should build a Seven of Nine-style eyepiece to house it. He’s experimenting with a new prototype that has a stronger transmitter, other frequencies and a booster on the receiver.
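For the curious, that “short step” really is short. Here’s a minimal sketch of the relay, assuming (and it is an assumption – Spence hasn’t published his setup) that the eye-cam’s receiver shows up as an ordinary capture device on the relaying computer:

```python
# Hypothetical relay: read frames from a local capture device (standing
# in for the eye-cam's receiver) and serve them as an MJPEG stream.
# Needs OpenCV (pip install opencv-python) and Python 3.7+.
import cv2
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CAMERA_INDEX = 0  # assumption: the receiver appears as capture device 0

class StreamHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # multipart/x-mixed-replace is the old-school MJPEG trick:
        # the browser repaints each JPEG part as it arrives.
        self.send_response(200)
        self.send_header("Content-Type",
                         "multipart/x-mixed-replace; boundary=frame")
        self.end_headers()
        cap = cv2.VideoCapture(CAMERA_INDEX)
        try:
            while True:
                ok, frame = cap.read()
                if not ok:
                    break  # capture lost (battery dead after 90 minutes?)
                ok, jpeg = cv2.imencode(".jpg", frame)
                if not ok:
                    continue
                self.wfile.write(b"--frame\r\n"
                                 b"Content-Type: image/jpeg\r\n\r\n"
                                 + jpeg.tobytes() + b"\r\n")
        finally:
            cap.release()

if __name__ == "__main__":
    # Point any browser at http://<this-machine>:8080/ for the live feed.
    ThreadingHTTPServer(("", 8080), StreamHandler).serve_forever()
```

Scaling that up to “anyone with an internet connection” means putting a proper restreaming server in front of it, but the principle is that simple.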

It surely won’t be all that long before equivalent hardware could be slipped into a fully-functional biological eye… possibly without the knowledge or permission of the eye’s owner. Which suggests that the tin-foil bonnet brigade will upgrade their fears of surveillance through compromised cell phones to a fear of covertly-implanted audio and video capture devices… hey, it could happen, man*.

[ * Though this assumes, as do most such paranoid conspiracy theories, a level of competence, clandestine secrecy and forward planning of which most nation-state governments seem utterly incapable. I wouldn’t credit the UK government with the ability to successfully tap a barrel of beer, let alone my eyesight… and if they did somehow pull it off, they’d only go and leave the footage on the back seat of a bus. ]

This is sure to end well: Afghanistan’s vast untapped mineral resources

Looks like my cynicism gland gets an early boost this week, as the New York Times reports that the US government has discovered Afghanistan holds an estimated US$1 trillion in previously untapped mineral deposits [via MetaFilter].

The previously unknown deposits — including huge veins of iron, copper, cobalt, gold and critical industrial metals like lithium — are so big and include so many minerals that are essential to modern industry that Afghanistan could eventually be transformed into one of the most important mining centers in the world, the United States officials believe.

An internal Pentagon memo, for example, states that Afghanistan could become the “Saudi Arabia of lithium,” a key raw material in the manufacture of batteries for laptops and BlackBerrys.

Looks like I’m reading from the same page as Charlie Stross:

Note the presence of lithium in that list. It’s a vital raw material for high-capacity rechargeable batteries, used in everything from mobile phones to hybrid or electrically-powered automobiles — and there’s a growing worldwide shortage of the stuff. There’s no intrinsic shortage of lithium, but high grade mineral sources are hard to find — it’s mostly bound up in other mineral deposits, in very low concentrations. Half the known exploitable reserves are in Bolivia (at least, before this new discovery).

It doesn’t take a rocket scientist to make the inductive jump from oil:old burning-stuff-to-keep-warm economy to lithium:new post-carbon alternative energy economy. And by applying the PNAC’s equation of control over energy reserves with maintenance of competitive advantage (by applying the choke collar to rivals), it’s fairly likely that, coming at this time, the discovery of Lots of Lithium in Afghanistan will be used to reinforce western support for an increasingly unpopular war of occupation.

Charlie expresses his hope that he’s being overly cynical; it’s a hope I share, but not one I’d like to put money on. But here’s Thomas Barnett with a slightly different take on the situation:

Before anybody gets the idea that somehow the West is the winner here, understand that we’re not the big draw on most of these minerals–that would be Asia and China in particular. What no one should expect is that the discovery suddenly makes it imperative that NATO do whatever it takes to stay and win and somehow control the mineral outcomes, because–again–that’s not how it works in most Gap situations like Africa. We can talk all we want about China not “dominating” the situation, but their demand will drive the process either directly or indirectly. There is no one in the world of mining that’s looking to make an enemy out of China over this, and one way or another, most of this stuff ends up going East–not West.

[…]

Here’s the simplest reality test I can offer you: if we’re just at the initial discovery phase now, we’re talking upwards of a decade before there will be mature mines. Fast-forward a decade in your mind and try to imagine the US having a bigger presence in Afghanistan than China. I myself cannot.

Start with that realization and move backward, because exploring any other pathway will likely expose you to a whole lotta hype.

A rather more optimistic viewpoint than my own (and, to judge by the content of my Twitter feed, a lot of other people’s). We’ll just have to wait and see… which will certainly be an easier experience for us Westerners than for the poor Afghans. Better make some more adjustments to that perpetually mutating narrative, eh?

Gestural interface: like a Wacom tablet, just without the plastic bits

Via SlashDot, here’s a project from Potsdam University in which the clever boffins have built a user interface that requires only hand gestures as input:

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all “feedback” takes place in the user’s imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space. The interaction is tracked by a tiny camera device clipped to the user’s clothing and pointed at the user’s hands.
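To make the interaction model concrete: the clever part is that the non-dominant hand’s L-shape *is* the coordinate system, so the tracker only has to express the dominant fingertip relative to it. Here’s a minimal sketch of that mapping, assuming you already have 2-D hand-tracking points (the function and argument names are mine, not the researchers’):

```python
import numpy as np

def l_frame_coords(origin, thumb_tip, index_tip, fingertip):
    """Express a tracked fingertip in the imaginary coordinate frame
    defined by the non-dominant hand's L-shape: the thumb spans the
    x-axis and the index finger the y-axis, both from a shared origin.

    All arguments are 2-D image-space points from the hand tracker
    (hypothetical names; the project doesn't publish an API).
    """
    o = np.asarray(origin, dtype=float)
    axes = np.column_stack([
        np.asarray(thumb_tip, dtype=float) - o,   # x-axis of the "L"
        np.asarray(index_tip, dtype=float) - o,   # y-axis of the "L"
    ])
    # Solve axes @ [u, v] = fingertip - origin, so (1, 0) lands on the
    # thumb tip and (0, 1) on the index fingertip, even if the L isn't
    # held perfectly square to the camera.
    u, v = np.linalg.solve(axes, np.asarray(fingertip, dtype=float) - o)
    return u, v

# Example: an L anchored at pixel (100, 300), thumb out to (200, 300),
# index finger up to (100, 150), fingertip at (150, 225).
print(l_frame_coords((100, 300), (200, 300), (100, 150), (150, 225)))
# -> (0.5, 0.5): halfway along both imaginary axes.
```

The nice property is that the mapping is entirely relative: the user can move around, and as long as the camera sees both hands, (0.5, 0.5) always means “halfway along the thumb, halfway up the index finger”.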

The prototype’s a bit rough and ready, sure, but it’s early days. Bolt this onto AR (they both need similar face-mounted hardware, so convergence is pretty inevitable), and stuff gets weird real quick. Cities full of people wandering around, seemingly talking to themselves and waving their hands in gnomic gestures… it’d look like a city of mad magicians.

Or, y’know, like Burning Man or Glastonbury at 5am on a Saturday. 🙂

Did the Iranian “Twitter Revolution” actually happen?

You know, I’m always advising people not to believe everything they read, but I’m just as bad at taking that advice as anyone else – we all give credence to the stories we want to believe, I guess (and hell knows that media companies know how to exploit that).

So, remember the Twitter Revolution in Iran? That there was a revolution is not in question, but that the revolution was powered by social media? That’s not so clear [via MetaFilter]:

… it is time to get Twitter’s role in the events in Iran right. Simply put: There was no Twitter Revolution inside Iran. As Mehdi Yahyanejad, the manager of “Balatarin,” one of the Internet’s most popular Farsi-language websites, told the Washington Post last June, Twitter’s impact inside Iran is nil. “Here [in the United States], there is lots of buzz,” he said. “But once you look, you see most of it are Americans tweeting among themselves.”

A number of opposition activists have told me they used text messages, email, and blog posts to publicize protest actions. However, good old-fashioned word of mouth was by far the most influential medium used to shape the postelection opposition activity. There is still a lively discussion happening on Facebook about how the activists spread information, but Twitter was definitely not a major communications tool for activists on the ground in Iran.

[…]

To be clear: It’s not that Twitter publicists of the Iranian protests haven’t played a role in the events of the past year. They have. It’s just not been the outsized role it’s often been made out to be. And ultimately, that’s been a terrible injustice to the Iranians who have made real, not remote or virtual, sacrifices in pursuit of justice.

I’m starting to wonder if a faith in the hierarchy-corroding power of modern communications systems isn’t becoming a core plank of what, for want of a less contentious or partisan label, we might call the postmodern progressive liberal platform. Maybe because we feel ourselves to have been liberated from something by the internet (even though we’re not sure what it is we’ve been liberated from), we assume it can deliver liberation to others from forces far more oppressive and powerful – at least in their curtailment of individual freedoms – than anything we have the context or experience to understand? That political revolution can be as safe, easy (and fun!) as spare time whiled away on social media? (See also: the illusion of participation produced by slacktivism.)

Or maybe it’s just old-fashioned and fallacious Golden Age pulp technophilia: “Twitter is the future! The future is something we progress toward! Democracy in Iran would be progress! Therefore Twitter will help create progress toward democracy in Iran!”

I’m having a weird week; I’ve been spending a lot of time thinking about how we make pretty much everything into a story that reflects what we already believe to be true. The trouble with dwelling on that for a while is that you reach a point where you realise that, if that assumption is true, then that assumption is also part of a narrative that’s reinforcing itself through you. Which is a pretty weird psychological and philosophical paradox… not to mention being remarkably unconducive to getting anything practical done.