Perpetual perfect present: journalism strategies for an atemporal world

Apparently the BBC has been doing this for a while, but this is the first time I’ve seen anyone mention it explicitly; The Guardian attempts to address the atemporality of the globalised 24/7 newsriver:

So our new policy, adopted last week (wherever you are in the world), is to omit time references such as last night, yesterday, today, tonight and tomorrow from guardian.co.uk stories. If a day is relevant (for example, to say when a meeting is going to happen or happened) we will state the actual day – as in “the government will announce its proposals in a white paper on Wednesday [rather than ‘tomorrow’]” or “the government’s proposals, announced on Wednesday [rather than ‘yesterday’], have been greeted with a storm of protest”.

The BBC website, among others, adopted a similar strategy some time ago and I feel it gives an immediacy to their reports akin to watching or listening to a live news broadcast. So in a sense we are, perhaps belatedly, recognising another way in which a website is different from a newspaper.

We are likely to make much more use of the present tense (“the government is facing a deepening crisis …”) and present perfect tense (“the crisis engulfing the government has intensified …”); until the change of approach, we would probably have written “the crisis engulfing the government intensified tonight …”

Largely unmentioned is the root cause of the problem being addressed, namely that folk who aren’t “digital natives” don’t make a habit of checking the date and time on online articles. To be fair, I only learned that necessity the hard way, after being called out on having posted some five-year-old nugget as news…

Though this raises an interesting facet of atemporality, namely that not all information is time-sensitive to the same degree. A lot of more general knowledge is “news” if it’s new to the person reading it. The central channel of the river flows faster than the edges…

Watson’s victory clear, but perhaps not as impressive as it seems

So, Watson won at Jeopardy!… by a pretty significant lead, too. Inevitably, lots of folk are keen to downplay this victory, and for a variety of reasons. The commonest complaint concerns Watson’s speed-to-buzzer advantage, but its designers say that it’s not really that big a deal:

Though Watson seemed to be running the round and beating Jennings and Rutter to the punch with its answers many times, Welty insisted that Watson had no particular advantage in terms of buzzer speed. Players can’t buzz in to give their questions until a light turns on after the answer is read, but Welty says that humans have the advantage of timing and rhythm.

“They’re not waiting for the light to come on,” Welty said; rather, the human players try to time their buzzer presses so that they’re coming in as close as possible to the light. Though Watson’s reaction times are faster than a human, Welty noted that Watson has to wait for the light. Dr. Adam Lally, another member of Watson’s team, noted that “Ken and Brad are really fast. They have to be.”

A re-run with some sort of handicap might prove this one way or the other, but I suspect the doubters will find new advantages to pin on the machine… which, to my mind, rather misses the point of the exercise: to demonstrate whether or not a machine could outperform humans at a particular task. Quod erat demonstrandum, y’know?

A more interesting point is that even Watson’s creators aren’t entirely sure how Watson achieves what it achieves. George Dvorsky:

Great quote from David Ferrucci, the Lead Researcher of IBM’s Watson Project:

“Watson absolutely surprises me. People say: ‘Why did it get that one wrong?’ I don’t know. ‘Why did it get that one right?’ I don’t know.”

Essentially, the IBM team came up with a whole whack of fancy algorithms and shoved them into Watson. But they didn’t know how these formulas would work in concert with each other and result in emergent effects (i.e. computational cognitive complexity). The result is the seemingly intangible, and not always coherent, way in which Watson gets questions right—and the ways in which it gets questions wrong.

As Watson has revealed, when it errs it errs really badly.

This kind of freaks me out a little. When asking computers questions that we don’t know the answers to, we aren’t going to know beyond a shadow of a doubt when a system like Watson is right or wrong. Because we don’t know the answer ourselves, and because we don’t necessarily know how the computer got the answer, we are going to have to take a tremendous leap of faith that it got it right when the answer seems even remotely plausible.

Dvorsky’s underlying point here is that we shouldn’t be too cocky about our ability to ensure artificial intelligences think in the ways we want them to. They’re just as inscrutable as another human mind. Perhaps even more so… which is why he and Anders Sandberg (among others) believe we should foster a healthy fear of powerful AI systems.

But the most interesting point I’ve seen made about Watson’s victory is a skeptical stance over at Memesteading:

When Alex Trebek walked by the 10 racks of 9 servers each, said to include 2880 computing cores and 15 terabytes (15,000 gigabytes) of high-speed RAM main-memory, I couldn’t shake the feeling: this seems like too much hardware… at least if any of the software includes new breakthroughs of actual understanding. As parts of the show took on the character of an IBM infomercial, the feeling only grew.

[…]

An offline copy of all of Wikipedia’s articles, as of the last full data-dump, is about 6.5GB compressed, 30GB uncompressed – that’s 1/500th Watson’s RAM. Furthermore, chopping this data up for rapid access – such as creating an inverted index, and replacing named/linked entities with ordinal numbers – tends to result in even smaller representations. So with fast lookup and a modicum of understanding, one server, with 64GB of RAM, could be more than enough to contain everything a language-savvy agent would need to dominate at Jeopardy.

But what if you’re not language savvy, and only have brute-force text-lookup? We can simulate the kinds of answers even a naive text-search approach against a Wikipedia snapshot might produce, by performing site-specific queries on Google.

For many of the questions Watson got right, a naive Google query of the ‘en.wikipedia.org’ domain, using the key words in the clue, will return as the first result the exact Wikipedia article whose title is the correct answer.

[…]

With a full, inverse-indexed, cross-linked, de-duplicated version of Wikipedia all in RAM, even a single server, with a few cores, can run hundreds of iteratively-refined probe queries, and scan the full-text of articles for sentences that correlate with the clue, in the seconds it takes Trebek to read the clue.

That makes me think that if you gave a leaner, younger, hungrier team millions of dollars and years to mine the entire history of Jeopardy answers-and-questions for workable heuristics, they could match Watson’s performance with a tiny fraction of Watson’s hardware.

All of which isn’t to demean Watson’s achievement so much as to suggest that perhaps the same results could be reached with a much smaller hardware outlay… though there is an undercurrent of “Big Iron infomercial” in there, too.
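For the curious, the brute-force approach Memesteading describes can be sketched in miniature: build an inverted index mapping words to the articles that contain them, then rank articles by how many of the clue’s keywords they match. This is a toy illustration of the general technique, not Watson’s actual architecture; the three article snippets and the clue below are invented stand-ins for a real Wikipedia dump.

```python
from collections import defaultdict

# Toy stand-in for a Wikipedia snapshot: title -> article text.
# (Snippets invented for illustration only.)
articles = {
    "Marie Curie": "physicist and chemist who conducted pioneering research on radioactivity",
    "Isaac Newton": "mathematician and physicist who formulated the laws of motion and gravity",
    "Charles Darwin": "naturalist best known for his contributions to the science of evolution",
}

# Build the inverted index: word -> set of article titles containing it.
index = defaultdict(set)
for title, text in articles.items():
    for word in text.lower().split():
        index[word].add(title)

def answer(clue):
    """Return article titles ranked by how many clue keywords they contain."""
    scores = defaultdict(int)
    for word in clue.lower().split():
        for title in index.get(word, ()):
            scores[title] += 1
    return sorted(scores, key=scores.get, reverse=True)

clue = "pioneering research on radioactivity"
print(answer(clue)[0])  # → Marie Curie
```

Scaled up, the same structure is what lets a single well-provisioned server probe an in-RAM corpus hundreds of times in the seconds it takes Trebek to read a clue.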

A kraken, enraged

This Ars Technica rundown of the whole HBGary Federal vs. Anonymous/Wikileaks thing is really quite astonishing for a number of reasons, not least the staggering hubris and chutzpah of Aaron Barr, but there’s also the comparative ease with which Anonymous nailed Barr to his own mizzen. Maybe it’s just me, but the subtext I get from the whole business is that Barr’s desire to “take down” Anonymous stems from a sort of envy and admiration of them; funnier still are the communications between Barr and his pet programmer, who makes no bones about telling Barr he’s walking out onto very thin ice indeed.

Most astonishing of all (though hardly news in this day and age) is the staggering amount of money that shadowy and largely unaccountable outfits like HBGary Federal can charge government agencies for work that neither party fully understands or – more importantly – wants the general public to know about. And as Chairman Bruce points out, there are probably a whole lot more operations just like it that we never get to hear about:

The question now is, do people stumble over the truth here and just sort of dust themselves off and traipse away sideways — or are there more shoes to drop? The furious and deeply humiliated lawyers at HBGary ought to have enough federal clout to pursue their Anonymous harassers and nail them to the barn like corn-eating crows — after all, they claimed they know who they are, and that’s why they got savagely hacked in the first place.

However — are HBGary gonna be able to carry out that revenge attack with their usual discretion — the shadowy obscurity with which they help deny climate change and break labor unions for the Chamber of Commerce? It’s like watching a shark fight a school of ink-squirting squids.

Normally, one never sees a submarine struggle like this. If it does happen to surface, it gets cordially ignored, or ritually dismissed as a sea-monster story. But boy, this one sure is leaky.

Things are getting very permeable of late, aren’t they?

Brilliance and Dreck: Using Good and Bad Writers to Self-Motivate

I don’t remember when I first began wanting to become a professional writer, only that by third grade I had that idea firmly in my head. But it wasn’t until a few years later that I got a particularly awful SF book from a bookstore–I think it had a robot on the cover, one of those jobs with the dryer-vent-hose arms and the antennae on the head–and really got fired up for the job. I thought (and this may sound familiar) “God, if a lousy book like this can get published, I’m going to be rich!” [image courtesy Ricardo Genius]

Let’s skip over the many misconceptions and sad bits of naïveté lurking in that sentence, if you don’t mind.

Dadadadadada

Old Duchamp would be proud, I like to think… though given the responses of other postmodern artists to similar events, I’m probably being overoptimistic on that point. Nonetheless, the future shows no sign of waiting for us to reach an accommodation with it, and you can now get yourself a fabbed facsimile of Marcel’s iconic “readymade” urinal museum piece [via BoingBoing].

Fabbed Duchamp urinal clone

As mentioned before, copyright on physical objects is a lost cause, though I doubt that’s going to stop a phalanx of windmill-tilting IP knights charging into battle as the terrain churns like liquid beneath the hooves of their horses, and the lawyers slip into their vulture costumes off-stage.

And hey, 3D printers are getting pretty close to the point where they can print copies of themselves, too… so at least the futile carnage should be short-lived.

Presenting the fact and fiction of tomorrow since 2001