Think of this as R&D in real time. Here's the word on Nokia:
Spotting an opportunity to make their phones more indispensable to consumers, Nokia is investing in crowd sourcing. It sees the most promise in services that leverage global positioning system (GPS) technology, mapping and the mobile Web.
People are using their cellphones to review restaurants, share their favorite hometown hangouts, discover new jogging routes, even dodge speeding tickets. “Mobile is about pushing that even further out to the ultimate edge: an individual, at all times, with his device.” Such crowd-sourced applications point up the power of mobile networks to relay data in real time. “Nearly everyone has a mobile that they carry with them all the time … Phones are perfectly suited for this type of automated reporting, and potentially a much more pervasive device than the online Web.”
How will they go about using crowd sourcing in developing smart applications? Well, consider this:
Crowds are now being tapped to develop mobile applications before they reach consumers. Mob4Hire connects developers of new applications with tech-savvy testers around the world. Founder Paul Poutanen estimates there are 12,000 different cellphone models and 700 different wireless carriers globally, forming a byzantine system that can take two years to navigate. Mob4Hire’s decentralized system helps developers deliver new applications to consumers faster and catch bugs along the way.
Honestly, the coolest thing about this is the real-time nature of the testing. Crowd sourcing seems to be the most organic extension for testing out these so-called smart applications, and it will be interesting to see what new projects come along as more developers jump on board.
I consider it one of the greatest privileges of my childhood that we had a full Encyclopedia Britannica in the home, and I spent many rainy-day hours just leafing through it and soaking up data about the world. Ah, happy days! [image by Goran Zec]
Had Wikipedia been available back then, I’d have probably developed myopia, RSI and a bad posture far earlier in my life; hyperlinking and universal access are the two “killer apps” of encyclopedias, as anyone who has fallen down the Wikipedia rabbit-hole will know.
Indeed, it appears that even the mighty Encyclopedia Britannica, after years of bitching about Wikipedia’s openness and inaccuracies (the latter complaint, it transpires, being somewhat hypocritical), has realised that locking material away doesn’t work in the new information economy, and they’re granting people the ability to link directly to their content with their WebShare program. [via Phil Bradley]
It’s not quite free yet; they’re granting access to “anyone who publishes material on the web on a regular basis” (bloggers, in other words) and you have to apply for an account (so only bloggers they like), but it’s a step into the twenty-first century for a hidebound institution. Heck, they’ve even got a blog and a Twitter feed.
Richard Morgan, author of a number of excellent cyber-noir sf thrillers (the most recent being Black Man, or Thirteen as it was titled in the US) was asked by Index On Censorship Magazine to write an essay about the future of the internet, which is now available on his website. [Image by Meyshanworld]
If you’re familiar with Morgan’s books, you’ll know not to expect rose-tinted Panglossian speculation from him. I’ll freely admit that I get carried away with techno-utopian visions from time to time, and it’s good to have writers with Morgan’s incisive intelligence to bring me down to earth:
“The future of the internet, then, is not going to be too much of a shock for anyone who knows much about human nature and whose eyes are open. In fact, regardless of the technical innovations that we may or may not see in the next few decades, virtual reality looks as if it’s going to conform pretty ordinarily to the existing human tendencies we so know and love.”
Go read! [Props to Ariel for the tip.]
Steve Rubel points us to an article at American Journalism Review that discusses the hazards of newsrooms relying on Wikipedia for research and citations. [Image by rabbleradio]
This is hardly a new story (though usually we hear about the horrors of students rather than journalists citing the online encyclopedia), but it’s not going away any time soon – in the always-on 24/7 culture of the web, the only constant is change. As Rubel puts it:
“The big question in my mind is this: when journalists cite Wikipedia articles, what happens when the facts they reference from the wiki entries change (assuming they do)? Do the reporters go back and update their articles? The news reports call more attention to the articles, potentially opening up a can of worms each time they source Wikipedia.
Seems like a big vicious cycle. Perhaps in the future these stories will carry some of the same disclaimers that Wikipedia lists.”
And if you think that’s a symptom of postmodernism running wild, what about CNN handing over the reins of iReport to the community of citizen-journalists who contribute to it? [Via Slashdot]
Are the definitions of “truth” and “consensus” converging? Were they ever really different?
A company called Powerset will be making a new natural language search technology available to the public in September. If the company’s claims are true (as credulously reported in the Technology Review), their search technology will be fundamentally different from the many algorithms that have been used in the past. Instead of developing results based on word and synonym matching, Powerset’s technology teases out the deep linguistic structures embodied in the search queries and in the searched text to make both more accurate and less obvious connections. Points to Powerset CEO Barney Pell for admitting that:
There was not one piece of technology that solved the problem… but instead, it was the unification of many theories and fragments that pulled the project together.
…and that most of the technology was licensed from Xerox PARC. If you’re interested you can sign up for the beta on their website. [kurzweilai]
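To see why that distinction matters, here's a minimal toy sketch of the gap between bag-of-words overlap and matching on normalised sentence structure. To be clear, everything here is illustrative: the triples are hand-written stand-ins for what a real linguistic parser would extract, and none of this reflects Powerset's actual implementation.

```python
# Toy contrast: bag-of-words keyword overlap vs. a crude "structural" match.
# The triple store is hand-built for illustration; real systems derive
# (subject, relation, object) triples from text with a linguistic parser.

def keyword_score(query: str, doc: str) -> int:
    """Classic keyword search: count words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

# Hypothetical normalised triples standing in for parsed sentence structure.
TRIPLES = {
    "PARC licensed technology to Powerset": ("parc", "license", "powerset"),
    "Powerset hired search researchers": ("powerset", "hire", "researchers"),
}

def structural_match(query_triple: tuple) -> list:
    """Return sentences whose normalised triple matches the query's triple."""
    return [sent for sent, triple in TRIPLES.items() if triple == query_triple]

# A paraphrased question shares almost no surface words with the document...
query = "Who gave Powerset its core technology?"
doc = "PARC licensed technology to Powerset"
print(keyword_score(query, doc))  # only "powerset" overlaps -> 1

# ...but once both sides are reduced to the same structural triple,
# the connection is recovered despite the different wording.
print(structural_match(("parc", "license", "powerset")))
```

The point of the sketch is just that surface word overlap misses paraphrases which a structure-level representation catches, which is the kind of "less obvious connection" the article describes.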