Tag Archives: panopticon

David Brin talks sousveillance

Still got a lot of metaphorical balls in the air here, so continued quietness will be the norm for a few more days. In the meantime, here’s Ben Goertzel interviewing David Brin at H+ Magazine; regular readers will know that I’m very interested in Brin’s “Transparent Society” ideas, and sousveillance is the subject matter at hand. Snip:

Brin: Essentially, this is the greatest of all human experiments.  In theory… sousveillance should eventually equilibrate into a situation where people (for their own sakes and because they believe in the Golden Rule, and because they will be caught if they violate it) eagerly and fiercely zoom in upon areas where others might be conniving or scheming or cheating or pursuing grossly-harmful deluded paths…

… while looking away when none of these dangers apply. A socially sanctioned discretion based on “none of my business” and leaving each other alone… because you’ll want that other person to be your ally next time, when YOU are the one saying “make that guy leave me alone!”

That is where it should wind up, if we’re capable of calm, of rationality, and of acting in our own self-interest. It is stylishly cynical for most people to guffaw at this point and assume this is a fairy tale. I can just hear some readers muttering “Humans aren’t like that!”

Well, maybe not. But I have seen plenty of evidence that we are now more like that than our ancestors ever imagined they could be.  The goal may not be attainable.  But we’ve already taken strides in that direction.

Goertzel: Hmmmm….  I definitely see this “best of both worlds” scenario as one possible attractor that a sousveillant society could fall into, but not necessarily the only one.  I suppose we could also have convergence to other, very different attractors, for instance ones in which there really is no privacy because endless spying has become the culture; and ones in which uneasy middle-grounds between surveillance and sousveillance arise, with companies and other organizations enforcing cultures of mutual overwhelming sousveillance among their employees or members.

Just as the current set of technologies has led to a variety of different cultural “attractors” in different places, based on complex reasons.

Brin: This is essentially my point. The old attractor states are immensely powerful.  Remember that 99% of post-agricultural societies had no freedom because the oligarchs wanted it that way and they controlled the information flows.  That kind of feudal-aristocratic, top-down dominance always looms, ready to take over.  In fact, I think so-called Culture War is essentially an effort to discredit the “smartypants” intellectual elites who might challenge authoritarian/oligarchic attractor states in favor of others that are based upon calm reason.

The odds have always been against the Enlightenment methodology – the core technique underlying our markets, democracy and science – called Reciprocal Accountability. On the other hand, sousveillance is nothing more or less than the final reification of that methodology.  Look, I want sousveillance primarily because it will end forever the threat of top-down tyranny.  But the core question you are zeroing in on, here, is a very smart one – could the cure be worse than the disease?

It’s also the sort of question that could only be answered one way: by trying it out. Obviously a global roll-out is never going to happen, but this is the sort of thing a small nimble post-geographical state – Iceland, I’m looking at you! – could pilot quite easily. My argument in favour is that the technology of surveillance isn’t going away, and if the choice is undersight or oversight, I’m going with undersight every time.

Interestingly enough, I tend to find that the people who argue in favour of panopticon surveillance with the tired and demonstrably false canard “if you’re doing nothing wrong, you’ve nothing to fear!” are completely unwilling to apply the same reasoning to being surveilled by their fellow citizens. Guessing the reasons why that might be so is left as an exercise for the reader. 🙂

Instruments of Politeness

Instruments of Politeness… a project which points out how much of what we call “politeness” is actually disguise and dissembling.

At present we can lie about our current situation because the only transmitted information is the actual conversation and background noise. In the future mobile phones will be able to estimate our activity by evaluating multiple sensors in the device. This information will not only be used by the device itself but shared with our environment. The project ‘Instruments of Politeness’ allows the user to lie about his current activity.

The gizmo there is designed to wobble your mobile device about in a manner that will appear to the accelerometers as if you’re taking a walk with it in your pocket (when in fact you might be at home, or in a pub, doing something generally less constructive than the errand you’re supposed to be doing).
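The spoofing principle is simple enough to sketch: a periodic jiggle at roughly walking cadence (around 2Hz), with a plausible acceleration amplitude riding on top of gravity, is all a naive step-counting heuristic needs to see. Here’s a minimal illustration in Python; the signal parameters and threshold pedometer are my own invention for demonstration purposes, not anything to do with the project’s actual hardware:

```python
import math
import random

def fake_walk_signal(duration_s=10.0, cadence_hz=2.0, sample_hz=50.0):
    """Synthesise a vertical-acceleration trace that mimics walking:
    gravity (9.81 m/s^2) plus a ~2 Hz oscillation with a little jitter."""
    random.seed(42)
    n = int(duration_s * sample_hz)
    return [9.81 + 2.5 * math.sin(2 * math.pi * cadence_hz * i / sample_hz)
            + random.gauss(0, 0.2) for i in range(n)]

def count_steps(samples, hi=11.2, lo=10.4):
    """Naive pedometer with hysteresis: a 'step' is registered each time
    the signal climbs above hi, and re-arms once it falls below lo."""
    steps = 0
    armed = True
    for a in samples:
        if armed and a > hi:
            steps += 1
            armed = False
        elif a < lo:
            armed = True
    return steps

signal = fake_walk_signal()
print(count_steps(signal))  # roughly cadence x duration: ~20 "steps"
```

Feed a real pedometer algorithm a signal like that (by physically wobbling the phone, as the Instruments of Politeness do) and your device will dutifully report that you’re out taking a stroll.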

Now, mix whimsical little scams like this one into Scott Adams’ Noprivacyville, and you can see why utopias (be they real or misdesignated) will always decay under the natural human propensity to secure a little personal advantage. Or, in other words: Everything Can And Will Be Hacked.

Via those fascinating folk at BERG, who – despite the name – seem to do very little involving actual rockets, but an awful lot of other cool stuff.

Scott Adams’ transparent burbclave

Via SlashDot, here’s a provocative post from Scott “Dilbert” Adams where he contemplates the costs of privacy, by trying to imagine a sort of gated community where you surrender a lot of privacy in exchange for living in a more affordable, safe and efficient environment. It’s like a hybrid of David Brin’s Transparent Society and Neal Stephenson’s burbclaves… and given how certain sections of the US seem to be reading Snow Crash as a manual of statecraft rather than a dystopian warning, maybe Noprivacyville isn’t as ludicrous as you’d initially imagine.

Although you would never live in a city without privacy, I think that if one could save 30% on basic living expenses, and live in a relatively crime-free area, plenty of volunteers would come forward.

Let’s assume that residents of this city agree to get “chipped” so their locations are always known. Everyone’s online activities are also tracked, as are all purchases, and so on. We’ll have to assume this hypothetical city exists in the not-so-distant future when technology can handle everything I’m about to describe.

This city of no privacy wouldn’t need much of a police force because no criminal would agree to live in such a monitored situation. And let’s assume you have to have a chip to enter the city at all. The few crooks that might make the mistake of opting in would be easy to round up. If anything big went down, you could contract with neighboring towns to get SWAT support in emergency situations.

You wouldn’t need police to catch speeders. Cars would automatically report the speed and location of every driver.  That sucks, you say, because you usually speed, and you like it. But consider that speed limits in this hypothetical town would be much higher than normal because every car would be aware of the location of every other car, every child, and every pet. Accidents could be nearly eliminated.

Healthcare costs might plunge with the elimination of privacy. For example, your pill container would monitor whether you took your prescription pills on schedule. I understand that noncompliance of doctor-ordered dosing is a huge problem, especially with older folks.

Interesting to see Adams factoring in one inevitable outcome of a transparent society, wherein things that we’re obliged to keep secret become a much smaller deal once it becomes clear that they’re actually quite common; I’ve talked about this in relation to today’s teenagers and their propensity for publicly displaying their transgressions of “acceptable” behaviour, but Adams uses it to highlight health insurance issues as well:

Employment would seem problematic in this world of no privacy. You assume that no employer would hire someone who has risky lifestyle preferences, or DNA that suggests major health problems. But I’ll bet employers would learn that everyone has issues of one kind or another, so hiring a qualified candidate who might later become ill will look like a good deal. And on the plus side, employers would rarely hire someone who had a bad employment record, as that information would not be as hidden as it is today. Bad workers would end up voluntarily moving out of the city to find work. Imagine a world where your coworkers are competent. You might need a lack of privacy to get to that happy situation.

Just to be clear, I’m not holding up Adams’ hypothetical city as some sort of ideal or exemplar that I’d want to live in (and I’m not sure that Adams is trying to do that either), but he’s raising some interesting points about the power of transparency to fix prices and squelch certain social ills. However, implicit in Noprivacyville is some sort of panopticon governance system; your basic choices there are rhizomatic or hierarchical, which would make for very different living experiences and degrees of personal involvement with the politics of your new city-state.

I’m sure someone will tell me how I’m totally wrong about this, but I’m convinced we’ll see experiments of both sorts in the relatively near future as the nation-state model continues to collapse under its own structural weight. As Adams says, plenty of people would see Noprivacyville as a worthwhile exchange; how long they’d retain that opinion, however, is very much an open question.

Kinect: the Big Brother peripheral?

Concerns begin to arise around the capabilities of Microsoft’s Kinect controller – what exactly are you allowing into your front room? [via MonkeyFilter]

On Thursday, Microsoft Vice President Dennis Durkin told the BMO Digital Entertainment Investor Conference in New York that Kinect offers “a really interesting opportunity” to target content and ads based on who is playing, and to send data back to advertisers.

“When you stand in front of it,” he said, according to news reports, “it has face recognition, voice recognition,” and “we can cater what content gets presented to you based on who you are.” Your wife, Durkin added, could see a different set of content choices than you do, and this can include advertising.

The advertiser will also know, he said, “how many people are in a room when an advertisement is shown,” or when a game is played. He said the system, and therefore advertisers, can also know how many people are engaged with a game or a sporting event, if they are standing up and excited — even if they are wearing Seahawks or Giants jerseys.

We’ve heard about this sort of capability before, but not in such affordable and desirable household consumer electronics items as the Kinect. Microsoft would like to assuage any concerns, however:

Apparently as a result of Durkin’s remarks, Microsoft issued a statement Thursday that neither its Xbox 360 video-game controller nor Xbox Live “use any information captured by Kinect for advertising targeting purposes.”

The instinctively paranoid and mistrustful might find themselves appending a “… yet!” onto the end of that statement. And long-time Microsoft haterz will get a wry chuckle out of this follow-up:

The company added that it has a strong track record “for implementing some of the best privacy-protection measures in the industry.”

Erm, right.

Anyway, the Kinect (much like the similar devices which will doubtless follow hot on its heels) isn’t inherently nasty… but it does have the capability to be misused in Orwellian ways. Which is why I’m always glad to see clever hacker types reverse-engineering drivers for proprietary hardware; knowledge is power.

Spying on employees on social networks… before you hire them

This isn’t exactly a new phenomenon, but it’s the first example I’ve seen of an outfit offering a service for outsourcing this sort of Human Resources gruntwork: a new startup gnomically named Social Intelligence promises to do a deep scan of a potential employee’s socnet presences in 48 hours, focussing on such catch-all categories as “‘Poor Judgment,’ ‘Gangs,’ ‘Drugs and Drug Lingo’ and ‘Demonstrating Potentially Violent Behavior.'” [via Bruce Schneier]
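For a sense of how crude this sort of automated trawling can be, here’s a toy sketch of keyword-based category flagging. To be clear, the categories and terms below are invented for illustration; whatever Social Intelligence actually runs is presumably more sophisticated (and backed by human review):

```python
# Hypothetical flag categories mapped to trigger terms -- purely
# illustrative, not Social Intelligence's actual taxonomy or wordlists.
FLAG_CATEGORIES = {
    "poor_judgment": {"fired", "wasted", "blackout"},
    "drugs_and_drug_lingo": {"420", "scoring", "blazed"},
    "potentially_violent": {"fight", "threatened"},
}

def scan_posts(posts):
    """Return {category: [matching posts]} for every post that
    contains at least one flagged term (naive whole-word match)."""
    report = {}
    for post in posts:
        words = set(post.lower().split())
        for category, terms in FLAG_CATEGORIES.items():
            if words & terms:
                report.setdefault(category, []).append(post)
    return report

hits = scan_posts([
    "Got totally blazed at the weekend lol",
    "Lovely walk in the park today",
])
print(hits)  # only the first post trips a category
```

Even this trivial version shows the false-positive problem: keyword matching has no notion of irony, quotation, or context, which is exactly why outsourcing a hiring judgement to it should give HR departments pause.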

My instant knee-jerk reaction to this was OMG Panopticon! But if you think about it, it’s really just doing what paper references used to do, for a world where the fakeability and legal complications of references have made them much less useful. It’s easy to forget that social networks are a very old phenomenon; it’s their cybernetic extension into information space that’s new, and we’re all learning how to navigate these widening savannahs as we go along.

“But what about the kids? They have no concept of privacy, nor the sense to cover up their indiscretions!” Well, then the problem will solve itself, as I suggested a while back: if an entire generation starts falling foul of hawk-eyed HR socnet trawlers, the playing field will flatten. If everyone has a few dumb indiscretions on public display, we’ll simply become more accepting of the fact that everyone does stupid stuff every now and again. If anything, it’ll be the people with totally clean sheets who start to look suspect.

Schneier points out that the service is being marketed using scare tactics:

Two aspects of this are worth noting. First, company spokespeople emphasize liability. What happens if one of your employees freaks out, comes to work and starts threatening coworkers with a samurai sword? You’ll be held responsible because all of the signs of such behavior were clear for all to see on public Facebook pages. That’s why you should scan every prospective hire and run continued scans on every existing employee.

In other words, they make the case that now that people use social networks, companies will be expected (by shareholders, etc.) to monitor those services and protect the company from lawsuits, damage to reputation, and other harm. And they’re probably right.

They probably are right… but incidents like that are far rarer than the cognitive bias of media coverage would have us believe. Perhaps it’ll be fashionable for a while, but in tough economic times like these, I doubt there’ll be many companies willing to fork out big bucks to salve the legal department’s paranoia… though I have underestimated the stupidity of the hierarchical corporate mindset many times before, so I’m prepared to be proven wrong on that point.

Bonus panopticon news: the latest development over here in the United Kingdom of Closed Circuit Surveillance is an outfit called Internet Eyes, which is offering a bounty of up to £1,000 for any user who spots a crime being committed on the feeds of private security footage that will be piped through the site.

Again, sounds pretty nasty (though I’m rather alarmed by how desensitised I’ve become to stories like this in recent years), but I can’t see it working as well as Internet Eyes thinks it will. How’re they going to vet their userbase (who will watch the watchmen, indeed)? Are the sorts of people willing to stare at grainy and uneventful video feeds for hours on end on the off-chance of winning some money the sort of people whose vigilance and motives best suit the task at hand? What if the mighty Anonymous decided to infiltrate the userbase (for LULZ and great justice)? Or if criminal syndicates placed their own low-level operatives on the site, found out who was watching which feeds at what times and then planned their jobs accordingly?

And all of that largely bypasses the underlying problem, namely that Internet Eyes’ business plan almost certainly contravenes EU privacy laws. That said, the UK isn’t exactly unfamiliar with doing just that.