Tag Archives: privacy

Anatomy of a socnet background check

Over at Gizmodo, they’ve taken the social network “background check” service offered by a company called Social Intelligence for a spin. The results are interesting:

In May, the FTC gave a company called Social Intelligence the green light to run background checks of your Internet and social media history. The media made a big hullabaloo out of the ruling. And it largely got two important facts wrong.

Contrary to initial reports, Social Intelligence doesn’t store seven years’ worth of your social data. Rather it looks at up to seven years of your history, and stores nothing.

The second was the idea that it was looking for boozy or embarrassing photos of you to pass along to your employer. In fact it screens for just a handful of things: aggressive or violent acts or assertions, unlawful activity, discriminatory activity (for example, making racist statements), and sexually explicit activity. And it doesn’t pass on identifiable photos of you at all. In other words, your drunken kegstand photos are probably fine as long as you’re not wearing a T-shirt with a swastika or naked from the waist down.

Basically, it just wants to know if you’re the kind of asshole who will cause legal hassles for an employer.

[…]

… we learned a few things about how it works, and what you can do if you’ve got to have one of these reports run. And you will.

For starters, what it doesn’t include in the report is nearly as interesting as what it does. Every image of me that might be able to identify my ethnicity is blacked out, even my hands. On my homepage, a line that reads “I drink too much beer” has been obscured because it’s ultimately irrelevant. Screw you, boss man. I love my beer. (Joe: please do not fire me.)

And then there’s the stuff it didn’t find. For example, our editor in chief, Joe Brown, has a Facebook account under a different name he uses for close friends who do not want to be subjected to his work-related posts. (And, you know, to avoid annoying publicists who try to friend him.) It’s easily findable if you know his personal email address. We gave that address to Social Intelligence, but it didn’t dig up his aliased account, just his main profile.

It also seems like it helps to have a large Web footprint. Yeah, it found some negative hits. Tip of the iceberg, my man!

There was much more to find buried deep in my Google search results that could have been just as incriminating. Sometimes, on even more than one level.

Plenty more detail in that piece, but to cut a long story short, it’ll be eminently possible to live a fun, fulfilling life online and not flunk one of these background checks… although, counterintuitively perhaps, it appears that broadcasting more of your life rather than less of it is one way to help yourself.
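(For the terminally curious: the screening logic, as described, is remarkably simple in shape. Here’s a toy sketch in Python; the four flag categories come straight from the article, but the keyword matching, the redaction behaviour and every name below are invented placeholders for illustration, not Social Intelligence’s actual system.)

```python
import re

# Flag categories as reported by Gizmodo; the keyword lists are
# invented placeholders, purely illustrative.
FLAG_CATEGORIES = {
    "aggressive or violent acts/assertions": [r"\bthreat\w*\b"],
    "unlawful activity": [r"\bstolen\b"],
    "discriminatory activity": [r"\bracist\b"],
    "sexually explicit activity": [r"\bexplicit\b"],
}

def screen_post(text):
    """Return the categories a post trips. Note what is absent:
    no 'embarrassing' or 'boozy' category, which is the article's
    point about your kegstand photos being fine."""
    hits = []
    for category, patterns in FLAG_CATEGORIES.items():
        if any(re.search(p, text, re.IGNORECASE) for p in patterns):
            hits.append(category)
    return hits

def build_report(posts):
    """Only flagged categories reach the employer; everything else
    (beer, ethnicity-identifying photos) is obscured as irrelevant."""
    flagged = {post: screen_post(post) for post in posts}
    return {post: cats for post, cats in flagged.items() if cats}

if __name__ == "__main__":
    print(build_report(["I drink too much beer", "made a racist joke"]))
    # -> only the second post appears in the report
```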

But note that SI’s offer is essentially an outsourcing offer, and – deliberately, thanks to the constraints of certain laws – much more limited than a few hours of Googling an employee by name. A big firm could easily have a dedicated HR drone whose job it was to rake over the pasts of potential applicants for nasty nuggets. Hell, keep their paygrade low enough, and there’ll be plenty of axe-grinding motivation for them to dish the dirt on high-level managerial applicants; few things motivate in a shitty job as powerfully as resentment, after all. Though don’t treat ’em too bad… you wouldn’t want them agitating your own layers of silt, now would you?

(Businesses: if this sounds like a good plan to you, don’t delay, start hiring now! After all, the job market – at least here in the UK – is about to be flooded with people who’ve made a living by digging up the mundane failures and foibles of people’s private lives and exposing them to public scrutiny, so hire now while they’re still cheap! You may even find that a bit of your own research will enable you to apply the very same sort of leverage upon them, too.)

On this side of the pond, meanwhile, the European Parliament is trying to enshrine an Eric Schmidt-esque “right to be forgotten” into law. Tessa Mayes remains unimpressed:

we shouldn’t champion a right to be forgotten. Why? For one, it could be used to stifle our culture’s imagination by banning freedom of expression. It could encourage public figures to claim a “right to erase what people say about my sex life”, as some have been trying to do using superinjunctions, and as Max Mosley, whose orgy was exposed in the News of the World, failed to do in the European Court of Human Rights. But that isn’t my main reason. An exemption could be made so it refers only to data processing rather than when your data is talked about.

Neither am I arguing from a technical point of view: that there’s no point in trying to be forgotten online because it’s difficult, if not impossible, to achieve (although technical challenges don’t help).

Instead my argument is political, about the conception of individuals’ power in our society. The right to be forgotten conjures up the idea of a passive, isolated individual, outside of society. This is a figment of an imagination that believes individuals should exist in the shadows and bureaucrats should act as our puppet masters.

By contrast, at the heart of a right to privacy is the conception of us all as engaged citizens. As social beings we interact in public life. However, sometimes we need downtime from it. A right to privacy recognises that a social existence demands a public and a private life, both of which we control.

A remarkably apropos and proleptic piece of writing, considering the events of the last few days here in the UK; I suspect privacy will be a hot topic here for a good few weeks to come, too. But before we sign off on this one, let’s make a call-back to Bill Gibson’s thoughts from last year on making your past unGooglable:

… I don’t find this a very realistic idea, however much the prospect of millions of people living out their lives in individual witness protection programs, prisoners of their own youthful folly, appeals to my novelistic Kafka glands. Nor do I take much comfort in the thought that Google itself would have to be trusted never to link one’s sober adulthood to one’s wild youth, which surely the search engine, wielding as yet unimagined tools of transparency, eventually could and would do.

I imagine that those who are indiscreet on the Web will continue to have to make the best of it, while sharper cookies, pocketing nyms and proxy cascades (as sharper cookies already do), slouch toward an ever more Googleable future, one in which Google, to some even greater extent than it does now, helps us decide what we’ll do next.

We adapt. And better still, we don’t even notice ourselves adapting… possibly because we’re too busy panicking about the idea of having to adapt.

[ Cue resurrection of OMG GOOGLE IZ TOO BIG KILL IT WIV FIRE! riff in 5… 4… 3… ]

Think you have nothing to hide? Here’s why you’re wrong

Regular readers will know my deep antipathy to the pro-surveillance canard “if you’ve nothing to hide, you’ve nothing to fear”, so they’ll also know exactly why I’m linking to and quoting from this lengthy and careful debunking of it [via TechDirt]:

Legal and policy solutions focus too much on the problems under the Orwellian metaphor—those of surveillance—and aren’t adequately addressing the Kafkaesque problems—those of information processing. The difficulty is that commentators are trying to conceive of the problems caused by databases in terms of surveillance when, in fact, those problems are different.

Commentators often attempt to refute the nothing-to-hide argument by pointing to things people want to hide. But the problem with the nothing-to-hide argument is the underlying assumption that privacy is about hiding bad things. By accepting this assumption, we concede far too much ground and invite an unproductive discussion about information that people would very likely want to hide. As the computer-security specialist Schneier aptly notes, the nothing-to-hide argument stems from a faulty “premise that privacy is about hiding a wrong.” Surveillance, for example, can inhibit such lawful activities as free speech, free association, and other First Amendment rights essential for democracy.

The deeper problem with the nothing-to-hide argument is that it myopically views privacy as a form of secrecy. In contrast, understanding privacy as a plurality of related issues demonstrates that the disclosure of bad things is just one among many difficulties caused by government security measures. To return to my discussion of literary metaphors, the problems are not just Orwellian but Kafkaesque. Government information-gathering programs are problematic even if no information that people want to hide is uncovered. In The Trial, the problem is not inhibited behavior but rather a suffocating powerlessness and vulnerability created by the court system’s use of personal data and its denial to the protagonist of any knowledge of or participation in the process. The harms are bureaucratic ones—indifference, error, abuse, frustration, and lack of transparency and accountability.

Essential reading. Go now.

Don’t Take It Personally, Babe, It Just Ain’t Your Story: High School, Privacy and Blended Identity

Don’t Take It Personally, Babe, It Just Ain’t Your Story is the follow-up to Christine Love’s critically acclaimed indie game Digital: A Love Story. Much like its author’s previous work, Don’t Take It Personally is a game devoted to exploring the nature of online identity. However, while Digital expressed a delicately muted nostalgia for a fictionalised past in which cyberspace allowed Mind to detach itself from Body, Don’t Take It Personally expresses a similarly ambivalent attitude to a notional future in which privacy has become an archaic and outmoded concept.

Instruments of Politeness

Here’s a design project called Instruments of Politeness, which points out how much of what we call “politeness” is actually disguise and dissembling.

At present we can lie about our current situation because the only transmitted information is the actual conversation and background noise. In the future mobile phones will be able to estimate our activity by evaluating multiple sensors in the device. This information will not only be used by the device itself but shared with our environment. The project ‘Instruments of Politeness’ allows the user to lie about his current activity.

The gizmo there is designed to wobble your mobile device about in a manner that makes it appear to the accelerometers as if you’re taking a walk with it in your pocket (when in fact you might be at home, or in a pub, doing something generally less constructive than the errand you’re supposed to be running).
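Just to make the mechanics concrete, here’s a minimal sketch in Python of both halves of that little arms race: a synthetic “walking” signal of the sort the gizmo produces mechanically, and a deliberately naive activity classifier that it would fool. Everything here (the sample rate, the cadence band, the classifier itself) is an illustrative assumption, not a description of any actual handset or of the Instruments of Politeness hardware.

```python
import math
import random

SAMPLE_RATE_HZ = 50   # assumed accelerometer sampling rate
STEP_FREQ_HZ = 1.9    # typical walking cadence, roughly 2 steps/sec

def fake_walk_signal(seconds):
    """Synthesise vertical-axis accelerometer readings (m/s^2) that
    mimic the periodic bounce of walking: gravity, plus a ~2 Hz
    oscillation, plus a little jitter."""
    samples = []
    for i in range(int(seconds * SAMPLE_RATE_HZ)):
        t = i / SAMPLE_RATE_HZ
        bounce = 2.5 * math.sin(2 * math.pi * STEP_FREQ_HZ * t)
        jitter = random.gauss(0, 0.3)
        samples.append(9.81 + bounce + jitter)
    return samples

def looks_like_walking(samples):
    """A deliberately naive classifier: count upward zero-crossings
    of the gravity-removed signal and check the implied cadence
    falls in a plausible walking band."""
    mean = sum(samples) / len(samples)
    centred = [s - mean for s in samples]
    crossings = sum(1 for a, b in zip(centred, centred[1:]) if a < 0 <= b)
    cadence_hz = crossings / (len(samples) / SAMPLE_RATE_HZ)
    return 1.2 <= cadence_hz <= 2.8

if __name__ == "__main__":
    print(looks_like_walking(fake_walk_signal(30)))  # True: fooled
```

The real gadget does its spoofing mechanically, of course, by physically jiggling the phone, which neatly sidesteps any need to compromise the handset’s software; but the broader point stands either way: an activity model simple enough to run continuously on a phone is simple enough to feed convincing lies.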

Now, mix up whimsical little scams like this one with Scott Adams’ Noprivacyville; utopias (be they real or misdesignated) will always decay under the natural human propensity to secure a little personal advantage. Or, in other words: Everything Can And Will Be Hacked.

Via those fascinating folk at BERG, who – despite the name – seem to do very little involving actual rockets, but an awful lot of other cool stuff.

Scott Adams’ transparent burbclave

Via SlashDot, here’s a provocative post from Scott “Dilbert” Adams where he contemplates the costs of privacy by trying to imagine a sort of gated community where you surrender a lot of privacy in exchange for living in a more affordable, safe and efficient environment. It’s like a hybrid of David Brin’s Transparent Society and Neal Stephenson’s burbclaves… and given how certain sections of the US seem to be reading Snow Crash as a manual of statecraft rather than a dystopian warning, maybe Noprivacyville isn’t as ludicrous as you’d initially imagine.

Although you would never live in a city without privacy, I think that if one could save 30% on basic living expenses, and live in a relatively crime-free area, plenty of volunteers would come forward.

Let’s assume that residents of this city agree to get “chipped” so their locations are always known. Everyone’s online activities are also tracked, as are all purchases, and so on. We’ll have to assume this hypothetical city exists in the not-so-distant future when technology can handle everything I’m about to describe.

This city of no privacy wouldn’t need much of a police force because no criminal would agree to live in such a monitored situation. And let’s assume you have to have a chip to enter the city at all. The few crooks that might make the mistake of opting in would be easy to round up. If anything big went down, you could contract with neighboring towns to get SWAT support in emergency situations.

You wouldn’t need police to catch speeders. Cars would automatically report the speed and location of every driver. That sucks, you say, because you usually speed, and you like it. But consider that speed limits in this hypothetical town would be much higher than normal because every car would be aware of the location of every other car, every child, and every pet. Accidents could be nearly eliminated.

Healthcare costs might plunge with the elimination of privacy. For example, your pill container would monitor whether you took your prescription pills on schedule. I understand that noncompliance of doctor-ordered dosing is a huge problem, especially with older folks.

Interesting to see Adams factoring in one inevitable outcome of a transparent society, wherein things we currently feel obliged to keep secret become a much smaller deal once it’s plain that they’re actually quite common; I’ve talked about this in relation to today’s teenagers and their propensity for publicly displaying their transgressions of “acceptable” behaviour, but Adams uses it to highlight health insurance issues as well:

Employment would seem problematic in this world of no privacy. You assume that no employer would hire someone who has risky lifestyle preferences, or DNA that suggests major health problems. But I’ll bet employers would learn that everyone has issues of one kind or another, so hiring a qualified candidate who might later become ill will look like a good deal. And on the plus side, employers would rarely hire someone who had a bad employment record, as that information would not be as hidden as it is today. Bad workers would end up voluntarily moving out of the city to find work. Imagine a world where your coworkers are competent. You might need a lack of privacy to get to that happy situation.

Just to be clear, I’m not holding up Adams’ hypothetical city as some sort of ideal or exemplar that I’d want to live in (and I’m not sure that Adams is trying to do that either), but he’s raising some interesting points about the power of transparency to drive down prices and squelch certain social ills. However, implicit in Noprivacyville is some sort of panopticon governance system; your basic choices there are rhizomatic or hierarchical, which would make for very different living experiences and degrees of personal involvement with the politics of your new city-state.

I’m sure someone will tell me how I’m totally wrong about this, but I’m convinced we’ll see experiments of both sorts in the relatively near future as the nation-state model continues to collapse under its own structural weight. As Adams says, plenty of people would see Noprivacyville as a worthwhile exchange; how long they’d retain that opinion, however, is very much an open question.