Tag Archives: trust

Reputation management services

If I were a bright-eyed huckster with a sharp suit and few morals (or should that be fewer?), online reputation management would be one of the business models I’d be thinking about putting into action. For as they say in Yorkshire, “where there’s muck, there’s brass”… quite a lot of brass, in fact, if this NYT piece is to be believed:

Reputation.com advertises an annual membership fee of $99, but Mr. Fertik said that costs could easily reach $10,000 for a prominent person who wanted to make a scandal harder to discover through Internet searches. (He said Mr. Weiner was probably out of luck: “It would take a long time and more money than he has.”)

For the detective work, the costs escalate quickly. Michael J. Hershman, president of the Fairfax Group, a risk and reputation management firm, said burying negative information could cost $500 to $1,000, but persuading search engines to expunge incorrect information could cost several thousand dollars more. Getting that information removed from aggregating Web sites like Intellius or PeopleFinder can add another couple of thousand dollars.

Costs can spike into five figures when a firm is asked to find the people responsible for the defamatory blog post or Twitter message. “If you’re going to hire a firm like ours to find that person, it’s hit or miss,” Mr. Hershman said. “We can’t guarantee success. It’s not as easy as going to the search engines.”

There’s a pretty obvious parallel here with the UK-centric phenomenon of super- and hyper-injunctions; the traditional privacy of the rich and privileged is becoming increasingly difficult to maintain in the face of network culture. (This is, if I understand it correctly, the intended meaning of the old saw about “information wanting to be free”, rather than as a dubious ideological justification for content piracy.)

It remains to be seen whether power and money will win the battle in the long run; it probably won’t surprise regular readers to know that I rather hope it doesn’t, because that would mean the rich and powerful would be obliged to think about the potential fallout from their indiscretions before committing them, or face the consequences like everyone else.

The flipside is that life-damaging falsehoods can proliferate with equal ease, deliberately or accidentally, and that corrections to erroneous reports rarely have the same high profile or link-back rate as the initial reports themselves. That said, the nature of network culture suggests that concerted efforts to publicise truth and retractions are likely to be just as effective as deliberate smears or falsehoods propagated with the same degree of effort. As more and more raw data and evidence becomes part of the online ecosystem, it should in theory be possible to defend the truth more effectively as time goes by… but that discounts the regrettable realities of confirmation bias. As so many sensitive topics demonstrate – from global-level biggies like climate science, all the way down to gender representation disparities in science fiction publishing – no amount of data will convince those who simply don’t want to be convinced.

At this point in my thought-train, it’s time to bring in that small yet hardy perennial of geek-futurist topics, the reputation currency. These are still in their infancy, and as such are very open to gaming and logrolling; Amazon review ratings, for instance, vary wildly in their utility from product to product, though the eBay system is a little more robust and trustworthy, provided one does one’s due diligence. But that’s the key, I suspect; in the same way that I think we have to take responsibility for regulating the behaviour of corporations by thinking carefully about where we spend our money and/or attention-time, I think it’s also down to us to make sure we only trust systems that are trustworthy.
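One standard defence against the small-sample gaming that plagues systems like Amazon’s ratings is to damp each item’s score toward a global prior until enough genuine ratings accumulate. A minimal sketch (the function name and parameter values here are illustrative, not any particular site’s actual algorithm):

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=10):
    """Damp an item's average rating toward a global prior.

    With few ratings the score stays near prior_mean, so a handful of
    planted five-star reviews can't dominate; as genuine ratings
    accumulate, the observed average gradually takes over.
    """
    n = len(ratings)
    if n == 0:
        return prior_mean
    observed_mean = sum(ratings) / n
    return (prior_weight * prior_mean + n * observed_mean) / (prior_weight + n)
```

Three planted five-star reviews only lift the score a little above the prior, whereas a hundred honest ones dominate it, which is exactly the robustness-through-volume property that makes eBay-style systems more trustworthy than thinly-reviewed product pages.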

Easier said than done, of course, as it would require a pretty fundamental shift in attitudes toward who is responsible for protecting us from the more miscreant members of the species. But there’s another topical example that provides a potential model for a currency of trustworthiness, and that’s BitCoin. Only a trust currency would actually have to be a sort of mirror image of BitCoin, in that it would have to be completely transparent at the transaction level, with every exchange documented and verified by the cloud of peers. (Whether such a system could ever scale to a national or even global level is beyond my limited grokking of cryptotech; it’s a subject I really need to dig into properly at my soonest opportunity.)
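To make the “mirror image of BitCoin” idea concrete: the transparency requirement could be met with an append-only, hash-chained ledger in which every endorsement names both parties in the clear, so any peer holding a copy can detect tampering. This is purely a sketch of that one property — all class and field names are my own invention, and it handles none of the consensus machinery a real peer-verified system would need:

```python
import hashlib
import json

def entry_hash(entry):
    """Canonical SHA-256 digest of a ledger entry."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class TrustLedger:
    """Append-only, fully public ledger of trust endorsements.

    Unlike BitCoin's pseudonymous transactions, every record here names
    truster and trustee openly; integrity comes from hash-chaining, so
    rewriting any past entry breaks every hash that follows it.
    """
    def __init__(self):
        self.entries = []

    def endorse(self, truster, trustee, weight):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"from": truster, "to": trustee, "weight": weight, "prev": prev}
        self.entries.append({**body, "hash": entry_hash(body)})

    def verify(self):
        """Re-derive every hash; True only if the whole chain is intact."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("from", "to", "weight", "prev")}
            if e["prev"] != prev or e["hash"] != entry_hash(body):
                return False
            prev = e["hash"]
        return True
```

Anyone can audit who vouched for whom, which is the opposite of a privacy coin; the hard unsolved part, as noted above, is getting a cloud of peers to agree on one canonical chain at national scale.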

In the short- to medium-term, however, I think we can expect to see reputation management become an increasingly expensive and cut-throat theatre of business, alongside a broad swathe of attempts to reinstate the privilege of privacy using the statute books. With any luck, though, the continual exposure of politicians and celebrities as having the same suite of flaws and stupidities as the rest of us might eventually encourage us to look past the headlines and start asking the questions that really matter… namely what these people do when they’re actually at work on our dime.

Trust and utility

This Freeman Dyson article/review at the New York Review Of Books has many interesting points in it, and the new James Gleick book it discusses sounds like a title I’ll need to get my hands on at some point (his biography of Richard Feynman is a fascinating work). But there was a pair of sentences that really just leapt out at me, and I offer them here without further comment (but with a little emphasis):

Among my friends and acquaintances, everybody distrusts Wikipedia and everybody uses it. Distrust and productive use are not incompatible.

Merry Christmas; I got you a panopticon

Two quick links; I’ll leave you to do the math yourself. First up – ‘smart’ CCTV system learns to spot suspicious behaviour with a little help from its human operators:

… a next-generation CCTV system, called Samurai, which is capable of identifying and tracking individuals that act suspiciously in crowded public spaces. It uses algorithms to profile people’s behaviour, learning about how people usually behave in the environments where it is deployed. It can also take changes in lighting conditions into account, enabling it to track people as they move from one camera’s viewing field to another.

[…]

Samurai is designed to issue alerts when it detects behaviour that differs from the norm, and adjusts its reasoning based on feedback. So an operator might reassure the system that the person with a mop appearing to loiter in a busy thoroughfare is no threat. When another person with a mop exhibits similar behaviour, it will remember that this is not a situation that needs flagging up.
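The alert-plus-feedback loop described there reduces to a fairly simple shape: flag anything outside the learned norm, and let operator dismissals expand that norm. A toy sketch of the logic — every name here is illustrative, and the real system presumably works on learned statistical profiles rather than literal pattern matching:

```python
class BehaviourMonitor:
    """Toy model of a Samurai-style alert-and-feedback loop.

    Behaviour is reduced to a coarse label; anything not yet known to be
    routine raises an alert, and an operator's dismissal is remembered so
    that similar behaviour is not flagged again.
    """
    def __init__(self, known_normal=()):
        self.normal = set(known_normal)   # behaviour learned as routine
        self.dismissed = set()            # patterns operators waved through

    def observe(self, pattern):
        """Return True if this behaviour should be flagged to an operator."""
        return pattern not in self.normal and pattern not in self.dismissed

    def operator_dismiss(self, pattern):
        """Operator feedback: this behaviour (e.g. loitering with a mop) is benign."""
        self.dismissed.add(pattern)
```

So the first cleaner loitering with a mop triggers an alert; once an operator waves it through, the second one doesn’t — which is both the system’s selling point and, given that operators decide what counts as “normal”, exactly where the panopticon worry bites.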

And secondly – a facial recognition door lock system retailing for under UK£300.

… can store and register up to 500 faces thanks to an internal dual sensor and two cameras. This, claims the manufacturer, “allows it to establish an incredible facial recognition algorithm in a fraction of a second”. Importantly, the system also works at night. A 3.5 inch screen and touch keypad are also included.

The system can also be used to record attendance in an office. There’s a USB and Ethernet port so that managers can download or keep track of who arrives and leaves the office when.

I have the sudden urge to talk at length to people about the findings of the Stanford Prison Experiment.