What if the best way to protect against identity theft was not to hide the fingerprints of your digital daily life, but to expose them to public scrutiny? It sounds like an Orwellian contradiction, but Alex Pentland of MIT’s Human Dynamics Lab believes that allowing limited access to logs of our electronic activities is actually much safer than relying on passwords or keys, which can be phished or stolen. [image by hyku]
“You are what you do and who you do it with,” says Pentland. Researchers and corporations have realised the potential of such data mining, he points out. “It is already happening and it is time for people to get a stake.”
If people gain control of their own personal data mines, rather than allowing them to be built and held by corporations, they could use them not only to prove who they are but also to inform smart recommendation systems, Pentland says.
He recognises that allowing even limited access to detailed logs of your actions may seem scary. But he argues it is safer than relying on key-like codes and numbers, which are vulnerable to theft or forgery.
If I understand my cryptographic principles correctly (and I may well not, so do put me straight in the comments if I’m wrong), Pentland is proposing something a little bit like a public key verification system. Perhaps in this case “your best defence is a good offence”… the sort of thing that could easily be combined with some sort of reputation-based currency like whuffie? And hey, he’s advising we take our data back from the corporations that already scrape at it when we’re not watching. Makes sense, right?
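To make my analogy concrete (and this is strictly my own toy illustration, not anything from Pentland's actual design): it's a bit like a hash commitment. You publish a digest of your activity log openly — the digest alone reveals nothing — and later prove you're you by producing the log that reproduces it. All names here are mine:

```python
import hashlib

def commit(activity_log: list[str]) -> str:
    """Publish this digest openly; on its own it reveals nothing."""
    return hashlib.sha256("\n".join(activity_log).encode()).hexdigest()

def verify(activity_log: list[str], published_digest: str) -> bool:
    """A third party checks a revealed log against the public digest."""
    return commit(activity_log) == published_digest

log = ["09:02 call to contact A", "09:15 card payment, cafe", "09:40 login, work VPN"]
digest = commit(log)                  # shared publicly in advance
assert verify(log, digest)            # the real owner can reproduce it
assert not verify(log[:-1], digest)   # a partial or forged log cannot
```

The point of the analogy: the "public" part is safe to expose, and only someone who actually lived (and logged) that life can produce the matching private part — which is roughly why Pentland can claim it beats a stealable password.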
“It is not feasible for a single organisation to own all this rich identity information,” Pentland says. What he envisages instead is the creation of a central body, supported by a combination of cellphone networks, banks and government bodies.
That bank could provide “slices” of data to third parties that want to check a person’s identity. That information could be much like that required to verify high-level security clearance in government, says Pentland.
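How would such a "slice" check work in practice? The article doesn't say, but one plausible sketch (entirely my own assumption — class and method names are hypothetical) is a challenge–response: the central body stores only per-day digests of your history, a verifier challenges you for one random day, and you reveal just that slice — never the full log:

```python
import hashlib
import random

def day_digest(records: list[str]) -> str:
    """Digest of one day's activity records."""
    return hashlib.sha256("|".join(records).encode()).hexdigest()

class DataBank:
    """Hypothetical central body: holds digests only, not the raw logs."""

    def __init__(self, history: dict[str, list[str]]):
        self.digests = {day: day_digest(recs) for day, recs in history.items()}

    def challenge(self) -> str:
        """Ask the claimant to reveal one randomly chosen day."""
        return random.choice(list(self.digests))

    def verify_slice(self, day: str, records: list[str]) -> bool:
        """Check the revealed slice against the stored digest."""
        return self.digests.get(day) == day_digest(records)

history = {
    "2010-05-01": ["call A", "gps: home", "card: grocer"],
    "2010-05-02": ["call B", "gps: office", "login: VPN"],
}
bank = DataBank(history)
day = bank.challenge()
assert bank.verify_slice(day, history[day])        # the true owner passes
assert not bank.verify_slice(day, ["fabricated"])  # an impostor fails
```

The design choice worth noticing: because the challenged day is random, an impostor would need your whole history, not just one stolen fragment — which is the "key-like codes are weaker" argument in miniature.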
Uh-oh… suddenly I’m not so keen on this idea, at least in the way Pentland is thinking about it. A peer-to-peer system, fine, I’m down with that… but handing the reins of identity verification over to banks and quangos, after having already admitted that private corporations are prone to abusing the crumbs of data we drop behind us all the time? That’s got to be a step sideways, if not backwards. Pentland has thought about ways to monetise the system, too:
An individual could also allow their data to be used by services like apps on their smartphone to provide personalised recommendations such as restaurant suggestions or driving directions. This has the potential to be much more powerful than the recommender systems built into services like Netflix and iTunes, and would help familiarise users with the value of the approach, says Pentland.
Pentland’s carrot seems to be much the same as the one dangled by the people behind Phorm: “if you’ve nothing to hide, there’s nothing to fear, and we’ll even be able to recommend you stuff that you’re more likely to want to buy!” Maybe I’m just being paranoid; I remain convinced that a certain degree of personal transparency is not only a societal good but a useful tool for personal security, but something about this particular formulation smells very bad indeed.