Tag Archives: security

Brain scans: an end to lying?

This is one of the most science-fictional-sounding news stories to surface in the past few days. From the New York Times:

India has become the first country to convict someone of a crime relying on evidence from this controversial machine: a brain scanner that produces images of the human mind in action and is said to reveal signs that a suspect remembers details of the crime in question.

For years, scientists have peered into the brain and sought to identify deception. They have shot infrared beams through liars’ heads, placed them in giant magnetic resonance imaging machines and used scanners to track their eyeballs. Since the Sept. 11 attacks, the United States has plowed money into brain-based lie detection in the hope of producing more fruitful counterterrorism investigations.

The technologies, generally regarded as promising but unproved, have yet to be widely accepted as evidence — except in India, where in recent years judges have begun to admit brain scans. But it was only in June, in a murder case in Pune, in Maharashtra State, that a judge explicitly cited a scan as proof that the suspect’s brain held “experiential knowledge” about the crime that only the killer could possess, sentencing her to life in prison.

Psychologists and neuroscientists in the United States, which has been at the forefront of brain-based lie detection, variously called India’s application of the technology to legal cases “fascinating,” “ridiculous,” “chilling” and “unconscionable.” (While attempts have been made in the United States to introduce findings of similar tests into court cases, these generally have been by defense lawyers trying to show the mental impairment of the accused, not by prosecutors trying to convict.)

If the technology actually works, it holds obvious promise in law enforcement and security efforts. Polygraph tests are notoriously unreliable, measuring anxiety rather than truth-telling, and so-called “truth serum” just makes people babble. But:

After passing an 18-page promotional dossier about the BEOS test to a few of his colleagues, Michael S. Gazzaniga, a neuroscientist and director of the SAGE Center for the Study of the Mind at the University of California, Santa Barbara, said: “Well, the experts all agree. This work is shaky at best.”

Even if brain-scan lie-detection technology doesn’t work as advertised yet, though, that doesn’t mean it won’t work in the future, raising any number of issues that will vary from country to country depending on each nation’s particular legal system.

And if it becomes well established? Then those tempted to break the law could literally be warned, “Don’t even think about it.”

(Image: Wikimedia Commons.)


Walking the Walk

Get this: the next time you’re at the airport, security cameras could be watching your every step and feeding it into a computer, where security officials could cross-check your gait against CCTV footage to spot suspected terrorists:

A database of different gaits thus created may enable security officials to recognise the gait of individuals checking in at an airport, even before they entered the concourse. The researchers say that a comparison of such data with CCTV footage may also be used to track suspect terrorists or criminals who may otherwise be disguising their features or be carrying forged documents.

What about privacy issues?

They insist that gait recognition has a significant advantage over more well-known biometrics, including fingerprinting and iris scanning, in that it is entirely unobtrusive.

It seems like a workable idea, but when you consider how many people pass through airports every day, how long it would take to capture enough gaits to build a useful database, and how accurate the recognition would need to be, you start to doubt how practical it really is. [image by chilling soul]
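The matching step itself isn’t the hard part, mind you. Here’s a minimal sketch of how a gait signature pulled from footage might be compared against a database – purely illustrative, assuming the signatures are plain feature vectors and using cosine similarity as the metric; the actual feature extraction (stubbed here with random numbers) is where the real research lies.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two gait feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_gait(query, database, threshold=0.9):
    """Return the best-matching identity and score, or (None, threshold) if nothing clears it."""
    best_id, best_score = None, threshold
    for identity, signature in database.items():
        score = cosine_similarity(query, signature)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id, best_score

# Hypothetical usage: real signatures would come from a gait-analysis pipeline,
# not from random numbers as they do here.
database = {
    "passenger_0001": np.random.rand(64),
    "passenger_0002": np.random.rand(64),
}
query = np.random.rand(64)  # stand-in for features extracted from CCTV footage
print(match_gait(query, database))
```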

Innocence-Sensitive Spy Cams

Since 9/11, the government’s use of video surveillance on the public has increased dramatically (this opens a new window with a .pdf). While the vast majority of this surveillance has been implemented to “protect the country from another 9/11-style attack”, it has been used in other arenas as well, namely in attempts to catch wanted criminals. Its effectiveness in that capacity is questionable at times, and the effects of such surveillance on society are worth noting [photo courtesy of kafka4prez].

However, companies like 3VR – one of the largest surveillance software and video-analysis producers in the world – have begun developing privacy-enhancing software that would seek to protect innocent people from being falsely targeted by authorities. Their software aims to visibly blur every face in video surveillance unless an investigation requires that the people in the video be identified. It’s only a small step toward countering the immense violations by the NSA not too long ago, but at least it’s something.
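The basic blur-everything-by-default idea isn’t exotic, either. This is a minimal sketch of it – not 3VR’s actual software, whose internals aren’t public – using OpenCV’s bundled Haar-cascade face detector as a stand-in for whatever detector a commercial system would use.

```python
import cv2

# Load OpenCV's bundled frontal-face detector (a stand-in for a commercial detector).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame, blur_kernel=(51, 51)):
    """Return a copy of the frame with every detected face heavily blurred."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    output = frame.copy()
    for (x, y, w, h) in faces:
        output[y:y + h, x:x + w] = cv2.GaussianBlur(
            output[y:y + h, x:x + w], blur_kernel, 0
        )
    return output

# Hypothetical usage on a single saved frame; a real system would process a live stream
# and keep the unblurred original under access controls for authorised investigations.
frame = cv2.imread("cctv_frame.jpg")
if frame is not None:
    cv2.imwrite("cctv_frame_blurred.jpg", blur_faces(frame))
```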

Web wars – white hats versus black in botnet battles

They may be off the news radar at the moment, but botnets are still a serious bugbear for computer security professionals – it’s hard work trying to defeat something that fights back, after all. [image by Rodrigo Senna]

So here’s a new idea from the University of Washington – why not fight fire with fire, and build a white hat botnet to defend against the DDoS attacks of the black hat botnets?

“Their system, called Phalanx, uses its own large network of computers to shield the protected server. Instead of the server being accessed directly, all information must pass through the swarm of “mailbox” computers.

The many mailboxes do not simply relay information to the server like a funnel – they only pass on information when the server requests it. That allows the server to work at its own pace, without being swamped.”

Sounds like a good plan. It’s beyond my knowledge levels, but the guys at Techdirt seem to think it’s a creative approach.
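As far as I can tell from the description, the key trick is that the mailboxes buffer incoming traffic and only hand it over when the protected server pulls it. Here’s a toy sketch of that pull model – nothing to do with Phalanx’s real implementation, just the general idea under those assumptions.

```python
import queue

class Mailbox:
    """Toy model of a Phalanx-style 'mailbox' node: it buffers client
    requests and only releases them when the protected server pulls."""

    def __init__(self, capacity=1000):
        self._buffer = queue.Queue(maxsize=capacity)

    def deposit(self, request):
        """Client side: store a request; drop it if the buffer is full,
        so a flood fills the mailbox rather than the server."""
        try:
            self._buffer.put_nowait(request)
            return True
        except queue.Full:
            return False

    def pull(self):
        """Server side: fetch one buffered request, or None if idle.
        The server calls this at its own pace, so it is never swamped."""
        try:
            return self._buffer.get_nowait()
        except queue.Empty:
            return None

# Hypothetical usage: many mailboxes front a single protected server.
mailboxes = [Mailbox() for _ in range(4)]
mailboxes[0].deposit(b"GET /index.html")
for box in mailboxes:
    req = box.pull()
    if req is not None:
        print("server handles:", req)
```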

As a recent convert to Linux, this is the part where I smugly remind everyone that if certain commercially ubiquitous operating systems weren’t so riddled with security flaws, botnets wouldn’t be a problem anyway.

Transparency bites – Brin blasts back

Wired has given David Brin some rebuttal space to defend his Transparent Society concept in response to Bruce Schneier’s recent criticisms (as covered earlier here on Futurismic):

“How did we get the freedom we already have, becoming the first civilization in history to (somewhat) defy ancient patterns? Yes, it’s imperfect, always under threat. We swim against hard currents of human nature. But reciprocal accountability is the innovation that lets us even try.

Schneier claims that The Transparent Society doesn’t address “the inherent value of privacy.” But several chapters do, and I conclude that privacy is an inherent human need, too important to leave in the hands of state elites, who are themselves following ornate information-control rules written by other elites — rules, by the way, that never work. (Robert Heinlein said “‘privacy laws’ only make the bugs smaller.”)”

Going back and reading Schneier’s piece again, it does seem like he’s arguing a similar point from a different direction – they’re both opposed to top-heavy hierarchies of control. It would be great if Wired could arrange some sort of formal public debate between Schneier and Brin – the topic has never been more relevant, after all, and as Cory Doctorow points out, talking about these issues is the best way to ensure things don’t get any worse. [image by David de Groot]