No Lie MRI to Begin Offering “Lie Detection” Services

In “Betrayed By Your Brain?” (9 October 2006) Philadelphia Inquirer staff writer Faye Flam reports on No Lie MRI, a Philadelphia start-up company that will soon offer “lie detection” services to the public:

Betrayed by your brain?
A Phila. company is poised to offer a lie-detecting MRI, though questions about its reliability remain.
By Faye Flam
Inquirer Staff Writer

Orwell’s 1984 thought police used the age-old tactic of intimidation to get into people’s heads, but by 2084, authorities could have more direct access. Scientists are already starting to use brain scanning, EEG and other tools to extract information directly from the brain.

“The science really has gone to the point where under very controlled circumstances you can tell whether someone is lying,” says Paul Root Wolpe, a bioethicist at the University of Pennsylvania.

This month, a Philadelphia-based company called No Lie MRI anticipates sliding its first clients into a scanning machine. Founder Joel Huizenga says they run the gamut, from women trying to prove they didn’t cheat on their husbands to convicts claiming innocence. A priest wants to clear charges of child molestation, and an Australian man wants to use the scan to prove he’s not gay. There’s also interest from Internet dating companies.

Huizenga says he’s charging $30 a minute and the procedure should take about an hour.

Many brain scientists say customers won’t get much for their money, since the technique will not deliver answers, only odds. But they also agree that technology is starting to break into the mind in a new way, opening up some novel ethical questions.

“Do we have a right to privacy in terms of subjective thoughts?” asks Penn’s Wolpe. “Who will have access to them?”

Soon after 9/11, a burst of government funding went into high-tech interrogation tools. Some of that went to Penn, where psychiatrist Daniel Langleben and neuropsychiatrist Ruben Gur had already spent several years exploring the potential of a brain scanning technique called functional magnetic resonance imaging, or fMRI.

The machine roughly measures changes in activity in different regions of the brain by tracking hemoglobin in the blood. Oxygenated and deoxygenated hemoglobin have different magnetic properties; when neurons in a region become active, fresh oxygen-rich blood flows in, shifting that balance in a way the MRI’s magnet detects.

The researchers tried several experiments. One involved what’s called the guilty knowledge test, designed to determine whether you’re familiar with a particular person, object, scene or fact. Gur reasoned that he might discern whether, say, a suspect was familiar with a crime scene or someone associated with a terrorist act by how his brain reacted to a photograph.

Experiments so far show that indeed familiar objects elicit a different brain response than new ones, says Gur.

He and Langleben also did an experiment in which they gave their subjects a known playing card, then instructed them to lie about the card while under the scanner. Though they found no single lying center in the brain, when they averaged their subjects’ responses together, the distribution of brain activity looked different for the lies than the truthful statements.

But could they detect when an individual was lying? To find out, Langleben gave 26 male students envelopes with two playing cards and a $20 bill. They were told they could keep the envelopes if they could fool his colleague about the contents.

They were then led to a building with the fMRI. There, the colleague instructed them to tell the truth.

That way, Langleben says, he was prompting more spontaneous lying, as opposed to lying on cue, which he considered more like acting. He says 11 of the volunteers tried to fool the scanner by concentrating harder while they were telling the truth. For most, it didn’t help, he says. Two subjects, however, did manage to lie without their brains giving anything discernible away. “I’m sending them to the CIA,” he joked.

Overall, the scientists discriminated lies from truth between 76 and 85 percent of the time. The questioning technique is crucial, Langleben says. “We’re not talking about mind-reading… fMRI can’t just tap into someone’s brain and read free-flowing thoughts.”

In these more recent experiments, Langleben saw a significant difference in parts of the frontal cortex, which is involved with inhibition.

In 2001 the Penn experiments caught the attention of biologist-turned-entrepreneur Huizenga, who eventually bought patents from Penn and started No Lie MRI. Another company, called Cephos, is also developing fMRI lie detection, while still another is promoting the use of EEG to do “brain fingerprinting.”

Huizenga says he has more than 50 clients lined up – a mix of personal and legal cases.

He hopes to unveil the technology later this month before 24 television stations in California.

He says he thinks the technique could help people build trust. “Civilization is built on trust,” he says.

But should we trust him?

J. Peter Rosenfeld, a psychologist at Northwestern University, says he has no trouble believing fMRI can discriminate lies 80 percent of the time, but that’s still a huge error rate. Since the 1980s, Rosenfeld has been experimenting with EEG to see lies in the form of brain waves. He says his most recent round of experiments suggests it’s more reliable and accurate than fMRI.

Stephen Fienberg, a statistician at Carnegie Mellon University, is skeptical of both. He headed a recent National Academy of Sciences panel that evaluated the polygraph.

Fienberg argues that neither fMRI nor EEG has demonstrated greater reliability than the polygraph, which his panel deemed too inaccurate for the government to use to screen employees.

“We’re looking at technologies that have not been proven and have many of the same pitfalls as we articulated in our report,” he says.

Polygraphs measure sweat, pulse changes and several other signals of stress. In carefully controlled studies, they gave about 20 to 30 percent false negatives and false positives, he says. Some studies showed the polygraph could tell you which subjects were lying about 70 percent of the time, he says, but if you read the fine print, “only about half were detected as being deceptive about the right question.”
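The gap between raw accuracy and real-world usefulness comes down to base rates. As an illustrative sketch (the numbers here are assumptions for the sake of example, not figures from the article), even a test that correctly flags 80 percent of liars and correctly clears 80 percent of truth-tellers yields mostly false alarms when lying is rare in the screened population:

```python
# Illustrative base-rate calculation (assumed numbers, not from the article):
# with 80% sensitivity and 80% specificity, a screening population where
# only 5% are actually lying produces far more false alarms than true hits.

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              base_rate: float) -> float:
    """Probability that a subject flagged as lying really is lying."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.80, 0.80, 0.05)
print(f"Chance a 'lie' verdict is correct: {ppv:.0%}")  # about 17%
```

Under these assumed numbers, roughly five out of six “lie” verdicts would land on people telling the truth, which is why critics like Fienberg object to using such tests for mass screening.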

Others say science is just starting to unravel the complex physiology of lying.

“There are about 120 words in the English language that apply to different gradations of deception,” says Jennifer Vendemia, a psychologist at the University of South Carolina. Some people tell lies to hurt others, others to spare them. We tell them to impress people and to avoid blame.

All these differences have consequences for how you model your brain measurements, she says. “The game is not going to be won with technology – it’s going to be won when we understand what’s happening in the brain when we deceive.”

And, yet, while the scientific understanding remains sketchy, she’s seen some provocative results from her own work. She attaches 120 electrodes to the heads of her volunteers and has found that at least 80 percent of the time their brains give off a telltale signal a few milliseconds before they actually tell a lie. She attributes this to the mental effort that precedes the lie.

Meanwhile, a half-dozen competitors around the country are touting their own approaches. But none of this technology could possibly make a dent in the pervasiveness of lying. None of the scientists have come close to pinning down those more slippery kinds of untruths: exaggeration, hype, distortion or innuendo. They don’t know what happens if you believe your own lies.

At one point, Langleben encapsulated the reason we’ll never stamp out the lie, repeating the adage, “A lie repeated often enough becomes truth.”

Contact staff writer Faye Flam at 215-854-4977 or fflam@phillynews.com.
