Shedding light on lies — and lie detectors
Polygraphs persist despite failing science’s truth test
By Robert Bazell
Chief science and health correspondent
updated 9:08 a.m. ET Dec. 4, 2007
Two fascinating spy scandals came to light recently. Both cases illustrate the government’s bizarre reliance on lie detectors, even though sound science finds polygraph tests virtually useless.
Nada Nadim Prouty worked for both the FBI and CIA before an investigation revealed she had lied to get jobs at the two agencies. The probe found that she had obtained her U.S. citizenship illegally, searched the FBI’s restricted computer files on the terrorist organization Hezbollah, and had some relatives who might be affiliated with the group.
Prouty, who was born in Lebanon, was not charged with spying. She pleaded guilty last month in federal court to defrauding the government on the immigration charges. But, as part of the plea deal, she will have to answer questions about the suspicious relatives while hooked up to a polygraph.
The other case involves Rita Chiang, an FBI agent in the bureau’s China section who was abruptly forced to surrender her gun and badge in 2002. The agency believed a mole had compromised the division and suspected Chiang, although it ultimately cleared her and allowed her to return to duty. The mole, it turned out, was the mistress of Chiang’s boss.
As a result, Chiang quit and is now suing the FBI. She alleges that, after the accusations were made, she lived under such a cloud of suspicion that it was emotionally impossible for her to function.
Why did she come under suspicion in the first place? Apparently she failed a polygraph test.
Despite their common use by the government in these and many other cases — and despite what you’ve seen in movies and TV detective shows — polygraph machines don’t work very well.
Real world failure
An extensive study from the National Academy of Sciences published in 2003 concluded that in a very controlled setting — say, with college students in a psychology lab — a polygraph can discriminate lying at “rates above chance.”
But the machine — which measures pulse, blood pressure, sweat and other physiological parameters — often fails in the real world. Countermeasures, or ways to cheat the test, are well known and widely available. That’s why the National Academy of Sciences concluded that “polygraph test accuracy may be degraded by countermeasures, particularly when used by major security threats who have a strong incentive and sufficient resources to use them effectively.”
The Academy found that reliance on polygraph testing to screen government employees who may be potential security threats results in “too many loyal employees falsely judged deceptive and too many security threats left undetected.”
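The Academy’s screening conclusion is, at bottom, a matter of base-rate arithmetic: when genuine spies are rare, even a fairly accurate test flags far more loyal employees than threats. The sketch below makes that concrete. All of the numbers in it — workforce size, number of spies, and the test’s accuracy rates — are illustrative assumptions chosen for this example, not figures from the Academy’s report.

```python
# Illustrative base-rate arithmetic for polygraph screening.
# Every number here is an assumption for the example, not a report figure.

employees = 10_000            # size of the screened workforce (assumed)
spies = 10                    # actual security threats among them (assumed)
sensitivity = 0.80            # fraction of spies the test flags (assumed)
false_positive_rate = 0.10    # fraction of loyal employees wrongly flagged (assumed)

loyal = employees - spies
spies_caught = sensitivity * spies
loyal_flagged = false_positive_rate * loyal

print(f"Spies flagged:            {spies_caught:.0f}")
print(f"Loyal employees flagged:  {loyal_flagged:.0f}")
print(f"Spies who slip through:   {spies - spies_caught:.0f}")

# Of everyone flagged, what fraction are actually spies?
precision = spies_caught / (spies_caught + loyal_flagged)
print(f"Chance a flagged person is a spy: {precision:.1%}")
```

With these assumed numbers, the test flags roughly a thousand loyal employees to catch eight spies — and still misses two — which is exactly the tradeoff the Academy describes: too many false accusations on one side, undetected threats on the other.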
Indeed, the histories of the FBI and CIA are replete with spies and double agents who successfully evaded the polygraph.
Illuminating a lie
And, yet, polygraphs persist.
By one estimate, the federal government alone administers 40,000 polygraph tests a year. Polygraph evidence cannot be admitted in federal courts, although some state courts allow it. Many law enforcement agencies also rely on the devices to try to coerce confessions.
Polygraphs are big moneymakers for the companies that produce them. But beyond the commercial pressure, we may simply want to believe that scientists have found a magic way to look into the mind and illuminate a lie.
Many people expressed this opinion at the recent opening session of a project on neuroscience and the law, funded by the MacArthur Foundation. The foundation is providing a three-year, $10 million grant for the Law and Neuroscience Project, which brings together a distinguished group of neuroscientists, legal scholars and bioethicists. They’ll probe questions such as: Could brain scans someday be used to find a liar? What if scientists could produce an accurate picture of the criminal mind? Should we screen schoolchildren before they get into trouble?
From what I can see, these applications are far in the future — if they are ever attainable. The problem — as the polygraph shows us — is that a lack of scientific proof may not prevent widespread use of other machines that promise to see inside our brains.