BREAKING: American Polygraph Association issues caution over its long-allowed EDA scoring criteria for certain "tests"

Started by Dan Mangan, Mar 26, 2018, 09:11 PM


Dan Mangan

The APA's much-ballyhooed simplified, dumbed-down, cookie-cutter Empirical Scoring System (ESS) chart evaluation protocol -- coupled with the widespread use of the polygraph trade's convenient, de rigueur "automatic" mode for recording the perceived prime indicator of deception, electrodermal activity (EDA) -- is a sure-fire path to reliable "test" results, right?

Not so fast.

In a stunning about-face, the APA appears to own up to its seemingly capricious exuberance vis-a-vis confidence in the previously described "test."

Here are some selected excerpts from the latest APA journal, now known as Polygraph & Forensic Credibility Assessment: A Journal of Science and Field Practice.

This particular edition of the APA "journal" was published just two days ago. The article of interest -- The Difference Between the Manual and Automatic Settings for the Electrodermal Channel and a Potential Effect on Manual Scoring -- was co-authored by APA luminary and polygraph researcher Don Krapohl.

Here are some potentially thought-provoking highlights from co-author Krapohl's jarringly candid piece on this documented downfall of polygraph "testing"...

Because ESS is intentionally designed to weight the EDA channel, ESS scores may be especially vulnerable to larger shifts in EDA scores between the automatic and manual mode than are other scoring systems. This question calls for more research.

Everyone should agree it is important to have confidence that the tracings we score truly represent the examinee's physiological activity. In the case of the EDA, the variety of filtering approaches across and within instrument manufacturers suggests no one yet knows what the best filtering method should be. This is unfortunate. The authors view data filtering not just as an engineering question, but one of public trust. [emphasis added]

Filtering is surely different between analog and digital polygraphs, between the various makers of computer polygraphs, and sometimes between models and even software updates from the same manufacturer.
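The excerpt's point about filtering can be made concrete with a toy sketch. The two "filters" below are invented for illustration only -- they are not any manufacturer's actual algorithm -- but they show how the same raw EDA samples, run through a fixed-baseline "manual" treatment versus a running-average "automatic" treatment, produce tracings of different apparent amplitude, and amplitude is what gets scored:

```python
# Hypothetical illustration of why filtering choice matters: identical raw
# EDA samples, passed through two different (invented) filters, yield
# tracings with different apparent reaction sizes.

def manual_mode(samples, baseline):
    """'Manual' sketch: subtract a fixed baseline chosen by the examiner."""
    return [s - baseline for s in samples]

def auto_mode(samples, alpha=0.5):
    """'Automatic' sketch: subtract a running average (a crude high-pass)."""
    out, avg = [], samples[0]
    for s in samples:
        avg = alpha * avg + (1 - alpha) * s  # running average tracks the signal
        out.append(s - avg)                  # only the residual survives
    return out

raw = [10.0, 10.0, 14.0, 18.0, 14.0, 10.0]  # a made-up reaction bump
peak_manual = max(manual_mode(raw, baseline=10.0))
peak_auto = max(auto_mode(raw))
print(peak_manual, peak_auto)  # prints 8.0 3.0 -- same data, different "reaction"
```

Which of those two numbers an examiner sees depends entirely on a filtering decision made upstream of scoring, which is exactly the public-trust concern the authors raise.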

A mistake in evaluating an EDA response can cause a shift of 2 to 4 points. Scoring mistakes in other channels risk only half as many points. Moreover, examiners tend to assign scores to the electrodermal channel more often than they do to other channels, thereby increasing the impact of erroneous EDR scores.


For some techniques, such as the mixed-issue Air Force MGQT, a shift of only 4 points in the spot score of a single test question could change a polygraph result of truthfulness to one of deception, or the reverse.
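The arithmetic in the excerpts above is easy to sketch. The numbers below are assumptions for demonstration (channel scores in {-1, 0, +1} with the EDA score doubled, reflecting the excerpt's statement that ESS weights the EDA channel), not the official ESS specification -- but they show why a single EDA mis-read swings a spot score twice as far as the same error in another channel:

```python
# Illustrative sketch of the weighted-EDA scoring arithmetic described above.
# The x2 EDA weight and the {-1, 0, +1} channel scores are assumptions for
# demonstration, not the actual ESS rulebook.

EDA_WEIGHT = 2    # the EDA score counts double
OTHER_WEIGHT = 1  # respiration and cardio count once

def spot_score(eda, pneumo, cardio):
    """Sum one question's channel scores, each in {-1, 0, +1} pre-weighting."""
    return eda * EDA_WEIGHT + pneumo * OTHER_WEIGHT + cardio * OTHER_WEIGHT

# A correct EDA read of +1 versus a mistaken read of -1:
correct = spot_score(eda=+1, pneumo=0, cardio=0)
mistake = spot_score(eda=-1, pneumo=0, cardio=0)
print(correct - mistake)  # prints 4 -- the full swing the article warns about

# The same sign error in a respiration channel moves the total only half as far:
print(spot_score(0, +1, 0) - spot_score(0, -1, 0))  # prints 2
```

Under these assumed weights, one flipped EDA score produces exactly the 4-point swing that, per the excerpt, could reverse an Air Force MGQT result on a single question.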

Predictably, no mention was made of any remedy for victimized truth tellers who failed the aforementioned version of the heretofore APA-approved polygraph "test".

To me, Krapohl's eye-opening and arguably revolting article raises this essential question: Does anyone -- especially the APA, which writes the rules and dictates the model policies for polygraph "testing" -- really know what is going on?

George W. Maschke

My first impression is that it's as if the phrenologists were placing undue emphasis on skull width whilst using faulty calipers to boot, which of course would lead to unreliable craniological profiling.
I am generally available in the chat room from 3 AM to 3 PM Eastern time.
Signal Private Messenger: ap_org.01
SimpleX: click to contact me securely and anonymously
E-mail: antipolygraph.org@protonmail.com
Threema: A4PYDD5S
Personal Statement: "Too Hot of a Potato"

Dan Mangan

George, I wholeheartedly agree.

What mystifies me, though, is the lack of response on this forum from the APA polygraph apologists.

BTW, the same issue of the APA's so-called "journal" includes responses to Krapohl's devastating article from the various manufacturers of polygraph machines.

To my way of thinking -- and I'm just a lowly polygraph operator and APA full member with about 15 years of experience doing polygraph "testing" -- all of the manufacturer responses danced around the real issue of EDA and scientific validity to one extent or another.

But in my humble opinion, the most entertaining shuck-and-jive response came from Raymond Nelson, a past president of the APA and longtime "researcher" for Lafayette Instrument Company -- the world's premier source of polygraph machinery.

Regarding the deeply troubling EDA matter at hand, Ray sez:

"Any single EDA score, whether produced via manually-centered EDA or auto-centered EDA is an insufficient basis [emphasis added] for reliable and accurate conclusions about deception and truth-telling. [...] This is because there is inherent variability in EDA data and EDA scores. To put it another way, neither manually processed EDA nor automatic EDA correlates perfectly to deception; both are an approximation."

An "approximation"? Say what?

In the polygraph "test," is the respiratory rate also an approximation?

Is the pulse an approximation?

What about relative changes in blood pressure/volume? More approximate data?

For years now, the vaunted EDA response has been at the core of polygraph "test" scoring -- at least according to APA and third-party algorithm protocols.

One is forced to wonder if the "approximation" of EDA, as Ray calls it, is yet another polygraph SWAG (Scientific Wild-Ass Guess).

What gives, Ray? Straighten things out for us.

Is the ESS/auto-EDA polygraph "test" really as much of a crap shoot as Krapohl suggests in his alarming article?

[cue crickets]


