Are they even this objective?

Started by lane99, Aug 11, 2011, 05:58 PM


lane99

I don't believe polygraphs can distinguish between lies and truth.

However, out of curiosity, are they even objective enough that the charts produced will be read the same way by all polygraphers?

If a test is run, and the chart is given to 10 polygraphers, will all 10 (or at least the vast majority of them) come to the same conclusion as to "truthful" or "deceptive"?

Been wondering about this for a while.  I'm rather assuming most blind polygraphers WOULD come to the same conclusion on any given chart.  But not sure about it.
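The question above can be made concrete. Here is a minimal sketch of how agreement among blind scorers could be quantified, using pairwise percent agreement; the ten calls below are made-up illustrative data, not from any study:

```python
from itertools import combinations

# Hypothetical calls from ten examiners blind-scoring the SAME chart:
# "T" = truthful, "D" = deceptive (illustrative data only).
calls = ["D", "D", "D", "T", "D", "D", "D", "D", "T", "D"]

# Pairwise percent agreement: the fraction of examiner pairs
# that reached the same conclusion on this chart.
pairs = list(combinations(calls, 2))
agreement = sum(a == b for a, b in pairs) / len(pairs)
print(f"pairwise agreement: {agreement:.0%}")  # prints "pairwise agreement: 64%"
```

With an 8-to-2 split, 29 of the 45 examiner pairs agree, so even a fairly lopsided vote yields only about 64% pairwise agreement; published inter-rater studies typically report figures like this, or chance-corrected statistics such as Cohen's kappa.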

Chuckles

A polygraph examiner takes many things into consideration when deciding if a person seems deceptive or not, including how the subject carries himself, the subject's attitude and if the subject acts suspiciously. They also take into consideration the opinion of the person calling for the test. If the head guy says he thinks a certain subject is guilty of something, the polygraph examiner is way more likely to say that the "test is picking something up."

Chuckles

Bill_Brown

lane99,

I have been involved in studies of examiners scoring charts, and we found examiners do have high agreement rates when scoring charts.  Properly trained examiners are in agreement on results of polygraph charts.

Bill_Brown

Chuckles,

We take only data from the chart in making a determination regarding responses. It is entirely chart interpretation; nothing to do with how you carry yourself, your attitude, or suspicion.

figs

Quote from: Bill_Brown on Aug 12, 2011, 12:46 PM
lane99,

I have been involved in studies of examiners scoring charts, and we found examiners do have high agreement rates when scoring charts.  Properly trained examiners are in agreement on results of polygraph charts.

Publication citation, please.

Chuckles

Quote
We only take data from the chart in making a determination regarding responses.  Totally chart interpretation, nothing to do with how you carry yourself, attitude or suspicion.

Haven't you heard of the CBS 60 Minutes Exposé where all the (randomly chosen) polygraph examiners from different companies in New York each repeatedly fingered the person identified as the suspect by the boss, even though the suspect was changed with each round of tests?

The fact that they even had those conversations and allowed themselves to hear who was suspected casts doubt on their honesty. A polygraph examiner who truly stuck to reading the charts would not need to find out which person was suspected before identifying that person as the culprit.


https://antipolygraph.org/blog/?p=110

No matter what anyone says, it can't hurt to follow the advice in "The Lie Behind The Lie Detector" about dressing right, having a good attitude and not acting suspicious. When I read that I realized that I had broken every rule the times that I had failed the polygraph. I was sullen, made many excuses for why I might not have good test results and jabbered on and on nervously, instead of just answering the questions in a confident, businesslike manner.
Chuckles

Bill_Brown

That CBS Exposé was in 1986, over 20 years ago. It was also a setup.

Bill_Brown

Quote from: 6966687C0F0 on Aug 13, 2011, 12:13 AM
Bill_Brown wrote on Yesterday at 5:46pm:
lane99,

I have been involved in studies of examiners scoring charts, and we found examiners do have high agreement rates when scoring charts.  Properly trained examiners are in agreement on results of polygraph charts.


Publication citation, please.

Podlesny & Raskin (1978); Rovner et al. (1979); Kircher & Raskin (1988); Honts et al. (1994); Horowitz et al. (1997).

There are newer studies as well; also check the Marin protocol. Inter-rater reliability is about 86% in that particular study by Krapohl.

figs

Quote from: Bill_Brown on Aug 13, 2011, 05:49 PM
Quote from: 6966687C0F0 on Aug 13, 2011, 12:13 AM
Bill_Brown wrote on Yesterday at 5:46pm:
lane99,

I have been involved in studies of examiners scoring charts, and we found examiners do have high agreement rates when scoring charts.  Properly trained examiners are in agreement on results of polygraph charts.


Publication citation, please.

Podlesny & Raskin (1978); Rovner et al. (1979); Kircher & Raskin (1988); Honts et al. (1994); Horowitz et al. (1997).

There are newer studies as well; also check the Marin protocol. Inter-rater reliability is about 86% in that particular study by Krapohl.

The Marin protocol is irrelevant to inter-rater reliability. So are all the studies you cite.

The Horowitz study you cite even concludes, "The R-I test produced an unacceptable rate of false positive decisions."

You don't offer a citation by Donald Krapohl. Maybe you can dig up something responsive? I can't, but you're the expert.

stefano

Quote from: Bill_Brown on Aug 13, 2011, 01:39 PM
That CBS Exposé was in 1986, over 20 years ago. It was also a setup.
Of course it was a setup; that was the whole idea. Bill, you know that you are one of the few polygraphists I have come to respect, but you seem to be blind to the plethora of arrogant polygraphists out there who have taken it upon themselves to do whatever they please, apparently without peer scrutiny. It seems that simply going through the 320-hour course somehow makes them untouchable and immune to peer criticism--much like the cops who support each other regardless of whatever travesty they choose to precipitate.

There is an examiner in my area who takes the $500, accuses the examinee of attempting countermeasures, and gives an Inconclusive. There is no oversight, no scrutiny; they consider themselves demigods and will do as they please. You and others in your profession refuse to take the shitbirds to the woodshed.

Bill_Brown

figs,

Quote
The Marin protocol is irrelevant to inter-rater reliability. So are all the studies you cite.


The Marin protocol requires the examiner to score 100 known-solution cases with a minimum of 86% accuracy. That is inter-rater reliability. This is to qualify under that system as an expert witness in court using paired testing. And other studies I have seen and not quoted are in the same area of inter-rater reliability.

I will look later and find other studies, sorry I just don't have the time right now. 

Bill_Brown

Stefano,

We are trying to get a handle on those examiners you mention. It is difficult to dictate procedures to individuals who have no respect for proper procedures. There is no legal authority vested in any examiner that allows censure. New rules and guidelines are being enacted by the APA, but again, they are not enforceable.

I am not blind to this problem, and I do not condone examiners who are not totally professional, but I have no power to stop them. I have lobbied for State and Federal regulation of polygraph. Even the State Polygraph Examiner Boards do not dictate or enforce "best practices," because we as examiners have not pushed hard enough against our opposition to have them enacted. Hopefully we will get a handle on this problem in the future; it does not look good right now.

Bill_Brown

figs,

Found this on the APA website, another study to look at:

Quote
A Replication and Validation Study on an Empirically Based Manual Scoring System
Ben Blalock, Barry Cushman & Raymond Nelson
Abstract
This is a replication of a study validating the hand scoring system for comparison question polygraph examinations proposed by Nelson, Krapohl and Handler (2008). Nine polygraph examiner trainees at an American Polygraph Association accredited polygraph school used an empirically based three-position manual scoring system involving three evaluative criteria and a reduced set of basic rules to evaluate 100 confirmed event-specific single-issue criminal investigation polygraph examinations from the Department of Defense Polygraph Institute confirmed case archive. Average decision accuracy for the inexperienced examiners was 88% with 13.1% inconclusives. Sensitivity and specificity levels achieved by the trainees did not differ significantly, suggesting they achieved balanced accuracy characteristics using the empirically based scoring system. All nine of the inexperienced examiners scored the sample cases with sufficient accuracy to meet the accuracy requirements specified by the Marin protocol (Krapohl, 2005; Marin, 2000). Results from this study parallel the results reported in the previous experiment and support the validity of an empirically based three-position manual scoring method.



figs

Quote from: Bill_Brown on Aug 14, 2011, 12:14 PM
figs,

Quote
The Marin protocol is irrelevant to inter-rater reliability. So are all the studies you cite.


The Marin protocol requires the examiner to score 100 known-solution cases with a minimum of 86% accuracy. That is inter-rater reliability. This is to qualify under that system as an expert witness in court using paired testing. And other studies I have seen and not quoted are in the same area of inter-rater reliability.

I will look later and find other studies, sorry I just don't have the time right now. 


Maybe you should tell the polygraphers who practice the Marin protocol that they don't know what it means.

http://www.gsrsystems.com/examtypes/marin_protocol.html

http://www.veritascenter.org/

That, or it means one thing in practice and something else in your post.

Bill_Brown

We were discussing the topic of inter-rater reliability. I used the Marin protocol to demonstrate that inter-rater reliability is established by this protocol.


Certifies:
The Center certifies qualified examiners in accordance with ASTM Standard E2324-04 (the Marin Protocol). Examiners applying for certification must submit an end-to-end videotape of an examination showing their proficient adherence to a validated methodology, and the interpretable chart produced by that examination. They must then blind-score 100 sets of charts generated in real-world examinations in cases where the truth is now known, fifty of which are from examinations where the subject was deceptive and the rest from examinations where the subject was non-deceptive. The test sets are selected from a collection of more than 400 sets. The applicant is certified if and only if he declares a conclusive result for at least 80 per cent of each group of fifty, and at least 86 per cent of his conclusive results for each group are correct.
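The certification arithmetic in the quoted criteria can be sketched as follows. This is a minimal illustration of the two thresholds (at least 80% conclusive calls per group of fifty, and at least 86% of conclusive calls correct per group); the function names and example numbers are my own, not part of the protocol:

```python
# Marin protocol certification arithmetic (illustrative sketch).
# Each group = 50 blind-scored charts (deceptive or non-deceptive).
CHARTS_PER_GROUP = 50
MIN_CONCLUSIVE_RATE = 0.80   # must call >= 80% of each group conclusively
MIN_ACCURACY = 0.86          # >= 86% of conclusive calls must be correct

def group_passes(conclusive, correct):
    """One group of 50 charts: (conclusive calls, correct conclusive calls)."""
    if conclusive / CHARTS_PER_GROUP < MIN_CONCLUSIVE_RATE:
        return False                     # too many inconclusive calls
    return correct / conclusive >= MIN_ACCURACY

def certified(deceptive, non_deceptive):
    """Certified only if BOTH groups meet both thresholds."""
    return group_passes(*deceptive) and group_passes(*non_deceptive)

# Example: 45 conclusive / 40 correct on the deceptive charts,
# 42 conclusive / 37 correct on the non-deceptive charts.
print(certified((45, 40), (42, 37)))  # prints True (0.90/0.89 and 0.84/0.88)
```

Note that the accuracy threshold applies only to the conclusive calls, so an examiner can shed hard charts as inconclusive, up to the 20% allowance per group, without it counting against the 86% figure.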
