The Scientific Validity of Polygraph (Read 40911 times)
J.B. McCloughan
Very Senior User
Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
The Scientific Validity of Polygraph
Jan 20th, 2002 at 6:45am
This message thread is being started in direct response to George Maschke's assertion that, "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. Moreover, since CQT polygraph lacks both standardization and control, it can have no validity."

The discussion will encompass, but not be limited to:

1. All studies that have been published and peer-reviewed in a professional journal or publication.

2. Comparisons of polygraph to other scientifically accepted fields of study and their practices.
  

J.B. McCloughan
Re: The Scientific Validity of Polygraph
Reply #1 - Jan 20th, 2002 at 6:50am
I will start the discussion by directing those interested to a web site that contains government reviews of polygraph and many of the studies and findings on its validity: http://fas.org/sgp/othergov/polygraph/ota/
  

J.B. McCloughan
Re: The Scientific Validity of Polygraph
Reply #2 - Jan 22nd, 2002 at 8:04am
Here is an excerpt from http://fas.org/sgp/othergov/polygraph/ota/conc.html to get the discussion under way.

In reading this, one can see that the reviewing entity states quite clearly that polygraph does show a better than chance ability to detect deception: "The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance..."
 
Quote:

Scientific Validity of Polygraph Testing:
A Research Review and Evaluation

A Technical Memorandum
Washington, D. C.: U.S. Congress
Office of Technology Assessment
OTA-TM-H-15
November 1983

Chapter 7, Section 3, Sub-Section 1

SPECIFIC SCIENTIFIC CONCLUSIONS IN POLICY CONTEXT
Specific-Incident Criminal Investigations

A principal use of the polygraph test is as part of an investigation (usually conducted by law enforcement or private security officers) of a specific situation in which a criminal act has been alleged to have, or in fact has, taken place. This type of case is characterized by a prior investigation that both narrows the suspect list down to a very small number, and that develops significant information about the crime itself. When the polygraph is used in this context, the application is known as a specific-issue or specific-incident criminal investigation.

Results of OTA Review

The application of the polygraph to specific-incident criminal investigations is the only one to be extensively researched. OTA identified 6 prior reviews of such research (summarized in ch. 3), as well as 10 field and 14 analog studies that met minimum scientific standards and were conducted using the control question technique (the most common technique used in criminal investigations; see chs. 2, 3, and 4). Still, even though meeting minimal scientific standards, many of these research studies had various methodological problems that reduce the extent to which results can be generalized. The cases and examiners were often sampled selectively rather than randomly. For field studies, the criteria for actual guilt or innocence varied and in some studies were inadequate. In addition, only some versions of the control question technique have been researched, and the effect of different types of examiners, subjects, settings, and countermeasures has not been systematically explored.

Nonetheless, this research is the best available source of evidence on which to evaluate the scientific validity of the polygraph for specific-incident criminal investigations. The results (for research on the control question technique in specific-incident criminal investigations) are summarized below:

    * Six prior reviews of field studies:
         * average accuracy ranged from 64 to 98 percent.
    * Ten individual field studies:
         * correct guilty detections ranged from 70.6 to 98.6 percent and averaged 86.3 percent;
         * correct innocent detections ranged from 12.5 to 94.1 percent and averaged 76 percent;
         * false positive rate (innocent persons found deceptive) ranged from 0 to 75 percent and averaged 19.1 percent; and
         * false negative rate (guilty persons found nondeceptive) ranged from 0 to 29.4 percent and averaged 10.2 percent.
    * Fourteen individual analog studies:
         * correct guilty detections ranged from 35.4 to 100 percent and averaged 63.7 percent;
         * correct innocent detections ranged from 32 to 91 percent and averaged 57.9 percent;
         * false positives ranged from 2 to 50.7 percent and averaged 14.1 percent; and
         * false negatives ranged from 0 to 28.7 percent and averaged 10.4 percent.

The wide variability of results from both prior research reviews and OTA's own review of individual studies makes it impossible to determine a specific overall quantitative measure of polygraph validity. The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance, but with error rates that could be considered significant.

The figures presented above are strictly ranges or averages for groups of research studies. Another selection of studies would yield different results, although OTA's selection represents the set of studies that met minimum scientific criteria. Also, some researchers exclude inconclusive results in calculating accuracy rates. OTA elected to include the inconclusive on the grounds that an inconclusive is an error in the sense that a guilty or innocent person has not been correctly identified. Exclusion of inconclusive would raise the overall accuracy rates calculated. In practice, inconclusive results may be followed by a retest or other investigations.
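
To make the inconclusive-handling point concrete, here is a minimal sketch (Python) of how including or excluding inconclusives changes a computed accuracy rate. The counts are hypothetical and are not drawn from the OTA data.

# Hypothetical counts for 100 examinations of guilty subjects (illustrative only).
correct      = 70   # correctly called deceptive
errors       = 10   # wrongly called nondeceptive
inconclusive = 20   # no opinion rendered

total = correct + errors + inconclusive

# OTA convention: an inconclusive counts against accuracy,
# since the subject was not correctly identified.
accuracy_with_inconclusives = correct / total                  # 70/100 = 70.0%

# Alternative convention: drop inconclusives before computing accuracy.
accuracy_without_inconclusives = correct / (correct + errors)  # 70/80  = 87.5%

print(f"{accuracy_with_inconclusives:.1%}  {accuracy_without_inconclusives:.1%}")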

  

George W. Maschke
Global Moderator
Make-believe science yields make-believe security.
Posts: 6230
Joined: Sep 29th, 2000
Re: The Scientific Validity of Polygraph
Reply #3 - Jan 22nd, 2002 at 11:57pm
J.B.

You argue that the 1983 OTA report "states quite clearly that polygraph does show a better than chance ability to detect deception." And you cite the following from Chapter 7 of the report:

Quote:
The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance, but with error rates that could be considered significant.


As a preliminary matter, note that this statement by the OTA is not inconsistent with my statement that CQT polygraphy has not been proven by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. The OTA relied on both field studies and analog (laboratory) studies. Of the field studies, only two appeared in a peer-reviewed scientific journal:

Bersh, P. J. "A Validation Study of Polygraph Examiner Judgments," Journal of Applied Psychology, 53:399-403, 1969.

Horvath, F. S., "The Effect of Selected Variables on Interpretation of Polygraph Records," Journal of Applied Psychology, 62:127-136, 1977.

(By the way, the FAS website does not include the OTA report's list of references. You'll find it in the PDF version available on Princeton University's Woodrow Wilson School of Public and International Affairs website.)

Bersh's study involved both the Zone [of] Comparison "Test" (a form of probable-lie "Control" Question "Test") and the General Question "Test" (a form of the Relevant/Irrelevant technique). The polygraphers used "global" scoring, that is, they reached their determinations of guilt or innocence based not only on the charts, but also on their clinical impression or "gut feeling" regarding the subject. The decision of a panel of judges (four Judge Advocate General attorneys) was used as "ground truth." Assuming the panel's judgement to be correct, the OTA report notes that the polygraphers' determinations were (overall) 70.6% correct with guilty subjects and 80% correct with innocent subjects.

David T. Lykken provides an insightful commentary on Bersh's study at pp. 104-106 of the 2nd edition of A Tremor in the Blood: Uses and Abuses of the Lie Detector. Because the discussion we are having of polygraph validity is an important one, I will cite Lykken's treatment of Bersh's study here in full for the benefit of those who do not have ready access to A Tremor in the Blood (which now seems to be out of print):

Quote:

Validity of the Clinical Lie Test


In view of the millions of clinical lie tests that have been administered to date, it is surprising that only one serious investigation of the validity of this method has been published, Bersh's 1969 Army study.[reference deleted] Bersh wanted to assess the average accuracy of typical Army polygraphers who routinely administered clinically evaluated lie "tests" to military personnel suspected of criminal acts. He obtained a representative sample of 323 such cases on which the original examiner had rendered a global diagnosis of truthful or deceptive. The completed case files were then given to a panel of experienced Army attorneys who were asked to study them unhindered by technical rules of evidence and to decide which of the suspects they believed had been guilty and which innocent. The four judges discarded 80 cases in which they felt there was insufficient evidence to permit a confident decision. On the remaining 243 cases, the panel reached unanimous agreement on 157, split three-to-one on another 59, and were deadlocked on 27 cases. Using the panel's judgment as his criterion of ground truth, Bersh then compared the prior judgments of the polygraphers against this criterion. When the panel was unanimous, the polygraphers' diagnosis agreed with the panel's verdict on 92% of the cases. When the panel was split three-to-one, the agreement fell to 75%. On the 107 cases where the panel had divided two-to-two or had withheld judgment, no criterion was of course available.

Bersh himself pointed out that we cannot tell what role if any the actual polygraph results played in producing this level of agreement. In another part of that same Defense Department study, polygraphers like those Bersh investigated were required to "blindly" rescore one another's polygraph charts in order to estimate polygraph reliability. The agreement was better than chance but very low. As these Army examiners then operated (they have since converted to the Backster method [of numerical scoring], which is more reliable), chart scoring was conducted so unreliably that we can be sure that Bersh's examiners could not have obtained much of their accuracy from the polygraphs: validity is limited by unreliability. But, although these findings are a poor advertisement for the polygraph itself, can they at least indicate the average accuracy of a trained examiner in judging the credibility of a respondent in the relatively standardized setting of a polygraph examination?

Bersh's examiners based their diagnoses in part on clinical impressions or behavior symptoms, which, we know from the evidence mentioned above, should not have permitted an accuracy much better than chance. But they also had available to them at the time of testing whatever information was then present in that suspect's case file: the evidence then known against him, his own alibi, his past disciplinary record, and so on. In other words, the polygraphers based their diagnoses in part on some portion of the same case facts that the four panel judges used in reaching their criterion decision. This contamination is the chief difficulty with the Bersh study. When his judges were in unanimous agreement, it was presumably because the evidence was especially persuasive, an "open-and-shut case." It may be that much of that same convincing evidence was also available to the polygraphers, helping them to attain that 92% agreement. When the evidence was less clear-cut and the panel disagreed three-to-one among themselves, the evidence may also have been similarly less persuasive when the lie tests were administered--and so the polygrapher's agreement with the panel dropped to 75% (note that the average panel member also agreed with the majority 75% of the time). An extreme example of this contamination involves the fact that an unspecified number of the guilty suspects confessed at the time of the examination. Because the exams were clinically evaluated, we can be sure that every test that led to a confession was scored as deceptive. Since confessions were reported to the panel, we can be sure also that the criterion judgment was always guilty in these same cases. Thus, every lie test that produced a confession was inevitably counted as an accurate test, although, of course, such cases do not predict at all whether the polygrapher would have been correct absent the confession. That the polygraph test frequently produces a confession is its most valuable characteristic to the criminal investigator, but the occurrence of a confession tells us nothing about the accuracy of the test itself.

Thus, the one available study of the accuracy of the clinical lie test is fatally compromised. Because of the contamination discussed above, the agreement achieved when the criterion panel was unanimous is clearly an overestimate of how accurate such examiners could be in the typical run of cases. When the panel split three-to-one, then at least we know that there was no confession during the lie test or some other conclusive evidence available to both the panel and the examiner. The agreement achieved on this subgroup was 75%, equal to the panel judges' agreement among themselves. As we have seen, Bersh's examiners could not have improved much on their clinical and evidentiary judgments by referring to their unreliable polygraphs.


As Lykken makes clear, Bersh's study does little to support the validity of CQT polygraphy.

The second peer-reviewed field study cited in the OTA report is that by Horvath. In this study, confessions were used as the criterion for ground truth; 77% of the guilty and 51% of the innocent were correctly classified, for a mean accuracy of 64%.

Lykken again provides cogent commentary regarding Horvath's study (as well as a later peer-reviewed field study conducted by Kleinmuntz and Szucko). The following is an excerpt from pp. 133-34 of A Tremor in the Blood (2nd ed.):

Quote:
The studies by Horvath and by Kleinmuntz and Szucko both used confession-verified CQT charts obtained respectively from a police agency and the Reid polygraph firm in Chicago. The original examiners in these cases, all of whom used the Reid clinical lie test technique, did not rely only on the polygraph results in reaching their diagnoses but also employed the case facts and their clinical appraisal of the subject's behavior during testing. Therefore, some suspects who failed the CQT and confessed were likely to have been judged deceptive and interrogated based primarily on the case facts and their demeanor during the polygraph examination, leaving open the possibility that their charts may or may not by themselves have indicated deception. Moreover, some other suspects, judged truthful using global criteria, could have produced charts indicative of deception. That is, the original examiners in these cases were led to doubt these suspects' guilt in part regardless of the evidence in the charts and proceeded to interrogate an alternative suspect in the same case who thereupon confessed. For these reasons, some undetermined number of the confessions that were criterial in these two studies were likely to be relatively independent of the polygraph results, revealing some of the guilty suspects who "failed" it....


Again, Horvath's study (and for that matter, that of Kleinmuntz & Szucko) does little to support the validity of CQT polygraphy.

In A Tremor in the Blood, Lykken addresses three peer-reviewed field studies that post-date the OTA review. I won't address those studies individually for the time being, but I think it's fair to say that the available peer-reviewed research has not proven that CQT polygraphy works at better than chance levels of accuracy under field conditions.

Do you disagree? If so, why? What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?

The other statement I've made (and you've noted) is that because CQT polygraphy lacks both standardization and control, it can have no validity. You'll find that explained in more detail in Chapter 1 of The Lie Behind the Lie Detector. I'll be happy to discuss it further, but before I do, I would ask whether you disagree with me regarding this, and if so, why?

  

J.B. McCloughan
Re: The Scientific Validity of Polygraph
Reply #4 - Feb 18th, 2002 at 7:46am
George,

The OTA's findings and statements that polygraph is better than chance were based on all available credible research, and only acceptable field studies were included. Some suggested research studies were eliminated due to validity and structural problems found upon "peer-review".

The Bersh study, although archaic, does much to enlighten the general public and scientific community by producing what I believe to be the first field research study of the ZCT (1961 Backster). There have since been changes made to the ZCT that could have increased the Bersh study's accuracy even further, standardized numeric scoring criteria being one.

Bersh's examiners did not solely conduct a "clinical" evaluation of subjects for deception, as Lykken suggests. The examiners used a global scoring method of evaluation. This method does use the charts to discern whether one is showing deception to a particular question. Global scoring also includes observations of the subject prior to, during, and following the exam, and all the available investigative material. It does puzzle me that Lykken would state, "clinical impressions or behavior symptoms, which, we know from the evidence mentioned above, should not have permitted an accuracy much better than chance." It is a well-known fact that psychologists quite frequently use this very method to come to their professional opinions. Sometimes, if not often, psychologists' opinions concern whether a client "truly" believes or is being "truthful" about something they say has happened to them. The psychologist then gives a professional opinion on the matter. I have both seen and heard psychologists testify to these opinions in court. Unlike polygraph, psychology rarely has physiological data on which to base or support its inferences.

As for the results of the study, the OTA compares Bersh's to Barland and Raskin's study. The OTA does note that the two studies have some inherent differences. However, the OTA considered the studies similar enough to compare. The OTA states, "Assuming the panel's decisions, the two studies' results are strikingly different. Barland and Raskin attained accuracy rates of 91.5 percent for the guilty and 29.4 percent for the innocent subjects; comparable figures in Bersh's study are 70.6 percent guilty correct and 80 percent innocent correct." My math shows a combined accuracy rate of 81.05 percent for guilty, 54.7 percent for innocent, and 67.87 percent overall accuracy for the two studies. The OTA then wrote, "It is not clear why there should be this variation..." They go on to give some possible reasons but miss some technical reasons for the differences in the findings of the two studies. Most obviously, Bersh's study used the ZCT and R&I, the global scoring method, and eliminated inconclusive exams. Ground truth is the most difficult element to establish in a polygraph research study because it is subject to the interpretations and opinions of the reviewer.
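
A minimal sketch (Python) of the averaging described above, using the figures quoted from the OTA report. This is a simple unweighted average of the two studies and does not weight them by sample size.

# Accuracy figures quoted above, in percent correct.
bersh          = {"guilty": 70.6, "innocent": 80.0}
barland_raskin = {"guilty": 91.5, "innocent": 29.4}

# Unweighted average of the two studies for each group.
guilty_avg   = (bersh["guilty"]   + barland_raskin["guilty"])   / 2   # 81.05
innocent_avg = (bersh["innocent"] + barland_raskin["innocent"]) / 2   # 54.70

# Overall figure taken as the mean of the two group averages.
overall = (guilty_avg + innocent_avg) / 2                             # 67.875

print(guilty_avg, innocent_avg, round(overall, 2))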

The R&I question format has proven to be a less accurate technique when compared to the ZCT or CQT in specific-issue criminal examination studies. This is arguably the reason why the Army modified the General Question Technique (creating the MGQT) to include comparison questions, zone/spot scoring, and total chart minutes. It would be interesting to know how the two question formats in Bersh's study compared or differed in accuracy.

The available scientific research for polygraph shows that a greater percentage of inconclusive exams are found among the innocent. Thus it is reasonable to surmise that Barland and Raskin's study might have produced similar, if not better, results for the truthful and for the deceptive when compared to Bersh, if inconclusive results were set aside. The scientific community often holds inconclusive results against polygraph when reviewing its scientific validity and accuracy. However, polygraph examiners view an inconclusive result as meaning there is not enough in the chart tracings to render an opinion. An inconclusive can be attributed to many variables. One example of inconclusive chart tracings may be found in an exam where the examinee has problems remaining still or intentionally moves. Even Farwell, the inventor of Brain Fingerprinting, says his instrument will produce inconclusive results if the examinee does not remain still during the examination.

Lykken also states, "Because the exams were clinically evaluated, we can be sure that every test that led to a confession was scored as deceptive." He makes this statement without any supporting documentation and/or reference to a specific incident within the study where this actually occurred. There is no evidence to support his opinion on this issue. If a confession were obtained prior to chart data collection, the exam would have been considered incomplete by the examiner. This is not the case in point in Bersh's study because inconclusive and incomplete exams were not included.

Lykken argues that Bersh's study is "fatally compromised" because of his prior assertions. He writes, "That the polygraph test frequently produces a confession is its most valuable characteristic to the criminal investigator, but the occurrence of a confession tells us nothing about the accuracy of the test itself." I agree that a confession is a valuable tool in a criminal investigation. I disagree with his understanding of how a properly documented confession can be used to confirm the polygraph data results. A proper confession covers the elements of the crime and includes information that only a person who committed the crime would know. When this information is present in a confession, it would undoubtedly confirm the data. The question here is not whether the confession can be used to confirm the polygraph chart data but what standard was used in deeming statements made by examinees to be confessions. However, this point is not asserted and/or proven in Lykken's argument, thus it would appear to be a nonexistent flaw.

Horvath's research study provides good data in areas but had missing information that might have hindered the overall accuracy results. Barland submits that Horvath's original examiners were 100 percent correct in their opinions. Barland notes that some special charts administered in 32 percent of the cases were removed from the files of subjects considered deceptive. These special charts were most likely removed to avoid pre-judgment by the research evaluators. I do not think his study invalidated polygraph in any way. The study in fact provided valuable insight into the possible effect incomplete chart data might have on accurate review. Horvath's study still produced better than chance results, considering there was a 50% chance of the reviewers being correct and they were overall 64% correct.

Lykken states, "The original examiners in these cases, all of whom used the Reid clinical lie test technique, did not rely only on the polygraph results in reaching their diagnoses but also employed the case facts and their clinical appraisal of the subject's behavior during testing." This statement is partially true but not completely factual. The examiners in this study used scoring of the charts along with the global information present. The global scoring method in no way goes against the chart data results; rather, it uses other information to confirm the chart data. Lykken goes on to purport, "Moreover, some other suspects, judged truthful using global criteria, could have produced charts indicative of deception." This is an illogical statement. He never stipulates what scoring method or criterion might have produced a deceptive result. He cannot prove or disprove his assertions. I could easily conclude that, if given all the data available to the original examiners and the same scoring method, the reviewers would have concurred with the original examiners in 100% of the cases. Neither Lykken nor I can prove or disprove our assertions because this variable was not present or measured. However, Barland had access to and reviewed the data after the missing variable was discovered.

The point I am making is that no matter how meticulously one accounts for variables, there will most likely be some that need further research to answer. This is true in any research, including physiology, psychology, and medicine. One cannot control for and/or predict every possible variable. The fact that a variable is in question does not invalidate the findings or methodology. The fact that DNA's sample database is relatively small in comparison with the total population that inhabits the earth does not lead scientists to doubt the accuracy and/or scientific validity of its methods or findings.

I have read chapter one of The Lie Behind The Lie Detector and have found no reference to what standardization and control the CQT lacks. You state that it lacks these elements but give no examples or criteria for standardization and control. I would think this would be hard for you to do, as even the scientific community is quite subjective in its opinion of what constitutes acceptable standardization and control for scientific validity. Can you reference, for comparison purposes, any other scientific method that has been accepted and its basis for acceptance? Can you reference, for comparison purposes, any other scientific method that was rejected based on comparable factors you might use to make this statement?
  

George W. Maschke
Re: The Scientific Validity of Polygraph
Reply #5 - Feb 18th, 2002 at 10:38am
J.B.,

Before I address your questions, I note that you didn't really answer mine:

1) Do you agree that the available peer-reviewed research has not proven that CQT polygraphy works at better than chance levels of accuracy under field conditions? If not, why? What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?

I realize you averaged the Bersh and Barland & Raskin studies to come up with an average accuracy of 67.87%. Do you seriously maintain that these two studies prove that CQT polygraphy works better than chance and that it is 67.87% accurate? By the way, you did not specify to which study by Barland & Raskin you were referring. I assume you are referring to the following non-peer-reviewed study discussed at pp. 52-54 of the OTA report:

Barland, G.H., and Raskin, D.C., "Validity and Reliability of Polygraph Examinations of Criminal Suspects," report No. 76-1, contract No. 75-N1-99-0001 (Washington, D. C.: National Institute of Justice, Department of Justice, 1976).

2) Do you agree that because CQT polygraphy lacks both standardization and control, it can have no validity? If not, why?

Now, you mentioned that you read Chapter 1 of The Lie Behind the Lie Detector and found no reference to what standardization and control the CQT lacks. That reference is found at pp. 2-3 of the 1st digital edition, where we cite Furedy:

Quote:

Professor John J. Furedy of the University of Toronto (Furedy, 1996) explains regarding the “Control” Question “Test” that

…basic terms like “control” and “test” are used in ways that are not consistent with normal usage. For experimental psychophysiologists, it is the Alice-in-Wonderland usage of the term “control” that is most salient. There are virtually an infinite number of dimensions along which the R [relevant] and the so-called “C” [“control”] items of the CQT could differ. These differences include such dimensions as time (immediate versus distant past), potential penalties (imprisonment and a criminal record versus a bad conscience), and amount of time and attention paid to “developing” the questions (limited versus extensive). Accordingly, no logical inference is possible based on the R versus “C” comparison. For those concerned with the more applied issue of evaluating the accuracy of the CQT procedure, it is the procedure’s in-principle lack of standardization that is more critical. The fact that the procedure is not a test, but an unstandardizable interrogatory interview, means that its accuracy is not empirically, but only rhetorically, or anecdotally, evaluatable. That is, one can state accuracy figures only for a given examiner interacting with a given examinee, because the CQT is a dynamic interview situation rather than a standardizable and specifiable test. Even the weak assertion that a certain examiner is highly accurate cannot be supported, as different examinees alter the dynamic examiner-examinee relationship that grossly influences each unique and unspecifiable CQT episode.
 

Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.

You asked, "Can you reference, for comparison purposes, any other scientific method that has been accepted and its basis for acceptance?" I think Drew Richardson gave a good example in his remarks to the National Academy of Sciences on 17 October 2001, when he compared polygraphy to a test for a urinary metabolite of cocaine:

http://antipolygraph.org/nas/richardson-transcript.shtml#control 

The test Dr. Richardson describes is genuinely standardized and controlled, unlike polygraphy.

You also asked, "Can you reference, for comparison purposes, any other scientific method that was rejected based on comparable factors you might use to make this statement?" For comparison purposes, look to polygraphy's sister pseudosciences of phrenology and graphology.

« Last Edit: Feb 18th, 2002 at 2:28pm by George W. Maschke »  

 
J.B. McCloughan
Re: The Scientific Validity of Polygraph
Reply #6 - Mar 3rd, 2002 at 9:00am
George,

To answer your first question, no, I do not agree. The peer-reviewed research does prove polygraph to be better than chance at detecting deception. I purposely used rather dated research for my first post because even it provides better than chance accuracy with the review method used. If we look at Bersh's study alone, "....70.6 percent guilty correct and 80 percent innocent correct.", the overall accuracy rate is 75.3%. This is one of the first field polygraph studies. There have been changes made to polygraph which have improved the overall accuracy, some of which I discussed in my previous post. Bersh's study also uses non-polygraph evaluators to confirm results. Thus, the 75.3% accuracy is achieved in part through information independent of the polygraph data. Lykken's argument is that the results are independent of the polygraph charts. I have stated previously that this is not completely true. Some of the evaluators based their decisions on information independent of the polygraph charts. However, the examiners' original decisions were based on the polygraph chart data. Regardless of all that, it shows better than chance in detecting both truth and deception in a field setting.

The combined studies of Bersh and Barland and Raskin do not illustrate polygraph to be 67.87% accurate per se. The studies illustrate that polygraph was accurate to this degree for the particular confirmation method used. Your argument is that polygraph is "not better than chance". Although the previous studies do not reflect the current accuracy rate of polygraph, using the given confirmation method the combined studies do support a better than chance accuracy rate.

On a related but separate note, just because Barland and Raskin's study did not appear in a professional journal upon its release does not mean it was not peer-reviewed and accepted. The fact that Lykken does not like the results, and that the method is not his, does not make the study invalid. You may get some of the people to agree some of the time, but you can't get all of the people to agree all of the time.

Since I have stated that there were improvements made to polygraph which have increased its accuracy, I will quote some more recent studies to support the increased accuracy of polygraph. All of these studies appear in professional journals and provide better than chance results for the CQT in detecting deception.

Quote:
 
From http://www.polygraph.org/research.htm

Patrick, C. J., & Iacono, W. G. (1991). Validity of the control question polygraph test: The problem of sampling bias. Journal of Applied Psychology, 76(2), 229-238.

Sampling bias is a potential problem in polygraph validity studies in which posttest confessions are used to establish ground truth because this criterion is not independent of the polygraph test. In the present study, criterion evidence was sought from polygraph office records and from independent police files for all 402 control question tests (CQTs) conducted during a 5-year period by federal police examiners in a major Canadian city. Based on blind scoring of the charts, the hit rate for criterion innocent subjects (65% of whom were verified by independent sources) was 55%; for guilty subjects (of whom only 2% were verified independently), the hit rate was 98%. Although the estimate for innocent subjects is tenable given the characteristics of the sample on which it is based, the estimate for the guilty subsample is not. Some alternatives to confession studies for evaluating the accuracy of the CQT with guilty subjects are discussed.

Podlesny, J. A., & Truslow, C. M. (1993). Validity of an expanded-issue (Modified General Question) polygraph technique in a simulated distributed-crime-roles context. Journal of Applied Psychology, 78(5), 788-797.

The validity of an expanded-issue control-question technique that is commonly used in investigations was tested with simulations of thief, accomplice, confidant, and innocent crime roles. Field numerical scores and objective measures discriminated between the guilty and innocent groups. Excluding inconclusives (guilty = 18.1%, innocent = 20.8%), decisions based on total numerical scores were 84.7% correct for the guilty group and 94.7% correct for the innocent group. There was relatively weaker, but significant, discrimination between the thief group and the other guilty groups, and no significant discrimination between the accomplice group and the confidant group. Skin conductance, respiration, heart rate, and cardiograph measures contributed most strongly to discrimination.

Honts, C. R. (1996). Criterion development and validity of the CQT in field application. The Journal of General Psychology, 123(4), 309-324.

A field study of the control question test (CQT) for the detection of deception was conducted. Data from the files of 41 criminal cases were examined for confirming information and were rated by two evaluators on the strength of the confirming information. Those ratings were found to be highly reliable, r = .94. Thirty-two of the cases were found to have some independent confirmation. Numerical scores and decisions from the original examiners and an independent evaluation were analyzed. The results indicated that the CQT was a highly valid discriminator. Excluding inconclusives, the decisions of the original examiners were correct 96% of the time, and the independent evaluations were 93% correct. These results suggest that criteria other than confessions can be developed and used reliably. In addition, the validity of the CQT in real-world settings was supported.


You ask how valid it shows polygraph to be. It is you who have purported that the CQT is not better than chance at distinguishing between truth and deception. Since you have in the past set the terms for rational discourse, I see it as your burden to prove what is chance and what peer-reviewed studies have proven polygraph to be below the chance level. You would also have to establish what the chance level scientifically is. In a separate message thread you wrote:

Quote:
Re: How Countermeasures are Detected on the Charts
« Reply #52 on: 12/11/01 at 15:51:49 »
In addition, the chance level of accuracy is not necessarily 50/50. It is governed by the base rate of guilt. For example, in screening for espionage, where the base rate of guilt is quite small (less than 1%), an accuracy rate of over 99% could be obtained by ignoring the polygraph charts and arbitrarily declaring all "tested" to be truthful. However, such a methodology would not work better than chance.
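
A minimal sketch (Python) of the arithmetic behind the quoted base-rate point, using assumed numbers consistent with the example quoted above (a base rate of guilt under 1%):

# Assumed screening population: 10,000 examinees, base rate of guilt 0.5%.
population = 10_000
guilty     = 50
innocent   = population - guilty

# Trivial strategy: ignore the charts and call every examinee truthful.
correct_calls = innocent            # every innocent examinee is classified correctly
accuracy = correct_calls / population

print(f"{accuracy:.1%}")            # 99.5%, despite detecting none of the guilty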


The base rate you cite is an estimation on your part. One cannot establish an ultimate true base rate in a field setting for truthful and deceptive because it will vary. One cannot control for the number of cases that will produce one or the other result within a field setting. A toxicologist cannot say that 50% of his cases will detect the presence of XYZ in the field because it will vary. A toxicologist can say that if XYZ is present then it will be detected, if the test works. Since polygraph measures for deception, polygraph can only produce one of two results, if the test works. Thus, the exam/test will have a 50% chance of producing deception or no deception, if the test works. If the exam/test does not work in either discipline, there is an outside contaminant that is thwarting the ability of the exam to produce an acceptable result.

As for your second question, no, I do not agree that polygraph lacks both standardization and control.

Standardization:

The instrumentation must meet a standardized criterion. The examiner must meet a specified standard criterion. There is a very standardized process followed in a specific-issue polygraph that is discussed in your book. The examiner must follow the process from beginning to end. This is standardization. The given question formats must contain a standardized number of a given type of question. This is another standardization. The given question format must follow a standardized sequence. The chart tracings must be of a certain standard of quality for acceptable scoring purposes. The scoring must be done using an accepted standardized scoring method and must meet a standardized scoring result to make a decision. The fact is that there are numerous standardized methods within polygraph that prove it to be standardized.

Control:

The examiner is required to conduct the polygraph in a sterile environment that is free of visual and audio distraction. The examiner is required to assess the examinee's medical background to control for outside contaminants that may hinder the ability of the instrument to obtain suitable tracings. The examiner must attempt to control for movement by the examinee, again to control for outside contaminants that may hinder the ability of the instrument to obtain suitable tracings. The examiner must conduct an acquaintance exam to control for the possibility of undisclosed medical or physical variants that may contaminate or hinder the ability of the instrument to obtain suitable tracings. Again, polygraph controls for a number of variables and thus does not lack control.

You quite frequently use Furedy and Lykken as references. It should be noted that both of these individuals have motive to be biased in their opinions toward the CQT while supporting polygraph generally. Dr. Furedy has repeatedly condemned the use of polygraph, but only the CQT method. Furedy is a proponent of polygraph when it uses the Guilty Knowledge Test (GKT). The GKT lacks the extensive research, reviews, critical debate, and sheer numbers of field uses that the CQT has endured and produced over time. Lykken is of the same ideology as Furedy. Their bias may be genuinely for scientific purposes, but I think not. Their motive is more likely synonymous with the old cliché that plagues polygraph: "My question format is better than yours." Why argue so intensely over this issue of which question format to use? Polygraph is attempting to further standardize an already standardized method. Searching for a further standardized format, polygraph looked to the academic community because of its wealth of resources, ability to formulate experimental designs, and ability to conduct extensive controlled laboratory research on the methods. In the academic community, those who possess the acceptable methods are the ones who get the research grant money to perfect and substantiate their methods. This may be somewhat of a side issue for this discussion. However, if one looks at why polygraph has not been overwhelmingly accepted in the scientific community regardless of its high validity marks, one can see that lack of agreement is a major issue holding up its overall acceptance. I have spoken with scientists, psychologists, and practitioners of many other scientific disciplines. These people say that polygraph is valid, not flawless. Again, the recurring theme that hinders polygraph is the lack of agreement amongst the ranks. The irony is that those who are causing such confusion and thwarting the acceptance of polygraph as a standardized scientific method are the ones who were sought out to aid in doing just the opposite. Further, these individuals are not even polygraph examiners. "Those who can, do. Those who can't, teach."

Furthermore, the argument you quote from Furedy against the presence of standardization and control in polygraph is elusive babble. He says things like, "There are virtually an infinite number of dimensions along which the R [relevant] and the so-called "C" ["control"] items of the CQT could differ. These differences include such dimensions as time (immediate versus distant past), potential penalties (imprisonment and a criminal record versus a bad conscience), and amount of time and attention paid to "developing" the questions (limited versus extensive). Accordingly, no logical inference is possible based on the R versus "C" comparison. For those concerned with the more applied issue of evaluating the accuracy of the CQT procedure, it is the procedure's in-principle lack of standardization that is more critical." He has haphazardly taken terms used in polygraph, thrown them about into a paragraph, imposed his own opinion of their meanings and uses without reference to support, and finally drawn a conclusion that has nothing to do with the previous statements made. The fact is, none of Furedy's gibberish has a thing to do with the standardized methods of polygraph and/or even the physiological data on which its findings are based.

You state:
Quote:

Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.


I can say with reasonable certainty that you have no research or data to support your opinion on this issue. There are no published research studies that have specifically measured the variance you speak of, and/or none that have concluded with data supporting your view.

You then use Dr. Richardson's explanation in an attempt to illustrate a scientific procedure that has been accepted and its basis for acceptance.

Quote:
  http://antipolygraph.org/nas/richardson-transcript.shtml#control

The test Dr. Richardson describes is genuinely standardized and controlled, unlike polygraphy.


You will notice in Drew's explanation that he states, "if the test works". To be fair, maybe all forensic sciences should be put to the standards of validity measurement that are held against polygraph, inconclusive results included. It is a known fact that even controlled testing for proficiency purposes in accepted scientific practices can often go awry. When this happens, the results can be inconclusive and/or even false. For example, I will give a hypothetical scenario: A standardized control sample of urine leaves an accredited proficiency testing company. That sample is known to contain benzoylecgonine. The test is whether the sample contains benzoylecgonine or not. During the shipping process, the cabin of the airplane that contains the samples loses atmospheric pressure. The loss of pressure causes the cabin temperature to plummet to -80 degrees F. When the airplane descends, the cabin pressure and temperature return to normal atmospheric conditions for the region, let us say 70 degrees F for this hypothetical scenario. This change can happen quite quickly considering the sometimes rapid descent of airplanes. The sample arrives at the lab and is tested. It is found to contain no presence of benzoylecgonine and is reported as such.
 
Drew also speaks about testing a known sample to verify that the test and the instrumentation work. A polygraph examiner should be conducting an acquaintance exam/test, as stated in the APA polygraph procedures outline. This exam/test checks the ability of the test to work. If the subject has an autonomic response to the known lie, the test works. If the subject does not have an autonomic response to the known lie, the test does not work. The subject is instructed not to move and to follow specific directions. If the subject does not cooperate and attempts to augment his responses in any way on this non-intrusive exam/test, the subject is intentionally attempting to hide his natural responses. I know of only one reason for someone to augment his or her response: they are going to attempt to deceive. I believe any reasonable person would come to the same conclusion. Now, if the exam/test works, I have a true physiological response created by a known lie and a true homeostasis or tonic-level measurement contained in the known truth. This data can be used to confirm the remainder of the exam/test data collected.
 
You referenced, for comparison purposes, phrenology and graphology. Phrenology and graphology employ no known and/or research-proven measures. These once experimental methods do not even remotely compare to polygraph's extensive research, documentation of known and proven physiological responses, and proven accuracy. A closer, though still distant, comparison you might have used is questioned documents, since it is a forensic science. From your same source of information, http://www.skepdic.com/graphol.html , the following appears: "Real handwriting experts are known as forensic document examiners, not as graphologists. Forensic (or questioned) document examiners consider loops, dotted "i's" and crossed "t's," letter spacing, slants, heights, ending strokes, etc. They examine handwriting to detect authenticity or forgery." I believe the author of this site accepts questioned documents as a scientific discipline. Polygraph measures known physiological responses of the subject to detect deception. Polygraph has provided more favorable research, standardization, and validity than questioned documents. I know you have read the research study in which polygraph was put head to head against questioned documents and latent fingerprints.

Although it is not my burden to prove the validity, I will give an example of how polygraph's tested validity stands up against other accepted sciences:

Quote:


From http://www.iivs.org/news/3t3.html

STATEMENT ON THE SCIENTIFIC VALIDITY OF THE 3T3 NRU PT TEST (AN in vitro TEST FOR PHOTOTOXIC POTENTIAL)

At its 9th meeting, held on 1-2 October 1997 at the European Centre for the Validation of Alternative Methods (ECVAM), Ispra, Italy, the ECVAM Scientific Advisory Committee (ESAC) unanimously endorsed the following statement:

The results obtained with the 3T3 NRU PT test in the blind trial phase of the EU/COLIPA international validation study on in vitro tests for phototoxic potential were highly reproducible in all the nine laboratories that performed the test, and the correlations between the in vitro data and the in vivo data were very good. The Committee therefore agrees with the conclusion from this formal validation study that the 3T3 NRU PT is a scientifically validated test which is ready to be considered for regulatory acceptance.

...

General information about the study:

A. The study was managed by a Management Team consisting of representatives of the European Commission and COLIPA, under the chairmanship of Professor Horst Spielmann (ZEBET, BgVV, Berlin, Germany). The following laboratories participated in the blind trial on the 3T3 NRU PT test: ZEBET (the lead laboratory), Beiersdorf (Hamburg, Germany), University of Nottingham (Nottingham, UK), Henkel (Dusseldorf, Germany), Hoffman-La Roche (Basel, Switzerland), L'Oréal (Aulnay-sous-Bois, France), Procter & Gamble (Cincinnati, USA), Unilever (Sharnbrook, UK), and Warsaw Medical School (Warsaw, Poland).

B. This study began in 1991, as a joint initiative of the European Commission and COLIPA. Phase I of the study (1992-93) was designed as a prevalidation phase, for test selection and test protocol optimisation. Phase II (1994-95) involved a formal validation trial, conducted under blind conditions on 30 test materials which were independently selected, coded and distributed to nine laboratories. The results obtained were submitted to an independent statistician for analysis. Data analysis and preparation of the final report took place during 1996-97.

C. A number of tests at different stages of development were included in the study, but the 3T3 NRU PT test was found to be the one most ready for validation. It is a cytotoxicity test, in which Balb/c mouse embryo-derived cells of the 3T3 cell line are exposed to test chemicals with and without exposure to UVA under carefully defined conditions. Cytotoxicity is measured as inhibition of the capacity of the cell cultures to take up a vital dye, neutral red. The prediction model requires a sufficient increase in toxicity in the presence of UVA for a chemical to be labelled as having phototoxic potential.

D. Two versions of the prediction model were applied by the independent statistician. The phototoxicity factor (PTF) version compared two equi-effective concentrations (the IC50 value, defined as the concentration of test chemical which reduces neutral red uptake by 50%) with and without UV light. However, since no IC50 value was obtained for some chemicals in the absence of UVA, another version was devised, based on the Mean Phototoxic Effect (MPE), whereby all parts of the dose-response curves could be compared.

The two versions of the prediction model were applied to classify the phototoxic potentials of the 30 test chemicals on the basis of the in vitro data obtained in the nine laboratories. Comparing these in vitro classifications with the in vivo classifications independently assigned to the chemicals before the blind trial began, the following overall contingency statistics were obtained for the 3T3 NRU PT test:

                              PIF version      MPE version
Specificity:                   90%                93%
Sensitivity:                   82%                84%
Positive predictivity:    96%                96%
Negative predictivity:  64%                 73%
Accuracy:                    88%                92%

E.  Other methods in the study included the human keratinocyte NRU PT test, the red blood cell PT test, the SOLATEX PT test, the histidine oxidation test, a protein binding test, the Skin2 ZK1350 PT test, and a complement PT test. The other methods showed varying degrees of promise, e.g. as potential mechanistic tests for certain kinds of phototoxicity, and this will be the subject of further reports.
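
The figures in the table above are standard contingency statistics. A minimal sketch (Python, with hypothetical counts rather than the actual validation data) of how specificity, sensitivity, predictive values, and accuracy are computed from a two-by-two classification table:

# Hypothetical two-by-two classification table (not the 3T3 NRU PT data).
tp = 82   # phototoxic chemicals correctly flagged
fn = 18   # phototoxic chemicals missed
tn = 90   # non-phototoxic chemicals correctly cleared
fp = 10   # non-phototoxic chemicals wrongly flagged

sensitivity = tp / (tp + fn)                 # share of true positives detected
specificity = tn / (tn + fp)                 # share of true negatives cleared
positive_predictivity = tp / (tp + fp)       # how often a "positive" call is right
negative_predictivity = tn / (tn + fn)       # how often a "negative" call is right
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(sensitivity, specificity, positive_predictivity, negative_predictivity, accuracy)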


Considering the above, I would conclude that polygraph has provided more than sufficient overall accuracy data, and has done so over a greater test period in both laboratory and field settings, to prove scientifically valid.

Comparison example:

Quote:


From: http://www.polygraphplace.com/docs/acr.htm

In their recent review, Raskin and his colleagues (12) also examined the available field studies of the CQT. They were able to find four field studies (13) that met the above criteria for meaningful field studies of psychophysiological detection of deception tests. The results of the independent evaluations for those studies are illustrated in Table 2. Overall, the independent evaluations of the field studies produce results that are quite similar to the results of the high quality laboratory studies. The average accuracy of field decisions for the CQT was 90.5 percent. (14) However, with the field studies nearly all of the errors made by the CQT were false positive errors. (15)

http://www.polygraphplace.com/docs/AMICUS%20CURIAE%20RE%20THE%20POLYGRAPH%20Draf...

aSub-group of subjects confirmed by confession and evidence.

bDecision based only on comparisons to traditional control questions.

cResults from the mean blind rescoring of the cases "verified with maximum certainty" (p.235)

dThese results are from an independent evaluation of the "pure verification" cases.

_______________

Although the high quality field studies indicate a high accuracy rate for the CQT, all of the data represented in Table 2 were derived from independent evaluations of the physiological data. This is a desirable practice from a scientific viewpoint, because it eliminates possible contamination (e.g. knowledge of the case facts, and the overt behaviors of the subject during the examination) in the decisions of the original examiners. However, independent evaluators rarely offer testimony in legal proceedings. It is usually the original examiner who gives testimony. Thus, accuracy rates based on the decisions of independent evaluators may not be the true figure of merit for legal proceedings. Raskin and his colleagues have summarized the data from the original examiners in the studies reported in Table 2, and for two additional studies that are often cited by critics of the CQT. (16) The data for the original examiners are presented in Table 3. These data clearly indicate that the original examiners are even more accurate than the independent evaluators.

http://www.polygraphplace.com/docs/AMICUS%20CURIAE%20RE%20THE%20POLYGRAPH%20Draf...

aCases where all questions were confirmed.

bIncludes all cases with some confirmation.


The above comparison supports the assertion that polygraph, when using the CQT, meets the standard validity requirements to be considered a scientific test method.
  

Quam verum decipio nos
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box Drew Richardson
Especially Senior User
*****
Offline



Posts: 427
Joined: Sep 7th, 2001
Re: The Scientific Validity of Polygraph
Reply #7 - Mar 3rd, 2002 at 7:25pm
Mark & QuoteQuote Print Post  
J.B.,

I believe you have totally missed the point regarding scientific control and what constitutes it in a given instance.  It should not be confused with other issues, nor should its absence in one paradigm be confused with the possibility of its isolated absence, through operator error, in a discipline that normally does reflect principles of scientific control.

With regard to the first: although it is proper and admirable that polygraph examiners calibrate their instruments, there is little question that electrons do flow and that pressure gauges can accurately measure pressures in most instances.  Nor is there any serious question that ambulatory individuals who travel to and report for polygraph examinations have at least minimally functioning autonomic systems.  The ANS is required on a daily basis for individual life function, and its function as displayed in polygraph examinations is trivial relative to its various life-sustaining functions.

And if autonomic function and responsivity were in question, the re-named so-called "acquaintance test" is no serious measure of it, but merely a nomenclature evolution of the parlor game and fraudulent exercise we have all come to know as a "stim test."  To suggest that it is anything more should be embarrassing to anyone who understands anything about autonomic physiology.  As has been pointed out recently by others, the acquaintance test is actually the first opportunity for the examinee to con the con-man examiner, with a countermeasure response to the chosen number and feigned amazement at the examiner's mystical deductive powers in pointing out said response(s)...

But on to meaningful scientific control and that which is lacking with control question test polygraphy...

That which defines scientific control in an analysis is the ability of the control to shed light on the various dependent-measure recordings of the analyte in question.  In the case of the control question polygraph exam, the analyte in question is the relevant question subject matter; the dependent measures are the measures of physiology recorded; and the scientific control, in theory, is furnished by the control or comparison questions.  THIS IS WHERE THE HEART OF CONTROL LIES AND WHY IT IS COMPLETELY ABSENT IN PROBABLE-LIE CONTROL QUESTION POLYGRAPHY.

In order for it to exist, we would need to know something about the emotional content or affect, and the relational nature of that affect, for the chosen relevant and control question pairings within a given exam.  Although polygraphers have speculated about this, there is NO independent measure of it for a relevant/control pair for a given examinee (guilty or innocent) on any given day.  This is not a function of the isolated operator/examiner error that you correctly suggest could exist on any given day in any discipline; it is an everyday condition and lack of control that exists with polygraphy.  If it cannot be demonstrated through the process that an innocent examinee is more concerned with the control/comparison questions than with the relevant questions (i.e., that the emotional content/affect of the controls is greater than that of the relevant questions), then any recording of physiological response (the dependent variable) and any conclusions drawn from it are absolutely meaningless for a given exam.  This inability to verify the theoretical constructs for a given relevant/control pairing and a given examinee is what leaves control question test polygraphy without scientific control and without any ability to be meaningfully analyzed.

This situation does not exist with the forensic toxicological analysis that you either completely do not understand (hopefully) or intentionally misrepresent.  The chemical/physical relationship between deuterated benzoylecgonine (the control) and benzoylecgonine (the urinary metabolite of cocaine and the analyte of interest) is well understood for all of the environments involved in the analysis, i.e., tissue, organic and aqueous media, chromatographic packing materials, mass spec source, analyzer, etc.  Because of this, one can determine whether an experiment worked and what qualitative and quantitative conclusions can be meaningfully deduced from any dependent-variable measurements obtained.  To compare control question test polygraphy to this is, again, a quite embarrassing comparison.  Again, the fact that operator error can compromise the validity of quality control or operational practice in any given toxicological analysis neither makes forensic toxicology uncontrolled under normal circumstances nor vicariously makes control question test polygraphy more of a scientifically controlled practice through any contrived and envious comparisons.  It most assuredly does not.  WE ARE LEFT WITH WHAT WE BEGAN WITH: PROBABLE-LIE CONTROL QUESTION TEST (CQT) POLYGRAPHY DOES NOT IN ANY WAY EMBODY PRINCIPLES OF SCIENTIFIC CONTROL...
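To illustrate the kind of internal control being described, here is a minimal sketch, in Python, of internal-standard (isotope-dilution) quantitation, with entirely hypothetical peak areas and calibration values.  The point is that the deuterated standard passes through the same extraction and instrument run as the analyte, so every run carries its own check against a known quantity, and a failed control invalidates that run.

Code:

# Minimal sketch of internal-standard (isotope-dilution) quantitation.
# All numbers are hypothetical; this is not a real assay protocol.
def quantitate(analyte_area, istd_area, istd_conc_ng_ml, response_factor):
    """Estimate analyte concentration from the analyte / internal-standard
    peak-area ratio, scaled by a calibration response factor."""
    if istd_area <= 0:
        raise ValueError("Internal standard not detected: run is not interpretable.")
    ratio = analyte_area / istd_area
    return ratio * istd_conc_ng_ml / response_factor

# Hypothetical run: deuterated internal standard spiked at 100 ng/mL.
conc = quantitate(analyte_area=45200, istd_area=51000,
                  istd_conc_ng_ml=100.0, response_factor=0.95)
print(f"Estimated analyte concentration: {conc:.1f} ng/mL")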
« Last Edit: Mar 3rd, 2002 at 10:09pm by Drew Richardson »  
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #8 - Mar 7th, 2002 at 10:09pm
Mark & QuoteQuote Print Post  
Drew,

Although one may control for some given variants in a particular setting, there is always the chance of uncontrollable variants.   I will admit I am not a toxicologist.  My knowledge of this discipline is extremely limited in comparison to yours.  I am not arguing that toxicology is invalid nor is that the subject.  I did not use or list toxicology as a direct comparison of scientific validity.  My reference to toxicology was to show that even validated disciplines could have outside factors that cannot be controlled for in field settings and that those outside factors may produce an inconclusive and/or false result.  I compared questioned documents for scientific validity and I used documentation of the validity results of the 3T3 NRU PT for accuracy comparison.  This dialog was in direct response to what George wrote in his post prior to mine.

Quote:


2) Do you agree that because CQT polygraphy lacks both standardization and control, it can have no validity? If not, why?……

Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.



In reading this, I deduced that George was referring to variant control.  George must prove, with substantiated evidence from comparable scientific disciplines, that polygraph is not scientifically valid. These are his assertions, and this discussion is based on them and on the rules of discourse he has used in the past.

I wrote, “This exam/test checks for the ability of the test to work. If the subject has an autonomic response to the known lie, the test works. If the subject does not have an autonomic response to the known lie, the test does not work.”  I did not say that the purpose of the stim/acquaintance test was to measure the ability of the ANS to work.  A positive control test simply takes a known sample of the suspected substance and tests it simultaneously alongside the unknown.

For example:

Quote:


From ‘The Methods of Attacking Scientific Evidence’ by Edward J. Imwinkelried, 1982, Pg. 421-422

12-5(B).  Positive Control Test.

Control tests are vital in drug identification (14) and serological (15) testing.  Suppose that the analyst suspects that the unknown is marijuana.  At the same time that the analyst tests the unknown, she would subject marijuana to the identical test – the known is the control or reference sample. (16)  By simultaneously testing the unknown and known samples, the analyst can compare the test results side by side.  Drug identification experts almost unanimously agree that the use of controls is vital to the credibility of drug analysis evidence. (17) Experts on blood group typing also feel that controls are needed in blood, semen, and saliva analysis. (18) ……

14. Bradford, “Credibility of Drug Analysis Evidence,” Trial, May/June 1975, at 90.
15. Wraxall, “Forensic Serology,” in Scientific and Expert Evidence 897, 907 (2d ed. 1981).
16. Bradford, “Credibility of Drug Analysis Evidence,” Trial, May/June 1975, at 90.
17. Id.
18. Wraxall, “Forensic Serology,” in Scientific and Expert Evidence 897, 907 (2d ed. 1981).
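To make the quoted procedure concrete, here is a minimal sketch, in Python with an entirely invented stand-in assay, of the positive-control logic described above: the known reference sample goes through the identical test alongside the unknown, and the unknown's result is reported only if the known behaves as expected.

Code:

# Minimal sketch of a positive-control check.  The assay function is a
# hypothetical stand-in, not a real chemical test procedure.
def run_assay(sample):
    """Stand-in for the identical test applied to both samples."""
    return sample.get("reacts", False)

def analyze_with_positive_control(unknown, known_reference):
    # Both samples go through the identical test, side by side.
    if not run_assay(known_reference):
        return "invalid run: the known reference failed, so the result cannot be interpreted"
    return "positive" if run_assay(unknown) else "negative"

print(analyze_with_positive_control({"reacts": True}, {"reacts": True}))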



You reference the ANS.  Although the ANS is continuously engaged in sustaining life, the specific ANS response to deception measured in a polygraph is not one of its routine life-sustaining functions.  I agree that there are other explanations, which you have alluded to, for the use and existence of the stim/acquaintance test/exam.  A stim/acquaintance test/exam is a Known Solution Peak of Tension Test.  Polygraph examiner training material reads as follows in reference to the stim/acquaintance test: “Correlate outcome to the polygraph examination.”  Given my explanation of a positive control test and the supporting literature above, do you agree or disagree that the stim/acquaintance test/exam is a positive control test?   

Quote:


From: http://www.scientificexploration.org/jse/abstracts/v1n2a2.html

What Do We Mean by "Scientific?"

Henry H. Bauer, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061

There exists no simple and satisfactory definition of "science." Such terms as "scientific" are used for rhetorical effect rather than with descriptive accuracy. The virtues associated with science — reliability, for instance — stem from the functioning of the scientific community.



When referring to scientific validity, one can cite many instances in which a science was discredited by the majority of scientists, and thus not accepted, yet was later proven true and accepted without any addition to or change in its theory.  The reverse has also happened.  So “scientific validity” is itself a highly subjective judgment, directly dependent on the opinions of the current majority of scientists in the related discipline.  A scientific process can be accurate and its theory sound, but absent general acceptance it may still be considered invalid.  It is the test of scientific acceptance that determines whether a theory or practice is accepted.   
  

Quam verum decipio nos
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box George W. Maschke
Global Moderator
*****
Offline


Make-believe science yields
make-believe security.

Posts: 6230
Joined: Sep 29th, 2000
Re: The Scientific Validity of Polygraph
Reply #9 - Mar 7th, 2002 at 11:42pm
Mark & QuoteQuote Print Post  
J.B.,

In your post of 3 March, you wrote in part:

Quote:
Your argument is that polygraph is "not better then chance".


J.B., my argument is not that polygraphy is "not better than chance," but that it has not been proven by peer-reviewed research to work better than chance under field conditions.

And in your post of 7 March (today) you write:

Quote:
George must prove, with substantiated evidence from comparable scientific disciplines, that polygraph is not scientifically valid. These are his assertions, and this discussion is based on them and on the rules of discourse he has used in the past.


No, J.B. The burden of proof rests with you (and other polygraph proponents) if you would have us believe that CQT polygraphy is a valid diagnostic technique. Respectfully, I don't think you've met that burden. Not even close.


« Last Edit: Mar 8th, 2002 at 12:05am by George W. Maschke »  

George W. Maschke
I am generally available in the chat room from 3 AM to 3 PM Eastern time.
Tel/SMS: 1-202-810-2105 (Please use Signal Private Messenger or WhatsApp to text or call.)
E-mail/iMessage/FaceTime: antipolygraph.org@protonmail.com
Wire: @ap_org
Threema: A4PYDD5S
Personal Statement: "Too Hot of a Potato"
Back to top
IP Logged
 
Paste Member Name in Quick Reply Box beech trees
God Member
*****
Offline



Posts: 593
Joined: Jun 22nd, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #10 - Mar 8th, 2002 at 12:48am
Mark & QuoteQuote Print Post  
J.B. McCloughan wrote on Mar 7th, 2002 at 10:09pm:

George must prove, with substantiated evidence from comparable scientific disciplines, that polygraph is not scientifically valid. These are his assertions, and this discussion is based on them and on the rules of discourse he has used in the past.


Nope, no sir. No way. You cannot prove a negative; it is a logical impossibility. The burden of proof rests squarely on your shoulders. Thus far, I'm not convinced.
« Last Edit: Mar 8th, 2002 at 2:02am by beech trees »  

"It is the duty of the patriot to protect his country from its government." ~ Thomas Paine
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #11 - Mar 8th, 2002 at 8:06am
Mark & QuoteQuote Print Post  
George,

This thread was started because of a direct statement that you had made about polygraph.  You said, "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. Moreover, since CQT polygraph lacks both standardization and control, it can have no validity."

You have in no way supported this assertion.  Neither Lykken nor any other opponent of CQT polygraphy has proven it.  There is no current statistical data in peer-reviewed field or laboratory research showing that polygraph performs no better than chance at differentiating between truth and deception.  I have supported this by citing current and past peer-reviewed studies, all with higher-than-chance validity rates, the more recent of them with validity rates equal to or better than those of some accepted scientific disciplines.
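As an aside on what "better than chance" means operationally, here is a minimal sketch, with hypothetical numbers not taken from any of the cited studies, of testing whether an observed accuracy exceeds the 50% expected by guessing on a two-way truthful/deceptive decision.  It illustrates only the bare statistical comparison against a guessing baseline.

Code:

# Minimal sketch: one-sided binomial test against a 50% chance baseline.
# The counts below are hypothetical, for illustration only.
from math import comb

def binomial_p_value(correct, total, chance=0.5):
    """Probability of at least `correct` hits out of `total` decisions
    if the procedure were operating purely at chance level."""
    return sum(comb(total, k) * chance**k * (1 - chance)**(total - k)
               for k in range(correct, total + 1))

# Hypothetical example: 90 correct decisions out of 100 examinations.
print(f"p = {binomial_p_value(90, 100):.2e}")  # far below 0.05 -> better than chance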

You and those you reference write of a lack of scientific control and standardization, yet there is no support for these claims; they are simply unsupported statements.  Lykken does have the luxury of being renowned in his field, so his assertions are revered by the followers of his ideology (the GKT).  His arguments only aid in slowing general acceptance and do nothing to disprove polygraph as a scientifically valid discipline. 

You do not have the luxuries that Lykken is afforded.  Your formal education is not in a related or even semi-related field.  For you to make unsupported statements, without the credentials or peer-reviewed research to back them, is nothing more than a lay assumption or a repetition of Lykken's meaningless rhetoric.

I have shown examples of scientific control, standardization, and validity.

I have reviewed my comparisons with scientists of other accepted disciplines and they believe my explanations are sound scientifically and support my assertions.

I again ask you:

1) What in the current peer-reviewed field research shows polygraph to be no better than chance accuracy, and what is the current accuracy rate?

2) What control does polygraph lack?

3) What standardization does polygraph lack?

If you cannot prove your assertion, then please retract it and state that which you can support with hard evidence.

beech trees,

This debate is in reference to an assertion made by George.  He has in the past set the rules for rational discourse and placed the burden of proof on the one who makes the assertion.  Thus, the burden of proof is his.   

This is not a scientific review of polygraph for official acceptance.  If it were, I would agree with you that the presenter of evidence for a proposed science would bear the burden of proving it to be true.  There are very few on this site who have the credentials to carry out that type of formal debate, and it would have to be done in the proper forum for acceptance.
  

Quam verum decipio nos
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box Drew Richardson
Especially Senior User
*****
Offline



Posts: 427
Joined: Sep 7th, 2001
Re: The Scientific Validity of Polygraph
Reply #12 - Mar 8th, 2002 at 4:16pm
Mark & QuoteQuote Print Post  
J.B.

Quote:
...although one may control for some given variants in a particular setting, there is always the chance of uncontrollable variants...


True, but irrelevant to the fact that probable-lie control question test (CQT) polygraphy has no, nada, zilch scientific control on ANY day, uncontrollable variants notwithstanding.  This is because the emotional content of relevant/control question pairings is NEVER known a priori for a given examinee in a given examination.  This makes any conclusions drawn from the physiological recordings meaningless and sheer speculation on each and every occasion/outing.


Quote:
...I will admit I am not a toxicologist.  My knowledge of this discipline is extremely limited in comparison to yours.  I am not arguing that toxicology is invalid nor is that the subject...


Again, all likely true, and although containing an appreciated and flattering admission, all irrelevant to the issue at hand...

The theoretical nature of the control/comparison questions (their emotional content/affect in relation to the paired relevant question material) cannot be verified on ANY, and I repeat ANY, given occasion, which means their use, if not entirely without purpose, at least provides no scientific control in any formal and recognized sense.
« Last Edit: Mar 8th, 2002 at 4:56pm by Drew Richardson »  
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #13 - Mar 8th, 2002 at 10:44pm
Mark & QuoteQuote Print Post  
Drew,

Quote:

…..the fact the probable-lie control question test (CQT) polygraphy has no, nada, zilch scientific control on ANY day……


Do you agree or disagree that the POT/Known Solution Test can be considered a Positive Control Test?
  

Quam verum decipio nos
Back to top
 
IP Logged
 
Paste Member Name in Quick Reply Box Drew Richardson
Especially Senior User
*****
Offline



Posts: 427
Joined: Sep 7th, 2001
Re: The Scientific Validity of Polygraph
Reply #14 - Mar 8th, 2002 at 11:16pm
Mark & QuoteQuote Print Post  
J.B.,

In order to answer your last posted question, I am afraid I must seek clarification.  I believe you are asking me whether I consider the stim/acquaintance test a positive control for subsequently administered probable-lie control question tests.  If not, please correct me and I will answer your intended question.  But to this one...

No, I don't.  If the stim/acquaintance test were in fact a probable-lie CQT, you would at best have an external control situation, a much weaker form of control than the internal positive control we have discussed with a forensic toxicological analysis; but in fact even this weaker form of control does not exist...

The so-called stim test is really not a probable-lie CQT or any other test for deception, and it therefore offers no form of control, external or internal.  The reason is that, although you can instruct the examinee to answer "no" to the chosen number (and therefore lie), you can also have him or her answer "yes" to that same number, or provide no answer at all (i.e., a silent test), and obtain exactly the same result/response.  In other words, the lie is irrelevant to the stim test; what the stim test really is, is a form of concealed information test, in which the examinee is merely responding to something of significance to himself, the significance deriving from the fact that the number was recently chosen by the examinee.  Although, as I have indicated before, I have great disdain for how a stim/acquaintance test is used in a polygraph setting, I actually believe the format, apart from that setting, to be a quite useful and narrowly defined/controlled vehicle for studying physiological change.  But again, in answer to the question that I believe I was asked: a stim test has absolutely nothing to do with providing control to a probable-lie control question test.
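A minimal sketch, with invented response amplitudes, of the concealed-information logic described above: the scoring looks only at whether the chosen item draws a larger response than the neutral items, regardless of what the examinee answers aloud.

Code:

# Minimal sketch of concealed-information (known-solution) scoring.
# Amplitudes are invented; real scoring uses multiple channels and repetitions.
def cit_score(responses, chosen_item):
    """Compare the response to the chosen item against the mean of the others."""
    others = [value for item, value in responses.items() if item != chosen_item]
    baseline = sum(others) / len(others)
    return responses[chosen_item] - baseline  # > 0 suggests the item is significant

responses = {"3": 0.4, "4": 0.5, "5": 1.6, "6": 0.3, "7": 0.5}  # arbitrary units
print(cit_score(responses, chosen_item="5"))  # positive whatever the spoken answer was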
  
Back to top
 
IP Logged
 