AntiPolygraph.org Message Board
Polygraph and CVSA Forums >> Polygraph Procedure >> The Scientific Validity of Polygraph
https://antipolygraph.org/cgi-bin/forums/YaBB.pl?num=1011498360

Message started by J.B. McCloughan on Jan 20th, 2002 at 6:45am

Title: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Jan 20th, 2002 at 6:45am
This message thread is being started in direct response to George Maschke's assertion that, "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. Moreover, since CQT polygraph lacks both standardization and control, it can have no validity."  

The discussion will encompass, but not be limited to:

1. All studies that are published and peer reviewed in a professional journal or publication.

2. Comparisons of polygraph to other scientifically accepted fields of study and their practices.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Jan 20th, 2002 at 6:50am
I will start the discussion by referring those interested to a web site that contains government reviews of polygraph and many of the studies and findings on the validity of polygraph: http://fas.org/sgp/othergov/polygraph/ota/

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Jan 22nd, 2002 at 8:04am
Here is an excerpt from http://fas.org/sgp/othergov/polygraph/ota/conc.html to get the discussion under way.

In reading this, one can see that the reviewing entity states quite clearly that polygraph does show a better than chance ability to detect deception.  "The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance...."  


Quote:

Scientific Validity of Polygraph Testing:
A Research Review and Evaluation

A Technical Memorandum
Washington, D. C.: U.S. Congress
Office of Technology Assessment
OTA-TM-H-15
November 1983

Chapter 7, Section 3, Sub-Section 1

SPECIFIC SCIENTIFIC CONCLUSIONS IN POLICY CONTEXT
Specific-Incident Criminal Investigations

A principal use of the polygraph test is as part of an investigation (usually conducted by law enforcement or private security officers) of a specific situation in which a criminal act has been alleged to have, or in fact has, taken place. This type of case is characterized by a prior investigation that both narrows the suspect list down to a very small number, and that develops significant information about the crime itself. When the polygraph is used in this context, the application is known as a specific-issue or specific-incident criminal investigation.

Results of OTA Review

The application of the polygraph to specific-incident criminal investigations is the only one to be extensively researched. OTA identified 6 prior reviews of such research (summarized in ch. 3), as well as 10 field and 14 analog studies that met minimum scientific standards and were conducted using the control question technique (the most common technique used in criminal investigations; see chs. 2, 3, and 4). Still, even though meeting minimal scientific standards, many of these research studies had various methodological problems that reduce the extent to which results can be generalized. The cases and examiners were often sampled selectively rather than randomly. For field studies, the criteria for actual guilt or innocence varied and in some studies were inadequate. In addition, only some versions of the control question technique have been researched, and the effect of different types of examiners, subjects, settings, and countermeasures has not been systematically explored.

Nonetheless, this research is the best available source of evidence on which to evaluate the scientific validity of the polygraph for specific-incident criminal investigations. The results (for research on the control question technique in specific-incident criminal investigations) are summarized below:

   * Six prior reviews of field studies:
         * average accuracy ranged from 64 to 98 percent.
   * Ten individual field studies:
         * correct guilty detections ranged from 70.6 to 98.6 percent and averaged 86.3 percent;
         * correct innocent detections ranged from 12.5 to 94.1 percent and averaged 76 percent;
         * false positive rate (innocent persons found deceptive) ranged from 0 to 75 percent and averaged 19.1 percent; and
         * false negative rate (guilty persons found nondeceptive) ranged from 0 to 29.4 percent and averaged 10.2 percent.
   * Fourteen individual analog studies:
         * correct guilty detections ranged from 35.4 to 100 percent and averaged 63.7 percent;
         * correct innocent detections ranged from 32 to 91 percent and averaged 57.9 percent;
         * false positives ranged from 2 to 50.7 percent and averaged 14.1 percent; and
         * false negatives ranged from 0 to 28.7 percent and averaged 10.4 percent.

The wide variability of results from both prior research reviews and OTA's own review of individual studies makes it impossible to determine a specific overall quantitative measure of polygraph validity. The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance, but with error rates that could be considered significant.

The figures presented above are strictly ranges or averages for groups of research studies. Another selection of studies would yield different results, although OTA's selection represents the set of studies that met minimum scientific criteria. Also, some researchers exclude inconclusive results in calculating accuracy rates. OTA elected to include the inconclusives on the grounds that an inconclusive is an error in the sense that a guilty or innocent person has not been correctly identified. Exclusion of inconclusives would raise the overall accuracy rates calculated. In practice, inconclusive results may be followed by a retest or other investigations.


Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Jan 22nd, 2002 at 11:57pm
J.B.

You argue that the 1983 OTA report "states quite clearly that polygraph does show a better than chance ability to detect deception." And you cite the following from Chapter 7 of the report:


Quote:
The preponderance of research evidence does indicate that, when the control question technique is used in specific-incident criminal investigations, the polygraph detects deception at a rate better than chance, but with error rates that could be considered significant.


As a preliminary matter, note that this statement by the OTA is not inconsistent with my statement that CQT polygraphy has not been proven by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. The OTA relied on both field studies and analog (laboratory) studies. Of the field studies, only two appeared in a peer-reviewed scientific journal:

Bersh, P. J. "A Validation Study of Polygraph Examiner Judgments," Journal of Applied Psychology, 53:399-403, 1969.

Horvath, F. S., "The Effect of Selected Variables on Interpretation of Polygraph Records," Journal of Applied Psychology, 62:127-136, 1977.

(By the way, the FAS website does not include the OTA report's list of references. You'll find it in the PDF version available on Princeton University's Woodrow Wilson School of Public and International Affairs website.)

Bersh's study involved both the Zone [of] Comparison "Test" (a form of probable-lie "Control" Question "Test") and the General Question "Test" (a form of the Relevant/Irrelevant technique). The polygraphers used "global" scoring, that is, they reached their determinations of guilt or innocence based not only on the charts, but also on their clinical impression or "gut feeling" regarding the subject. The decision of a panel of judges (four Judge Advocate General attorneys) was used as "ground truth." Assuming the panel's judgment to be correct, the OTA report notes that the polygraphers' determinations were (overall) 70.6% correct with guilty subjects and 80% correct with innocent subjects.

David T. Lykken provides an insightful commentary on Bersh's study at pp. 104-106 of the 2nd edition of A Tremor in the Blood: Uses and Abuses of the Lie Detector. Because the discussion we are having of polygraph validity is an important one, I will cite Lykken's treatment of Bersh's study here in full for the benefit of those who do not have ready access to A Tremor in the Blood (which now seems to be out of print):


Quote:

Validity of the Clinical Lie Test


In view of the millions of clinical lie tests that have been administered to date, it is surprising that only one serious investigation of the validity of this method has been published, Bersh's 1969 Army study.[reference deleted] Bersh wanted to assess the average accuracy of typical Army polygraphers who routinely administered clinically evaluated lie "tests" to military personnel suspected of criminal acts. He obtained a representative sample of 323 such cases on which the original examiner had rendered a global diagnosis of truthful or deceptive. The completed case files were then given to a panel of experienced Army attorneys who were asked to study them unhindered by technical rules of evidence and to decide which of the suspects they believed had been guilty and which innocent. The four judges discarded 80 cases in which they felt there was insufficient evidence to permit a confident decision. On the remaining 243 cases, the panel reached unanimous agreement on 157, split three-to-one on another 59, and were deadlocked on 27 cases. Using the panel's judgment as his criterion of ground truth, Bersh then compared the prior judgments of the polygraphers against this criterion. When the panel was unanimous, the polygraphers' diagnosis agreed with the panel's verdict on 92% of the cases. When the panel was split three-to-one, the agreement fell to 75%. On the 107 cases where the panel had divided two-to-two or had withheld judgment, no criterion was of course available.

Bersh himself pointed out that we cannot tell what role if any the actual polygraph results played in producing this level of agreement. In another part of that same Defense Department study, polygraphers like those Bersh investigated were required to "blindly" rescore one another's polygraph charts in order to estimate polygraph reliability. The agreement was better than chance but very low. As these Army examiners then operated (they have since converted to the Backster method [of numerical scoring], which is more reliable), chart scoring was conducted so unreliably that we can be sure that Bersh's examiners could not have obtained much of their accuracy from the polygraphs: validity is limited by unreliability. But, although these findings are a poor advertisement for the polygraph itself, can they at least indicate the average accuracy of a trained examiner in judging the credibility of a respondent in the relatively standardized setting of a polygraph examination?

Bersh's examiners based their diagnoses in part on clinical impressions or behavior symptoms, which, we know from the evidence mentioned above, should not have permitted an accuracy much better than chance. But they also had available to them at the time of testing whatever information was then present in that suspect's case file: the evidence then known against him, his own alibi, his past disciplinary record, and so on. In other words, the polygraphers based their diagnoses in part on some portion of the same case facts that the four panel judges used in reaching their criterion decision. This contamination is the chief difficulty with the Bersh study. When his judges were in unanimous agreement, it was presumably because the evidence was especially persuasive, an "open-and-shut case." It may be that much of that same convincing evidence was also available to the polygraphers, helping them to attain that 92% agreement. When the evidence was less clear-cut and the panel disagreed three-to-one among themselves, the evidence may also have been similarly less persuasive when the lie tests were administered--and so the polygrapher's agreement with the panel dropped to 75% (note that the average panel member also agreed with the majority 75% of the time). An extreme example of this contamination involves the fact that an unspecified number of the guilty suspects confessed at the time of the examination. Because the exams were clinically evaluated, we can be sure that every test that led to a confession was scored as deceptive. Since confessions were reported to the panel, we can be sure also that the criterion judgment was always guilty in these same cases. Thus, every lie test that produced a confession was inevitably counted as an accurate test, although, of course, such cases do not predict at all whether the polygrapher would have been correct absent the confession. 
That the polygraph test frequently produces a confession is its most valuable characteristic to the criminal investigator, but the occurrence of a confession tells us nothing about the accuracy of the test itself.

Thus, the one available study of the accuracy of the clinical lie test is fatally compromised. Because of the contamination discussed above, the agreement achieved when the criterion panel was unanimous is clearly an overestimate of how accurate such examiners could be in the typical run of cases. When the panel split three-to-one, then at least we know that there was no confession during the lie test or some other conclusive evidence available to both the panel and the examiner. The agreement achieved on this subgroup was 75%, equal to the panel judges' agreement among themselves. As we have seen, Bersh's examiners could not have improved much on their clinical and evidentiary judgments by referring to their unreliable polygraphs.


As Lykken makes clear, Bersh's study does little to support the validity of CQT polygraphy.

The second peer-reviewed field study cited in the OTA report is that by Horvath. In this study, confessions were used as the criterion for ground truth. In Horvath's study, 77% of the guilty and 51% of the innocent were correctly classified, for a mean accuracy of 64%.

Lykken again provides cogent commentary regarding Horvath's study (as well as a later peer-reviewed field study conducted by Kleinmuntz and Szucko). The following is an excerpt from pp. 133-34 of A Tremor in the Blood (2nd ed.):


Quote:
The studies by Horvath and by Kleinmuntz and Szucko both used confession-verified CQT charts obtained respectively from a police agency and the Reid polygraph firm in Chicago. The original examiners in these cases, all of whom used the Reid clinical lie test technique, did not rely only on the polygraph results in reaching their diagnoses but also employed the case facts and their clinical appraisal of the subject's behavior during testing. Therefore, some suspects who failed the CQT and confessed were likely to have been judged deceptive and interrogated based primarily on the case facts and their demeanor during the polygraph examination, leaving open the possibility that their charts may or may not by themselves have indicated deception. Moreover, some other suspects, judged truthful using global criteria, could have produced charts indicative of deception. That is, the original examiners in these cases were led to doubt these suspects' guilt in part regardless of the evidence in the charts and proceeded to interrogate an alternative suspect in the same case who thereupon confessed. For these reasons, some undetermined number of the confessions that were criterial in these two studies were likely to be relatively independent of the polygraph results, revealing some of the guilty suspects who "failed" it....


Again, Horvath's study (and for that matter, that of Kleinmuntz & Szucko) does little to support the validity of CQT polygraphy.

In A Tremor in the Blood, Lykken addresses three peer-reviewed field studies that post-date the OTA review. I won't address those studies individually for the time being, but I think it's fair to say that the available peer-reviewed research has not proven that CQT polygraphy works at better than chance levels of accuracy under field conditions.

Do you disagree? If so, why? What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?

The other statement I've made (and you've noted) is that because CQT polygraphy lacks both standardization and control, it can have no validity. You'll find that explained in more detail in Chapter 1 of The Lie Behind the Lie Detector. I'll be happy to discuss it further, but before I do, I would ask whether you disagree with me regarding this, and if so, why?


Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Feb 18th, 2002 at 7:46am
George,

The OTA's findings and statements that polygraph is better than chance were based on all available credible research, and only acceptable field studies were included.  Some suggested research studies were eliminated due to validity and structural problems found upon "peer-review".  

The Bersh study, although archaic, does much to enlighten the general public and scientific community, being what I believe to be the first field research study of the ZCT (1961 Backster).  There have since been changes made to the ZCT that could have increased the Bersh study's accuracy even further, standardized numeric scoring criteria being one.  

Bersh's examiners did not solely conduct a "clinical" evaluation of subjects for deception, as Lykken suggests.  The examiners used a global scoring method of evaluation.  This method does use the charts to discern whether one is showing deception to a particular question.  Global scoring also includes observations of the subject prior to, during, and following the exam, and all the available investigative material.  It does puzzle me that Lykken would state, "clinical impressions or behavior symptoms, which, we know from the evidence mentioned above, should not have permitted an accuracy much better than chance."  It is a well-known fact that psychologists quite frequently use this very method to come to their professional opinions.  Sometimes, if not often, psychologists' opinions concern whether a client "truly" believes or is being "truthful" about something the client says has happened to them.  The psychologist then gives his professional opinion on the aforementioned.  I have both seen and heard psychologists testify to these opinions in court.  Unlike polygraph examiners, psychologists rarely have physiological data on which to base or support their inferences.

As for the results of the study, the OTA compares Bersh's to Barland and Raskin's study.  The OTA does note that the two studies have some inherent differences.  However, the OTA considered the studies similar enough to compare.  The OTA states, "Assuming the panel's decisions, the two studies' results are strikingly different.  Barland and Raskin attained accuracy rates of 91.5 percent for the guilty and 29.4 percent for the innocent subjects; comparable figures in Bersh's study are 70.6 percent guilty correct and 80 percent innocent correct."  My math shows a combined accuracy rate of 81.05 percent for guilty, 54.7 percent for innocent, and 67.87 percent overall for the two studies.  The OTA then wrote, "It is not clear why there should be this variation...."  They go on to give some possible reasons but miss some technical reasons for the differences in the findings of the two studies.  Most obvious, Bersh's study used the ZCT and R&I formats, used the global scoring method, and eliminated inconclusive exams.  Ground truth is the most difficult element to establish in a polygraph research study because it depends on the interpretations and opinions of the reviewers.
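The combined figures above are simple unweighted means of the two studies' reported rates. The arithmetic can be checked with a short calculation (a sketch; note these are per-study, not per-case, averages, so a case-weighted figure would differ since the two studies had different sample sizes):

```python
# Unweighted means of the accuracy figures quoted from the OTA report:
# Bersh (1969) and Barland & Raskin (1976), percent correct.
bersh = {"guilty": 70.6, "innocent": 80.0}
barland_raskin = {"guilty": 91.5, "innocent": 29.4}

guilty_avg = (bersh["guilty"] + barland_raskin["guilty"]) / 2        # ~81.05
innocent_avg = (bersh["innocent"] + barland_raskin["innocent"]) / 2  # ~54.7
overall_avg = (guilty_avg + innocent_avg) / 2                        # ~67.875

print(guilty_avg, innocent_avg, overall_avg)
```

Rounded to two decimal places, the overall mean is 67.87 (strictly 67.875), matching the figure quoted above.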

The R&I question format has proven to be a less accurate technique than the ZCT or CQT in specific-issue criminal examination studies.  This is arguably the reason why the Army created the Modified General Question Technique (MGQT), which includes comparison questions, zone/spot scoring, and total chart minutes.   How the two question formats in Bersh's study compared or differed in accuracy would be interesting to know.

The available scientific research on polygraph shows that a greater percentage of inconclusive exams occur among the innocent.  Thus it is reasonable to suppose that Barland and Raskin's study might have produced similar, if not better, results for both the truthful and the deceptive when compared to Bersh's, had inconclusive results been set aside.  The scientific community often holds inconclusive results against polygraph when reviewing its scientific validity and accuracy.  However, polygraph examiners view an inconclusive result as meaning there is not enough in the chart tracings to support an opinion.  An inconclusive can be attributed to many variables.  One example of inconclusive chart tracings may be found in an exam where the examinee has problems remaining still or intentionally moves.  Even Farwell, the inventor of Brain Fingerprinting, says his instrument will produce inconclusive results if the examinee does not remain still during the examination.  
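The effect of counting versus setting aside inconclusives is easy to illustrate with invented numbers (the counts below are hypothetical, not drawn from any of the studies under discussion):

```python
# Hypothetical outcomes of 100 exams; the counts are invented for illustration.
correct, wrong, inconclusive = 70, 10, 20

# OTA's convention: an inconclusive counts as an error, since no one was
# correctly identified.
accuracy_inconclusives_counted = correct / (correct + wrong + inconclusive)

# The examiners' convention: inconclusives are set aside before computing
# accuracy, since no opinion was rendered.
accuracy_inconclusives_excluded = correct / (correct + wrong)

print(accuracy_inconclusives_counted)   # 0.7
print(accuracy_inconclusives_excluded)  # 0.875
```

The same underlying charts thus yield a 70% or an 87.5% "accuracy" depending solely on how inconclusives are handled, which is why the two camps can quote such different figures from the same data.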

Lykken also states, "Because the exams were clinically evaluated, we can be sure that every test that led to a confession was scored as deceptive."  He makes this statement without any supporting documentation or reference to a specific incident within the study where this actually occurred.  There is no evidence to support his opinion on this issue.  If a confession were obtained prior to chart data collection, the exam would have been considered incomplete by the examiner.  This is not the case in point in Bersh's study, because inconclusive and incomplete exams were not included.

Lykken argues that Bersh's study is "fatally compromised" because of his prior assertions.  He writes, "That the polygraph test frequently produces a confession is its most valuable characteristic to the criminal investigator, but the occurrence of a confession tells us nothing about the accuracy of the test itself."  I agree that a confession is a valuable tool in a criminal investigation.  I disagree with his dismissal of the use of a properly documented confession to confirm the polygraph data results.  A proper confession covers the elements of the crime and includes information that only a person who committed the crime would know.  When this information is present in a confession, it would undoubtedly confirm the data.  The question here is not whether the confession can be used to confirm the polygraph chart data but what standard was used in deeming statements made by examinees to be confessions.  However, this point is not asserted or proven in Lykken's argument, thus it would appear to be a nonexistent flaw.

Horvath's research study provides good data in areas but had missing information that might have hindered the overall accuracy results. Barland submits that Horvath's original examiners were 100 percent correct in their opinions.  Barland notes that special charts administered in 32 percent of the cases were removed from the files of subjects considered deceptive.  These special charts were most likely removed to avoid pre-judgment by the research evaluators. I do not think his study invalidated polygraph in any way.  The study in fact provided valuable insight into the possible effect incomplete chart data might have on accurate review.  Horvath's study still produced better than chance results, considering there was a 50% chance of the reviewers being correct and they were overall 64% correct.
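Whether an observed rate like 64% is "better than chance" can be framed as an exact binomial test. The sketch below assumes a hypothetical sample of 100 independent judgments (the sample size is invented for illustration and is not taken from Horvath's study; note also that blind rescorings of the same charts are not strictly independent trials):

```python
from math import comb

# Hypothetical: 64 correct calls out of 100, against a 50/50 chance baseline.
n, k = 100, 64

# One-sided exact binomial test: the probability of getting k or more correct
# out of n by pure guessing at p = 0.5.
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(p_value)  # roughly 0.003: unlikely to arise from guessing alone
```

Under those assumptions a 64% hit rate would be statistically distinguishable from chance, though that says nothing about whether the error rate is acceptable in practice.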

Lykken states, "The original examiners in these cases, all of whom used the Reid clinical lie test technique, did not rely only on the polygraph results in reaching their diagnoses but also employed the case facts and their clinical appraisal of the subject's behavior during testing."  This statement is partially true but not completely factual.  The examiners in this study used scoring of the charts along with the global information present.  The global scoring method in no way goes against the chart data results; on the contrary, it uses other information to confirm the chart data.  Lykken goes on to assert, "Moreover, some other suspects, judged truthful using global criteria, could have produced charts indicative of deception."  This is an illogical statement.  He never stipulates what scoring method or criterion might have produced a deceptive result.  He cannot prove or disprove his assertions.  I could just as easily conclude that, if given all the data available to the original examiners and the same scoring method, the reviewers would have concurred with the original examiners in 100% of the cases.  Neither Lykken nor I can prove or disprove our assertions, because this variable was not present or measured.  However, Barland had access to and reviewed the data after the missing variable was discovered.

The point I am making is that no matter how meticulously one accounts for variables, there will most likely be some that require further research to answer.  This is true of any research, including physiology, psychology, and medicine.  One cannot control for or predict every possible variable.  The fact that a variable is in question does not invalidate the findings or methodology.  The fact that DNA's sample database is relatively small in comparison with the total population of the earth does not lead scientists to doubt the accuracy or scientific validity of its methods or findings.

I have read chapter one of The Lie Behind The Lie Detector and have found no reference to what standardization and control the CQT lacks.  You state that it lacks these elements but give no examples or criterion for standardization and control.  I would think this would be hard for you to do, as even the scientific community is quite subjective in their opinion of what constitutes acceptable standardization and control for scientific validity.  Can you reference, for comparison purposes, any other scientific method that has been accepted and its basis for acceptance?  Can you reference, for comparison purposes,  any other scientific method that was rejected based on comparable factors you might use to make this statement?

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Feb 18th, 2002 at 10:38am
J.B.,

Before I address your questions, I note that you didn't really answer mine:

1) Do you agree that the available peer-reviewed research has not proven that CQT polygraphy works at better than chance levels of accuracy under field conditions? If not, why? What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?

I realize you averaged the Bersh and Barland & Raskin studies to come up with an average accuracy of 67.87%. Do you seriously maintain that these two studies prove that CQT polygraphy works better than chance and that it is 67.87% accurate? By the way, you did not specify to which study by Barland & Raskin you were referring. I assume you are referring to the following non-peer-reviewed study discussed at pp. 52-54 of the OTA report:

Barland, G.H., and Raskin, D.C., "Validity and Reliability of Polygraph Examinations of Criminal Suspects," report No. 76-1, contract No. 75-N1-99-0001 (Washington, D. C.: National Institute of Justice, Department of Justice, 1976).

2) Do you agree that because CQT polygraphy lacks both standardization and control, it can have no validity? If not, why?

Now, you mentioned that you read Chapter 1 of The Lie Behind the Lie Detector and found no reference to what standardization and control the CQT lacks. That reference is found at pp. 2-3 of the 1st digital edition, where we cite Furedy:


Quote:

Professor John J. Furedy of the University of Toronto (Furedy, 1996) explains regarding the “Control” Question “Test” that

…basic terms like “control” and “test” are used in ways that are not consistent with normal usage. For experimental psychophysiologists, it is the Alice-in-Wonderland usage of the term “control” that is most salient. There are virtually an infinite number of dimensions along which the R [relevant] and the so-called “C” [“control”] items of the CQT could differ. These differences include such dimensions as time (immediate versus distant past), potential penalties (imprisonment and a criminal record versus a bad conscience), and amount of time and attention paid to “developing” the questions (limited versus extensive). Accordingly, no logical inference is possible based on the R versus “C” comparison. For those concerned with the more applied issue of evaluating the accuracy of the CQT procedure, it is the procedure’s in-principle lack of standardization that is more critical. The fact that the procedure is not a test, but an unstandardizable interrogatory interview, means that its accuracy is not empirically, but only rhetorically, or anecdotally, evaluatable. That is, one can state accuracy figures only for a given examiner interacting with a given examinee, because the CQT is a dynamic interview situation rather than a standardizable and specifiable test. Even the weak assertion that a certain examiner is highly accurate cannot be supported, as different examinees alter the dynamic examiner-examinee relationship that grossly influences each unique and unspecifiable CQT episode.


Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.

You asked, "Can you reference, for comparison purposes, any other scientific method that has been accepted and its basis for acceptance?" I think Drew Richardson gave a good example in his remarks to the National Academy of Sciences on 17 October 2001, when he compared polygraphy to a test for a urinary metabolite of cocaine:

http://antipolygraph.org/nas/richardson-transcript.shtml#control

The test Dr. Richardson describes is genuinely standardized and controlled, unlike polygraphy.

You also asked, "Can you reference, for comparison purposes, any other scientific method that was rejected based on comparable factors you might use to make this statement?" For comparison purposes, look to polygraphy's sister pseudosciences of phrenology and graphology.


Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 3rd, 2002 at 9:00am
George,

To answer your first question, no, I do not agree.  The peer-reviewed research does prove polygraph to be better than chance at detecting deception. I purposefully used a rather dated study for my first post because even it provides better than chance accuracy for the review method used.  If we look at Bersh's study alone, "....70.6 percent guilty correct and 80 percent innocent correct.", the overall accuracy rate is 75.3%.  This is one of the first field polygraph studies.  There have been changes made to polygraph which have improved its overall accuracy, some of which I discussed in my previous post.  Bersh's study also uses non-polygraph evaluators to confirm results.  Thus, the 75.3% accuracy is achieved in part through information independent of the polygraph data.  Lykken's argument is that the results are independent of the polygraph charts. I have stated previously that this is not completely true.  Some of the evaluators based their decisions on information independent of the polygraph charts.  However, the examiners' original decisions were based on the polygraph chart data.  Regardless of all that, the study shows better than chance detection of both truth and deception in a field setting.  

The combined studies of Bersh and of Barland and Raskin do not illustrate polygraph to be 67.87% accurate per se. The studies illustrate that polygraph was accurate to this degree for the particular confirmation method used. Your argument is that polygraph is "not better than chance". Although the previous studies do not reflect the current accuracy rate of polygraph, using the given confirmation method the combined studies do support a better than chance accuracy rate.

On a related but separate note, just because Barland and Raskin's study did not appear in a professional journal upon its release does not mean it was not peer-reviewed and accepted. Just because Lykken does not like the results and the method is not his does not make the study invalid. You may get some of the people to agree some of the time, but you can't get all of the people to agree all of the time.

Since I have stated that there were improvements made to polygraph which have increased its accuracy, I will quote some more recent studies to support the increased accuracy of polygraph. All of these studies appear in professional journals and provide better than chance results for CQT polygraph in detecting deception.


Quote:

From http://www.polygraph.org/research.htm

Patrick, C. J., & Iacono, W. G. (1991). Validity of the control question polygraph test: The problem of sampling bias. Journal of Applied Psychology, 76(2), 229-238.

Sampling bias is a potential problem in polygraph validity studies in which posttest confessions are used to establish ground truth, because this criterion is not independent of the polygraph test. In the present study, criterion evidence was sought from polygraph office records and from independent police files for all 402 control question tests (CQTs) conducted during a 5-year period by federal police examiners in a major Canadian city. Based on blind scoring of the charts, the hit rate for criterion innocent subjects (65% of whom were verified by independent sources) was 55%; for guilty subjects (of whom only 2% were verified independently), the hit rate was 98%. Although the estimate for innocent subjects is tenable given the characteristics of the sample on which it is based, the estimate for the guilty subsample is not. Some alternatives to confession studies for evaluating the accuracy of the CQT with guilty subjects are discussed.

Podlesny, J. A., & Truslow, C. M. (1993). Validity of an expanded-issue (Modified General Question) polygraph technique in a simulated distributed-crime-roles context. Journal of Applied Psychology, 78(5), 788-797.

The validity of an expanded-issue control-question technique that is commonly used in investigations was tested with simulations of thief, accomplice, confidant, and innocent crime roles. Field numerical scores and objective measures discriminated between the guilty and innocent groups. Excluding inconclusives (guilty = 18.1%, innocent = 20.8%), decisions based on total numerical scores were 84.7% correct for the guilty group and 94.7% correct for the innocent group. There was relatively weaker, but significant, discrimination between the thief group and the other guilty groups and no significant discrimination between the accomplice group and the confidant group. Skin conductance, respiration, heart rate, and cardiograph measures contributed most strongly to discrimination.

Honts, C. R. (1996). Criterion development and validity of the CQT in field application. The Journal of General Psychology, 123(4), 309-324.

A field study of the control question test (CQT) for the detection of deception was conducted. Data from the files of 41 criminal cases were examined for confirming information and were rated by two evaluators on the strength of the confirming information. Those ratings were found to be highly reliable, r = .94. Thirty-two of the cases were found to have some independent confirmation. Numerical scores and decisions from the original examiners and an independent evaluation were analyzed. The results indicated that the CQT was a highly valid discriminator. Excluding inconclusives, the decisions of the original examiners were correct 96% of the time, and the independent evaluations were 93% correct. These results suggest that criteria other than confessions can be developed and used reliably. In addition, the validity of the CQT in real-world settings was supported.


You ask how valid these studies show polygraph to be. It is you who have purported that the CQT is not better than chance at distinguishing between truth and deception. Since you have in the past set the terms for rational discourse, I see it as your burden to prove what chance is and which peer-reviewed studies have shown polygraph to perform below the chance level. You would also have to establish scientifically what the chance level is. In a separate message thread you wrote:


Quote:
Re: How Countermeasures are Detected on the Charts
« Reply #52 on: 12/11/01 at 15:51:49 »
In addition, the chance level of accuracy is not necessarily 50/50. It is governed by the base rate of guilt. For example, in screening for espionage, where the base rate of guilt is quite small (less than 1%), an accuracy rate of over 99% could be obtained by ignoring the polygraph charts and arbitrarily declaring all "tested" to be truthful. However, such a methodology would not work better than chance.


This is an estimation of the base rate on your part. One cannot establish an ultimate true base rate for truthful and deceptive subjects in a field setting because it will vary. One cannot control the number of cases that will produce one or the other result in a field setting. A toxicologist cannot say that 50% of his field cases will detect the presence of XYZ, because it will vary. A toxicologist can say that if XYZ is present, then it will be detected, if the test works. Since polygraph measures for deception, polygraph can only produce one of two results, if the test works. Thus, the exam/test will have a 50% chance of producing deception or no deception, if the test works. If the exam/test does not work in either discipline, there is an outside contaminant thwarting the ability of the exam to produce an acceptable result.
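The base-rate point in the quoted passage can be made concrete with a short sketch. The 1% figure comes from the quoted espionage-screening example; the rest is illustrative, not study data.

```python
# Illustration of the base-rate argument quoted above: with a very
# low base rate of guilt, declaring every examinee truthful yields
# high accuracy without examining any charts, yet performs no better
# than chance. The 1% base rate is the quoted espionage example.

def accuracy_if_all_called_truthful(base_rate_guilty):
    # All innocent examinees are classified correctly and all guilty
    # ones incorrectly, so accuracy equals the base rate of innocence.
    return 1 - base_rate_guilty

print(f"{accuracy_if_all_called_truthful(0.01):.0%}")  # 99%
```

This is why the chance level of accuracy is governed by the base rate of guilt rather than being a fixed 50/50.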

As for your second question, no I do not agree that polygraph lacks both standardization and control.  

Standardization:

The instrumentation must meet a standardized criterion. The examiner must meet a specified standard criterion. There is a very standardized process followed in a specific-issue polygraph examination, which is discussed in your book. The examiner must follow the process from beginning to end. This is standardization. The given question formats must contain a standardized number of a given type of question. This is another standardization. The given question format must follow a standardized sequence. The chart tracings must be of a certain standard of quality for acceptable scoring purposes. The scoring must be done with an accepted standardized scoring method and must meet a standardized scoring result to make a decision. The fact is that there are numerous standardized methods within polygraph that prove it to be standardized.

Control:

The examiner is required to conduct the polygraph in a sterile environment that is free of visual and audio distraction. The examiner is required to assess the examinee's medical background to control for outside contaminants that may hinder the ability of the instrument to obtain suitable tracings. The examiner must attempt to control for movement by the examinee, which could likewise hinder the ability of the instrument to obtain suitable tracings. The examiner must conduct an acquaintance exam to control for the possibility of undisclosed medical or physical variants that may contaminate or hinder the ability of the instrument to obtain suitable tracings. Again, polygraph controls for a number of variables and thus does not lack control.

You quite frequently cite Furedy and Lykken as references. It should be noted that both of these individuals have motive to be biased in their opinions toward the CQT even while supporting polygraph. Dr. Furedy has repeatedly condemned the use of polygraph, but only the CQT method. Furedy is a proponent of polygraph when it uses the Guilty Knowledge Test (GKT). The GKT lacks the extensive research, reviews, critical debate, and sheer volume of field use that the CQT has endured and produced over time. Lykken holds the same ideology as Furedy. Their bias may genuinely be for scientific purposes, but I think not. Their motive is more likely synonymous with the old cliché that plagues polygraph: "My question format is better than yours." Why argue so intensely over the issue of which question format to use? Polygraph is attempting to further standardize an already standardized method. Searching for a further standardized format, polygraph looked to the academic community because of its wealth of resources, its ability to formulate experimental designs, and its ability to conduct extensive controlled laboratory research on the methods. In the academic community, those who possess the accepted methods are the ones who get the research grant money to perfect and substantiate their methods. This may be somewhat of a side issue for this discussion. However, if one looks at why polygraph has not been overwhelmingly accepted in the scientific community regardless of its high validity marks, one can see that lack of agreement is the major issue that holds up its overall acceptance. I have spoken with scientists, psychologists, and practitioners of many other scientific disciplines. These people say that polygraph is valid, though not flawless. Again, the recurring theme that hinders polygraph is the lack of agreement amongst the ranks.
The irony is that those who are causing such confusion and thwarting the acceptance of polygraph as a standardized scientific method are the very ones who were sought out to aid in doing just the opposite. Further, these individuals are not even polygraph examiners. "Those who can, do. Those who can't, teach."

Furthermore, Lykken's argument against the presence of standardization and control in polygraph is elusive babble. He says things like, "There are virtually an infinite number of dimensions along which the R [relevant] and the so-called "C" ["control"] items of the CQT could differ. These differences include such dimensions as time (immediate versus distant past), potential penalties (imprisonment and a criminal record versus a bad conscience), and amount of time and attention paid to "developing" the questions (limited versus extensive). Accordingly, no logical inference is possible based on the R versus "C" comparison. For those concerned with the more applied issue of evaluating the accuracy of the CQT procedure, it is the procedure's in-principle lack of standardization that is more critical." He has haphazardly taken terms used in polygraph, thrown them about in a paragraph, imposed his own opinion of their meanings and uses without supporting references, and finally drawn a conclusion that has nothing to do with the preceding statements. The fact is, none of Lykken's gibberish has anything to do with the standardized methods of polygraph or with the physiological data on which its findings are based.

You state:

Quote:

Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.


I can say with some certainty that you have no research or data to support your opinion on this issue. There are no published research studies that have specifically measured the variance you speak of, or that have concluded with data concurring with your ideology.

You then use Dr. Richardson's explanation in an attempt to illustrate a scientific procedure that has been accepted and its basis for acceptance.


Quote:
 http://antipolygraph.org/nas/richardson-transcript.shtml#control

The test Dr. Richardson describes is genuinely standardized and controlled, unlike polygraphy.


You will notice in Drew's explanation that he states, "if the test works". To be fair, maybe all forensic sciences should be held to the standards of validity measurement that are applied to polygraph, inconclusive results included. It is a known fact that even controlled testing for proficiency purposes in accepted scientific practices can often go awry. When this happens, the results can be inconclusive and/or even false. For example, consider a hypothetical scenario: A standardized control sample of urine leaves an accredited proficiency testing company. That sample is known to contain benzoylecgonine. The test is whether the sample contains benzoylecgonine or not. During the shipping process, the cabin of the airplane that carries the samples loses atmospheric pressure. The loss of pressure causes the cabin temperature to plummet to -80 degrees F. When the airplane descends, the cabin pressure and temperature return to normal atmospheric conditions for the region, let us say 70 degrees F for this hypothetical scenario. This change can happen quite quickly, considering the sometimes rapid descent of airplanes. The sample arrives at the lab and is tested. It is found to contain no benzoylecgonine and is reported as such.

Drew also speaks about the test on a known sample to verify that the test and the instrumentation work. A polygraph examiner should be conducting an acquaintance exam/test, as stated in the APA polygraph procedures outline. This exam/test checks the ability of the test to work. If the subject has an autonomic response to the known lie, the test works. If the subject does not have an autonomic response to the known lie, the test does not work. The subject is instructed not to move and to follow specific directions. If the subject does not cooperate and attempts to augment his responses in any way on this non-intrusive exam/test, the subject is intentionally attempting to hide his natural responses. I know of only one reason for someone to augment his or her responses: they are going to attempt to deceive. I believe any reasonable person would come to the same conclusion. Now, if the exam/test works, I have a true physiological response created by a known lie and a true homeostasis or tonic-level measurement contained in the known truth. This data can be used to confirm the remainder of the exam/test data collected.

You referenced, for comparison purposes, phrenology and graphology. Phrenology and graphology measure no known and/or research-proven phenomena. These once experimental methods do not even remotely compare to polygraph's extensive research, documentation of known and proven physiological responses, and proven accuracy. A closer, though still distant, comparison you might have used is questioned documents, since it is a forensic science. From your same source of information, http://www.skepdic.com/graphol.html , the following appears: "Real handwriting experts are known as forensic document examiners, not as graphologists. Forensic (or questioned) document examiners consider loops, dotted "i's" and crossed "t's," letter spacing, slants, heights, ending strokes, etc. They examine handwriting to detect authenticity or forgery." I believe the author of this site accepts questioned documents as a scientific discipline. Polygraph measures known physiological responses of the subject to detect deception. Polygraph has produced more favorable research, standardization, and validity than questioned documents. I know you have read the research study in which polygraph was put head to head against questioned documents and latent fingerprints.

Although it is not my burden to prove validity, I will give an example of how polygraph's tested validity stands up against other accepted science:


Quote:


From http://www.iivs.org/news/3t3.html

STATEMENT ON THE SCIENTIFIC VALIDITY OF THE 3T3 NRU PT TEST (AN in vitro TEST FOR PHOTOTOXIC POTENTIAL)

At its 9th meeting, held on 1-2 October 1997 at the European Centre for the Validation of Alternative Methods (ECVAM), Ispra, Italy, the ECVAM Scientific Advisory Committee (ESAC) unanimously endorsed the following statement:

The results obtained with the 3T3 NRU PT test in the blind trial phase of the EU/COLIPA international validation study on in vitro tests for phototoxic potential were highly reproducible in all the nine laboratories that performed the test, and the correlations between the in vitro data and the in vivo data were very good. The Committee therefore agrees with the conclusion from this formal validation study that the 3T3 NRU PT is a scientifically validated test which is ready to be considered for regulatory acceptance.

[...]

General information about the study:

A. The study was managed by a Management Team consisting of representatives of the European Commission and COLIPA, under the chairmanship of Professor Horst Spielmann (ZEBET, BgVV, Berlin, Germany). The following laboratories participated in the blind trial on the 3T3 NRU PT test: ZEBET (the lead laboratory), Beiersdorf (Hamburg, Germany), University of Nottingham (Nottingham, UK), Henkel (Düsseldorf, Germany), Hoffman-La Roche (Basel, Switzerland), L'Oréal (Aulnay-sous-Bois, France), Procter & Gamble (Cincinnati, USA), Unilever (Sharnbrook, UK), and Warsaw Medical School (Warsaw, Poland).

B. This study began in 1991, as a joint initiative of the European Commission and COLIPA. Phase I of the study (1992-93) was designed as a prevalidation phase, for test selection and test protocol optimisation. Phase II (1994-95) involved a formal validation trial, conducted under blind conditions on 30 test materials which were independently selected, coded and distributed to nine laboratories. The results obtained were submitted to an independent statistician for analysis. Data analysis and preparation of the final report took place during 1996-97.

C. A number of tests at different stages of development were included in the study, but the 3T3 NRU PT test was found to be the one most ready for validation. It is a cytotoxicity test, in which Balb/c mouse embryo-derived cells of the 3T3 cell line are exposed to test chemicals with and without exposure to UVA under carefully defined conditions. Cytotoxicity is measured as inhibition of the capacity of the cell cultures to take up a vital dye, neutral red. The prediction model requires a sufficient increase in toxicity in the presence of UVA for a chemical to be labelled as having phototoxic potential.

D. Two versions of the prediction model were applied by the independent statistician. The photoirritation factor (PIF) version compared two equi-effective concentrations (the IC50 value, defined as the concentration of test chemical which reduces neutral red uptake by 50%) with and without UV light. However, since no IC50 value was obtained for some chemicals in the absence of UVA, another version was devised, based on the Mean Phototoxic Effect (MPE), whereby all parts of the dose-response curves could be compared.

The two versions of the prediction model were applied to classify the phototoxic potentials of the 30 test chemicals on the basis of the in vitro data obtained in the nine laboratories. Comparing these in vitro classifications with the in vivo classifications independently assigned to the chemicals before the blind trial began, the following overall contingency statistics were obtained for the 3T3 NRU PT test:

                         PIF version    MPE version
Specificity:                 90%            93%
Sensitivity:                 82%            84%
Positive predictivity:       96%            96%
Negative predictivity:       64%            73%
Accuracy:                    88%            92%

E.  Other methods in the study included the human keratinocyte NRU PT test, the red blood cell PT test, the SOLATEX PT test, the histidine oxidation test, a protein binding test, the Skin2 ZK1350 PT test, and a complement PT test. The other methods showed varying degrees of promise, e.g. as potential mechanistic tests for certain kinds of phototoxicity, and this will be the subject of further reports.


Considering the above, I would conclude that polygraph has provided more than sufficient overall accuracy data, and has done so over a greater test period in both laboratory and field settings, to prove itself scientifically valid.
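For reference, the contingency statistics quoted for the 3T3 NRU PT test (specificity, sensitivity, positive/negative predictivity, accuracy) are the standard confusion-matrix measures. A minimal sketch of their definitions follows; the counts used are hypothetical illustrations, not data from the ECVAM study.

```python
# Standard definitions of the contingency statistics quoted above.
# The confusion-matrix counts below are hypothetical illustrations,
# not data from the ECVAM validation study.

def contingency_stats(tp, tn, fp, fn):
    return {
        "sensitivity": tp / (tp + fn),                 # true positive rate
        "specificity": tn / (tn + fp),                 # true negative rate
        "positive predictivity": tp / (tp + fp),
        "negative predictivity": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

# Hypothetical counts: 18 true positives, 9 true negatives,
# 1 false positive, 2 false negatives.
for name, value in contingency_stats(18, 9, 1, 2).items():
    print(f"{name}: {value:.0%}")
```

Note that positive and negative predictivity depend on the mix of positive and negative cases in the sample, which is the same base-rate issue raised earlier in this thread.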

Comparison example:


Quote:


From: http://www.polygraphplace.com/docs/acr.htm

In their recent review, Raskin and his colleagues (12) also examined the available field studies of the CQT. They were able to find four field studies (13) that met the above criteria for meaningful field studies of psychophysiological detection of deception tests. The results of the independent evaluations for those studies are illustrated in Table 2. Overall, the independent evaluations of the field studies produce results that are quite similar to the results of the high quality laboratory studies. The average accuracy of field decisions for the CQT was 90.5 percent. (14) However, with the field studies nearly all of the errors made by the CQT were false positive errors. (15)

http://www.polygraphplace.com/docs/AMICUS%20CURIAE%20RE%20THE%20POLYGRAPH%20Draft%20v_%202_1_1_files/IMG00003.gif

aSub-group of subjects confirmed by confession and evidence.

bDecision based only on comparisons to traditional control questions.

cResults from the mean blind rescoring of the cases "verified with maximum certainty" (p.235)

dThese results are from an independent evaluation of the "pure verification" cases.

_______________

Although the high quality field studies indicate a high accuracy rate for the CQT, all of the data represented in Table 2 were derived from independent evaluations of the physiological data. This is a desirable practice from a scientific viewpoint, because it eliminates possible contamination (e.g. knowledge of the case facts, and the overt behaviors of the subject during the examination) in the decisions of the original examiners. However, independent evaluators rarely offer testimony in legal proceedings. It is usually the original examiner who gives testimony. Thus, accuracy rates based on the decisions of independent evaluators may not be the true figure of merit for legal proceedings. Raskin and his colleagues have summarized the data from the original examiners in the studies reported in Table 2, and for two additional studies that are often cited by critics of the CQT. (16) The data for the original examiners are presented in Table 3. These data clearly indicate that the original examiners are even more accurate than the independent evaluators.

http://www.polygraphplace.com/docs/AMICUS%20CURIAE%20RE%20THE%20POLYGRAPH%20Draft%20v_%202_1_1_files/IMG00003.gif

aCases where all questions were confirmed.

bIncludes all cases with some confirmation.


The above comparison supports the assertion that polygraph, when using the CQT, meets standard validity test requirements to be considered a scientific test method.

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 3rd, 2002 at 7:25pm
J.B.,

I believe you have totally missed the point regarding scientific control and what constitutes it in a given instance.  It should not be confused with other issues, nor should its absence in one paradigm be confused with the possibility of its isolated absence in the face of operator error in the case of a discipline which in fact normally reflects principles of scientific control.  With regard to the first--although it is proper and admirable that polygraph examiners calibrate their instruments, there is little question that electrons do flow and pressure gauges can accurately measure pressures in most instances.  Nor is there any serious question that ambulatory individuals who travel to and report for polygraph examinations have at least minimally functioning autonomic systems.  The ANS is required on a daily basis for individual life function and its function as displayed in polygraph examinations is trivial relative to its various life sustaining functions.  And if autonomic function and responsivity were in question, the re-named so-called "acquaintance test" is no serious measure of it, but merely a nomenclature evolution of the parlor game and fraudulent exercise we have all come to know as a "stim test."  To suggest that this is anything more should be embarrassing to one who understands anything about autonomic physiology.  As has been pointed out recently by others, the acquaintance test is actually the first opportunity for the examinee to con the con-man examiner with countermeasure response to the chosen number and feigned amazement at the examiner's mystical deductive powers in pointing out said response(s)...

But on to meaningful scientific control and that which is lacking with control question test polygraphy...

That which will define scientific control in an analysis is the ability of the control to shed light on the various dependent measure recordings of the analyte in question.  In the case of the control question polygraph exam, the analyte in question is the relevant question subject matter; the dependent measures are those measures of physiology recorded, and the scientific control, in theory, is furnished by the control or comparison questions.  THIS IS WHERE THE HEART OF CONTROL LIES AND WHY IT IS COMPLETELY ABSENT IN PROBABLE LIE CONTROL QUESTION POLYGRAPHY.   In order for it to exist, we would need to know something about the emotional content or affect and the relational nature of this affect for chosen relevant and control question pairings within a given exam.  Although polygraphers have speculated about this, there is NO independent measure of this for a relevant/control pair for a given examinee (guilty or innocent) on any given day.  This is not a function of isolated operator/examiner error that you correctly suggest could exist on any given day with any discipline, but is an every day condition and lack of control that exists with polygraphy.  If an innocent examinee is not more concerned with control/comparison questions than relevant questions (i.e. the emotional content/affect of controls is greater than for relevant questions) and this cannot be demonstrated through the process, then any recording of physiological response (dependent variable) and any conclusions drawn are absolutely meaningless with a given exam.  This inability to verify theoretical constructs with a given relevant/control pairing for a given examinee is what leaves control question test polygraphy without scientific control and without any ability to be meaningfully analyzed.  This situation does not exist with the forensic toxicological analysis that you either completely do not understand (hopefully) or intentionally misrepresent.  
The chemical/physical relationship between deuterated-benzoylecgonine (control) and benzoylecgonine (urinary metabolite of cocaine and analyte of interest) is well understood for all of the environments involved in analysis, i.e., tissue, organic and aqueous media, chromatographic packing materials, mass spec source, analyzer, etc.  Because of this one can determine whether an experiment worked and what qualitative and quantitative conclusions can be meaningfully deduced with any dependent variable measurements obtained.  To compare control question test polygraphy to this is, again, a quite embarrassing comparison.  Again, the fact that operator error can compromise the validity of quality control or operational practice with any given toxicological analysis neither makes this (forensic toxicology) uncontrolled under normal circumstances nor vicariously makes control question test polygraphy more of a scientifically controlled practice through any contrived and envious comparisons.  It most assuredly does not.  WE ARE LEFT WITH WHAT WE BEGAN WITH---PROBABLE LIE CONTROL QUESTION TEST (CQT) POLYGRAPHY DOES NOT IN ANY WAY EMBODY PRINCIPLES OF SCIENTIFIC CONTROL...

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 7th, 2002 at 10:09pm
Drew,

Although one may control for some given variables in a particular setting, there is always the chance of uncontrollable variables. I will admit I am not a toxicologist; my knowledge of that discipline is extremely limited in comparison to yours. I am not arguing that toxicology is invalid, nor is that the subject. I did not use or list toxicology as a direct comparison of scientific validity. My reference to toxicology was to show that even validated disciplines can have outside factors that cannot be controlled for in field settings, and that those outside factors may produce an inconclusive and/or false result. I compared questioned documents for scientific validity, and I used documentation of the validity results of the 3T3 NRU PT test for accuracy comparison. This dialog was in direct response to what George wrote in his post prior to mine.


Quote:


2) Do you agree that because CQT polygraphy lacks both standardization and control, it can have no validity? If not, why?……

Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures.



In reading this, I deduced that George was referring to control of variables. George must prove, with substantiated evidence from comparable scientific disciplines, that polygraph is not scientifically valid. They are his assertions, and this discussion is based on them and on the past rules of discourse he has used.

I wrote, “This exam/test checks for the ability of the test to work. If the subject has an autonomic response to the known lie, the test works. If the subject does not have an autonomic response to the known lie, the test does not work.”  I did not say that the purpose of the stim/acquaintance test was to measure the ability of the ANS to work. A positive control test simply takes a known sample of a suspected unknown and tests it simultaneously with the unknown.

For example:


Quote:


From ‘The Methods of Attacking Scientific Evidence’ by Edward J. Imwinkelried, 1982, Pg. 421-422

12-5(B).  Positive Control Test.

Control tests are vital in drug identification (14) and serological (15) testing.  Suppose that the analyst suspects that the unknown is marijuana.  At the same time that the analyst tests the unknown, she would subject marijuana to the identical test – the known is the control or reference sample. (16)  By simultaneously testing the unknown and known samples, the analyst can compare the test results side by side.  Drug identification experts almost unanimously agree that the use of controls is vital to the credibility of drug analysis evidence. (17) Experts on blood group typing also feel that controls are needed in blood, semen, and saliva analysis. (18) ……

14. Bradford, “Credibility of Drug Analysis Evidence,” Trial, May/June 1975, at 90.
15. Wraxall, “Forensic Serology,” in Scientific and Expert Evidence 897, 907 (2d ed. 1981).
16. Bradford, “Credibility of Drug Analysis Evidence,” Trial, May/June 1975, at 90.
17. Id.
18. Wraxall, “Forensic Serology,” in Scientific and Expert Evidence 897, 907 (2d ed. 1981).



You reference the ANS. Although the ANS is regularly used to sustain life, the specific deceptive ANS response measured in a polygraph is not regularly used for continual life-sustaining purposes. I agree with you that there are other reasons, which you have alluded to, that define others’ explanations for the use and existence of the stim/acquaintance test/exam. A stim/acquaintance test/exam is a Known Solution Peak of Tension Test. Polygraph examiner training material reads as follows in reference to the stim/acquaintance test: “Correlate outcome to the polygraph examination.” Given my explanation of a positive control test, and that of the supporting literature above, do you agree or disagree that the stim/acquaintance test/exam is a positive control test?


Quote:


From: http://www.scientificexploration.org/jse/abstracts/v1n2a2.html

What Do We Mean by "Scientific?"

Henry H. Bauer, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061

There exists no simple and satisfactory definition of "science." Such terms as "scientific" are used for rhetorical effect rather than with descriptive accuracy. The virtues associated with science — reliability, for instance — stem from the functioning of the scientific community.



When referring to scientific validity, one can reference many instances where a science was discredited by the majority of scientists, and thus not accepted, only to be proven true and accepted at a later date without any addition or change to the theory.  The reverse of this process has also happened.  So “scientific validity” is in itself a highly subjective judgment, directly dependent on the opinions of the current majority of scientists in the related discipline.  A scientific process can be accurate and its theory sound, but absent general acceptance it may be considered invalid.  It is the test of scientific acceptability that defines whether a theory or practice is accepted.  

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Mar 7th, 2002 at 11:42pm
J.B.,

In your post of 3 March, you wrote in part:


Quote:
Your argument is that polygraph is "not better then chance".


J.B., my argument is not that polygraphy is "not better than chance," but that it has not been proven by peer-reviewed research to work better than chance under field conditions.

And in your post of 7 March (today) you write:


Quote:
George must prove, with substantiated evidence of comparable scientific disciplines, that polygraph is not scientifically valid.  It is his assertions and this discussion is based on those and the past rules of discourse he has used.


No, J.B. The burden of proof rests with you (and other polygraph proponents) if you would have us believe that CQT polygraphy is a valid diagnostic technique. Respectfully, I don't think you've met that burden. Not even close.



Title: Re: The Scientific Validity of Polygraph
Post by beech trees on Mar 8th, 2002 at 12:48am

J.B. McCloughan wrote on Mar 7th, 2002 at 10:09pm:

George must prove, with substantiated evidence of comparable scientific disciplines, that polygraph is not scientifically valid.  It is his assertions and this discussion is based on those and the past rules of discourse he has used


Nope, no sir. No way. You cannot prove a negative; it is a logical impossibility. The burden of proof rests squarely on your shoulders. Thus far, I'm not convinced.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 8th, 2002 at 8:06am
George,

This thread was started because of a direct statement that you had made about polygraph.  You said, "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. Moreover, since CQT polygraph lacks both standardization and control, it can have no validity."

You have in no way supported this assertion.  Neither Lykken nor any other opponent of CQT polygraph has proven your assumption.  There is no current statistical data in field or laboratory peer-reviewed research studies purporting that polygraph is no better than chance at differentiating between truth and deception.  I have supported this by illustrating some current and past peer-reviewed studies, all with higher than chance validity rates, the more current with validity rates equal to or better than those of some accepted scientific disciplines.

You and those you reference write of the lack of scientific control and standardization, yet there is no support for these assumptions.  They are simply unsupported statements.  Lykken does have the afforded luxury of being renowned in his field; thus his assertions are revered by the followers of his ideology (GKT).  His arguments only aid in slowing general acceptance and do nothing to disprove polygraph as a scientifically valid discipline.

You do not have the afforded luxuries that Lykken does.  Your formal education is not in a related or even semi-related field.  For you to make unsupported statements without the credentials or peer-reviewed research to back them is nothing more than a lay assumption or a repeat of Lykken's meaningless rhetoric.

I have shown examples of scientific control, standardization, and validity.

I have reviewed my comparisons with scientists of other accepted disciplines and they believe my explanations are sound scientifically and support my assertions.

I again ask you:

1) What about the current peer-reviewed field research shows polygraph to be no better than chance accuracy, and what is the current accuracy rate?

2) What control does polygraph lack?

3) What standardization does polygraph lack?

If you cannot prove your assertion, then please retract it and state that which you can support with hard evidence.

beech trees,

This debate is in reference to an assertion made by George.  He has in the past set the rules for rational discourse and placed the burden of proof on he who makes the assertion.  Thus, the burden of proof is his.  

This is not a scientific review of polygraph for official acceptance.  If that were the case, I would agree with you that the presenter of evidence for a proposed science would bear the burden of proving it true.  There are very few on this site who have the credentials to carry out this type of formal debate, and it would have to be done in a proper forum for acceptance.

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 8th, 2002 at 4:16pm
J.B.


Quote:
...although one may control for some given variants in a particular setting, there is always the chance of uncontrollable variants...


True, but irrelevant to the fact that probable-lie control question test (CQT) polygraphy has no, nada, zilch scientific control on ANY day, uncontrollable variants notwithstanding.  This is because the emotional content of relevant/control question pairings is NEVER known a priori for a given examinee with a given examination.  This makes any conclusions drawn from physiological recordings meaningless and sheer speculation on each and every occasion/outing.



Quote:
...I will admit I am not a toxicologist.  My knowledge of this discipline is extremely limited in comparison to yours.  I am not arguing that toxicology is invalid nor is that the subject...


Again, all likely true, and although containing an appreciated and flattering admission, all irrelevant to the issue at hand...

The theoretical nature of control/comparison questions (emotional content/affect in relation to paired relevant question material) cannot be verified on ANY, and again I repeat ANY, given occasion, making their use, if not purposeless, at least devoid of scientific control in any formal and recognized context.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 8th, 2002 at 10:44pm
Drew,


Quote:

…the fact that probable-lie control question test (CQT) polygraphy has no, nada, zilch scientific control on ANY day…


Do you agree that the POT/Known Solution Test can be considered as a Positive Control Test or not?

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 8th, 2002 at 11:16pm
J.B.,

In order to answer your last posted question, I am afraid I must seek clarification.  I believe that you are asking me if I consider the stim/acquaintance test a positive control for subsequently administered probable lie control question tests.  If not, please correct me, and I will answer your intended question.  But to this one...

No, I don't---if the stim/acquaintance test were in fact a probable lie CQT, at best, you would have an external control situation, a much weaker form of control than the internal positive control we have discussed with a forensic toxicological analysis, but in fact, even this weaker form of control does not exist...

The so-called stim test is really not a probable-lie CQT or any other test for deception and therefore offers no form of control, external or internal.  The reason is that, although you can instruct the examinee to answer "no" to the chosen number (and therefore lie), you can also have him or her answer "yes" to that same number, or provide no answer at all (i.e., a silent test), and obtain exactly the same result/same response.  In other words, the lie is irrelevant to the stim test; the stim test is really a form of concealed information test in which the examinee is merely responding to something of significance to himself, significance derived from the fact that the number was recently chosen by the examinee.  Although, as I have indicated before, I have great disdain for how a stim/acquaintance test is used in a polygraph setting, I actually believe the format, apart from that setting, to be quite useful and a narrowly defined/controlled vehicle for studying physiological change.  But again, in answer to the question that I believe I was asked, it (a stim test) has absolutely nothing to do with providing control to a probable-lie control question test.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 8th, 2002 at 11:57pm
Drew,

You have, I think, answered my question.  Regardless of whether the polygraph format is CQT, GKT, or R&I, the stim/acquaintance test (otherwise known as the POT/Known Solution Test) is a positive control test.  

As for Comparison/Control Questions, I believe that these will fall under the same definition of standard tests.

So now we have a positive control and a standard test being used in a CQT polygraph, the same tests that are used in other accepted scientific disciplines.

I will be traveling for the weekend, so I may not respond to you further for a couple of days.

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 9th, 2002 at 12:07am
J.B.,

I am glad I responded to your question; unfortunately, you do not appear to have read my answer.  I really don't want to be flippant with you, but you appear to have no knowledge of the terms and practices that you associate in your writings.  But nevertheless, do enjoy your weekend and we can continue anew next week...

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 12th, 2002 at 6:00pm
Drew,

In looking at my last post, I can see that one of the terms was used without explanation and could have been misconstrued.  Standards are samples of known identity to which unknowns are compared for identification.  To determine that a method is working correctly, one must use appropriate controls and standards.  One may use quantitative controls (called blanks), blind controls, and/or internal controls.  These controls are used to assure a reproducible and accurate method by which an acceptable value or range of values is established.  Irrelevant questions are blanks.  Control/comparison questions are suspected known samples that can be established against the known sample from a stimulation/acquaintance test.  The relevant questions are unknown samples that are compared with the other test data to establish their degree within the range of values.  If the degree is consistently greater in the relevant questions, as set by a numerical scoring criterion, then deception is shown.
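The comparison logic described above (relevant responses scored against comparison responses, with a numerical criterion deciding the outcome) can be sketched abstractly in Python. The pairwise scoring rule, the cutoffs, and the DI/NDI/INC labels below are illustrative assumptions only, not any actual polygraph scoring system:

```python
def score_chart(relevant, comparison, criterion=-3):
    """Toy numerical scoring sketch (not a real polygraph algorithm).

    Each relevant/comparison pair is scored -1 if the relevant response
    magnitude is greater, +1 if the comparison response is greater, and
    0 if equal.  A total at or below `criterion` is labeled 'DI'
    (deception indicated), a total at or above -criterion is 'NDI'
    (no deception indicated), and anything between is 'INC' (inconclusive).
    """
    total = 0
    for r, c in zip(relevant, comparison):
        if r > c:
            total -= 1
        elif c > r:
            total += 1
    if total <= criterion:
        return 'DI'
    if total >= -criterion:
        return 'NDI'
    return 'INC'


# Hypothetical response magnitudes for three question pairs
print(score_chart([8, 9, 7], [3, 4, 2]))   # relevant consistently greater -> 'DI'
print(score_chart([2, 3, 1], [6, 5, 7]))   # comparison consistently greater -> 'NDI'
```

The sketch only illustrates the structure of the argument being made: a decision rule that compares two classes of responses against a fixed numerical threshold.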

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 12th, 2002 at 11:40pm
J.B.,

There is not the slightest bit of scientific control furnished through the utilization of comparison questions with a CQT.  In order for there to be, there would need to be some CLEARLY DEFINED, CONSISTENT AND READILY DEMONSTRABLE relationship between the affect of the two types of stimuli.  NO SUCH RELATIONSHIP EXISTS.  Furthermore, the comparison questions of a CQT have no relationship to the alternative (or correct) answers of a stim/acquaintance test.  As I previously pointed out, the latter is merely a concealed information test (at best), whereas the former is suggested by its proponents as having some relationship to detection of deception.  Your comparison of apples and oranges and the conclusions drawn from it are most perplexing and somewhat troubling…

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 15th, 2002 at 7:20pm
Drew,

You wrote:
(1)

Quote:

There is not the slightest bit of scientific control furnished through the utilization of comparison questions with a CQT.  In order for there to be, there would need be some clear and demonstrable relationship between the affect of the two types of stimuli. NO SUCH RELATIONSHIP EXISTS.


Although other stimuli may be present, the common and main stimulus shared by the comparison and relevant questions is deception.  It is the deception that causes the release of hormones from the adrenal medulla.  The greater the stimulus, the greater the release.  Thus, if the stimulus for the comparison questions is equal to that of the relevant questions, there is a valid comparison to be made.  Whatever emotion is elicited with the deception is a conditioned response and secondary to the main stimulus.  The secondary conditioned response will be readily consistent for a given person based on psychosocial conditioning.

Support for statement:

Quote:


From ‘Social Psychology’ Sixth Edition, by Lindesmith, Strauss, and Denzin, pg. 98-99:

…it is difficult to conceive of an experience that is purely emotional or an emotion that is purely physiological.  Apart from the difficulties inherent in the idea of a purely physiological experience, Skinner (1953, pp. 161-62) has observed that the scientific study of emotional behavior which is based on the idea that each emotion has its own characteristic pattern of emotional response offers a far less reliable basis for identifying emotion than does common sense.

Our discussion suggests that emotion should be viewed as an aspect of certain types of behavior rather than as a distinct form of behavior itself.  The specifically emotional portion of behavior is elicited by the relationship of the emotion-provoking situation to the values of the person as seen by that person.



(2)

Quote:

Furthermore, the comparison questions of a CQT have no relationship to the alternative (or correct) answers of a stim/acquaintance test.


The (correct) answers on the stim/acquaintance test are quantitative controls or blanks.  They are simply used to establish the homeostasis or tonic level of a given subject.  The incorrect/deceptive response to the known lie on the stim/acquaintance test can be used for direct comparison with the responses to the comparison/control questions on the CQT to confirm deception.

(3)  

Quote:

As I previously pointed out, the latter is merely a concealed information test (at best) whereas the former is suggested by its proponents as having some relationship to detection of deception.


I have already posted a direct quote from polygraph training material.  
Quote:
Polygraph examiner training material reads as follows in reference to the stim/acquaintance, “Correlate outcome to the polygraph examination.”
 This material was written and taught by the same organization that trained you.

In a previous post you wrote:

Quote:

Although, as I have indicated before, I have great disdain for how a stim/acquaintance test is used in a polygraph setting, I actually believe the format, apart from that setting, to be a quite useful and a narrowly defined/controlled vehicle for studying physiological change.


In my opinion this and the previously quoted statement are contradictory in nature.

Your comment about my comparing apples and oranges is unspecified.  If you are referring to my definitions of the different scientific controls, then I would argue it is you who is perplexed.  I have discussed these terms, their definitions, and their relationship to the noted portions of polygraph with other scientists within accepted fields, and they concur with me.  It is not my burden to get you to agree, nor even my burden to prove anything.  Scientific acceptance is for the most part general and not specific to an individual.  As I have stated before, the scientific acceptance of any given method is largely subjective and a matter of opinion.

We are wandering farther and farther off the course of this debate.  I do recall it being ‘MY’ burden in the last debate on CMs due to my assertions.  This debate is based on George’s assertions.  To date he has provided no peer-reviewed scientific research, either field or laboratory, that proves CQT polygraph does or does not “…differentiate between truth and deception at better than chance levels of accuracy under field conditions.”

What is the current overall accuracy rate of CQT polygraph shown by peer-reviewed field and laboratory research?

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 15th, 2002 at 10:45pm
J.B.


Quote:
...Although other stimuli may be present, the common and main stimulus that exists between the comparisons and relevant questions is deception.  It is the deception that causes the release of hormones from the adrenal medulla.  The greater the stimuli the greater the release...


Balderdash!!!!!

As a former US President was fond of saying, "There you go again..."  There is not the slightest bit of evidence of such a thing.  Even your more serious colleagues in the world of polygraphy don't claim a "lie response," let alone one uniquely manifested at the level of the adrenal medulla.  Remember, my friend, you are talking to a toxicologist.  Please show me anywhere in the literature where therapeutic monitoring of blood levels of norepinephrine and/or epinephrine has been performed in connection with deception in a CQT, let alone correlated with deceptive responses to control and relevant questions.  As utterly ridiculous and unsupported as this hypothesis is, it totally ignores the sympathetic cholinergic (acetylcholine) electrodermal responses that have nothing to do with the adrenal medulla.  It furthermore ignores the timing of the onset of response (seconds), which is consistent with neuronal input (neurotransmitters), not organ bathing over minutes with blood-borne adrenergic hormones, which at best contribute to the duration of cardiovascular responses (and again, even this physical phenomenon has never been shown to correlate in any fashion with deception, isolated from God knows how many other factors involved with the asking of questions in a CQT).  This is nonsensical beyond all reason and not worthy of comment, save to eliminate confusion for the most naive who visit this site.....

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 16th, 2002 at 5:24pm
Drew,

This should not be a battle of brash words but a debate conducted in a professional manner.  I do not appreciate your unwarranted satirical remarks.  My statements are supported within professional scientific texts.  My prior post used a social psychology reference to support my statement about emotions.  Here are some supporting references for the physiological portions of my statement.


Quote:


Anatomica 2001, pg. 64:

The adrenal medulla is derived from neural (nerve) tissue and is concerned with the production and secretion of epinephrine (adrenaline) and nor-epinephrine (nor-adrenaline).  These hormones can cause increased heart rate, widening of the airways, and breakdown of glycogen to glucose for energy.  All of these make the body more equipped to handle emergency situations.





Quote:


Essentials of Anatomy and Physiology, by Seeley, Stephens, and Tate, pg. 259-260:

The principal hormone released from the adrenal medulla is epinephrine, or adrenaline, but small amounts of norepinephrine are also released.  The epinephrine and norepinephrine are released in response to stimulation by the sympathetic nervous system, which becomes most active when a person is physically excited (Figure 10-8).  Epinephrine and norepinephrine are referred to as the fight-or-flight hormones because they prepare the body for vigorous physical activity.  




Quote:


http://www.hhpub.com/journals/jop/1998/abstv12i4.html

Journal of Psychophysiology Volume 12, No. 4, 1998

The relationship between heart rate and blood pressure reactivity in the laboratory and in the field: Evidence using continuous measures of blood pressure, heart rate and physical activity
by Anita Jain (1), Thomas F. H. Schmidt (2), Derek W. Johnston (3), Georg Brabant(4), and Alexander von zur Mühlen (4)
(1) Department of Psychology, University of Cologne, Germany
(2) Preventive and Behavioral Medicine, Department of Epidemiology and Social Medicine, Hanover Medical University, Germany
(3) School of Psychology, University of St Andrews, Scotland
(4) Department of Endocrinology, Hanover Medical University, Germany

The relationship between cardiovascular reactivity in the laboratory and in everyday life has been under discussion for many years. Manuck and Krantz (1984) and Light (1987) proposed three models of how laboratory reactivity could relate to real life reactions (recurrent activation, prevailing state and combined model). The aim of the present study was to test the relationship of cardiovascular reactivity in the laboratory and in the field using continuous measures of blood pressure and heart rate as well as physical activity and posture. Seventeen high and low laboratory rate pressure product (RPP) reactors were selected from a sample of 50. Continuous finger blood pressure and heart rate (HR) were measured noninvasively with PORTAPRES for 22 hours in everyday life together with continuous measures of thigh EMG, arm movement and posture. Adrenaline, noradrenaline, cortisol, and dopamine urinary excretion rates were determined for the same period. As predicted, high laboratory reactors showed higher daytime variability of their RPP after eliminating the effects of serial dependency and they also showed larger responses to stressful situations in everyday life. Similar, but less pronounced effects were seen for HR. High reactors also had higher daytime diastolic blood pressure (DBP) levels. In systolic blood pressure no group differences were seen. High reactors also showed higher urinary adrenaline and noradrenaline excretion rates during the day. In this study, different cardiovascular variables follow different models for the relationship between laboratory and field reactivity. For RPP and HR the "recurrent activation model" is supported. DBP may follow the "prevailing state model." Endocrine sympathetic mechanisms appear to be involved in individual cardiovascular reactivity differences.




Quote:


http://www.jphysiol.org/cgi/content/abstract/250/3/633?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=adrenaline&searchid=1016251543786_2662&stored_search=&F
IRSTINDEX=40&journalcode=jphysiol

The Journal of Physiology, Vol 250, Issue 3 633-649, Copyright © 1975 by The Physiological Society

RESEARCH PAPERS
Sweat gland function in isolated perfused skin

KG Johnson

1.  A technique for perfusion of skin has been used to investigate a possible neurochemical basis for the different patterns of sweating in domestic animals. Evaporative water loss was measured from excised trunk skin, ears or tails perfused with a nutrient Krebs solution, to which drugs were added as required. Perfused skin was observed to sweat in response to administration of sudorific drugs, and some features of the patterns of sweating were similar to those which could be induced by heating or by drugs in conscious animals. 2. In sheep and goat skin, injections of adrenaline, and to a lesser extent of noradrenaline, elicited brief sweat discharges but these were not sustained when the drugs were infused during 10-20 min. Injections of isoprenaline, carbachol, 5-HT, bradykinin, oxytocin and histamine were all ineffective. 3. Injections of adrenaline into cattle skin evoked longer-lasting sweat discharges, and infusions of adrenaline elicited continuous discharges. Injections of noradrenaline and sometimes of bradykinin caused only brief sweat discharges; other drugs were ineffective. 4. In horse and donkey skin, injections or infusions of noradrenaline, oxytocin and bradykinin elicited brief discharges of sweat. Infusions of isoprenaline caused a continuous and profuse outflow of sweat. Infusions of adrenaline also caused a continuous discharge which was usually biphasic in its onset. Other drugs were ineffective. 5. Assuming that the brief sweat discharges are due to myoepithelial contractions and the continuous discharges to sustained increases in secretion, equine sweat glands seem to have an alpha-adrenergically controlled myoepithelium and a beta-adrenergically controlled secretory mechanism. Sheep and goats may have a similar alpha-adrenergic control of the sweat gland myoepithelium but only a feeble sweat secretory mechanism. In cattle, an alpha-adrenergic mechanism appears to control sweat secretion, but the control of the myoepithelium is uncertain.




Quote:


Essentials of Anatomy and Physiology, by Seeley, Stephens, and Tate, pg. 99:

Emotional sweating is used in lie detector (polygraph) tests because sweat gland activity usually increases when a person tells a lie.




Quote:


Anatomica 2001, pg. 687:

The eccrine sweat glands are distributed over the body, except on the lips and some parts of the genital regions… They secrete large quantities of sweat, which cools the body by evaporation.  The sweat glands are activated when the body becomes overheated (due to environmental conditions or exercise), and occasionally by emotions such as fear ("cold sweat").




(Note: this post was edited by the AntiPolygraph.org administrator to correct a coding problem that affected display of the message thread. No changes were made to the words posted.)

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 16th, 2002 at 6:12pm
J.B.

My goal is not to hurt your feelings through sarcasm, but to clearly point out wild leaps of faith on your part, as evidenced by your quantum jumps from explanations of reasonably well-understood physiology to your postulates about control question test (CQT) polygraphy.  I suppose my language is a reflection of the need to continue this after several such exchanges.  Perhaps you can point out to me where deception/detection of deception is discussed in any of that which you have quoted.  With the exception of the Seeley et al. quote (idle speculative commentary, a secondary source, with no reference to the scientific literature), I see none.  Unless you can, it is completely irrelevant to our discussions (as it would be even if you had copied a complete physiology text, if unrelated to deception through references to the peer-reviewed literature) and simply more evidence of a lack of critical thinking.... sorry, but there lies the truth.  It is not I who stated categorically that adrenergic hormone release was directly and proportionately related to deception, but you.  Where's the proof?  Absolutely none of that which you have offered in your most recent post is evidence of that... if you are going to idly speculate about such things, so be it, but please distinguish such and identify it for the reader, and also realize that you have offered nothing whatsoever to indicate that comparison questions in a CQT offer any form of scientific control (I believe the original issue we were discussing).

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Mar 31st, 2002 at 7:59am
Drew,

First off, my feelings have nothing to do with my last post.  Hearing bothersome language and being called names not on my birth certificate are common occurrences in my line of work.  My point was that we are professionals and we should keep the dialog as such.

Some of the cites in my last post were directed toward your assertion that;


Quote:


As utterly ridiculous and unsupported as this hypothesis is, it totally ignores the sympathetic cholinergic (acetylcholine) electrodermal responses that have nothing to do with the adrenal medulla.



So I quoted to that;


Quote:

From:

http://www.jphysiol.org/cgi/content/abstract/250/3/633?maxtoshow=&HITS=10&hits=10&RESULTFORMAT=&fulltext=adrenaline&searchid=1016251543786_2662&stored_search=&F
IRSTINDEX=40&journalcode=jphysiol

The Journal of Physiology, Vol 250, Issue 3 633-649, Copyright © 1975 by The Physiological Society

RESEARCH PAPERS
Sweat gland function in isolated perfused skin

KG Johnson

…
Injections of adrenaline into cattle skin evoked longer-lasting sweat discharges, and infusions of adrenaline elicited continuous discharges. Injections of noradrenaline and sometimes of bradykinin caused only brief sweat discharges…



Although acetylcholine (ACh) is the pre-ganglionic neurotransmitter for both the sympathetic and the parasympathetic divisions of the autonomic nervous system, the post-ganglionic neurotransmitters are different.  Norepinephrine (NE) is the post-ganglionic neurotransmitter for the sympathetic division, which is used for emergency response.  Most organs receive both sympathetic and parasympathetic innervation.  There are three exceptions: 1. The blood vessels are only sympathetically innervated.  2. The sweat glands are only sympathetically innervated, with ACh as the neurotransmitter.  3. The adrenal glands are sympathetically innervated, with ACh as the neurotransmitter.  I am assuming that this is what you were referring to.  If so, I would agree with your last statement as regards the described neurological portion of a response.  There are other factors you neglected to discuss, however, such as the hormonally induced ones.  I don't think you were suggesting that neurological functions cannot be affected by hormones.

For example:


Quote:


From: http://endo.endojournals.org/cgi/content/full/138/12/5597?maxtoshow=&HITS=&hits=&RESULTFORMAT=&titleabstract=%22sweat%22&searchid=1016823905919_160&stored_searc
h=&FIRSTINDEX=0&journalcode=endo

Endocrinology Vol. 138, No. 12 5597-5604
Copyright © 1997 by The Endocrine Society

ARTICLES
Expression of Adrenomedullin and Its Receptor in Normal and Malignant Human Skin: A Potential Pluripotent Role in the Integument
Alfredo Martínez, Theodore H. Elsasser, Carlos Muro-Cacho, Terry W. Moody, Mae Jean Miller, Charles J. Macri and Frank Cuttitta

Detection of AM in sweat
The presence of AM immunoreactivity in the sweat glands (Figs. 4 and 5) suggested that the peptide may be secreted into the sweat, and to test this hypothesis, we performed RIA in sweat samples and compared the values obtained with AM levels in blood serum (Fig. 13). Surprisingly, the values obtained for AM in the sweat were very variable (87.93 ± 88.48 fmol/ml) but, in general, were much higher than the values obtained in the blood samples (16.83 ± 2.52 fmol/ml). These data confirm that AM is secreted into the sweat in large amounts. The variation in AM levels may reflect differences in exertion or in sweat secretion rates.




Quote:


Perhaps you can point out to me where deception/detection of deception is discussed in any of that which you have quoted.  With the exception of the Seely et al quote (idle speculative commentary (secondary source) with no reference to the scientific literature), I see none.  Unless you can, it is completely irrelevant (and would be if you had downloaded a complete physiology text if unrelated to deception through references to the peer reviewed literature) to our discussions and simply more evidence of a lack of critical thinking....



I own the books I quote; they are not downloaded.  I use web-based information because it is readily accessible to anyone who wishes to check my information for accuracy.  I could use full-text material I own, but most readers could not check the accuracy of statements against those sources.  Seely is a well-respected figure within his field, and I dare say has more knowledge of anatomy and physiology than both you and I combined.  Deception is a broad term and can be associated with much of the literature available.  In an earlier post I cited a book entitled "Social Psychology", which I own, and the quoted text puts the idea of deception into context for our discussion.


Quote:


It is not I who stated categorically that adrenergic hormone release was directly and proportionately related to deception, but you.  Where's the proof.  Absolutely none of that which you have offered in your most recent post is evidence of that...if you are going to idly speculate about such things, so be it, but please distinguish such and identify for the reader and also realize that you have offered nothing whatsoever to indicate that comparison questions in a CQT offer any form of scientific control. (I believe the original issue we were discussing)



Is the intention of your above statement to suggest that the fight-or-flight syndrome has nothing to do with polygraph?  Are you saying that the sum of stimuli is not proportionately related to the response?  Again, deception is a broad-based term that covers many facets.  As for your reference to scientific control, I have given you definitions of scientific controls taken from other accepted scientific disciplines and shown how the CQT uses them.  I am not here to argue which is the better question format, CQT vs. GKT.  I believe they both have utility and are valid when used in a proper setting.  I have already made known my thoughts as to the use of CQT in a pre-employment screening setting.

You are correct in that the point of this debate is amiss.  George made the assertions that this debate was based on.  He has purported that the CQT has not been shown to be better than chance in peer-reviewed field research.  This debate has meandered off course because he has changed the subject and passed the burden without ever first proving his assertions.  In a separate thread I wrote:


Quote:


Again you skirt the issue.  There are accepted peer-reviewed field research studies on CQT polygraph, and there is a current accuracy rate established by those studies.  The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis.  It has to do with the squabbling between ideological camps as to whose question format is better.   Your reference to an interrogator's ability to render an opinion on truthfulness has nothing to do with CQT polygraph.



George then replied, in part, with the following:


Quote:


That CQT polygraphy is not unanimously supported has everything to do with its lack of an established (or establishable) accuracy rate and it's lack of grounding in the scientific method.



I think this is what I have been saying all along.  CQT polygraph used for specific criminal issues is highly accurate and is scientific.  However, some want GKT instead of CQT, so they press for its unacceptability and in the course find their own cause in the same disarray, because it relies heavily on many of the same core concepts.  If GKT proponents and CQT proponents would simply agree that both methods have utility and are valid, then we would most likely have two scientifically accepted formats.  More importantly, I can hardly imagine the force the combined effort would have in steering polygraph.  Still, George, you, and I all know that CQT is shown to be better than chance in the currently accepted peer-reviewed field research studies.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Mar 31st, 2002 at 1:43pm
J.B.,

You wrote in part:


Quote:
George made the assertions that this debate was based on.  He has purported that the CQT has not been shown to be better than chance in peer-reviewed field research.  This debate has meandered off course because he has changed the subject and passed the burden without ever first proving his assertions.


Where did I change the subject? I am not aware that I did so.

If the polygraph community would have the rest of us believe that CQT polygraphy is a genuinely standardized and controlled diagnostic test that works better than chance under field conditions, then it must shoulder the burden of proving it.

As we noted in The Lie Behind the Lie Detector, there are only four field studies of CQT validity that have been published in peer-reviewed scientific journals, and they haven't met the burden of proving CQT polygraphy to work better than chance. (Note that this is not the same as saying that polygraphy has been proven not to work better than chance.)

You suggest that the reason CQT polygraphy has not been unanimously accepted by the scientific community is attributable to squabbling over whose format is better (CQT vs. GKT). I suggest a different explanation: a dearth of competent research establishing its validity. With regard to the scientific community's acceptance of CQT polygraphy, I would again remind you of Iacono & Lykken's survey, which is discussed at p. 22 of the 2nd ed. of The Lie Behind the Lie Detector:


Quote:
In 1994, William G. Iacono and David T. Lykken conducted a survey of opinion of members of the Society for Psychophysiological Research (SPR) (Iacono & Lykken, 1997). Members of this scholarly organization constitute the relevant scientific community for the evaluation of the validity of polygraphic lie detection. Members of the SPR were asked, “Would you say that the CQT is based on scientifically sound psychological principles or theory?” Of the 84% of the 183 respondents with an opinion, only 36% agreed.

Moreover, SPR members were asked whether they agreed with the statement, “The CQT can be beaten by augmenting one’s response to the control questions.” Of the 96% of survey respondents with an opinion, 99% agreed that polygraph “tests” can be beaten.


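As an aside for readers, the quoted percentages can be turned back into approximate head-counts with simple arithmetic. This is a back-of-the-envelope sketch; the rounding is mine, and the published paper reports the exact figures:

```python
# Approximate head-counts implied by the quoted survey percentages.
# These are rounded reconstructions for illustration, not figures
# taken from Iacono & Lykken's paper itself.
respondents = 183
with_opinion = round(respondents * 0.84)   # 84% of 183 respondents had an opinion
agreed_sound = round(with_opinion * 0.36)  # 36% of those agreed the CQT is sound

print(with_opinion)   # -> 154
print(agreed_sound)   # -> 55
```

In other words, roughly 55 of the 183 surveyed psychophysiologists agreed that the CQT is based on scientifically sound principles.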
And as for standardization and control, I think you've failed to understand both concepts, as is amply illustrated by your exchange with Drew above and before that, by your dismissal (in your post of 3 March) of Furedy's critique (which you clearly did not understand and mistakenly attributed to Lykken) as "elusive babble."

If you would have us believe that CQT polygraphy has been proven by peer-reviewed research to differentiate between truth and deception at better than chance levels, then among other things, you ought to be able to:

1) tell us what the diagnostic sensitivity and specificity of CQT polygraphy is for the detection of deception;

2) cite the peer-reviewed research that establishes such sensitivity and specificity;

3) refer us to the standardized protocol for the CQT that was used in this research;

4) explain how variables such as whether the subject understands how truth vs. deception is actually inferred in CQT polygraphy, and whether the subject employed countermeasures, were controlled for.
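For readers unfamiliar with the diagnostic terminology in point 1, sensitivity and specificity are standard confusion-matrix quantities. A minimal sketch, using made-up counts that are not drawn from any polygraph study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Compute diagnostic sensitivity and specificity.

    tp: deceptive subjects correctly flagged (true positives)
    fn: deceptive subjects missed (false negatives)
    tn: truthful subjects correctly cleared (true negatives)
    fp: truthful subjects falsely flagged (false positives)
    """
    sensitivity = tp / (tp + fn)  # proportion of deceptive subjects detected
    specificity = tn / (tn + fp)  # proportion of truthful subjects cleared
    return sensitivity, specificity

# Purely hypothetical counts, for illustration only:
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=40, fp=10)
print(sens, spec)  # -> 0.9 0.8
```

The point of the question is that no such pair of numbers can be quoted for the CQT from the peer-reviewed field literature.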

Title: Re: The Scientific Validity of Polygraph
Post by Drew Richardson on Mar 31st, 2002 at 2:29pm
J.B.,

The glory of medical physiology does not and will not cover the sins and shortcomings of control question polygraphy.  Your use of the former in an attempt to support the latter through wild assertion and speculation will not fly.  Please do not waste my time or that of other readers with anything less than citations from the peer reviewed physiological literature with specific reference to control question test polygraphy if you would have me evaluate and draw conclusions about one based on the other…

With regard to teaming up with CQT polygraphists, perhaps, but not as you suggest.  I will never seek to garner support for the meaningful (e.g., concealed information testing) by generally associating myself with the unsound, unsupported, uncontrolled, and unspecifiable behavior we now know as control question test (CQT) polygraphy.  The only faint praise I can presently offer practitioners of such in a criminal specific-issue setting is that your practice is theoretically more sound than that of your colleagues who use it for the fishing expedition we have come to know as polygraph screening.  But as to your suggestion of team effort…when those of you who use CQT polygraphy in a criminal specific setting have mustered sufficient courage and integrity to openly condemn (It is not sufficient to simply say that my agency does not do polygraph screening) that which you know to be wrong and the source of victimization of thousands of individuals (including many who visit this site), then you will find me quite willing to be part of a team effort to end polygraph screening.  I will be more than happy to be a follower of those in your community who will spearhead such an effort and, once the mutual goals of such a team effort have been achieved, I will pledge support to reevaluate with an open mind all the various options for criminal specific-issue testing.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 1st, 2002 at 8:23am
George,

I was referring to your insistence that I must prove CQT polygraph valid.
 
For example:

On 01/22/02 you wrote:


Quote:

What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?



On 02/18/02 you wrote:


Quote:


Before I address your questions, I note that you didn't really answer mine:

1) Do you agree that that the available peer-reviewed research has not proven that CQT polygraphy works at better than chance levels of accuracy under field conditions? If not, why? What peer-reviewed field research proves that CQT polygraphy works better than chance? And just how valid does that research prove it to be?

I realize you averaged the Bersh and Barland & Raskin studies to come up with an average accuracy of 67.87%. Do you seriously maintain that these two studies prove that CQT polygraphy works better than chance and that it is 67.87% accurate?
You say it has not been shown to work better than chance, and then say that this is not the same as saying it doesn't work better than chance?  Please explain.



On 03/07/02 you wrote:


Quote:


No, J.B. The burden of proof rests with you (and other polygraph proponents) if you would have us believe that CQT polygraphy is a valid diagnostic technique. Respectfully, I don't think you've met that burden. Not even close.



On 03/30/02 you wrote:


Quote:


If the polygraph community would have the rest of us believe that CQT polygraphy is a genuinely standardized and controlled diagnostic test that works better than chance under field conditions, then it must shoulder the burden of proving it.



It is you who has said that, "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions. Moreover, since CQT polygraph lacks both standardization and control, it can have no validity."

I repeatedly have asked you how valid the current peer-reviewed scientific field research has shown CQT polygraph to be.  Drew interjected with a quite valid argument about true standards and controls. However, my original references to standardization and controls were based on your wording, "2) Do you agree that because CQT polygraphy lacks both standardization and control, it can have no validity? If not, why? …Other uncontrolled (and uncontrollable) variables that may reasonably be expected to affect the outcome of a polygraph interrogation include the subject's level of knowledge about CQT polygraphy (that is, whether he/she understands that it's a fraud) and whether the subject has employed countermeasures."  Even so, I continued the dialogue by referencing scientific definitions of controls and standards and how they are used in CQT polygraph.

I admit I erred in attributing Furedy's assumptions to Lykken; it was easily done, as they are from like ideological camps.  I completely understand what he is saying about the psychological and sociological consequences and elements that may differ in any given test/exam.

The survey you posted does not specify what information was given to those who were polled or what prior knowledge, if any, they had of polygraph.  Honts has disputed this poll, and I don't think it provides any enlightenment on your original statement, around which this debate is centered.

You then ask me once again to prove elements of validity before you have established your own assertion.


Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 1st, 2002 at 8:33am
Drew,

Just because there are no direct peer-reviewed research studies for CQT polygraph on the physiological responses described does not mean that there are no comparable psychophysiology studies.  I don't see that it is necessary to continue this discourse.  It is George who has a not-better-than-chance validity assertion to establish.  There also currently exists the problem of multiple uses that are shirttailed to one another under the common ground of the CQT format, which may rightfully cause subjectivity problems for almost every inference throughout a continued discussion.

I could not agree with you more about your task for the future of pre-employment polygraph screening.  I too hope that a true combined effort can be established for the betterment of society as a whole and of polygraph as a profession in the not-so-distant future.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 1st, 2002 at 12:22pm
J.B.,


Quote:
I was referring to your insistence that I must prove CQT polygraph valid.


This is hardly changing the subject.


Quote:
I repeatedly have asked you how valid the current peer-reviewed scientific field research has shown CQT polygraph to be.


No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research.


Quote:
The survey you posted does not specify what information was given to those who were polled or what prior knowledge, if any, they had of polygraph.  Honts has disputed this poll, and I don't think it provides any enlightenment on your original statement, around which this debate is centered.


The survey to which I referred (Iacono, W.G. and D.T. Lykken, The validity of the lie detector: Two surveys of scientific opinion, Journal of Applied Psychology, 1997, 82, 426-433) does indeed specify what information was provided to those who were polled, and your assertion that it doesn't suggests that you haven't read it. If you have the 2nd ed. of Lykken's A Tremor in the Blood: Uses and Abuses of the Lie Detector, you'll also find the information that was provided to survey respondents at pp. 179-181.

I only mention this survey in response to your ludicrous assertion that the failure of CQT polygraphy to be unanimously accepted by the scientific community is ascribable to quibbling over whose format is better (CQT vs. GKT).

In your last message directed to Drew you wrote:


Quote:
It is George who has a not better than chance validity assertions [sic] to establish.


Again, I haven't claimed that polygraphy has been proven not to work better than chance, but rather that it has not been proven by peer-reviewed research to work better than chance under field conditions. Your position seems to be that polygraphy is valid until proven invalid. If that is indeed your position, then I think there is little point in further discussion.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 8th, 2002 at 8:04pm
George,

It is a change of subject.  You shift the burden in every discussion and still have yet to assert what the accuracy rate is for CQT polygraph under field conditions.

I have read the survey you have cited.  What I meant by my prior statement was that the information was not specified and/or included in the survey as you cited it.  Iacono and Lykken's survey indicates it is for the purpose of "the evaluation of the validity of polygraphic lie detection".  However, it is overwhelmingly obvious that this survey was conducted in an attempt to discredit CQT and boost GKT.  My assertion about squabbling over methods is not ludicrous but a well-known fact: this ideological camp supports GKT and has only ill comments for CQT.  If you were to post the information provided to those surveyed about the CQT and GKT, one could see that the information was vague and biased.  I say vague because there is simply an opinionated summary of the CQT theory.  For example:


Quote:


Journal of Applied Psychology 1997, Vol. 82, No.3, 426-433

         The Validity of the Lie Detector: Two Surveys of Scientific Opinion

W.G. Iacono and D.T. Lykken
University of Minnesota, Twin Cities Campus

Pg. 427-428

                       Polygraph Techniques
CQT

           The CQT compares the physiological disturbance caused by relevant questions about the crime (e.g., for the O.J. Simpson case, “On June 12, did you stab your ex-wife, Nicole?”) with disturbance caused by “control” (more appropriately, comparison) questions relating to possible prior misdeeds (e.g., “Before 1992, did you ever lie to get out of trouble?”  or “During the first 45 years of your life, did you ever try to seriously hurt someone?”).  As characterized by Raskin (1986), the control questions, which are deliberately vague and therefore difficult for anyone to answer truthfully, are designed to give the innocent person

The opportunity to become more concerned about questions other than the relevant questions and   produce stronger physiological reactions to the control questions.  If the subject shows stronger physiological reactions to the control as compared to the relevant question, the test outcome is interpreted as truthful.  Stronger reactions to the relevant questions indicate deception. (p. 34)

GKT

     The GKT attempts to detect not lying, but whether the suspect possesses “guilty knowledge,” that is, knowledge that only the perpetrator of the crime and the police would possess (Lykken, 1981).  For example, “If you were at the crime scene, Mr. Simpson, you would know what Nicole was wearing.  Was she wearing a green swimsuit?  A black cocktail dress?  A white tennis outfit? A red blouse and slacks?  A blue bathrobe?  A T-shirt and jeans?”  A GKT might consist of 10 such items.  Guilt would be indicated by a consistently stronger physiological response to the correct guilty knowledge alternative among these items.  Although the GKT is seldom used in the field, it has been the topic of considerable interest, generating a substantial number of research reports in psychological journals (for reviews, see Abrams, 1989; Ben-Shakhar & Furedy, 1991; Iacono & Patrick, 1988).



There should have been data presented on the accepted research studies' validity findings and/or a list of these studies combined to show a statistical overall accuracy rate.  A short outline of the entire method should also have been included. I say biased because the wording in the descriptions of the two question formats is obviously slanted.  When the CQT is discussed, the physiological response is a “physiological disturbance”.  When the GKT is discussed, it is a “physiological response”.  When looking at the GKT method description, there are several studies suggested for reference.  The CQT method description lists only one source and suggests no references.  The difference between the highly informed subject group, who thought CQT was at least 85% accurate, and the remaining uninformed respondents was an interesting point of discussion that was touched on but dismissed. The percentages of respondents with an opinion on the surveyed areas are also an interesting topic of discussion that is set aside.

Whatever you wish to say about chance validity, you still have not once given what the established validity is and/or soundly explained how you have come to the chance-validity conclusion. Both sensitivity and specificity are included in your statement.  I assume your assertion is based on the RCMP study.  Maybe if you were to say, "CQT polygraphy has been shown by peer-reviewed research to work at not better than chance levels for truth under field conditions," your assertion might have some grounds, albeit still arguable.

My assertion is not that polygraph is valid until proven otherwise.  It is that the accuracy rate for the CQT format has been acceptably established when used within a specific criminal issue testing scope, and the only element missing is proof of general acceptance.  It has been pointed out in previous literature that one of the main reasons for this lack of scientific acceptance is the lack of universal agreement within the field of polygraph (i.e., use, question format methods, etc.).  This task is much easier to accomplish by narrowing the scope of use.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 9th, 2002 at 3:44pm
J.B.,

You wrote:


Quote:
It is a change of subject.  You shift the burden in every discussion and still have yet to assert what the accuracy rate is for CQT polygraph under field conditions.


As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?

You also wrote:


Quote:
My assertion about squabbling over methods is not ludicrous but a well-known fact that this ideological camp supports GKT and only prescribes ill comments to CQT.


What is ludicrous, J.B., is your earlier statement:


Quote:
The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis.  It has to do with the squabbling between ideological camps as to whose question format is better.


In Iacono & Lykken's survey of SPR members, only 36% of respondents with an opinion answered affirmatively when asked, "Would you say that the CQT is based on scientifically sound psychological principles or theory?" And 99% of respondents with an opinion agreed with the statement, "The CQT can be beaten by augmenting one’s response to the control questions."

I think it's completely absurd for you to suggest that the large majority who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT. Nor do I see any reason for supposing that any alleged bias on the part of Iacono and Lykken accounts for the results of their peer-reviewed survey.


Clearly, CQT polygraphy's lack of support amongst the scientific community is attributable to something more than just "squabbling between ideological camps as to whose question format is better."

Title: Re: The Scientific Validity of Polygraph
Post by beech trees on Apr 9th, 2002 at 7:00pm

George W. Maschke wrote on Apr 9th, 2002 at 3:44pm:
As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?


The district court found that there are no standards
which control the procedures used in the polygraph industry
and that without such standards a court can not adequately
evaluate the reliability of a particular polygraph exam... The Court enumerated a series of general observations
designed to aid trial judges in making initial admissibility
determinations. In ascertaining whether proposed testimony is
scientific knowledge, trial judges first must determine if the
underlying theory or technique is based on a testable scientific
hypothesis. Id. at 593. The second element considers whether
others in the scientific community have critiqued the proposed
concept and whether such critiques have been published in
peer-review journals. Id. at 593-94. Third, the trial judge
should consider the known or potential error rate. Id. at 594.
Fourth, courts are to consider whether standards to control the technique's operation exist...  


The reliability of polygraph testing fundamentally
      depends on the reliability of the protocol followed
      during the examination. After considering the evi-
      dence and briefing, the court concludes the proposed
      polygraph evidence is not admissible under Fed. R.
      Evid. 702. Although capable of testing and subject to
      peer review, no reliable error rate conclusions are
      available for real-life polygraph testing. Addition-
      ally, there is no general acceptance in the scientific
      community for the courtroom fact-determinative use
      proposed here. Finally, there are no reliable and
      accepted standards controlling polygraphy. Without
      such standards, there is no way to ensure proper pro-
      tocol, or measure the reliability of a polygraph
      examination. Without such standards, the proposed
      polygraph evidence is inadmissible because it is not
      based on reliable `scientific knowledge.'


USA v CORDOBA
9850082

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 12th, 2002 at 2:27am
George,

First,  where in any peer-reviewed scientific research study has your following assertion been conclusively shown to be true?


Quote:


As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?



Again, I am not saying polygraph is valid until proven otherwise.  I am asking what peer-reviewed scientific research supports your claim that “CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions.”  If there is research to support your statement, then what specificity and sensitivity did it establish?

You then mischaracterized my statements.

I wrote:


Quote:


The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better.



In relevant terms you respond:


Quote:


I think it's completely absurd for you to suggest that the large majority who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT. Nor do I see any reason for supposing that any alleged bias on the part of Iacono and Lykken accounts for the results of their peer-reviewed survey.



Not once have I said and/or suggested that the respondents “who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT.”  I have made no such claim, and there is no statistical data to support or dispute it.

The apparent bias in Iacono and Lykken's study, and in how it is presented, is shown in part by the points illustrated in my previous post. Was all of the data and material of this study made available to Honts, Raskin, et al. for critique and criticism, as required?  Did the study use consistent scales (i.e., 1 to 5) for all the questions?  Was the cutoff point uniform throughout the different questions (i.e., a ‘5’ response counted as ‘agree’ and a ‘1’ response as ‘disagree’ for every question asked and answered)?  How informed were the majority of responders?  A good follow-up to this survey would be to provide all the original responders with a detailed presentation of CQT polygraph, re-administer the original survey with a sub-answer for each method included in every question, and see how their responses differ and how they comparatively assess the degree of information provided in the original study versus the follow-up.


Quote:


Clearly, CQT polygraphy's lack of support amongst the scientific community is attributable to something more than just "squabbling between ideological camps as to whose question format is better."



Can you point to a debate over the scientific support of CQT polygraph in which an adversarial format is not involved?  It seems to be the recurring theme of nearly every discussion revolving around CQT polygraph.  Even you cite GKT proponents/CQT opponents in the texts and studies you reference.


Quote:


In Iacono & Lykken's survey of SPR members, only 36% of respondents with an opinion answered affirmatively when asked, "Would you say that the CQT is based on scientifically sound psychological principles or theory?" And 99% of respondents with an opinion agreed with the statement, "The CQT can be beaten by augmenting one’s response to the control questions."



These responses mean little to nothing without knowing the degree of knowledge of the responders.  The latter response needs additional definition of the augmentation.  For example: What is the probability that one would beat the CQT by augmenting one's responses to the control questions?  Remember that almost anything is possible and/or conceivable, but it is subject to given and/or stipulated conditions that establish its degree of probability.   Iacono and Lykken's study just gives the opinion of the responders that it is possible, not how probable and under what conditions.




Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 12th, 2002 at 7:18am
J.B.,

My conclusion that CQT polygraphy has not been proven by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions is based on a review of the four peer-reviewed field studies that have been published (and the understanding that CQT polygraphy is an unspecifiable procedure that lacks both standardization and control), not on a peer-reviewed study assessing those studies. (Note, however, that Professor Furedy's critique of the scientific status of CQT polygraphy, cited in Chapter 1 of The Lie Behind the Lie Detector, and which you casually dismissed as "elusive babble," was published in the International Journal of Psychophysiology.)

Again, if you disagree with me on this, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research? Your continued silence on this point suggests that you can't.

With regard to the Iacono and Lykken study, again, I only mentioned it to illustrate the point that the lack of unanimous support for CQT polygraphy in the scientific community is not, as you suggested, attributable merely to squabbling over whose technique (CQT vs. GKT) is better. The majority of survey respondents who did not believe that the CQT is based on scientifically sound psychological principles or theory cannot plausibly be argued to have based their skepticism regarding CQT polygraphy on some imputed advocacy for the GKT.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 15th, 2002 at 7:27am
George,

1. In those four studies you use for your assumption what are the established accuracy rates?

2. Furedy and Honts have been debating this for years, and it should once again be noted that Furedy reserves his ill comments for the CQT, not polygraphy generally, because he is a GKT format supporter.

3. Can you tell me what sensitivity and specificity have been established for any given forensic science by peer-reviewed and published studies under field conditions?  

4.  I don't think you read what I wrote in regard to the study. Iacono and Lykken obviously slanted the information given to those surveyed in the study.  The study also lacks the information that uninformed persons would need to make a scientific analysis of the CQT.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 15th, 2002 at 2:14pm
J.B.,


Quote:
1. In those four studies you use for your assumption what are the established accuracy rates?


The accuracy rates obtained (not established) in the four studies are presented in a table provided at p. 134 of the 2nd ed. of Lykken's A Tremor in the Blood. I'll reproduce that table here for the benefit of those without ready access to the book:

Table 8.2. Summary of Studies of Lie Test Validity That Were Published in Scientific Journals and That Used Confessions to Establish Ground Truth

                      Horvath    Kleinmuntz    Patrick &    Honts     Mean
                      (1977)     & Szucko      Iacono       (1996)
                                 (1984)        (1991)

Guilty correctly      21.6/28    38/50         48/49        7/7       114.6/134
classified            77%        76%           98%          100%      85.5%

Innocent correctly    14.3/28    32/50         11/20        5/5       62.3/103
classified            51%        64%           55%          100%      60.5%

Mean of above         64%        70%           77%          100%      73%
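To make the arithmetic explicit, the per-study and pooled ("Mean") figures can be recomputed from the raw counts. This is a simple sketch in Python; nothing is assumed beyond the numbers in Lykken's table itself.

```python
# Recompute the rates in Table 8.2 from the raw counts shown above.

studies = {
    "Horvath (1977)":             {"guilty": (21.6, 28), "innocent": (14.3, 28)},
    "Kleinmuntz & Szucko (1984)": {"guilty": (38, 50),   "innocent": (32, 50)},
    "Patrick & Iacono (1991)":    {"guilty": (48, 49),   "innocent": (11, 20)},
    "Honts (1996)":               {"guilty": (7, 7),     "innocent": (5, 5)},
}

for name, s in studies.items():
    sens = s["guilty"][0] / s["guilty"][1]      # guilty correctly classified
    spec = s["innocent"][0] / s["innocent"][1]  # innocent correctly classified
    print(f"{name}: guilty {sens:.0%}, innocent {spec:.0%}")

# Pooled figures, weighting each subject equally (matches the 85.5% / 60.5% column):
g_hit = sum(s["guilty"][0] for s in studies.values())
g_n   = sum(s["guilty"][1] for s in studies.values())
i_hit = sum(s["innocent"][0] for s in studies.values())
i_n   = sum(s["innocent"][1] for s in studies.values())
print(f"Pooled: guilty {g_hit}/{g_n} = {g_hit/g_n:.1%}, "
      f"innocent {i_hit}/{i_n} = {i_hit/i_n:.1%}")
```

Note that the pooled means weight each subject equally, while the per-study "Mean of above" row averages the two percentages within each column.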


As Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. Do you mean to suggest that these four studies are adequate for determining the sensitivity and specificity of CQT polygraphy?


Quote:
2. Furedy and Honts have been debating this for years, and it should once again be noted that Furedy reserves his ill comments for the CQT, not polygraphy generally, because he is a GKT format supporter.


So what? This doesn't support your laughably implausible assertion that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis.  It has to do with the squabbling between ideological camps as to who's question format is better."


Quote:
3. Can you tell me what sensitivity and specificity have been established for any given forensic science by peer-reviewed and published studies under field conditions?


No, not off the top of my head. I would assume that most diagnostic tests would be validated with laboratory studies. This is not feasible with CQT polygraphy because fear of consequences is a significant variable that is generally absent in the laboratory setting.

What is your point? Do you mean to suggest that I'm holding CQT polygraphy to an unfairly high standard?


Quote:
4.  I don't think you read what I wrote in regard to the study. Iacono and Lykken obviously slanted the information given to those surveyed in the study.  The study also lacks the information that uninformed persons would need to make a scientific analysis of the CQT.


I read it, but frankly, I think you're "picking fly shit out of pepper" in an attempt to dismiss the results of a peer-reviewed survey that happen not to support your wishes regarding the scientific community's acceptance of CQT polygraphy.

Again, you'll find the information provided to those surveyed cited (it's paraphrased in the journal article) at pp. 179-181 of A Tremor in the Blood. The description of the probable-lie CQT is largely cited from Raskin, a leading CQT proponent. Given the survey's high response rate, it would appear that most of those surveyed disagreed with your view that the information provided was inadequate for them to render an opinion on whether the CQT is based on scientifically sound principles or theory.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 16th, 2002 at 11:02pm
George,

You wrote:


Quote:


As Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. Do you mean to suggest that these four studies are adequate for determining the sensitivity and specificity of CQT polygraphy?



If you take the more recent of these studies, those conducted in the 1990’s, the ‘obtained’ accuracy is much higher.

                      Patrick & Iacono    Honts     Mean
                      (1991)              (1996)

Guilty correctly      48/49               7/7       55/56
classified            98%                 100%      99%

Innocent correctly    11/20               5/5       16/25
classified            55%                 100%      77.5%

Mean of above         77%                 100%      88.25%


Confession-based criteria are a dependable means of establishing ground truth if the definition of a confession is well defined and adhered to, and the examiner's decision based on the polygraph is made pre-confession.  As I have said before, one cannot place a definitive base rate on sensitivity and specificity in a field setting, because the mix of truthful and deceptive subjects may vary at any given time.  Likewise, in any forensic science the base rate in these two areas is ever changing within the field, based on the casework.  Sensitivity and specificity are established in a controlled laboratory research environment.  You have claimed chance accuracy and based that assumption on the four studies you posted.  I do not see where any of these studies, or even the four studies combined into a mean accuracy rate, produce an outcome that is not better than chance in any of the areas.  Even the lowest of the percentages is above chance.
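As an aside on why the field base rate matters here: taking the pooled Table 8.2 figures at face value (sensitivity 85.5%, specificity 60.5%, an assumption for illustration only, not a figure either correspondent endorses), Bayes' rule gives the probability that a subject who fails the test is actually deceptive, and that probability shifts sharply with the proportion of deceptive subjects being tested.

```python
# Sketch: positive predictive value of a "deceptive" result under varying base rates,
# using the pooled Table 8.2 figures purely as illustrative inputs.

def ppv(sensitivity: float, specificity: float, base_rate: float) -> float:
    """P(actually deceptive | scored deceptive), by Bayes' rule."""
    true_pos = sensitivity * base_rate
    false_pos = (1.0 - specificity) * (1.0 - base_rate)
    return true_pos / (true_pos + false_pos)

for base_rate in (0.50, 0.20, 0.05):
    print(f"base rate {base_rate:.0%}: "
          f"P(deceptive | failed) = {ppv(0.855, 0.605, base_rate):.0%}")
```

With a 50% base rate a failed test implies roughly a 68% chance of actual deception under these inputs, but at a 5% base rate it implies only about 10%.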


Quote:

 
So what? This doesn't support your laughably implausible assertion that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to who's question format is better."



Nonsense; the studies conducted by Honts had a much different reported outcome, so Lykken and Iacono decided to do their own study to refute Honts et al.  These studies were designed to find whether general acceptance existed.  General acceptance was an important element to establish because it was a main criterion for admissibility in court prior to Daubert v. Merrell Dow Pharmaceuticals.  The rules of evidence have since changed post-Daubert; see http://cyber.law.harvard.edu/daubert/ch3.htm for the new acceptance criteria.

I am not dismissing the results of any survey or saying the percentages are not what was reported.  I am saying that this survey, like any other, is only as good as the information provided, the questions posed, and how it is presented.  It is my opinion that Lykken and Iacono's survey was a poor attempt to discredit the acceptance of the CQT, especially given the difference in the highly informed opinion.  I do not think this is a good method of developing scientific acceptance for either format.  The GKT was reported as having two-thirds acceptance.  However, can the difference in the two formats' acceptance levels be correlated to the presentation of the formats, the difference in the amount of directed literature provided for each, and the degree of knowledge the surveyed had of a given method?  The survey does not pose equal questions across the board and leaves much to be answered.

Just because Raskin is a leading expert in the CQT does not mean the majority of the surveyed, who were relatively uninformed, will give weight to his statements.  Scientists are analytical by nature (i.e., tell me what is being done, how it was done, the results obtained, and how you calculated the results).  If this information were properly presented in a scientific forum to the relevant societies and the same results were obtained, I would accept the results.  That is not the case, though.
With the difference in the highly informed opinion, I think the results would be dramatically different, to the positive.  

You wrote:


Quote:


No, not off the top of my head. I would assume that most diagnostic tests would be validated with laboratory studies. This is not feasible with CQT polygraphy because fear of consequences is a significant variable that is generally absent in the laboratory setting.



Laboratory studies can be useful in this area.  See a conclusion on this topic at: http://www.polygraph.org/research.htm


Quote:


Podlesny, J. A., & Raskin, D. C. (1978). Effectiveness of techniques and physiological measures in the detection of deception. Psychophysiology, 15(4), 344-359.

Control-question (CQ) and guilty-knowledge (GK) techniques for the detection of deception were studied in a mock theft context. Subjects from the local community received $5 for participation, and both guilty and innocent subjects were motivated with a $10 bonus for a truthful outcome on the polygraph examination. They were instructed to deny the theft when they were examined by experimenters who were blind with respect to their guilt or innocence. Eight physiological channels were recorded. Blind numerical field evaluations with an inconclusive zone produced 94% and 83% correct decisions for two different types of CQ test and 89% correct decisions for GK tests. Control questions were more effective than guilt-complex questions, and exclusive control questions were more effective than nonexclusive control questions. Behavioral observations were relatively ineffective in differentiating guilty and innocent subjects. Quantitative analyses of the CQ and GK data revealed significant discrimination between guilty and innocent subjects with a variety of electrodermal and cardiovascular measures. The results support the conclusion that certain techniques and physiological measures can be very useful for the detection of deception in a laboratory mock-crime context.



Also, psychopaths and/or sociopaths have not been proven to be able to pass a polygraph examination when being deceptive.

See: http://www.polygraph.org/research.htm


Quote:


Raskin, D. C., & Hare, R. D. (1978). Psychopathy and detection of deception in a prison population. Psychophysiology, 15, 126-136.

The effectiveness of detection of deception was evaluated with a sample of 48 prisoners, half of whom were diagnosed psychopaths. Half of each group were "guilty" of taking $20 in a mock crime and half were "innocent". An examiner who had no knowledge of the guilt or innocence of each subject conducted a field-type interview followed by a control question polygraph examination. Electrodermal, respiration, and cardiovascular activity was recorded, and field (semi-objective) and quantitative evaluations of the physiological responses were made. Field evaluations by the examiner produced 88% correct, 4% wrong, and 8% inconclusives. Excluding inconclusives, there were 96% correct decisions. Using blind quantitative scoring and field evaluations, significant discrimination between "guilty" and "innocent" subjects was obtained for a variety of electrodermal, respiration, and cardiovascular measures. Psychopaths were as easily detected as nonpsychopaths, and psychopaths showed evidence of stronger electrodermal responses and heart rate decelerations. The effectiveness of control question techniques in differentiating truth and deception was demonstrated in psychopathic and nonpsychopathic criminals in a mock crime situation, and the generalizability of the results to the field situation is discussed.





Quote:


What is your point? Do you mean to suggest that I'm holding CQT polygraphy to an unfairly high standard?



It is not about 'fair' standards.  It is about a consistent standard for acceptance as valid, applied equally to other scientific and forensic scientific procedures.  


Quote:


Again, you'll find the information provided to those surveyed cited (it's paraphrased in the journal article) at pp. 179-181 of A Tremor in the Blood. The description of the probable-lie CQT is largely cited from Raskin, a leading CQT proponent. Given the survey's high response rate, it would appear that most of those surveyed disagreed with your view that the information provided was inadequate for them to render an opinion on whether the CQT is based on scientifically sound principles or theory.



Again, the respondents based their survey opinions on the information they were given.  Although paraphrased, that information is not consistent with what is presented in a scientific forum for the review of a method's scientific validity.  A conclusion that may be drawn from this survey is that the majority of scientists are not properly informed.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 17th, 2002 at 12:31pm
J.B.,

The two studies you point to do not establish that CQT polygraphy works better than chance, nor can any sensitivity and specificity for the procedure be inferred from them. The matter of sampling bias introduced when confessions are used as criteria for ground truth is indeed significant. You wrote:


Quote:
Confession-based criteria are a dependable means of establishing ground truth if the definition of a confession is well defined and adhered to, and the examiner's decision based on the polygraph is made pre-confession.


But Lykken (A Tremor in the Blood, 2nd ed., pp. 70-71) explains how reliance on confessions as criteria for ground truth biases the sampling:


Quote:

How Polygraph-Induced Confessions Mislead Polygraphers


It is standard practice for police polygraphers to interrogate a suspect who has failed the lie test. They tell him that the impartial, scientific polygraph has demonstrated his guilt, that no one now will believe his denials, and that his most sensible action at this point would be to confess and try to negotiate the best terms that he can. This is strong stuff, and what the examiner says to the suspect is especially convincing and effective because the examiner genuinely believes it himself. Police experience in the United States suggests that as many as 40% of interrogated suspects do actually confess in this situation. And these confessions provide virtually the only feedback of "ground truth" or criterion data that is ever available to a polygraph examiner.

If a suspect passes the polygraph test, he will not be interrogated because the examiner firmly believes he has been truthful. Suspects who are not interrogated do not confess, of course. This means that the only criterion data that are systematically sought--and occasionally obtained--are confessions by people who have failed the polygraph, confessions that are guaranteed to corroborate the tests that elicited those confessions. The examiner almost never discovers that a suspect he diagnosed as truthful was in fact deceptive, because that bad news is excluded by his dependence on immediate confessions for verification. Moreover, these periodic confessions provide a diet of consistently good news that confirms the examiner's belief that the lie test is nearly infallible. Note that the examiner's client or employer also hears about these same confessions and is also protected from learning about most of the polygrapher's mistakes.

Sometimes a confession can verify, not only the test that produced it, but also a previous test that resulted in a diagnosis of truthful. This can happen when there is more than one suspect in the same crime, so that the confession of one person reveals that the alternative suspect must be innocent. Once again, however, the examiner is usually protected from learning when he has made an error. If the suspect who was tested first is diagnosed as deceptive, then the alternative suspect--who might be the guilty one--is seldom tested at all because the examiner believes that the case was solved by that first failed test. This means that only rarely does a confession prove that someone who has already failed his test is actually innocent.

Therefore, when a confession allows us to evaluate the accuracy of the test given to a person cleared by that confession, then once again the news will almost always be good news; that innocent suspect will be found to have passed his lie test, because if the first suspect had not passed the test, the second person would not have been tested and would not have confessed.[endnote omitted]


As Lykken notes (p. 134), in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive.
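Lykken's selection argument lends itself to a toy Monte Carlo sketch (an editorial illustration with assumed parameters; the 40% confession rate loosely echoes the "as many as 40%" figure quoted above). Even a test with mediocre true accuracy looks perfect in the confession-verified sample, because every verified guilty case entered the sample by failing the test.

```python
import random

# Toy simulation of confession-based verification: ground truth is "verified"
# only by confessions obtained after a failed test, so the verified guilty
# sample corroborates the test by construction. The 75% true accuracy and
# 40% confession rate are assumed parameters, not figures from any study.

random.seed(1)
TRUE_SENS = 0.75      # assumed probability a guilty subject fails the test
TRUE_SPEC = 0.75      # assumed probability an innocent subject passes
CONFESS_RATE = 0.40   # guilty subjects who confess after failing

verified_guilty = 0        # guilty cases entering the sample via confession
verified_guilty_hits = 0   # of those, cases the test scored as deceptive

for _ in range(100_000):
    guilty = random.random() < 0.5
    failed = random.random() < (TRUE_SENS if guilty else 1.0 - TRUE_SPEC)
    # Only failed tests lead to interrogation, and only the guilty can confess:
    if failed and guilty and random.random() < CONFESS_RATE:
        verified_guilty += 1
        verified_guilty_hits += 1  # the confession always follows a failed test

print(f"True sensitivity assumed: {TRUE_SENS:.0%}")
print(f"Apparent sensitivity among confession-verified cases: "
      f"{verified_guilty_hits / max(verified_guilty, 1):.0%}")
```

The apparent sensitivity in the verified sample is 100% regardless of the true accuracy chosen, which is precisely the circularity Lykken describes.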

With regard to Honts' 1996 study, it would be appropriate to cite here Lykken's cogent commentary (pp. 134-35):


Quote:
The recent study by Honts illustrates that publication in a refereed journal is no guarantee of scientific respectability. The meticulous study by Patrick and Iacono was done with the cooperation of the Royal Canadian Mounted Police (RCMP) in Vancouver, B.C., and showed that nearly half of the suspects later shown to be innocent were diagnosed as deceptive by the RCMP polygraphers. This prompted the Canadian Police College to contract with Honts, once of the Raskin group, to conduct another study. A polygraphy instructor at the college sent Honts charts from tests administered to seven suspects who had confessed after failing the CQT and also charts of six suspects confirmed to be innocent by the confessions of alternative suspects in the same crimes. Knowing which were which, Honts then proceeded to rescore the charts, using the same scoring rules employed by the RCMP examiners. Those original examiners had, of course, scored all seven guilty suspects as deceptive; that was why they proceeded to interrogate them and obtained the criterial confessions. Using the same scoring rules (and also knowing which suspects were in fact guilty), Honts of course managed to score all seven as deceptive also. The RCMP examiners had scored four of the six innocent suspects as truthful and two as inconclusive. We can be confident that all innocent suspects classified as deceptive were never discovered to have been innocent because, in such cases, alternative suspects would not have been tested, excluding any possibility that the truly guilty suspect might have failed, been interrogated, and confessed. Honts, using the same scoring rules and perhaps aided by his foreknowledge of which suspects were innocent, managed to improve on the original examiners, scoring five of the six as truthful and only one as inconclusive. The difference in Honts's findings from those of the other studies summarized in Table 8.2 is striking.

Surely no sensible reader can imagine that these alleged "findings" of the Honts study add anything at all to the sum of human knowledge about the true accuracy of the CQT. How it came about that scientific peer review managed to allow this report to be published in an archival scientific journal is a mystery. Since the author, Honts, and the editor of the journal, Garvin Chastain, are colleagues in the psychology department of Boise State University, it is a mystery they might be able to solve.


You also wrote:


Quote:
As I have said before, one cannot place a definitive base rate on sensitivity and specificity in a field setting, because the mix of truthful and deceptive subjects may vary at any given time.  Likewise, in any forensic science the base rate in these two areas is ever changing within the field, based on the casework.  Sensitivity and specificity are established in a controlled laboratory research environment.


Can sensitivity and specificity genuinely be determined for a procedure like CQT polygraphy that is both unspecifiable and lacking in control?

In the message thread "What's more effective than the polygraph?" you wrote:


Quote:
...there is a known sensitivity and specificity for polygraph that has been established and proven through peer-reviewed scientific research.


Are you prepared, at long last, to reveal to us to whom that sensitivity and specificity is known, and what precisely it is? And what peer-reviewed research established it? Again, the sensitivity and specificity of CQT polygraphy appears to be unknown to the U.S. Government, and as Gordon Barland, formerly of the DoDPI research division, wrote in that message thread, "...I know of no official government statistic regarding sensitivity and specificity."

You also wrote:


Quote:
You have claimed chance accuracy and based that assumption on the four studies you posted.  I do not see where any of these studies, or even the four studies combined into a mean accuracy rate, produce an outcome that is not better than chance in any of the areas.  Even the lowest of the percentages is above chance.


Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.
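The point that no precise accuracy rate follows from samples this small can be quantified with a standard interval estimate. The sketch below is an editorial aside (neither correspondent computes confidence intervals): applying a 95% Wilson score interval to the counts from the field studies shows that a result like 5/5 is consistent with any true specificity from roughly 57% to 100%.

```python
import math

# 95% Wilson score interval for a binomial proportion, applied to the
# small samples from the field studies discussed above.

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Return the (lower, upper) 95% Wilson score interval for successes/n."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

for successes, n, label in [(7, 7, "Honts, guilty"),
                            (5, 5, "Honts, innocent"),
                            (11, 20, "Patrick & Iacono, innocent")]:
    lo, hi = wilson_interval(successes, n)
    print(f"{label}: {successes}/{n} -> 95% CI [{lo:.0%}, {hi:.0%}]")
```

The intervals for the innocent samples include or closely approach 50%, which is one way of stating that these counts alone do not pin down performance relative to chance.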

Finally, with regard to Iacono & Lykken's survey of scientific opinion on the polygraph, however inadequate you may think the information provided to respondents was, the fact remains that the great majority of survey respondents believed they had enough information to render an opinion on whether the CQT is based on scientifically sound psychological principles or theory. And only 36% of Society for Psychophysiological Research members and 30% of Division One fellows of the American Psychological Association thought it was.

If you genuinely believe that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis" and is instead attributable to "squabbling between ideological camps as to who's question format is better," well, more power to you, J.B. It appears to be a waste of my time and intellect to attempt to disabuse you of what seems to be a cherished delusion.

Title: Re: The Scientific Validity of Polygraph
Post by akuma264666 on Apr 20th, 2002 at 12:02pm
I did not lie about my drug use; I have never used drugs. I did not lie about selling drugs; I have never sold them. I am not, nor have I ever been, a member of a group whose purpose was the destruction of my country. I have never been contacted by a member of a non-U.S. government for the express purpose of selling secrets. I am most certainly not a traitor to my country, and yet your beloved polygraph has branded me so. My life has been ruined by that infernal machine, and for you to maintain that the polygraph has an acceptable accuracy rate makes me very angry. From my position, I don't care if the damned thing is 99% accurate, which it is not; it was wrong when it labelled me a drug-selling, dope-using traitor, and if it screwed me, I can only imagine how many countless others it has harmed. The polygraph cannot and should not take the place of old-fashioned investigative work; it has no place in the preemployment process of the federal government.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 21st, 2002 at 5:50am
George,


You wrote:


Quote:


The two studies you point to do not establish that CQT polygraphy works better than chance, nor can any sensitivity and specificity for the procedure be inferred from them. The matter of sampling bias introduced when confessions are used as criteria for ground truth is indeed significant.



The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer review, then inferences can be made about the accuracy of the method from those results.  The bias you speak of is directly dependent on the number of cases that were excluded because they lacked the criterion for inclusion.  

You wrote:


Quote:


As Lykken notes (p. 134), in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive.



You once again assert Lykken's opinion.  What is Raskin's opinion, whom you have admitted as a leading expert in CQT, on this study.  It appears to be additional evidence of conflicting opinion between two separate ideologies on question methodology.  You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?  Because the results turned out in the positive for the CQT.  "CQT-induced confession", now that is ludicrous.  Is Lykken suggesting that someone confesses because of the polygraph test question format used?  It would be interesting to see this assertion supported through research.

You wrote:


Quote:


Can sensitivity and specificity genuinely be determined for a procedure like CQT polygraphy that is both unspecifiable and lacking in control?



For one to say that a scientific method is unspecified, one must have definitive evidence of which other samples will produce the same result as the primary specificity.  An example of this would be the Marquis test when used to identify heroin.  In this test there are at least 50 other compounds that would produce the same result as the specified one.  Knowing what the specificity is, do you know of any other physiological response that would produce the same result during and after the asking and answering of a specific question?  I know you will probably attempt to assert countermeasures.  Countermeasures are not a physiological response but an attempt by one to produce a similar-looking response in order to alter the test outcome to the positive.  

You wrote:


Quote:


Are you prepared, at long last, to reveal to us to whom that sensitivity and specificity is known, and what precisely it is? And what peer-reviewed research established it? Again, the sensitivity and specificity of CQT polygraphy appears to be unknown to the U.S. Government, and as Gordon Barland, formerly of the DoDPI research division, wrote in that message thread, "...I know of no official government statistic regarding sensitivity and specificity."



The answer is that it is known in a controlled laboratory setting for research purposes, just as any other scientific method is established.  You are asking for statistics that are not among the criteria used in the establishment of a scientific method.  Gordon did not say that the specificity and sensitivity were not known.  He said they were not known in the context that you had given, within the field.  I have repeatedly answered this question to the point of your fixed inquiry, and you have acknowledged that specificity and sensitivity are not obtained in the setting in which you wish to place them.

You wrote:


Quote:


Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.



Again, accuracy is set by the accepted results that have been obtained.  Regardless of how it is worded (I fully understand the difference between has been and has not been), when something has not been proven statistically better than a given percentage, then it has been proven to be equal to or less than the specified percentage.  Knowing this, I have seen an abundance of accuracy rates obtained above chance, including in the peer-reviewed field research you recently posted as support for your assertion, but not one that supports "has not been proven by peer-reviewed research to be more accurate than chance."

You wrote:


Quote:


Finally, with regard to Iacono & Lykken's survey of scientific opinion on the polygraph, however inadequate you may think the information provided to respondents was, the fact remains that the great majority of survey respondents believed they had enough information to render an opinion on whether the CQT is based on scientifically sound psychological principles or theory. And only 36% of Society for Psychophysiological Research members and 30% of Division One fellows of the American Psychological Association thought it was.



This is a redundant argument that we should just agree to disagree on.  It is simply a survey and has nothing to do with scientific validity or, as I previously pointed out, the currently established rules of evidence.
 
You wrote:


Quote:


If you genuinely believe that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis" and is instead attributable to "squabbling between ideological camps as to who's question format is better," well, more power to you, J.B. It appears to be a waste of my time and intellect to attempt to disabuse you of what seems to be a cherished delusion.



I agree this to be a moot point.  However, there are no delusions on my part.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 21st, 2002 at 6:12am


jrjr2 wrote on Apr 20th, 2002 at 12:02pm:

I did not lie about my drug use; I have never used drugs. I did not lie about selling drugs; I have never sold them. I am not, nor have I ever been, a member of a group whose purpose was the destruction of my country. I have never been contacted by a member of a non-U.S. government for the express purpose of selling secrets. I am most certainly not a traitor to my country, and yet your beloved polygraph has branded me so. My life has been ruined by that infernal machine, and for you to maintain that the polygraph has an acceptable accuracy rate makes me very angry. From my position, I don't care if the damned thing is 99% accurate, which it is not; it was wrong when it labelled me a drug-selling, dope-using traitor, and if it screwed me, I can only imagine how many countless others it has harmed. The polygraph cannot and should not take the place of old-fashioned investigative work; it has no place in the preemployment process of the federal government.


akuma264666 ,

I have never advocated the use of the polygraph in a pre-employment screening process the way it is currently used.  There is little scientific research on the use of the polygraph for this purpose, and none of it favorable, in my opinion.  I agree that nothing can or should replace a thorough background investigation.  

My argument for polygraph being scientifically valid, as are all my arguments for polygraph, concerns specific-issue criminal testing.  This use is where the majority of the scientific research has been done and where polygraph has been shown to have high validity.

Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 21st, 2002 at 10:50am
J.B.,

You wrote:


Quote:
The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer review, then inferences can be made about the accuracy of the method from those results.  The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.


I'm not sure what your point is. Are you arguing that sampling bias is not a significant factor in the peer-reviewed field validity studies by Patrick & Iacono and Honts?


Quote:
You once again assert Lykken's opinion.  What is the opinion of Raskin, whom you have admitted is a leading expert on the CQT, on this study?  It appears to be additional evidence of conflicting opinion between two separate ideologies on question methodology.  You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?  Because the results turned out in the positive for the CQT.  "CQT-induced confession", now that is ludicrous.  Is Lykken suggesting that someone confesses because of the polygraph test question format used?  It would be interesting to see this assertion supported by research.


No, J.B., I did not "once again assert Lykken's opinion." I referred to an inconvenient (for polygraph proponents) fact that Lykken has pointed out: "in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive."

Your suggestion that Lykken's discussion of Patrick & Iacono's study amounts to a refutation of it is evidence that you haven't read Patrick & Iacono's study. If you had, you would know that Lykken's observations on the matter of sampling bias are entirely consistent with the conclusions drawn by Patrick & Iacono, which are implicit in the title of their article, "Validity of the control question polygraph test: The problem of sampling bias." (Journal of Applied Psychology, 76, 229-238)

In response to my following remarks:


Quote:
Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.


you replied:


Quote:
Again, accuracy is set by the accepted results that have been obtained.  Regardless of how it is worded (I fully understand the difference between has been and has not been), when something has not been proven statistically better then a given percentage then it has been proven to be equal to or less then the specified percentage.  Knowing this, I have seen an abundance of accuracy rates that have obtained above chance accuracy rates, including the peer-reviewed field research you recently posted as support for your assertion, but not one that supports "has not been proven by peer-reviewed research to be more accurate then chance."


Your reasoning that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is a logical fallacy of the argument to ignorance (argumentum ad ignorantiam) variety.

That something has not been proven to work better than chance does not mean that it has been proven to work no better than chance. If you cannot grasp this elementary concept, then my further debating with you the topics you proposed to discuss when you started this message thread is pointless, really.
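The distinction at issue here can be illustrated with an elementary significance calculation.  The sketch below uses invented numbers (hypothetical studies of 20 and of 200 cases, not data from any actual polygraph study) to show that failing to prove better-than-chance accuracy is not the same as proving chance accuracy: the identical 65% observed rate is statistically indistinguishable from a coin flip in a small sample, yet clearly better than chance in a large one.

```python
from math import comb

def binom_p_value(successes, n, p0=0.5):
    """One-sided exact binomial test: P(X >= successes) if true accuracy is p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(successes, n + 1))

# Hypothetical small field study: 13 of 20 correct decisions (65% observed).
p_small = binom_p_value(13, 20)

# The same 65% observed rate in a larger hypothetical sample: 130 of 200.
p_large = binom_p_value(130, 200)

print(f"n=20:  p = {p_small:.3f}")   # ~0.132: fails to reject chance, proving nothing either way
print(f"n=200: p = {p_large:.6f}")   # well below 0.05: better than chance is demonstrated
```

In the small study the evidence is simply insufficient to decide the question in either direction, which is the "elementary concept" being argued here.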

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 23rd, 2002 at 7:31am
George,

The first response I gave in my last post (highlighted to show the emphasis of the point):


Quote:


The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer review, then inferences can be made about the accuracy of the method from those results.  The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.



If the sampling bias were so significant, I would think the research would not have been accepted for publication after peer review.  Lykken et al. have criticized the use of a confession-based criterion in the past.  However, it appears it was still used in the Patrick & Iacono study, minus the one case.  In the real world, this criterion is difficult to establish.  Again, I don't feel that this is the best method, but it is an acceptable method if it follows strict guidelines such as the ones I posted previously.  

You wrote:

Quote:


No, J.B., I did not "once again assert Lykken's opinion." I referred to an inconvenient (for polygraph proponents) fact that Lykken has pointed out: "in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive."



I don't see how this is 'inconvenient (for polygraph proponents)'.  How do you propose that it is?

You wrote:

Quote:


Your suggestion that Lykken's discussion of Patrick & Iacono's study amounts to a refutation of it is evidence that you haven't read Patrick & Iacono's study. If you had, you would know that Lykken's observations on the matter of sampling bias are entirely consistent with the conclusions drawn by Patrick & Iacono, which are implicit in the title of their article, "Validity of the control question polygraph test: The problem of sampling bias." (Journal of Applied Psychology, 76, 229-238)



I never once indicated that Lykken's view differed from that of Patrick & Iacono.  You have a right to your opinion.  Again, they have previously criticized sampling based on this criterion, yet still chose to use it for their study.  If it is such a bad method, then why did they not use a different one?  I can't help but wonder what the comments, or lack thereof, might have been if the 'obtained' accuracy results had been less favorable.


You replied to my previous post about statistical accuracy:

Quote:


Your reasoning that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is a logical fallacy of the argument to ignorance (argumentum ad ignorantiam) variety.

That something has not been proven to work better than chance does not mean that it has been proven to work no better than chance. If you cannot grasp this elementary concept, then my further debating with you the topics you proposed to discuss when you started this message thread is pointless, really.



This is not a 'logical fallacy': "The argument to ignorance is a logical fallacy of irrelevance occurring when one claims that something is true only because it hasn't been proved false, or that something is false only because it has not been proved true."  It is a 'contradictory claim': "A claim is proved true if its contradictory is proved false, and vice-versa."  I am saying that there is proof, found in the four studies you illustrated as support for your statement, that polygraph has been proven to work better than chance in peer-reviewed field research.  By the definition of this argument, my claim has been proven true, and unless you can provide refuting evidence that your assertion is true, then yours is false.  If you could provide contrary evidence to support your assertion, then my claim would be a 'contrary claim' and not a 'logical fallacy'.


Title: Re: The Scientific Validity of Polygraph
Post by George W. Maschke on Apr 23rd, 2002 at 9:16am
J.B.,

You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.

With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"

You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).

Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument seems to be essentially an argument to authority (argumentum ad verecundiam), suggesting that the results obtained in these four studies must prove CQT validity works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.

For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.

Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.

Title: Re: The Scientific Validity of Polygraph
Post by J.B. McCloughan on Apr 26th, 2002 at 6:41am
George,

You wrote:

Quote:


You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.



For you and/or anyone else to say that the sampling bias is or was significant, the estimated number of cases/samples excluded would need to be established.  Although this measurement is nearly impossible in some applications, for example large census polls, it can be established in this particular research method.  However, the problem you pose is not a sampling bias per se but a potential measurement bias.  From: http://personalpages.geneseo.edu/~socl212/biaserror.html


Quote:


sample statistic = population parameter ± bias ± sampling error
· bias is systematic and each instance tends to push the statistic away from the parameter in a specific direction
   · Sampling bias
      · non-probability sample
      · inadequate sampling frame that fails to cover the population
      · non-response
      · the relevant concept is generalizability
   · Measurement bias
      · response bias (question wording, context, interviewer effects, etc.)
      · the relevant concept is measurement validity (content validity, criterion validity, construct validity, etc.)
   · there is no simple indicator of bias since there are many kinds of bias that act in quite different ways
· sampling error is random and does not push the statistic away in a specific direction
   · the standard error is an estimate of the size of the sampling error
   · a 95% confidence margin of error of ± 3 percentage points refers ONLY to sampling error, i.e., only to the error due to random sampling; all other error comes under the heading of bias



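The decomposition quoted above can be illustrated with a short simulation.  All numbers are invented for illustration: a population with a known 70% parameter is sampled once from the full population (sampling error only, scattering around the parameter) and once from a frame that under-covers part of the population (bias, a systematic push in one direction).

```python
import random

random.seed(42)

# Population of 10,000 cases; 1 = examiner's decision was correct, 0 = incorrect.
# The true population parameter is 70% correct (an arbitrary illustrative value).
population = [1] * 7000 + [0] * 3000
random.shuffle(population)
parameter = sum(population) / len(population)  # 0.70

def mean(values):
    return sum(values) / len(values)

# Random sampling from the full population: estimates scatter around the
# parameter (sampling error), with no systematic push in either direction.
unbiased = [mean(random.sample(population, 100)) for _ in range(200)]

# Biased sampling frame: suppose incorrect decisions are only half as likely
# to enter the frame (e.g. they rarely produce a confirming confession).
frame = [x for x in population if x == 1 or random.random() < 0.5]
biased = [mean(random.sample(frame, 100)) for _ in range(200)]

print(f"parameter              : {parameter:.2f}")
print(f"mean of unbiased means : {mean(unbiased):.2f}")  # close to 0.70
print(f"mean of biased means   : {mean(biased):.2f}")    # pushed above 0.70
```

The biased estimates do not merely scatter more; they are systematically displaced, which is why no amount of additional sampling corrects for an inadequate frame.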
Here is a more definitive explanation of criterion sampling. From: http://trochim.human.cornell.edu/tutorial/mugo/tutorial.htm


Quote:

PURPOSEFUL SAMPLING
Purposeful sampling selects information rich cases for indepth study. Size and specific cases depend on the study purpose.
There are about 16 different types of purposeful sampling. They are briefly described below for you to be aware of them. The details can be found in Patton(1990)Pg 169-186.

Criterion sampling Here, you set a criteria and pick all cases that meet that criteria for example, all ladies six feet tall, all white cars, all farmers that have planted onions. This method of sampling is very strong in quality assurance.
References

Patton, M.Q.(1990). Qualitative evaluation and research methods. SAGE Publications. Newbury Park London New Delhi



You wrote:

Quote:


With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"



Had you included my entire explanation of this, one could plainly see that I did not say Lykken's 'view differed' from that of Patrick & Iacono.

I wrote:

Quote:


You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp? Because the results turned out in the positive for the CQT.



Lykken refuted the high validity results obtained in the study.  This has nothing to do with Patrick's and/or Iacono's views on the CQT question method, nor with whether Patrick and/or Iacono refuted the validity results obtained.  The accuracy rate is a percentage obtained from the collected and processed data of the research study.  

You wrote:

Quote:

 
You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).



An example of a 'logical fallacy' would be if you had stated that polygraph has never been proven to work, so it does not, and I had countered that it has never been proven not to work, so it does.  One of your errors in using this word is that you have attached a statistical percentage, 'chance', to your assertion, which makes it definitive and not speculative.  Even if your definitive suggestion were subjected to this definition of 'logical fallacy' with the percentage included, my assertion still does not meet the definition of a 'logical fallacy'.  I would have had to state that when something has not been proven statistically better than a given percentage, then it has been proven to be equal to or (greater) than the specified percentage.  Furthermore, there would need to be an established general knowledge that neither has been proven true.  I have already explained this error to you and provided you with the full definition of a 'logical fallacy' and the true definition of this argument, 'contradictory claim', from the source that you used.  Again, you attempt to play on words by segmenting statements I have made.  I fully understand the definition and have taken the time to explain it to you.  I followed this statement with explanations and the data conflicting with your assertion, which support my assertion that polygraph has been proven to work at above chance levels in peer-reviewed field research.  

I wrote in support of the assertion:

Quote:


'..contradictory claim': "A claim is proved true if its contradictory is proved false, and vice-versa." I am saying that there is proof, found in the four studies you illustrated as support for your statement, that polygraph has been proven to work better than chance in peer-reviewed field research. By the definition of this argument, my claim has been proven true, and unless you can provide refuting evidence that your assertion is true, then yours is false. If you could provide contrary evidence to support your assertion, then my claim would be a 'contrary claim' and not a 'logical fallacy'.



You wrote:

Quote:


Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument seems to be essentially an argument to authority (argumentum ad verecundiam), suggesting that the results obtained in these four studies must prove CQT validity works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.

For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.

Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.



You are correct that my assertion is that these four studies provide proof of above-chance validity.  You are asserting a conflicting view, which you have a right to, but one that has no support for its assertion and thus is just a lay opinion.  Again, by the definition of 'contradictory claim', my assertion is true because it has been proven, and thus your view has been proven false because it has not been proven.  

My assertion is not an 'argument to authority (argumentum ad verecundiam)'.  The data speak for themselves and have been accepted.  Just because Lykken et al. refute the obtained accuracy results does not mean that the results were not accepted.  Your assertion that the results are unacceptable is itself the fallacy of ad verecundiam, since Lykken et al. present a bias towards one side.

From: http://gncurtis.home.texas.net/authorit.html

Quote:


Not all arguments from expert opinion are fallacious, and for this reason some authorities on logic have taken to labelling this fallacy as "appeal to false authority" or "argument from questionable  authority". For the same reason, I will use the traditional Latin tag "ad verecundiam" to distinguish fallacious from non-fallacious arguments from authority.

   We must often rely upon expert opinion when drawing conclusions about technical matters where we lack the time or expertise to form an informed opinion. For instance, those of us who are not physicians usually rely upon those who are when making medical decisions, and we are not wrong to do so. There are, however, four major ways in which such arguments can go wrong:

      1. An appeal to authority may be inappropriate in a couple of ways:

         A. It is unnecessary. If a question can be answered by observation or calculation, an argument from authority is not needed. Since arguments from authority are weaker than more direct evidence, go look or figure it out for yourself.

         The renaissance rebellion against the authority of Aristotle and the Bible played an important role in the scientific revolution. Aristotle was so respected in the Middle Ages that his word was taken on empirical issues which were easily decidable by observation. The scientific revolution moved away from this over-reliance on authority towards the use of observation and experiment.

         Similarly, the Bible has been invoked as an authority on empirical or mathematical questions. A particularly amusing example is the claim that the value of pi can be determined to be 3 based on certain passages in the Old Testament. The value of pi, however, is a mathematical question which can be answered by calculation, and appeal to authority is irrelevant.

         B. It is impossible. About some issues there simply is no expert opinion, and an appeal to authority is bound to commit the next type of mistake. For example, many self-help books are written every year by self-proclaimed "experts" on matters for which there is no expertise.

      2. The "authority" cited is not an expert on the issue, that is, the person who supplies the opinion is not an expert at all, or is one, but in an unrelated area. The now-classic example is the old television commercial which began: "I'm not a doctor, but I play one on TV...." The actor then proceeded to recommend a brand of medicine.

      3. The authority is an expert, but is not disinterested. That is, the expert is biased towards one side of the issue, and his opinion is thereby untrustworthy.

         For example, suppose that a medical scientist testifies that ambient cigarette smoke does not pose a hazard to the health of non-smokers exposed to it. Suppose, further, that it turns out that the scientist is an employee of a cigarette company. Clearly, the scientist has a powerful bias in favor of the position that he is taking which calls into question his objectivity.

         There is an old saying: "A doctor who treats himself has a fool for a patient." There is also a version for attorneys: "A lawyer who defends himself has a fool for a client." Why should these be true if the doctor or lawyer is an expert on medicine or the law? The answer is that we are all biased in our own causes. A physician who tries to diagnose his own illness is more likely to make a mistake out of wishful thinking, or out of fear, than another physician would be.

      4. While the authority is an expert, his opinion is unrepresentative of expert opinion on the subject. The fact is that if one looks hard enough, it is possible to find an expert who supports virtually any position that one wishes to take. "Such is human perversity", to quote Lewis Carroll. This is a great boon for debaters, who can easily find expert opinion on their side of a question, whatever that side is, but it is confusing for those of us listening to debates and trying to form an opinion.

         Experts are human beings, after all, and human beings err, even in their area of expertise. This is one reason why it is a good idea to get a second opinion about major medical matters, and even a third if the first two disagree. While most people understand the sense behind seeking a second opinion when their life or health is at stake, they are frequently willing to accept a single, unrepresentative opinion on other matters, especially when that opinion agrees with their own bias.

         Bias (problem 3) is one source of unrepresentativeness. For instance, the opinions of cigarette company scientists tend to be unrepresentative of expert opinion on the health consequences of smoking because they are biased to minimize such consequences. For the general problem of judging the opinion of a population based upon a sample, see the Fallacy of Unrepresentative Sample.

   To sum up these points in a positive manner, before relying upon expert opinion, go through the following checklist:

       * Is this a matter which I can decide without appeal to expert opinion? If the answer is "yes", then do so. If "no", go to the next question:

       * Is this a matter upon which expert opinion is available? If not, then your opinion will be as good as anyone else's. If so, proceed to the next question:

       * Is the authority an expert on the matter? If not, then why listen? If so, go on:

       * Is the authority biased towards one side? If so, the authority may be untrustworthy. At the very least, before accepting the authority's word seek a second, unbiased opinion. That is, go to the last question:

       * Is the authority's opinion representative of expert opinion? If not, then find out what the expert consensus is and rely on that. If so, then you may rationally rely upon the authority's opinion.

   If an argument to authority cannot pass these five tests, then it commits the fallacy of Ad Verecundiam.

Resources:

       * James Bachman, "Appeal to Authority", in Fallacies: Classical and Contemporary Readings, edited by Hans V. Hanson and Robert C. Pinto (Penn State Press, 1995), pp. 274-286.

       * Appeal to Authority, entry from philosopher Robert Todd Carroll's Skeptic's Dictionary.




You say you disagree with the statistical data of the four field research studies, but you used them as support for your assertion.  I will reiterate: this is not a sampling bias per se by definition.  Again, any bias that may or may not have been created was due to criterion selection (measurement bias).  Anyone can say that something caused error.  However, in the statistical realm one must provide reasoning and deduction from the data that support the view for it to be a valid assertion.  More importantly, for one to assert that the statistical data contain criterion/measurement-based bias, the result would need to be attributable to an external and/or internal variable in relationship to the criterion.  The fact that the confession criterion was used does not in itself create a per se bias.  

Here is a hypothetical instance of how a confession-based criterion research study may produce a criterion/measurement bias.  Confession was used as the criterion for the selection of deceptive polygraph chart data because it is a means of confirming the results.  In conducting the study, it was found that the original polygraph examiners' decisions were made after the post-test interview and were based on the confessions obtained.  Since the original decision and the selection criterion were the same, one cannot separate this variable from the original examiners' decisions, nor use other sources, independent of the confession, to base the original examiners' decisions on the polygraph chart data.  The criterion-based selection method thus caused an unknown degree of bias in the accuracy rate obtained from the deceptive cases reviewed.  
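This hypothetical can be sketched as a simulation.  Every rate below is invented for illustration; the point is only that when confession-confirmed cases are the sole ones studied, and confessions follow failed tests far more often than passed ones, the accuracy measured among confirmed cases exceeds the assumed true accuracy.

```python
import random

random.seed(1)

TRUE_ACCURACY = 0.75   # assumed true rate of correct "deceptive" calls on guilty subjects
N_GUILTY = 10_000

confirmed_correct = confirmed_total = 0
for _ in range(N_GUILTY):
    called_deceptive = random.random() < TRUE_ACCURACY  # a correct call on a guilty subject
    # Assumed confession rates: far higher after a failed test than a passed one.
    p_confess = 0.40 if called_deceptive else 0.02
    if random.random() < p_confess:      # only confessed cases enter the study
        confirmed_total += 1
        confirmed_correct += called_deceptive

measured = confirmed_correct / confirmed_total
print(f"assumed true accuracy on guilty subjects : {TRUE_ACCURACY:.2f}")
print(f"accuracy measured among confirmed cases  : {measured:.2f}")
```

Under these assumptions the confirmed sample is dominated by cases the examiner already called deceptive, so the measured rate overstates the assumed true rate to a degree that depends on the unknown confession probabilities.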

I agree that chance is not always 50/50.  If the accuracy results obtained for deceptive subjects, based on the original examiners' decisions, have only the two possible outcomes of a correct or an incorrect original decision, the original decision had a 50 percent chance of producing either result.  If you can point to another decision that was available and/or a reason that 50/50 is not the chance level of these studies, I would be willing to discuss that.
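The dependence of the chance level on base rates can be put in a few lines.  The numbers are illustrative only: having two possible outcomes does not by itself make the chance level 50%, because the expected hit rate of an examiner who calls charts at random depends on the proportion of guilty subjects in the sample and on how often he calls "deceptive."

```python
def blind_accuracy(base_rate, call_rate):
    """Expected accuracy of an examiner who calls 'deceptive' with probability
    call_rate, independently of the charts (i.e., by chance alone)."""
    return base_rate * call_rate + (1 - base_rate) * (1 - call_rate)

# With a 50-50 base rate and unbiased calls, chance accuracy is indeed 50%.
print(f"{blind_accuracy(0.5, 0.5):.2f}")  # 0.50

# But if, say, 90% of the sampled cases are guilty, an examiner who simply
# calls 90% of charts deceptive scores 82% by chance alone (illustrative).
print(f"{blind_accuracy(0.9, 0.9):.2f}")  # 0.82
```

This is one reading of the remark that "chance is not necessarily 50-50": in a sample skewed toward confirmed-guilty cases, even chart-blind guessing can score well above 50%.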

I respect Drew's views as a scientist but disagree with him on the issue of the scientific definitions we debated.  My definitions are from the manuals of other accepted scientific disciplines.  These manuals and their contents are nationally reviewed and accredited.  I feel the correlation of the presented structures within CQT polygraph meets these definitions, and Drew does not.  I agree to disagree with him on this issue.  
