The Scientific Validity of Polygraph (Read 34629 times)
George W. Maschke
Global Moderator
Make-believe science yields make-believe security.
Posts: 6139
Location: The Hague, The Netherlands
Joined: Sep 29th, 2000

Re: The Scientific Validity of Polygraph
Reply #30 - Apr 9th, 2002 at 3:44pm
J.B.,

You wrote:

Quote:
It is a change of subject.  You shift the burden in every discussion and still have yet to assert what the accuracy rate is for CQT polygraph under field conditions.


As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?
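For readers unfamiliar with the terms, sensitivity and specificity reduce to two ratios over a confusion matrix. A minimal sketch (all counts invented for illustration; no field figures exist for the CQT, which is the point at issue):

```python
# Sensitivity and specificity as used in this discussion.
# All counts below are invented for illustration.

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of deceptive subjects correctly scored deceptive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of truthful subjects correctly scored truthful."""
    return true_neg / (true_neg + false_pos)

print(round(sensitivity(45, 5), 2))   # 0.9  (45 of 50 liars flagged)
print(round(specificity(30, 20), 2))  # 0.6  (30 of 50 truthful cleared)
```

Determining either figure presupposes a standardized test applied to subjects whose ground truth is known, which is exactly what is in dispute for field CQT research.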

You also wrote:

Quote:
My assertion about squabbling over methods is not ludicrous but a well-known fact that this ideological camp supports GKT and only prescribes ill comments to CQT.


What is ludicrous, J.B., is your earlier statement:

Quote:
The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis.  It has to do with the squabbling between ideological camps as to whose question format is better.


In Iacono & Lykken's survey of SPR members, only 36% of respondents with an opinion answered affirmatively when asked, "Would you say that the CQT is based on scientifically sound psychological principles or theory?" And 99% of respondents with an opinion agreed with the statement, "The CQT can be beaten by augmenting one’s response to the control questions."

I think it's completely absurd for you to suggest that the large majority who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT. Nor do I see any reason for supposing that any alleged bias on the part of Iacono and Lykken accounts for the results of their peer-reviewed survey.


Clearly, CQT polygraphy's lack of support amongst the scientific community is attributable to something more than just "squabbling between ideological camps as to whose question format is better."
  

George W. Maschke
I am generally available in the chat room from 3 AM to 3 PM Eastern time.
Tel/SMS: 1-202-810-2105 (Please use Signal Private Messenger or WhatsApp to text or call.)
E-mail/iMessage/FaceTime: antipolygraph.org@protonmail.com
Wire: @ap_org
Threema: A4PYDD5S
Personal Statement: "Too Hot of a Potato"
 
beech trees
God Member
Posts: 593
Joined: Jun 22nd, 2001
Gender: Male

Re: The Scientific Validity of Polygraph
Reply #31 - Apr 9th, 2002 at 7:00pm
Quote:
As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?


The district court found that there are no standards which control the procedures used in the polygraph industry and that without such standards a court can not adequately evaluate the reliability of a particular polygraph exam... The Court enumerated a series of general observations designed to aid trial judges in making initial admissibility determinations. In ascertaining whether proposed testimony is scientific knowledge, trial judges first must determine if the underlying theory or technique is based on a testable scientific hypothesis. Id. at 593. The second element considers whether others in the scientific community have critiqued the proposed concept and whether such critiques have been published in peer-review journals. Id. at 593-94. Third, the trial judge should consider the known or potential error rate. Id. at 594. Fourth, courts are to consider whether standards to control the technique's operation exist...


The reliability of polygraph testing fundamentally depends on the reliability of the protocol followed during the examination. After considering the evidence and briefing, the court concludes the proposed polygraph evidence is not admissible under Fed. R. Evid. 702. Although capable of testing and subject to peer review, no reliable error rate conclusions are available for real-life polygraph testing. Additionally, there is no general acceptance in the scientific community for the courtroom fact-determinative use proposed here. Finally, there are no reliable and accepted standards controlling polygraphy. Without such standards, there is no way to ensure proper protocol, or measure the reliability of a polygraph examination. Without such standards, the proposed polygraph evidence is inadmissible because it is not based on reliable `scientific knowledge.'


USA v. Cordoba, No. 98-50082
  

"It is the duty of the patriot to protect his country from its government." ~ Thomas Paine
 
J.B. McCloughan
Very Senior User
Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male

Re: The Scientific Validity of Polygraph
Reply #32 - Apr 12th, 2002 at 2:27am
George,

First,  where in any peer-reviewed scientific research study has your following assertion been conclusively shown to be true?

Quote:
As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?



Again, I am not saying polygraph is valid until proven otherwise.  I am asking what peer-reviewed scientific research supports your claim that "CQT polygraphy has not been shown by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions."  If there is a study to support your statement, then what specificity and sensitivity did it establish?

You then mischaracterized my statements.

I wrote:

Quote:
The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better.



In relevant part, you responded:

Quote:
I think it's completely absurd for you to suggest that the large majority who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT. Nor do I see any reason for supposing that any alleged bias on the part of Iacono and Lykken accounts for the results of their peer-reviewed survey.



Not once have I said and/or suggested that the respondents "who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT."  I have made no mention of this, and there is no statistical data to suggest this or its disputed form.

The apparent bias in Iacono and Lykken's study, and in how it is presented, is shown in part by the points illustrated in my previous post. Was all of the data and material of this study made available to Honts, Raskin, et al. for critique and criticism, as required? Did the study use consistent scales (i.e., 1-5) for all the questions? Was the cutoff point uniform throughout the different questions (i.e., does a '5' response count as 'agree' and a '1' response count as 'disagree' for every question asked and answered)? How informed were the majority of respondents? A good follow-up to this survey would be to give all the original respondents a detailed presentation of CQT polygraph, re-administer the original survey with a sub-answer for each method included under every question, and see how their responses differ and how their concluding assessment compares with the degree of information provided in the original study.

Quote:
Clearly, CQT polygraphy's lack of support amongst the scientific community is attributable to something more than just "squabbling between ideological camps as to whose question format is better."



Can you illustrate a debate over the scientific support of CQT polygraph where an adversarial format is not involved?  It seems to be the recurring theme of almost every discussion revolving around CQT polygraph.  Even you use GKT proponents/CQT opponents in the texts and studies you cite.

Quote:
In Iacono & Lykken's survey of SPR members, only 36% of respondents with an opinion answered affirmatively when asked, "Would you say that the CQT is based on scientifically sound psychological principles or theory?" And 99% of respondents with an opinion agreed with the statement, "The CQT can be beaten by augmenting one’s response to the control questions."



These responses mean little to nothing without knowing the degree of knowledge of the respondents.  The latter statement also needs the augmentation defined.  For example: what is the probability that one would beat the CQT by augmenting one's responses to the control questions?  Remember that almost anything is possible and/or conceivable, but a given and/or stipulated condition is needed to establish a degree of its probability.  Iacono and Lykken's study just gives the respondents' opinion that it is possible, not how probable and under what conditions.



  

Quam verum decipio nos
 
George W. Maschke
Global Moderator

Re: The Scientific Validity of Polygraph
Reply #33 - Apr 12th, 2002 at 7:18am
J.B.,

My conclusion that CQT polygraphy has not been proven by peer-reviewed scientific research to differentiate between truth and deception at better than chance levels of accuracy under field conditions is based on a review of the four peer-reviewed field studies that have been published (and the understanding that CQT polygraphy is an unspecifiable procedure that lacks both standardization and control), not on a peer-reviewed study assessing those studies. (Note, however, that Professor Furedy's critique of the scientific status of CQT polygraphy, cited in Chapter 1 of The Lie Behind the Lie Detector, and which you casually dismissed as "elusive babble," was published in the International Journal of Psychophysiology.)

Again, if you disagree with me on this, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research? Your continued silence on this point suggests that you can't.

With regard to the Iacono and Lykken study, again, I only mentioned it to illustrate the point that the lack of unanimous support for CQT polygraphy in the scientific community is not, as you suggested, attributable merely to squabbling over whose technique (CQT vs. GKT) is better. The majority of survey respondents who did not believe that the CQT is based on scientifically sound psychological principles or theory cannot plausibly be argued to have based their skepticism regarding CQT polygraphy on some imputed advocacy for the GKT.
  

 
J.B. McCloughan
Very Senior User

Re: The Scientific Validity of Polygraph
Reply #34 - Apr 15th, 2002 at 7:27am
George,

1. In those four studies you use for your assumption, what are the established accuracy rates?

2. Furedy and Honts have been debating this for years, and it should once again be noted that Furedy reserves his ill comments for the CQT, not polygraph in general, because he is a GKT format supporter.

3. Can you tell me what sensitivity and specificity has been established for any given forensic science by peer-reviewed and published studies under field conditions?

4. I don't think you read what I wrote with regard to the study. Iacono and Lykken obviously slanted the information given to those surveyed in the study.  The study also lacks the information uninformed persons would need to make a scientific analysis of the CQT.
  

 
George W. Maschke
Global Moderator

Re: The Scientific Validity of Polygraph
Reply #35 - Apr 15th, 2002 at 2:14pm
J.B.,

Quote:
1. In those four studies you use for your assumption what are the established accuracy rates?


The accuracy rates obtained (not established) in the four studies are presented in a table provided at p. 134 of the 2nd ed. of Lykken's A Tremor in the Blood. I'll reproduce that table here for the benefit of those without ready access to the book:

Table 8.2. Summary of Studies of Lie Test Validity That Were Published in Scientific Journals and That Used Confessions to Establish Ground Truth

                        Horvath   Kleinmuntz    Patrick &    Honts    Mean
                        (1977)    & Szucko      Iacono       (1996)
                                  (1984)        (1991)
Guilty correctly        21.6/28   38/50         48/49        7/7      114.6/134
classified              77%       76%           98%          100%     85.5%
Innocent correctly      14.3/28   32/50         11/20        5/5      62.3/103
classified              51%       64%           55%          100%     60.5%
Mean of above           64%       70%           77%          100%     73%


As Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. Do you mean to suggest that these four studies are adequate for determining the sensitivity and specificity of CQT polygraphy?
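For those who want to check the pooled figures in the table, summing hits and totals across the four studies reproduces the Mean column. A quick sketch:

```python
# Pool the per-study counts from Lykken's Table 8.2 and confirm
# the Mean-column figures (114.6/134 = 85.5%, 62.3/103 = 60.5%).
guilty   = [(21.6, 28), (38, 50), (48, 49), (7, 7)]    # (hits, n)
innocent = [(14.3, 28), (32, 50), (11, 20), (5, 5)]

def pooled(rows):
    hits = round(sum(h for h, _ in rows), 1)
    n = sum(n for _, n in rows)
    return hits, n, round(100 * hits / n, 1)

print(pooled(guilty))    # (114.6, 134, 85.5)
print(pooled(innocent))  # (62.3, 103, 60.5)
```

Note that the pooled means weight each study by its sample size; the 73% overall figure in the table is instead the simple mean of the per-study means.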

Quote:
2. Furedy and Honts have been debating this for years and it should be once again noted that Furedy reserves ill comments for CQT not polygraph because he is a GKT format supporter.


So what? This doesn't support your laughably implausible assertion that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis.  It has to do with the squabbling between ideological camps as to whose question format is better."

Quote:
3. Can you tell me what sensitvity and specifity has been established for any given forensic science by peer-reviewed and published studies under field conditions?


No, not off the top of my head. I would assume that most diagnostic tests would be validated with laboratory studies. This is not feasible with CQT polygraphy because fear of consequences is a significant variable that is generally absent in the laboratory setting.

What is your point? Do you mean to suggest that I'm holding CQT polygraphy to an unfairly high standard?

Quote:
4.  I don't think you read what I wrote in regards to the study. Iacono and Lykken obviously slanted the information given to the surveyed in the study.  The study also lacks proper information for uninformed persons to be able to make a scientific analysis of the CQT.


I read it, but frankly, I think you're "picking fly shit out of pepper" in an attempt to dismiss the results of a peer-reviewed survey that happen not to support your wishes regarding the scientific community's acceptance of CQT polygraphy.

Again, you'll find the information provided to those surveyed cited (it's paraphrased in the journal article) at pp. 179-181 of A Tremor in the Blood. The description of the probable-lie CQT is largely cited from Raskin, a leading CQT proponent. Given the survey's high response rate, it would appear that most of those surveyed disagreed with your view that the information provided was inadequate for them to render an opinion on whether the CQT is based on scientifically sound principles or theory.
  

 
J.B. McCloughan
Very Senior User

Re: The Scientific Validity of Polygraph
Reply #36 - Apr 16th, 2002 at 11:02pm
George,

You wrote:

Quote:
As Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. Do you mean to suggest that these four studies are adequate for determining the sensitivity and specificity of CQT polygraphy?



If you take the more recent of these studies, those conducted in the 1990s, the 'obtained' accuracy is much higher.

                        Patrick &    Honts     Mean
                        Iacono       (1996)
                        (1991)
Guilty correctly        48/49        7/7       55/56
classified              98%          100%      99%
Innocent correctly      11/20        5/5       16/25
classified              55%          100%      77.5%
Mean of above           77%          100%      88.25%


Confession-based criteria are a dependable means of establishing ground truth if the definition of confession is well defined, adhered to, and the examiner's decision based on the polygraph is pre-confession.  As I have said before, one cannot place a definitive base rate on sensitivity and specificity in a field setting due to the varying mix of truthful and deceptive subjects that may be present at any given time.  Likewise, in any forensic science the base rates in these two areas are ever-changing within the field based on the casework.  Sensitivity and specificity are established in a controlled laboratory research environment.  You have said chance accuracy and based your assumption on the four studies that you posted.  I do not see where any of these studies, or even the four studies combined for a mean accuracy rate, produce an outcome no better than chance in any of the areas.  Even the lowest of the percentages is above chance.
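The base-rate point can be made concrete: even if sensitivity and specificity were fixed, the probability that a "deceptive" score is correct moves sharply with the proportion of liars actually being tested. A sketch using the pooled Table 8.2 means purely as stand-in values, not established field figures:

```python
# Positive predictive value (chance a "deceptive" score is correct)
# at a fixed, assumed sensitivity and specificity, across base rates.
# 0.855 and 0.605 are the pooled Table 8.2 means, used here only as
# stand-in values for illustration.
def ppv(sens, spec, base_rate):
    true_pos = sens * base_rate              # deceptive, scored deceptive
    false_pos = (1 - spec) * (1 - base_rate) # truthful, scored deceptive
    return true_pos / (true_pos + false_pos)

for rate in (0.5, 0.1, 0.01):
    print(rate, round(ppv(0.855, 0.605, rate), 3))
# 0.5  -> 0.684
# 0.1  -> 0.194
# 0.01 -> 0.021
```

The same accuracy figures that look respectable at a 50% base rate of deception imply that most "deceptive" calls are false positives when liars are rare, as in screening populations.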

Quote:
 
So what? This doesn't support your laughably implausible assertion that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better."



Nonsense; the studies conducted by Honts had a much different reported outcome, so Lykken and Iacono decided to do their own study to refute Honts et al.  These studies were designed to find whether general acceptance existed.  General acceptance was an important element to establish because it was the main criterion for admissibility in court prior to Daubert v. Merrell Dow Pharmaceuticals.  The rules of evidence have since changed post-Daubert; see http://cyber.law.harvard.edu/daubert/ch3.htm for the new acceptance criteria.

I am not dismissing the results of any survey or saying the percentages are not what was reported.  I am saying that this survey, like any other, is only as good as the information provided, the questions posed, and how it is presented.  It is my opinion that Lykken and Iacono's survey was a poor attempt to discredit the acceptance of the CQT, especially given the difference in the highly informed opinion.  I do not think this is a good method of developing scientific acceptance for either format.  The GKT was reported as having two-thirds acceptance.  However, can the difference in the two formats' acceptance levels be correlated to the presentation of the formats, the difference in the amount of directed literature provided for each, and the degree of knowledge the surveyed had of a given method?  The survey does not pose equal questions across the board and leaves much to be answered.

Just because Raskin is a leading expert in the CQT does not mean the majority of the surveyed, who were relatively uninformed, will give weight to his statements.  Scientists are analytical in nature (i.e., tell me what is being done, how it was done, the results obtained, and how you calculated the results).  If this information were properly presented in a scientific forum to the relevant societies and the same results were obtained, I would accept the results.  That is not the case, though.  With the difference in the highly informed opinion, I think the results would be dramatically different, to the positive.

You wrote:

Quote:


No, not off the top of my head. I would assume that most diagnostic tests would be validated with laboratory studies. This is not feasible with CQT polygraphy because fear of consequences is a significant variable that is generally absent in the laboratory setting.



Laboratory studies can be useful in this area.  See a conclusion on this topic at: http://www.polygraph.org/research.htm

Quote:
Podlesny, J. A., & Raskin, D. C. (1978). Effectiveness of techniques and physiological measures in the detection of deception. Psychophysiology, 15(4), 344-359.

Control-question (CQ) and guilty-knowledge (GK) techniques for the detection of deception were studied in a mock theft context. Subjects from the local community received $5 for participation, and both guilty and innocent subjects were motivated with a $10 bonus for a truthful outcome on the polygraph examination. They were instructed to deny the theft when they were examined by experimenters who were blind with respect to their guilt or innocence. Eight physiological channels were recorded. Blind numerical field evaluations with an inconclusive zone produced 94% and 83% correct decisions for two different types of CQ test and 89% correct decisions for GK tests. Control questions were more effective than guilt-complex questions, and exclusive control questions were more effective than nonexclusive control questions. Behavioral observations were relatively ineffective in differentiating guilty and innocent subjects. Quantitative analyses of the CQ and GK data revealed significant discrimination between guilty and innocent subjects with a variety of electrodermal and cardiovascular measures. The results support the conclusion that certain techniques and physiological measures can be very useful for the detection of deception in a laboratory mock-crime context.



Also, psychopaths and/or sociopaths have not been shown to be able to pass a polygraph when being deceptive.

See: http://www.polygraph.org/research.htm

Quote:
Raskin, D. C., & Hare, R. D. (1978). Psychopathy and detection of deception in a prison population. Psychophysiology, 15, 126-136.

The effectiveness of detection of deception was evaluated with a sample of 48 prisoners, half of whom were diagnosed psychopaths. Half of each group were "guilty" of taking $20 in a mock crime and half were "innocent". An examiner who had no knowledge of the guilt or innocence of each subject conducted a field-type interview followed by a control question polygraph examination. Electrodermal, respiration, and cardiovascular activity was recorded, and field (semi-objective) and quantitative evaluations of the physiological responses were made. Field evaluations by the examiner produced 88% correct, 4% wrong, and 8% inconclusives. Excluding inconclusives, there were 96% correct decisions. Using blind quantitative scoring and field evaluations, significant discrimination between "guilty" and "innocent" subjects was obtained for a variety of electrodermal, respiration, and cardiovascular measures. Psychopaths were as easily detected as nonpsychopaths, and psychopaths showed evidence of stronger electrodermal responses and heart rate decelerations. The effectiveness of control question techniques in differentiating truth and deception was demonstrated in psychopathic and nonpsychopathic criminals in a mock crime situation, and the generalizability of the results to the field situation is discussed.




Quote:
What is your point? Do you mean to suggest that I'm holding CQT polygraphy to an unfairly high standard?



It is not about ‘fair’ standards.  It is about a consistent standard applied to other scientific or forensic scientific procedures for acceptability as being valid.  

Quote:
Again, you'll find the information provided to those surveyed cited (it's paraphrased in the journal article) at pp. 179-181 of A Tremor in the Blood. The description of the probable-lie CQT is largely cited from Raskin, a leading CQT proponent. Given the survey's high response rate, it would appear that most of those surveyed disagreed with your view that the information provided was inadequate for them to render an opinion on whether the CQT is based on scientifically sound principles or theory.



Again, the respondents based their opinions on the information they were given.  Although paraphrased, that information is not consistent with what is presented in a scientific forum for the review of a method's scientific validity.  A conclusion that may be drawn from this survey is that the majority of scientists are not properly informed.
  

 
George W. Maschke
Global Moderator

Re: The Scientific Validity of Polygraph
Reply #37 - Apr 17th, 2002 at 12:31pm
J.B.,

The two studies you point to do not establish that CQT polygraphy works better than chance, nor can any sensitivity and specificity for the procedure be inferred from them. The matter of sampling bias introduced when confessions are used as criteria for ground truth is indeed significant. You wrote:

Quote:
Confession-based criteria are a dependable means of establishing ground truth if the definition of confession is well defined, adhered to, and the examiner's decision based on the polygraph is pre-confession.


But Lykken (A Tremor in the Blood, 2nd ed., pp. 70-71) explains how reliance on confessions as criteria for ground truth biases the sampling:

Quote:
How Polygraph-Induced Confessions Mislead Polygraphers


It is standard practice for police polygraphers to interrogate a suspect who has failed the lie test. They tell him that the impartial, scientific polygraph has demonstrated his guilt, that no one now will believe his denials, and that his most sensible action at this point would be to confess and try to negotiate the best terms that he can. This is strong stuff, and what the examiner says to the suspect is especially convincing and effective because the examiner genuinely believes it himself. Police experience in the United States suggests that as many as 40% of interrogated suspects do actually confess in this situation. And these confessions provide virtually the only feedback of "ground truth" or criterion data that is ever available to a polygraph examiner.

If a suspect passes the polygraph test, he will not be interrogated because the examiner firmly believes he has been truthful. Suspects who are not interrogated do not confess, of course. This means that the only criterion data that are systematically sought--and occasionally obtained--are confessions by people who have failed the polygraph, confessions that are guaranteed to corroborate the tests that elicited those confessions. The examiner almost never discovers that a suspect he diagnosed as truthful was in fact deceptive, because that bad news is excluded by his dependence on immediate confessions for verification. Moreover, these periodic confessions provide a diet of consistently good news that confirms the examiner's belief that the lie test is nearly infallible. Note that the examiner's client or employer also hears about these same confessions and is also protected from learning about most of the polygrapher's mistakes.

Sometimes a confession can verify, not only the test that produced it, but also a previous test that resulted in a diagnosis of truthful. This can happen when there is more than one suspect in the same crime, so that the confession of one person reveals that the alternative suspect must be innocent. Once again, however, the examiner is usually protected from learning when he has made an error. If the suspect who was tested first is diagnosed as deceptive, then the alternative suspect--who might be the guilty one--is seldom tested at all because the examiner believes that the case was solved by that first failed test. This means that only rarely does a confession prove that someone who has already failed his test is actually innocent.

Therefore, when a confession allows us to evaluate the accuracy of the test given to a person cleared by that confession, then once again the news will almost always be good news; that innocent suspect will be found to have passed his lie test, because if the first suspect had not passed the test, the second person would not have been tested and would not have confessed.[endnote omitted]


As Lykken notes (p. 134), in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive.
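The selection effect Lykken describes lends itself to a toy simulation: if only failed charts lead to interrogation, and only confessions verify a chart, then every confession-verified chart is by construction a correct call, no matter what the test's real accuracy is. A sketch with invented parameters:

```python
import random

random.seed(1)
TRUE_ACC = 0.70   # assumed real per-chart accuracy (invented)
CONFESS = 0.40    # fraction of interrogated guilty who confess (invented)

verified = []     # correctness of charts later "verified" by confession
for _ in range(100_000):
    guilty = random.random() < 0.5
    correct = random.random() < TRUE_ACC
    scored_deceptive = guilty if correct else not guilty
    # Only failed tests are followed by interrogation, and only the
    # actually guilty can confess -- so only correct calls get verified.
    if scored_deceptive and guilty and random.random() < CONFESS:
        verified.append(correct)

print(sum(verified) / len(verified))  # 1.0 despite 70% real accuracy
```

The verified sample reports perfect accuracy while the simulated test is wrong 30% of the time: exactly the "diet of consistently good news" Lykken describes.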

With regard to Honts' 1996 study, it would be appropriate to cite here Lykken's cogent commentary (pp. 134-35):

Quote:
The recent study by Honts illustrates that publication in a refereed journal is no guarantee of scientific respectability. The meticulous study by Patrick and Iacono was done with the cooperation of the Royal Canadian Mounted Police (RCMP) in Vancouver, B.C., and showed that nearly half of the suspects later shown to be innocent were diagnosed as deceptive by the RCMP polygraphers. This prompted the Canadian Police College to contract with Honts, once of the Raskin group, to conduct another study. A polygraphy instructor at the college sent Honts charts from tests administered to seven suspects who had confessed after failing the CQT and also charts of six suspects confirmed to be innocent by the confessions of alternative suspects in the same crimes. Knowing which were which, Honts then proceeded to rescore the charts, using the same scoring rules employed by the RCMP examiners. Those original examiners had, of course, scored all seven guilty suspects as deceptive; that was why they proceeded to interrogate them and obtained the criterial confessions. Using the same scoring rules (and also knowing which suspects were in fact guilty), Honts of course managed to score all seven as deceptive also. The RCMP examiners had scored four of the six innocent suspects as truthful and two as inconclusive. We can be confident that all innocent suspects classified as deceptive were never discovered to have been innocent because, in such cases, alternative suspects would not have been tested, excluding any possibility that the truly guilty suspect might have failed, been interrogated, and confessed. Honts, using the same scoring rules and perhaps aided by his foreknowledge of which suspects were innocent, managed to improve on the original examiners, scoring five of the six as truthful and only one as inconclusive. The difference in Honts's findings from those of the other studies summarized in Table 8.2 is striking.

Surely no sensible reader can imagine that these alleged "findings" of the Honts study add anything at all to the sum of human knowledge about the true accuracy of the CQT. How it came about that scientific peer review managed to allow this report to be published in an archival scientific journal is a mystery. Since the author, Honts, and the editor of the journal, Garvin Chastain, are colleagues in the psychology department of Boise State University, it is a mystery they might be able to solve.


You also wrote:

Quote:
As I have said before, one can not place a definitive base rate to sensitivity and specificity in a field setting due to the variable truthful and deceptive that may be present at any given time.  Likewise, in any forensic science the base rate of these two areas is ever changing within the field based on the casework.  Sensitivity and specificity are established in a controlled laboratory research environment.


Can sensitivity and specificity genuinely be determined for a procedure like CQT polygraphy that is both unspecifiable and lacking in control?

In the message thread, "What's more effective than the polygraph?" you wrote:

Quote:
...there is a known sensitivity and specificity for polygraph that has been established and proven through peer-reviewed scientific research.


Are you prepared, at long last, to reveal to us to whom that sensitivity and specificity is known, and what precisely it is? And what peer-reviewed research established it? Again, the sensitivity and specificity of CQT polygraphy appears to be unknown to the U.S. Government, and as Gordon Barland, formerly of the DoDPI research division, wrote in that message thread, "...I know of no official government statistic regarding sensitivity and specificity."

You also wrote:

Quote:
You have said chance accuracy and based your assumption on the four studies that you posted.  I do not see where in any of these studies or even the four studies combined for a mean accuracy rate produce a not better then chance outcome in any of the areas.  Even the lowest of the percentages is above chance.


Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.
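For readers following the terminology in this exchange, here is a minimal sketch of how sensitivity and specificity are computed from a verified 2x2 outcome table. The counts below are entirely hypothetical, invented for illustration; no such verified field counts exist for CQT polygraphy, which is precisely the point under dispute.

```python
# Sensitivity and specificity from a 2x2 confusion matrix.
# All counts are hypothetical, for illustration only.

def sensitivity(true_pos, false_neg):
    """Proportion of truly deceptive subjects scored deceptive."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of truly truthful subjects scored truthful."""
    return true_neg / (true_neg + false_pos)

# Hypothetical verified sample: 40 deceptive and 40 truthful subjects.
tp, fn = 34, 6    # deceptive subjects scored deceptive / truthful
tn, fp = 22, 18   # truthful subjects scored truthful / deceptive

print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.85
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.55
```

Note that both figures require ground truth established independently of the test itself; when ground truth is itself selected by test outcomes (e.g., post-test confessions), the counts entering these formulas are biased.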

Finally, with regard to Iacono & Lykken's survey of scientific opinion on the polygraph, however inadequate you may think the information provided to respondents was, the fact remains that the great majority of survey respondents believed they had enough information to render an opinion on whether the CQT is based on scientifically sound psychological principles or theory. And only 36% of Society for Psychophysiological Research members and 30% of Division One fellows of the American Psychological Association thought it was.

If you genuinely believe that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis" and is instead attributable to "squabbling between ideological camps as to who's question format is better," well, more power to you, J.B. It appears to be a waste of my time and intellect to attempt to disabuse you of what seems to be a cherished delusion.
  

George W. Maschke
I am generally available in the chat room from 3 AM to 3 PM Eastern time.
Tel/SMS: 1-202-810-2105 (Please use Signal Private Messenger or WhatsApp to text or call.)
E-mail/iMessage/FaceTime: antipolygraph.org@protonmail.com
Wire: @ap_org
Threema: A4PYDD5S
Personal Statement: "Too Hot of a Potato"
 
jrjr2
New User
*
Offline



Posts: 4
Joined: Apr 18th, 2002
Re: The Scientific Validity of Polygraph
Reply #38 - Apr 20th, 2002 at 12:02pm
I did not lie about my drug use; I have never used them. I did not lie about selling drugs; I have never sold them. I am not nor have I ever been a member of a group whose purpose was the destruction of my country. I have never been contacted by a member of a non-U.S. government for the express purpose of selling secrets. I am most certainly not a traitor to my country, and yet your beloved polygraph has branded me so. My life has been ruined by that infernal machine, and for you to maintain that the polygraph has an acceptable accuracy rate makes me very angry. For my part, I don't care if the damned thing is 99% accurate, which it is not; it was wrong when it labelled me a drug-selling, dope-using traitor, and if it screwed me I can only imagine how many countless others it has harmed. The polygraph cannot and should not take the place of old-fashioned investigative work; it has no place in the preemployment process of the federal government.
  
 
J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #39 - Apr 21st, 2002 at 5:50am
George,


You wrote:

Quote:
The two studies you point to do not establish that CQT polygraphy works better than chance, nor can any sensitivity and specificity for the procedure be inferred from them. The matter of sampling bias introduced when confessions are used as criteria for ground truth is indeed significant.



The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer-review, then inferences can be made about the accuracy of the method by those results.  The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included. 

You wrote:

Quote:
As Lykken notes (p. 134), in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive.



You once again assert Lykken's opinion.  What is Raskin's opinion, whom you have admitted as a leading expert in CQT, on this study.  It appears to be additional evidence of conflicting opinion between two separate ideologies on question methodology.  You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?  Because the results turned out in the positive for the CQT.  "CQT-induced confession", now that is ludicrous.  Is Lykken suggesting that someone confesses because of the polygraph test question format used?  It would be interesting to see this assertion supported through research.

You wrote:

Quote:
Can sensitivity and specificity genuinely be determined for a procedure like CQT polygraphy that is both unspecifiable and lacking in control?



For one to say that a scientific method is unspecified, they must have definitive evidence of what other samples will produce the same results as the primary specificity.  An example of this would be the Marquis test when used to identify heroin.  In this test there are at least 50 other compounds that would produce the same result as the specified one.  Knowing what the specificity is, do you know of any other physiological response that would produce the same result during and after the asking and answering of a specific question?  I know you will probably attempt to assert countermeasures.  Countermeasures are not a physiological response but an attempt by one to produce a similar-looking response to alter the test outcome to the positive.   

You wrote:

Quote:
Are you prepared, at long last, to reveal to us to whom that sensitivity and specificity is known, and what precisely it is? And what peer-reviewed research established it? Again, the sensitivity and specificity of CQT polygraphy appears to be unknown to the U.S. Government, and as Gordon Barland, formerly of the DoDPI research division, wrote in that message thread, "...I know of no official government statistic regarding sensitivity and specificity."



The answer to this is that it is known in a controlled laboratory setting for research purposes, just like any other scientific method is established.  You are asking for produced statistics that are not of the criteria used in the establishment of a scientific method.  Gordon did not say that the specificity and sensitivity were not known.  He said they were not known in the context that you had given, within the field.  I have repeatedly answered this question to the point of your fixed inquiry, and you have acknowledged that specificity and sensitivity are not obtained in the setting you wish to place them in.

You wrote:

Quote:
Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.



Again, accuracy is set by the accepted results that have been obtained.  Regardless of how it is worded (I fully understand the difference between has been and has not been), when something has not been proven statistically better then a given percentage then it has been proven to be equal to or less then the specified percentage.  Knowing this, I have seen an abundance of accuracy rates that have obtained above chance accuracy rates, including the peer-reviewed field research you recently posted as support for your assertion, but not one that supports "has not been proven by peer-reviewed research to be more accurate then chance."

You wrote:

Quote:
Finally, with regard to Iacono & Lykken's survey of scientific opinion on the polygraph, however inadequate you may think the information provided to respondents was, the fact remains that the great majority of survey respondents believed they had enough information to render an opinion on whether the CQT is based on scientifically sound psychological principles or theory. And only 36% of Society for Psychophysiological Research members and 30% of Division One fellows of the American Psychological Association thought it was.



This is a redundant argument that we should just agree to disagree on.  This is simply a survey and has nothing to do with the scientific validity or, as I previously pointed out, the currently established  rules of evidence.
 
You wrote:

Quote:
If you genuinely believe that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis" and is instead attributable to "squabbling between ideological camps as to who's question format is better," well, more power to you, J.B. It appears to be a waste of my time and intellect to attempt to disabuse you of what seems to be a cherished delusion.



I agree this to be a moot point.  However, there are no delusions on my part.
  

Quam verum decipio nos
 
J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #40 - Apr 21st, 2002 at 6:12am

jrjr2 wrote on Apr 20th, 2002 at 12:02pm:
I did not lie about my drug use; I have never used them. I did not lie about selling drugs; I have never sold them. I am not nor have I ever been a member of a group whose purpose was the destruction of my country. I have never been contacted by a member of a non-U.S. government for the express purpose of selling secrets. I am most certainly not a traitor to my country, and yet your beloved polygraph has branded me so. My life has been ruined by that infernal machine, and for you to maintain that the polygraph has an acceptable accuracy rate makes me very angry. For my part, I don't care if the damned thing is 99% accurate, which it is not; it was wrong when it labelled me a drug-selling, dope-using traitor, and if it screwed me I can only imagine how many countless others it has harmed. The polygraph cannot and should not take the place of old-fashioned investigative work; it has no place in the preemployment process of the federal government.


akuma264666,

I have never advocated the use of polygraph in a pre-employment screening process the way it is currently used.  There is little scientific research in the use of polygraph for this purpose and none that is favorable, in my opinion.  I agree that nothing can nor should replace a thorough background investigation. 

My argument for polygraph being scientifically valid, as are all my arguments for polygraph, is in specific-issue criminal testing.  This use is where the majority of scientific research is done and where polygraph has been shown to have high validity.
  

 
George W. Maschke
Global Moderator
*****
Online


Make-believe science yields
make-believe security.

Posts: 6139
Location: The Hague, The Netherlands
Joined: Sep 29th, 2000
Re: The Scientific Validity of Polygraph
Reply #41 - Apr 21st, 2002 at 10:50am
J.B.,

You wrote:

Quote:
The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer-review, then inferences can be made about the accuracy of the method by those results.  The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.


I'm not sure what your point is. Are you arguing that sampling bias is not a significant factor in the peer-reviewed field validity studies by Patrick & Iacono and Honts?

Quote:
You once again assert Lykken's opinion.  What is Raskin's opinion, whom you have admitted as a leading expert in CQT, on this study.  It appears to be additional evidence of conflicting opinion between two separate ideologies on question methodology.  You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?  Because the results turned out in the positive for the CQT.  "CQT-induced confession", now that is ludicrous.  Is Lykken suggesting that someone confesses because of the polygraph test question format used?  It would be interesting to see this assertion supported through research.


No, J.B., I did not "once again assert Lykken's opinion." I referred to an inconvenient (for polygraph proponents) fact that Lykken has pointed out: "in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive."

Your suggestion that Lykken's discussion of Patrick & Iacono's study amounts to a refutation of it is evidence that you haven't read Patrick & Iacono's study. If you had, you would know that Lykken's observations on the matter of sampling bias are entirely consistent with the conclusions drawn by Patrick & Iacono, which are implicit in the title of their article, "Validity of the control question polygraph test: The problem of sampling bias." (Journal of Applied Psychology, 76, 229-238)

In response to my following remarks:

Quote:
Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.


you replied:

Quote:
Again, accuracy is set by the accepted results that have been obtained.  Regardless of how it is worded (I fully understand the difference between has been and has not been), when something has not been proven statistically better then a given percentage then it has been proven to be equal to or less then the specified percentage.  Knowing this, I have seen an abundance of accuracy rates that have obtained above chance accuracy rates, including the peer-reviewed field research you recently posted as support for your assertion, but not one that supports "has not been proven by peer-reviewed research to be more accurate then chance."


Your reasoning that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is a logical fallacy of the argument to ignorance (argumentum ad ignorantiam) variety.

That something has not been proven to work better than chance does not mean that it has been proven to work no better than chance. If you cannot grasp this elementary concept, then my further debating with you the topics you proposed to discuss when you started this message thread is pointless, really.
  

 
J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #42 - Apr 23rd, 2002 at 7:31am
George,

The first response (highlighted to show emphasis of the point) I gave in my last post;

Quote:
The accuracy of any given method is established by the "obtained" accuracy results.  If a study is accepted through peer-review, then inferences can be made about the accuracy of the method by those results.  The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.



If the sampling bias was so significant, I would think the research would not have been accepted for publication after peer-review.  Lykken et al. have criticized the use of a confession-based criterion in the past.  However, it appears it was still used in the Patrick & Iacono study, minus the one case.  In the real world, it is difficult to establish this criterion.  Again, I don't feel that this is the best method, but it is an acceptable method if it follows strict guidelines such as the ones I posted prior. 

You wrote:
Quote:
No, J.B., I did not "once again assert Lykken's opinion." I referred to an inconvenient (for polygraph proponents) fact that Lykken has pointed out: "in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive."



I don't see how this is 'inconvenient (for polygraph proponents)'.  How do you propose it is?

You wrote:
Quote:
Your suggestion that Lykken's discussion of Patrick & Iacono's study amounts to a refutation of it is evidence that you haven't read Patrick & Iacono's study. If you had, you would know that Lykken's observations on the matter of sampling bias are entirely consistent with the conclusions drawn by Patrick & Iacono, which are implicit in the title of their article, "Validity of the control question polygraph test: The problem of sampling bias." (Journal of Applied Psychology, 76, 229-238)



I never once indicated that Lykken's view differed from that of Patrick & Iacono.  You have a right to your opinion.  Again, they have criticized sampling bias based on this criterion previously but still used it for their study.  If it is such a bad method, then why did they not use a different one?  I can't help but wonder what the comments, or lack thereof, may have been if the 'obtained' accuracy results had been less favorable.


You replied to my previous post about statistical accuracy:
Quote:
Your reasoning that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is a logical fallacy of the argument to ignorance (argumentum ad ignorantiam) variety.

That something has not been proven to work better than chance does not mean that it has been proven to work no better than chance. If you cannot grasp this elementary concept, then my further debating with you the topics you proposed to discuss when you started this message thread is pointless, really.



This is not a 'logical fallacy'-"The argument to ignorance is a logical fallacy of irrelevance occurring when one claims that something is true only because it hasn't been proved false, or that something is false only because it has not been proved true."  It is a 'contradictory claim'- "A claim is proved true if its contradictory is proved false, and vice-versa."  I am saying that there is proof, found in the four studies you illustrated as support for your statement, that polygraph has been proven to work better then chance in peer-reviewed field research.  By definition of this argument, my claim has been proven true and unless you can provide refuting evidence that your assertion is true then yours is false.  If you could provide contrary evidence to support your assertion,  then my claim would be a 'contrary claim' and not a 'logical fallacy'.

  

 
George W. Maschke
Global Moderator
*****
Online


Make-believe science yields
make-believe security.

Posts: 6139
Location: The Hague, The Netherlands
Joined: Sep 29th, 2000
Re: The Scientific Validity of Polygraph
Reply #43 - Apr 23rd, 2002 at 9:16am
J.B.,

You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.

With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"

You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).

Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument seems to be essentially an argument to authority (argumentum ad verecundiam), suggesting that the results obtained in these four studies must prove that the CQT works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.

For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.
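Lykken's sampling-bias point can be made concrete with a toy simulation (all parameters invented for illustration): even a chart-scoring procedure with exactly chance accuracy will appear perfectly sensitive when guilty cases enter the verified sample only through confessions that follow a failed chart.

```python
import random

random.seed(1)

# Illustrative model, not data: charts are scored deceptive at
# random, i.e., the "test" has exactly chance accuracy.
N = 10_000
verified_guilty_scores = []
for _ in range(N):
    scored_deceptive = random.random() < 0.5   # chance-level scoring
    # A guilty case enters the verified sample only via a post-test
    # confession, which presupposes a failed (deceptive) chart.
    confessed = scored_deceptive and random.random() < 0.4
    if confessed:
        verified_guilty_scores.append(scored_deceptive)

apparent_sensitivity = sum(verified_guilty_scores) / len(verified_guilty_scores)
print(apparent_sensitivity)  # 1.0: a chance procedure looks perfect
```

The verified-innocent side is biased analogously, since innocent suspects are confirmed only when an alternative suspect in the same case fails and confesses.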

Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.
  

 
J.B. McCloughan
Very Senior User
****
Offline



Posts: 115
Location: USA
Joined: Dec 7th, 2001
Gender: Male
Re: The Scientific Validity of Polygraph
Reply #44 - Apr 26th, 2002 at 6:41am
George,

You wrote:
Quote:
You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.



For you and/or anyone else to say that the sampling bias is and/or was significant, the estimated number of cases/samples excluded would need to be established.  Although this measurement is nearly impossible in some applications, for example large census polls, it can be established in this particular research method.  However, the problem you propose is not a sampling bias per se but a potential measurement bias.  From: http://personalpages.geneseo.edu/~socl212/biaserror.html

Quote:
sample statistic = population parameter ± bias ± sampling error
· bias is systematic and each instance tends to push the statistic away from the parameter in a specific direction
    · Sampling bias
        · non-probability sample
        · inadequate sampling frame that fails to cover the population
        · non-response
        · the relevant concept is generalizability
    · Measurement bias
        · response bias (question wording, context, interviewer effects, etc.)
        · the relevant concept is measurement validity (content validity, criterion validity, construct validity, etc.)
    · there is no simple indicator of bias since there are many kinds of bias that act in quite different ways
· sampling error is random and does not push the statistic away in a specific direction
    · the standard error is an estimate of the size of the sampling error
    · a 95% confidence margin of error of ± 3 percentage points refers ONLY to sampling error, i.e., only to the error due to random sampling; all other error comes under the heading of bias
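The decomposition quoted above can be sketched in a few lines of simulation (every number here is invented for illustration): averaging over many samples shrinks sampling error toward zero, but a systematic bias shifts every estimate in the same direction and never averages away.

```python
import random
import statistics

random.seed(0)

PARAM = 0.50   # hypothetical true population parameter
BIAS = 0.15    # hypothetical systematic bias from a non-representative frame

# Each replication draws a sample whose expected value is PARAM + BIAS;
# the scatter of individual estimates around that shifted value is
# sampling error, which shrinks as replications are averaged.
estimates = []
for _ in range(200):
    sample = [1 if random.random() < PARAM + BIAS else 0 for _ in range(500)]
    estimates.append(sum(sample) / len(sample))

# The average lands near PARAM + BIAS (0.65), not PARAM (0.50):
# more data reduces sampling error but leaves the bias intact.
print(round(statistics.mean(estimates), 2))
```

This is why a systematically selected sample cannot be repaired simply by collecting more cases.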



Here is a more definitive explanation of criterion sampling. From: http://trochim.human.cornell.edu/tutorial/mugo/tutorial.htm

Quote:
PURPOSEFUL SAMPLING
Purposeful sampling selects information rich cases for indepth study. Size and specific cases depend on the study purpose.
There are about 16 different types of purposeful sampling. They are briefly described below for you to be aware of them. The details can be found in Patton(1990)Pg 169-186.

Criterion sampling Here, you set a criteria and pick all cases that meet that criteria for example, all ladies six feet tall, all white cars, all farmers that have planted onions. This method of sampling is very strong in quality assurance.
References

Patton, M.Q.(1990). Qualitative evaluation and research methods. SAGE Publications. Newbury Park London New Delhi



You wrote:
Quote:
With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"



If you had included my entire explanation of this, one can plainly see I didn't say that Lykken's 'view differed' from that of Patrick & Iacono.

I wrote:
Quote:
You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp? Because the results turned out in the positive for the CQT.



Lykken refuted the high validity results obtained by the study.  This has nothing to do with Patrick's and/or Iacono's views on the CQT question method, or with whether Patrick and/or Iacono themselves refuted the validity results obtained.  This is a percentage obtained from the collected and processed data of the research study. 

You wrote:
Quote:
 
You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).



An example of a 'logical fallacy' would be if you had stated that polygraph has never been proven to work so it does not, and I countered that it has never been proven to not work so it does.  One of your errors in using this term is that you have attached a statistical percentage, 'chance', to your assertion, which makes it definitive and not speculative.  Even if your definitive suggestion were subjected to this definition of 'logical fallacy' with the percentage included, my assertion still does not meet the definition of a 'logical fallacy'.  I would have had to state that when something has not been proven statistically better than a given percentage, then it has been proven to be equal to or (greater) than the specified percentage.  Furthermore, there would need to be an established general knowledge that neither has been proven true.  I have already explained this error to you and provided you with the full definition of a 'logical fallacy' and the true definition of this argument, 'contradictory claim', from the source that you used.  Again, you attempt to play on words by segmenting statements I have made.  I fully understand the definition and have taken the time to explain it to you.  I followed this statement with explanations and with data conflicting with your assertion, which support my assertion that polygraph has been proven to work at above chance in peer-reviewed field research. 

I wrote in support of the assertion:
Quote:
'..contradictory claim'- "A claim is proved true if its contradictory is proved false, and vice-versa." I am saying that there is proof, found in the four studies you illustrated as support for your statement, that polygraph has been proven to work better then chance in peer-reviewed field research. By definition of this argument, my claim has been proven true and unless you can provide refuting evidence that your assertion is true then yours is false. If you could provide contrary evidence to support your assertion, then my claim would be a 'contrary claim' and not a 'logical fallacy'.



You wrote:
Quote:
Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument seems to be essentially an argument to authority (argumentum ad verecundiam), suggesting that the results obtained in these four studies must prove CQT validity works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.

For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.

Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.



You are correct that my assertion is that these four studies provide proof of above-chance validity. You are asserting a conflicting view, as is your right, but one that has no support for its assertion and is thus merely a lay opinion. Again, by the definition of a 'contradictory claim', my assertion is true because it has been proven, and your view has therefore been proven false because it has not.

My assertion is not an 'argument to authority (argumentum ad verecundiam)'. The data speak for themselves and have been accepted. Just because Lykken et al. dispute the obtained accuracy results does not mean that the results were not accepted. Your assertion that the results are unacceptable is itself the fallacy of ad verecundiam, since Lykken et al. are biased towards one side of the issue.

From: http://gncurtis.home.texas.net/authorit.html
Quote:
Not all arguments from expert opinion are fallacious, and for this reason some authorities on logic have taken to labelling this fallacy as "appeal to false authority" or "argument from questionable  authority". For the same reason, I will use the traditional Latin tag "ad verecundiam" to distinguish fallacious from non-fallacious arguments from authority.

    We must often rely upon expert opinion when drawing conclusions about technical matters where we lack the time or expertise to form an informed opinion. For instance, those of us who are not physicians usually rely upon those who are when making medical decisions, and we are not wrong to do so. There are, however, four major ways in which such arguments can go wrong:

      1. An appeal to authority may be inappropriate in a couple of ways:

         A. It is unnecessary. If a question can be answered by observation or calculation, an argument from authority is not needed. Since arguments from authority are weaker than more direct evidence, go look or figure it out for yourself.

         The renaissance rebellion against the authority of Aristotle and the Bible played an important role in the scientific revolution. Aristotle was so respected in the Middle Ages that his word was taken on empirical issues which were easily decidable by observation. The scientific revolution moved away from this over-reliance on authority towards the use of observation and experiment.

         Similarly, the Bible has been invoked as an authority on empirical or mathematical questions. A particularly amusing example is the claim that the value of pi can be determined to be 3 based on certain passages in the Old Testament. The value of pi, however, is a mathematical question which can be answered by calculation, and appeal to authority is irrelevant.

         B. It is impossible. About some issues there simply is no expert opinion, and an appeal to authority is bound to commit the next type of mistake. For example, many self-help books are written every year by self-proclaimed "experts" on matters for which there is no expertise.

      2. The "authority" cited is not an expert on the issue, that is, the person who supplies the opinion is not an expert at all, or is one, but in an unrelated area. The now-classic example is the old television commercial which began: "I'm not a doctor, but I play one on TV...." The actor then proceeded to recommend a brand of medicine.

      3. The authority is an expert, but is not disinterested. That is, the expert is biased towards one side of the issue, and his opinion is thereby untrustworthy.

         For example, suppose that a medical scientist testifies that ambient cigarette smoke does not pose a hazard to the health of non-smokers exposed to it. Suppose, further, that it turns out that the scientist is an employee of a cigarette company. Clearly, the scientist has a powerful bias in favor of the position that he is taking which calls into question his objectivity.

         There is an old saying: "A doctor who treats himself has a fool for a patient." There is also a version for attorneys: "A lawyer who defends himself has a fool for a client." Why should these be true if the doctor or lawyer is an expert on medicine or the law? The answer is that we are all biased in our own causes. A physician who tries to diagnose his own illness is more likely to make a mistake out of wishful thinking, or out of fear, than another physician would be.

      4. While the authority is an expert, his opinion is unrepresentative of expert opinion on the subject. The fact is that if one looks hard enough, it is possible to find an expert who supports virtually any position that one wishes to take. "Such is human perversity", to quote Lewis Carroll. This is a great boon for debaters, who can easily find expert opinion on their side of a question, whatever that side is, but it is confusing for those of us listening to debates and trying to form an opinion.

         Experts are human beings, after all, and human beings err, even in their area of expertise. This is one reason why it is a good idea to get a second opinion about major medical matters, and even a third if the first two disagree. While most people understand the sense behind seeking a second opinion when their life or health is at stake, they are frequently willing to accept a single, unrepresentative opinion on other matters, especially when that opinion agrees with their own bias.

         Bias (problem 3) is one source of unrepresentativeness. For instance, the opinions of cigarette company scientists tend to be unrepresentative of expert opinion on the health consequences of smoking because they are biased to minimize such consequences. For the general problem of judging the opinion of a population based upon a sample, see the Fallacy of Unrepresentative Sample.

    To sum up these points in a positive manner, before relying upon expert opinion, go through the following checklist:

       * Is this a matter which I can decide without appeal to expert opinion? If the answer is "yes", then do so. If "no", go to the next question:

       * Is this a matter upon which expert opinion is available? If not, then your opinion will be as good as anyone else's. If so, proceed to the next question:

       * Is the authority an expert on the matter? If not, then why listen? If so, go on:

       * Is the authority biased towards one side? If so, the authority may be untrustworthy. At the very least, before accepting the authority's word seek a second, unbiased opinion. That is, go to the last question:

       * Is the authority's opinion representative of expert opinion? If not, then find out what the expert consensus is and rely on that. If so, then you may rationally rely upon the authority's opinion.

    If an argument to authority cannot pass these five tests, then it commits the fallacy of Ad Verecundiam.

Resources:

       * James Bachman, "Appeal to Authority", in Fallacies: Classical and Contemporary Readings, edited by Hans V. Hanson and Robert C. Pinto (Penn State Press, 1995), pp. 274-286.

       * Appeal to Authority, entry from philosopher Robert Todd Carroll's Skeptic's Dictionary.




You say you disagree with the statistical data of the four field research studies, yet you used them as support for your assertion. I will reiterate: this is not, by definition, a sampling bias per se. Any bias that may or may not have been created was due to criterion selection (measurement bias). Anyone can say that something caused error. In the statistical realm, however, one must provide reasoning and deduction from the data that support the view for it to be a valid assertion. More importantly, for one to assert that the statistical data contain criterion/measurement-based bias, there would need to be a result attributable to an external and/or internal variable in relationship to the criterion. The fact that the confession criterion was used does not in itself create a per se bias.

Here is a hypothetical instance of how a confession-based criterion research study might produce a criterion/measurement bias. Confession is used as the criterion for selecting deceptive polygraph chart data because it is a means of confirming the results. In conducting the study, it is found that the original polygraph examiners' decisions were made after the post-test interview and were based on the confessions obtained. Since the original decision and the selection criterion are the same, one cannot separate this variable from the original examiners' decisions, nor use another source, independent of the confession, on which to base the original examiners' decisions about the polygraph chart data. The criterion-based selection method thus causes an unknown degree of bias in the accuracy rate obtained from the deceptive cases reviewed.
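
That circularity can be illustrated with a small simulation. This is a minimal sketch under stated assumptions only: the number of subjects, the true chart accuracy, and the confession rate below are all hypothetical values chosen for illustration, not figures drawn from any actual study.

```python
import random

random.seed(42)

N = 10_000              # hypothetical guilty subjects
CHART_ACCURACY = 0.70   # assumed chance an exam correctly scores a guilty subject deceptive
CONFESS_RATE = 0.40     # assumed chance a subject confesses under post-test interrogation

scored_deceptive = 0
confirmed_cases = []    # cases entering the study via the confession criterion

for _ in range(N):
    deceptive_call = random.random() < CHART_ACCURACY
    if deceptive_call:
        scored_deceptive += 1
        # Post-test interrogation occurs only after a deceptive call, so a
        # confession can only ever confirm a call that was already "deceptive".
        if random.random() < CONFESS_RATE:
            confirmed_cases.append(deceptive_call)

true_accuracy = scored_deceptive / N
study_accuracy = sum(confirmed_cases) / len(confirmed_cases)

print(f"true accuracy on guilty subjects: {true_accuracy:.2f}")
print(f"apparent accuracy among confession-confirmed cases: {study_accuracy:.2f}")
```

Because selection and decision are the same event, the confession-confirmed sample shows perfect accuracy regardless of the true rate, which is exactly the unknown degree of overestimation described above.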

I agree that chance is not always 50/50. If the accuracy results obtained for deceptive subjects, based on the original examiners' decisions, have only two possible outcomes (a correct or an incorrect original decision), then the original decision had a 50 percent chance of producing either result. If you can point to another available decision and/or a reason that 50/50 is not the chance level of these studies, I would be willing to discuss that.
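
Under that two-outcome assumption, whether an observed accuracy rate exceeds chance reduces to a simple binomial (sign-test) calculation. The sketch below shows the arithmetic; the hit count and sample size in the example are hypothetical, not figures from any of the four studies.

```python
from math import comb

def p_value_above_chance(hits: int, n: int, p: float = 0.5) -> float:
    """Probability of observing `hits` or more correct decisions out of `n`
    if each decision were an independent coin flip at chance level `p`."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(hits, n + 1))

# Hypothetical example: 80 correct calls out of 100 deceptive cases
print(f"P(>= 80 correct of 100 | chance = 0.5) = {p_value_above_chance(80, 100):.2e}")
```

A result that improbable under the 50/50 model would be evidence of above-chance detection; the calculation changes, of course, if one argues for a different chance level `p`.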

I respect Drew's views as a scientist but disagree with him on the scientific definitions we debated. My definitions come from the manuals of other accepted scientific disciplines; these manuals and their contents are nationally reviewed and accredited. I feel that the structures presented within CQT polygraphy meet these definitions; Drew does not. I agree to disagree with him on this issue.
  

Quam verum decipio nos