Quote
You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.
Quote
sample statistic = population parameter ± bias ± sampling error
· bias is systematic and each instance tends to push the statistic away from the parameter in a specific direction
· Sampling bias
· non-probability sample
· inadequate sampling frame that fails to cover the population
· non-response
· the relevant concept is generalizability
· Measurement bias
· response bias (question wording, context, interviewer effects, etc.)
· the relevant concept is measurement validity (content validity, criterion validity, construct validity, etc.)
· there is no simple indicator of bias since there are many kinds of bias that act in quite different ways
· sampling error is random and does not push the statistic away in a specific direction
· the standard error is an estimate of the size of the sampling error
· a 95% confidence margin of error of ± 3 percentage points refers ONLY to sampling error, i.e., only to the error due to random sampling, all other error comes under the heading of bias
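As a concrete illustration of that last point, the familiar ± 3 percentage point figure follows from the standard error of a sample proportion under simple random sampling. A minimal sketch (the z = 1.96 multiplier gives the 95% level; this covers only random sampling error, not any of the biases listed above):

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (random sampling error only)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of roughly 1,067 respondents with p_hat near 0.5 yields the
# familiar +/- 3 percentage points.
moe = margin_of_error(0.5, 1067)
print(f"{moe:.3f}")  # 0.030, i.e., +/- 3 percentage points
```

Note that quadrupling the sample size only halves the margin of error, and that no increase in sample size reduces bias.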
Quote
PURPOSEFUL SAMPLING
Purposeful sampling selects information-rich cases for in-depth study. Sample size and the specific cases selected depend on the purpose of the study.
There are about 16 different types of purposeful sampling. They are briefly described below for your awareness; the details can be found in Patton (1990), pp. 169-186.
Criterion sampling: here, you set a criterion and pick all cases that meet it, for example, all ladies six feet tall, all white cars, all farmers who have planted onions. This method of sampling is very strong in quality assurance.
References
Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: SAGE Publications.
Quote
With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"
Quote
You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp? Because the results turned out in the positive for the CQT.
Quote
You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).
Quote
'...contradictory claim' - "A claim is proved true if its contradictory is proved false, and vice-versa." I am saying that there is proof, found in the four studies you illustrated as support for your statement, that polygraph has been proven to work better then chance in peer-reviewed field research. By definition of this argument, my claim has been proven true and unless you can provide refuting evidence that your assertion is true then yours is false. If you could provide contrary evidence to support your assertion, then my claim would be a 'contrary claim' and not a 'logical fallacy'.
Quote
Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument is essentially an argument to authority (argumentum ad verecundiam): that the results obtained in these four studies must prove that the CQT works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.
For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.
Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.
Quote
Not all arguments from expert opinion are fallacious, and for this reason some authorities on logic have taken to labelling this fallacy as "appeal to false authority" or "argument from questionable authority". For the same reason, I will use the traditional Latin tag "ad verecundiam" to distinguish fallacious from non-fallacious arguments from authority.
We must often rely upon expert opinion when drawing conclusions about technical matters where we lack the time or expertise to form an informed opinion. For instance, those of us who are not physicians usually rely upon those who are when making medical decisions, and we are not wrong to do so. There are, however, four major ways in which such arguments can go wrong:
1. An appeal to authority may be inappropriate in a couple of ways:
A. It is unnecessary. If a question can be answered by observation or calculation, an argument from authority is not needed. Since arguments from authority are weaker than more direct evidence, go look or figure it out for yourself.
The Renaissance rebellion against the authority of Aristotle and the Bible played an important role in the scientific revolution. Aristotle was so respected in the Middle Ages that his word was taken on empirical issues which were easily decidable by observation. The scientific revolution moved away from this over-reliance on authority towards the use of observation and experiment.
Similarly, the Bible has been invoked as an authority on empirical or mathematical questions. A particularly amusing example is the claim that the value of pi can be determined to be 3 based on certain passages in the Old Testament. The value of pi, however, is a mathematical question which can be answered by calculation, and appeal to authority is irrelevant.
B. It is impossible. About some issues there simply is no expert opinion, and an appeal to authority is bound to commit the next type of mistake. For example, many self-help books are written every year by self-proclaimed "experts" on matters for which there is no expertise.
2. The "authority" cited is not an expert on the issue, that is, the person who supplies the opinion is not an expert at all, or is one, but in an unrelated area. The now-classic example is the old television commercial which began: "I'm not a doctor, but I play one on TV...." The actor then proceeded to recommend a brand of medicine.
3. The authority is an expert, but is not disinterested. That is, the expert is biased towards one side of the issue, and his opinion is thereby untrustworthy.
For example, suppose that a medical scientist testifies that ambient cigarette smoke does not pose a hazard to the health of non-smokers exposed to it. Suppose, further, that it turns out that the scientist is an employee of a cigarette company. Clearly, the scientist has a powerful bias in favor of the position that he is taking which calls into question his objectivity.
There is an old saying: "A doctor who treats himself has a fool for a patient." There is also a version for attorneys: "A lawyer who defends himself has a fool for a client." Why should these be true if the doctor or lawyer is an expert on medicine or the law? The answer is that we are all biased in our own causes. A physician who tries to diagnose his own illness is more likely to make a mistake out of wishful thinking, or out of fear, than another physician would be.
4. While the authority is an expert, his opinion is unrepresentative of expert opinion on the subject. The fact is that if one looks hard enough, it is possible to find an expert who supports virtually any position that one wishes to take. "Such is human perversity", to quote Lewis Carroll. This is a great boon for debaters, who can easily find expert opinion on their side of a question, whatever that side is, but it is confusing for those of us listening to debates and trying to form an opinion.
Experts are human beings, after all, and human beings err, even in their area of expertise. This is one reason why it is a good idea to get a second opinion about major medical matters, and even a third if the first two disagree. While most people understand the sense behind seeking a second opinion when their life or health is at stake, they are frequently willing to accept a single, unrepresentative opinion on other matters, especially when that opinion agrees with their own bias.
Bias (problem 3) is one source of unrepresentativeness. For instance, the opinions of cigarette company scientists tend to be unrepresentative of expert opinion on the health consequences of smoking because they are biased to minimize such consequences. For the general problem of judging the opinion of a population based upon a sample, see the Fallacy of Unrepresentative Sample.
To sum up these points in a positive manner, before relying upon expert opinion, go through the following checklist:
* Is this a matter which I can decide without appeal to expert opinion? If the answer is "yes", then do so. If "no", go to the next question:
* Is this a matter upon which expert opinion is available? If not, then your opinion will be as good as anyone else's. If so, proceed to the next question:
* Is the authority an expert on the matter? If not, then why listen? If so, go on:
* Is the authority biased towards one side? If so, the authority may be untrustworthy. At the very least, before accepting the authority's word seek a second, unbiased opinion. That is, go to the last question:
* Is the authority's opinion representative of expert opinion? If not, then find out what the expert consensus is and rely on that. If so, then you may rationally rely upon the authority's opinion.
If an argument to authority cannot pass these five tests, then it commits the fallacy of Ad Verecundiam.
Resources:
* James Bachman, "Appeal to Authority", in Fallacies: Classical and Contemporary Readings, edited by Hans V. Hanson and Robert C. Pinto (Penn State Press, 1995), pp. 274-286.
* Appeal to Authority, entry from philosopher Robert Todd Carroll's Skeptic's Dictionary.
Quote
The accuracy of any given method is established by the "obtained" accuracy results. If a study is accepted through peer review, then inferences can be made about the accuracy of the method from those results. The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.
Quote
No, J.B., I did not "once again assert Lykken's opinion." I referred to an inconvenient (for polygraph proponents) fact that Lykken has pointed out: "in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive."
Quote
Your suggestion that Lykken's discussion of Patrick & Iacono's study amounts to a refutation of it is evidence that you haven't read Patrick & Iacono's study. If you had, you would know that Lykken's observations on the matter of sampling bias are entirely consistent with the conclusions drawn by Patrick & Iacono, which are implicit in the title of their article, "Validity of the control question polygraph test: The problem of sampling bias." (Journal of Applied Psychology, 76, 229-238)
Quote
Your reasoning that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is a logical fallacy of the argument to ignorance (argumentum ad ignorantiam) variety.
That something has not been proven to work better than chance does not mean that it has been proven to work no better than chance. If you cannot grasp this elementary concept, then my further debating with you the topics you proposed to discuss when you started this message thread is pointless, really.
Quote
The accuracy of any given method is established by the "obtained" accuracy results. If a study is accepted through peer review, then inferences can be made about the accuracy of the method from those results. The bias you speak of is directly dependent on the number of cases that were discriminated against because they lacked the criterion to be included.
Quote
You once again assert Lykken's opinion. What is the opinion of Raskin, whom you have admitted is a leading expert in CQT, on this study? It appears to be additional evidence of conflicting opinion between two separate ideologies on question methodology. You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp? Because the results turned out in the positive for the CQT. "CQT-induced confession", now that is ludicrous. Is Lykken suggesting that someone confesses because of the polygraph test question format used? It would be interesting to see this assertion supported through research.
Quote
Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.
Quote
Again, accuracy is set by the accepted results that have been obtained. Regardless of how it is worded (I fully understand the difference between has been and has not been), when something has not been proven statistically better then a given percentage then it has been proven to be equal to or less then the specified percentage. Knowing this, I have seen an abundance of accuracy rates that have obtained above chance accuracy rates, including the peer-reviewed field research you recently posted as support for your assertion, but not one that supports "has not been proven by peer-reviewed research to be more accurate then chance."
Quote from: akuma264666 on Apr 20, 2002, 08:02 AM
I did not lie about my drug use; I have never used them. I did not lie about selling drugs; I have never sold them. I am not, nor have I ever been, a member of a group whose purpose was the destruction of my country. I have never been contacted by a member of a non-U.S. government for the express purpose of selling secrets. I am most certainly not a traitor to my country, and yet your beloved polygraph has branded me so. My life has been ruined by that infernal machine, and for you to maintain that the polygraph has an acceptable accuracy rate makes me very angry. For my position, I don't care if the damned thing is 99% accurate, which it is not; it was wrong when it labelled me a drug-selling, dope-using traitor, and if it screwed me, I can only imagine how many countless others it has harmed. The polygraph cannot and should not take the place of old-fashioned investigative work; it has no place in the pre-employment process of the federal government.
Quote
The two studies you point to do not establish that CQT polygraphy works better than chance, nor can any sensitivity and specificity for the procedure be inferred from them. The matter of sampling bias introduced when confessions are used as criteria for ground truth is indeed significant.
Quote
As Lykken notes (p. 134), in Patrick & Iacono's study, only one of the 49 guilty subjects could be confirmed as guilty independently of a CQT-induced confession, and his charts were classified as inconclusive.
Quote
Can sensitivity and specificity genuinely be determined for a procedure like CQT polygraphy that is both unspecifiable and lacking in control?
Quote
Are you prepared, at long last, to reveal to us to whom that sensitivity and specificity are known, and what precisely they are? And what peer-reviewed research established them? Again, the sensitivity and specificity of CQT polygraphy appear to be unknown to the U.S. Government, and as Gordon Barland, formerly of the DoDPI research division, wrote in that message thread, "...I know of no official government statistic regarding sensitivity and specificity."
Quote
Again, J.B., my argument is not that CQT polygraphy has been proven by peer-reviewed research to have "chance accuracy," but rather that it has not been proven by peer-reviewed research to be more accurate than chance. There is a significant distinction between the two that still seems to elude you, but I'm not sure how to make the distinction any clearer. No accuracy rate (sensitivity or specificity) can be determined for CQT polygraphy based on the available peer-reviewed field research.
Quote
Finally, with regard to Iacono & Lykken's survey of scientific opinion on the polygraph, however inadequate you may think the information provided to respondents was, the fact remains that the great majority of survey respondents believed they had enough information to render an opinion on whether the CQT is based on scientifically sound psychological principles or theory. And only 36% of Society for Psychophysiological Research members and 30% of Division One fellows of the American Psychological Association thought it was.
Quote
If you genuinely believe that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis" and is instead attributable to "squabbling between ideological camps as to whose question format is better," well, more power to you, J.B. It appears to be a waste of my time and intellect to attempt to disabuse you of what seems to be a cherished delusion.
Quote
Confession-based criteria are a dependable means of establishing ground truth if the definition of confession is well defined, adhered to, and the examiners' decision based on the polygraph is pre-confession.
Quote
How Polygraph-Induced Confessions Mislead Polygraphers
It is standard practice for police polygraphers to interrogate a suspect who has failed the lie test. They tell him that the impartial, scientific polygraph has demonstrated his guilt, that no one now will believe his denials, and that his most sensible action at this point would be to confess and try to negotiate the best terms that he can. This is strong stuff, and what the examiner says to the suspect is especially convincing and effective because the examiner genuinely believes it himself. Police experience in the United States suggests that as many as 40% of interrogated suspects do actually confess in this situation. And these confessions provide virtually the only feedback of "ground truth" or criterion data that is ever available to a polygraph examiner.
If a suspect passes the polygraph test, he will not be interrogated because the examiner firmly believes he has been truthful. Suspects who are not interrogated do not confess, of course. This means that the only criterion data that are systematically sought--and occasionally obtained--are confessions by people who have failed the polygraph, confessions that are guaranteed to corroborate the tests that elicited those confessions. The examiner almost never discovers that a suspect he diagnosed as truthful was in fact deceptive, because that bad news is excluded by his dependence on immediate confessions for verification. Moreover, these periodic confessions provide a diet of consistently good news that confirms the examiner's belief that the lie test is nearly infallible. Note that the examiner's client or employer also hears about these same confessions and is also protected from learning about most of the polygrapher's mistakes.
Sometimes a confession can verify, not only the test that produced it, but also a previous test that resulted in a diagnosis of truthful. This can happen when there is more than one suspect in the same crime, so that the confession of one person reveals that the alternative suspect must be innocent. Once again, however, the examiner is usually protected from learning when he has made an error. If the suspect who was tested first is diagnosed as deceptive, then the alternative suspect--who might be the guilty one--is seldom tested at all because the examiner believes that the case was solved by that first failed test. This means that only rarely does a confession prove that someone who has already failed his test is actually innocent.
Therefore, when a confession allows us to evaluate the accuracy of the test given to a person cleared by that confession, then once again the news will almost always be good news; that innocent suspect will be found to have passed his lie test, because if the first suspect had not passed the test, the second person would not have been tested and would not have confessed.[endnote omitted]
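Lykken's selection mechanism can be made concrete with a toy Monte Carlo simulation. All of the numbers below (50% base rate of guilt, 70% true test accuracy, 40% confession rate) are illustrative assumptions, not estimates from any study; the point is only that the confession-verified subsample corroborates the test by construction, whatever the true accuracy is:

```python
import random

def simulate(n_suspects=10_000, true_accuracy=0.7, confess_rate=0.4, seed=1):
    """Toy sketch of confession-based verification.

    Illustrative assumptions: half the suspects are guilty, the test calls
    each suspect correctly with probability `true_accuracy`, and only
    suspects who FAIL the test are interrogated; `confess_rate` of the
    interrogated guilty then confess.  Returns (actual accuracy over all
    suspects, accuracy within the confession-verified subsample).
    """
    rng = random.Random(seed)
    correct_overall = 0
    confirmed = 0           # cases later verified by a confession
    confirmed_correct = 0
    for _ in range(n_suspects):
        guilty = rng.random() < 0.5
        called_deceptive = guilty if rng.random() < true_accuracy else not guilty
        correct_overall += (called_deceptive == guilty)
        # Only failed tests lead to interrogation, and only the guilty confess,
        # so every confirmed case corroborates the chart that produced it.
        if called_deceptive and guilty and rng.random() < confess_rate:
            confirmed += 1
            confirmed_correct += 1
    return correct_overall / n_suspects, confirmed_correct / max(confirmed, 1)

actual, apparent = simulate()
print(f"actual accuracy:            {actual:.2f}")   # near the assumed 0.70
print(f"confession-verified sample: {apparent:.2f}")  # 1.00 by construction
```

However low `true_accuracy` is set, the confession-verified feedback reaching the examiner remains 100% corroborating, which is Lykken's point.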
Quote
The recent study by Honts illustrates that publication in a refereed journal is no guarantee of scientific respectability. The meticulous study by Patrick and Iacono was done with the cooperation of the Royal Canadian Mounted Police (RCMP) in Vancouver, B.C., and showed that nearly half of the suspects later shown to be innocent were diagnosed as deceptive by the RCMP polygraphers. This prompted the Canadian Police College to contract with Honts, once of the Raskin group, to conduct another study. A polygraphy instructor at the college sent Honts charts from tests administered to seven suspects who had confessed after failing the CQT and also charts of six suspects confirmed to be innocent by these confessions of alternative suspects in the same crimes. Knowing which were which, Honts then proceeded to rescore the charts, using the same scoring rules employed by the RCMP examiners. Those original examiners had, of course, scored all seven guilty suspects as deceptive; that was why they proceeded to interrogate them and obtained the criterial confessions. Using the same scoring rules (and also knowing which suspects were in fact guilty), Honts of course managed to score all seven as deceptive also. The RCMP examiners had scored four of the six innocent suspects as truthful and two as inconclusive. We can be confident that all innocent suspects classified as deceptive were never discovered to have been innocent because, in such cases, alternative suspects would not have been tested, excluding any possibility that the truly guilty suspect might have failed, been interrogated, and confessed. Honts, using the same scoring rules and perhaps aided by his foreknowledge of which suspects were innocent, managed to improve on the original examiners, scoring five of the six as truthful and only one as inconclusive. The difference in Honts's findings from those of the other studies summarized in Table 8.2 is striking.
Surely no sensible reader can imagine that these alleged "findings" of the Honts study add anything at all to the sum of human knowledge about the true accuracy of the CQT. How it came about that scientific peer review managed to allow this report to be published in an archival scientific journal is a mystery. Since the author, Honts, and the editor of the journal, Garvin Chastain, are colleagues in the psychology department of Boise State University, it is a mystery they might be able to solve.
Quote
As I have said before, one cannot place a definitive base rate on sensitivity and specificity in a field setting due to the variable truthful and deceptive subjects that may be present at any given time. Likewise, in any forensic science the base rate of these two areas is ever changing within the field based on the casework. Sensitivity and specificity are established in a controlled laboratory research environment.
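For concreteness, sensitivity and specificity are simple ratios once ground truth is assumed known. A minimal sketch using the Patrick & Iacono (1991) figures cited in this thread (48 of 49 guilty and 11 of 20 innocent correctly classified), with the simplifying assumption that every subject not classified correctly, including inconclusives, counts against the test:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Patrick & Iacono (1991): 48/49 guilty and 11/20 innocent correctly classified.
sens, spec = sensitivity_specificity(tp=48, fn=1, tn=11, fp=9)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")  # sensitivity 98%, specificity 55%
```

The arithmetic is trivial; the dispute in this thread is over whether confession-verified samples allow the TP/FN/TN/FP counts themselves to be trusted.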
Quote
...there is a known sensitivity and specificity for polygraph that has been established and proven through peer-reviewed scientific research.
Quote
You have said chance accuracy and based your assumption on the four studies that you posted. I do not see where any of these studies, or even the four studies combined for a mean accuracy rate, produce a not-better-then-chance outcome in any of the areas. Even the lowest of the percentages is above chance.
Quote
As Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. Do you mean to suggest that these four studies are adequate for determining the sensitivity and specificity of CQT polygraphy?
Quote
So what? This doesn't support your laughably implausible assertion that "[t]he reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better."
Quote
No, not off the top of my head. I would assume that most diagnostic tests would be validated with laboratory studies. This is not feasible with CQT polygraphy because fear of consequences is a significant variable that is generally absent in the laboratory setting.
Quote
Podlesny, J. A., & Raskin, D. C. (1978). Effectiveness of techniques and physiological measures in the detection of deception. Psychophysiology, 15(4), 344-359.
Control-question (CQ) and guilty-knowledge (GK) techniques for the detection of deception were studied in a mock theft context. Subjects from the local community received $5 for participation, and both guilty and innocent subjects were motivated with a $10 bonus for a truthful outcome on the polygraph examination. They were instructed to deny the theft when they were examined by experimenters who were blind with respect to their guilt or innocence. Eight physiological channels were recorded. Blind numerical field evaluations with an inconclusive zone produced 94% and 83% correct decisions for two different types of CQ test and 89% correct decisions for GK tests. Control questions were more effective than guilt-complex questions, and exclusive control questions were more effective than nonexclusive control questions. Behavioral observations were relatively ineffective in differentiating guilty and innocent subjects. Quantitative analyses of the CQ and GK data revealed significant discrimination between guilty and innocent subjects with a variety of electrodermal and cardiovascular measures. The results support the conclusion that certain techniques and physiological measures can be very useful for the detection of deception in a laboratory mock-crime context.
Quote
Raskin, D. C., & Hare, R. D. (1978). Psychopathy and detection of deception in a prison population. Psychophysiology, 15, 126-136.
The effectiveness of detection of deception was evaluated with a sample of 48 prisoners, half of whom were diagnosed psychopaths. Half of each group were "guilty" of taking $20 in a mock crime and half were "innocent". An examiner who had no knowledge of the guilt or innocence of each subject conducted a field-type interview followed by a control question polygraph examination. Electrodermal, respiration, and cardiovascular activity was recorded, and field (semi-objective) and quantitative evaluations of the physiological responses were made. Field evaluations by the examiner produced 88% correct, 4% wrong, and 8% inconclusives. Excluding inconclusives, there were 96% correct decisions. Using blind quantitative scoring and field evaluations, significant discrimination between "guilty" and "innocent" subjects was obtained for a variety of electrodermal, respiration, and cardiovascular measures. Psychopaths were as easily detected as nonpsychopaths, and psychopaths showed evidence of stronger electrodermal responses and heart rate decelerations. The effectiveness of control question techniques in differentiating truth and deception was demonstrated in psychopathic and nonpsychopathic criminals in a mock crime situation, and the generalizability of the results to the field situation is discussed.
Quote
What is your point? Do you mean to suggest that I'm holding CQT polygraphy to an unfairly high standard?
Quote
Again, you'll find the information provided to those surveyed cited (it's paraphrased in the journal article) at pp. 179-181 of A Tremor in the Blood. The description of the probable-lie CQT is largely cited from Raskin, a leading CQT proponent. Given the survey's high response rate, it would appear that most of those surveyed disagreed with your view that the information provided was inadequate for them to render an opinion on whether the CQT is based on scientifically sound principles or theory.
Quote
1. In those four studies you use for your assumption, what are the established accuracy rates?
|                               | Horvath (1977) | Kleinmuntz & Szucko (1984) | Patrick & Iacono (1991) | Honts (1996) | Mean              |
| Guilty correctly classified   | 21.6/28 (77%)  | 38/50 (76%)                | 48/49 (98%)             | 7/7 (100%)   | 114.6/134 (85.5%) |
| Innocent correctly classified | 14.3/28 (51%)  | 32/50 (64%)                | 11/20 (55%)             | 5/5 (100%)   | 62.3/103 (60.5%)  |
| Mean of above                 | 64%            | 70%                        | 77%                     | 100%         | 73%               |
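The pooled means in the table's last column weight each study by its sample size (they are not simple averages of the four percentages). A quick check of the arithmetic, with the figures copied from the table:

```python
# (guilty correct, guilty n, innocent correct, innocent n) per study,
# as given in the table above
studies = {
    "Horvath (1977)":             (21.6, 28, 14.3, 28),
    "Kleinmuntz & Szucko (1984)": (38,   50, 32,   50),
    "Patrick & Iacono (1991)":    (48,   49, 11,   20),
    "Honts (1996)":               (7,     7, 5,     5),
}

g_correct = sum(s[0] for s in studies.values())
g_total   = sum(s[1] for s in studies.values())
i_correct = sum(s[2] for s in studies.values())
i_total   = sum(s[3] for s in studies.values())

print(f"guilty:   {g_correct:.1f}/{g_total} = {g_correct / g_total:.1%}")  # 114.6/134 = 85.5%
print(f"innocent: {i_correct:.1f}/{i_total} = {i_correct / i_total:.1%}")  # 62.3/103 = 60.5%
```

Note that pooling across studies with different designs and confession-verified samples is itself a contested step; the computation only reproduces the table, it does not validate the underlying counts.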
Quote
2. Furedy and Honts have been debating this for years, and it should once again be noted that Furedy reserves ill comments for CQT, not polygraph, because he is a GKT format supporter.
Quote
3. Can you tell me what sensitivity and specificity has been established for any given forensic science by peer-reviewed and published studies under field conditions?
Quote
4. I don't think you read what I wrote in regards to the study. Iacono and Lykken obviously slanted the information given to those surveyed in the study. The study also lacks proper information for uninformed persons to be able to make a scientific analysis of the CQT.
Quote
As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?
Quote
The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better.
Quote
I think it's completely absurd for you to suggest that the large majority who did not agree that CQT polygraphy is based on scientifically sound psychological principles or theory and the overwhelming majority who agreed that the CQT can be beaten reached those opinions because of partisan loyalty to some supposed "ideological camp" that supports the GKT. Nor do I see any reason for supposing that any alleged bias on the part of Iacono and Lykken accounts for the results of their peer-reviewed survey.
Quote
Clearly, CQT polygraphy's lack of support amongst the scientific community is attributable to something more than just "squabbling between ideological camps as to whose question format is better."
Quote
In Iacono & Lykken's survey of SPR members, only 36% of respondents with an opinion answered affirmatively when asked, "Would you say that the CQT is based on scientifically sound psychological principles or theory?" And 99% of respondents with an opinion agreed with the statement, "The CQT can be beaten by augmenting one's response to the control questions."
Quote from: George W. Maschke on Apr 09, 2002, 11:44 AM
As I noted in my post of 1 April above, "No sensitivity or specificity can be determined for CQT polygraphy (an uncontrolled, unstandardized, unspecifiable procedure) based on the available peer-reviewed field research." If you disagree, could you tell us, based on peer-reviewed research conducted under field conditions, what the sensitivity and specificity of CQT polygraphy is and refer us to the standardized protocol for the CQT that was used in this research?
Quote
It is a change of subject. You shift the burden in every discussion and still have yet to assert what the accuracy rate is for CQT polygraph under field conditions.
Quote
My assertion about squabbling over methods is not ludicrous but a well-known fact: this ideological camp supports the GKT and prescribes only ill comments to the CQT.
Quote
The reason CQT polygraph has not been unanimously accepted as a scientific method has nothing to do with its current accuracy rate or its scientific basis. It has to do with the squabbling between ideological camps as to whose question format is better.