George,
You wrote:
Quote:
You seem to pooh-pooh the matter of sampling bias when confessions are used as criteria for ground truth. However, I don't think there is any rational ground for ignoring it, as I've explained above in my post of 17 April. It's significant with regard to what conclusions may be reasonably drawn based on the data obtained in the studies.
For you or anyone else to say that the sampling bias is or was significant, the estimated number of cases/samples excluded would need to be established. Although this measurement is nearly impossible in some applications (large census polls, for example), it can be established in this particular research method. Moreover, the problem you propose is not a sampling bias per se but a potential measurement bias. From:
http://personalpages.geneseo.edu/~socl212/biaserror.html
Quote:
sample statistic = population parameter ± bias ± sampling error
· bias is systematic, and each instance tends to push the statistic away from the parameter in a specific direction
    · sampling bias
        · non-probability sample
        · inadequate sampling frame that fails to cover the population
        · non-response
        · the relevant concept is generalizability
    · measurement bias
        · response bias (question wording, context, interviewer effects, etc.)
        · the relevant concept is measurement validity (content validity, criterion validity, construct validity, etc.)
    · there is no simple indicator of bias, since there are many kinds of bias that act in quite different ways
· sampling error is random and does not push the statistic away in a specific direction
    · the standard error is an estimate of the size of the sampling error
    · a 95% confidence margin of error of ± 3 percentage points refers ONLY to sampling error, i.e., only to the error due to random sampling; all other error comes under the heading of bias
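To illustrate the last two points above with a concrete calculation, here is a minimal sketch; the poll size and observed proportion are my own hypothetical numbers, not figures from the quoted page. It shows that the familiar ± 3-point margin is just the sampling-error term, and says nothing about bias:
Code:
from math import sqrt

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (covers sampling error only)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical poll: n = 1,067 respondents, observed proportion 0.5.
print(f"margin of error: +/- {100 * margin_of_error(0.5, 1067):.1f} percentage points")

Run as written, this prints ± 3.0 percentage points, the figure the quoted page uses.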
Here is a more detailed explanation of criterion sampling. From:
http://trochim.human.cornell.edu/tutorial/mugo/tutorial.htm
Quote:
PURPOSEFUL SAMPLING
Purposeful sampling selects information-rich cases for in-depth study. Size and specific cases depend on the study purpose.
There are about 16 different types of purposeful sampling. They are briefly described below for you to be aware of them. The details can be found in Patton (1990), pp. 169-186.
Criterion sampling: here, you set a criterion and pick all cases that meet it; for example, all ladies six feet tall, all white cars, all farmers who have planted onions. This method of sampling is very strong in quality assurance.
References
Patton, M. Q. (1990). Qualitative Evaluation and Research Methods. Newbury Park, CA: SAGE Publications.
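As a minimal sketch of what criterion sampling amounts to in practice (the case pool and the criterion below are hypothetical, chosen to mirror the confession criterion at issue here), the method is simply: keep every case that satisfies the criterion:
Code:
# Hypothetical case pool; criterion sampling keeps every case meeting the criterion.
cases = [
    {"id": 1, "confession": True},
    {"id": 2, "confession": False},
    {"id": 3, "confession": True},
]
# The criterion here is "case is confession-confirmed".
criterion_sample = [case for case in cases if case["confession"]]
print(criterion_sample)  # -> cases 1 and 3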
You wrote:
Quote:
With regard to Patrick & Iacono's article, you write, "I never once indicated that Lykken's view differed from that of Patrick & Iacono." Sure you did: in your post of 20 April when you wrote, "You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp?"
Had you quoted my entire explanation of this, one could plainly see that I did not say Lykken's 'view differed' from that of Patrick & Iacono.
I wrote:
Quote:
You might ask why Lykken would refute Patrick & Iacono's study when they are from the same ideological camp? Because the results turned out in the positive for the CQT.
Lykken refuted the high validity results obtained by the study. That has nothing to do with Patrick's or Iacono's views on the CQT question method, nor with whether Patrick or Iacono themselves disputed the validity results obtained. The figure in question is a percentage computed from the data collected and processed in the research study.
You wrote:
Quote:
You insist that your assertion that "when something has not been proven statistically better then [sic] a given percentage then it has been proven to be equal to or less then [sic] the specified percentage" is not a logical fallacy. I don't have the patience to take you through this step-by-step, but I suggest that you carefully consider what you wrote (which is perhaps not what you really meant).
An example of a 'logical fallacy' would be if you had stated that polygraph has never been proven to work, so it does not, and I had countered that it has never been proven not to work, so it does. One of your errors in using this term is that you attached a statistical percentage 'chance' to your assertion, which makes it definitive rather than speculative. Even if your definitive statement were subjected to this definition of 'logical fallacy' with the percentage included, my assertion still would not meet the definition. To commit it, I would have had to state that when something has not been proven statistically better than a given percentage, it has been proven to be equal to or (greater) than the specified percentage. Furthermore, there would need to be an established general knowledge that neither claim had been proven true.
I have already explained this error to you and provided you with the full definition of a 'logical fallacy' and the true definition of this argument, the 'contradictory claim', from the very source you used. Again, you attempt to play on words by segmenting statements I have made. I fully understand the definition and have taken the time to explain it to you. I followed that statement with explanations and with the data conflicting with your assertion, which support my claim that polygraph has been proven to work at above chance in peer-reviewed field research.
I wrote in support of the assertion:
Quote:
'Contradictory claim': "A claim is proved true if its contradictory is proved false, and vice-versa." I am saying that there is proof, found in the four studies you cited in support of your statement, that polygraph has been proven to work better than chance in peer-reviewed field research. By the definition of this argument, my claim has been proven true, and unless you can provide refuting evidence that your assertion is true, yours is false. If you could provide contrary evidence to support your assertion, then my claim would be a 'contrary claim' and not a 'logical fallacy'.
You wrote:
Quote:
Finally, you suggest that the four peer-reviewed field studies cited in Lykken's A Tremor in the Blood show "that polygraph has been proven to work better then [sic] chance in peer-reviewed field research." Your argument seems to be essentially an argument to authority (argumentum ad verecundiam), suggesting that the results obtained in these four studies must prove CQT validity works better than chance because the articles were published in peer-reviewed journals and the accuracy rates obtained in them exceeded 50%.
For numerous reasons that we've discussed above, I disagree with this conclusion. Again, as Lykken notes at p. 135, none of these studies are definitive, and reliance on polygraph-induced confessions as criteria of ground truth results in overestimation of CQT accuracy, especially in detecting guilty subjects, to an unknown extent. In addition, chance is not necessarily 50-50.
Moreover, as Dr. Richardson eloquently explained, CQT polygraphy is completely lacking in any scientific control whatsoever, and as Professor Furedy has explained, it is also unspecifiable and is not a genuine "test." Lacking both standardization and control, CQT polygraphy can have no meaningful accuracy rate and no predictive validity.
You are correct that my assertion is that these four studies provide proof of above-chance validity. You are asserting a conflicting view, which you have every right to do, but it has no support for its assertion and is thus just a lay opinion. Again, by the definition of a 'contradictory claim', my assertion is true because it has been proven, and your view has been proven false because it has not been proven.
My assertion is not an 'argument to authority (argumentum ad verecundiam)'. The data speak for themselves and have been accepted. That Lykken et al. dispute the obtained accuracy results does not mean the results were not accepted. Your assertion that the results are unacceptable is itself the fallacy of ad verecundiam, since Lykken et al. are biased towards one side of the issue.
From:
http://gncurtis.home.texas.net/authorit.html
Quote:
Not all arguments from expert opinion are fallacious, and for this reason some authorities on logic have taken to labelling this fallacy as "appeal to false authority" or "argument from questionable authority". For the same reason, I will use the traditional Latin tag "ad verecundiam" to distinguish fallacious from non-fallacious arguments from authority.
We must often rely upon expert opinion when drawing conclusions about technical matters where we lack the time or expertise to form an informed opinion. For instance, those of us who are not physicians usually rely upon those who are when making medical decisions, and we are not wrong to do so. There are, however, four major ways in which such arguments can go wrong:
1. An appeal to authority may be inappropriate in a couple of ways:
A. It is unnecessary. If a question can be answered by observation or calculation, an argument from authority is not needed. Since arguments from authority are weaker than more direct evidence, go look or figure it out for yourself.
The renaissance rebellion against the authority of Aristotle and the Bible played an important role in the scientific revolution. Aristotle was so respected in the Middle Ages that his word was taken on empirical issues which were easily decidable by observation. The scientific revolution moved away from this over-reliance on authority towards the use of observation and experiment.
Similarly, the Bible has been invoked as an authority on empirical or mathematical questions. A particularly amusing example is the claim that the value of pi can be determined to be 3 based on certain passages in the Old Testament. The value of pi, however, is a mathematical question which can be answered by calculation, and appeal to authority is irrelevant.
B. It is impossible. About some issues there simply is no expert opinion, and an appeal to authority is bound to commit the next type of mistake. For example, many self-help books are written every year by self-proclaimed "experts" on matters for which there is no expertise.
2. The "authority" cited is not an expert on the issue, that is, the person who supplies the opinion is not an expert at all, or is one, but in an unrelated area. The now-classic example is the old television commercial which began: "I'm not a doctor, but I play one on TV...." The actor then proceeded to recommend a brand of medicine.
3. The authority is an expert, but is not disinterested. That is, the expert is biased towards one side of the issue, and his opinion is thereby untrustworthy.
For example, suppose that a medical scientist testifies that ambient cigarette smoke does not pose a hazard to the health of non-smokers exposed to it. Suppose, further, that it turns out that the scientist is an employee of a cigarette company. Clearly, the scientist has a powerful bias in favor of the position that he is taking which calls into question his objectivity.
There is an old saying: "A doctor who treats himself has a fool for a patient." There is also a version for attorneys: "A lawyer who defends himself has a fool for a client." Why should these be true if the doctor or lawyer is an expert on medicine or the law? The answer is that we are all biased in our own causes. A physician who tries to diagnose his own illness is more likely to make a mistake out of wishful thinking, or out of fear, than another physician would be.
4. While the authority is an expert, his opinion is unrepresentative of expert opinion on the subject. The fact is that if one looks hard enough, it is possible to find an expert who supports virtually any position that one wishes to take. "Such is human perversity", to quote Lewis Carroll. This is a great boon for debaters, who can easily find expert opinion on their side of a question, whatever that side is, but it is confusing for those of us listening to debates and trying to form an opinion.
Experts are human beings, after all, and human beings err, even in their area of expertise. This is one reason why it is a good idea to get a second opinion about major medical matters, and even a third if the first two disagree. While most people understand the sense behind seeking a second opinion when their life or health is at stake, they are frequently willing to accept a single, unrepresentative opinion on other matters, especially when that opinion agrees with their own bias.
Bias (problem 3) is one source of unrepresentativeness. For instance, the opinions of cigarette company scientists tend to be unrepresentative of expert opinion on the health consequences of smoking because they are biased to minimize such consequences. For the general problem of judging the opinion of a population based upon a sample, see the Fallacy of Unrepresentative Sample.
To sum up these points in a positive manner, before relying upon expert opinion, go through the following checklist:
* Is this a matter which I can decide without appeal to expert opinion? If the answer is "yes", then do so. If "no", go to the next question:
* Is this a matter upon which expert opinion is available? If not, then your opinion will be as good as anyone else's. If so, proceed to the next question:
* Is the authority an expert on the matter? If not, then why listen? If so, go on:
* Is the authority biased towards one side? If so, the authority may be untrustworthy. At the very least, before accepting the authority's word seek a second, unbiased opinion. That is, go to the last question:
* Is the authority's opinion representative of expert opinion? If not, then find out what the expert consensus is and rely on that. If so, then you may rationally rely upon the authority's opinion.
If an argument to authority cannot pass these five tests, then it commits the fallacy of Ad Verecundiam.
Resources:
* James Bachman, "Appeal to Authority", in Fallacies: Classical and Contemporary Readings, edited by Hans V. Hanson and Robert C. Pinto (Penn State Press, 1995), pp. 274-286.
* Appeal to Authority, entry from philosopher Robert Todd Carroll's Skeptic's Dictionary.
You say you disagree with the statistical data of the four field research studies, yet you used them as support for your assertion. I will reiterate: by definition, this is not a sampling bias per se. Any bias that may or may not have been introduced was due to criterion selection, i.e., a measurement bias. Anyone can say that something caused error. However, in the statistical realm one must provide reasoning and deduction from the data that support that view for it to be a valid assertion. More importantly, for one to assert that the statistical data contain criterion/measurement-based bias, the result would need to be attributable to an external and/or internal variable related to the criterion. The fact that the confession criterion was used does not in itself create a per se bias.
Here is a hypothetical instance of how a confession-based criterion research study may produce a criterion/measurement bias: confession was used as the criterion for selecting deceptive polygraph chart data because it is a means of confirming the results. In conducting the study, it was found that the original polygraph examiners' decisions were made after the post-test interview and were based on the confessions obtained. Since the original decision and the selection criterion were the same, one cannot separate this variable from the original examiners' decisions, nor use another source, independent of the confession, to ground the original examiners' decisions in the polygraph chart data. The criterion-based selection method would thus cause an unknown degree of bias in the accuracy rate obtained from the deceptive cases reviewed.
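To make the hypothetical concrete, here is a minimal simulation sketch. Every number in it (the assumed true accuracy rate and the confession rates) is invented purely for illustration and comes from none of the studies under discussion; the point is only the mechanism by which confession-based selection can inflate an apparent accuracy rate:
Code:
import random

random.seed(1)  # reproducible illustration

N = 10_000            # simulated guilty subjects (ground truth known only to the simulation)
TRUE_ACCURACY = 0.75  # assumed rate at which the examiner scores a guilty subject deceptive

cases = []
for _ in range(N):
    scored_deceptive = random.random() < TRUE_ACCURACY
    # Assumption: confessions come almost entirely from the post-test
    # interrogation that follows a "deceptive" score.
    confess_prob = 0.40 if scored_deceptive else 0.02
    confessed = random.random() < confess_prob
    cases.append((scored_deceptive, confessed))

# A confession-criterion study keeps only the confession-verified cases.
verified = [scored for scored, confessed in cases if confessed]

true_rate = sum(scored for scored, _ in cases) / N
apparent_rate = sum(verified) / len(verified)
print(f"assumed true accuracy on all guilty cases: {true_rate:.2f}")
print(f"apparent accuracy on verified cases:       {apparent_rate:.2f}")

Under these assumed rates, the confession-verified sample shows roughly 98 percent apparent accuracy even though the assumed true rate is 75 percent; whether anything like this mechanism operated in the four studies is precisely what would have to be demonstrated, not merely asserted.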
I agree that chance is not always 50/50. However, if the accuracy results for deceptive subjects, based on the original examiners' decisions, admit only two possible outcomes, a correct or an incorrect original decision, then each original decision had a 50 percent chance of producing either result. If you can point to another available decision, or a reason that 50/50 is not the chance level of these studies, I would be willing to discuss it.
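For what it is worth, here is a sketch of the binomial arithmetic behind "better than chance" under that 50/50 assumption; the counts are hypothetical and are not taken from the four studies:
Code:
from math import comb

def p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or more correct decisions out of n at chance level p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical figures: 90 correct decisions out of 100 confirmed cases.
print(f"P(90+ correct of 100 by 50/50 chance) = {p_at_least(90, 100):.2e}")

The probability of such a result arising from coin-flip decisions is vanishingly small, which is the sense in which I say the obtained rates exceed chance.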
I respect Drew's views as a scientist but disagree with him on the scientific definitions we debated. My definitions come from the manuals of other accepted scientific disciplines; these manuals and their contents are nationally reviewed and accredited. I feel that the structures present within CQT polygraphy meet these definitions, and Drew does not. I agree to disagree with him on this issue.