Dan, I do not write or manage the content for the AP website. What I do is try to learn more and provide information and knowledge to the polygraph profession and to the public. I wish more people would read. And I wish more people would read intelligently. Toward that objective, part of what I have done is to try to make instructional materials available to help people better understand the direct application of science and testing principles to the context of polygraph, lie detection, and credibility assessment.

Your strategy would seem to be the one about "tell a lie long enough and people will believe it, and the bigger the lie the more they believe it." The first lie was a published claim of ~100% accuracy. It was followed by the supporting lie that your favorite technique can "nullify the effects of countermeasures" (Dan Mangan's written and published words). The next lie was in re-inventing yourself as the anti-polygraph polygraph examiner, accusing everyone else of exaggerated claims (hoping they will either not notice or forget your first lie about ~100% accuracy). Follow that with the lie of professing to be the champion of reason while publishing conclusions that are un-replicatable (~100% accuracy, "nullify countermeasures," etc.) and unaccountable (only those who are similarly anointed by the guru could possibly understand you), and therefore inconsistent with and disconnected from reason (having more to do with mystified experteeism).

Probably there are some people who are desperate enough to want your services and pay your fees on the off chance that you can magically pull a rabbit out of the hat for them. I don't know whether you actually believe your published claims of ~100% accuracy, or whether that is just convenient marketing hype. I do know that Matte's published hypotheses are not consistent with the evidence - polygraph machines cannot discriminate between fear and hope, nor can they determine the reasons for those emotions. And so your reliance on an unscientific claim would seem to put you solidly in the realm of pseudoscience.

In the end, what you offer is this: "trust me because I am an expert, having been anointed by the hand of the guru." The corollary is the message "don't trust anyone else without my approval." The problem with unscientific expertise as the basis for your test results and conclusions is that you are basically free to give subjective results - perhaps even any result you want to give, or any result someone wants to purchase. I do understand the financial, business, and professional economic motivations that would make a person want to perpetuate the business model of "trust me, I'm an expert" - when subjective, unscientific, unquantifiable, un-replicatable "expertise" is all you have to sell. And of course, when selling subjective expertise and unrealistic solutions, the more boastful and outrageous your claims, the more likely you are to succeed. Think about it: when selling nonsense, one would sell nothing by advising people that it is nonsense.

Development of a scientific test requires that we first understand that a test and a test result will always be an imperfect and probabilistic assessment of some interesting and important thing that we cannot evaluate with simple and perfect deterministic observation, nor with a direct physical/linear measurement.
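To make that concrete, here is an illustrative sketch in Python - the counts are hypothetical numbers I am making up for the example, not data from any actual study. Treating a test result as a probabilistic estimate means counting the correct decisions in a sample of confirmed cases and computing a confidence interval around the observed proportion:

    import math

    def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
        """95% Wilson score interval for an observed proportion of correct decisions."""
        p_hat = correct / total
        denom = 1 + z ** 2 / total
        center = (p_hat + z ** 2 / (2 * total)) / denom
        half = z * math.sqrt(p_hat * (1 - p_hat) / total + z ** 2 / (4 * total ** 2)) / denom
        return (center - half, center + half)

    # Hypothetical sample: 178 correct decisions among 200 confirmed cases.
    lo, hi = wilson_interval(178, 200)
    print(f"observed accuracy: {178 / 200:.3f}")
    print(f"95% confidence interval: [{lo:.3f}, {hi:.3f}]")

Even with an observed accuracy of .89, the interval tells us the plausible range is roughly .84 to .93 - greater than chance, less than perfection, and nowhere near a defensible claim of ~100%.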
So if the test result is greater than chance and less than perfection, then the goal of studying the test is to obtain and study data that improve our knowledge of the confidence intervals describing the things we might say about the test result. The difference between old-school experteeism and a scientific test is that a test result is something that can be reproduced with some expected frequency. The old-school mystified-expert approach tends to be so esoteric that reproduction of analytic conclusions is not a realistic thing (think back to the decades before numerical and statistical analysis, when polygraph examiners might have been reluctant to allow other examiners to look at their data). Even today professionals sometimes play this game: we see a lot of subjective experteeism when different experts disagree while using subjective, inscrutable, and under-quantified (or unquantified) analytic methods. In the worst cases, esoteric processes give way to wholesale mysticism, in which people tend to make all kinds of outrageous and laughable claims (for example: ~100% accuracy, or the ability to "nullify the effects of countermeasures") for the convenience of self-promotion or for the convenience of a single case outcome.

Of course, if a test result is based on a structured process, then it is reproducible (and non-mystical) - and we can begin to study and know the range of probabilities within which we can expect the test to lead us to correct conclusions and effective decisions. In the words of W. Edwards Deming: "If you can't describe what you do as a process, you don't know what you are doing." And, of course, if we can describe what we do as a process, then we could teach most intelligent persons to do it successfully using the structured process (no mysticism needed). Ultimately, we begin to automate any well-structured process.

A half century or more ago, the discussion was about the need for standardization of processes. That was in a day when people could not imagine the potential for computerized automation. Today we can much more easily imagine what computers, machines, robots, and algorithms can do. Automation is today what standardization was 50 years ago. Of course, there are some ethical and scientific discussions to be had around how exactly we use computers in human decision making - but ignoring this is not a wise solution. Automation of a test, either completely or wherever possible, would reduce the impact of both competency and random human variation.

But automation will only succeed if a test actually works at rates significantly greater than chance. Automating the process and result of a pseudoscientific test would not be fun, because then the "expert" would no longer be free to subjectively adapt the test results to the solution that is most socially and professionally convenient to the expert. Standardization of processes and process automation will only succeed if results can be reasonably expected to occur most often in a usable range (can you say "confidence interval"?). If not - if results are mere random chaos or random guessing - then neither process standardization nor process automation will improve the consistency of outcomes. If test data and test results are mere random chaos, then professional and economic survival will depend wholly on salesmanship (the ability to sell confidence in nothing more than one's expertise).
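Here is a second illustrative sketch, again with made-up accuracy figures chosen purely for the sake of the argument: simulate repeated administrations of a structured test that works at 85% decision accuracy, and of a chance-level process dressed up as "expertise," and look at where the observed outcomes fall.

    import random
    import statistics

    def simulate(accuracy: float, n_cases: int = 100, n_reps: int = 1000) -> list[float]:
        """Observed proportion correct across repeated samples of n_cases each."""
        rng = random.Random(42)  # fixed seed, so the sketch is itself reproducible
        return [
            sum(rng.random() < accuracy for _ in range(n_cases)) / n_cases
            for _ in range(n_reps)
        ]

    for label, acc in [("structured test", 0.85), ("chance-level guessing", 0.50)]:
        results = simulate(acc)
        print(f"{label}: mean = {statistics.mean(results):.3f}, "
              f"sd = {statistics.stdev(results):.3f}")

The structured test reproducibly lands near .85 - a usable range, well above chance. The guessing process just as reproducibly lands near .50: standardizing or automating it cannot add information that the data do not contain.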
All of which raises the question: if even the NAS agrees that polygraph results are significantly greater than chance and still less than perfect (which could be said of any and all tests), why would someone cling to the impression that the test and test result are mere subjective chaos into which he can inject his esoteric and mystified "expert" opinion in ways that are beyond the scrutiny of others who have not been anointed by the guru? One possible answer has to do with a lack of competence. I'll explain. A standardized (non-automated) test process still requires a level of competency for test results to occur within some expected confidence interval (e.g., greater than chance, less than perfect). Without some competency in test administration, the results might be mere chaos, in which case professional and economic survival will depend on some fast talking and slick marketing - and a customer base that prefers to purchase "expertise" as if it were disconnected from science.

Hoping for the future,

/rn