Greetings,
For those who may wish to read the whole (admittedly slightly angsty) thing, here is a link.
https://www.dropbox.com/s/7rpiedrl0qh56r6/Nelson2019SepOctBODReport.pdf?dl=0

In short, the BOD did partially walk back a requirement to use algorithms for evidentiary exams (the only place they were required). There remains a requirement to report probabilistic results for evidentiary exams.
The reasons for this have nothing to do with any case experience, nor with experience related to any of the different computer scoring algorithms.
Prior to 1.8.3, the APA made no mention of computer algorithms or probabilistic results, leaving polygraph professionals free to rely solely on manual test data analysis. That may include numerical scoring methods with visual/subjective feature extraction, of the type necessary prior to the widespread availability of computers, and may even include the type of subjective eye-ball analytics that Mr. Williams was trained in way back in the 1970s.
The reasons for the walk-back have to do with some people feeling like the cart was before the horse on 1.8.3, with many polygraph professionals still completely unfamiliar with discussions of probabilistic results and still largely unaware of how computer algorithms work. Oddly, those professionals will use the computerized polygraph of today (analyze data today) in largely the same way that Mr. Williams and others would have used an analog instrument (by looking at the squiggly lines with one's eye-balls). The problem is that different "experts" can sometimes "see" things differently. And so the quest is for an objective and reproducible, albeit probabilistic, solution.
In reality, polygraph examiners do use algorithms. It is a mistake not to use them. It is also a mistake not to use manual scoring methods. But there are always growing pains, and there is always some push-back from old-school practitioners when technology and automation are introduced. (I can just imagine some old-time airline pilots complaining, "that's not really flyin' if ya have to use an autopilot. Ya gotta be able to read them steam-dials or you dunno what yer doin'.")
Experteeism is fun, and probably feels more economically and professionally secure than tech and automation. But tech and automation are how things go. So my position is that all professionals should acquaint themselves with advancing technologies. Professions and industries that stay in the past are eventually completely disrupted by robots that attempt to make things better, faster, and cheaper - but can more or less completely miss the value of human experience.
So, if we don't want the robots to be in charge of us, then we may want to be in charge of the robots. Mr. Mangan is a well-known advocate for "experteeism" as a basis of all polygraph results. As I stated earlier, experteeism is fun and feels economically safe (viable), whereas automation and technology tend to be equalizers. That is, it "feels" economically safe - but may not be as secure as people think. As it goes, professionals who make no use of advancing tech are at increased risk for disruption.
Also, in reality, there are some transitional issues to address. These needs include education for older polygraph examiners - some of whom may still need to learn to think probabilistically (because all scientific tests are fundamentally probabilistic - and therefore not expected to be infallible). Some local associations and some training programs may not, at this time, be fully prepared to meet those educational needs. The catch is that resources to meet a need are usually not allocated until the need is clarified - in some form of requirement. In this case, we are working to more fully develop the available knowledge and instructional materials on the different computerized analysis methods available today - even though the requirement is stayed.
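To make the probabilistic framing concrete, here is a toy sketch of restating a categorical test result as a posterior probability via Bayes' theorem. All numbers here are hypothetical, chosen only for illustration - they are not accuracy figures for any actual polygraph technique:

```python
def posterior_probability(prior, sensitivity, specificity):
    """Posterior probability that the condition of interest is present,
    given a positive test result.

    Bayes' theorem: P(cond | +) = P(+ | cond) * P(cond) / P(+),
    where P(+) = sensitivity * prior + (1 - specificity) * (1 - prior).
    """
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Hypothetical example: a test with 90% sensitivity and 90% specificity,
# applied where the prior probability of the condition is 50%.
print(round(posterior_probability(0.5, 0.9, 0.9), 3))  # 0.9
```

The point of the sketch is only that a test result is a probability statement, not a certainty - even a quite accurate test leaves some posterior probability of error, and that probability shifts with the prior.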
Another issue to consider is that different methods of analysis may give different results - and it is up to the evaluator to try to make sense of this. The NAS (2012) Report on Scientific Evidence discussed this. They noted that it is common for analysts to evaluate data using a variety of methods, and that it is felt by some that analysts should report all of their analytic results, including those that do not concur with a reported conclusion.
Considering that any scientific test is intended to quantify (probabilistically) some phenomenon that cannot be subject to perfect deterministic observation (which would negate the need for a test), and also cannot be subject to direct physical measurement (which requires both a physical phenomenon and a defined unit of measure), the purpose of any test is to create a basis of information to support a conclusion about the phenomenon of interest. As it turns out in psychology and the social sciences, the only way to attempt to quantify many very interesting phenomena is via some testing procedure that yields probabilistic results.

Anyway, what often happens in polygraph is that examiners will report only one of the results - avoiding mention of any alternate analytic solution. My guess is that some attorneys would be very interested in the existence of a result that differed from the one reported. With normal polygraphs conducted on cooperative persons, we tend to see different analysis methods concur approximately 90% of the time. When they do not concur, it is an interesting question as to what the possible causes may be. We can, at times, identify the most likely cause of the observed difference.
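As a minimal illustration of what a concurrence figure like that ~90% means in practice, here is a sketch of computing the agreement rate between two analytic methods over a set of exams. The data and the conclusion labels are made up for illustration; they are placeholders, not output from any real scoring algorithm:

```python
def agreement_rate(results_a, results_b):
    """Fraction of exams on which two analysis methods reach the same
    categorical conclusion (e.g., 'DI', 'NDI', 'INC')."""
    if len(results_a) != len(results_b):
        raise ValueError("result lists must be the same length")
    matches = sum(a == b for a, b in zip(results_a, results_b))
    return matches / len(results_a)

# Hypothetical conclusions from two scoring methods on ten exams.
method_1 = ["NDI", "NDI", "DI", "NDI", "DI", "NDI", "NDI", "DI", "NDI", "NDI"]
method_2 = ["NDI", "NDI", "DI", "NDI", "DI", "NDI", "INC", "DI", "NDI", "NDI"]
print(agreement_rate(method_1, method_2))  # 0.9
```

Simple agreement like this overstates concordance somewhat when one category dominates, which is one reason the interesting question is not the rate itself but the likely cause of each individual disagreement.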
Yet another of the issues to address when considering the use of computerized analysis is the question of who is responsible for the result. The 737MAX situation is an interesting learning and discussion context for all areas of science and tech that may involve the interaction of autonomous (or semi-autonomous) systems and human professionals. When human lives are on the line, the human pilots are ultimately responsible, but there is also a point where people may want to ask and know exactly what kind of engineering decisions were made in the design of the MCAS and why it acts the way it does. Ultimately, human care still requires human expertise - just in case the confusers get computed.
Human experts will always matter. Ask anyone on US Airways 1549 - all of whom survived because the pilot had grey hair. But certainly, when I travel by air, I want the pilot to use the autopilot - so that they are not fatigued during the landing (it seems that takeoff and landing are the most dangerous parts).
And so, computer algorithms are not gone. In fact, they remain highly reliable and very useful (less subject to expectation bias and other forms of bias and unreliability than old-school eye-ball analysis). Some algorithms are reasonably well documented as to how they work and what they do. We have written replication code for 7 different models. And, unfortunately, there are some black-box problems for which perhaps one human on earth may know what they do.
We will eventually know a bit more than we do at present. Of course, we'll never know everything - because, well, it's a big universe, and people are kinda complex.
In the meantime, the walk-back on 1.8.3 is only partial, and there still exists a requirement that evidentiary polygraphs be supported by probabilistic, not merely categorical, results. How an examiner may obtain that probabilistic solution is limited to a few methods - involving both computer algorithms and the ESS-M.
Feel free to contact me directly if there are questions or interest in rational discussion.
Peace,
/rn