Texas sex offender & mandatory polygraph

Started by WorriedMom, Nov 27, 2001, 11:22 PM


Dan Mangan

#120
 Ray, the American Polygraph Association model policy for PCSOT (https://apoa.memberclicks.net/assets/docs/pcsot%20model.pdf) includes the following "tests"...

> Instant offense exam

> Instant offense investigative exam

> Prior allegation exam

> Sexual history exam I [victims]

> Sexual history exam II [behaviors/paraphilias]

> Maintenance exam

> Sex offense monitoring exam

What is the demonstrated accuracy of each of these "tests," and where can we find the independent peer-reviewed (non-self-report) studies that support your claims of each "test's" accuracy?

Joe McCarthy

#121
ah the good ole days when it was just

maint
mont
SH
and inst off

We have gone from 4 tests to 7 tests. 

I can see both ends of the argument on this; but in Texas, it seems it gives a lot of room for abuse for examiners who have a problem with the cop out word, "inconclusive."

For an industry (Texas) that seems to love the phrase "because this is the way it's always been done" when it comes to how business is handed out, examiners seem willing to abandon that saying when it comes to more testing opportunities.

Now, having said that, I do see the utility of a couple of these extra tests.  I just see room for abuse, especially in a market where profiteering has been kinda obvious.

Just my two cents
Joe

Raymond Nelson

Joe,

The different types of exams merely attempt to clarify the different types of purposes that one might consider conducting these exams.

What Dan is doing is displaying either a disingenuous desire to confuse people or a genuine misunderstanding of scientific testing.

Any of us can choose to be dissatisfied with the present state of research on this, but that is no excuse for ignoring what evidence we do have to describe our knowledge at the present time.

You can find information on what we know about the accuracy of multiple-issue screening polygraphs in the 2011 meta-analytic survey. For better or worse, there may not at this time be a better source of information. Or we can pretend that we know absolutely - and subsequently pretend that the test is ~100% perfect in a vacuum.

Evidence at this time tends to converge at mean accuracy estimates in the mid .80s for exams of this type that are interpreted with an assumption (some would say a strong assumption) of independent criterion variance - with a corresponding confidence interval that describes where we expect to observe accuracy in subsequent studies and real world settings. Contrast this with event specific accuracy rates that are a bit higher.
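The "mean accuracy estimate with a corresponding confidence interval" Nelson describes can be sketched with a normal-approximation interval. This is a minimal illustration only: the accuracy value mirrors the mid-.80s figure cited above, but the sample size is hypothetical and not drawn from the 2011 survey.

```python
import math

def accuracy_ci(p_hat, n, z=1.96):
    """Normal-approximation confidence interval for an accuracy estimate.

    p_hat: observed mean accuracy (proportion of correct decisions)
    n: number of scored decisions behind the estimate (hypothetical here)
    z: critical value (1.96 corresponds to ~95% confidence)
    """
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Illustrative only: mid-.80s mean accuracy over a pooled sample of
# 500 decisions gives an interval of roughly (.82, .88).
lo, hi = accuracy_ci(0.85, 500)
```

The interval is what tells us "where we expect to observe accuracy in subsequent studies" - a narrower interval (larger n) means a more precise estimate, not a more accurate test.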

Given that the results from laboratory and field studies tend to converge at similar levels - within an expected range of variability - there is no evidence at this time to support an assumption that polygraph accuracy would be very different for different topics. Perhaps some day our knowledge will be fine-grained and precise enough to support such an assumption, but at present it does not.

At the present time all that is assumed is that polygraph questions describe a behavioral issue for which an examinee is capable of knowing the truth about his or her past conduct. That is all. The thing that seems to have the greatest effect on accuracy is whether the set of test stimulus questions describes a single issue - for which we make no assumption of independence - or multiple issues - for which we make an assumption of independence. The difference is a rather well-known statistical phenomenon called multiplicity. Simply put, making multiple statistical decisions is a mathematically and statistically more complex endeavor. More complex in this context means more potential sources of uncontrolled variance, and subsequently lower precision and somewhat wider margins of error, compared to exams that do not involve multiplicity.
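The multiplicity effect described above can be sketched with simple arithmetic: under an assumption of independent decisions, the chance of at least one error grows with the number of question targets. The 10% per-decision error rate below is hypothetical, chosen only to make the effect visible.

```python
def familywise_error(alpha, k):
    """Chance of at least one erroneous result across k independent
    question decisions, each with per-decision error rate alpha."""
    return 1.0 - (1.0 - alpha) ** k

# A single-issue exam vs. a multi-issue screening exam with 4 targets,
# assuming (hypothetically) a 10% per-decision error rate:
single = familywise_error(0.10, 1)   # 0.10
multi = familywise_error(0.10, 4)    # ~0.34
```

This is why multiple-issue screening exams are expected to show somewhat lower precision than single-issue, event-specific exams even when the underlying per-question accuracy is identical.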

Whereas Dan's publication of ~100% accuracy is simple opportunistic predation on people's desperation for certainty in a context of uncertainty, most educated people will understand that tests are not expected to be perfect. Perfection would require a deterministic observer. Near perfection (i.e., physical measurement) would require both a physical substance and a well defined physical unit of measurement - for which we would use a measurement not a test. Tests are needed and used when we want to quantify something that is neither deterministic nor subject to physical measurement. The purpose of any scientific test is to attempt to quantify some amorphous phenomena. Because the target phenomena are amorphous, tests are inherently probabilistic and inherently imperfect. They are only expected to quantify the margin of uncertainty using a structured and replicable analytic procedure.

If the procedure is not structured and replicable - if it depends on the personal prowess of the expert - then it is a clinical procedure. These are useful when we do not have a structured and replicable analytic procedure. But the problem is always subjectivity - it seems that there is always another expert with a bigger degree and more grey hair who is willing to offer the conclusion that is sought and bought.

And so structured analytic procedures have tended to rather flatly outperform clinical procedures over several decades of research across a variety of professional disciplines - even though expert practitioners have historically tended to sell near certainty around their conclusions, whereas structured analytic procedures simply quantify the margin of uncertainty.

As often occurs there are growing pains and professional (ego) conflicts among those who love the old-school models (claims of virtual certainty supported by self-aggrandized experteeism) vs analytic models for which the basis of validity is the process itself and not so much the persona of the expert.

Different types of PCSOT exams merely clarify the different types of purposes and objectives for these exams. They do not themselves form the basis of validity. Dan's argument is simply another example of his misunderstanding of science and validity.

As always, there is still more to learn.

.02

rn




Dan Mangan

Quote from: the_fighting_irish on Aug 25, 2015, 02:03 PM

ah the good ole days when it was just

maint
mont
SH
and inst off

We have gone from 4 tests to 7 tests. 


Yes, Joe, polygraph's most lucrative ca$h cow had calve$.

cha-ching

Dan Mangan

#124
Ray "Believe me, I'm a scientist" Nelson sez...

Quote from: Raymond Nelson on Aug 25, 2015, 03:59 PM

If the procedure is not structured and replicable - if it depends on the personal prowess of the expert - then it is a clinical procedure. These are useful when we do not have a structured and replicable analytic procedure. But the problem is always subjectivity...


Personal prowess is the foundation of all polygraph testing. Subjectivity -- for example, bias in the form of sympathy or contempt for the test subject -- is a very real problem.

That's why the APA model policy for PCSOT has a strict rule limiting the number of times a polygraph operator can test the same subject:

5.7.2.
Number of exams per examinee. Examiners should not conduct more than four separate examinations per year on the same examinee except where unavoidable or required by law or local regulation.


If the polygraph "test" process were as scientifically valid and analytical as Ray wants people to believe, there would be no need for that rule.

The pro-polygraph propagandists often compare polygraph accuracy to that of film mammography. Can you imagine a similar rule being applied to radiologists and x-ray technicians?

Polygraph "testing" is all about the expertise of the examiner. There is precious little science involved.




Joe McCarthy

Forgive my slow response, Ray.

I see valid points on both ends and am trying to find a way to articulate them.
Joe

Raymond Nelson

Joe,

You and I and others, including Dan, were probably all taught in polygraph school that we should say that the polygraph is nearly infallible if you have a competent examiner. This is what was taught back in the day. It feels good because it both glorifies our expertise and also gives us our personal marketing angle.

Dan is simply adding confusion again. Restrictions on the number of exams were necessary to prevent the impulse toward practices in which the examination procedure is short-cut so that more exams can be completed each day. This is historically an issue for private examiners - and we've all heard the stories from the 1980s (Doug Williams era) in which examiners ran numerous exams each day, and the commodity of interest was the confession and not the test result. Government examiners today will often conduct only 1 or 2 exams per day, with sufficient time for each.

If the solutions embedded in the details and language of our published standards for examination scheduling look odd or not completely satisfying, that is probably a reflection of the social and political difficulties involved in putting such a standard and restriction into place.

But yes, in old-school anti-science polygraph practice the basis of expertise is the persona of the expert.

The problem for us today is this: what we can actually describe and replicate, in terms of test precision and error rates, does not seem to agree with the historical claims of infallibility.

Dan has a unique angle on this because he has published a study showing ~100% accuracy - a study that others have panned as unscientific and a failure of the peer review process. To put this in context, none of us believe that Dan actually wrote the study - it reads exactly like the written language of Matte - and the journal that published it was, at the time, allowing authors to suggest their own peer reviewers.

Now look at Dan's procedure, and notice that there are 23 scoring features in Dan's/Matte's model, along with 23 rules. This is a manual scoring protocol for which Matte reported a reliability coefficient in excess of .99 - which means manual scorers almost never disagree while using those 23 features and 23 rules. (For my part, I cannot remember 23 things, let alone 46 things.) So it is suspicious. And we have the problem that those 23 features and 23 rules cannot be organized into a logical flow-chart or algorithm from which we could program a computer to achieve automated reliability. This highly complex model is in fact a subjective and unstructured clinical model disguising itself as an analytic model.
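The arithmetic behind Nelson's suspicion can be sketched: if each of the 23 manually applied rules carries even a small independent chance of scorer disagreement, the probability that two scorers agree on every rule in an exam drops quickly. The 2% per-rule figure is purely hypothetical, used only to show the shape of the argument.

```python
def full_agreement_prob(p_disagree_per_rule, n_rules):
    """Probability that two independent scorers apply every rule
    identically, assuming each rule carries a small, independent
    per-rule disagreement rate."""
    return (1.0 - p_disagree_per_rule) ** n_rules

# Even a modest (hypothetical) 2% per-rule disagreement rate makes
# exam-level unanimity across 23 rules far from guaranteed:
p = full_agreement_prob(0.02, 23)   # ~0.63
```

Under these assumptions, near-perfect agreement (a reliability coefficient above .99) would require per-rule disagreement rates approaching zero - which is the implausibility Nelson is pointing at.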

So what we have in Dan's old-school polygraph model is in fact a clinical process - not unlike the historical tradition in the greater polygraph profession of the 1980s.

Examiners who were trained before 2006 probably had to learn and memorize 23 or 25 scoring features - most of which were without scientific support - whereas today we tend to focus mainly or only on the things that are statistically significant discriminators, and these are smaller in number.

Also, notice that automated computer algorithms tend to make absolutely no use of dogmatic rules that have no scientific support.

Again, what we can actually replicate using structured and even automated procedures seems a lot more conservative than the ~100% accuracy reported by the clinical model of Dan and Matte.

In the end, it may be our choice: old-school polygraph the way Doug Williams accuses - in which the test result is simply a "tool" for gaining confessions, and for which examiners are secretly embarrassed about the test result because they cannot realistically quantify the margin of uncertainty around an old-school clinical process unless they get a confession. Or we can have evidence-based 21st-century polygraph, in which we attempt to realistically quantify the level of precision and uncertainty with which we should regard the test result.

At the present time we seem to be observing both old-school clinical polygraph (in which we can only adopt a form of blind faith that the examiner is in fact an unbiased expert with no subjective interference) and new-school practices (in which we emphasize evidence-based, norm-referenced, and standardized protocols for both test administration and test data analysis, so that the analysis can be replicated).

The tension we observe today is sometimes a product of the dynamic and dialectical process between these two professional practice paradigms. Old-school polygraph would be called an "expert-practice model" - or the even older "experimental-practice model," wherein the expert observer is simply experimenting and learning on each new case. Most professions today would view experimental practice with a lot of ethical caution unless we have both no existing solutions and the informed consent of the individual who will be subject to the experimental procedure.

Most professions today will also look with caution at the continued use of an expert-practice clinical model - for which analytic procedures have been repeatedly shown to be potentially vulnerable to confirmation bias (just see the 1986 Diane Sawyer event), and for which the analysis is largely unreproducible and dependent on the persona of the expert - at a time when there do exist structured and replicable test administration and analytic procedures that do not depend on selling false hope in an "infallible" conclusion as a basis for instilling public confidence.

It's our choice.

I believe the existence of and interest in this particular website is some evidence of a public desire for a replicable and accountable analytic solution for the lie detection and credibility assessment needs of our communities and governments.

Finally, Dan's confusion can be seen more easily when we consider that even an evidence-based, norm-referenced, and standardized test administration and analytic model will still require that we take the time to do it correctly. In fact, things like standardized practices become even more important when we decide not to be satisfied with a test result simply because we are impressed with the CV or persona of the expert - unless that expert uses an evidence-based, norm-referenced, and standardized protocol for which the analysis can actually be replicated.

As always,

.02

rn



Dan Mangan

#127
Quote from: Raymond Nelson

The need for restrictions on the number of exams was necessary to prevent the impulse toward practices in which the examination procedure is short-cut in time so that more exams can be completed each day.

That's bullshit, just like much of what Ray says.

The reason why examiners are prohibited from testing the same examinee more than four times a year is so familiarization -- in any capacity -- does not contaminate the polygraph "test."

Again, I ask you... The pro-polygraph propagandists often compare polygraph accuracy to that of film mammography. Can you imagine a similar rule being applied to radiologists and x-ray technicians?

It's absurd.

If Ray truly believed in what he's saying about polygraph's scientific validity, he'd endorse both a countermeasure challenge series and a bill of rights for polygraph test subjects.

Why the resistance, Ray? Please explain. [cue crickets]

People, I strongly suggest that you do not buy into the polygraph-science snake oil. From what I've seen in my 10+ years in the field, it is but a mere pipe dream. Fortunately, the vast majority of the courts share my view.

The polygraph indu$try is all about money, and "scientist" Ray Nelson is a rainmaker of the first order.

Raymond Nelson

Dan,

Which exact part is BS?

The part about your publication of an unrealistic ~100% accuracy for a manually scored polygraph with 23 features and 23 rules?

The part about an unrealistic reported reliability coefficient of .99 for your method - meaning that different manual scorers almost never disagree?

Or the part about scientific reviews converging at something lower than ~100% accuracy?

Or the part about the fact that I, and probably some others, cannot remember 23 features and 23 rules every day?

Or the part about the lack of an unambiguous logical flow-chart for those 23 scoring features and 23 rules?

Or the part about the history of polygraph originating in a clinical model for which the basis of validity or precision was assumed to be the examiner?

Or the part about the trend toward increased use of numerical scoring and quantitative analysis as a solution to argument and disagreement around unreplicatable clinical opinions/conclusions from experts who were acting subjectively in the absence of replicable quantitative models?

Tell us please which part is BS?

Keep in mind that there are some rather well-known phenomena that can be expected to occur whenever a person is presented repeatedly with the same stimulus. And while the exact influence of these has not been completely quantified, there is some experience and evidence on which to base some reasonably cautious policy assumptions.

And you should also keep in mind that known phenomena associated with retesting and repeated presentation of test stimuli are not solely a function of either a clinical or quantitative analytic model.

So it makes little sense that you adopt an anti-science posture toward the polygraph - except when considering the market potential for a polygraph examiner who wants to sell ~100% confidence (over-confidence) in a unique and proprietary brand of clinical secret sauce, for which the basis of validity is having been trained by a certain person. I get it: your kind of polygraph is definitely not science. And if your brand of polygraph ain't science, then what is it? Maybe it's just marketing, in which you pander to those individuals in desperate need of an anti-science polygraph examiner. (There does seem to be a market vacancy at this time.)

.02

rn


Dan Mangan

#129
Ray,

Here are some examples of your BS...

You claim that I "market" 100% accuracy. That is false.

You claim that my business model is predation upon desperate clients. That is false.

You imply that my one study -- actually a micro-survey documenting the performance of a single expert examiner -- is to be interpreted as guaranteeing deterministic perfection for anyone who uses the MQTZCT. The study may be a true outlier, but highly experienced examiners have been known to have very lengthy stretches of perfection. I am totally forthright with all prospective clients about the risks, realities and limitations of the "test" -- regardless of what technique is used and who is administering it. See my "Recommended Reading" web page: http://polygraphman.com/id59.html

As for your inability to remember a bunch of rules, I can't help you there other than to say "try harder."

Now, here's something that's not BS...

The pro-polygraph propagandists within the APA saw the kind of upward traction my realist position has been gaining over the past two election cycles with the progressives within the organization.

That trend has proven to be so alarming to the establi$hment, the board of directors decided to prevent me from running for president-elect in 2016.

I guess moving the goalposts was the safest short-term solution to avoiding of full-blown schism within the Church of Polygraph.

Raymond Nelson

Dan,

I asserted that you published a study claiming ~100% accuracy, and that your ranting is simply part of marketing your consultation services to individuals who are so desperate they agree to pay your fee after you have assuaged your conscience by providing them all the negative information you can find (and your use of derogatory names suggests that you lack concern for them as individuals). Call it what you want; to me it looks like marketing.

http://mattepolygraph.com/2008_fieldstudy_quadritrack.html

I quote you on page 23 (last paragraph) when you assert that your technique can "... nullify the effect of countermeasures... "

That is a position that was reargued by you (or whomever wrote the paper for you) in your published rebuttal to Iacono and Verschuere et al. who published their concerns about your conclusions.

Seems like marketing to me, but what do I know.

I suppose it is possible that you truly believe in these claims.

So perhaps you can clarify for everyone whether you believe your favored polygraph technique to be ~100% accurate and capable of "nullify[ing] the effects of countermeasures," as you wrote and published? Or is this just an abuse of the publication process to achieve some slick marketing and self-promotion?

If you do not believe that your claimed ~100% accuracy is reproducible or generalizable, and if this is not marketing and not mere self-promotion, then why not contact the journal editors to retract those publications? Would that be bad for business? Would that damage your credibility or the authenticity of your message?

.02

rn




Dan Mangan

#131
Ray, enough talk.

Let's settle things with an officially sanctioned countermeasure challenge series at multiple APA events.

Now, Mr. Scientist, bring on the excuses.

pailryder

Dan

Man up and answer Ray's question!

Have the crickets got your tongue?
"No good social purpose can be served by inventing ways of beating the lie detector or deceiving polygraphers." - David Thoreson Lykken

Joe McCarthy

Joe

Dan Mangan

pailryder, how noble of you to come to the aid of the reluctant scientist Nelson who is too afraid to put the "test" to the test.

Just as certain TAPE operatives are afraid of Joe McCarthy's polygraph challenge, the APA leadership is afraid of mine.

Does anyone see a trend here?

Little wonder, then, that the APA has quietly disposed of its erstwhile motto, "Dedicated to Truth."

Keep circling the wagons, boys.
