The Polygraph Place



  Polygraph Place Bulletin Board
  Professional Issues - Private Forum for Examiners ONLY
  Questions on validated techniques
Dan Mangan
posted 04-26-2007 08:37 AM
Is the 2-RQ federal bi-zone (You-Phase) test a validated technique? If not, are federal examiners who use the bi-zone as a specific-issue breakdown test -- as described in the Federal PDD Examiners Handbook -- in violation of APA standards of practice?

Is the use of neutral questions in a Utah test mandatory? Raskin's chapter in Kleiner's book does not indicate that the NQs are optional, but I recall someone on this board stating that they are optional.

Thanks,
Dan

Barry C
posted 04-26-2007 08:41 AM
I'll get back to you on the first question.

As for the second question, yes, they are mandatory. The reason, according to Dr. Kircher, is to make the test less biased against the truthful, which I'll explain as best I can if necessary.

Dan Mangan
posted 04-26-2007 09:41 AM
Thanks, Barry. I understand Kircher's angle -- no need to explain. My own preference is to let the see-saw of psych set flow undisturbed, hence my desire to drop the NQs.

I look forward to any light that can be shed on my first question and its follow-up.

Dan

Barry C
posted 04-26-2007 10:25 AM
To answer your first question, it depends on how one interprets what a "validated testing technique" is.

Here's the current definition, which changes in 2012 to what was published in the January magazine:

quote:
3.2.1.3 Validated Testing Technique: A polygraph testing technique, for which exists a body of acceptable scientific studies.

What is "acceptable," and what is a "body"? I suppose you could argue this one either way with what is already out there in the literature. There was at least one study done on the two-RQ version, which I'd have to hunt to find. You could argue, as Honts would for example, that a CQT is a CQT and they are all valid if they utilize all the necessary elements (i.e., principles).

However, when you look at the specific percentages in the new language, I think you'd have to say you need replicated studies on those individual formats (with acceptable deviations supported by well founded principles).

As for your second question, how do you know the "see-saw" flows "undisturbed" without the neutral? That's the opposite of Dr. Kircher's position. The reason for the neutral is to allow a person to answer the CQ with a beginning signal that is "undisturbed." Otherwise, you get a smaller reaction and bias the test against the truthful.

As an aside, somebody brought up Matte's inside / outside track to Dr. Kircher at last week's AAPP seminar. He thought it was a novel idea to try to pin down the psychological construct underlying PDD. He did advise against using the test in the field, and then he gave us his opinion of what it is that causes people to respond to any given question, and fear wasn't on his list.

Dan Mangan
posted 04-26-2007 01:25 PM
Very interesting. Thank you. I wonder... How many real-life field exams (e.g., criminal issue/fidelity/PCSOT cases) has Kircher himself conducted? What's the breakdown (percentage-wise) of his own "field tests v. lab tests" experience?

Barry C
posted 04-26-2007 01:57 PM
Dr. Kircher isn't an examiner. He's a researcher. It is my understanding he maintains a database of both field and lab data. In any event, I don't think anybody has ever found a statistically significant difference between the accuracy of lab decisions and field decisions, so I'm not sure what your point is. If you're going to argue lab data doesn't generalize to the field, then we're all in trouble.

Your fear of error / hope of error would still apply in the lab. If I had $100 riding on a test, either could conceivably apply.

Dan Mangan
posted 04-26-2007 02:41 PM
My point relates to Kircher's position on fear -- or lack of it. Of Kircher, you said:

"...and then he gave us his opinion of what it is that causes people to respond to any given question, and fear wasn't on his list."

No fear, eh? Really? Hard for me to believe. That's what made me wonder if Kircher himself has conducted many high-stakes field exams.

And speaking of high stakes, your assertion that a lousy hundred bucks would get one's fear/hope juices sufficiently flowing is, IMHO, a stretch. On the other hand, if someone's in that polygraph chair -- trussed up like a Christmas turkey with a video camera in his face -- as part of, say, a rape/murder/armed robbery/infidelity case, that's an entirely different story from a fear perspective. Just my $.02.

Dan

Barry C
posted 04-26-2007 02:56 PM
You're forgetting differential reactivity. How much fear shouldn't really matter.

We've proved in the lab that fear isn't necessary for a test to work, so that's a done deal. His idea - and those who heard it, correct me if I'm wrong - was essentially cognitive complexity.

I didn't get any benefit in polygraph school - just duping delight, and that seemed to do the trick.

Most of my murder cases go NDI - with no fear questions involved, so how much of a problem is it if at all? (I'll save you the time: we don't know, but it can't be much.)

stat
posted 04-26-2007 05:53 PM
My experience in poly school, along with fellow students, was a high rate (I'd guess 1 in 4) of false pos' and false negs' -- but as I recall, mostly inconclusives. Initially, I and several others were not impressed with CQT polygraph in the lab (SPOT, POT, and CKT were a different story though). It wasn't until I entered the field after some time that poly really resonated with me. This is only my simple opinion though. I do feel that, contrary to research indications showing the usefulness of lab settings, poly tests are similar to cardio stress tests in that ya really have to get things stirring for accurate results -- be it true pos or true neg. Just whether that is "fear" or not I defer to the endophysiologists and the lot. I still to this day am amazed at lab studies that show better than barely-above-chance accuracy. Hell, I would even go as far as to say that I'm a little suspicious of lab research on polygraph. Call me silly.

Barry C
posted 04-26-2007 06:21 PM
You sound like the CVSA people - it only works well in the field. That's a problem.

How do you know you weren't the problem in polygraph school? Being new has costs associated with it, and not all schools start people scoring charts on day one. The data could have been there, but you may not have seen it correctly. Looking at the NAS findings et al, we seem to have a good number of examiners who've been in the field a while who can't score charts as well as others. If you re-scored those charts today, I suspect you'd have more correct conclusions. Keep in mind, back then you scored things that would be the opposite of what they should have been.

We've got to get away from how we "feel" about things, and find out what we can prove. One thing I've noticed in this field is a lot of things have been settled, but polygraph examiners haven't figured that out yet, and we fight over battles that have already been won or lost.

I've kept quiet (except for personal conversations) on what I think is going on, as I'm not sure, and I suspect it's different things (or combinations thereof) for different people. In any event, the physiology lines up better with an orienting response than it does with FFF, which changes things.

Dan Mangan
posted 04-26-2007 06:47 PM
Barry,

You said:

"One thing I've noticed in this field is a lot of things have been settled"

Please identify the things that have been "settled."

Dan

Barry C
posted 04-26-2007 06:50 PM
Are you out of your mind? We'd run Ralph out of space, and I don't have that much time to re-type the work of so many people. Read the literature.

Dan Mangan
posted 04-26-2007 06:58 PM
Don't worry about Ralph. C'mon, Barry, humor us. List the top ten "settled" items as one-line bullet points.

Barry C
posted 04-26-2007 07:21 PM
Now I've got to figure out how to create bullets on this board to make you happy too? I thought I did well with the quote thing. When will it end?

Really though, you don't think polygraph science has progressed to the point at which we've settled certain issues? For example, the CQT lacks construct validity, but not criterion validity. Truthful people do not react as strongly to CQs as the guilty do to RQs. A multiple-issue test is less accurate than a single-issue ZCT. The more issues, the greater the rate of INCs (and errors), etc. Asking an RQ before a CQ sends your score in a more negative direction. Polygraph is biased against the truthful. Again, fear isn't necessary for polygraph to work. Polygraph isn't perfect. And - I think this is at least 10 - polygraph, in the hands of a well-trained examiner, discriminates between the truthful and deceptive at well above chance rates.

Now you re-type them with the bullets.

Dan Mangan
posted 04-26-2007 07:39 PM
Indeed, when WILL it end? :-)

o CQT lacks construct validity, but not criterion validity

o Truthful people do not react as strongly to CQs as the guilty do to RQs

o A multiple-issue test is less accurate than a single-issue ZCT

o The more issues, the greater the rate of INCs (and errors), etc.

o Asking an RQ before a CQ sends your score in a more negative direction

o Polygraph is biased against the truthful

o Again, fear isn't necessary for polygraph to work

o Polygraph isn't perfect

o [polygraph] in the hands of a well-trained examiner, discriminates between the truthful and deceptive at well above chance rates
-------------------------------------------

Well done, Barry! (Even if it's just nine, but I'll make up the diff:)

o Complex scoring rules reduce inter-rater reliability (so they say)

Seriously, we may need to agree to disagree now and again, but the exchanges on this board are all for the good...

When you get any definitive info on my bi-zone validation question, I'd appreciate your post.

Dan

Barry C
posted 04-26-2007 07:58 PM
o CQT lacks construct validity, but not criterion validity - Hey, that's two!

As far as the Bi-zone issue is concerned, that's about all I have. I'll see if I can find the study to which I referred. If I recall correctly, the study wasn't designed to prove it worked, but it did so in the process. I tried to get Bi-zone data at one time, but it's lacking, so when it'll make the valid list is an unknown. Perhaps somebody could do another lab study in which variables can be controlled?

rnelson
posted 04-26-2007 07:58 PM
I've been trying to just lurk, 'cause I'm swamped in a bunch of math right now...

but then this...

quote:
For example, the CQT lacks construct validity, but not criterion validity.

I agree with most of what you've said, Barry, but I think this is overstated.

Deficits in construct validity for the CQT have been asserted by polygraph adversaries, but that doesn't mean it's as simple as that. We give in too much to our opponents by ending the statement like that.

  • If the CQT has criterion validity, then there is somewhere an explanation for the underlying constructs
  • If we are not satisfied with arcane vagaries like psychological set, then we simply have more work to do to understand the construct

It's highly unlikely that there is absolutely no possible explanation for the psychological and physiological mechanics (constructs) that support the criterion validity of the CQT.

We have to be willing to follow the data, discard constructs that cannot be either defined or validated, and endorse discussion about constructs that do a better job of accounting for the variety of phenomena observed in the CQT. Cognitive complexity, for example, provides a much better integrated explanation of psychological and physiological reactions associated with the CQT than does fear or psychological set.

------------------
"Gentlemen, you can't fight in here. This is the war room."
--(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


stat
posted 04-26-2007 08:29 PM
Barry said: "How do you know you weren't the problem in polygraph school?"
Barry, whatever you heard about me from school, it's not true -- except for the bestiality -- I was drunk.

Barry said: "Being new has costs associated with it, and not all schools start people scoring charts on day one. The data could have been there, but you may not have seen it correctly."
True, but I've revisited those analogue charts, and aside from the ridiculous changes/revisions for pneumo scoring over the years, the calls are the same.
Barry said: "Keep in mind, back then you scored things that would be the opposite of what they should have been."
I am the least proud of this possibility over anything else within our beloved but highly dysfunctional field. Sigh.

Dan Mangan
posted 04-26-2007 08:41 PM
"Keep in mind, back then you scored things that would be the opposite of what they should have been."

Clearly, you are not speaking of the Backster School. What are some examples of these then-and-now polygraph flip-flops?

Morbidly curious,
Dan

stat
posted 04-27-2007 07:46 AM
I don't know about you all, but the following phasic activities are not scored as they would have been years ago: anticipatory arousals (1-2 seconds before the question), amplitude changes on pneumos (vs. now suppression). Also, I look more cautiously at cardio baseline rebound vs. assuming arousal, I no longer get excited for "complex arousal" on GSR -- and stick with amplitude only -- and don't get me started on duration of cardio and GSR -- no longer viable criteria. Quite frankly, the old scoring rules were supposed to be affirmed with empirical research, so pardon my cynicism for the "better and latest" scoring criteria. I stick with the latest "defendable dozen," but FEEL a little annoyed and suspicious that D. Krapohl and the other researchers found such diminished accuracy even WITH the newer, more viable Dozen. It makes one feel a little impugned for carrying on for years with bad practice. I suppose it's better knowing than not knowing. What next though?

I will predict here and now, with only anecdotal evidence, that research will show in the future that pneumographic polygrams do not indicate deception whatsoever. Sure, you see pneumo reinforcement on charts on occasion, but most of the time the pneumos are a nuisance, a prop, and most of all, a component to keep the test incorporating the 4-component rule of testing. I'd bet money that if the APA stated that pneumos weren't useful, many examiners would decorate their Christmas trees with them -- rather than coming to their defense as viable components.

rnelson
posted 04-27-2007 09:51 AM
So, how do you really feel about those pneumos?

To me it's not really a surprise to find them the least useful components. Pneumos are inherently the least reliably interpreted component, especially while the interpretation is not mechanized in a tangible, objective, and repeatable manner. I know some folks will try to argue that our scoring rules are objective and repeatable, but most of our criteria (aside from RLL) are not really objective but impressionistic.

The pneumos could be argued to collect the richest volume of information, from both sympathetic and peripheral activity. That is the problem. Humans, when presented with a wide array of information, will attend selectively to that which they think they understand and that which their anecdotal experience tells them is useful - and that is not the same as attending to the data that offer the most robust correlation with the criterion. A parallel example in psychology is the Rorschach (ink-blot) test - arguably the single richest source of test data for a variety of psychodynamic concerns. The Rorschach is widely maligned as not very reliable, not because it is not a good test, but because it is so complex and variable that different evaluators sometimes see different things in the data - just like the pneumos.

In psychology, we still like tests that cannot be mechanically or reliably measured, but we call them projectives. That is because the evaluator is encouraged or required to interpret or read into the data, and synthesize that data with other data. It is understood by psychological evaluators that doing so inevitably introduces personal and professional bias - we project our own understanding onto the data. Objective data, on the other hand, will be interpreted as meaning the same thing every time - because the collection and interpretation of the data are mechanized. Other examples of projective assessments are the famous draw-a-man, house-tree-person, the Bender-Gestalt test, the Walker images (photos), the Rey-Osterrieth complex figure, and even sentence-completion exercises. All good comprehensive psych evals include some projective testing alongside objective testing.

Pneumos, because of their complexity, and because of the difficulty of obtaining objective measurements, are inherently the least reliable and least productive component. They may contain the richest volume of info, but we may not be able to use a lot of the data because there is so much at once. That is, unless we mechanize and simplify the data. Pneumos may also be among the components most vulnerable to voluntary interference.

I know that was a digression, but it's helpful to understand polygraph testing and measurement challenges as they compare to testing considerations in our sister sciences.


r

Barry C
posted 04-27-2007 11:52 AM
Ray,

I'm going back a couple of posts. Of course it was overstated. I had almost no inclination to answer it at all. I know there's a construct out there that we've got to identify, and I believe we will one day. With that said, point well taken: without being careful here, it could appear I believe our opponents' errors.

Gotta run.

rnelson
posted 04-28-2007 12:20 PM
Thanks Barry,

I already knew that you appreciate the importance of those types of conceptual subtleties. I felt compelled to respond because we sometimes repeat such statements to the point where they become mindless platitudes that serve only to prevent further thought.

Now, on to the real concerns...


  • Why would a BiZone be less valid than a ZCT?
  • What are the real differences in the underlying constructs for the several ZCT variants?
  • What are the differences in the constructs underlying the MGQT variants?

And while we are at it,


  • What are the construct differences between MGQT variants and ZCT variants?

  • What are the differences in the ways that the test data features themselves are interpreted, among the MGQT and ZCT variants?
(Please, not the simpleton bean-counter's answer about the number of published studies. The questions are about the differences in the underlying constructs.)

I would propose that there are very few, if any, differences in the underlying constructs. Those that do exist have mostly to do with some of the more esoteric variants that attempt to employ constructs like the inside track or fear/hope of error. Those constructs require their own validation, and the criterion validity of dataset results is not sufficient for that.

So, if there are very few construct differences, what then are the differences among the various CQT techniques, and why is it so important that we attend to these differences? (Aside from the obvious ego-driven hypermasculinity contests: my technique is bigger than yours.)

I propose that the difference has primarily to do with the number of questions and charts, and the effect that has on cumulative data (i.e., 3 questions and three charts will produce potentially greater total sums than 2 questions and three charts, and 4 or 5 charts will generally produce greater totals than three charts).

The discussion about increasing thresholds is not unfounded, just incomplete. ZCT and MGQT thresholds of +/- 6 are sometimes adjusted to +/- 4 for BiZone tests. That is nice, and there may be criterion studies that show decision accuracy to be adequate or comparable to ZCTs or whatever. But nobody has ever shown the math, and the actual significance of the results remains unknown. It is no less unknown for ZCT or MGQT variants than for BiZone/U-Phase variants.

The concern is addressed by the OSS methods (1 and 2), which account for the limitations of a cumulative data structure involving 3 questions and 3 charts by imposing a limitation on the application of the method to other techniques with more or fewer questions and more or fewer charts. Not because the underlying constructs are different, but because the cumulative totals are affected by the frequencies of questions and charts (3x3), and because OSS attempts to account for decision accuracy through the normal distribution function of the standardized cumulative total. Statistically based decision thresholds for cumulative data models require separate stratifications for the matrix of conditions (2 to 4 questions and 3 to 5 charts).

(Wouldn't it be nice if some propeller-headed person would do the math necessary to allow the confident and expedient statistical interpretation of all types of tests based on similar constructs, and end this silly tail-chasing discussion about valid techniques?)
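That standardization step - converting a cumulative chart total into a probability through the normal distribution - can be sketched roughly as below. The mean and standard deviation used here are made-up placeholder norms for illustration only, not actual OSS parameters:

```python
import math

def normal_cdf(x):
    # Standard normal CDF computed from the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def standardized_total_p(total, norm_mean, norm_sd):
    """Standardize a cumulative total against reference norms and return
    the one-tailed probability of seeing a total at least this large
    under the reference distribution.

    norm_mean and norm_sd are hypothetical placeholders, NOT published
    OSS values.
    """
    z = (total - norm_mean) / norm_sd
    return 1.0 - normal_cdf(z)

# Example: a +9 cumulative total scored against assumed deceptive-case
# norms of mean -12, sd 9 -- purely illustrative numbers.
p = standardized_total_p(9, -12.0, 9.0)
```

The point of the sketch is only that decision thresholds become probability statements once totals are standardized, which is why thresholds would need to be re-derived separately for tests with different question and chart counts.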

So, I'd love to hear more about Honts' statements about valid constructs or principles inherent to the CQT.

r

Dan Mangan
posted 04-28-2007 12:57 PM
Ray,

Great stuff. Your question -- "Why would a BiZone be less valid than a ZCT?" -- gets to the core of my original inquiry on this thread. I look forward to hearing any theories or observations from fellow examiners.

Speaking of validation concerns, I have a few more questions...

1. Does the use of inclusive CQs "invalidate" a Federal Zone (or bi-zone)? (Don's NPC guide sheet specifies exclusive CQs, but perhaps this has been superseded by "validated principles." Dunno.)

2. Does the use of a "softer" OI question (e.g., "Do you understand that I will only ask questions that we have reviewed?") invalidate the zone/bi-zone?

3. What about replacing the last OIQ of a bi-zone with an IQ? Acceptable?

4. The Krapohl/NPC guide sheet for the federal zone technique states that directed lies are not allowed. Is this still the official policy?

5. Is there any practical difference between rotating the CQs vs. rotating the RQs?

Dan

rnelson
posted 04-28-2007 01:34 PM
Dan Mangan wrote:
quote:

5. Is there any practical difference between rotating the CQs vs. rotating the RQs?

Funny you should ask.

I was just now working on a bootstrap t-test of the significance of observed differences in response amplitude by RQ position - using the training dataset for OSS (almost 300 ZCT cases).

There are some observable differences in mean scores - R5 produces stronger DI values than R7, and R7 produces stronger DI values than R10. For NDI cases it's the opposite: R10 produces stronger NDI mean values than R7, and R7 produces stronger NDI means than R5. That might seem to suggest that subjects react more strongly to the R5 position. However, it's difficult to interpret confidently without some form of significance test. So, I'm running a bootstrap permutation t-test, using 1000 resampled permutations of the dataset, so that I can use a bootstrap estimate of the variance for my t-test.
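For readers unfamiliar with the method: a permutation test asks how often a difference as large as the observed one shows up when the position labels are shuffled at random. A minimal sketch of that idea follows - it uses a plain permutation test on the mean difference, which is simpler than the bootstrap-variance t-test described above, and the amplitude numbers are invented for illustration (this is not the OSS dataset):

```python
import random
import statistics

def permutation_test(sample_a, sample_b, n_perm=1000, seed=1):
    """Two-sample permutation test on the difference of means.

    Shuffles the position labels n_perm times and counts how often a
    label-shuffled mean difference is at least as extreme as the one
    actually observed (a two-sided p-value).
    """
    observed = statistics.mean(sample_a) - statistics.mean(sample_b)
    pooled = list(sample_a) + list(sample_b)
    n_a = len(sample_a)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Invented amplitude scores by RQ position -- NOT the OSS dataset.
r5_scores = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 4.8, 4.2]
r10_scores = [3.2, 3.5, 2.9, 3.8, 3.1, 3.4, 3.6, 3.0]
obs_diff, p_value = permutation_test(r5_scores, r10_scores)
```

A small p-value would mean a positional difference this large rarely appears by chance alone; resampling the labels sidesteps any normality assumption about the amplitude scores.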

More later.

r

Dan Mangan
posted 04-28-2007 02:02 PM
Thanks, Ray. You said:

"There are some observable differences in mean scores - R5 produces stronger DI values than R7,..."

I'm not surprised, as Backster hammers this home early and often as it relates to his You-Phase test. R10 (evidence connecting) is another ballgame, for obvious reasons.

Cleve discourages the third (evidence connecting) RQ with a passion, claiming it turns a single-issue test into a multi-issue test... Makes me wonder how OSS would be different (if at all) had the training model of nearly 300 zone tests been pure single-issue tests as Backster espouses.

Dan

Barry C
posted 04-28-2007 02:33 PM
Okay, I just finished something I had to get done, and now I've come here, as I often do, to get away from reality and veg a little. Some of these questions will attempt to force me out of this state; however, I will not be dragged back!

Dan,

quote:
1. Does the use of inclusive CQs "invalidate" a Federal Zone (or bi-zone)? (Don's NPC guide sheet specifies exclusive CQs, but perhaps this has been superseded by "validated principles." Dunno.)

According to a conversation I had with Charles Honts, there's no study to show any one CQ type is any better than any other, so no, it shouldn't matter, but be prepared to defend yourself in a silly debate. As for me, I use "Not connected with this case..." often - regardless of whether I run a Bi-zone, Army ACT, or Utah ZCT. (Hey, it worked in the CPC validation study.)

quote:
2. Does the use of a "softer" OI question (e.g., "Do you understand that I will only ask questions that we have reviewed?") invalidate the zone/bi-zone?

That's from the Utah test, and it's really not an OI question. It's placed there for those who think they must have it, and it's first in the series, so a reaction is almost guaranteed - though not meaningful (as far as an OI is concerned). They call it an "introductory" question; although Dr. Kircher slipped (or maybe not?) and called it an OI question at the seminar.

In any event, I think DACA only suggests the form of the OI questions, but you can look it up in their handbook to be sure. If so, then the question you suggest should be okay.

quote:
3. What about replacing the last OIQ of a bi-zone with an IQ? Acceptable?

That's what I do, but I realize some will moan and groan. I'm prepared to defend that practice better than most of those who would challenge me.

quote:
4. The Krapohl/NPC guide sheet for the federal zone technique states that directed lies are not allowed. Is this still the official policy?

They haven't published anything to say otherwise, so yes, that's the case. (They use DLCQs in the TES.)

quote:
5. Is there any practical difference between rotating the CQs vs. rotating the RQs?

Dr. Kircher suggested both. I forget the term he used to explain why that was a good thing. It had to do with "balancing" something. I've got to review his presentation again - but not now.

He did say putting the CQs before the RQs favors habituation to the RQs, which helps overcome the bias against the truthful. There is probably also a lot of benefit to the Utah-type CQs rather than the strict "similar" CQs in order to avoid habituation.

For example, three lie or three theft CQs would be expected to habituate faster than three others, such as a lie, an illegal activity, and a guilt/shame or hurt/harm CQ.

The habituation phenomenon may explain what you are finding, Ray - as long as they are single-issue tests.

quote:
Why would a BiZone be less valid than a ZCT?

I've wrestled with this one a bit myself. We've mentioned before the general principle of science that says more data equals more accurate results, and right off the bat, you've got less data. (You can, however, run an additional chart.) Since the Bi-zone (or Bi-spot, or You-phase, I know) has more CQs than it does RQs, and since each RQ is scored to the strongest adjacent CQ, I've asked if that helps reduce or eliminate the bias against the truthful. Intuitively I would think so, but the data is lacking to see if that is the case. If my suspicions are true, then the Bi-zone could be found to be more accurate than the three-RQ version.

Because truthful people do not respond as strongly to CQs as the deceptive do to RQs, asymmetrical cut-offs make sense. However, as I've discussed before, policy decisions must be made in which those who set the policies decide which errors they are willing to live with, and missing liars isn't one some will be comfortable with - even at the expense of mislabeling the truthful.
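The asymmetrical cut-off idea reduces to a simple three-way decision rule. The thresholds below are hypothetical illustrations of asymmetry, not published standards:

```python
def call(total, ndi_cut=4, di_cut=-6):
    """Three-way decision from a cumulative total using asymmetric
    cut-offs. The default thresholds (+4 NDI, -6 DI) are hypothetical:
    a lower bar for NDI reflects the finding that truthful subjects
    respond less strongly to CQs than deceptive subjects do to RQs."""
    if total >= ndi_cut:
        return "NDI"
    if total <= di_cut:
        return "DI"
    return "INC"

# call(5) -> "NDI"; call(-7) -> "DI"; call(0) -> "INC"
```

Where the thresholds sit is exactly the policy question: moving di_cut toward zero catches more liars at the cost of mislabeling more truthful subjects.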

    quote:
    What are the differences in the ways that the test data features themselves are interpreted, among the MGQT and ZCT variants?

    What exactly do you mean here, particularly in regard to the "test data features themselves"?

    Okay, now it's time to go crack the whip and get my kids to accomplish something other than assisting with the Second Law of Thermodynamics.

    IP: Logged

    Barry C
    Member
    posted 04-28-2007 02:34 PM     Click Here to See the Profile for Barry C   Click Here to Email Barry C     Edit/Delete Message
    quote:
    Cleve discourages the third (evidence connecting) RQ with a passion, claiming it turns a single-issue test into a multi-issue test...

    Backster's right!

    IP: Logged

    rnelson
    Member
    posted 04-28-2007 03:34 PM     Click Here to See the Profile for rnelson   Click Here to Email rnelson     Edit/Delete Message
    Kids left the doors wide open again, eh Barry?

    Just don't go Alec Baldwin on them in your efforts to straighten them out.

    --------

    quote:
    Cleve discourages the third (evidence connecting) RQ with a passion, claiming it turns a single-issue test into a multi-issue test...

    Backster's right!

    I agree in principle, and there are a lot of things I like about the BiZone test for certain purposes.

    The mistake is in attributing an unwarranted level of precision to the logic of human language. We have mathematical, logical, and computer languages because human language is so fuzzy at times - that's good for poetry and music.

    I'm not at all convinced that asking multi-facet or evidence-connecting questions reduces the accuracy or validity of the test in the way that a strict logical interpretation would suggest.

    quote:
    He did say putting the CQs before the RQs favors habituation to the RQs, which helps overcome the bias against the truthful. There is probably also a lot of benefit to the Utah type CQs rather than the strict "similar" CQs in order to avoid habituation.

    For example, three lie or three theft CQs would be expected to habituate faster than three others such as a lie, an illegal activity, and a guilt/shame or hurt/harm CQ.

    The habituation phenomenon may explain what you are finding, Ray - as long as they are single-issue tests.


    Good point, and that is exactly the suspicion. Stronger reactions to R5 are not a feature of the question, but of the position.

    quote:
    I've wrestled with this one a bit myself. We've mentioned before the general principle of science that says more data equals more accurate results, and right off the bat, you've got less data. (You can, however, run an additional chart.)

    You are correct in principle. However, in practice that is true only because more data (measurements) allows us to better understand our measurement estimates (all measurements are estimates) by more effectively evaluating the variability of measurement. That's math, to you and me, and we never do that in field practice - plus I've never seen it in publication. So, any increased accuracy for that reason is purely fictional.

    quote:
    Since the Bi-zone (or Bi-spot, or You-phase, I know) has more CQs than it does RQs, and since each RQ is scored to the strongest adjacent CQ, I've asked if that helps reduce or eliminate the bias against the truthful. Intuitively I would think so, but the data is lacking to see if that is the case. If my suspicions are true, then the Bi-zone could be found to be more accurate than the three-RQ version.

    A better way to discuss this phenomenon is to focus on specificity, or sensitivity to truthfulness. The term bias, in science, commonly refers to an ever-present and anticipatable (somewhat controllable) difference between statistics based on a sample and their corresponding population parameters. For example: the mean (average) response score of truthful people - we would find one value in a sample, and hopefully that would be close to the population parameter mean. In reality they will never be exactly the same - that is why we say all samples are biased. It is our job to reduce and understand bias.

    The colloquial usage of the term bias is to refer to the disparity between things like sensitivity and specificity (to deception and truthfulness in polygraph) - but it's not exactly correct usage - and it perpetuates the mindless platitudes around polygraph bias.

    Experiments with OSS, while calculating R/C ratios using the mean of CQ values for each component, improve the specificity (reducing the "bias" you mentioned previously). Using the mean of CQ values is procedurally analogous to scoring to the stronger CQ.
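    To make the procedural analogy concrete, here is a toy sketch of the two ways of forming an R/C ratio for one component. The function name and numbers are invented for illustration - this is not the actual OSS formula, just the general shape of the comparison.

```python
def rc_ratio(rq_response, cq_responses, use_mean=True):
    """Compare one RQ's response magnitude to the adjacent CQs.

    use_mean=True divides by the mean of the CQ values (the OSS-style
    approach described above); use_mean=False divides by the strongest
    CQ (analogous to scoring against the stronger adjacent CQ).
    """
    if use_mean:
        denom = sum(cq_responses) / len(cq_responses)
    else:
        denom = max(cq_responses)
    return rq_response / denom
```

    Because the mean is pulled upward by the strongest CQ, the two denominators tend to move together, which is one way to see why the two scoring rules behave analogously in practice.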

    quote:
    Because truthful people do not respond as strongly to CQs as the deceptive do to RQs, asymmetrical cut-offs make sense. However, as I've discussed before, policy decisions must be made in which those who set the policies decide what errors they are willing to live with, and missing liars isn't an error some will be comfortable accepting - even at the expense of mislabeling the truthful.

    I agree. But to force the point further...

    How many liars do we think score as high as +4 on a ZCT?

    quote:
    What are the differences in the ways that the test data features themselves are interpreted, among the MGQT and ZCT variants?

    What exactly do you mean here, particularly in regard to the "test data features themselves"?

    Just a bit of sarcasm on my part.

    We score the exact same physiological criteria for ZCT and MGQT techniques. How can they be so different that the same physiological and psychological constructs do not underlie both methods?


    --------

    My overarching point is that we sometimes take our own metaphors too literally - as if "psychological set" were a thing - something that "flows," in fact. "Psychological set" is not a physical substance or thing; it's an explanation.

    We make the same mistake when we attribute unwarranted precision to the logic of language, and end up engaging in silly discussions (how many angels can dance on the head of a pin? does an evidence-connecting question make it a mixed-issue test?).

    -

    Siggy (Freud) made this mistake when he assumed the ego, id, and superego were actually things that we would eventually find inside people. It's important to remember when we are speaking metaphorically.

    -

    It's a common mistake. Consider early thermodynamic researchers and theorists, and our usage and understanding of the term "entropy."

    Not so long ago (a few hundred years), engineers endorsed a materialistic understanding of 'entropy' - like Freud - as if it were a thing or substance. They even did experiments to collect data and test their hypothesis - using cannons. Boring (drilling) cannon barrels created heat, and that wore down cutting tools and warped cannon barrels. Heat, it was explained, was the result of thermodynamic chaos, or entropy, caused by the physical operation of cutting into the metal. The solution was to cut or drill faster, so as to get the drilling done before too much of that there entropy got into the product (cannon barrel) and wrecked things.

    Entropy isn't a thing; it's an explanation (or a measurement), and metaphors are just metaphors.

    r

    ------------------
    "Gentlemen, you can't fight in here. This is the war room."
    --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


    IP: Logged

    Barry C
    Member
    posted 04-28-2007 05:05 PM     Click Here to See the Profile for Barry C   Click Here to Email Barry C     Edit/Delete Message
    I didn't say what significance there was to Backster being right. I simply said he's right when he says R10 can make a test a multi-issue test. That's a simple point on which we should all agree.

    When you look at Don Krapohl's "Validated Techniques" article, you'll see there was only a one point difference between the Federal ZCT and the Utah ZCT average accuracies (89% vs 90%). That's a pretty hard blow to Backster's argument (that stems from his statement with which I agree) that the Federal ZCT is so watered down by R10 that the test is highly suspect.

    IP: Logged

    rnelson
    Member
    posted 04-28-2007 05:29 PM     Click Here to See the Profile for rnelson   Click Here to Email rnelson     Edit/Delete Message
    Here are the results of the bootstrap ANOVA, using 1000 resamples of the OSS dataset.

    Raw Data
    -------------------

    Deceptive Cases (N=149) http://www.raymondnelson.us/oss3/DI_raw.jpg

    Mean p values

    R5 = .0354
    R7 = .0539
    R10 = .0541

    (with p values, smaller is more significant)

    Non-deceptive cases (N=143) http://www.raymondnelson.us/oss3/NDI_raw.jpg

    R5 = .0583
    R7 = .0436
    R10 = .0573

    Already it doesn't look very impressive, but the difference at R5 for the deceptive cases might be interesting.


    bootstrap ANOVA using 1000 resamples of the deceptive and non-deceptive data
    ------------------

    Deceptive Cases (k=1000, n=149) http://www.raymondnelson.us/oss3/DI_ANOVA.jpg


    bootstrap means

    R5 = .035
    R7 = .057
    R10 = .054

    (R7 showed the most regression)

    bootstrap ANOVA

    bootstrap F = .010

    bootstrap p = .990

    = the difference is not even close to statistically significant


    Non-deceptive cases (k=1000, n=143) http://www.raymondnelson.us/oss3/NDI_ANOVA.jpg


    R5 = .060
    R7 = .041
    R10 = .057

    bootstrap ANOVA

    bootstrap F = .009

    bootstrap p = .991

    = again not even close to significant

    (note: the symmetry of F and p is only because our values are at the extreme end of the F distribution)


    ---------

    So once again, it seems we may be making something out of nothing.

    Rotating or not rotating the RQs may not make any difference. It would be incorrect to assume that either choice is wrong. There appears to be no indication in this data that rotating or not rotating the RQs would "invalidate" or reduce the validity of the test.
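    For readers who want to see the shape of this procedure, here is a minimal sketch of a bootstrap one-way ANOVA in Python. This is not the actual OSS analysis code, and the arrays in the test are made-up stand-ins for the per-case p values at R5, R7, and R10; a full analysis would also look the mean F up in the F distribution to get the bootstrap p reported above.

```python
import random

def f_statistic(groups):
    """One-way ANOVA F: ratio of between-group to within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def bootstrap_anova(r5, r7, r10, resamples=1000, seed=1):
    """Resample whole cases with replacement and average the F statistic.

    Each case contributes its values at all three question positions, so
    the resampling preserves the within-case structure of the data.
    """
    rng = random.Random(seed)
    cases = list(zip(r5, r7, r10))
    total = 0.0
    for _ in range(resamples):
        sample = [rng.choice(cases) for _ in cases]
        by_position = [list(col) for col in zip(*sample)]  # regroup by RQ position
        total += f_statistic(by_position)
    return total / resamples
```

    With groups whose means barely differ - as in the R5/R7/R10 results above - the mean F stays near zero and the corresponding p is nowhere near significance.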


    r

    ------------------
    "Gentlemen, you can't fight in here. This is the war room."
    --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


    IP: Logged

    stat
    Member
    posted 04-29-2007 09:32 AM     Click Here to See the Profile for stat     Edit/Delete Message

    [This message has been edited by stat (edited 04-29-2007).]

    IP: Logged

    Barry C
    Member
    posted 04-30-2007 08:12 AM     Click Here to See the Profile for Barry C   Click Here to Email Barry C     Edit/Delete Message
    "Edited" or nuked?

    IP: Logged

    rnelson
    Member
    posted 04-30-2007 09:05 AM     Click Here to See the Profile for rnelson   Click Here to Email rnelson     Edit/Delete Message
    Yeah, what gives? I appreciated that post.

    r

    ------------------
    "Gentlemen, you can't fight in here. This is the war room."
    --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


    IP: Logged

    stat
    Member
    posted 04-30-2007 10:48 AM     Click Here to See the Profile for stat     Edit/Delete Message
    Sorry gentlemen, I reread my post and diagnosed myself with diarrhea of the fingertips. Sometimes my cynicism gallops away from a point like a wild mustang. I felt that my points(?) detracted from the thoughtful discussion. Sorry Dan, Ray, Barry.

    IP: Logged

    rnelson
    Member
    posted 04-30-2007 01:39 PM     Click Here to See the Profile for rnelson   Click Here to Email rnelson     Edit/Delete Message
    I, for one, think you should have left it.

    It would take me far more words to make your points - which I think contributed to the discussion.

    Dan started by asking about the "validity" of the BiZone technique, and that prompts the inevitable discussion about what makes a technique valid and what differentiates one technique from another.

    Any efforts to distill a common understanding of polygraph testing techniques and polygraph testing principles can only help bring us together to speak a common professional language. That will ultimately help us look a little less stupid to our adversaries, who must take some form of pleasure in the fact that we so seldom agree with each other and so often engage in some form of ego-driven intellectual arm-wrestling.

    For polygraph to be regarded as a mature profession, we have to get to the point where we don't rely primarily on appeals to authority for argument, but base our argument on sound principles and data. The result will be a profession, like others, in which it is viewed as irresponsible for any expert (even those venerated authorities) to make bold assertions that are not founded in data.

    For example, an adolescent, authority-oriented profession would heed this type of statement carefully (not that adolescents respect authority - just that they are still on the front side of the developmental learning trajectory).

    Dan wrote:

    quote:
    Cleve discourages the third (evidence connecting) RQ with a passion, claiming it turns a single-issue test into a multi-issue test...

    Most mature sciences would be more than a little concerned about the degree of assertion here, and would favor asking a question instead. Scientists recognize the difference between a statement supported by data and a statement that is not - and they appreciate that statements not supported by data are opinions only.

    The problem is that when we become experts we begin to regard our opinions themselves as valid, and forget there is a very important distinction between personal opinions and professional opinions - professional opinions are supported by data. Opinions not supported by data are either personal opinions (and should be treated as such) or hypotheses (and should be further investigated).

    This is a particularly good example, because so many people endorse this type of statement as if it were an axiom or platitude. Now look at the data that Barry cited (even in the context of his personal opinion that Backster was right). The data does not support the assertion, and that is the difference between fact and opinion.

    The lesson of this mini-example is that we should feel prompted to caution any time any professional (regardless of how venerated or how authoritative) makes assertive statements not supported by data, and asks that personal opinion be regarded with the same weight as findings based in data.

    The real challenge is to impose some clear thinking around the questions of techniques (and testing principles) - without allowing personal or professional egos to replace the point of the discussion. These discussions have historically been contaminated by what look like reactions to narcissistic injury, at which point we cease discussing the science and focus primarily on undifferentiated adolescent loyalties.

    I recall an article from Eric Holden a few years back - something about PCSOT being "still basic polygraph." What is important is that we continue to pursue an improved understanding of the "basic" principles that make a test valid. That will require our willingness to discard arcane and inadequate explanations in favor of more adequate ones that have better parsimony with what we know from other sciences and what we know from data.

    In the case of the BiZone/U-Phase technique, I'm not aware of any real differences in the underlying principles upon which the test is constructed - compared with the ZCT or MGQT - and I would defy anyone to try to articulate the differences in terms of construct validity. The only real difference is a bit of cumulative math, and that is easily solved through simple stratification of the cumulative data, or perhaps even more effectively by mathematical probability models not based on simple inefficient cumulation.

    -------

    I think stat said all this more succinctly.


    Peace,


    r

    ------------------
    "Gentlemen, you can't fight in here. This is the war room."
    --(Stanley Kubrick/Peter Sellers - Dr. Strangelove, 1964)


    IP: Logged

    Dan Mangan
    Member
    posted 04-30-2007 04:21 PM     Click Here to See the Profile for Dan Mangan     Edit/Delete Message
    Hey stat,

    I never got to see your nuked post, but whatever it was, don't worry about it. We all benefit when this message board is alive and kickin' -- warts and all!

    Dan

    IP: Logged

    Barry C
    Member
    posted 04-30-2007 08:13 PM     Click Here to See the Profile for Barry C   Click Here to Email Barry C     Edit/Delete Message
    I recall him stating that all tests - even single-issue tests - are multiple-issue tests because there are CQs regarding various issues and RQs regarding one issue. It's a valid point, and we've had some discussion about that to some extent here before - with no good answers as I recall.

    I think we ought to define our terms here, as sometimes we treat multi-facet tests and multi-issue tests as synonymous. In a multi-facet test, the subject should be lying to all the RQs, but as Cleve points out, that's not always the case, as a person could be present (R10, for example) without doing R5 and R7. Again, the research review Don did seems to show it's not the problem Cleve (and Nate G. and Jim M., etc.) predicted.

    However, in a multi-issue test (e.g., screening exams), the subject could be lying to none, some, or all of the RQs. We know that reduces accuracy, if by nothing else than "widening" the INC range depending on the number of RQs.

    One of the findings in the (DoDPI) TES studies was that a person could lie to R1 but go DI on R2 (to which he was truthful). Why? We don't know. They could catch the deceptive, but they couldn't necessarily say which RQ the person was lying about. Why doesn't that carry over into a single-issue CQT? (One of stat's points - in a roundabout sort of way.) Maybe it does. Perhaps that's one of the reasons for errors?

    I think Dr. Kircher did a study on a multi-facet test (or multi-issue, I can't recall off the top of my head) in which he cautioned against breaking out the DI-scored RQ to put it in a single-issue format in order to clear or confirm the issue. His point (whoever wrote it) was that if you ended there (and "cleared" the issue), you might have missed the real one. That is why DACA's LEPET requires so many tests to be run (potentially).

    For example, if a screening exam yields a -3 on the drug question, the subject would be interviewed regarding that issue. If no admissions are made, the issue (drugs) is tested in a break-out exam. If that test yields an NSR/NDI result, the initial screening exam is run again without the drug question. Testing goes on until all issues are resolved (or the guy DQs himself).
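    The sequence described above can be sketched as a simple loop. The function names and return conventions here are invented for illustration - the actual LEPET decision rules are considerably more involved than this sketch.

```python
def resolve_screening(issues, run_screen, run_breakout):
    """Sketch of the break-out testing sequence described above.

    run_screen(issues)  -> the issue that drew a reaction, or None if clear
    run_breakout(issue) -> True if the single-issue test clears the issue
    """
    remaining = list(issues)
    while remaining:
        flagged = run_screen(remaining)
        if flagged is None:
            return "cleared"                # no reactions left to resolve
        if not run_breakout(flagged):
            return ("unresolved", flagged)  # break-out test did not clear it
        remaining.remove(flagged)           # rerun the screen without that issue
    return "cleared"
```

    The key feature the sketch captures is that clearing an issue on a break-out test does not end the process; the screen is rerun over the remaining issues, so a missed "real" issue still gets another chance to surface.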

    So, no answers, just agreement with the now missing post!

    IP: Logged

    stat
    Member
    posted 05-01-2007 09:57 PM     Click Here to See the Profile for stat     Edit/Delete Message
    Sorry guys. You wouldn't believe my day - or you probably would. I broke a cold-case homicide from the '80s in a maintenance exam (!) while fishing for historic control material. I couldn't resist exploring his path of sadism, and the test ended with a confession to raping and killing two girls (a teen and a preteen) - and of course with all the trophies (panties) and necro-type stuff involved with the acts. What a hit. It's like catching a tuna in a creek.

    I wonder who will notify the families - a task that chokes me up at the thought. After many calls with powers and authorities as a result of the (whoops) identifying info, and a subsequent 3-hr commute from office to office - I'm home, and I missed the TV show House (dangit!).

    You guys went easy on my previous post, as I was a bit too sardonic - even for me. I'm 36 yrs old, but I write like a jerky old codger.

    [This message has been edited by stat (edited 05-01-2007).]

    IP: Logged

    Bill2E
    Member
    posted 05-01-2007 11:45 PM     Click Here to See the Profile for Bill2E     Edit/Delete Message
    Congratulations on your test and solving many crimes. And the statement "old codger" must refer to me and others on this board. 36 and old? Give me a break - I have a son your age. LOL.

    IP: Logged


    copyright 1999-2003. WordNet Solutions. All Rights Reserved

    Powered by: Ultimate Bulletin Board, Version 5.39c
    © Infopop Corporation (formerly Madrona Park, Inc.), 1998 - 1999.