
Topic summary

Posted by 1904
 - Nov 23, 2007, 08:09 AM
Quote
There are thousands of polygraph tests done each year, and yes there are and will be errors (as is the case with any test), so a few anecdotal stories do not support anything.

Quote
When we run tests in the field, we do confirm some of them independently.

Quote
For example, I ran tests in one case that we .....

Quote
That is why there is a desire for polygraph and truthfulness testing of all sorts.

Quote
Any test that discriminates truth from lies at better than chance rates, no matter how poor

Quote
If, however, we introduce polygraph (test), what will happen?  Assume a polygraph (test) is 80% accurate.

RE: POLYGRAPH TESTS
Refer to the letter elsewhere on this site by:

John Furedy, Emeritus Professor of Psychology
University of Toronto
Sydney, Australia

"The Polygraph "Test" Is Not A Test"

"What would one think if one heard that IQ tests "varied among agencies"? Wouldn't one conclude that these so-called tests were not tests at all, but rather unstandardized interviews where IQ "testers" arrived at their scores by having a conversation with the examinee to determine the examinee's IQ? Why is it that even North American scientists commonly accept the polygraph as a "test", and then go on to argue about validity, whereas the argument about validity cannot even begin if one is not dealing with a test?
It would be bad enough if faith in these polygraph "tests" were confined to talk show hosts like Dr. Phil, who deal with personal problems.  What is worse is that national security depends on this peculiarly North American superstitious flight of technological fancy."


Posted by 1904
 - Nov 20, 2007, 09:55 AM
Quote from: Barry_C on Nov 20, 2007, 09:32 AM
Quote
The idea that statistics branched off from mathematics is a widely held misconception. Some place an undue emphasis on the relationship, but the two disciplines are very different.

The purpose of descriptive statistics is to communicate information, while inferential statistics is used to reach conclusions and deductions that possibly explain the data. Both of these together make up applied statistics. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject.

More plagiarism.  Have you any ability to think on your own?

http://en.wikipedia.org/wiki/Statistics

9x6 = 54.
Oops. That's plagiarism. My teacher said it first.

I have many original thoughts. Sadly, all of them would pass over your head, like Swissair - you're too shallow and undeveloped.

When the polyshop biz grinds to a halt you can always write out parking tickets and tell the public about all the research behind parking meters.

Posted by 1904
 - Nov 20, 2007, 09:47 AM
Quote from: Barry_C on Nov 20, 2007, 09:32 AM
Quote
The idea that statistics branched off from mathematics is a widely held misconception. Some place an undue emphasis on the relationship, but the two disciplines are very different.

The purpose of descriptive statistics is to communicate information, while inferential statistics is used to reach conclusions and deductions that possibly explain the data. Both of these together make up applied statistics. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject.

More plagiarism.  Have you any ability to think on your own?

http://en.wikipedia.org/wiki/Statistics

Good Boy Noddy. You fetched the bone. Good Boy Noddy.

Here's some more - you're on the clock..... go  for it:

Refer: www.mentalfloss.com
"A stupid or silly person named NoodleNush a dolt. A person named CarryB whose mental acumen is well below par. A person of moderate to severe mental retardation having a mental age of from three to seven years and generally being capable of some degree of communication and performance of simple tasks under supervision. Namely CarryB. The term belongs to a classification system no longer in use and is now considered offensive, except if used correctly."

Go Boy.
Posted by Barry_C
 - Nov 20, 2007, 09:36 AM
1904,

Good catch.  Two typos - that everybody else figured out.  If you do the math and follow the logic, then you'll see those should be 500 - not 50, but my point is the same.
Posted by Barry_C
 - Nov 20, 2007, 09:32 AM
Quote
The idea that statistics branched off from mathematics is a widely held misconception. Some place an undue emphasis on the relationship, but the two disciplines are very different.

The purpose of descriptive statistics is to communicate information, while inferential statistics is used to reach conclusions and deductions that possibly explain the data. Both of these together make up applied statistics. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject.

More plagiarism.  Have you any ability to think on your own?

http://en.wikipedia.org/wiki/Statistics
Posted by 1904
 - Nov 20, 2007, 08:39 AM
Quote from: Barry_C on Nov 01, 2007, 03:26 PM

Okay Ray,
You beat me to the punch.  

I think he punched you in the head. Your arithmetic (sorry Noodle, I mean your 'math') contains some elementary errors. I certainly hope that someone brighter than you checks out your research before you publish it.

Quote
..blah blah..If, however, we introduce polygraph, what will happen?  Assume a polygraph is 80% accurate.  (That number doesn't come from thin air either.

No, it comes from the same place where they teach you that 80% of 50 = 400.


Quote
There are a few studies on screening exams: the TES and the R/I.  Both exceed 80% accuracy, so this figure is conservative.  For those of you who aren't data-driven, I can't help you understand this.)

1000 candidates
50% base rate of liars
500 jobs
80% chance of catching liars with polygraph

Let's do the math now:

Truthful hired      = 400 (80% of 50 polygraph NDI decision candidates – that are really truthful)

Since when does 80% of 50 = 400 ???

Quote
Liars hired      = 100 (20% of 50 polygraph NDI decision candidates – that are really liars)

Since when does 20% of 50 = 100 ???
And so the BS continues......

Posted by 1904
 - Nov 20, 2007, 07:54 AM
Quote
NODDY: Statistics is a branch of applied mathematics.  You should have learned that in college.  Regardless, you haven't shown where I err in my math or reasoning.

Refer: Wikipedia: Statistics:
"The idea that statistics branched off from mathematics is a widely held misconception. Some place an undue emphasis on the relationship, but the two disciplines are very different.

The purpose of descriptive statistics is to communicate information, while inferential statistics is used to reach conclusions and deductions that possibly explain the data. Both of these together make up applied statistics. There is also a discipline called mathematical statistics, which is concerned with the theoretical basis of the subject."

The bone has been thrown. Go fetch Noddy !!!
Posted by Barry_C
 - Nov 19, 2007, 08:54 PM
Quote
Quite fanciful to call your simple arithmetic 'math'

Statistics is a branch of applied mathematics.  You should have learned that in college.  Regardless, you haven't shown where I err in my math or reasoning.  

Quote
An "inconclusive" result means that reactions to relevant and so-called "control" or comparison questions were about the same. To pass, reactions to the "control" questions must be larger than those to the relevant questions.

That is true of a CQT.  Other testing techniques can result in an INC too.  In some situations it means you made a post-test admission to a question, so further testing is necessary to clear the issue.  (Some would report it as deceptive; others handle it as I've explained.)
Posted by George W. Maschke
 - Nov 18, 2007, 03:25 AM
Quote from: cielo on Nov 17, 2007, 02:59 PM
The results of my polygraph were inconclusive and I have to take it again. Can you define inconclusive when it comes to a poly result?

Thanks

An "inconclusive" result means that reactions to relevant and so-called "control" or comparison questions were about the same. To pass, reactions to the "control" questions must be larger than those to the relevant questions. You'll find polygraph procedure explained in detail in Chapter 3 of The Lie Behind the Lie Detector:

https://antipolygraph.org/lie-behind-the-lie-detector.pdf
Posted by cielo
 - Nov 17, 2007, 02:59 PM
The results of my polygraph were inconclusive and I have to take it again. Can you define inconclusive when it comes to a poly result?

Thanks
Posted by 1904
 - Nov 14, 2007, 08:05 AM
Quote from: Barry_C on Nov 03, 2007, 02:34 PM

Quote
Do you have a problem with my math?  Have I presented it wrong?  What's the issue?  I suspect you don't really have one, but I'm willing to listen if you're up to the task.

Cough cough coughBS cough
Quite fanciful to call your simple arithmetic 'math'
It's in the same vein as examiners titling themselves Forensic Psychophysiologists... shortly before the phony PhD is added.
Posted by Barry_C
 - Nov 03, 2007, 02:34 PM
Translation:

"I don't understand the math.  I don't understand statistics or research methodology.  I haven't read the research literature I've implied I have, and I haven't an intelligent and sound response, so I'll resort to name-calling."

I think most have figured that out already, but I am curious as to why you continue to post when you have nothing of substance to add to the discussion.

Do you have a problem with my math?  Have I presented it wrong?  What's the issue?  I suspect you don't really have one, but I'm willing to listen if you're up to the task.
Posted by 1904
 - Nov 03, 2007, 10:06 AM
Cough, cough, cough-bullshit, cough cough.
Nostradamus predicted a great flood in the year 2007.
I didn't know it was gonna be a river of BS.
Posted by Barry_C
 - Nov 01, 2007, 03:26 PM
Okay Ray,

You beat me to the punch.  I've been typing a line here and there (in Word) all day.  Here's my much similar response:

Sarge,

Is your glass always half empty?

Let's look at the numbers and see what's really what.

Assume you have 1000 candidates for 500 jobs.  Assume further that half of them have failed to disclose information at what would be the polygraph stage had the hiring agency had a polygraph program. Therefore, the base rate of "liars" is 50%.  (That's not a figure I pulled out of thin air, as you'll recall that I have that data, which has yet to be published.)  Since this agency has no polygraph requirement, chance will dictate which 500 get the job offers (as we're at the end of the road as far as the hiring process goes).

1000 candidates
50% base rate of liars
500 jobs
50% chance of catching liars (chance / coin flipping)

So, we're going to end up hiring 250 liars (50%) and 250 truthful candidates (50%):

Truthful hired      = 250 (50% of 500 job applicants)
Liars hired      = 250 (50% of 500 job applicants)

If, however, we introduce polygraph, what will happen?  Assume a polygraph is 80% accurate.  (That number doesn't come from thin air either.  There are a few studies on screening exams: the TES and the R/I.  Both exceed 80% accuracy, so this figure is conservative.  For those of you who aren't data-driven, I can't help you understand this.)

1000 candidates
50% base rate of liars
500 jobs
80% chance of catching liars with polygraph

Let's do the math now:

Truthful hired      = 400 (80% of 50 polygraph NDI decision candidates – that are really truthful)

Liars hired      = 100 (20% of 50 polygraph NDI decision candidates – that are really liars)

400 truthful hired with polygraph – 250 hired without polygraph = 150 additional truthful hires.

150/250 = 60% more truthful people get jobs with polygraph that is 80% accurate if base rate of liars is 50%.

Let's look at your figure, 60%, which the research shows to be a very conservative figure:

1000 candidates
50% base rate of liars
500 jobs
60% chance of catching liars with polygraph

Truthful hired      = 300 (60% of 500 polygraph NDI decision candidates – that are really truthful)

Liars hired      = 200 (40% of 500 polygraph NDI decision candidates – that are really liars)

300 truthful hired with polygraph – 250 hired without polygraph = 50 additional truthful hires.

50/250 = 20% more truthful people get jobs with a polygraph that is 60% accurate if the base rate of liars is 50%.

So even with your 60% figure, the process is fairer to the truthful (on average) when all is said and done.  (Of course, the percentages are the same whether it's 10 candidates and 5 jobs or 10,000 candidates and 5,000 jobs.)
Posted by raymond.nelson
 - Nov 01, 2007, 11:43 AM
Sergeant1107:
Quote
I guess that depends on how you look at it.  I think the test would have to be nearly perfect in order to be worthwhile.

If you have a test that is 60% accurate (which would be better than average chance) it will be inaccurate, on average, 40% of the time.  If you have one hundred applicants, how many do you believe will lie about something on their application?  Twenty?  Thirty?  Half?  Let's say that 40 of them will lie about something, just for the sake of simplifying the math.

If the 60% accurate test functions normally, at the end of the test you will have 36 truthful people pass, and 24 truthful people fail.  You will also have 24 deceptive people fail, and 16 deceptive people pass.  

You will have a total of 52 people pass, and 48 people fail.  But of the people who passed, 16 of them lied and got away with it.  And out of the people who failed, 24 of them were telling the truth.

So now you are left with 52 applicants, nearly a third of which are liars who were able to defeat the test.  And out of the 48 people you booted from the application process, half of them were telling the truth and were disqualified for absolutely no reason whatsoever.

I don't think that sort of process is fair or logical.  It allows too many liars to proceed and disqualifies too many truthful applicants.  It also provides a false sense of security because the 16 liars who just got sworn in as police officers are viewed as having already "passed" a test designed to detect deception.

Except, good sergeant, your math is far from complete.

If you are going to use math examples to prove a point, then please do so correctly. To do an incomplete, and therefore incorrect, job is to provide inaccurate, false, and misleading information to others (that's bad).

You didn't state this, but assuming you have your hypothetical N=100 (let's say they are police applicants, hoping for a long, rewarding, and safe career of service to their communities), let's complete your example.

For the purpose of completing this example we can accept your hypothetical suggestion that 40 will lie about something (perhaps underreporting the frequency or recency of their use of illegal drugs, involvement in thefts or other crimes, or history of sexual contact with animals or sexual assaults against persons).

You have suggested an "accuracy" of 60%. Keep in mind that accuracy is a complex, and therefore vague, term unless you specify what type of accuracy you are discussing. Your example is a Bayesian type, of which so many people seem to gain an incomplete understanding from the NAS report. You make the completely unjustified assumption that accuracy/sensitivity is uniform with false positives. In reality it's not that simple, and most Bayesian models are not uniform. Polygraph is an example of a non-uniform model, because it is in effect a test of two different signal issues of concern. But we'll accept your overly simplified premise for this example, and set a hypothetical sensitivity level of .6 (though there is a lot of evidence to suggest greater sensitivity).  We will, for this example only, accept your ridiculous suggestion that accuracy/sensitivity is uniform and inverse with errors, and assume a hypothetical error rate of .4.

Accepting that your simple addition/subtraction math is correct, we have to take the concern to two (or three) practical levels.

  • One practical concern is, if you are a law enforcement administrator, whom do you hire, and how do you decide who are the decent law-abiding citizens who would make responsible law enforcement officers, and who are the people who lack integrity and would bring corruption and problems to a department. How to hire "good guys" and not hire "bad guys."

  • Another concern is whether you can get hired if you are a decent, law-abiding citizen who desires to work in law enforcement. How to get hired if you are a "good guy."

  • Perhaps a third concern is how to get hired into law enforcement if you know your past behavior, and perhaps future intentions, are fraught with integrity flaws, poor judgment, and behavior that violates laws, social mores, and the rights of others. What happens to the hiring prospects of "bad guys."

Now, for the purpose of completing your hypothetical example, let's assume the impossible and pretend that we could control for all extraneous variables that might affect the hiring process, and focus all of our attention on the role that polygraph outcomes would have on those practical concerns.

In your hypothetical example involving 100 police applicants of which 40 are lying, and therefore presumably unsuitable for police work, 52 people would pass the polygraph and 36 of those results would be accurate. Additionally 48 people would not pass, of which ½ or 24 would be accurate.

That may look unimpressive at simple-minded first glance, but data are sometimes not obvious or intuitive (they are sometimes counterintuitive).

So, let's look at the mathematical and practical aspects of those hypothetical results.

A simple test of proportions of 36/52 and 24/48 provides a z value of 1.961161, which gives a value of p=0.02493. There are, of course, better statistical models with greater power to determine the presence of a significant difference. We could build a simulation sample and use Monte Carlo techniques to build, say, 1,000, 10,000, or 30,000 resampled distributions, and then calculate standard error rates and confidence intervals around our estimates. But why take the time and expense of a Monte Carlo simulation when a quick and dirty test reveals the point so well? A statistical test with more power would only reveal a greater, not lesser, degree of significance.

By common standards p=0.02493 is a statistically significant result.
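The quick-and-dirty test of proportions can be reproduced with the standard library alone. This is a sketch, assuming the two proportions being compared are 36/52 (correct passes) and 24/48 (correct fails), with a pooled standard error and a one-tailed p-value via the error function:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, one-tailed p)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                     # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # One-tailed p from the standard normal CDF: Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p_one_tailed = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_one_tailed

z, p = two_prop_z(36, 52, 24, 48)
print(round(z, 4), round(p, 5))   # ≈ 1.9612 0.02493
```

The result matches the figures quoted in the post (z ≈ 1.9612, p ≈ 0.02493).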

I know, that's just math.

Let's look at the practical application, using your over-simplified Bayesian example to interpret our statistically significant hypothetical of p=0.025, using an "accuracy" level of only 60% and a base rate of 40%.

Remember now that we have hypothetically controlled for all other variables. Human judgment, being what it is, can be assumed to be no better than chance (at least for those of us who don't possess some magical mind-reading capabilities).

  • To a police hiring administrator, using the polygraph, even if, as in this hypothetical example, its accuracy is as low as 60%, appears to provide a statistically significant improvement over chance (the alternative: human judgment, not using the polygraph) in the likelihood of hiring a "good guy" vs. hiring a "bad guy."

  • OK, you say, what if you are a "good guy" and want a job. Chance alone (all other variables being controlled for) would reveal the obvious – you have a 50/50 chance of being hired, compared with the chance of a "bad-guy" getting the job instead. With the polygraph, your chances are 36/52 = .69 which seems to be an improvement. Though some might point out the obvious fact that this is still "well below perfection," it is a statistically significant improvement.

  • Now, if you are a bad guy, without the polygraph (all else being hypothetically equal) you seem to have a 50/50 chance of being hired. With the polygraph, your chances seem to be reduced to about 40%.
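The bullet-point odds can be read straight off the confusion matrix implied by the hypothetical (100 applicants, 40 liars, 60% "accuracy"). A minimal sketch:

```python
# Confusion matrix from the hypothetical example.
truthful_pass, truthful_fail = 36, 24   # 60% of the 60 truthful pass
liars_fail, liars_pass = 24, 16         # 60% of the 40 liars fail

passed = truthful_pass + liars_pass     # 52 applicants pass overall

# Administrator's (and "good guy's") view: share of passers who are truthful.
print(round(truthful_pass / passed, 2))                    # 0.69

# "Bad guy's" view: chance a liar slips through the test.
print(round(liars_pass / (liars_pass + liars_fail), 2))    # 0.4
```

Both figures match the 36/52 ≈ .69 and roughly 40% quoted in the bullets above.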
In summary, good sergeant, by your own (over-simplified) hypothetical example, using the polygraph can be expected to produce three results:

  • Improve the probability that police hiring administrators hire "good guys,"
  • Improve the probability that "good guys" get hired, and
  • Decrease the probability that "bad guys" get hired.
Sounds OK to me.

Sure, it's not perfect, and if you experienced an inaccurate result, that is truly unfortunate. But claims that it is unsound are not accurate and not scientific.


