False positive paradox

The false positive paradox occurs when the incidence of a condition is lower than the false positive rate of the test for it. In that situation, a positive test result is more likely to be a false positive than a true positive.

For example, assume a test that is 99% accurate: applied to 100 people, it gives 99 of them an accurate result and 1 an inaccurate result. Applied to 100 000 people, it gives 99 000 accurate results and 1 000 inaccurate ones. Now, if this test is used to screen for a very rare disease that occurs in 1 in every 1 000 000 people, the expected number of false positives (about 1 000) far exceeds the number of people in the group who actually have the disease (about 0.1), so almost every positive result is wrong.
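A minimal sketch of that calculation using Bayes' theorem, assuming the 99% figure applies as both sensitivity and specificity (the example above does not specify this), with a prevalence of 1 in 1 000 000:

 def probability_truly_positive(prevalence, sensitivity, specificity):
     """Bayes' theorem: P(has the disease | positive test result)."""
     true_positives = sensitivity * prevalence
     false_positives = (1 - specificity) * (1 - prevalence)
     return true_positives / (true_positives + false_positives)
 
 p = probability_truly_positive(prevalence=1 / 1_000_000,
                                sensitivity=0.99,
                                specificity=0.99)
 print(f"P(actually ill | positive result) = {p:.5%}")  # roughly 0.01%

With these assumed numbers, only about one in ten thousand positive results corresponds to an actual case of the disease.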

  • False Positive - Type I Error / α error: rejecting the null hypothesis when it is actually true
  • False Negative - Type II Error / β error: failing to reject the null hypothesis when it is actually false

"The single most pernicious threat to liberty today is humanity's natural tendency to misunderstand the statistics of rare events. We're just not wired to have good intuition about things that happen with extreme infrequency." - Cory Doctorow in the Guardian

See Also