Five problems with the p-value
[Problem #1] P-values attempt to exclude the null hypothesis without actually showing that the alternative is much better.
[Problem #2] The P-value ignores pre-test probability
[Problem #3] P-values actually tell us the reverse of what we want to know
[Problem #4] P-values are not reproducible
[Problem #5] The P-value is generally used in a dogmatic and arbitrary fashion
Six ways to avoid being misled by P-values
[Solution #1] Re-scale your interpretation of the p-value
[Solution #2] Consider the pre-test probability.
[Solution #3] Always bear in mind that the p-value does not equal α (type-I error)
[Solution #4] Consider modifying the acceptable Type-I error (α) based on clinical context
[Solution #5] Evaluate the P-value in the context of other statistical information
[Solution #6] Don't expect statistics to be a truth machine
- P-values over-estimate the strength of evidence. Research using Bayesian statistics suggests that p=0.05 corresponds to a positive likelihood ratio of only about 3-5 in favor of the experimental hypothesis.
- P-values are very poorly reproducible. Repeating an experiment will often yield a dramatically different p-value.
- Any approach to hypothesis testing should take into account the pre-test probability that the hypothesis is valid. Just like a laboratory test, a statistical test is meaningless without clinical context and pre-test probability.
- Avoid blindly using conventional cutoff values (e.g., p<0.05 and α<0.05) to make binary decisions about the hypothesis (e.g., significant vs. nonsignificant). Life just isn't that simple.
- Goodman SN. Toward evidence-based medical statistics. 1: The P-value fallacy. Ann Intern Med 1999; 130: 995-1004; as well as the adjacent article, 2: The Bayes factor. Ann Intern Med 1999; 130: 1005-1013.
- Goodman SN. A dirty dozen: Twelve p-value misconceptions. Semin Hematol 2008; 45: 135-140.
- Johnson VE. Revised standards for statistical evidence. Proceedings of the National Academy of Sciences 2013; 110(48): 19313-19317.
- Halsey LG et al. The fickle P value generates irreproducible results. Nature Methods 2015; 12(3): 179-185.
(3) Unfortunately, likelihood ratios and Bayes factors are defined in terms of odds, whereas it's generally easier to think in terms of probabilities. Odds and probabilities can easily be converted into one another, although this gets tiresome. The fastest way to convert a pre-test probability into a post-test probability using a Bayes factor (or likelihood ratio) is via an online statistical calculator.
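For readers who prefer to see the arithmetic behind such a calculator, the conversion described above can be sketched in a few lines of Python (the function name and the illustrative numbers are my own, not from any particular calculator):

```python
def post_test_probability(pre_test_prob, bayes_factor):
    """Convert a pre-test probability into a post-test probability
    using a Bayes factor (or likelihood ratio)."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * bayes_factor              # Bayes' rule, odds form
    return post_odds / (1 + post_odds)               # odds -> probability

# Example: a 50% pre-test probability updated by a likelihood ratio of 3
# (roughly what p = 0.05 provides) yields only a 75% post-test probability.
print(post_test_probability(0.50, 3))  # 0.75
```

Note how modest the update is: even a toss-up hypothesis ends up well short of certainty, which is the point of the likelihood-ratio framing.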