Misinterpretations of evidence, and worse misinterpretations of evidence
Fiona Fidler
P-values are frequently misinterpreted. Confidence intervals are too. So are Bayesian statistics. Sometimes the mere fact that all of these are misinterpreted is used as an argument that statistical cognition shouldn’t play a role in deciding which analysis approach to adopt in practice, or to teach to students. But are misinterpretations of these different displays of statistical evidence equally severe?
Do they have the same consequences in practice? In this talk I’ll present the limited empirical evidence related to these questions that we have so far, and suggest that, at the very least, we don’t know enough to assume Abelson’s law yet, i.e., “Under the law of the diffusion of idiocy, every foolish application of significance testing is sooner or later going to be translated into a corresponding foolish practice for confidence limits” (Abelson, 1997, p. 130). There may be other sound reasons, technical or philosophical, to reject one approach or another, but we shouldn’t (yet) consider them cognitively equivalent.