If it doesn't hurt

I’ve been thinking about the disconnect between how statisticians and methodologists often beg people not to unnecessarily categorize continuous variables, and then turn around and tell them to dichotomize evidence produced on a continuous scale (p-values or test statistics) into "significant" and "not significant." If we have no reason to impose a dichotomy on the world (as is the case with most studies in epidemiology), why not leave measures of evidence in their continuous state? Especially when misinterpreting significance tests can lead people to make incorrect inferences or bad decisions.

But it occurred to me that if dichotomizing evidence doesn’t cause any harm, then there’s no argument against it. For example, if someone who’s not a scientist but reads about science for fun hears about “the discovery of the Higgs boson” instead of “the evidence for the existence of the Higgs boson crossing the 5 sigma threshold,” there are likely no negative consequences. Why not dichotomize, then? So maybe I can be convinced that, for the purposes of science communication, dichotomizing evidence isn’t such a bad thing. Though I do lament to some extent that people might get the wrong idea about how science works and think that we are more certain than we actually are. I suppose that is a potential drawback.

Jeremy A. Labrecque
Assistant professor, Epidemiology and causal inference

My research is on how we know what we know.