No, I’m not a statistician. I’m one of those biologists who, for decades, misused null hypothesis significance testing for testing hypotheses.
These days I rarely do statistical analyses on my own anymore. As soon as it gets a little complicated – but often also when it seems trivial – I reach out to a statistician, who will likely help if I ask persistently enough.
I really wonder how people can do research and publish papers without having a statistician on board. But obviously it does happen.
Burying unwanted results?
This year we saw many COVID-19 studies that could have used better statistical advice. One example that was widely discussed where I live (Europe) was a preprint suggesting that “children may be as infectious as adults.” David Spiegelhalter and other statisticians found serious problems with the statistical analyses and recommended the paper be withdrawn from circulation.
In short, the study authors made so many unnecessary statistical comparisons that their finding that the investigated children actually had, on average, a lower viral load than adults was no longer “statistically significant.” In a tweet, the epidemiologist and statistician Sander Greenland called this “upward P-hacking” – burying an unwanted result by making its P-value large and non-significant through adjustment for unplanned multiple comparisons. (By the way, a statistically non-significant result should also not be taken as evidence of no difference.)
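To illustrate the mechanism with made-up numbers (not those of the study): a Bonferroni-style correction multiplies a raw P-value by the number of comparisons, so adding enough unplanned comparisons can push an otherwise “significant” result past the conventional 0.05 threshold.

```python
# Hypothetical illustration of "upward P-hacking": a raw P-value that is
# below 0.05 becomes non-significant after a Bonferroni adjustment for
# many unplanned comparisons. All numbers are made up.
raw_p = 0.01          # raw P-value for the comparison of interest
n_comparisons = 10    # comparisons included in the adjustment
adjusted_p = min(1.0, raw_p * n_comparisons)  # Bonferroni-adjusted P-value
print(adjusted_p)     # 0.1 – above the conventional 0.05 threshold
```

The adjustment itself is a legitimate tool; the problem Greenland pointed to is choosing how many comparisons to adjust for after the fact.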
Yes, children may be as infectious as adults. However, this study actually provided evidence that in the investigated group of patients, children may have been less infectious than adults.
What made this a particularly delicate issue was that the last author, the virologist Christian Drosten, is by far the most widely known coronavirus expert in Germany and a member of the European Commission’s advisory panel on COVID-19. It was thus clear that this preprint would hit the news and that the errors would hit the news as well, which happened when the tabloid press started a campaign against Drosten, harming his reputation and casting doubt on advice by scientists in general.
Drosten quickly acknowledged some errors and published a revised version. It is not unusual to publish a preprint with preliminary analyses and then to add updates addressing comments by readers – in fact, one might argue this is how open science should work.
Outsourcing statistical analysis
What I did not understand, however, is how a scientist in such a publicly exposed position could rush a manuscript on a question of primary public interest to publication, presumably without having it checked by a statistician, or better several statisticians.
I guess what often happens is that experienced scientists trust their own statistical expertise. Hey, we all took our intro stats courses! We use statistics in our research as routinely as we use the English language to describe it!
Sometimes I really feel sorry for the statisticians. It seems their work is mainly used to develop statistical software that any layperson can then use to produce nonsensical analyses, just as any word processor can be used to write nonsense.
I have colleagues who have the language in their manuscripts checked by professional science writers before they submit them to a journal. I think we should likewise get our analyses and interpretations checked by professional statisticians before we even start writing.
And just like many researchers outsource elaborate genetic analyses to specialized laboratories, why should we not let statisticians do our statistics?
One might argue that outsourcing statistical analyses means giving one of the most important parts of the scientific process out of our hands. Yes, sometimes my statistician collaborators feel a bit like black boxes to me – I don’t necessarily understand everything they do and why. But there are black boxes everywhere in science. Every piece of software, statistical or otherwise, does things I don’t fully understand. And unlike software, statisticians can actually reflect on and explain what they do, and adapt to what I need.
One might also argue that external statisticians may be disconnected from the complexities of the science and data collection. Maybe, but on the other hand their advice and suggested analyses provide an outside view by somebody whose job it is to be critical about data, analyses, and conclusions.
To me, the two most rewarding steps in a research project are the planning stage of an experiment and of its prospective statistical analysis, and then discussing the performed analysis and what the results could mean. I enjoy involving a professional data analyst in both.
Becoming more modest
Certainly, having a statistician on board doesn’t mean the analysis is correct, just like being a native English speaker doesn’t mean the English is correct.
In my experience, however, most statisticians will readily acknowledge that statistical analyses are prone to error – for example because there is no such thing as a perfect statistical analysis. There are almost always multiple ways of analyzing a given data set, and each might be a reasonable choice yet lead to different conclusions. This garden of forking paths is actually one of the reasons why many scientific conclusions don’t hold up.
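A toy sketch of such a fork, with entirely made-up numbers: two defensible preprocessing choices for the same data can already point toward different conclusions.

```python
# Made-up measurements with one extreme value: is it a recording error or
# a genuine observation? Both analysis paths below are defensible choices.
data = [1.2, 1.4, 1.1, 1.3, 9.0]

mean_all = sum(data) / len(data)            # path 1: keep every observation
trimmed = [x for x in data if x < 5]        # path 2: exclude the "outlier"
mean_trimmed = sum(trimmed) / len(trimmed)

print(round(mean_all, 2), round(mean_trimmed, 2))  # prints: 2.8 1.25
```

Neither path is obviously wrong, yet the two summaries differ by more than a factor of two – exactly the kind of fork that an outside statistical reviewer can help document and justify.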
Whenever I have met someone who claimed to know exactly how a particular data set needs to be analyzed, or how a statistical result must be interpreted, that person was almost always a biologist, not a statistician. It seems that a good way to become more modest about our data, analyses, and conclusions is to show them to a statistician – or better, to several statisticians with different views.
And yes, I think being modest about our conclusions is one of the most important scientific virtues.
3 comments on this post
The term statistician is too narrow here, as Andrew points out, but also too wide. Almost all statisticians have limited fields of expertise in which they are, at least at present, competent to give advice or undertake important analyses.
A more apt description might be along the lines of someone who has undertaken enough training in the specific area of data analysis and has additionally been communally calibrated as reliable by others working in that area. For instance, I would not expect an expert in machine learning in computer science to know how to analyze a randomized clinical trial, nor a clinical trial statistician to know how to develop a real-time online prediction algorithm.
I indeed agree: too often intellectual arrogance destroys good science. There are many scientists who acquired a strong background in statistics and then performed excellently in analyzing data. But they were modest enough to learn statistics before speaking :-)
I agree, at least in part. You heard about that Stanford coronavirus study, right? On the other hand, a lot of great data analysis has been done by flexible researchers in applied sciences, not to mention machine learning in computer science, so maybe your claim about statisticians is a bit too strong?