I really don’t think the detailed progress of science is suitable for the news cycle. News is about something unexpected that just happened; science is a slow buildup of studies and understanding.
Rarely will a single study completely change our understanding of the world, although if you read news media you may get the impression that this happens frequently.
Here is one that did, from The Times in November 1919. Einstein’s theory of gravitation had been confirmed by Eddington. But most studies are not revolutionary. In this post I provide three recent stories from print media, and tell you why I don’t think they should have been considered newsworthy.
This article, published in Time on January 6, 2010, deals with epigenetics, the idea that traits not based on DNA can be inherited. One of the main cases illustrating this (some say proving it) is a study from Överkalix in northern Sweden [Bygren et al. BMC Genetics 2014, 15:12]. Two crop-failure years in the early 19th century resulted in sharp changes in food availability for the ancestors of 317 people in the small town. The study shows that if “the paternal grandmother up to puberty lived through a sharp change in food supply from one year to next, her sons’ daughters had an excess risk for cardiovascular mortality.” No such effects were found for the other grandparents. There are 16 possible gender combinations of grandparents, parents, and child, and therefore 16 statistical tests were performed. Now, if you carry out 16 tests at the 5% level (which the authors of the study did), the chance of getting at least one rejection of the hypothesis of no effect, even if there is no effect in reality, is about 56%. The appealing story does not come with very convincing evidence, although it convinced a fairly statistics-savvy genetics professor friend of mine.
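Where does the 56% come from? If the 16 tests are independent, the chance of at least one false positive at level α is 1 − (1 − α)^16. A minimal sketch of the calculation (the independence assumption is mine; the original study's tests may be correlated, which would change the exact number but not the basic problem):

```python
def familywise_error(k, alpha=0.05):
    """Probability of at least one false rejection among k
    independent tests at significance level alpha, when every
    null hypothesis is actually true."""
    return 1 - (1 - alpha) ** k

print(f"16 tests: {familywise_error(16):.0%}")  # about 56%
print(f" 1 test:  {familywise_error(1):.0%}")   # 5%, as advertised
```

Note how quickly the error rate grows: a single test keeps the promised 5% false-positive rate, but running 16 of them more than decuples it.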
The next story was an intentional hoax originally published in the India Times on March 31, 2015 — under the headline Excellent News: Chocolate Can Help You Lose Weight!
Many diet studies are contradictory, and lots of people only pay attention to things they would like to be true. This study was formally well designed, but no hypothesis was stated before the study began, so whichever measurement happened to show an effect could be reported as the finding (this is what we call cherry picking). The result could just as well have been that chocolate reduces blood pressure, or increases well-being, or produces changes in the opposite direction.
The treatment in this study [Bohannon et al., Int. Arch. Med., vol. 8 no. 55] was eating 42 grams of 81% chocolate per day, together with a low-carb diet, for 21 days. The comparison groups either followed the same low-carb diet without chocolate or were free to eat what they wanted. There were several indicators of potential trouble with the study.
The first indicator was that it was published in International Archives of Medicine. You may not have heard of it before. It has gone through three different publishers since it was started in 2008.
The second indicator of trouble was that while the paper stated that participants were healthy and randomly assigned to treatments, it did not say how many people were assigned to each of the three treatments. Such missing information should be an immediate red flag: studies with very small sample sizes tend not to be reliable, and may oversell tentative conclusions.
The third indicator was that the study looked at 18 different measurements (weight, blood pressure, cholesterol level, and so on). Only one of them, weight, was found to be associated with chocolate consumption. Note the similarity to the 16 possible combinations in the epigenetics study, except that with 18 variables the chance of detecting at least one relationship with the hypothesis tests used in the study (when really there are none) is about 60%.
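The 60% figure can also be checked by simulation rather than by formula. Under the null hypothesis each p-value is uniform on [0, 1], so we can draw 18 fake p-values many times and count how often at least one falls below 0.05 (again assuming independent tests, which is my simplification):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def at_least_one_hit(k=18, alpha=0.05):
    """One simulated study: k null tests, True if any 'significant'."""
    return any(random.random() < alpha for _ in range(k))

runs = 100_000
rate = sum(at_least_one_hit() for _ in range(runs)) / runs
print(rate)  # should land close to 0.60
```

With 100,000 simulated studies the estimate settles within a fraction of a percentage point of the exact value 1 − 0.95^18 ≈ 0.603.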
This study, although it had real subjects (15 of them, as it turns out, five per group: a very small sample) and actual lab measurements, was in fact led by a science journalist who had been contacted by a television reporter working on a film about the junk-science diet industry [I Fooled Millions Into Thinking Chocolate Helps Weight Loss]. In advance, the researchers (a medical doctor, the journalist, and a statistician) did not know what the outcome would be, but they knew there was a 60% chance of finding something. It turned out to be weight, but it might just as well have been something else. The manuscript was submitted to 20 journals – many accepted it. International Archives of Medicine asked for £600 in order to publish it immediately, without any peer review. The journal has since withdrawn the paper.
Diet research is a very difficult area. Individuals react differently to diets, and there are many factors that cannot be controlled: people breathe the air where they live, and a smoker or drinker may not respond to a diet (or to eating chocolate) the same way a non-smoking teetotaler does. And researchers typically cannot control what the participants eat – they must trust that participants ate what they say they ate.
The third story describes real and urgent research, but to my mind it should never have been published, as it contains no information of use to the reader. Rather, it promotes a false sense of safety: the impression that a vaccine is nearly ready.
The way human clinical vaccine trials work is that you first give the vaccine to a few healthy individuals to see whether it has side effects, and whether it produces the antibodies you hope it will. These are called Phase 1 trials. Next, the vaccine is given to a larger number of volunteers, in order to better understand side effects, dosages, and biological effectiveness. These are the Phase 2 trials. The final stage, Phase 3 trials, uses thousands of individuals, randomly assigned to a treatment group (getting the vaccine) or a control group (getting a placebo injection).
The trial described in this article covered the first third of a Phase 1 trial: 8 healthy adults under 55 years of age. The only possible outcomes at that stage are to declare the vaccine dangerous or not functional. Why even publish this? Well, the press release came from the Moderna company, whose stock immediately went up by 25%.
It is news if a vaccine trial has been halted. It is possibly newsworthy that a vaccine has proceeded to the next phase of the trials. But it is not newsworthy that the first third of the Phase 1 trial is done (while the data have not been released).
The main point of this post is that a single study is not indicative of where scientific understanding stands or where it may be going. A single study may report a result that is due to chance; this is why more than one study is needed to establish an alleged effect. The consequence for science news reporters is that every study must be put into context — a description of what additional information we might glean from the new study. That, of course, should also be done by the researchers. But, like the journalists, they want to sell their story in as few words as possible.