On February 3, 2015, I published an article on this blog in which I tried to carefully and fairly critique a headline-grabbing manuscript that had been published the previous day in the Journal of the American College of Cardiology. This manuscript had, essentially, concluded that strenuous exercise is as dangerous to an individual’s health as sitting on a couch. However, the study was statistically underpowered and relied on retrospective data (people’s memories). It was therefore not possible to draw conclusions about the risk of death among people who engaged in strenuous exercise.
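To make "statistically underpowered" concrete, here is a minimal sketch in Python, using made-up numbers (not the study's actual data): with only a handful of deaths in a small subgroup, the confidence interval around the death rate is so wide that almost any conclusion is compatible with the data.

```python
import math

def wald_ci(events, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion.
    With very few events, the interval becomes extremely wide."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical numbers for illustration only:
# 2 deaths among 40 strenuous joggers vs. 120 deaths among 400 sedentary people.
print(wald_ci(2, 40))    # tiny subgroup -> very wide interval, reaching down to 0
print(wald_ci(120, 400)) # larger group  -> much narrower interval
```

The point is not the exact interval but the contrast: the small subgroup's interval is wide enough to include "no risk at all," so a headline claim built on it cannot be trusted.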
But this was not how the medical media reported this research. The BBC posted an article yesterday that summarized this mistake. This article quotes earlier, erroneous news reports:
“Training very hard ‘as bad as no exercise at all,'” reported the BBC. “Fast running is as deadly as sitting on couch,” agreed the Daily Telegraph and countless other newspapers around the world, striking fear into the hearts of hardcore runners.
So why was such an obviously weak study done, how did it get into the Journal of the American College of Cardiology, and how did it create such a public outcry?
Before going further, it is important to explain how bias affects the creation and dissemination of information, even among intelligent, well-meaning people. Let me first explain my personal bias. I have engaged in bench and clinical research and I have a number of peer-reviewed manuscripts to my name. I have also written a number of review articles in medical journals. For the last 15 years, I have reviewed medical manuscripts (in my field) for a number of highly regarded medical journals. Therefore, I have some credibility as a researcher who understands how research is designed and performed and as a reviewer who knows how to critique other people’s work. I am also an endurance athlete and, therefore, may be more inclined to look carefully at a study suggesting that exercise may be harmful.
Researchers, just like anyone else, need jobs. Their jobs are determined by the research funding they receive. This funding is determined, in large part, by the quality of the research they perform and the impact that research has on medicine (I am referring to public funding, like that of the National Institutes of Health (NIH) in the US, not private pharmaceutical funding, which is a slightly different topic). How does the NIH know whether a researcher is performing high-quality, impactful research? By the medical journals in which the researcher’s manuscripts are published and the attention those manuscripts receive. It is therefore tremendously beneficial to a researcher’s career to publish the most attention-getting material possible in high-profile journals. But sometimes, lower-quality research can be more attention-getting.
There is an important system, however, to protect respected journals like the Journal of the American College of Cardiology from publishing bad, flashy research. This system is called peer-review. A typical sequence in the life of a manuscript is as follows:
1. A researcher conducts a study, writes a manuscript about it, and sends it to a medical journal.
2. The journal’s editor, or sub-editor, briefly reviews the manuscript for suitability to the subject matter of the journal and for obvious deficiencies (such as terrible writing).
3. The editor or sub-editor then sends the manuscript, with identifying information like the names of the authors and sponsoring institutions removed, to one or more reviewers. A typical number of reviewers is two, but I have seen up to four reviewers for a single manuscript.
4. These reviewers carefully read the manuscript and then submit their opinions to the editor or sub-editor.
5. The editor or sub-editor then reads the reviews and forms an independent opinion about the quality of the manuscript and its suitability for publication.
6. A letter is sent to the author of the manuscript about whether the manuscript is accepted or not. Usually, manuscripts are not accepted on a first submission, but have to go through a number of revisions that are suggested by the reviewers and editor.
7. If the manuscript is not accepted, the author can often make revisions and re-submit the manuscript.
8. The manuscript is then reviewed, again, by blinded reviewers (essentially going back to step 4, above).
Editors, reviewers, and publishers are human, however, and can make mistakes. Reviews, for example, may not be performed with as much care as is necessary. An editor may not form a separate opinion but may, instead, rely only on the reviewers to make a decision about publication. Many intelligent reviewers don’t have a good grasp of statistics, and this is an obvious area where the manuscript in the Journal of the American College of Cardiology should have been critiqued. Reviewers are, in fact, often given permission to ask for an independent statistical review of manuscripts. Finally, publishers want their journals seen and their manuscripts cited. This is the all-important “impact factor.” The higher the impact factor, the “better” the journal. It is sort of like a credit score for medical journals. Therefore, in spite of the poor quality of this study, the buzz it created led to an increase in the impact factor of the Journal of the American College of Cardiology.
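The impact-factor arithmetic behind this incentive is simple. A minimal sketch, using the standard definition (a given year's citations to the items a journal published in the previous two years, divided by the number of citable items in those two years) and hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Journal impact factor for a given year: citations received that year
    to items published in the previous two years, divided by the number of
    citable items published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 500 citable items in the previous two years,
# cited 7,500 times this year.
print(impact_factor(7500, 500))  # 15.0
```

Because every citation of a paper, even a critical one, raises this number, a flashy but flawed study can boost a journal's score.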
But there is another level at which poor-quality research should be filtered: the medical media. Medical journalists have specific training in reading medical manuscripts with a skeptical eye. They also, however, have deadlines, and they need to publish their own material to make a living. Furthermore, a headline like “running kills” or “running is as bad as couch surfing” is just too tempting to media outlets around the world, like the BBC and Daily Telegraph. These media outlets rely on readership to survive, after all.
So, possibly because it was a controversial, flashy topic, this research study got through the filters of peer-review and medical journalists to reach the public. This is the point at which it is important to discuss social responsibility.
When information is published on the internet, it never, ever, truly goes away. For decades to come, this study and the media response to it can be taken out of context, and innumerable people who could benefit from running may use this information as an excuse to be inactive. The consequences to public health of sharing faulty information are immeasurable.
But here are more quotes from the new BBC article:
The lead author of the study, clinical cardiologist Dr Peter Schnohr, now concedes that he didn’t have the evidence to say that strenuous jogging is bad for you.
“We should have said we suspect that it is so, but we can’t say for sure. Everybody makes some mistakes in papers,” he says.
But Schnohr thinks experienced readers of research papers would have realised this – he says it was obvious from the statistical analysis that you couldn’t have confidence in the claim that strenuous joggers have the same average life expectancy as those with a sedentary lifestyle.
“It shouldn’t have been misunderstood,” he says, because if you go into the statistics the limits of the research are clear. “If you normally read papers you could say ‘Ah! This is not good statistically – this is too thin.'”
Even so, he doesn’t regret the shock-horror headlines, and isn’t worried that they might have put people off jogging.
“I don’t think so… you always have deaths in marathons and so on,” he says – and suggests that with regular check-ups, cases of heart disease may have been detected and some of these fatalities “could have been prevented”.
So the lead author of this study, Dr. Schnohr, just shrugged.
From the BBC article in which Dr. Schnohr is interviewed, it is clear that he had personal bias in designing and submitting this manuscript:
Schnohr remains convinced that although he hasn’t proved it this time, strenuous jogging might be bad for you.
Revelations like this are a great opportunity for medical journals to disassociate themselves from mistakes they have made. Instead, as quoted again in the BBC:
The Journal of the American College of Cardiology, which published the research, stands by the paper.
“Some news articles appear to have misinterpreted and exaggerated” the story leading to “misleading headlines,” says editor-in-chief Valentin Fuster.
But the fact remains that the paper’s headline conclusion included the statistically insignificant finding about strenuous jogging – something Peter Schnohr admits shouldn’t have been highlighted.
The public is constantly subjected to information about health, and it is clearly difficult, even for knowledgeable people, to differentiate between high- and poor-quality information. But if information is new and has a splashy headline, please do yourself a favor and be skeptical. Medical journals usually get it right. The medical news media usually gets it right. But the system is not perfect.
Published April 7, 2015.
Schnohr P, O’Keefe JH, Marott JL, et al. Dose of jogging and long-term mortality. J Am Coll Cardiol 2015;65:411-419.