Should the Media Report on Health Research?

When findings and recommendations change all the time, where does media coverage fit in?

There is perhaps no better illustration of the malleability of health research than fat.

In the 1990s, low-fat was the health regimen of choice. Fat was reviled, sucked out of everything from yogurt to cookies in the name of wellness. Fast-forward to today, and experts are calling that advice misguided—saying that, oh wait, the low-fat craze may have actually made us fat.

Therein lies the challenge of following scientific research: It changes. It needs to be replicated, verified, tested again and again. It typically takes years to know if a particular breakthrough will, in fact, break through anything at all, or if it’s just data that looked promising for a little while there.

Scientists know that’s how the process works; so do doctors. So, hopefully, do health journalists. But does the public? And do media reports on health studies cloud that process even further?

During any given week, media outlets, including this publication, report on promising-looking health and medical studies. Not every study, however, will be replicated; not every finding will turn out to be correct; not every recommendation will still be lauded in 10, or even two, years. Some conclusions change naturally as knowledge develops; others were likely reached haphazardly in the “publish or perish” environment of academia. The question, then, becomes whether covering new, sexy studies is a service to the audience or a hindrance.

David Hunter, acting dean of the Harvard T.H. Chan School of Public Health, says he supports health reporting, but sympathizes with the public’s research exhaustion.

“People can’t be blamed for getting confused, even a bit nihilistic, when one year X causes Y, and a few months later X doesn’t cause Y, and we play out this ping-pong match in public,” he says. “It’s important to recognize that addressing these medical and scientific issues is always a cumulative, long-term endeavor.”

“Long-term” may be the operative word in that sentence. Science is a process, and health reporting would do well to make that clear. Harry Orf, senior vice president for research at Massachusetts General Hospital, says it’s important for health stories to note that most discoveries don’t go immediately, or even quickly, from an academic journal to a hospital, pharmacy, or dietitian’s office.

“When a discovery is newsworthy, it’s important for the story to relate the realistic timeframe by which that discovery could potentially impact medicine,” Orf says. “That often is not emphasized enough or early enough in an article.”

“Science is messy. It’s not really linear,” adds Gary Young, director of the Northeastern University Center for Health Policy and Healthcare Research. “We’re always trying to refine what we know, and hypotheses change in ways we didn’t think about in prior studies.”

Articles must also, then, describe the study that yielded a newsworthy result. If the research was conducted in rats, for example, it’s irresponsible not to say so. If a study was small or otherwise compromised, readers deserve to know. “Every study has its strengths and weaknesses,” Hunter says. “It is important to be adequately skeptical.”

It’s also important for readers to know that some study findings will never translate to clinical applications. For example: A 2003 paper from Greek researchers examined 101 drugs that, between 1979 and 1983, were billed as having “novel therapeutic or preventive promise.” By the time the paper was published, only one of those wonder drugs was still in regular use.

Replication is another issue. Adjusting the tiniest variable in a study can lead to a totally different result—but does that mean the first study was wrong? Should it be ignored as soon as a contradictory result is produced?

“It is a concern,” Orf says of studies that can’t be repeated, noting that many journals and research institutions are adopting stricter replication standards to address the issue. “It’s a minority of cases, but it’s not an insignificant minority.”

As such, Harvard Chan’s Hunter recommends that both journalists and consumers put more weight on meta-analyses and cumulative surveys than on new, flashy, one-off studies. “If I had one concern about how the media and medical scientific journals approach the problem, it’s that they do tend to publish single studies and give much lower weight to studies that summarize a large volume of research,” he says.

Still, Young says it’s not damaging for journalists to write about steps in the process, so long as they’re adequately framed and explained with nuance.

“When are you going to decide to report it—after the 100th paper confirms it? By that point, a lot of people could have benefited from having that information,” he says. “There are risks either way, and I’d always lean toward ‘more information is better.’”

Orf echoes Young. “If the discovery has a potential major impact, I think it is worth reporting, but, again, putting it in context,” he says.

Part of the onus of defining a “major impact,” Orf adds, must also fall on the hospitals, universities, and researchers producing these studies and relaying that information to the press. Orf says only about 5 percent of the results that come out of Mass General find their way to mainstream publication, a conscious decision by the hospital’s administration and public affairs department.

“I occasionally get a call from the other side [saying] that we’re being a little too critical about what we [publish],” Orf says. “Frankly, I’d rather err on that side than the other.”