Those who spend many years in school to become physicians, lawyers, engineers, and the like are taught to be skeptical of what they are told and what they read. They demand a higher level of proof, and are perhaps not as trusting of some claims as their friends. Sometimes they are accused of being overly doubting, or even cynical. Occasionally they hear the phrase: "Don't be so negative."
One group that doesn't like "negative" is the editors and reviewers of our medical publications, be they peer-reviewed or not. Editors want to print articles that will excite the doctors who read them, that will provide information to change medical practice, and that will have a big "impact."
In fact, journal editors routinely follow something called the "impact factor." Articles with some exciting piece of news that makes a big splash tend to be cited frequently in subsequent papers on that topic. If the papers in your journal get cited a lot, then you are rewarded with a high impact factor. For a journal editor, having the highest impact factor is akin to a mother having a child accepted to Harvard. It confers bragging rights.
One sure way not to get a big impact factor is to publish many studies with unexciting results: studies that fail to show significant differences between the new treatment and the pre-existing standard of care, or in which the new drug performs no better than placebo. In short, avoid "negative" studies.
Here's a hypothetical example. Twenty doctors in 20 cities independently decide to do a study in their practices comparing the results of LASIK performed with laser A versus laser B. Nineteen of them find no difference between the lasers. Unexcited by this finding, most will likely decide not to bother submitting their results for publication.
If they do, the unimpressed reviewer or editor might well decline to publish this unexciting study. But the 20th doctor does find a difference (as the odds suggest he or she might if a p value of 0.05 is used). That study does get written and submitted, and the odds are high it will appear in print.
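The arithmetic behind that 20th doctor is easy to check. If there is truly no difference between the lasers and each study uses the conventional 0.05 significance threshold, the chance that at least one of 20 independent studies turns up a "significant" result by luck alone is substantial. A minimal sketch of the calculation (assuming the studies are independent):

```python
# Probability that at least one of 20 independent studies, with no true
# effect present, reports a "significant" result at the p < 0.05 level.
alpha = 0.05      # significance threshold used by each study
n_studies = 20    # number of independent studies

# Chance a single study correctly finds nothing is (1 - alpha);
# chance all 20 do so is (1 - alpha) ** 20.
p_at_least_one = 1 - (1 - alpha) ** n_studies
print(f"{p_at_least_one:.2f}")  # roughly 0.64
```

In other words, under these assumptions the "positive" study is more likely than not to appear, even when no real difference exists.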
The busy doctor, ignorant of the other 19 studies that found no difference, reads this one article that found laser A to be superior, thinks about it, is glad that he or she uses the better laser, or worries about whether a switch should be made. The company that makes laser A has its representatives distribute copies of this article "proving" superiority of their product.
None of this hoopla would result if the other 19 negative studies were published along with this positive one. In short, there's much ado about nothing, because positive articles are so much more likely to get written and published.
Although I should know better, my own experience has led me down this path.
Studies with no statistically significant differences have typically been rejected, or never even written up. I once showed my residents and fellows a rejection letter from a journal that described my study as "well-designed but with negative results."
But every positive study did get written up and published.
When reviewing submissions now, including for Ophthalmology Times, I try to avoid falling into this trap. But I observe in myself a strong tendency to prefer "positive" results and to assume that they would be more interesting to our readers.
Peter J. McDonnell, MD, is director of The Wilmer Eye Institute, The Johns Hopkins University School of Medicine, Baltimore, and chief medical editor of Ophthalmology Times. He can be reached at 727 Maumenee Building, 600 North Wolfe St., Baltimore, MD 21287-9278. Phone: 443/287-1511; Fax: 443/287-1514; E-mail: email@example.com