Inspired by an exhaustive list of medical trial acronyms, last year I wrote a post that still makes me smile (the part where my mind's eye conjures Batman at a medical conference). Here's an excerpt:
It's comforting to see our best minds are studying LIFE and LIMB, MIRACLE and MIRAGE. The aforementioned CALM is balanced with EXCITE. You can also learn the difference between SYMPHONY and OPERA. As for more conventional names: ADAM, DAVID, MONICA, RUTH, and SONIA are all ALIVE, with VIGOR and GUSTO.
There are too many more to mention, though I was a little dismayed to find the really memorable ones were often sponsored by pharm companies. Though they're catchy, I have no idea if the studies are well-conducted, or tell us anything important. For this reason, I'd like to organize a study examining whether clinical trials with fancy acronyms have higher impact than serious studies denoted by plain collections of letters. We'll call it ABSURD -- Acronym Behavior overShadowing Useful Results and Data.
Well, last week, the NEJM (um, the New England Journal of Medicine) published such research -- Acronym-named Randomized Trials in Medicine - The ART in Medicine Study (I like my proposed title better). An excerpt is reprinted below:
As compared with studies without acronym names, acronym-named studies had higher Jadad methodologic quality scores, enrolled five times as many patients, had follow-up periods half as long, but were not more likely to report positive results. Acronym-named studies were four times as likely to be funded by the pharmaceutical industry and eight times as likely to be authored by an industry employee.
Acronym-named randomized trials were cited at twice the rate of trials that were not named with acronyms (13.8 vs. 5.7 citations per year)...
Although other explanations are possible (for example, exemplary investigators may generate both clever acronyms and important research), these results support the hypothesis that naming randomized trials with an acronym may enhance the citation rate. ...
Enhanced attention to and recall of studies through the use of acronyms may facilitate the appropriate translation of research findings into clinical practice. If acronyms exert influence independently of normative markers of clinical credibility, however, such influence is not rational scientifically, even if it is understandable psychologically. Consequently, this subtle linguistic tool could undermine evidence-based practice. The observed close association between acronym use and sponsorship by the pharmaceutical industry amplifies this concern.
Stanbrook et al. deserve credit for sifting through the literature, quantifying 173 studies, and drawing some important conclusions about article quality and citation rate.
But these authors' work is part of a burgeoning field of research ON research -- a meta-analysis, if you will, of how science is conducted and disseminated. There are now whole conferences devoted to studying peer review, bias, and the impact of "impact factor" (Stanbrook originally presented this research at one such event).
Thus, we can expect more research into this topic. Is it easier to apply an acronym scoring system like APACHE, or an eponymous one, like Ranson's criteria? Do patients fare better when they're told they suffer from POEMS, or the Crow-Fukase syndrome of polyneuropathy, organomegaly, endocrinopathy, monoclonal gammopathy, and skin changes?
I wish to contribute to such research, but right now all I can offer is another title: the Study of Medical Acronyms in Reinforcing Memory. It may be unfairly catchy, but I think its very name acknowledges something overlooked: the attractiveness of meta-research may well be disproportionately higher than its actual usefulness. There are more important things to study, debate, and get sanctimonious about.