1/Here’s a rant about meta-analyses in the CV space. I don’t mean to be disparaging (especially because the first meta-analysis I published was probably the most labor-intensive project I’ve ever done), but I want to speak honestly.
2/Meta-analytic software is SO EASY to use these days. Literally anyone with a computer can generate a summary estimate and confidence intervals by just plugging in a few simple and easily accessible numbers (that are also accessible to everyone else).
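To make the “literally anyone” point concrete, here is a minimal sketch (plain Python, with entirely made-up trial numbers, purely illustrative) of the statistical core of a fixed-effect meta-analysis: inverse-variance pooling of trial-level log hazard ratios. Every input is the kind of number you can pull straight from a published abstract.

```python
import math

# Hypothetical trial-level results (log hazard ratio, standard error).
# These numbers are invented for illustration only.
trials = [
    (math.log(0.85), 0.10),
    (math.log(0.92), 0.08),
    (math.log(0.78), 0.12),
]

# Fixed-effect (inverse-variance) pooling: weight each trial by 1/SE^2.
weights = [1 / se ** 2 for _, se in trials]
pooled = sum(w * est for (est, _), w in zip(trials, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))  # SE of the pooled estimate

# 95% confidence interval on the log scale, then back-transformed.
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"Pooled HR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

That is the entire computation. Everything that makes a meta-analysis trustworthy happens before these few lines ever run.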
3/I KNOW for a fact that there are enterprising fellows, residents, and attendings around the world who often have their software loaded up and ready, introduction/methods/much of the discussion written, and when the next big trial gets presented at a major meeting, the RACE IS ON.
4/Analyses are fervently run, drafts are completed, circulated, and sent to big-name authors around the world for “mentorship”, and once that formality is over, a blast of submissions hits all the major journals.
5/Upon receipt, journal editors have a dilemma. If the journal expedites what becomes “the definitive” meta-analysis, the impact factor will reward it. But at the same time, it’s logical to surmise that there are at least 5 other versions of essentially the same analysis sitting at competitor journals.
6/Add to this the fact that meta-analyses are usually the most labor-intensive type of manuscript to review, assuming the reviewer actually goes back and fact-checks the included studies and tables. And how many actually do that? Partly due to meta-analytic fatigue, it rarely happens.
7/Let’s face it, for anyone who has actually done a meta-analysis the right way, ALL of the work is in setting up the search and inclusion/exclusion criteria & carefully looking at the articles with an a priori statistical plan.
8/A really good systematic review/meta-analysis done this way can be very important, even a work of art.
9/But these works of art are few & far between. For every work of art, there are a TON of crappy (& duplicative) meta-analyses out there, like those that combine obs studies w/RCTs, or those that take a limited set of trials & combine them to chase lower p-values or updated summary estimates.
10/It’s important not to forget that if you combine a bunch of garbage with some nice flowers, the stench of the garbage overwhelms the floral scent. Sometimes a meta-analysis gives the opposite picture of what well-conducted RCTs have shown, which helps NOBODY.
11/I just wonder if it wouldn’t simply be easier for EVERYONE if someone could just run the summary statistic when a new trial comes out and post it online, saving the writing of the introduction, methods, and discussion, as well as the entire review/publication process?
12/The only problem is that guideline writers would have less ability to cite (their own) meta-analyses. But on the flip side, the guidelines process could actually become more scientific/analytic by performing independent analyses that then help to inform the guidelines.
13/Because there is a qualitative difference between truly unique meta-analyses and the cottage industry of other analyses that sucks up so much effort/time, an added benefit would be that truly unique analyses could be reviewed more thoughtfully/fairly, & there would be less meta-analytic fatigue.
END/What do YOU think?
By the way - this was partly inspired by the folks who have been so kind as to post summary statistics on #cardiotwitter recently. Thanks!!
And also, for the record, I’ve authored some of the types of analyses I’m ranting against 😂🙏🏽