I just released my first #Rstats package 📦

Here's a quick rundown on how you can use the {metameta} package to effortlessly calculate the statistical power of published meta-analyses to better understand the evidential value of included studies

https://github.com/dsquintana/metameta
First, we'll install the package from GitHub and then load it.
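A minimal sketch of the install-and-load step (using the {remotes} package for the GitHub install, which is one common approach):

```r
# Install {metameta} from GitHub; {remotes} is one common way to do this
# install.packages("remotes")  # if not already installed
remotes::install_github("dsquintana/metameta")

# Load the package
library(metameta)
```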

The package contains two main features:
1. Functions to calculate the statistical power of studies in a meta-analysis

2. A function to create a Firepower plot, which visualises statistical power across meta-analyses
For the first example, we're going to extract some data from a forest plot published in this meta-analysis on the impact of intranasal oxytocin on social cognition https://pubmed.ncbi.nlm.nih.gov/29032324/ 

All we need is the effect sizes and confidence interval info for each study
At a minimum, the dataset for analysis needs three columns, with the following labels:

1. "yi" for the effect size
2. "lower" for the lower CI bound
3. "upper" for the upper CI bound

You can also add a column for the study name, but this isn't strictly necessary
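For instance, a minimal dataset in this format might look like the following (the numbers here are made up for illustration, not the actual values from the meta-analysis):

```r
# Illustrative effect sizes and CI bounds -- not the real study data
dat_keech <- data.frame(
  study = c("Study 1", "Study 2", "Study 3"),
  yi    = c(0.25, 0.10, 0.40),   # effect sizes
  lower = c(-0.10, -0.30, 0.05), # lower CI bounds
  upper = c(0.60, 0.50, 0.75)    # upper CI bounds
)
```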
Assuming we've named this dataset "dat_keech", we're going to use this in the 'mapower_ul' function. This requires three arguments:

1. The data
2. The observed summary effect size estimate
3. The name of the meta-analysis (required for the other core function we'll get to soon)
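Putting those three arguments together, the call might look like this (the argument names and the 0.22 summary effect are assumptions for illustration; check the package documentation for the exact interface):

```r
# Sketch of a mapower_ul() call; argument names are assumed
power_keech <- mapower_ul(
  dat = dat_keech,
  observed_es = 0.22,     # assumed observed summary effect size
  name = "Keech et al."
)
```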
This will give us statistical power for a range of possible "true" effect sizes: the reported summary effect size ("power_es_observed") plus a range of effect sizes from 0.1 to 1 (in increments of 0.1)
In this particular field, the average effect size is around 0.2, and that's probably inflated due to publication bias. So conservatively assuming that 0.2 is the true effect size, power ranges from 7% to 30% in these studies.
There's also a function for meta-analyses that report effect sizes and standard errors. To illustrate, let's extract the data from this forest plot published in this meta-analysis
For this function, only two columns are required

1. "yi" for the effect size
2. "sei" for the standard error
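As before, a minimal dataset for this function might look like this (values are illustrative, not the actual data):

```r
# Illustrative effect sizes and standard errors -- not the real study data
dat_ooi <- data.frame(
  study = c("Study 1", "Study 2", "Study 3"),
  yi  = c(0.30, 0.15, 0.05),  # effect sizes
  sei = c(0.20, 0.12, 0.25)   # standard errors
)
```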
Assuming we've named this dataset "dat_ooi", we're going to use this in the 'mapower_se' function. This requires three arguments, as before

1. The data
2. The observed summary effect size estimate
3. The name of the meta-analysis
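A sketch of the call, again with assumed argument names and a made-up summary effect size:

```r
# Sketch of a mapower_se() call; argument names are assumed
power_ooi <- mapower_se(
  dat = dat_ooi,
  observed_es = 0.18,     # assumed observed summary effect size
  name = "Ooi et al."
)
```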
As before, this will return the statistical power for a range of possible "true" effect sizes.
Sometimes it’s useful to calculate power for a body of meta-analyses, which might be reported in the same article or across articles. However, illustrating the power of individual studies from multiple meta-analyses can be difficult to interpret when there are many studies
An alternative is to illustrate the power per meta-analysis by calculating the median power across studies. We can illustrate this with a “Firepower” plot, which we can create using the 'firepower' function. First, we need to prepare the data. Here, we're combining three MAs
Now we're going to create the Firepower plot using the list we just created
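A sketch of these two steps, assuming each mapower_* call returns its median-power table in an element such as power_median_dat (that element name, and the three power objects here, are assumptions for illustration):

```r
# Combine median-power tables from three meta-analyses into a list
fire_dat <- list(
  power_keech$power_median_dat,
  power_ooi$power_median_dat,
  power_third$power_median_dat   # hypothetical third meta-analysis
)

# Create the Firepower plot from the list
firepower(fire_dat)
```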
Here's our Firepower plot 🎉
You can follow @dsquintana.