Does labour epidural analgesia cause #autism? Probably not (but only an RCT can answer this question). A thread @CASUpdate @APSForg @OttawaHospital @OttAnesthesia @ASALifeline @SOAPHQ @glbryson @Ron_George @antonchau1 @DrLucieFilteau @Dolores_McKeen @EMARIANOMD @IARS_Journals
2/n A #bigdata study finds an association between labor epidurals and subsequent #autism in offspring.
Concerning? Yes.
Will it lead to much concern for women, their partners, and clinicians? Yes.
Robust, clinically relevant findings? Probably not
3/n We always need to be careful interpreting observational studies of treatments, ESPECIALLY if they are retrospective and based on routinely collected data.
Why? Because the data may be inaccurate, and not all of the important variables may be measurable.
4/n Plus, big data can create small p-values that make findings look extra convincing. But big data is not necessarily accurate data; a large sample can simply amplify sources of bias and inaccuracy.
5/n So how can we approach this new study systematically? I suggest 4 steps:
1. Is the treatment variable accurate?
2. Is the outcome variable accurate?
3. Can we measure all the important differences between patients?
4. Are the right methods used the right way?
6/n Is the treatment variable accurate here? Probably. Insurance claims plus pharmacy data are typically quite reliable.
So, we can be confident that people who had an epidural in the data had one in real life and people who didn’t in the data didn’t in real life
7/n Is the outcome variable accurate? This is where we start to run into big concerns. #Autism is defined here using diagnostic codes. Such codes can be accurate, but they need to be validated to prove that a code for #autism lines up properly with a real-life gold-standard dx.
8/n Here we're told they've been validated with a positive predictive value of 88%, which seems reassuring. They've also been used before in similar studies. But digging into the validation refs raises major concerns.
9/n First, PPV from a validation study is almost meaningless when the code is used in a new setting. PPV depends on prevalence (how common the true diagnosis is). In the validation study #autism prevalence was ~45%. In the current study it is ~1.5%. Ugh
10/n We need sens, spec, and likelihood ratios to know whether the codes are accurate enough, but these key values are not reported in the 2014 or 2017 validation references. The only ref I could find pegged #autism code sens=69% and spec=77% (LR+3/LR-0.4).
11/n This means that in a low-prevalence setting the presence of a code only moderately increases the likelihood of a true clinical #autism dx, and the absence of a code doesn't fully rule it out. This is called #misclassification bias.
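The prevalence point above can be checked with a quick Bayes calculation. A sketch only: the sens/spec (69%/77%) and the two prevalences (~45% validation vs ~1.5% current study) come from the tweets above; everything else is the standard diagnostic-test arithmetic.

```python
# Same code accuracy, very different PPV once prevalence drops.
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sens * prev                 # coded AND truly autistic
    false_pos = (1 - spec) * (1 - prev)    # coded but NOT autistic
    return true_pos / (true_pos + false_pos)

lr_pos = 0.69 / (1 - 0.77)   # likelihood ratio of a positive code ~ 3.0
lr_neg = (1 - 0.69) / 0.77   # likelihood ratio of a negative code ~ 0.40

print(f"PPV at 45% prevalence (validation study): {ppv(0.69, 0.77, 0.45):.0%}")
print(f"PPV at 1.5% prevalence (current study):   {ppv(0.69, 0.77, 0.015):.1%}")
```

With these inputs, PPV falls from roughly 71% in the validation setting to under 5% at the current study's prevalence: most coded "cases" would not be true clinical cases.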
12/n People often claim misclassification bias isn't a big deal because it always pulls the effect toward the null (i.e., makes the estimated effect smaller). But work we've done on #OSA shows that this isn't always the case.
13/n In our study ppl w/ a code were sicker in almost every way, meaning the misclassification would likely lead to a bigger/worse effect estimate. This could absolutely be the case in the epidural/autism study.
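A toy calculation shows how this works. The numbers below are hypothetical, NOT from the study: the true risk is identical in both groups (true RR = 1.0), but the code is applied more readily to the sicker/exposed group, and a spurious association appears.

```python
# Sketch of differential outcome misclassification (hypothetical numbers):
# true autism risk is the same in both groups, but code sensitivity and
# false-positive rates are assumed higher among the exposed (sicker) group.
true_risk = 0.015                          # identical -> true RR = 1.0
sens_exposed, fp_exposed = 0.75, 0.02      # assumed code accuracy if exposed
sens_unexposed, fp_unexposed = 0.60, 0.01  # assumed accuracy if unexposed

# Probability of *carrying the code* in each group:
coded_exposed = true_risk * sens_exposed + (1 - true_risk) * fp_exposed
coded_unexposed = true_risk * sens_unexposed + (1 - true_risk) * fp_unexposed

apparent_rr = coded_exposed / coded_unexposed
print(f"True RR: 1.00, apparent RR from codes: {apparent_rr:.2f}")
```

With these made-up but plausible inputs, the codes manufacture a ~60% relative "increase" out of a truly null effect — bias away from the null, not toward it.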
14/n There were big differences in the measured characteristics between the epidural and no-epidural groups (parity, race, comorbidity...), so who knows what could differ in the unmeasured stuff!
15/n Which leads us to indication bias. In observational studies, people who get a treatment like an epidural are almost always systematically different from those who don't. Treatments have indications and clinicians have biases.
16/n in my practice (which includes OB) people who get epidurals are more likely to have co-existing issues and more severe disease overall. I know that admin data doesn’t capture this entirely, ESPECIALLY severity
17/n And to be a source of bias, a factor must increase the likelihood of both getting the treatment (i.e. epidural) and having the outcome (i.e. #autism). We know we can't capture all the factors that lead to getting an epidural, and we don't even know what causes #autism!
18/n there are some holes in the analyses in this paper (and some strengths) but we’re on tweet 18, so let’s move on
19/n At the end of the day we have a flawed dataset (like most datasets) showing a very small effect (an absolute risk increase of 6/1000), and a large sample size amplifies those flaws and biases rather than fixing them.
20/n if patients and clinicians think this is a question that must be answered it can ONLY be answered in a #RCT where epidurals are randomly and blindly allocated and #autism is clinically diagnosed by experts. @CASUpdate @APSForg @ASALifeline
21/n In the meantime, this reminds me of the early-childhood GA exposure and developmental delay issue. Early observational studies using codes, plus animal models, suggested a big problem and caused lots of concern. Big RCTs were then done and found no evidence of a problem.
22/n so if I get a Q from a labouring woman I’m going to say:
23/n
1. Much more uncertainty than certainty here
2. Only RCTs can answer this question and we don’t have them yet
3. RCTs tell us epidurals ⬇️pain/⬆️satisfaction and don’t lead to cesareans
4. Having an epidural is a woman’s choice, and if she wants one she should have one
END
You can follow @mcisaac_d.