Today’s #tweetorial—positivity for #causalinference: what it is and why it matters.

First, what it’s not: the type of positivity I’m going to talk about is not a positive mental attitude!
The technical definition of positivity is that the probability of having a particular level of exposure, conditional on your covariates, is greater than 0 and less than 1, for all strata and exposure levels of interest:

0 < P(A=a | L=l) < 1, for all exposure levels a and all covariate strata l
But what does that actually mean?

If you want to compare two types of treatment, then you have to have people in your data who are able to & sometimes will receive all relevant treatment options!
This is a fundamental requirement for #causalinference, but also just plain common sense.

You can’t compare apples to oranges if you’ve only ever seen apples!
When we want to understand causal effects, we need to check for potential positivity violations.
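
Here’s a minimal sketch of what that check could look like in Python with pandas — the data & the column names `A` and `L` are made up for illustration:

```python
import pandas as pd

# Hypothetical data: binary exposure A, categorical covariate L.
df = pd.DataFrame({
    "L": ["young", "young", "old", "old", "old"],
    "A": [0, 1, 1, 1, 1],
})

# Estimated P(A=1 | L) within each covariate stratum.
p_exposed = df.groupby("L")["A"].mean()

# Flag strata where positivity looks violated in the data:
# everyone exposed (P=1) or no one exposed (P=0).
violations = p_exposed[(p_exposed == 0) | (p_exposed == 1)]
print(violations)  # here the "old" stratum has P(A=1|L)=1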

There are two types of violations that can happen:
•random non-positivity
•structural non-positivity
Random non-positivity happens when, by chance or because of a small sample size, some types of people in your study are only exposed or only unexposed.
Random non-positivity is most likely for strata of continuous variables. E.g., enroll 45-65 year olds but by chance recruit only four 52 year olds, none of whom are/get exposed. Luckily, for continuous vars, it’s usually OK to borrow info from similar people (i.e., interpolation).
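
A toy sketch of that “borrowing” idea: instead of stratifying exactly on age, fit a smooth propensity model so neighbouring ages share information. The simulated data & the choice of logistic regression are just for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated study: ages 45-65, exposure more common at older ages.
age = rng.integers(45, 66, size=500)
p = 1 / (1 + np.exp(-(age - 55) / 5))
exposed = rng.binomial(1, p)

# By chance, a single-year age bin can contain no exposed people,
# which looks like non-positivity if we stratify exactly on age.
# A smooth model lets nearby ages lend each other information.
model = LogisticRegression().fit(age.reshape(-1, 1), exposed)

# Predicted P(A=1 | age) is strictly between 0 and 1 for every age.
print(model.predict_proba(np.array([[52]]))[:, 1])
```
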
Structural non-positivity is trickier, because it can look the same in your data, but the meaning is different.

Imagine the exposure in the scenario above was “is on a 2019 50-under-50 list”. The 52 year olds *can’t* be exposed: there is structural non-positivity.
When we have structural non-positivity, the causal effect in that group is meaningless because they can’t ever (or will always) have exposure.

The solution is to exclude them from our data & inference entirely!

Eg to learn about 50-under-50 lists, only enroll people under 50!
How does this play out in designing studies?

In an #RCT, randomization performs 2 important functions. The first, and most commonly discussed, is that it removes confounding. The second is that it ensures positivity: everyone has a chance of being assigned either treatment.
Importantly, this is true even if we don’t randomize 1:1. We still have positivity if we assign twice as many people to treatment compared to control.

But we *don’t* have positivity if we assign everyone to treatment (ie “single arm” trials aren’t really trials, don’t @ me!)
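
A quick toy simulation of those assignment probabilities (numbers made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 9000

# 2:1 randomization: everyone has P(treated)=2/3 -- positivity holds.
two_to_one = rng.choice([1, 1, 0], size=n)
print(two_to_one.mean())  # ~0.667, strictly between 0 and 1

# "Single-arm trial": everyone has P(treated)=1 -- positivity fails,
# so there is no comparison group to support a causal contrast.
single_arm = np.ones(n)
print(single_arm.mean())  # exactly 1
```
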
So, one reason RCTs work is because they have positivity. But, just like with confounding, random assignment only guarantees positivity *at baseline*.

When we have sustained treatments, like medication use, we can get post-randomization positivity violations!!
Say we want to look at statin use over time versus no use ever. At baseline, we enroll people with no contraindications & assign them to statins or no statins.

People with contraindications are excluded before randomization and can’t be in either the treatment or control arm.
But, life happens, and some people will develop contraindications over follow-up.

In the intention-to-treat analysis, that’s fine, because the ITT is the effect of *assignment*.

But if we want to estimate the effect of the statins themselves, we need to build in rules for how to handle these people.
If everyone who develops a contraindication stops statins, we have structural non-positivity for statins among people with contraindications.

And so, we couldn’t estimate a per-protocol effect for “continuous statins” vs “no statins ever” even if we could control for confounding.
But maybe a more relevant per-protocol effect would compare “take statins unless a contraindication develops” vs “no statins except if strong indication occurs”.

We probably *do* have positivity for that!
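
One hedged sketch of how you might code adherence to that kind of dynamic strategy — the person-time layout & column names are hypothetical:

```python
import pandas as pd

# Hypothetical person-time data: one row per person per visit.
pt = pd.DataFrame({
    "id":               [1, 1, 1, 2, 2, 2],
    "visit":            [0, 1, 2, 0, 1, 2],
    "statin":           [1, 1, 0, 1, 0, 0],
    "contraindication": [0, 0, 1, 0, 0, 0],
})

# Adherent to "take statins unless a contraindication develops":
# on statins while no contraindication, off statins afterwards.
pt["adherent"] = (
    ((pt["contraindication"] == 0) & (pt["statin"] == 1))
    | ((pt["contraindication"] == 1) & (pt["statin"] == 0))
)

# A per-protocol analysis could censor person-time after the first
# non-adherent visit, rather than excluding these people entirely.
print(pt)  # person 1 stays adherent; person 2 deviates at visit 1
```
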
So in trials, we have positivity for the ITT and for some but not all definitions of the per-protocol effect. What about observational studies?

Unlike an RCT, in an obs study we don’t necessarily have positivity at baseline, and we need to worry about positivity over follow-up.
So, we need to add two things to our design:
(1) when groups can’t be or will always be exposed at baseline, we should exclude them from our study & target pops.
(2) when people enter these groups over follow-up, we should censor them or specify a rule in our exposure definition.
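
A toy sketch of both rules on hypothetical longitudinal data (column names invented for illustration):

```python
import pandas as pd

# Hypothetical longitudinal data: baseline (time 0) plus follow-up.
d = pd.DataFrame({
    "id":               [1, 1, 2, 2, 3, 3],
    "time":             [0, 1, 0, 1, 0, 1],
    "contraindication": [1, 1, 0, 0, 0, 1],
})

# Rule (1): exclude anyone with a baseline positivity violation
# (here, a contraindication at time 0) from study & target pops.
baseline_ok = d.loc[d["time"] == 0].query("contraindication == 0")["id"]
d = d[d["id"].isin(baseline_ok)]

# Rule (2): censor person-time once someone develops the condition
# over follow-up (or handle it with a rule in the exposure definition).
d = d[d.groupby("id")["contraindication"].cummax() == 0]
print(d)  # person 1 excluded at baseline; person 3 censored at time 1
```
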
So, in summary, positivity in #causalinference means we only assess causal effects in people who are eligible for all levels of exposure we care about.

Anyone who would always or never get the exposure should not be included in our study or our target pop.
If you'd prefer to read this without all the gifs or want something to bookmark for easy reference later, I've also posted this #tweetorial on Medium. Check it out here: Positivity: what it is and why it matters for data science https://link.medium.com/BF7A9OERNT 