Only just managed to get to the new @HealthFdn evaluation of the Mid-Notts integrated care model. Yes, it shows a reduction in A&E visits, but the evaluation methodology tells us so much more about the nature of integrated care. GEEKS read on:
What was the intervention? It was a real mix of different interventions: a proactive home care service, home support, a 24/7 care navigator. Lots of things to support prevention and early intervention for people at risk of costly and dangerous emergency admissions.
That's lesson number 1: no integrated care intervention happens in isolation. And they keep changing as new elements are introduced. It's a real headache for evaluators, but that is the complexity of healthcare, and you can either fight it or go with it. The @healthfdn team went with it.
This complexity challenge plays out in the methodology too. How do you tell whether observed differences are genuine effects of the intervention? Control groups are hard to come by, and natural experiments rarely exist when you are implementing so many different interventions at once.
People are very different, so how do you know you haven't just chosen a bunch of people who were getting better over time anyway, and would have done whatever you did? You need a way of creating a control group.
So the evaluators generated a 'synthetic' control group, picking lots of other people from different parts of the country who matched the people they saw in Mid-Notts. These people looked as much like the intervention group as possible, but without receiving the intervention.
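For anyone curious what that matching can look like in practice, here's a minimal sketch. It is not the @healthfdn team's actual method, and the patient characteristics (age, number of conditions, prior emergency admissions) are hypothetical; it simply pairs each intervention patient with the most similar-looking person from a national candidate pool, using nearest-neighbour matching on standardised characteristics:

```python
# Minimal sketch of building a matched comparison group, NOT the actual
# evaluation method. Assumes hypothetical columns: age, n_conditions,
# prior_emergency_admissions.
import pandas as pd
from sklearn.neighbors import NearestNeighbors

def build_matched_controls(intervention: pd.DataFrame,
                           candidate_pool: pd.DataFrame,
                           features: list[str]) -> pd.DataFrame:
    """For each intervention patient, pick the most similar person from a
    pool drawn from other parts of the country."""
    # Standardise so that no single characteristic dominates the distance.
    mean, std = candidate_pool[features].mean(), candidate_pool[features].std()
    pool_z = (candidate_pool[features] - mean) / std
    interv_z = (intervention[features] - mean) / std

    # Nearest-neighbour match (with replacement, for simplicity).
    nn = NearestNeighbors(n_neighbors=1).fit(pool_z.values)
    _, idx = nn.kneighbors(interv_z.values)
    return candidate_pool.iloc[idx.ravel()]

# Hypothetical usage:
# features = ["age", "n_conditions", "prior_emergency_admissions"]
# controls = build_matched_controls(mid_notts_patients, national_pool, features)
```

Outcomes such as emergency admissions can then be compared between the matched groups; the real evaluation will have handled far more subtleties (matching quality, unobserved differences, and so on).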
With that methodology in place, the final challenge is the classic one for integrated care evaluation: time. We never give enough time to evaluate complex interventions. The @healthfdn researchers (through their clever use of data) were able to come back after six years to evaluate.
The early years didn't look great, but after the first couple of years things start to look up. Emergency admissions are still not great, but length of stay decreases (an indicator of quality and system cost, if not of patient outcomes).
By years 5 and 6, emergency admissions are beginning to decline notably compared with the synthetic control group. There are also promising signs in some other admission rates. A huge achievement and a massive exercise in patience.
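To make that comparison concrete, here's a tiny sketch of the kind of year-by-year comparison being described, again with entirely hypothetical data and column names (group, year, emergency_admissions):

```python
# Minimal sketch of a year-by-year outcome comparison between the
# intervention group and its matched control. All names are hypothetical.
import pandas as pd

def yearly_admission_gap(df: pd.DataFrame) -> pd.DataFrame:
    """Mean emergency admissions per person, by year and group, plus the
    gap between intervention and control."""
    rates = (df.groupby(["year", "group"])["emergency_admissions"]
               .mean()                # admissions per person in that year
               .unstack("group"))     # columns: 'intervention', 'control'
    rates["difference"] = rates["intervention"] - rates["control"]
    return rates

# A 'difference' that turns negative in later years is the pattern the
# thread describes: admissions falling relative to the synthetic control.
```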
What COULD all this tell us about integrated care models? (I'm speculating and moving beyond the evidence now.)
Firstly: evaluating through complexity is important. This means being creative about control groups. It also means allowing time and creating a methodology that is flexible enough to cope with a shifting intervention.
Secondly: metrics like length of stay, and possibly other interim metrics that indicate something important is changing, could be key bellwethers for understanding whether an intervention needs more time to embed.
Thirdly: it takes time to get this right. The perfectly designed intervention is not going to be rolled out in six months. People will tweak it and make it their own. We need to allow that to happen and be realistic about implementation timescales.
Fourthly: if we are investing in evaluation, we need to invest for the long term. You might argue that a six-year timeframe is too long to wait. But if this is how long it takes for an intervention to bed in, there's no point in starting again from scratch; otherwise the clock just starts again.