Why use a list to measure violence instead of [or in addition to] direct measures? I've seen two main reasons:

1) if you cannot meet the ethical requirements for asking direct measures
2) if you're worried about under-reporting [esp. if your intervention might increase reporting]

2/n
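For readers new to the method, here's a minimal simulated sketch of the list (item-count) estimator these studies rely on. All numbers are hypothetical: the control group counts neutral items only, the treatment group counts the same items plus the sensitive one, and the difference in mean counts estimates sensitive-item prevalence without anyone disclosing it individually.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 neutral items; treatment group also sees the
# sensitive item. Respondents report only HOW MANY items apply to them.
n = 2000
control_p = [0.3, 0.5, 0.2, 0.6]  # assumed prevalences of neutral items
sensitive_p = 0.25                # true (unobserved) sensitive prevalence

control_counts = rng.binomial(1, control_p, size=(n, 4)).sum(axis=1)
treat_counts = (rng.binomial(1, control_p, size=(n, 4)).sum(axis=1)
                + rng.binomial(1, sensitive_p, size=n))

# Difference in mean counts recovers the sensitive-item prevalence
estimate = treat_counts.mean() - control_counts.mean()
print(round(estimate, 3))  # expect roughly 0.25, up to sampling noise
```

The privacy guarantee comes from aggregation: no single respondent's answer reveals whether the sensitive item applies to them (unless their count hits the floor or ceiling — more on that below in the thread).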
Lists were used during COVID-19, when ethical risks were too high to ask questions directly

Most recently @yloxford included these in Peru & India asking youth about family violence - finding increases - 🧵on #Peru 🇵🇪 findings by @MartaFavara 👇🏽

3/n https://twitter.com/MartaFavara/status/1383771454592872461
. @ebert_cara & @jisteinert also use lists to collect measures of severe physical & sexual violence against women & children within an online survey - motivated by ethical concerns 👇🏽

4/n https://twitter.com/a_peterman/status/1374027878065606658
Others used lists as robustness checks on direct measures in impact evaluations (IEs)

In Bolivia & Ethiopia lists confirm impacts on direct reports of violence against girls & IPV, respectively

Bolivia @diegoubfal @selimgulesci: http://documents1.worldbank.org/curated/en/498221613504523709/pdf/Can-Youth-Empowerment-Programs-Reduce-Violence-against-Girls-during-the-COVID-19-Pandemic.pdf

Ethiopia @kotsadam: https://www.cesifo.org/DocDL/cesifo1_wp8108.pdf
What do we learn methodologically from list studies?

First, most evidence points to under-reporting in standard surveys - including for IPV

Nice paper by @ccullen_1 shows lists generally result in ⬆️ prevalence in #Nigeria & #Rwanda

https://openknowledge.worldbank.org/handle/10986/33876

6/n
But, not always!

@taitarasu & @VeronicaFrisan1 compare IPV item by item between the DHS & list methods in Peru - these gaps clearly vary by context & population, including levels of social norms & stigmatization around violence

8/n https://twitter.com/a_peterman/status/1363883786576949248
This work can help us understand IPV in numerous ways, including [but not limited to]:

1) understanding biases in reported data & which characteristics are linked to under-reporting

2) providing robustness checks & alternative analyses for IEs

A useful tool in the toolbox.

9/n
But, their design & implementation can be tricky

@EconCath et al's new paper provides a nice summary of technical issues related to "control" item selection, including:

* Avoid high- (or low-) prevalence control items
* Include negatively correlated control items

10/n

https://www.sciencedirect.com/science/article/pii/S2352827321000677
This often requires piloting control items to figure out what their prevalence is likely to be in the population & how they are correlated with each other, to avoid ceiling & floor effects

In addition, to avoid contrast effects, control items should not "stand out"

11/n
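The pilot logic above can be sketched with a quick simulation. This is an illustrative check (with made-up prevalences, independence assumed): how often would a control-list respondent answer "all" (ceiling) or "none" (floor)? Either outcome breaks the privacy protection, since the answer then reveals every item.

```python
import numpy as np

rng = np.random.default_rng(1)

def ceiling_floor_rates(prevalences, n_sim=100_000):
    """Simulate control-list counts under independent Bernoulli items
    and return the share of respondents at the ceiling and the floor."""
    draws = rng.binomial(1, prevalences, size=(n_sim, len(prevalences)))
    counts = draws.sum(axis=1)
    ceiling = (counts == len(prevalences)).mean()  # everything applies
    floor = (counts == 0).mean()                   # nothing applies
    return ceiling, floor

# Badly chosen items: all high prevalence -> frequent ceiling hits,
# which exposes anyone who also has the sensitive item.
print(ceiling_floor_rates([0.9, 0.85, 0.8, 0.9]))

# A mix of moderate prevalences keeps both rates small; negatively
# correlated items (as recommended above) would push them lower still.
print(ceiling_floor_rates([0.3, 0.5, 0.2, 0.6]))
```

Under independence the ceiling rate is just the product of the item prevalences, which is why one high-prevalence item across the board is so damaging.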
This is particularly an issue when using lists in an IE

Another drawback is the logistical limitation in terms of indicator choice --> A list can only contain 1 violence question

Yet, the typical IPV gold standard combines many different behavioral measures

13/n
This makes comparability hard - & clearly a few list indicators do not equate to gold-standard aggregates (even if the latter are under-reported!)

Also, some ethical questions left "hanging" --> do methods like lists [alone] mean no recommended violence protocols are needed?

14/n
Obviously lots to learn for violence community on lists [& other methods to encourage safe disclosure] - which I find super exciting!

Please add any of your fav list studies for violence that I've missed. . .

end/
You can follow @a_peterman.