This thread is particularly interesting to me because it merges my research life with dinner-table conversations with my wife, a social worker who has spent years serving communities that interact with child-protective services and the criminal justice system.

(a brief thread) https://twitter.com/msalganik/status/1263886774746656768
The study "suggest[s] practical limits to the predictability of life outcomes." This is useful additional evidence that algorithmic systems that partially determine people's lives based on their histories can be problematic at best, but something else interest's me here:
This study suggests the best we can do in terms of predictive modeling can still be pretty bad, but in my wife's line of work, "the best we can do" is still a far-off dream.
The distance between the state of the art offered here and the partial, piecemeal use of the research instruments (surveys, interviews, checklists, etc.) upon which these predictive models rely is, honestly, terrifying.
At one recent dinner conversation, my partner was telling me about a questionnaire used to determine whether and what services would be offered to people upon release from prison in a certain region.
The questionnaire appeared to be a chimera, with questions picked hodgepodge from several different common research instruments, based, we suspected, on what some state employee several years ago happened to think was important in determining someone's ability to reform.
The questionnaire also included some scores assigned by the interviewer, such as the subject's level of aggression during the interview. The interviewer would be untrained.

The answers would then be tallied, and a numerical score produced to determine post-prison services offered.
We were looking at the end of a long game of telephone. Some studies years ago offered the best they could for predicting life outcomes (which, as the quoted thread shows, was still likely not great), and they got refracted and contorted several times before being put into use.
The questionnaire results were tallied in a partially automated, aging Excel spreadsheet; the final tally was entered into some database, and then someone else made a decision based on that number.
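To make the crudeness of that final step concrete, here is a minimal sketch of a tally-and-cutoff scoring of the kind described above. It is purely illustrative: the item names, scales, weights, and cutoffs are invented, not taken from any real instrument or jurisdiction.

```python
# Hypothetical illustration only: item names, scales, and cutoffs are invented.

# Questionnaire answers, each scored 0-3 (e.g. "never" .. "often").
answers = {
    "stable_housing": 1,
    "employment_history": 2,
    "substance_use": 3,
    "family_support": 0,
}

# Interviewer-assigned rating, 0-5, recorded by an untrained interviewer.
interviewer_aggression_rating = 4

# The spreadsheet-style step: a straight sum, no weighting, no validation.
total = sum(answers.values()) + interviewer_aggression_rating

# A numeric cutoff then determines the services someone is offered post-release.
if total >= 10:
    tier = "intensive supervision, minimal services"
elif total >= 5:
    tier = "standard re-entry services"
else:
    tier = "minimal follow-up"

print(f"score={total}, services offered: {tier}")
```

The point of the sketch is how little stands between the raw answers and the decision: a sum, a cutoff, and whatever assumptions were baked in years earlier.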
This wasn't a cutting-edge predictive system with fancy dashboards and mysterious black boxes; it was just a poorly designed study instrument that by happenstance ended up as a significant part of a process that determines people's lives post-prison.
My colleagues and I make a lot of noise about the potential damage from black-boxed, often inaccurate state-of-the-art predictive systems whose results directly influence people's lives. And they are a problem, especially in bigger cities! But in conversations with my partner,
those state-of-the-art systems seem like a red herring. It feels like the way bureaucracies ingest and process predictive life-outcome research, and then black-box it not through obfuscation but through the accidental fog of bureaucratic machinery, could be the greater object of focus.
Looping this back to the above-quoted thread (https://twitter.com/msalganik/status/1263886774746656768), I appreciate that the point isn't about the flaw in some specific predictive system, but about the limits of predictability in these sorts of cases.
It offers a blanket critique that can transcend the distance between state-of-the-art systems and the way such systems appear to get used in practice.
I'll caveat this thread by saying: please take these comments as the morning musings they are intended to be. I am not an expert on bureaucracies or the prison system (though I happen to live with one), but I do realize plenty of researchers actively study this area.