This is a lovely thread about Bayesian statistics and Bayesian cognition. That being said, I think a *lot* of care is required before treating Bayesian models as normative standards for human cognition. A thread. https://twitter.com/JessicaHullman/status/1319685647481868290
My colleagues and I wrote this paper a few years back. Our main point is simple: to constitute a "normative" standard for human reasoning, a model must be more than merely Bayesian; it must be a Bayesian model for the actual problem people need to solve https://psyarxiv.com/25gcm/ 
An example. Back in 2007, Tom Griffiths & Josh Tenenbaum ran an experiment on how people judge strength of evidence. They gave participants a cover story designed to manipulate their priors, framing the context as either a genetic engineering experiment or a psychokinesis experiment
We first replicated their experiment and reproduced their analysis. As you can see from the figure below, the replication and the reproduction both worked perfectly. In both data sets their Bayesian cognitive model captures the qualitative effects at an aggregate level (roughly)
When you look more carefully at the individual-level data (right panel), however, something is seriously amiss. No matter what priors we feed the Bayesian model (left), the curves relating the "number of successes" to the "strength of evidence" are *much* steeper for the model than for the human participants
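To make the steepness concrete, here is a minimal sketch (mine, not the paper's code) of the kind of evidence computation involved, assuming the standard beta-binomial form mentioned later in this thread: the evidence score is a log Bayes factor comparing "something is going on" (a success rate drawn from a Beta(a, b) prior, the knob the cover story is assumed to turn) against "nothing is going on" (a fair process):

```python
# A minimal sketch, assuming a beta-binomial evidence model; the prior
# parameters a, b stand in for whatever beliefs the cover story induces.
import numpy as np
from scipy.special import betaln

def log_bayes_factor(k, n, a=1.0, b=1.0):
    """Log evidence for a non-random process, given k successes in n trials."""
    log_p_h1 = betaln(k + a, n - k + b) - betaln(a, b)  # marginal likelihood, theta ~ Beta(a, b)
    log_p_h0 = n * np.log(0.5)                          # likelihood under a fair process
    return log_p_h1 - log_p_h0

# Each extra run of successes multiplies the odds, so the evidence
# curve climbs steeply in k -- steeper than human judgments do.
for k in (55, 65, 75, 85, 95):
    print(k, round(log_bayes_factor(k, 100), 2))
```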
This seems bad. Should we conclude that Tom & Josh did a bad job analysing the data? No. Should we conclude that people are stupid or not Bayesian reasoners? Also no. Rather than insult the intelligence of our colleagues or our participants, we tried to understand what happened
Upon closer inspection, it seemed to us that the cover story manipulation might affect not just people's prior beliefs, but also their trust in the value of each generated datum. If an observed datum is unconnected to the phenomenon of interest, a rational learner ignores it
This skepticism is not incorporated within the original Bayesian model, but it seems terribly sensible in real life. So we built a "distrustful Bayesian" model (solid black lines) that allows the learner to discount some proportion of the data, and... it works beautifully
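For intuition, here is one simple way to realize that discounting. The mixture form below is my own assumption, not necessarily the paper's exact likelihood: each trial is genuinely connected to the phenomenon with probability `trust`, and pure noise otherwise. Setting trust = 1 recovers the standard beta-binomial model above.

```python
# A minimal "distrustful Bayesian" sketch, under an assumed mixture
# likelihood: each trial reflects the true process with probability
# `trust` and behaves like a fair coin otherwise.
import numpy as np

def log_bayes_factor_distrustful(k, n, trust=0.5, a=1.0, b=1.0):
    """Log evidence for a non-random process when only some trials are trusted."""
    theta = np.linspace(0.001, 0.999, 999)            # grid for numerical integration
    dtheta = theta[1] - theta[0]
    prior = theta**(a - 1) * (1 - theta)**(b - 1)
    prior /= prior.sum() * dtheta                      # normalized Beta(a, b) density
    p = trust * theta + (1 - trust) * 0.5              # per-trial success probability
    like = p**k * (1 - p)**(n - k)                     # sequence likelihood at each theta
    log_p_h1 = np.log(np.sum(like * prior) * dtheta)   # marginal likelihood under h1
    log_p_h0 = n * np.log(0.5)                         # likelihood under a fair process
    return log_p_h1 - log_p_h0

# Lower trust flattens the curve: large success counts yield much
# weaker evidence, closer in shape to the human judgments.
for k in (60, 80, 100):
    print(k, round(log_bayes_factor_distrustful(k, 100, trust=1.0), 1),
             round(log_bayes_factor_distrustful(k, 100, trust=0.5), 1))
```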
It would be so easy to look at the discrepancy between the original model (a standard beta-binomial model widely used in statistics) and human behaviour and then conclude that humans are not good at judging strength of evidence. I think this is a mistake.
As I have said many times, in statistics the probabilistic models we build operate in a toy world; human reasoners live in the real one. When human judgment disagrees with your formal probabilistic model, do not be too quick to assume the human is wrong https://psyarxiv.com/ygbjp/
To sum up: I am a strong advocate of Bayesian models for human cognition. I think probabilistic language is a useful tool for formalising theories of how people think and reason. What I am skeptical of, however, is the notion that Bayesian models imply strong normative claims
Of course, as always... I might be wrong 🙂