I really like this whole research stream from @aaronclauset @DanLarremore @alliecmorgan @samfway. May their supplementals be read and their altmetrics counts runneth over. Adding a few notes that are absent from the RTs and replies I saw in the @Sci_j_my thread below...
1/n https://twitter.com/Sci_j_my/status/1276880222126776323
And just to clarify, my idea of a fun Saturday night IS to post unsolicited comments on the literature in a feed with E[likes|tweeting] = 0
2/n
For the headline analysis they used faculty hiring patterns to construct prestige hierarchies, then used those hierarchies to predict hiring. They find inequality worse than in the US income distribution...
3/n
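(Aside, not from the original thread: a minimal sketch of the kind of inequality comparison in 3/n, using a standard Gini coefficient. The placement and income vectors below are made-up numbers for illustration only, not the authors' data or code.)

```python
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative 1-D array (0 = perfect equality, higher = more concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))          # sort ascending
    n = x.size
    # Standard formula: G = 2 * sum_i(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical faculty placement counts: a few departments place most graduates.
placements = np.array([120, 80, 40, 15, 8, 4, 2, 1, 0, 0])
# Hypothetical household incomes, less concentrated than the placements.
incomes = np.array([20, 35, 48, 55, 62, 70, 85, 110, 160, 400])

print("Gini(faculty placements):", round(gini(placements), 2))
print("Gini(incomes):           ", round(gini(incomes), 2))
```

In this toy data the placement Gini comes out well above the income Gini, which is the direction of their headline result; the actual magnitudes of course depend on the real data.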
But more equal than many other aspects of science. See: de Solla Price's 'A General Theory of Bibliometric and Other Cumulative Advantage Processes'
Merton's 'Matthew Effect in Science'
Cole and Cole's 'The Ortega Hypothesis'
and every Google Scholar result for 'scale-free' + 'science of science'...
4/n
Back to hiring... For their hierarchy, AUC = 1 would mean prestige perfectly predicts the direction of hires, i.e. hiring is maximally hierarchical and highly unequal. And if the correlation between individual ability and institutional prestige is < that AUC, then hiring is inequitable on the basis of ability at the population level...
5/n
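(Another aside: one way, not necessarily how the authors compute it, to read that AUC is as the share of hires that move "down" the prestige ranking, with ties counted as coin flips. A minimal sketch with hypothetical ranks:)

```python
# Toy sketch (assumptions, not the authors' code): score how well a prestige
# ranking predicts the direction of hires. Each hire is (phd_rank, hire_rank),
# where rank 1 is most prestigious. 1.0 = perfectly hierarchical hiring,
# 0.5 = the ranking carries no directional information.
def hierarchy_auc(hires):
    score = 0.0
    for phd_rank, hire_rank in hires:
        if phd_rank < hire_rank:      # hired below the PhD institution, as predicted
            score += 1.0
        elif phd_rank == hire_rank:   # same rank: counts as a coin flip
            score += 0.5
    return score / len(hires)

# Hypothetical hires over a 5-institution ranking.
hires = [(1, 3), (1, 2), (2, 4), (2, 1), (3, 5), (4, 4), (5, 2)]
print(hierarchy_auc(hires))  # ~0.64 for this toy list, in the ballpark of the AUCs quoted in 6/n
```

An AUC of 0.5 would mean the ranking tells you nothing about hiring direction; 1.0 would be the maximally hierarchical case from 5/n.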
Ceteris paribus, etc.
Their AUCs (.58-.67 IIRC) aren't great, but again, the imperfection is part of the result. In their other papers they use that prestige measure to ask (1) whether the observed inequality is inequitable and (2) what the inequity means for careers and knowledge production.
6/n
Sidenote: There's a limitation that their data covers people who were hired as professors at US institutions. Despite that, I think the construct and face validity of their prestige measurement are pretty strong. But it's tough to interpret the supply side.
7/n
Addendum to the sidenote: Fig S3 shows that uncertainty is greater for mid-prestige institutions than in the tails. It's not clear that it would still be convex if they used the PhD cohort instead of the hired cohort. It could flatten a bit.
8/n
Almost done. From 6/n: one of the things they acknowledge in those other papers but don't test, and which isn't well discussed in the other thread, is the selection process into the PhD.
9/n
That selection process might have the weakest predictive criteria of any step between elementary school and tenure... Delamont & Atkinson argue this marks a shift from tasks with largely certain outcomes to largely uncertain ones.
10/n
It's also true that most students have poor information about themselves and the program during that process. But this isn't universal across the ability distribution. Zuckerman argues convincingly that elite students and elite professors are better at finding each other...
11/n
Which suggests, among other things, that the low uncertainty at the upper tail of the prestige distribution might be due to more efficient assortative selection between students and elite professors, who are overrepresented there...
12/n
And that weak selection in the middle of the prestige hierarchy probably has heterogeneous effects on students' adaptation to increased uncertainty, in addition to the weak selection on ability. s.t. a job market that is equitable | ability at exit can still be inequitable | the counterfactual.
13/n
Selection in the lower tail may be similarly efficient to what happens in the upper tail, even though it likely operates on much worse information, which is just another explanation for Fig S3. All of which goes to say: we need more research on the production of scientists.
14/14
To clarify 10/n... The shift means there isn't a performance record to base evaluation on. This is a little bit better by the time a PhD student graduates and much better by tenure decision time. Whether we use the right measures is a different story.