A quick thread on the data behind how you run interviews, and how much difference it makes...
1/
In 1998 Schmidt & Hunter published their study analysing 85 years of research on candidate assessment techniques, ranking them all by how well they predicted candidates' performance on the job. It's the best resource I know for this... (for now)
2/
There are lots of different ways to interview, so they split them into two broad types:
- Structured interviews ask consistent questions and have some kind of marking scheme
- Unstructured interviews are when you put people in a room and have the interviewer form an opinion
3/
Take a moment to consider which type you'd expect to perform better. I'm curious what your first reaction is, so please vote & share.
Your instinct might be to scroll down to see, but it's far more interesting not to, right?
4/
What they found was that many popular methods are deeply ineffective... which results in hiring processes that need large numbers of stages in order to filter out bad hires... the filters are weak.
As for interview methods...
5/
They found that:
- Unstructured interviews correlate at r=0.3 with later performance
- Structured interviews correlate at r=0.5
(As a reminder, 0 is totally uncorrelated, 1 is perfect prediction)
6/
That means that unstructured interviews are typically weaker at picking the best hires.
As an aside, structured interviews also come with another benefit... they open the door to collecting data (I'll come back to this).
7/
But r=0.3 and r=0.5 are just numbers; they're hard to visualise.
So, I simulated 100 candidates scoring with those same correlations to see if that made it any clearer.
Let's see how often you select someone from the top 10
8/
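A simulation like the one behind these charts is easy to reconstruct. Here's a minimal sketch (my own reconstruction, not the author's original code), assuming interview score and true performance are jointly normal with correlation r, and that the top 10 interview scorers go through to the next stage:

```python
import numpy as np

def simulate(r, n=100, top=10, trials=2000, seed=0):
    """Average number of true top-`top` performers among the `top`
    candidates with the highest interview scores, over many trials."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        true_perf = rng.standard_normal(n)
        noise = rng.standard_normal(n)
        # Interview score correlated with true performance at exactly r
        score = r * true_perf + np.sqrt(1 - r**2) * noise
        selected = set(np.argsort(score)[-top:])    # top 10 by interview score
        best = set(np.argsort(true_perf)[-top:])    # actual top 10 performers
        hits += len(selected & best)
    return hits / trials

print(f"r=0.3: {simulate(0.3):.2f} true top-10 candidates selected on average")
print(f"r=0.5: {simulate(0.5):.2f} true top-10 candidates selected on average")
```

Note this averages over many sets of 100, so any single set (like the one in the charts below) can land well above or below the average.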
This is what UNSTRUCTURED interviewing looks like.
It shows how good you THINK the candidate is (vertical) against how good they actually are (horizontal)
Notice how rarely you'll bring an actual 'top 10' candidate through to the next stage... just once for this set of 100.
9/
It's worth reiterating.
This interview has brought through only 1 in 10 of the best candidates (top 10%).
What's more, this interview has green-lighted 3 candidates who are actually BELOW AVERAGE
(same chart, different part highlighted)
10/
STRUCTURED interviews do better. In this set at r=0.5 you can see that, of the 10 people you've brought through to the next stage:
* 5 were in the actual top 10%
* 2 were below average
(Lots of caveats apply, not general rules, etc)
11/
BUT even at r=0.5 we're still losing half of the best hires at each round.
A similar problem exists when picking which CVs to call in for their first interview... biographical information predicts at r=0.3 or less so you're leaking talent there too.
12/
On top of the inaccuracy of low-predictivity interviews, there's another very serious problem.
When your interviews aren't driven by objectivity and predictivity, what ARE they driven by?
Culture? Rapport?
13/
On culture: unless you're defining a framework to understand and assess culture, you're giving your interviewers no help escaping Kahneman's System 1 thinking.
System 1 is gut, it's where bias lives.
14/
On rapport: you need to recognise that rapport is heavily driven by shared experience, which means... *dramatic music* ... it adds bias to your outcome.
15/
Interestingly, rapport is much easier to rise above if your interviews don't rely on small talk.
The structure is already decided, you already have a rough outline, you can just explain the process and get on with it.
16/
Side note: I personally find that since the candidate is being measured by the structure, I as the interviewer no longer feel in opposition and can step to one side and just facilitate instead, which makes it feel far less stressful.
17/
By bias I'm talking about systemic error... predictable patterns of mistakes that steer your team away from the ideal outcome.
A biased process has a lower signal/noise ratio, so it hires lower performers than an unbiased one, on average.
18/
Earlier in the thread I mentioned collecting data being beneficial.
Is an interview question valuable if the people who score highly on it aren't the people you go on to hire?
Is a question valuable if your interviewers can't agree which answers are good?
19/
Is a question valuable if it fails to separate the field, e.g. by being too hard or too easy?
Is a question valuable if its wording or nuance puts some demographic groups at an unfair disadvantage?
Which interviewer is most/least predictive of who we hire?
20/
Being able to answer those kinds of questions creates an environment in which you can see problems and iterate towards great hiring... and in my (biased) view a move to this being more widespread is inevitable.
Anyone else will be running in clogs, and will be left behind
/end
(Nerd note: Schmidt & Hunter published an update to their 1998 paper more recently, but the changes to the predictivity numbers come with caveats that I think are misleading without a lot of explanation, so I find it easier to use the 1998 paper)