While the government has made a number of mistakes (both long-term and short-term) leading to today's A-Level results chaos, there are several other points to take into consideration.
It's obviously horrible having your future decided by a statistical model, but there's no evidence at this point that more people have lost out under this methodology than under the normal examination system.
Lots of people every year will find their actual A-Level results are worse than they expected (I was one of them). I had always assumed this was just a normal error margin that went both ways.
It now seems, though, that the predicted grades given by teachers have consistently overestimated students' actual results. It's only natural to want the best for the young people you teach and to see their potential.
However, this will have led to a tonne of misery over the years, as people lost their Uni places and felt like failures for missing predicted grades that were overestimated in the first place.
This is one of the key long-term mistakes this government has made. Teachers should be told annually how closely their predicted grades track actual results so they can improve their estimates, with the worst persistent offenders reprimanded in some manner.
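As a rough illustration of the kind of feedback loop I mean (not any official process; the names and numbers below are entirely made up), the check could be as simple as averaging the gap between each teacher's predictions and actual results:

```python
# Hypothetical sketch: measure how far each teacher's predictions
# drift from actual results. Grades are mapped to points (A* = 6 ... U = 0).
GRADE_POINTS = {"A*": 6, "A": 5, "B": 4, "C": 3, "D": 2, "E": 1, "U": 0}

# Example records: (teacher, predicted grade, actual grade) -- invented data.
records = [
    ("Smith", "A", "B"),
    ("Smith", "B", "B"),
    ("Jones", "A*", "A"),
    ("Jones", "B", "C"),
]

bias = {}  # teacher -> list of (predicted - actual) point differences
for teacher, predicted, actual in records:
    diff = GRADE_POINTS[predicted] - GRADE_POINTS[actual]
    bias.setdefault(teacher, []).append(diff)

for teacher, diffs in bias.items():
    avg = sum(diffs) / len(diffs)
    # A positive average means the teacher systematically over-predicts.
    print(f"{teacher}: average over-prediction of {avg:+.2f} grades")
```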
We'll only know with any certainty whether the government's model actually did a good job of identifying which people had their grades overestimated when UCAS releases data on how many people were accepted into their firm choice of University.
If the model is good, we'd expect the percentage to be similar to previous years'. If it's radically different, that would strongly suggest the government's algorithm made poor individual decisions despite being reflective of the overall big picture.
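To make that check concrete (a sketch only; every figure below is an invented placeholder, since UCAS hadn't published the real numbers at the time of writing):

```python
# Hypothetical check: compare this year's firm-choice acceptance rate
# against the historical range. All numbers here are made-up placeholders.
historical_rates = [0.74, 0.75, 0.73, 0.76, 0.74]  # previous years' rates
this_year_rate = 0.70                               # placeholder for this year

mean = sum(historical_rates) / len(historical_rates)
spread = max(historical_rates) - min(historical_rates)

# If this year falls well outside normal year-to-year variation, that
# points to the model making poor individual-level decisions even if
# the overall grade distribution looks right.
if abs(this_year_rate - mean) > spread:
    print("Radically different from previous years -- model suspect")
else:
    print("In line with previous years -- consistent with a decent model")
```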
That said, the individual choices made by the model reflect data fed in by teachers: not just the predicted grades, but also the ranking of all their students within each class that teachers provided this year.