Currently reading OFQUAL's undated 'Data protection impact assessment: summer 2020 grading' that I obtained under FOI.

"Article 22 considerations.
What is the decision?
The decision that will affect students is the final grade issued by the exam boards to each student."

"Is the decision based solely on automated processing?
No - whilst part of the process is automated, the decision is not based solely on automated processing, as human intervention will be involved at a number of stages prior to final results being issued."<really? hmm cc @TheABB
Metadata reveals the PDF was created at 18.47 on 14 August. So after the assignment and publication of grades determined by the flawed algorithm. Hmmm
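For anyone wanting to repeat that metadata check, here is a minimal sketch (stdlib only; the file name is illustrative, not the actual FOI document) that scans a PDF's raw bytes for the /CreationDate entry in its document information dictionary. Proper tools such as exiftool or pypdf also handle XMP metadata and encodings this crude approach ignores.

```python
import re

def pdf_creation_date(path):
    """Return the /CreationDate string from a PDF's document
    information dictionary, e.g. "D:20200814184700+01'00'"
    (18.47 on 14 August 2020), or None if none is found."""
    with open(path, "rb") as f:
        data = f.read()
    # PDF date strings look like: /CreationDate (D:YYYYMMDDHHmmSS...)
    m = re.search(rb"/CreationDate\s*\(([^)]+)\)", data)
    return m.group(1).decode("ascii", "replace") if m else None

# "ofqual_dpia.pdf" is a hypothetical name for illustration only.
# print(pdf_creation_date("ofqual_dpia.pdf"))
```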
I'm waiting for the ICO to reply to my FOI, but I already have questions.

To suggest that the assignment of grades, as first determined algorithmically, did not involve automated decision-making is just a joke.

I ain't done. This ain't over.
The DPIA states "that while Ofqual will be running the process alongside the exam boards, this is as a check on the exam board outputs – Ofqual will not be issuing grades. This responsibility will rest with the exam boards."
"The model developed by Ofqual will be provided to the exam boards in the form of a set of requirements. Each exam board will process the personal data in its possession as data controller to apply the model and eventually calculate the final grade."
"Each exam board will consider whether they need to undertake their own data protection privacy impact assessment."

"it should also be highlighted that there is further human review of overall outcomes prior to results being awarded"<really?
A-level results were announced on 13 August.

OFQUAL submitted the DPIA to @ICOnews on 12 August. I'm not sure that counts as prior consultation under Art 36 of the GDPR.
DPIA: "Does the processing actually achieve your purpose?"

"Yes. Without undertaking this sort of exercise, Ofqual is unable to develop a process for moderating centre assessment grades"

"In developing the model, Ofqual will: Ensure algorithmic accountability by checking that the algorithms used are doing what they should do and not producing discriminatory, erroneous or unjustified results" <that worked out well then. Not.
Erm: "Privacy risk to individuals.

Processing could contribute to: reputational damage. Risk to Ofqual’s reputation is significant if standards are not maintained as this will impact upon the integrity of the awards" <
"any other significant economic or social disadvantage. We have not identified any other economic or social disadvantage arising from the use of the standardisation model and process than already described here." <The DPIA doesn’t identify any 🤷🏼‍♂️
You can follow @PrivacyMatters.