📢Today @sarahchander and @ellajakubowska1 take over the @edri Twitter account to give you their first impressions of the #EUAI Regulation as soon as it is out! Stay tuned for the latest impressions from civil society.
We're here! We'll be watching the press conference here and sharing our thoughts:

https://audiovisual.ec.europa.eu/en/ebs/live/1 
Press conference starting! College read-out by Margrethe Vestager, Executive Vice-President of the European Commission.
Stated objective from @vestager is "strengthening the uptake of AI in Europe."
.@vestager: a "proportional, risk-based approach". High risk is the main focus of the Regulation: uses which bring new risks to our safety and health. Five "strict" obligations, but only on providers of AI.

🚨acknowledges that some uses of AI are simply unacceptable, eg social scoring
@Vestager says "We focus on remote biometric identification, where many people are being screened simultaneously. Any use is highly risky from a fundamental rights point of view". That's why it has "even stricter rules than other high risk use cases"...
...But its use by law enforcement authorities in public places is prohibited:

"There is no room for mass surveillance in our society". That's why it is "prohibited in principle [with] narrow exception... limited ... and regulated."
Now starting EDRi's first impressions of the EU #AI Reg:

(1) Firstly, that the Commission recognises the need for red lines is a major win.

🔴 Some uses of AI are simply unacceptable, compromise fundamental rights and therefore must be banned.
(2) It's become harder & harder for @EU_Commission to justify not banning biometric mass surveillance (BMS).* The fact that today’s official #EU #AIProposal has banned *SOME* types of BMS is evidence that 47k EU citizens & 60 human rights groups are being heard. HOWEVER:
(3) The ban on "remote biometric identification" applies only to law enforcement, has big exceptions & is narrow. Whilst we're happy to see some prohibition (national law required, case-by-case authorisations), this proposal does not go far enough to ban biometric mass surveillance.
(4) (*Thanks to @EDRi's analysis showing BMS is incompatible w/human rights, pressure from @ReclaimYourFace, civil society open letters, growing opposition to the practice from MEPs in 5 parties & a new poll from @GreensEFA showing that a majority of Europeans support a ban!)
(5) As a result, the proposal does not sufficiently protect people from being watched, tracked & analysed everywhere we go by authorities and corporations using pervasive public facial recognition/BMS which degrade our faces and our bodies into barcodes to be used against us.
(6) We encourage the European Parliament to improve the text by calling for a full ban, as more than 47 thousand Europeans have already demanded. Get active, sign our petition and add your voice now before they get your face! http://reclaimyourface.eu 
(7) Also banned: AI that subliminally distorts consciousness, AI that exploits vulnerabilities based on age or disability, and social scoring.
(9) Why does this matter? For most high-risk uses, *huge power* is given to AI providers to assess their own conformity with the legal requirements - see Article 43.
(10) The entire proposal governs the relationship between providers (those developing) and users (those deploying).

Where do people come in? We know AI is increasing power imbalance over those subjected to these systems👁️🚨🔴
(11) There seem to be very few mechanisms by which those directly affected or harmed by AI systems can claim redress.

This is a huge miss for civil society, discriminated groups, consumers and workers
@EnarEurope @edf @beuc @etuc_ces https://twitter.com/NathalieSmuha/status/1384559924492050435?s=20
(12) More on biometrics mass surveillance:

It's positive to see Articles 42 and 43 in the 1st leak removed, as @EDRi and MEPs called out the fact that they could be seen to actually permit BMS practices that violate fundamental rights. This is a meaningful improvement.
(13) The move of some types of BMS from the category of 'high risk' to the category of 'prohibited with exceptions' is also a cautious step forward.

Yet whilst the exceptions to the ban are more limited than vague 'public security' justifications, they are still bad because:
(14) They have a very low threshold for the sort of (suspected) criminal behaviour that would allow police to conduct BMS: a crime carrying just a 3-year sentence! And judicial/administrative authorisations are undermined by a lack of judicial independence, as @SamuelStolton noted
(15) It's also unclear how the definition of "remote" might allow other forms of harmful BMS by law enforcement. There could be a loophole if it is done up close and personal, so that the subject knows it is happening (think: drones above protesters? Police "smart" glasses?)
(16) Member States do, however, have to legislate the exceptions to the prohibition into their national laws, meaning that all real-time "remote biometric identification" in public by law enforcement is fully banned until those exceptions are made legal nationally.
(17) This still leaves a gap for law enforcement BMS if it isn't done in real time (aka "post") - think police forces using ClearviewAI. Shockingly, it's still allowed! And other authorities (schools, transport) & companies are not banned from ANY types of BMS. Very bad news!
(18) Associated biometric 'categorisation' processes such as predicting people's ethnicity and sexuality are NOT banned - despite the fact that this is an inherently unjustifiable, flawed & discriminatory practice. Inferring race or disability are not mentioned in the regulation.
(19) And where has mass surveillance gone? It was prohibited in the January leak, but this prohibition has disappeared. This is contrary to #EU #FundamentalRights.

(On a positive: at least the awful exceptions to the ban on mass surveillance are gone).
(20) The @EU_Commission should not have taken a pick-and-choose approach to which BMS is permissible and which is not. BMS is always unjustifiably harmful to our societies and needs to be outright prohibited. #ReclaimYourFace http://reclaimyourface.eu 
(22) We also cannot rely on companies to self-assess their conformity with the law. This will hinder, not help, our human rights and safety.