Per Art. 1, the draft reg covers:
- placing on the market
- putting into service
- use of AI systems in the Union
Does this leave out the training of AI systems? Possibly. But when they are trained with personal data, no worries: the GDPR applies.
2/
Other rules in scope of the regulation:
- prohibitions of certain AI systems (!)
- requirements for high-risk AI systems
- transparency rules for AI intended to interact with people
- rules on market monitoring and surveillance. 3/
The draft reg is broadly extraterritorial:
- it applies to providers placing on the market or putting into service AI systems in the EU, irrespective of where in the world they are established
- it applies to providers & users of AI systems located in a third country, where the output produced by the system is used in the EU 4/
Important: high risk AI systems that are safety components of products and systems (e.g. in transportation) are outside the scope. They are regulated by other acts, and only Art. 84 of the draft proposal applies to them, concerning the evaluation and review of what counts as high risk 5/
AI systems for military purposes are outside the scope, as expected.
And surprise❗️it specifically excludes public authorities in a third country and international organizations that would otherwise fall under the extraterritorial rules, if they use the AI systems as part of international agreements 6/
Intermediary liability rules from the current eCommerce Directive, to be replaced by the #DSA, will prevail if they conflict with the liability rules in the draft AI reg. 7/
The definition of an AI system does not seem to have changed from the leaked version we saw. It focuses on software and refers to the techniques in Annex I, which cover a broad spectrum from supervised & unsupervised ML to statistical approaches & search & optimization methods ...8/
... and has as core parts:
- human-defined objectives
- and generating outputs "such as" (so no closed list) content, predictions, recommendations, or decisions that influence the environment they interact with (like the results of an election? asking for a friend). 9/
So the draft #AIReg is also an encyclopedia of the AI universe: it has 44 definitions! It's like we need an AI to help us process all this content 😅 Some examples: "publicly accessible space", "biometric data" (& it's different from the GDPR's), "input data", "training data" 10/
There are definitions for "substantial modification", "performance of an AI system", "withdrawal of an AI system" - too much to go through here. Let's focus on the meaty part: it distinguishes between a "remote biometric identification system" and a "real-time RBIS". Hmm 11/
So the difference between an RBIS and a real-time RBIS hinges on the phrase "without a significant delay", which refers to the time between the capture of the biometric data, the comparison with a central database, and the identification of the person. A "'post' RBIS", meanwhile, is simply "not a real-time RBIS" 12/
And now the big reveal: the list of "AI practices" that are prohibited.
1) AI that deploys "subliminal techniques" to manipulate behavior in a manner that "causes or is likely to cause" physical or psychological harm to self or others. 13/
2) AI that exploits vulnerabilities of a group due to their age or disability in order to manipulate ("materially distort") the behavior of a person in a manner that causes or is likely to cause physical or psychological harm to self or others 14/
3) AI used by public authorities or on their behalf for social scoring, where i) the social scoring leads to detrimental or unfavorable treatment in social contexts different from the contexts where the data was collected, or ii) the same kind of treatment, where it is unjustified or disproportionate. ‼️ 15/
Wait, what? Doesn't that seem too narrow? So social scoring based on "evaluation or classification of the trustworthiness of individuals" based on predicted personal characteristics and their social behavior will be allowed in all contexts other than those above?! 16/
4) The use of "real time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement", with a bunch of exceptions, detailed over several paragraphs.
That is all. This is the whole list of banned AI systems.
17/
The draft reg continues with rules for high risk AI systems, which are defined as either AI listed in Annex III, or AI intended to be used as a safety component of a product that is required to undergo a third-party conformity assessment and is covered by the legislation listed in Annex II 18/
The Annex listing High Risk AI systems has some interesting parts, like:
- AI used for assessing students and test takers for admission to educational institutions (IB scandal?)
- AI used in employment, workers management and access to self-employment (gig economy?) 19/
I'm sorry, I keep thinking of the short list of banned AI systems. Could it be that, at this point, Art. 22 GDPR contains a broader prohibition, covering any software that meets the AI system definition and supports automated decision-making with legal or similarly significant effects? /20
Going back to examples of high risk AI from Annex III:
- credit scoring
- eligibility for social benefits
- AI to be used for law enforcement
- AI for administration of justice and democratic processes
- AI for migration, asylum & border control
Pretty exhaustive. 21/
So what are the rules high risk AI systems need to follow?
- establish a "risk management system" to run through the entire lifecycle of the AI system (Art. 9)
- training, validation & testing data must be subject to "appropriate data governance & management practices" (Art. 10) 22/
Interestingly, Art. 10(3) requires training, validation and testing data to be "relevant, representative, free of errors and complete". I am no computer scientist, but I imagine this describes a dream dataset. 23/
Here is something of interest for GDPRheads like me: these datasets "shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used" aaaaand: 24/
Processing of sensitive personal data is allowed "to the extent that it is strictly necessary for the purpose of ensuring bias monitoring, detection and correction in relation to the high-risk AI", subject to following safeguards: 25/
i) "appropriate safeguards" for the fundamental rights and freedoms of natural persons, including
ii) technical limitations on the re-use of the data
iii) use of state-of-the-art security and privacy preserving measures, such as.... 26/
pseudonymisation or encryption "where anonymisation may significantly affect the purpose pursued" (which is countering bias, I think) - if you want to dig deeper into this, Dr. Heng Xu has written about the statistical disparities that anonymizing sensitive datasets can cause. 27/
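To illustrate why full anonymization can defeat the purpose (a toy sketch of my own, not anything in the draft, and every name in it is hypothetical): detecting bias in a model's outputs typically means comparing outcome rates across protected groups, which requires the sensitive attribute to still be present in some, possibly pseudonymized, form:

```python
# Toy sketch (mine, not the draft reg's): why bias monitoring may need
# access to the sensitive attribute.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Gap between the highest and lowest positive-decision rate per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# 0/1 model outputs alongside the protected attribute for each person:
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5 -> disparity is visible

# Strip the group labels (i.e. anonymize the attribute away) and the same
# check becomes uncomputable: there is nothing left to compare across.
```

Pseudonymization keeps such a check possible while reducing identifiability; dropping the attribute entirely does not.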
Going back to the list of obligations for high risk AI:
- technical documentation (Art. 11)
- record-keeping (Art. 12)
- transparency & provision of information to users (Art. 13)
- human oversight (Art. 14)
- accuracy, robustness & cybersecurity (Art. 15) 28/
If you are wondering who bears these obligations: good question. The provisions use a broad passive voice, e.g. "high risk AI systems shall be designed and developed" taking into account x or y. So, whoever is designing and developing them.
29/
The transparency and human oversight articles are very complex (read: long) - I will file them for later enjoyment. I think some of the most interesting provisions are here and possibly a justification of why the transparency rules in the GDPR were indeed not enough. 30/
The entire following chapter contains a lot of obligations specifically targeted at "providers and users of high risk AI systems", from quality management to drawing up technical documentation and conformity assessments. One point of note for my US-based friends: /31
Providers established outside the EU shall appoint, by written mandate, an authorized representative established in the EU, who has specific obligations, including keeping a copy of the declarations of conformity and such. The representative does not seem to bear liability, though, just like in the GDPR /32
As one can easily imagine, I am completely lost in the following two chapters, which are all about notification of conformity assessments, certification, registration, CE marking of conformity. This is all still the Title on High Risk AI. We are at Art. 51. /33
Moving on from High Risk AI to "Certain AI systems", under a new Title of the draft reg: Transparency obligations for certain AI systems. Only one article in this Title:
- AI for emotion recognition, deep fakes, and AI intended to interact with people all carry clear transparency obligations 34/
Title V has "Measures in support of Innovation", starting with AI regulatory sandboxes. (The UK may have exited, but some legacy remains - I'm thinking here of the ICO's sandbox project.) Interestingly, the @EU_EDPS is specifically nominated as a possible organizer of such sandboxes /35
The other possible organizers are "one or more Member States competent authorities". These could be any authorities, not necessarily the DPAs. Member States will have to nominate them. But when personal data is involved, DPAs will need to be "associated" with the operation /36
Art. 54 contains rules on the "further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox". Art 55 encourages States to prioritize start-ups and SMEs for these sandboxes. /37
Finally, the governance of this behemoth. Yet another board is created: the European Artificial Intelligence Board (EAIB). We will need an AI to keep track of all the Boards too, I think. The EAIB "shall provide advice and assistance to the Commission". So it does not enforce. /38
This new Board is set up similarly to the EDPB, to be composed of the national supervisory authorities, each represented by their head or an equivalent high-level official, plus the @EU_EDPS. The big difference is that the EAIB will be chaired by the Commission, and its Secretariat will also be provided by the Commission /39
At national level, it seems there will be several "national competent authorities" to be established or designated, as well as one "national supervisory authority" chosen from among them, which will act as market surveillance authority & notifying authority /40
(Where will the EU come up with so many AI specialists, ethicists, administrative law specialists, and data protection lawyers and experts, I wonder? How do you staff these authorities?) There is an obligation to "ensure national authorities are provided with adequate resources" /41
The draft reg proposes the creation of an EU database for stand-alone high risk AI systems, to be administered by the European Commission. Title VIII deals with post-market monitoring, sharing information on malfunctioning and market surveillance. /42
As for sanctions and penalties, Member States are left to lay down rules within their administrative procedure systems. However, the level of penalties is set in the draft reg: 30 million EUR or 6% of global annual turnover for non-compliance with the prohibitions in Art. 5 & /43
the data quality and data governance requirements for training, validation and testing data in Art. 10 for high risk AI systems. To be clear, that is "UP TO" 30 million EUR or 6% of the global annual turnover for the preceding year. /44
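(A quick worked example - my arithmetic, not the draft's, and assuming that, as under the GDPR, the higher of the two caps applies: a company with a global annual turnover of EUR 2 billion would face a ceiling of 6% x 2 billion = EUR 120 million, so for large companies it is the percentage, not the EUR 30 million figure, that bites.)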
Smaller penalties apply for non-compliance with the rest of the obligations in the draft regulation: up to 20 million EUR or 4%.
Novelty: administrative fines (up to 500k EUR) are proposed for EU institutions, agencies and bodies that do not follow these rules! To be enforced by the @EU_EDPS /45
The entry into force is envisioned with no grace period (the GDPR had 2 years), so "on the 20th day following that of its publication". But a loooong process towards adoption starts today. Thank you all for accompanying me on this read! 46/END