This morning, I read the new proposed @EU_Commission AI legislation, "Proposal for a Regulation on a European approach for Artificial Intelligence." Here are my notes on the first 50 or so pages. https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence
I'm writing a new book for @mitpress about race & gender & ability & technology. I read the proposed legislation hoping that it would be fodder for the chapter currently called "hopeful and policy stuff."
The proposal starts with some history. EU folks have been working on AI ethics issues since at least 2017.
"In 2017, the European Council called for a ‘sense of urgency to address emerging trends’ including ‘issues such as artificial intelligence …, while at the same time ensuring a high level of data protection, digital rights and ethical standards."
In 2019... "the Council further highlighted the importance of ensuring that European citizens’ rights are fully
respected and called for a review of the existing relevant legislation to make it fit for purpose for the new opportunities and challenges raised by AI."
I'm impressed by how long this took and how thorough the process was. It's not easy.
The European Parliament has already made several resolutions around AI in ethics, liability, copyright, criminal matters, education, culture, and the audio-visual sector. The current proposal cuts across categories.
This regulatory framework has 4 specific objectives:
1. ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
2. ensure legal certainty to facilitate investment and innovation in AI;
3. enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
4. facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
What does this mean for tech firms?
- AI systems must follow the law
- Less legal wrangling about what tech firms can and can't do with AI/data. This is a relief; we're two decades into the internet revolution, and it's about time we stopped treating tech like an upstart.
- Puts in place enforcement mechanisms for different industries using AI
- States a commitment to consistency across markets. This is important because it's the EU, an alliance of very different countries, and things can get fragmented.
Consistency is also important to state because the rest of the world is going to end up abiding by the EU rules as well. In the US, you know how you suddenly got a million pop-up windows asking for your consent to use data on websites? That was because of GDPR.
The EU has been at the forefront of tech regulation globally, and most of the rest of us will reap the benefits because it's easiest to build the tech for the most restrictive environment.
We're only on page 4 by this point. Speeding up. Is this why more people don't live-tweet close reads of proposed regulation?
The regulation "proposes a single future-proof definition of AI." Good luck with that! We don't really have a stable definition of AI. Did anybody see the thing with Banjo this week where they found no evidence of AI?
Bottom of pg 4 gets to the meaty stuff: this regulation sets out definitions of what is AI, creates a mechanism for regulating it, and sorts AI systems into risk tiers: unacceptable-risk (prohibited outright), high-risk, and low/minimal-risk. Only high-risk AI is closely regulated and monitored.
Low-risk AI gets only "minimal transparency obligations." Chatbots and deepfakes fall into this category: mostly you just have to disclose that people are dealing with AI.
Really important recommendation: developing AI regulatory sandboxes for testing and evaluating fairness. Please email or DM me if you want to talk more about regulatory sandboxes. I want to build one.
There will be a new European Artificial Intelligence Board. I assume this will be the body that will keep the database of high-risk AI software.
Did I mention? This regulation calls for the EU to establish a registry of high-risk AI software. If you create high-risk AI, you will have to register it with the government and will be subject to regulatory review to ensure safety and non-discrimination.
"Among those who formulated their opinion on the enforcement models, more than 50%, especially from the business associations, were in favour of a combination of an ex-ante risk self-assessment and an ex-post enforcement for high-risk AI systems."
I like that ex-ante and ex-post AI analysis are called out here. Ex-post means "based on knowledge and retrospection and being essentially objective and factual" and ex-ante means "based on assumption and prediction and being essentially subjective and estimative"
A lot of the problems we run into with AI stem from people making dramatic claims about what the software can do (ex-ante claims); the analysis afterward (ex-post) then reveals that the claims are false and/or that the AI discriminates.
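To make that concrete, here's a minimal sketch of what an ex-post audit can look like in code. Everything in it (the vendor claim, the group labels, the outcome records) is invented for illustration; the regulation doesn't prescribe anything like this.

```python
# Hypothetical ex-post audit: check a vendor's ex-ante accuracy claim
# against observed outcomes, broken out by demographic group.
# All names and numbers here are invented for illustration.
from collections import defaultdict

CLAIMED_ACCURACY = 0.97  # the vendor's ex-ante marketing claim

# (group, predicted_label, true_label) records gathered after deployment
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    measured = correct[group] / total[group]
    flag = "OK" if measured >= CLAIMED_ACCURACY else "BELOW CLAIM"
    print(f"{group}: measured {measured:.2f} vs claimed {CLAIMED_ACCURACY:.2f} -> {flag}")
```

The shape of the exercise is the point: the ex-ante number is a promise, the ex-post numbers are measurements, and they are allowed to disagree.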
Definitely check out @themarkup because they are doing the best algorithmic accountability journalism right now. I love their stories that evaluate Big Tech's claims and audit software for actual performance. Spoiler: often the software is doing a bad job.
On pg 17 we get to the recitals. (Thank you @mikarv!) These numbered bits are important for understanding the Articles at the end; the Articles are the parts that carry the most legal heft. Numbers in the next part of the thread correspond to the numbered recitals.
This regulation applies to any company that does business with anyone in the EU.
“To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country...”
Exempt: public authorities of non-EU countries, and international organizations in certain circumstances, like international agreements for law enforcement cooperation. (Is this a terrorism carve-out?) Military AI systems are also exempt. This military exception could get tricky.
“It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.” Strong words here! Prohibitions!
“Research for legitimate purposes...should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm... and such research is carried out in accordance with recognised ethical standards for scientific research.” This means scraping is not a crime! Huzzah!
"The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to
occur, should be forbidden.”
Social scoring AI is prohibited. Praise be.
No real-time biometric ID of people in public, except in 3 cases: searching for missing children, imminent threats like terrorism, and really serious crimes: the 32 criminal offences listed in Council Framework Decision 2002/584/JHA, the European Arrest Warrant decision.
There are some definitions of what counts as public, which are interesting. This is going to kneecap facial recognition in policing. Good. #CodedBias
20/ Real-time biometric ID data should be deleted and not stored indefinitely. This is good for data privacy.
This thread could easily be a talk, if anyone wants a professor to talk about what the new EU AI regulations will mean for software development.
21/ Facial recognition in public for law enforcement needs to be specifically authorized in advance by a local court, and needs to be minimal. #CodedBias
22/ The next section is about what is considered high-risk AI.
28/ “For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be able to safely operate... in complex environments.” This is about autonomous cars and delivery robots. Also medical robots and AI diagnostics.
AI diagnostic systems must work really well before being adopted: “Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate.”
Proof that AI diagnostic systems work on a range of skin tones and ethnicities will be important as well.
33/ Facial recognition systems are considered high-risk and need human oversight. Specific requirements here for both real-time and post remote biometric ID systems.
I tend to think first about facial recognition systems, but biometric ID also applies to fingerprint scans, palm scans, and retinal scans. I hate palm scans, btw. I don't know why, I just think they're weird and gross.
34/ Critical infrastructure (like the power grid) needs human oversight if using AI. This is good; we won't have an EU blackout if AI fails. I am a big fan of old-fashioned backup systems for critical infrastructure.
37/ Creditworthiness scoring and access to public benefits are high-risk AI. So is AI used to decide dispatch for ambulances or firetrucks. I didn't know AI was being used or proposed for prioritizing emergency response. Bad idea to use AI for this.
38/ Law enforcement AI is high-risk, especially “emotion detection” and polygraph and recidivism prediction. Great! These types of systems are often hotbeds of snake oil and bias.
39/ Border control AI is high-risk
40/ Judicial AI is high-risk
41/ Any AI classified as high-risk is not necessarily lawful. I like that this is specifically stated. In other words: 'Even though it's listed here as a kind of software that exists in the world, we're not saying it's OK for the software to do the thing it's doing.'
42/ Mitigating AI risks is important. General requirements are laid out here.
44/ Training data should be representative. Yay, Joy Buolamwini and Timnit Gebru and Deb Raji! This is a direct reference to your #GenderShades work! #CodedBias #CodedGaze @AJLUnited
Bias monitoring, detection, and correction should be in place for high-risk AI systems. This is important. All of the work from FAccT is about to get really popular.
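For the curious: a basic bias-monitoring check can be startlingly simple. Here's a minimal sketch, assuming a deployed system whose decisions are logged per demographic group. The 0.8 cutoff is the US "four-fifths rule" heuristic from employment law, not anything this regulation mandates, and the data is invented.

```python
# Minimal bias-monitoring sketch: compare positive-decision rates across
# groups using a disparate-impact ratio. The 0.8 threshold is the US
# "four-fifths rule" heuristic, not a requirement of the EU proposal.
from collections import defaultdict

# (group, decision) pairs logged from a deployed system; invented data
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

positives = defaultdict(int)
counts = defaultdict(int)
for group, decision in decisions:
    counts[group] += 1
    positives[group] += decision

rates = {g: positives[g] / counts[g] for g in counts}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best if best else 0.0
    status = "ALERT" if ratio < 0.8 else "ok"
    print(f"{group}: positive rate {rate:.2f}, ratio vs best group {ratio:.2f} -> {status}")
```

Real monitoring is much harder than this (you need outcome data, intersectional groups, and statistical care), but the regulation is asking providers to have *some* version of this loop running.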
45/ High quality datasets in different fields should exist and be shared. Authorities in each field should support this effort.
I think there is an @AINowInstitute project relevant here, yes?
46/ Required: information on how high-risk AI systems were developed & how they perform over time; sufficient documentation in order to assess compliance.
This means good documentation is required. I am so happy! Technical writers are about to get a lot more jobs. It's about time. Software documentation practices in the agile era are a disaster.
"Such information should include the general characteristics, capabilities & limitations of the
system, algorithms, data, training, testing & validation processes used as well as documentation on the relevant risk management system."
"The technical documentation should be kept up to date." The documentation must be kept up to date! See what they did there? They accounted for human nature! Nice.
Recital 69 is a code of conduct requirement. 😂
76/ A European Artificial Intelligence Board should be established and will issue opinions, recommendations, advice or guidance. I saw 10 FTE mentioned in the document; does this mean there will be 10 people on the Board, or does it mean Board + staff = 10?
77/ EU member states need to have their own competent national AI authorities. Not sure how this will play out. Those North Macedonian teens making fake news already seem like a handful for any national authority.
78/ High-risk AI makers should have a post-market monitoring system in place. This is like in the US drug market, where pharma firms are required to do post-market monitoring to catch problems that arise. Good idea.
80/ Financial markets, already heavily regulated, continue to be heavily regulated.
81/ Makers of non-high-risk AI are encouraged to make a code of conduct and voluntarily apply the rules required for high-risk AI. Not sure this will happen.
83/ Requirement of confidentiality in assessing AI compliance. This preserves trade secrets. This seems to be a reference to Jack Balkin's work on information fiduciaries.
Only 6 more to go. Losing steam.