The @EU_Commission's final proposal for an Artificial Intelligence Act is here. Some examples:

AI systems are prohibited if they violate human rights, perform general social scoring for authorities, or use live remote biometrics in public spaces for policing. 👀

https://digital-strategy.ec.europa.eu/en/library/proposal-regulation-european-approach-artificial-intelligence
There are compulsory transparency obligations for AI systems that
(i) interact with humans
(ii) 'detect' emotions or interpret social categories based on biometric data
(iii) generate or manipulate content, e.g. deep fakes
A quote: "persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system."
That's pretty weak. You can imagine the pop-ups now: 'If you want this service, you will be scanned in these ways.' (GDPR may help here)
Also, the ban on police using remote biometric surveillance is riddled with gaps, and is weaker than earlier drafts. E.g. it only covers real-time uses, so it may not cover police using Clearview AI, or getting Ring footage after the fact.
There's a big role for datasheets here. For high-risk AI, the regs require dataset documentation: e.g. provenance of datasets, what's in them, and how the data was obtained, selected, labelled, and cleaned. @timnitGebru @jamiemorgenste1 @brianavecchione @jennwvaughan @hannawallach @haldaume3 ✍️
Ensuring datasets are "relevant, representative, and free from errors" would be quite a shift from the status quo. Many of the common datasets we've looked at in detail would not meet this bar.
Anyhow, lots to read here - over 100 pages to digest.