One fairly major thing that stands out in the EC's proposal for an AI Act: systems that create deepfakes, or otherwise generate or modify images or video of real people, must declare this to the user in a clear and understandable way - and generated content should be auto-labelled.
They also include a note about informing users when they're exposed to biometric categorisation. Elsewhere in the proposal they state that AI for real-time biometrics, as well as occupational uses such as hiring or firing, should be classed as 'high-risk'.
High-risk AI would be subject to a number of requirements under the proposal, including registration in a database, conformity checks before going live and after any change, and clear rules on communication with users, control by human operators, and transparency in training.
Surprisingly, though, they recommend that high-risk AI systems can skip these checks for exceptional reasons of public safety. This would essentially give a free pass to, for example, Palantir's aggressive moves on EU governments during the pandemic, which is bad.
You can follow @mtrc.