Longer analysis coming when @EU_Commission publishes the final version of the AI proposal later today, but I already want to flag issues with the treatment of 'biometric categorisation systems'

Current definition is: "an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data"

This lumps together a lot of very different types of inferences, some of which ML systems genuinely can make from biometric data.
Grouping people according to hair colour, for instance, is not an issue.

But categorising people according to sexual or political orientation BASED ON BIOMETRIC DATA is phrenology/physiognomy.

You can't casually throw those into the same list.

There might be some hope in Art 10(1)(d), which demands:
"the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;"

But so far this is a weak treatment of what @katecrawford called "the phrenological impulse".

It also seems totally chill with using AI to categorise people according to 'ethnic origin' based on biometric data, i.e. a full-scale reification of ethnic profiling.

We're going to need much stronger measures to stop this dangerous AI pseudoscience.

With @AllOut, @farbandish & @ReclaimYourFace, we've specifically called out the dangers of using AI systems to make inferences about sex, gender, and sexual orientation.

We need bans on systems that make these inferences about us based on biometric data, not just risk mitigation.

If you want to add your voice to the call to ban dangerous inferences like this, and biometric mass surveillance in general, sign our official EU petition here:
https://reclaimyourface.eu/

We need to let the @EU_Commission know that these applications cannot be allowed