ASWG Whitepaper on Facial Recognition released. Increasing support, but also polarisation among Australians.
Key recs:
- improved digital literacy education
- more responsible reporting
- consultative legislation
- forbearance on any rollouts
https://www.monash.edu/__data/assets/pdf_file/0011/2211599/Facial-Recognition-Whitepaper-Monash,-ASWG.pdf
Generally speaking, support was split roughly 50/50, though many respondents saw the underlying databases as inaccurate, insecure, unsafe, and an invasion of privacy. For these reasons, we argue that we need a more digitally literate society in Australia.
Participants are generally unaware of implicit consent, but use the language of ‘rights to personal data’ to describe their normative expectations. We suggest these expectations are being ignored in the rollout of facial recognition services.
49% of polled individuals agreed that use of facial recognition technology to identify people in public places is an invasion of privacy. Despite that, 61% said that facial recognition technology could be an important tool for improving public safety.
37% of people agreed (or strongly agreed) that the risks of using facial recognition technology outweigh the benefits, while 25% disagreed (or strongly disagreed). This is despite the fact that 29%-32% thought it important that the technology be reliable or accurate.
Polling survey respondents, we found that while 9 in 10 Australians have heard of facial recognition technology, less than 1 in 10 feel they know a lot about it. This is self-reported, so the real proportion who ‘know a lot about it’ is likely lower.
We polled on various 'pro-social' uses to explore alternative use cases, including detecting sex offenders near schools, screening dating-app users for domestic violence (DV) convictions, and screening weapons purchases.
We also polled on various workplace uses - using FR to verify people clocking in and out of work, to screen job applications, and to monitor mood. In schools, we asked about uses including monitoring attention, attendance, and mood.
While we didn't say so in the survey, many of these uses have already been implemented. Respondents nonetheless saw them as future developments that would need to be properly regulated, unaware that some are already in place.
We also note that a number of data-sharing agreements will affect access to these already-existing databases, especially where data collection continues without any clear, direct, or beneficial purpose.
For these reasons, we argue that we need better regulation, better awareness of regulation, and a more consultative approach. Most importantly, we need to ask 'do we need this?' In many cases, we don't.
We can only have good tech when the public knows how it works and what it does - that understanding makes people safer and gives them agency and freedom in their experience of school, work, home, and the city.
3 in 5 (61%) feel their personal data is not safe and secure, even more (64%) feel it is not safe from hacking and cybertheft, and about 1 in 3 feel that racial bias is an issue.
We also note that, in the context of FR, people become less supportive the closer to home it gets. One likely interpretation of this is that 'facial recognition is for other people, not for me'. People are concerned about their safety, but don't want to be tracked themselves.