Today #Australia is working on the use of #AI to detect abhorrent images so that humans do not need to be exposed to them. I do not see how this is a positive step forward for victims. Many discussions on "data ownership restrictions" etc. How can removing the human element be good? https://twitter.com/Data61news/status/1258184729494384640
There is just too much academic and industry research showing the fallacies in this. Another concern is that this could be used as a further tool of "police discretion": the AI didn't pick up anything bad? Or the AI ranked it as bad? Seems poorly thought out with regard to the greater impacts.
A bigger concern is:

"Within the next 12 months, CSIRO’s Data61 plans to equip Data Airlock with cryptography and differential-privacy algorithms to improve its usability in domains including healthcare."

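For context on what that plan implies: differential privacy typically works by adding calibrated noise to the results of queries over sensitive data, so individual records cannot be inferred. Below is a minimal, generic sketch of the standard Laplace mechanism in Python; it is a textbook illustration only, and the names (dp_count, epsilon) are my own, since Data Airlock's actual code is not public.

    # Minimal sketch of epsilon-differential privacy via the Laplace mechanism.
    # Generic illustration only; Data Airlock's real design is not published.
    import numpy as np

    def dp_count(values, predicate, epsilon=1.0):
        """Release a count with epsilon-DP. A counting query has
        sensitivity 1, so Laplace noise of scale 1/epsilon suffices."""
        true_count = sum(1 for v in values if predicate(v))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Example: privately count how many records were flagged.
    records = [0, 1, 1, 0, 1]
    print(dp_count(records, lambda v: v == 1, epsilon=0.5))

Smaller epsilon means more noise and stronger privacy; that trade-off is exactly why independent, open review of such systems matters.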
Taxpayer-funded research: the code should be open source, for starters.
Why are @CSIRO and @Data61news creating an AI for abuse-image detection that can then be rolled out to healthcare once stronger encryption is applied? And why does taxpayer-funded research turn into commercial code products, when tax money paid for it and the code could help others?
Why would you not join an open-source partnership with bodies like @facebook and @twitter and @google for AI abuse detection in images, if you felt that it worked and would benefit people? I am lost for words as to why this is a closed-source product with no vendor-level input, etc.