Today #Australia is working on using #AI to detect abhorrent images so humans don't need to be exposed to them. I do not see how this is a positive step forward for victims. There are many discussions on "data ownership restrictions" etc. How can removing the human element be good? https://twitter.com/Data61news/status/1258184729494384640
There is just too much academic and industry research showing the fallacies in this. Another concern is that this could become a further tool of "police discretion": what happens when the AI didn't pick up anything bad, or when the AI ranked something as bad? It seems poorly thought out with respect to the broader impacts.
A bigger concern is this:
"Within the next 12 months, CSIROâs Data61 plans to equip Data Airlock with cryptography and differential-privacy algorithms to improve its usability in domains including healthcare."
Taxpayer-funded research: the code should be open source, for starters.
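For anyone wondering what a "differential-privacy algorithm" actually looks like, here is a minimal sketch of the classic Laplace mechanism. This is not from Data Airlock (which isn't public); the function name and the epsilon value are illustrative only.

```python
import numpy as np

def laplace_mechanism(true_count: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private version of a numeric query result.

    Adds Laplace noise scaled to sensitivity/epsilon, the standard
    construction for epsilon-differential privacy. Smaller epsilon
    means stronger privacy but a noisier answer.
    """
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# Example: privately release how many records matched a query.
# A counting query changes by at most 1 when one person's data is
# added or removed, so its sensitivity is 1.
private_count = laplace_mechanism(true_count=42, sensitivity=1.0, epsilon=0.5)
print(private_count)
```

The point of publishing code like this openly is that the noise mechanism and the epsilon budget can be audited, which is exactly what you want from taxpayer-funded privacy tooling.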
"Within the next 12 months, CSIROâs Data61 plans to equip Data Airlock with cryptography and differential-privacy algorithms to improve its usability in domains including healthcare."
Tax Payer funded research the code should be OpenSource for starters
Why are @CSIRO and @Data61news creating an AI for abuse imagery that can then be rolled out to healthcare once stronger encryption is applied? And why does taxpayer-funded research turn into commercial code products, when tax money paid for it and the code could help others?