2:13am here and about to go to bed. I have the same old question about AI: how do you write laws about it?
Look at the following example:
You have a self-driving car, and you have to program it to decide, in a critical situation, whether it should hit an old person or an adult with a baby, or face some similar choice.
How does a court rule on the outcome when such a case goes to trial?
In real life you are under huge stress, and a court takes that into account. But what if you program a self-driving car in advance? Isn't the programmer deciding which life is worth more? Shouldn't the programmer be penalized for that? After all, isn't the programmer playing God?
Maybe the self-driving car should have a "random key" for critical situations, so that when it is forced to choose between killing one person or another, it decides at random.
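That "random key" idea could look something like the sketch below (Python, purely illustrative; the maneuver names and function are made up, not from any real autonomous-driving system): when every available maneuver harms someone, the car draws one at random instead of applying a ranking of whose life is worth more.

```python
import random

def choose_maneuver(unavoidable_options):
    """Pick among equally bad outcomes without any built-in preference.

    Hypothetical sketch of the 'random key': no weighting of victims,
    just a uniform random draw over the remaining maneuvers.
    """
    return random.choice(unavoidable_options)

if __name__ == "__main__":
    # Illustrative option names only.
    options = ["swerve_left", "swerve_right", "brake_straight"]
    print(choose_maneuver(options))
```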