Today is #PhilosophyFriday!

I want to talk about the ethical implications of using AI, but this is a HUGE topic. So today I'll focus on one specific issue:

Will super-intelligent beings be nice to us?

Would they destroy us, adopt us, teach us, or ignore us?

Being a dick would be kind of a Great Filter. Violent civilizations would not survive long enough to conquer the Galaxy.

In the opposite view, being a dick would be evolutionarily inevitable.

Or they could have moral values completely incomparable to ours, unclassifiable as good or bad.

Such an encounter has never ended well for the less advanced civilization.
But who's to say there isn't a technological level after which being good is a condition for further progress?
We may very well be on the brink of this transition right now.

What do you think?

Why is this relevant? If we invent super-intelligent AI someday, they could be like aliens to us.
If goodness is a necessary condition of super-intelligence, we have nothing to worry about.
Otherwise, we may already have little time left to solve this problem.


And if there is even a small chance that we end up being destroyed by our god-children, shouldn't we be paying more attention to this problem?
There are lots of pressing issues with AI today, though. Deciding what to prioritize is also important.

As usual, if you like this topic, reply in this thread or @ me at any time. Feel free to like and retweet if you think someone else could benefit from knowing this stuff.
Read this thread online at https://apiad.net/tweetstorms/philosophyfriday-orth