
If you create a true artificial intelligence, are you morally justified in doing whatever you want to that creation? Can you hurt them if you want?


Imagine a hypothetical AI of the kind we see in science fiction: a genuinely real person who is artificially created.
Is it a moral action for you to cause that being to suffer if they do not meet your standards? Are you morally right to do that? They are utterly your creation; they would not exist without your act of creation. Does that mean you can torture, abuse, or subjugate them and still be morally justified?

That is the argument theists use for God's right to inflict suffering on humanity.
Is it still a satisfying argument once we remove the conceits we allow for God?
I am going to turn the tables on you, and propose a different scenario.

We build AI people, and we give them a Turing test — judging emergent sentience by their ability to hold human-like conversations — and they start wondering about metaphysical truths. Like the existence of God. Or the possibility of enlightenment.

What then?
@CopperCicada

What then?

Then we have to start considering seriously what it means to be a person.