
If you create a true artificial intelligence, are you morally justified in doing whatever you want to that creation? Can you hurt them if you want?


Imagine a hypothetical AI being of the kind we see in science fiction; truly real persons that are artificially created.
Is it a moral action for you to cause that being to suffer if they do not meet your standards? Are you morally right to do that? They are utterly your creation, they do not exist without your act of creation. Does that mean you can torture them or abuse them or subjugate them and still be morally justified?

That is the argument that theists use for god having the right to inflict suffering on humanity.
Is it still a satisfying argument when we remove the conceits we allow for god?
Here is a child in a human "zoo". Not in the 1880s American South. At the 1958 Belgian World's Fair.

There were any number of religious and philosophical grounds to condemn this as a morally outrageous act. There were any number of social and historical contexts as well. Josephine Baker, as a single example, performed all over Europe to great reception. Black intellectuals and artists were moving to Paris and other places. But no. In 1958 a little girl was kept in a cage in a zoo because she was a black African.

This is apropos, because I think we treat others less according to our philosophies and faith and more according to how we relate to others as peers. In Belgium, the realm of Leopold II, black kids in cages. A few hundred miles away in Paris, black art, music, and literature embraced. All exposed to the same Christian faith, the same Enlightenment thought.

We'd likely treat AI really only as far as our ability to relate. If it was alien, strange, wholly other-- we'd wholly exploit the AI beings. We'd find rationalizations in our faith, philosophy. We'd reimagine history to justify it.

@CopperCicada

We'd likely treat AI really only as far as our ability to relate

Almost certainly. It's the same reason everyone likes a puppy but a snake is gross or scary.
We can all agree that keeping a child in a zoo because they look different is morally repugnant. They are not an object to do with as we wish.
Same thing with the Created, i think.
@Pikachu Sure.

My personal belief is that sentience as we know it re living things can’t be created with AI.
@CopperCicada

Well i think you and i can get farther here than the other fellow i talked to about this lol.

1) What do you consider to be the important qualities of sentience? and
2) How would we be able to demonstrate those in a born person but not in an artificially created person?
@Pikachu It’s easy to think about complex systems, and think that anything at all could arise from an AI as an emergent phenomenon. So far AI seems to work with perception, pattern recognition, language processing, learning, and decision making.

But one of the markers of our consciousness is the capacity to be meta. To be self reflexive. To be aware of being, to be aware of being aware, to be reflective about our condition. That we are sick, alone, dying. Whatever. Also the capacity for abstraction, imagination.

We have seen some of these things in higher mammals, but I suspect limited forms of these things exist for many if not all living creatures. There may very well be a class of animals whose consciousness is little more than a sensory, pattern-recognition, decision-making AI.

I think we could have an AI pass a Turing test on a test set of experiences and body of language. We have seen this reported.

If we had an AI in a Turing test long enough, we’d eventually expect it to demonstrate some introspection, some existential horror, some deep ontological questions. Or some imagination, spontaneous play.

That would be huge.

The limitation with AI is that it is entirely computational. Our consciousness has some aspect of that, but it’s also part of a larger embodied experience. Neuroendocrine bits. Brain cells changing their connections, dying, doing the neuroplasticity thing. From what I can gather in my own little studies, embodied experience is probably an essential component for this meta stuff I mention.

So maybe if we can give AI meat bodies?
@CopperCicada

So maybe if we can give AI meat bodies?
I think it might be a mistake to suppose that because our brains achieve certain emergent properties in a certain way then that is the only way those things can be achieved.

If we had an AI in a Turing test long enough, we’d eventually expect it to demonstrate some introspection, some existential horror, some deep ontological questions.

So at the point where an AI is reacting in a way that we can't show is programmed any more than we could show the same behavior was programmed in a human...how do you determine it's not a person? Just on the basis that you believe the organic qualities of our physiology are necessary in some way?
@Pikachu Sure. There may be other approaches than "meat" re embodiment as we know it. But I still suspect some form of embodied experience is key.

Well. If we go by Turing's test, if we create and train an AI and it does all those quirky human things like love, jealousy, introspection, etc. Then I'd say it's a "person".
@CopperCicada

But I still suspect some form of embodied experience is key.

I wonder.
I'm no computer expert or anything approaching one but it seems that programs are becoming pretty plastic and adaptable these days. It doesn't seem out of reach that we could develop a virtual version of the physical neuroplasticity of brains.

Then I'd say it's a "person".

And that's why i start this discussion off with examples from sci-fi of created beings that really do seem to be people in every meaningful way. Our discussion was a little tangential regarding whether or not such a thing could actually be achieved, but i'm glad to see you come down on the side that the Created would be people.
Some folks really can't get past the "made of meat" part lol
@Pikachu I have a story that I have been playing with. The AI "beings" end up looking for embodied existences as it is the only way they can experience love, existential fear, introspection, the finality of death and the meaning it gives to life.

It is like a passion play. They go about searching for this and everything fails them. A bit like Marlowe's Dr. Faustus. Then they come to humans, who, with their basic goodness and naivete, really do give it to them.

Then two divergent paths form. The embodied AIs loving the humans. Coupling. Loving and working together. The transcendence of the AI beings lifting up humans. Simultaneously, the embodied AI beings having the imagination and depth to enslave us all...
@CopperCicada

Sounds like a cool story!
And sounds exactly like what we'd expect from a people: Some who value cooperation and sacrifice and some who desire power and control.