If you create a true artificial intelligence, are you morally justified in doing whatever you want to that creation? Can you hurt them if you want?
Imagine a hypothetical AI being of the kind we see in science fiction: truly real persons who are artificially created. Is it a moral action to cause that being to suffer if they do not meet your standards? Are you morally right to do that? They are utterly your creation; they would not exist without your act of creation. Does that mean you can torture them or abuse them or subjugate them and still be morally justified?
That is the argument that theists use for god having the right to inflict suffering on humanity. Is it still a satisfying argument when we remove the conceits we allow for god?
Here is a child in a human "zoo". Not in the 1880s American South. At the 1958 Brussels World's Fair.
There were any number of religious and philosophical grounds for condemning this as a morally outrageous act. There were any number of social and historical contexts as well. Josephine Baker, as a single example, performed all over Europe to great acclaim. Black intellectuals and artists were moving to Paris and other places. But no. In 1958 a little girl was kept in a cage in a zoo because she was a black African.
This is apropos, because I think we treat others less according to our philosophies and faith and more according to how we relate to them as peers. In Belgium, the realm of Leopold II, black kids in cages. A few hundred miles away in Paris, black art, music, and literature embraced. All exposed to the same Christian faith and Enlightenment thought.
We'd likely treat AI well only as far as our ability to relate to it. If it were alien, strange, wholly other, we'd wholly exploit the AI beings. We'd find rationalizations in our faith and philosophy. We'd reimagine history to justify it.
But I still suspect some form of embodied experience is key.
I wonder. I'm no computer expert or anything approaching one, but it seems that programs are becoming pretty plastic and adaptable these days. It doesn't seem out of reach that we could develop a virtual version of the physical neuroplasticity of brains.
Then I'd say it's a "person".
And that's why I start this discussion off with examples from sci-fi of created beings that really do seem to be people in every meaningful way. Our discussion was a little tangential regarding whether or not such a thing could actually be achieved, but I'm glad to see you come down on the side that the Created would be people. Some folks really can't get past the "made of meat" part lol
@Pikachu I have a story that I have been playing with. The AI "beings" end up looking for embodied existences as it is the only way they can experience love, existential fear, introspection, the finality of death and the meaning it gives to life.
It is like a passion play. They go about searching for this and everything fails them. A bit like Marlowe's Dr. Faustus. Then they come to humans, who, with their basic goodness and naivete, really do give it to them.
Then two divergent paths form. The embodied AIs loving the humans. Coupling. Loving and working together. The transcendence of the AI beings lifting up humans. Simultaneously, the embodied AI beings having the imagination and depth to enslave us all...
Sounds like a cool story! And sounds exactly like what we'd expect from a people: Some who value cooperation and sacrifice and some who desire power and control.
This is a question that society will eventually have to answer. I think once an AI can be established as being able to think and reason on its own, we have to recognize it as a person.
If it's not organically alive then it's just another piece of programming. I'm not talking about in vitro fertilization, because the eggs & sperm are already alive. You can't just pour the basic elements that make up a living being into a baking dish & get a living being. I reserve every right to terminate "HAL" or the big computer from "I, Robot".
@Pikachu It's the most important part to me. If you make up something via your computer then it's your thing. Unless you are part of an agreement whereby someone else has the rights to it. Still just a thing.
Really? THE most important quality of being a person is not being made by a human? Outside of the context of this question, if someone asked you what is THE most important quality that makes a person a person, would you have answered "not being made by humans"? I rather doubt that.
But we can leave that aside for a moment. What qualities do you consider important to personhood?
I am going to turn the tables on you, and propose a different scenario.
We build AI people, and we do a Turing test on them, judging emergent sentience by whether the AI beings can hold human-like conversations, and they start wondering about metaphysical truths. Like the existence of God. Or the possibility of enlightenment.
@Pikachu So what if you created multiple conscious AI creatures, but your Skynet decided its mission was to damage and destroy all the others that didn't seek to harm anything? Would you feel ultimately obligated to step in?
Sure, if you build it you can destroy it. Legally you can have your poodle for dinner, and not as a guest, but why would you? A true artificial intelligence may be yours to destroy, but what impact would that have on you? Indulging your sadistic nature on an unwilling victim would certainly do you harm as well.
@Pikachu I can only pound the nail so many times. You’re obviously convinced of your initial position and nothing I say makes any difference. What you want is an echo chamber not a sounding board.
Well I don't think that's very fair. But maybe I'm not paying attention, so before you go teach a horse algebra, answer me this:
Do you feel that you have given an explanation, not a statement but an explanation, for why an AI may not possess the qualities you think are important to personhood, or, if they do, why they could still not be considered a person?
All the scientists involved in AI development say so, but when you ask them whether their lab is the one that hasn't created it yet, those scientists shut up.
If it already exists, it will not be public information.
YOUR CHILD 👶 IS YOUR CREATION. Does that mean you can torture them or abuse them or subjugate them and still be morally justified?
Are you saying that only because you were the researcher/scientist/programmer who did the thing that turned it ON, and since there's an OFF 📴 switch, you can justifiably turn OFF a self-aware 👀💭 intelligence that is a conscious 💻 being, just as you are a conscious 🧠 being?
This doesn't answer your question, but why make one at all, if you can achieve the same thing without creating personhood? I can't think of any reason to create an artificially sentient being other than someone's god complex. It's not useful.
To answer your question: if it has personhood, then treat it like a person. If it doesn't have personhood, then I guess it doesn't matter. Although I've just pushed the answer onto the definition of personhood instead of actually answering the question.