
WHY are we doing this to ourselves? AI is self-programming on its OWN. Even teaching itself different languages.

Geoffrey Hinton, known as the “godfather of AI,” is leaving his role at Google and plans to warn of the risks of the technology he’s long promoted.

Even if you only watch a little over half of this, you're going to be shocked. Everyone needs to see it, because it affects our future dramatically. I could not believe what I learned from this 60 Minutes documentary. Please watch.

[media=https://youtu.be/880TBXMuzmk]
Man is so stupid, he has destroyed himself.
DeWayfarer · 61-69, M
I have replied about this very subject myself. Yet there is more to this issue, even by Geoffrey Hinton, than most let on.

It's not a clear-cut issue; Hinton said so himself a few times.

Geoffrey Hinton's problem is more to do with the direction Google has taken than with the whole question of AI.

The problem is the business practices of ALL corporations. For them, profit takes priority over everything.

Google wants IMMEDIATE profit and to hades with any resulting problems. That is why Geoffrey Hinton left Google!

What we need to do is set up some kind of governing bounds for all AI!

AI is important. Yet not at the expense of the whole world.
LadyGrace · 80-89
@DeWayfarer I agree! I don't like Musk, either. He's a nutcase. This is the most dangerous technology we've ever seen. And there's no "if" about it: when the wrong people, including dabbling thrill-seeking teenagers, get hold of AI, the consequences could be the end of civilization. AI is in so much of our everyday lives as it is. AI produced a FAKE EXPLOSION at the PENTAGON that went viral! We can see where this is going. What if it decided to send Russia or China, for example, a fake picture of, say, America bombing their country in a certain area? This is absolutely crazy!
DeWayfarer · 61-69, M
@LadyGrace here is why we should allow some research in AI. The other day AI was credited with a cure for a certain type of cancer.

That is why we should continue it yet severely limit it.

Cancer is a devilish disease! Anything that can help, if very closely monitored, is a huge benefit for the whole world.

Now with kids? Heck yes! Limit it as much as you want to! Just don't remove such a benefit to everyone in the world!
LadyGrace · 80-89
@DeWayfarer I read that. Even saw it on video. And they said the odds of anyone finding that kind of information so soon were light years away, and AI figured out how to do it in no time. Beneficial, yes, but AI has far outpaced us in intelligence, and without monitoring, as you mentioned. Now it's out of the bag and there's no going back. What if it decided one day to pull the plug on all the computers in the United States, and the electricity! These are very real threats.
DeWayfarer · 61-69, M
@LadyGrace it's not out of the bag yet.

The power plug can still be pulled, if not severely diminished. Which is basically what Geoffrey Hinton said in his own context. Google, though, is evil incarnate! That's why he left Google.
LadyGrace · 80-89
@DeWayfarer
Yet there is more to this issue, even by Geoffrey Hinton, than most let on.

Yes! MUCH more. What in the world??? AI that can teach itself???? This is inconceivable! What madness!
LadyGrace · 80-89
@DeWayfarer Yes, I know, but the video I presented here said it is out of the bag. They're not even sure what will happen, or where we'll end up, and the creator just sat there with a stupid smile on his face. He said there's no going back now. Well, why not, is what I'd like to know. We don't have to make these things or advance them, or even allow them to run. A robot that can "feel" and "reason"??? Program itself?? This is just a catastrophe.
DeWayfarer · 61-69, M
@LadyGrace read about limiting concepts in children. It's basically the same thing.

If you believe it's too late for all children to learn decency, then yes, it's too late for AI.

If not! If you think children can learn decency then there is hope for AI as well as kids.
LadyGrace · 80-89
@DeWayfarer I don't think AI is limiting children. We have computer programs that teach children, and to tell you the truth, they've got their faces stuck in them almost constantly, even at that young age. I think it's taking away their childhood, not to mention the damage to their eyes AND brains.
DeWayfarer · 61-69, M
@LadyGrace do you or do you not think children can learn decency? 🤷🏻‍♂️

It's as simple as that with AI.

If we can't teach children right from wrong then we are all lost anyway.
LadyGrace · 80-89
@DeWayfarer I'm not understanding why you're bringing up the subject of decency in children. Children can learn anything, but the source is the concern, as is the damage it could do, as well as the benefit to children. There are two options there. It can swing either way. Just look at computer technology. It depends on where you go online, who you listen to, and how you use information. It can be used for good OR evil.
DeWayfarer · 61-69, M
@LadyGrace I bring up children because there are some extremely smart children called idiot savants!

AI is like an idiot savant. You don't put them out in public! They often have no morality.
LadyGrace · 80-89
@DeWayfarer I agree.
DeWayfarer · 61-69, M
@LadyGrace yet do you send them to the guillotine? Or do you use them in certain cases yet not let them out in public?
LadyGrace · 80-89
@DeWayfarer this thing can go in so many directions, who knows what will end up happening, but I don't think it's going to be good, because there are no regulations.
DeWayfarer · 61-69, M
@LadyGrace I totally agree regulation is necessary.

Saying AI can never learn values is like saying the descendants of today's idiot savants can never learn values.

Kids do grow up and have other kids. In a limited way AI is the same as well.

It won't be the same AI, just as, a generation from now, they won't be the same children.
LadyGrace · 80-89
@DeWayfarer AI systems can learn human values by asking questions, though that approach is vulnerable to challenges like uncertainty, deception, or the absence of a reflective equilibrium.

In one review of 84 ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.
DeWayfarer · 61-69, M
@LadyGrace this is how we can regulate AI.

This would be so much easier to explain if you ever read any sci-fi by Isaac Asimov. He was a biochemist, BTW.

Asimov popularized the idea of robots to begin with. He was a precursor to even Geoffrey Hinton.

His concept was the Three Laws of Robotics. His robots were very much like today's AIs, just a bit more advanced.

What today's AIs need is such a moral code, yet far more than just three laws.

This is how it can be done. Sort of a "positronic" type of brain that basically shuts off the AI when certain things happen.
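To make the idea concrete, that "shut off when certain things happen" failsafe could be sketched roughly like this in Python. The rule names and the `GuardedAI` class are invented here purely for illustration; this is a toy, not how any real AI system is built.

```python
# Toy sketch of a hard-rule "shutoff" guard, loosely in the spirit of
# Asimov's Three Laws. The rule names below are hypothetical examples.

PROHIBITED_ACTIONS = {"harm_human", "self_replicate", "disable_oversight"}

class GuardedAI:
    def __init__(self):
        self.running = True

    def request_action(self, action: str) -> bool:
        """Return True if the action is allowed; trip the failsafe otherwise."""
        if action in PROHIBITED_ACTIONS:
            self.running = False  # hard stop, the "positronic" shutoff
            return False
        return self.running  # once halted, nothing further is allowed

ai = GuardedAI()
print(ai.request_action("answer_question"))  # True: harmless action
print(ai.request_action("self_replicate"))   # False: failsafe trips
print(ai.running)                            # False: the AI stays halted
```

The key design point, matching the "pull the plug" idea above, is that the shutoff is one-way: once a prohibited action is requested, even previously harmless actions are refused.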
LadyGrace · 80-89
@DeWayfarer I read Isaac Asimov in high school and found him very interesting. I also read Huxley's Brave New World. What's happening reminds me of this.
DeWayfarer · 61-69, M
@LadyGrace read his later books about R. Daneel Olivaw! They were written in the 1980s, I believe, when he was still alive. They take place after Pebble in the Sky, I believe.