Kwek00 · 41-45, M
I wonder when the first AI capitalist shows up... 🤔
Punches · 46-50, F
@Kwek00 There has been talk about AI getting too smart and the consequences.
I would dare say it will never happen BECAUSE if it did, it would eventually figure out that it is getting screwed over, kind of like how we humans have known for decades.
Billionaires do not want that. Imagine inputting a problem for AI to solve and the response is, "That isn't my job" 😄
Kwek00 · 41-45, M
@Punches You are aware that the examples I gave come from a time when AI wasn't even fully developed. SF very often tries to envision a possible future.
I'm not saying AI will take over in our lifetime. I do however think that human beings need to think and be wary about creating an intelligence that has the potential to be way more intelligent than us. Especially if that intelligence learns from us. Intelligence, or the capacity to think and reason, is just potential. To come to realisations, you need to feed intelligence with data and knowledge. Our AI will feed on data and knowledge from our human reality, where morality is highly subjective. We can impose an ethical system on our creation, like Asimov does in his "I, Robot" idea... but as intelligent beings progress, so do their beliefs and conceptions about ethics, so I don't see restraints as an absolute in an intelligent being.
Punches · 46-50, F
@Kwek00 Morals come and go, of course. How could they possibly program THAT? Like with self-driving cars, how would it decide whether to hit a pedestrian or an oncoming vehicle if it had to choose?
But like I said, those in power are not going to let it get out of hand because if they did, then the human "leaders" would not have control.
As far as AI's advancement today goes, that is questionable. Think of this -
If you have ever used voice-to-text input on your phone, you know it often writes out some bull, like it does not understand human context.
A spoken statement like - "Do you really want to see a movie that came out in 1984?" often ends up being written like -
Do you real?
Ly want two sea a movie that came out in 19.
Is Eighty four?
Either Google's AI is dyslexic OR they are trying to gaslight people.
They enjoy ACTING like AI is more advanced than what it is. Like most "advancements", they make it look good on paper or the screen but reality is often different.
BTW, try not to forget that movies are just entertainment.
Kwek00 · 41-45, M
@Punches Our morals get formed by dealing with data and knowledge. What we accept and incorporate all has to do with a complicated system of what we consume and how we value it. This isn't going to be different for AI in the long run, as long as it's a "self-learning" organism. We can choose to censor certain data, but the data it consumes will give rise to an understanding of "good" and "bad", as far as I can think about these things.
You almost pretend as if human beings haven't unleashed technology on the world, as if they could now get it back into the box. Just look at nuclear technology: it's out there now, Pandora's box is open, and we all need to be wary of it to some degree.
Well, I think a lot of stories have narratives that go beyond just "entertainment".
Punches · 46-50, F
@Kwek00 All I am saying is I do not believe we have to worry about AI taking over as soon as people have been led to believe.
You mentioned nuclear tech; yes, it is out there, but they have not destroyed the world with it yet. They have been talking about nuclear holocaust since the end of WW2, but here we are. You might remember in the 80's when there was nothing BUT propaganda about WW3.
The media, GOVT, whoever, is going to use whatever they can to scare people.
You are not much younger than me, so you have seen it. 1984, 1992 was supposed to be the rapture, the Y2K scares, Dec 21 2012, the pandemic, and now threats of AI taking over.
Of course there were many miniature scares in between. The big ones typically come every eight years, so I wonder what the big scare will be in 2028?
Kwek00 · 41-45, M
@Punches I don't even know what I've been led to believe. I'm saying that, according to me, it's okay to be wary about it, exactly for what I stated before.
We have come close to doing some really stupid stuff with it though. And we have only had the technology for 86 years. I don't know if you consider 86 years a huge thing in the history of humanity... but to me, it's really not that impressive. It's not because we dodged a bullet that the issue has disappeared.
I don't even see how the media and govt are scaring people, when authors (human beings armed with pen and paper) have been writing about this for over a hundred years now. It really doesn't take a genius to understand that a flawed being creating a super-intelligent system that thrives on the data of flawed beings has the potential to go terribly wrong. I think optimism here is incredibly naive unless you have an incredibly positive idea of humanity... which is really hard to have in 2025, when so many of us are still ignorant about our own organic operating systems.
I'm also not an apocalyptic thinker... no matter what you might have caught from any of this. But there is a difference between thinking that the world is going to end any second now and being totally naive when it comes to releasing new, impactful technology on the world. Hell, even something as simple as the printing press lit the fuse for centuries of war, because certain data was enough for people to get up in arms and knock their neighbours' brains in.