
Do you think that in the future we will have to deal with AI that picks up bad Human traits?

Apparently we already have that problem.

In 2013 a Harvard study found that Google's search engine was racist. By entering stereotypically “white” and “black” names into Google, researchers found that “black” names were 25 percent more likely to return ads for criminal record searches than their white counterparts. In short, Google thinks black people are criminals.

So if an AI learns by interacting with Humans, does it learn bad Human habits?
Apparently, if you have enough money, you can buy an AI and train it yourself by interacting with it.
How many of us are unknowingly teaching certain behaviour to online AI without even realising it? I sense an important ethical or moral question taking shape in connection with AI.
MethDozer · M
Nonsense. Saying the search engine was being racist is nonsense. A search engine isn't aware, so it can't be racist.


The real moral and ethical dilemma is altering data and search algorithms solely because the results offend your intersectional sense of perceived injustice where there is none, all in the name of being "pleasant".


Such a non-issue, made up by glass-hearted snowflakes who were pandered to instead of being mocked for their absurd outrage over b.s.