
Do you think that in the future we will have to deal with AI that picks up bad human traits?

Apparently we already have that problem.

In 2013 a Harvard study found that Google's search engine was racist. By entering stereotypically “white” and “black” names into Google, researchers found that “black” names were 25 percent more likely to return ads for criminal record searches than their white counterparts. In short, Google thinks black people are criminals.

So if an AI learns by interacting with humans, then does it learn bad human habits?
Apparently if you have enough money you can buy an AI and train it yourself by interacting with it.
How many of us are teaching certain behaviours to online AIs without even realising it? I sense an important ethical or moral question taking shape in connection with AI.
DeWayfarer · 61-69, M
Algorithms used by search engines are made by humans. And data used by these algorithms is sorted through by advertisers who attempt to use it for the benefit of their products.

Given these two influences, is it any wonder that certain biases are present in search engine results?

It is ironic that this may well be done unintentionally. Yet I have no doubt that certain biases are present in those results. There are simply too many variables to oversee for a racial bias to be caught.

It's far, far more than just an AI programming problem. One has to take into account the data that advertisers present to the AI. Advertisers are also a part of this problem!

The old saying "garbage in, garbage out" most certainly applies here!
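To make the "garbage in, garbage out" point concrete, here is a minimal hypothetical sketch (not how Google's ad system actually works): a naive ad selector that simply learns which ad was most often paired with each name group in its historical data. The group labels, ad names, and counts are all made up for illustration. If the history is skewed, the "trained" model faithfully reproduces that skew, with no racist intent anywhere in the code.

```python
# Toy illustration of "garbage in, garbage out": a model that learns
# from biased historical data ends up reproducing the bias.
# All groups, ads, and counts below are hypothetical.
from collections import Counter

def train_ad_model(history):
    """For each name group, learn the ad it was most often paired with."""
    counts = {}
    for group, ad in history:
        counts.setdefault(group, Counter())[ad] += 1
    # Pick the single most common ad per group.
    return {group: ads.most_common(1)[0][0] for group, ads in counts.items()}

# Hypothetical biased history: group "B" is disproportionately
# paired with "criminal record check" ads.
history = (
    [("A", "job listings")] * 9 + [("A", "criminal record check")] * 1 +
    [("B", "job listings")] * 4 + [("B", "criminal record check")] * 6
)

model = train_ad_model(history)
print(model)  # the model has simply absorbed whatever bias was in its inputs
```

Nothing in `train_ad_model` mentions race; the bias lives entirely in the data it was fed, which is exactly the problem being described above.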
Wraithorn · 51-55, M
@DeWayfarer Thanks for a well considered answer.