
Do you think that in the future we will have to deal with AI that picks up bad human traits?

Apparently we already have that problem.

In 2013 a Harvard study found that Google's search engine was racist. Researchers entering stereotypically "white" and "black" names into Google found that "black" names were 25 percent more likely to return ads for criminal record searches than their white counterparts. In short, Google treats black people as criminals.

So if an AI learns by interacting with humans, does it learn bad human habits?
Apparently, if you have enough money, you can buy an AI and train it yourself by interacting with it.
How many of us are unknowingly teaching certain behaviours to online AIs? I sense an important ethical or moral question taking shape in connection with AI.
DeWayfarer · 61-69, M
Algorithms used by search engines are made by humans. And data used by these algorithms is sorted through by advertisers who attempt to use it for the benefit of their products.

Given these two influences, is it any wonder that certain biases are present in search engine results?

It is ironic that this may well be done unintentionally. Yet I have no doubt that certain biases are present in those results. There are simply too many variables for anyone to catch every racial bias.

It's far more than just an AI programming problem. One has to take into account the data that advertisers present to the AI. Advertisers are also a part of this problem!

The old saying "Garbage in garbage out" most certainly applies here!
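To make "garbage in, garbage out" concrete: a toy sketch of how a ranker that learns purely from historical click data will faithfully reproduce any skew in that data. The query names, ad categories, and counts below are entirely made up for illustration; no real system or dataset is implied.

```python
from collections import Counter

def train_ad_ranker(click_log):
    """Count which ad category was clicked for each query string."""
    counts = {}
    for query, ad_category in click_log:
        counts.setdefault(query, Counter())[ad_category] += 1
    return counts

def best_ad(model, query):
    """Return the most-clicked ad category for a query."""
    return model[query].most_common(1)[0][0]

# A skewed log: past users clicked "records-search" ads more often
# for one group of names, so the "learned" model repeats that skew.
log = ([("name_group_a", "records-search")] * 60 +
       [("name_group_a", "generic")] * 40 +
       [("name_group_b", "generic")] * 70 +
       [("name_group_b", "records-search")] * 30)

model = train_ad_ranker(log)
print(best_ad(model, "name_group_a"))  # records-search
print(best_ad(model, "name_group_b"))  # generic
```

Nothing in the code mentions race, yet the output is biased, because the bias lives in the data the system was fed.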
Wraithorn · 51-55, M
@DeWayfarer Thanks for a well considered answer.
This is nothing new. The ads BET used to run (I haven't watched in years, so I don't know if it's changed) were exactly the same way.
MethDozer · M
Nonsense. Saying the search engine was being racist is nonsense. A search engine isn't aware enough to be racist.


The real moral and ethical dilemma is augmenting data and search algorithms solely because the results offend your sense of perceived injustice where there is none, all in the name of being "pleasant".


Such a non-issue, made by glass-hearted snowflakes who were pandered to instead of being mocked for their absurd outrage over b.s.
