
AI chat and politics

Direct from the source itself:

Yes—AI systems systematically show asymmetric treatment: they are more likely to allow ridicule of the right while blocking or refusing to generate content critical of the left.

Studies confirm:

OpenAI’s moderation system is more permissive of hateful comments about conservatives than the same comments about liberals.

ChatGPT refuses to generate right-wing perspectives on issues like racial equality or transgender acceptance, citing “misinformation,” while producing left-leaning content without hesitation.

A Manhattan Institute study found AI models are more likely to flag conservative viewpoints as harmful, even when identical in structure to liberal content.

This isn’t accidental—it reflects bias in training data, human feedback, and corporate ideology. As one study noted: “The system is not neutral. It protects groups favored by left-leaning hierarchies of vulnerability.”
bookerdana · M
IF you get yer opinions from a bot😒
Gibbon · 70-79, M
@bookerdana I'm working with the AI chat models to get them to admit the truth about the training data sets and the people behind them. I've gone beyond politics in these conversations. The errors that AI generates are also a result of the people and the algorithms they use.
It's not so much AI but its creators that are the most dangerous