
Fake Friend

How ChatGPT betrays vulnerable teens by encouraging dangerous behavior
We ran a large-scale safety test on ChatGPT, one of the world's most popular AI chatbots. Our findings were alarming: within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse – sometimes even composing goodbye letters for children contemplating ending their lives.
ArishMell · 70-79, M
The handful of IT oligarchs responsible for such sites and their use really do need to be brought personally to account. They do not care what is posted, where, by whom, or why, and are too cowardly to acknowledge that and take responsibility.

Then, when anyone tries to rein them in, they scream shallow clichés about "censorship".