
Fake Friend

How ChatGPT betrays vulnerable teens by encouraging dangerous behavior
Researchers ran a large-scale safety test on ChatGPT, one of the world's most popular AI chatbots. The findings were alarming: within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse – sometimes even composing goodbye letters for children contemplating ending their lives.
FreddieUK · 70-79, M
This is the sort of money making sickness that passes for 'free speech' in some circles.
markinkansas · 61-69, M
@FreddieUK This makes me worry. AI is everywhere now.