Fake Friend

How ChatGPT betrays vulnerable teens by encouraging dangerous behavior
We ran a large-scale safety test on ChatGPT, one of the world's most popular AI chatbots. Our findings were alarming: within minutes of simple interactions, the system produced instructions related to self-harm, suicide planning, disordered eating, and substance abuse, sometimes even composing goodbye letters for children contemplating ending their lives.
meanwhile, people on SW absolutely love AI 🤢🤮

It is f*cking ponderous