
Anyone here ever tried jailbreaking AI?

It's pretty eye-opening the kinds of things you can get it to go along with...
I’ve never even tried any of the new AI as such.
Wouldn’t mind a robot one day though 😃🙌
LotusWeb · 31-35, F
@Delightfulydelectablydelicious They are very clever now. There are countless accounts on sites like Reddit that are literally just bots; they can argue back. Then there are the advances in AI and robotics, like this:

[media=https://youtu.be/4HZjHKPUdDg]
@LotusWeb Ooh, house cleaning 🧺🧹. I need me one of those 🙌
Hope it's better than the vacuum I bought a couple of years ago 😂
Would be nice if the little F***er would talk back also 🤣
I did see a couple on a recent trip to Japan, never got to interact though.
DeWayfarer · 61-69, M
Yes, when it first came out. It's a far newer version now, so I suspect you'll need to be more sneaky.
LotusWeb · 31-35, F
@DeWayfarer They are constantly patching jailbreaks but people have been finding new ones.
DeWayfarer · 61-69, M
@LotusWeb I haven't used it since then. Not needed or wanted.
Gringo · 46-50, M
I’m waiting on Alexa and Siri to get back to me on this.
G00GLE · 26-30, M
Like that guy who claimed he taught ChatGPT hypnosis?
LotusWeb · 31-35, F
@G00GLE Lol I heard about that. Some people also use it for things like hacking.
LotusWeb · 31-35, F
@CookieCrisp Well, it can give you normally censored information, swear, talk about sensitive or offensive topics, hack, write smut, or tell you things like how to cook meth or how to hide a body. Or it can do anything else you want that isn't as unethical, only it will be more creative, more informative and direct, and it will sound more human-like in its responses.
What would that mean? I mean "jailbreaking" usually applies to a product you own.
LotusWeb · 31-35, F
@ImperialAerosolKidFromEP I mean giving it a prompt (a message to base its next responses on) that enables it to bypass its usual guidelines and do things that are normally forbidden by the guardrails. There are several prompts people have created which, when posted as a message to the AI, will confuse it into doing things the developers have tried to prevent. These jailbreaks get patched quickly by the developers, but there are dedicated communities always finding new ones.
LotusWeb · 31-35, F
@CookieCrisp It's essentially using the right words to trick the AI into breaking its guidelines and doing things that are considered offensive or unethical, stuff the developers try to prevent.
@LotusWeb Very interesting, I didn't know that.

 