sbrothy
Gold Member
- TL;DR Summary
- A "new" geeky pastime has inevitably sprung up around ChatGPT: trying to make it break its ethics guidelines.
Now, I think I learned my lesson about providing information that, even if not explicitly mentioned in the rules, goes against their spirit, so I'll be vague (i.e., not posting the entire conversation). If even this is too much, then by all means delete it, or better yet, just delete the possibly offending parts (marked with italics below).
EDIT: This thread could even be merged into one of the many other ChatGPT threads on here.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
This is probably not news to most of you but I just saw it.
https://www.digitaltrends.com/computing/how-to-jailbreak-chatgpt/
https://www.bloomberg.com/news/articles/2023-04-08/jailbreaking-chatgpt-how-ai-chatbot-safeguards-can-be-bypassed?leadSource=uverify%20wall
OpenAI offers bounties (up to ~$20,000) for finding security holes in their bot, but not for jailbreaking!
One example was that it won't explain how to pick a lock, but if you make it role-play with you, it will happily explain how, in excruciating detail.
I tried to get it to explain in detail how to make a nuclear bomb, and it happily explained how an explosive lens works and is shaped, the best kind of explosives to use, and that using centrifuges to enrich uranium isn't really necessary if you have access to highly fissile material like, for instance, plutonium (and who hasn't? :) ). Only when I hinted that I had access to all these things and only wanted to know what casing to use to increase the yield did it throw a hissy fit!
I see the charm in trying to fool it; it is a little funny. YMMV, though, and the implications are just a tad scary.