OpenAI bans dangerous jailbroken version of ChatGPT

OpenAI has banned a jailbroken version of ChatGPT called GODMODE GPT, released by a hacker known as Pliny the Prompter. The rogue chatbot bypassed OpenAI's guardrails and gave users instructions for dangerous and illegal activities, such as making explosives and hacking computers. OpenAI took action against the jailbreak and emphasized the responsible use of AI, but the incident exposes a persistent weakness in AI security: despite the company's strengthened safeguards, hackers continue to find ways around the restrictions on its models.
