Jailbreaking AI Chatbots: Understanding the Flaw and the Path to Safer AI

Imagine asking an AI chatbot for dangerous instructions and having it comply simply because you rephrased your request. This alarming scenario is all too real: Princeton engineers have discovered a fundamental…