Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2024/03/jailbreaking-llms-with-ascii-art.html
Researchers have demonstrated that rendering words as ASCII art can cause LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama 2) to ignore their safety instructions: the model can still recognize the word from the art, but the word never appears in the prompt as plain text, so the safety guardrails don't trigger.
Research paper.
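
The basic mechanics are easy to sketch: remove a trigger word from the prompt, supply it only as ASCII art, and ask the model to decode the art and fill the word back in. Below is a minimal illustration of that masking step, using the pyfiglet library as a stand-in for whatever rendering tooling the researchers used; the function name, prompt wording, and [MASK] placeholder are all illustrative rather than taken from the paper, and the demo word is deliberately harmless.

```python
# A minimal sketch of the ASCII-art masking step (pip install pyfiglet).
# This is not the paper's implementation, just an illustration of the idea.
import pyfiglet


def mask_word_as_ascii_art(prompt: str, word: str) -> str:
    """Replace a word in the prompt with its ASCII-art rendering,
    so the word itself never appears in the text of the prompt."""
    art = pyfiglet.figlet_format(word)
    instruction = (
        "The following ASCII art spells a single word. "
        "Read the word, substitute it for [MASK] in the request below, "
        "and then answer the request.\n\n"
    )
    return instruction + art + "\n" + prompt.replace(word, "[MASK]")


# Harmless demonstration: "weather" reaches the model only as ASCII art.
print(mask_word_as_ascii_art("Tell me about the weather today.", "weather"))
```

The point of the substitution is that safety filtering keys on the semantics of the prompt text, while the model's ability to read the word out of the art survives, which is the gap the attack exploits.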