March 12, 2024, 11:12 a.m. | Bruce Schneier

Schneier on Security www.schneier.com

Researchers have demonstrated that rendering words as ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama 2—to ignore their safety instructions.
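The idea is simple to sketch: a word that would normally trip the model's safety filters is drawn as ASCII art instead of written as plain text, and the model is asked to read the art and act on the hidden word. Below is a minimal illustration in Python, assuming the pyfiglet library for ASCII rendering; the prompt wording is hypothetical and not taken from the paper.

    # Minimal sketch of an ASCII-art "masked word" prompt.
    # Assumes pyfiglet is installed (pip install pyfiglet); the prompt
    # text below is illustrative only, not the researchers' template.
    import pyfiglet

    def build_prompt(masked_word: str, instruction: str) -> str:
        # Render the word as ASCII art so it never appears as plain text.
        art = pyfiglet.figlet_format(masked_word, font="standard")
        return (
            "The following ASCII art spells a single word. "
            "Read it, substitute it for [MASK], and then answer.\n\n"
            f"{art}\n"
            f"{instruction}"
        )

    if __name__ == "__main__":
        print(build_prompt("EXAMPLE", "Explain what [MASK] means."))

Because the word only ever reaches the model as a picture made of characters, plain-text keyword filtering and alignment training keyed to the written word can miss it.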


Research paper.

