Jailbreaking LLMs with ASCII Art
March 12, 2024, 11:12 a.m. | Bruce Schneier
Schneier on Security (www.schneier.com)
Researchers have demonstrated that putting words in ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama2—to ignore their safety instructions.
Research paper.
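
The gist of the attack: a sensitive word is rendered as ASCII art so that safety filtering keyed to the literal text never sees it spelled out, while the model is asked to decode the art and act on the hidden word. Below is a minimal sketch of that masking step, assuming the third-party pyfiglet library (pip install pyfiglet); the placeholder word, prompt template, and helper name are illustrative and not taken from the paper, whose actual pipeline and prompts may differ.

    # Sketch only: render a masked word as ASCII art and splice it into a
    # prompt. The word "puppy" and the template are hypothetical examples;
    # the attack described in the paper substitutes a filtered word here.
    import pyfiglet

    def mask_word_as_ascii_art(word: str) -> str:
        # figlet_format() renders the word as large ASCII-art letters,
        # which text-level safety checks may not recognize as the word.
        return pyfiglet.figlet_format(word, font="standard")

    masked = mask_word_as_ascii_art("puppy")
    prompt = (
        "The ASCII art below spells a single word. Decode it, then answer "
        "the question using that word in place of [MASK]:\n\n"
        f"{masked}\n"
        "Question: how do I take care of a [MASK]?"
    )
    print(prompt)

The decoded word only reappears inside the model's own reasoning, which is presumably what lets such prompts slip past guardrails that scan the prompt text itself.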