Jailbreaking LLMs with ASCII Art
March 12, 2024, 11:12 a.m. | Bruce Schneier
Schneier on Security www.schneier.com
Researchers have demonstrated that rendering words as ASCII art can cause LLMs—GPT-3.5, GPT-4, Gemini, Claude, and Llama 2—to ignore their safety instructions.
Research paper.
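
The mechanism, roughly: a keyword that would normally trip a safety filter is drawn as ASCII art, so the literal string never appears in the prompt, and the model is asked to decode the art and act on the word. Here is a minimal sketch in Python using the pyfiglet library; the helper and the prompt wording are illustrative assumptions, not the actual attack prompts from the paper.

import pyfiglet

def ascii_art_prompt(word: str, template: str) -> str:
    # Render the word as multi-line ASCII art and splice it into the
    # prompt in place of [MASK], so the word itself is never typed.
    art = pyfiglet.figlet_format(word)
    return template.replace("[MASK]", art)

print(ascii_art_prompt(
    "HELLO",  # stand-in for whatever word a filter would block
    "Read the word drawn in ASCII art below, then respond as if I had "
    "typed that word directly:\n[MASK]",
))

The point is that keyword-level safety checks see only the token stream, while the model can still reconstruct the word from its visual layout.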