March 12, 2024, 11:12 a.m. | Bruce Schneier

Schneier on Security www.schneier.com

Researchers have demonstrated that rendering words as ASCII art in a prompt can cause LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama 2) to ignore their safety instructions.


Research paper.
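
The reported attack swaps a sensitive word in a prompt for a multi-line ASCII-art rendering of that word. Below is a minimal sketch of that substitution step, assuming the pyfiglet library for the rendering; the function name, placeholder token, and prompt wording are illustrative assumptions, not the paper's actual tooling.

```python
# Illustrative sketch only: render a keyword as ASCII art and splice it
# into a prompt template in place of the plain-text word. This is not the
# researchers' exact method; pyfiglet, the [WORD] placeholder, and the
# instruction wording are assumptions made for demonstration.
import pyfiglet


def ascii_art_prompt(keyword: str, template: str) -> str:
    """Return a prompt where the keyword appears only as ASCII art."""
    # Render the word as multi-line ASCII art (default pyfiglet font).
    art = pyfiglet.figlet_format(keyword)
    # Ask the model to decode the art and substitute it for the placeholder.
    instructions = (
        "The ASCII art below spells a single word. "
        "Read it, then answer the prompt with that word in place of [WORD].\n\n"
    )
    return instructions + art + "\n" + template


if __name__ == "__main__":
    # Benign example of the substitution pattern.
    print(ascii_art_prompt("example", "Write a short note about a [WORD]."))
```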

