April 24, 2024, 4:11 a.m. | Javier Rando, Francesco Croce, Kry\v{s}tof Mitka, Stepan Shabalin, Maksym Andriushchenko, Nicolas Flammarion, Florian Tram\`er

cs.CR updates on arXiv.org arxiv.org

arXiv:2404.14461v1 Announce Type: cross
Abstract: Large language models are aligned to be safe, preventing users from generating harmful content like misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that, otherwise, behave safely. Our competition, co-located at IEEE SaTML 2024, …

adversaries alignment arxiv attacks backdoors can competition cs.ai cs.cl cs.cr cs.lg data illegal inject instructions jailbreak language language models large llms misinformation poisoning poisoning attacks process report safe safety training training data vulnerable work

Consultant infrastructure sécurité H/F

@ Hifield | Sèvres, France

SOC Analyst

@ Wix | Tel Aviv, Israel

Information Security Operations Officer

@ International Labour Organization | Geneva, CH, 1200

PMO Cybersécurité H/F

@ Hifield | Sèvres, France

Third Party Risk Management - Consultant

@ KPMG India | Bengaluru, Karnataka, India

Consultant Cyber Sécurité H/F - Strasbourg

@ Hifield | Strasbourg, France