Feb. 8, 2024, 5:10 a.m. | Javier Rando Florian Tram\`er

cs.CR updates on arXiv.org arxiv.org

Reinforcement Learning from Human Feedback (RLHF) is used to align large language models to produce helpful and harmless responses. Yet, prior work showed these models can be jailbroken by finding adversarial prompts that revert the model to its unaligned behavior. In this paper, we consider a new threat where an attacker poisons the RLHF training data to embed a "jailbreak backdoor" into the model. The backdoor embeds a trigger word into the model that acts like a universal "sudo command": …

adversarial attacker backdoors can cs.ai cs.cl cs.cr cs.lg feedback human jailbreak language language models large prompts threat work

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Principal Security Analyst - Threat Labs (Position located in India) (Remote)

@ KnowBe4, Inc. | Kochi, India

Cyber Security - Cloud Security and Security Architecture - Manager - Multiple Positions - 1500860

@ EY | Dallas, TX, US, 75219

Enterprise Security Architect (Intermediate)

@ Federal Reserve System | Remote - Virginia

Engineering -- Tech Risk -- Global Cyber Defense & Intelligence -- Associate -- Dallas

@ Goldman Sachs | Dallas, Texas, United States

Vulnerability Management Team Lead - North Central region (Remote)

@ GuidePoint Security LLC | Remote in the United States