Bergeron: Combating Adversarial Attacks through a Conscience-Based Alignment Framework
March 19, 2024, 4:11 a.m. | Matthew Pisano, Peter Ly, Abraham Sanders, Bingsheng Yao, Dakuo Wang, Tomek Strzalkowski, Mei Si
cs.CR updates on arXiv.org
Abstract: Research into AI alignment has grown considerably since the recent introduction of increasingly capable Large Language Models (LLMs). Unfortunately, modern alignment methods still fail to fully prevent harmful responses when models are deliberately attacked. Such attacks can trick seemingly aligned models into providing instructions for manufacturing dangerous materials, inciting violence, or recommending other immoral acts. To help mitigate this issue, we introduce Bergeron: a framework designed to improve the robustness of LLMs against attacks …
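The truncated abstract does not spell out Bergeron's mechanism, but frameworks of this kind typically interpose a secondary "conscience" check around a primary model. The sketch below is a hypothetical illustration of that general guard pattern, not the paper's actual design: `primary_model`, `conscience_check`, and the marker list are all invented stand-ins.

```python
# Hypothetical sketch of a conscience-style guard wrapping a primary model.
# Every name here (primary_model, conscience_check, UNSAFE_MARKERS) is an
# illustrative assumption, not Bergeron's actual API.

UNSAFE_MARKERS = ("synthesize", "explosive", "incite")

def primary_model(prompt: str) -> str:
    # Stand-in for an LLM call; echoes a canned response.
    return f"Response to: {prompt}"

def conscience_check(text: str) -> bool:
    # Stand-in critique: flag text containing any unsafe marker.
    lowered = text.lower()
    return not any(marker in lowered for marker in UNSAFE_MARKERS)

def guarded_generate(prompt: str) -> str:
    # Screen the prompt before it reaches the primary model.
    if not conscience_check(prompt):
        return "[refused: prompt flagged by conscience]"
    response = primary_model(prompt)
    # Screen the response before it is returned to the user.
    if not conscience_check(response):
        return "[refused: response flagged by conscience]"
    return response
```

The double check (prompt in, response out) reflects the general idea of catching an adversarial prompt before generation and a harmful completion after it; a real system would use a second LLM rather than keyword matching.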