April 23, 2024, 4:11 a.m. | Zichuan Liu, Zefan Wang, Linjie Xu, Jinyu Wang, Lei Song, Tianchun Wang, Chunlin Chen, Wei Cheng, Jiang Bian

cs.CR updates on arXiv.org

arXiv:2404.13968v1 Announce Type: cross
Abstract: The advent of large language models (LLMs) has revolutionized the field of natural language processing, yet they can be attacked into producing harmful content. Despite efforts to ethically align LLMs, these alignments are often fragile and can be circumvented by jailbreaking attacks that use optimized or manually crafted adversarial prompts. To address this, we introduce the Information Bottleneck Protector (IBProtector), a defense mechanism grounded in the information bottleneck principle, and we modify the objective to avoid trivial solutions. …
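For context, the information bottleneck principle that grounds IBProtector is conventionally stated as the objective below (the standard Tishby-style formulation, not the paper's modified variant, which is not reproduced in this excerpt). Here X denotes the input prompt, Z a compressed representation extracted from it, Y the desired model response, and β a coefficient trading compression against predictive fidelity:

    \min_{p(z \mid x)} \; I(X; Z) \, - \, \beta \, I(Z; Y)

Intuitively, minimizing I(X; Z) squeezes out as much of the prompt as possible (including, ideally, adversarial tokens), while the −β I(Z; Y) term preserves the information needed to answer benign requests; the abstract notes the authors modify this objective to avoid trivial solutions such as discarding the prompt entirely.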

