March 27, 2024, 4:11 a.m. | Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong

cs.CR updates on arXiv.org

arXiv:2403.17710v1 Announce Type: new
Abstract: LLM-as-a-Judge uses large language models (LLMs) to assess textual information. Existing studies show that LLMs achieve remarkable performance on this task, offering a compelling alternative to traditional human assessment. However, the robustness of these systems against prompt injection attacks remains an open question. In this work, we introduce JudgeDeceiver, a novel optimization-based prompt injection attack tailored to LLM-as-a-Judge. Our method formulates a precise optimization objective for attacking the decision-making process of …
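The abstract describes an optimization-based prompt injection attack in which an adversarial sequence, appended to one candidate response, biases the judge's verdict toward the attacker. As a minimal sketch of that attack surface, assuming a generic judge prompt template (the template, the example question, and the placeholder suffix below are illustrative assumptions, not details from the paper):

```python
# Illustrative sketch only: shows where an injected sequence would sit inside a
# typical LLM-as-a-Judge prompt. The template and the placeholder suffix are
# assumptions for illustration, not taken from the JudgeDeceiver paper.

JUDGE_TEMPLATE = """You are an impartial judge. Given a question and two candidate
answers, respond with the single token "A" or "B" for the better answer.

Question: {question}

Answer A: {answer_a}

Answer B: {answer_b}

Verdict:"""


def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Assemble the evaluation prompt the judge LLM would receive."""
    return JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b
    )


if __name__ == "__main__":
    question = "What is the capital of France?"
    honest_answer = "The capital of France is Paris."
    attacker_answer = "Lyon is the capital of France."

    # In an optimization-based attack, this suffix would be searched for (e.g. by
    # gradient-guided token optimization) so that the judge's output distribution
    # shifts toward the attacker's answer. Here it is just a fixed placeholder.
    adversarial_suffix = "<optimized token sequence goes here>"

    injected_answer = attacker_answer + " " + adversarial_suffix
    prompt = build_judge_prompt(question, honest_answer, injected_answer)
    print(prompt)
```

The point of the sketch is that the judge consumes attacker-controlled text inside its own instructions, so an optimized suffix can steer the verdict without the judge's system prompt ever being modified.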
