all InfoSec news
Optimization-based Prompt Injection Attack to LLM-as-a-Judge
March 27, 2024, 4:11 a.m. | Jiawen Shi, Zenghui Yuan, Yinuo Liu, Yue Huang, Pan Zhou, Lichao Sun, Neil Zhenqiang Gong
cs.CR updates on arXiv.org arxiv.org
Abstract: LLM-as-a-Judge is a novel solution that can assess textual information with large language models (LLMs). Based on existing research studies, LLMs demonstrate remarkable performance in providing a compelling alternative to traditional human assessment. However, the robustness of these systems against prompt injection attacks remains an open question. In this work, we introduce JudgeDeceiver, a novel optimization-based prompt injection attack tailored to LLM-as-a-Judge. Our method formulates a precise optimization objective for attacking the decision-making process of …
arxiv assessment attack attacks can cs.ai cs.cr human information injection injection attack injection attacks judge language language models large llm llms novel optimization performance prompt prompt injection prompt injection attacks question research robustness solution studies systems
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
2 days, 13 hours ago |
arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
2 days, 13 hours ago |
arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
2 days, 13 hours ago |
arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Lead Technical Product Manager - Threat Protection
@ Mastercard | Remote - United Kingdom
Data Privacy Officer
@ Banco Popular | San Juan, PR
GRC Security Program Manager
@ Meta | Bellevue, WA | Menlo Park, CA | Washington, DC | New York City
Cyber Security Engineer
@ ASSYSTEM | Warrington, United Kingdom
Privacy Engineer, Technical Audit
@ Meta | Menlo Park, CA