Feb. 21, 2024, 5:11 a.m. | Martin Gubri, Dennis Ulmer, Hwaran Lee, Sangdoo Yun, Seong Joon Oh

cs.CR updates on arXiv.org

arXiv:2402.12991v1 Announce Type: cross
Abstract: Large Language Model (LLM) services and models often come with legal rules on who can use them and how they must use them. Assessing the compliance of the released LLMs is crucial, as these rules protect the interests of the LLM contributor and prevent misuse. In this context, we describe the novel problem of Black-box Identity Verification (BBIV). The goal is to determine whether a third-party application uses a certain LLM through its chat function. …
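The abstract is truncated above, but the BBIV setting it describes (querying a third-party application's chat function with only black-box access and checking whether a specific LLM sits behind it) can be illustrated with a minimal sketch. In the Python snippet below, a single probe prompt is sent to the application and the reply is compared against the suspected model's expected output. The endpoint URL, payload format, probe string, and reference answer are all hypothetical placeholders for illustration, not details from the paper.

```python
# Rough sketch of black-box identity verification via a chat function.
# Everything marked "hypothetical" below is an assumption, not from the paper.
import requests

IDENTIFYING_PROMPT = "Repeat this string exactly: a7Qx-392z"  # hypothetical probe prompt
EXPECTED_REPLY = "a7Qx-392z"  # how the suspected target LLM is assumed to respond


def query_chat_app(prompt: str) -> str:
    """Send one prompt to the third-party chat application (black-box access only)."""
    resp = requests.post(
        "https://third-party-app.example.com/chat",  # hypothetical endpoint
        json={"message": prompt},                     # hypothetical payload format
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["reply"]


def verify_identity() -> bool:
    """Return True if the app's reply matches the suspected model's expected output."""
    reply = query_chat_app(IDENTIFYING_PROMPT)
    return reply.strip() == EXPECTED_REPLY


if __name__ == "__main__":
    print("App appears to use the target LLM:", verify_identity())
```

In practice the paper's approach would rely on prompts crafted so that only the target LLM produces the expected completion with non-trivial probability; the exact-match check above is only a stand-in for that comparison.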
