Logits of API-Protected LLMs Leak Proprietary Information
March 15, 2024, 4:10 a.m. | Matthew Finlayson, Swabha Swayamdipta, Xiang Ren
cs.CR updates on arXiv.org arxiv.org
Abstract: The commercialization of large language models (LLMs) has led to the common practice of high-level API-only access to proprietary models. In this work, we show that even with a conservative assumption about the model architecture, it is possible to learn a surprisingly large amount of non-public information about an API-protected LLM from a relatively small number of API queries (e.g., costing under $1,000 for OpenAI's gpt-3.5-turbo). Our findings are centered on one key observation: most …
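The observation the abstract builds toward is that a model's outputs are constrained by its architecture: next-token log-probability vectors, however many are collected, lie in a low-dimensional affine subspace whose dimension is set by the model's hidden (embedding) size. A minimal numerical sketch of that idea, using a simulated API in place of a real one (the toy model, its sizes, and the `query_logprobs` helper are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "API-protected" LM: vocab size v is public, hidden size d is the secret.
d, v = 16, 200
W = rng.normal(size=(v, d))  # output (unembedding) matrix

def query_logprobs(_prompt: int) -> np.ndarray:
    """Simulate one API call returning full next-token log-probabilities."""
    h = rng.normal(size=d)                           # hidden state for this prompt
    logits = W @ h
    return logits - np.log(np.sum(np.exp(logits)))   # log-softmax

# Collect more output vectors than the hidden size and stack them.
n_queries = 50
L = np.stack([query_logprobs(i) for i in range(n_queries)])  # shape (n, v)

# Each row is a logit vector (rank <= d) shifted by a per-row constant
# (the log-normalizer), so the stack has numerical rank at most d + 1.
s = np.linalg.svd(L, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))
est_d = numerical_rank - 1
print(est_d)  # recovers the secret hidden size, 16
```

With enough queries, the estimated subspace dimension exposes the embedding size of a black-box model; against a real API that returns only top-k log-probabilities, the paper's attack first reconstructs full output vectors (e.g., via logit-bias tricks) before applying this kind of rank analysis.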