Feb. 6, 2024, 5:10 a.m. | Junjie Chu, Zeyang Sha, Michael Backes, Yang Zhang

cs.CR updates on arXiv.org

Recent years have seen significant advances in large language models (LLMs), exemplified by the GPT series. To get better results on their tasks, users often engage in multi-round conversations with GPT models hosted in cloud environments. These multi-round conversations, potentially replete with private information, must be transmitted to and stored in the cloud. This operational paradigm introduces additional attack surfaces. In this paper, we first introduce a specific Conversation Reconstruction Attack targeting GPT models. Our introduced Conversation Reconstruction Attack …
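The attack surface described above stems from how multi-round chat works: the client re-sends the accumulated conversation history to the cloud-hosted model on every turn. The sketch below illustrates that pattern only; it is not the paper's attack, and `query_llm` is a hypothetical placeholder for whatever hosted chat-completion endpoint is in use.

from typing import Dict, List

def query_llm(messages: List[Dict[str, str]]) -> str:
    """Hypothetical stand-in for a cloud-hosted chat-completion call."""
    raise NotImplementedError("replace with a real hosted LLM API call")

# Conversation history kept client-side, but re-transmitted every round.
history: List[Dict[str, str]] = []

def chat(user_input: str) -> str:
    # Each turn appends to the history, and the whole history
    # (including private details from earlier rounds) is sent to,
    # and typically stored by, the cloud provider.
    history.append({"role": "user", "content": user_input})
    reply = query_llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

Because every round carries the full transcript, any adversary who can observe or query the stored conversation state gains access to all earlier rounds, which is the exposure the Conversation Reconstruction Attack exploits.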

