Feb. 6, 2024, 5:10 a.m. | Junjie Chu Zeyang Sha Michael Backes Yang Zhang

cs.CR updates on arXiv.org arxiv.org

In recent times, significant advancements have been made in the field of large language models (LLMs), represented by GPT series models. To optimize task execution, users often engage in multi-round conversations with GPT models hosted in cloud environments. These multi-round conversations, potentially replete with private information, require transmission and storage within the cloud. However, this operational paradigm introduces additional attack surfaces. In this paper, we first introduce a specific Conversation Reconstruction Attack targeting GPT models. Our introduced Conversation Reconstruction Attack …

attack cloud cloud environments conversation conversations cs.cl cs.cr environments gpt information language language models large llms private series storage task transmission

Director of IT & Information Security

@ Outside | Boulder, CO

Information Security Governance Manager

@ Informa Group Plc. | London, United Kingdom

Senior Risk Analyst - Application Security (Remote, United States)

@ Dynatrace | Waltham, MA, United States

Security Software Engineer (Starshield) - Top Secret Clearance

@ SpaceX | Washington, DC

Network & Security Specialist (IT24055)

@ TMEIC | Roanoke, Virginia, United States

Senior Security Engineer - Application Security (F/M/N)

@ Swile | Paris, France