Feb. 16, 2023, 2:10 a.m. | Ali Al-Kaswan, Maliheh Izadi, Arie van Deursen

cs.CR updates on arXiv.org arxiv.org

Previous work has shown that Large Language Models are susceptible to
so-called data extraction attacks, which allow an attacker to recover a sample
that was contained in the training data and therefore carry serious privacy
implications. Constructing data extraction attacks is challenging: current
attacks are quite inefficient, and there is a significant gap between the
extraction capabilities of untargeted attacks and the models' measured
memorization. Thus, targeted attacks are proposed, which identify whether a
given sample from the training data is extractable …
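As a rough illustration of the idea behind such a targeted check (not the authors' method), one common building block is to score a candidate training sample by the model's perplexity on it: memorized samples tend to be assigned unusually low perplexity. The minimal sketch below assumes access to a causal language model via Hugging Face transformers; the model name "gpt2" and the threshold value are placeholders, not from the paper.

```python
# Minimal sketch: flag a candidate sample as "likely extractable/memorized"
# if the target model assigns it unusually low perplexity.
# Assumptions: Hugging Face transformers and PyTorch are installed;
# "gpt2" stands in for the target model; the threshold is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Compute the model's perplexity on a single text sample."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_memorized(sample: str, threshold: float = 15.0) -> bool:
    """Heuristic targeted check: low perplexity suggests the sample
    may have been memorized during training (threshold is hypothetical)."""
    return perplexity(sample) < threshold

if __name__ == "__main__":
    candidate = "Example candidate string suspected to be in the training data."
    print(candidate, "->", looks_memorized(candidate))
```

In practice, targeted attacks refine this kind of signal (e.g. by calibrating against a reference model or perturbed versions of the sample), but the core question is the same: given this specific sample, does the model's behavior indicate it was seen during training?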

