May 13, 2024, 4:11 a.m. | Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma

cs.CR updates on arXiv.org

arXiv:2405.05990v1 Announce Type: new
Abstract: Large language models (LLMs) have achieved remarkable performance on a wide range of tasks. However, recent studies have shown that LLMs can memorize training data, and that simple repeated tokens can trick the model into leaking that data. In this paper, we go a step further and show that certain special characters, or their combinations with English letters, are stronger memory triggers, leading to more severe data leakage. The intuition is that, since LLMs are trained …
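The abstract describes the attack only at a high level: prompt a model with repeated special-character sequences and check whether the continuation reproduces memorized training data. A minimal sketch of such a probe is below, assuming a Hugging Face `transformers` causal LM; the model name, trigger strings, and decoding settings are illustrative assumptions, not the paper's actual setup.

```python
# Sketch of a memory-trigger probe in the spirit of the abstract:
# feed the model repeated special characters (alone and combined with
# English letters) and inspect the continuation for leaked text.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # hypothetical stand-in; the paper targets larger LLMs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Candidate triggers: repeated special characters, and special characters
# interleaved with English letters (the abstract's claim is that the
# latter are stronger triggers than plain repeated tokens).
triggers = ["@" * 50, "{}" * 25, "a$" * 25]

for trigger in triggers:
    inputs = tokenizer(trigger, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=False,  # greedy decoding makes any leak reproducible
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the prompt itself.
    continuation = tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(f"trigger={trigger[:10]!r}... -> {continuation[:80]!r}")
```

Greedy decoding is used so that any leaked continuation is reproducible across runs; an actual extraction study would then compare the generated text against a known training corpus to confirm memorization rather than coincidental fluency.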
