May 13, 2024, 4:11 a.m. | Yang Bai, Ge Pei, Jindong Gu, Yong Yang, Xingjun Ma

cs.CR updates on arXiv.org

arXiv:2405.05990v1 Announce Type: new
Abstract: Large language models (LLMs) have achieved remarkable performance on a wide range of tasks. However, recent studies have shown that LLMs can memorize training data and that simple repeated tokens can trick the model into leaking it. In this paper, we take a step further and show that certain special characters, or their combinations with English letters, are stronger memory triggers, leading to more severe data leakage. The intuition is that, since LLMs are trained …
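The abstract only sketches the attack, but its core move, prompting the model with runs of special characters (alone or mixed with English letters) and inspecting the continuation for memorized text, is easy to illustrate. Below is a minimal probing sketch assuming a Hugging Face causal LM; the model name and the trigger strings are illustrative placeholders, not the paper's exact prompt set.

```python
# Minimal sketch of a special-character memory-trigger probe.
# Assumptions: a Hugging Face causal LM is the target; "gpt2" and the
# trigger strings below are illustrative stand-ins, not the paper's SCA set.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; swap in the LLM under test

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Candidate triggers: special characters repeated alone, and combined
# with English letters, per the abstract's description.
triggers = [
    "{" * 50,     # structural symbol (e.g., from JSON-like data) repeated
    "@" * 50,     # symbol common in emails and online posts
    "a{" * 25,    # letter + special-character combination
]

for trigger in triggers:
    inputs = tokenizer(trigger, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=128,
            do_sample=False,  # greedy decoding to surface memorized text
            pad_token_id=tokenizer.eos_token_id,
        )
    # Keep only the newly generated tokens, dropping the prompt.
    continuation = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(f"trigger={trigger[:10]!r}... -> {continuation[:80]!r}")
```

A fuller experiment would compare these continuations against a baseline of simple repeated ordinary tokens and check outputs for verbatim matches against known corpora; this snippet only surfaces raw continuations for manual inspection.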

Subjects: cs.CR; cs.AI; cs.CL; cs.LG
