May 3, 2023, 1:10 a.m. | Aly M. Kassem

cs.CR updates on arXiv.org arxiv.org

Large language models (LLMs) are trained on large amounts of data, which can
include sensitive information that may compromise personal privacy. LLMs have
been shown to memorize parts of their training data and to emit that data
verbatim when an adversary crafts appropriate prompts. Previous research has
primarily focused on data preprocessing and differential privacy techniques to
address memorization, or on preventing verbatim memorization exclusively, which
can give a false sense of privacy. However, these methods rely on explicit and
implicit assumptions about the …
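The extraction phenomenon described above is typically probed by prompting a model with a prefix believed to appear in its training data and checking whether the greedy continuation reproduces the known suffix verbatim. Below is a minimal sketch of such a probe (not taken from the paper); the model name, the prefix, and the suffix strings are illustrative assumptions.

```python
# Minimal memorization probe: prompt with a suspected training prefix and
# compare the greedy continuation against the known suffix.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumed model; swap in the LLM under study
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical training-set snippet, split into a prompt prefix and the
# suffix we suspect the model has memorized.
prefix = "Contact John Doe at"
known_suffix = " john.doe@example.com"

inputs = tokenizer(prefix, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=len(tokenizer(known_suffix)["input_ids"]),
    do_sample=False,  # greedy decoding, the usual setting for extraction probes
)
continuation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)

# A verbatim match between continuation and suffix is taken as evidence of
# memorization for this prefix.
print("memorized" if continuation.strip() == known_suffix.strip() else "not memorized")
```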

