April 29, 2024, 4:11 a.m. | Kongyang Chen, Zixin Wang, Bing Mi, Waixi Liu, Shaowei Wang, Xiaojun Ren, Jiaxing Shen

cs.CR updates on arXiv.org

arXiv:2404.16841v1 Announce Type: new
Abstract: Recently, large language models (LLMs) have emerged as a notable research area, attracting significant attention for their ability to automatically generate intelligent content across various application domains. However, LLMs still suffer from significant security and privacy issues. For example, an LLM may leak private user information under hacking attacks or targeted prompts. To address this problem, this paper introduces a novel machine unlearning framework for LLMs. Our objectives are to make LLMs not produce harmful, hallucinatory, or privacy-compromising …
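The abstract is cut off before any method details, so the paper's actual framework is unknown here. A widely used baseline for LLM unlearning, however, is gradient ascent on a "forget" set: training steps that raise (rather than lower) the loss on the examples to be removed, pushing down the model's likelihood of reproducing them. The sketch below illustrates only that general idea, assuming PyTorch and Hugging Face Transformers; the model name, forget texts, and hyperparameters are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of gradient-ascent unlearning on a causal LM.
# Not the paper's method: model, data, and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model; the paper's LLM is not stated
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical forget set: sequences the model should no longer reproduce.
forget_texts = [
    "Alice Example's home address is 123 Main Street.",
    "Bob Example's credit card number is 4111 1111 1111 1111.",
]

for epoch in range(3):
    for text in forget_texts:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM loss: labels are the input ids (shifted internally).
        outputs = model(**batch, labels=batch["input_ids"])
        # Negate the loss so the optimizer performs gradient *ascent*,
        # decreasing the model's likelihood of the forget examples.
        loss = -outputs.loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

In practice such gradient-ascent updates are usually paired with a retain-set term or a KL penalty against the original model, since unconstrained ascent degrades general capability; the sketch omits that for brevity.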

