PreCurious: How Innocent Pre-Trained Language Models Turn into Privacy Traps
March 15, 2024, 4:10 a.m. | Ruixuan Liu, Tianhao Wang, Yang Cao, Li Xiong
cs.CR updates on arXiv.org
Abstract: The pre-training and fine-tuning paradigm has demonstrated its effectiveness and has become the standard approach for tailoring language models to various tasks. Currently, community-based platforms offer easy access to various pre-trained models, as anyone can publish without a strict validation process. However, a released pre-trained model, if carefully designed, can be a privacy trap for fine-tuning datasets. In this work, we propose the PreCurious framework to reveal the new attack surface where the attacker releases …
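To make the attack surface concrete, here is a minimal, purely illustrative sketch of the pre-train-then-fine-tune workflow the abstract describes (this is not the paper's PreCurious method; all names, the toy linear model, and the hyperparameters are hypothetical). The key point is that the fine-tuner starts from weights published by a third party and cannot tell from the weights alone whether they were chosen adversarially.

```python
# Illustrative sketch of the pre-train -> fine-tune paradigm.
# NOT the PreCurious attack itself: just the workflow that creates
# the attack surface, on a toy linear model y = w*x + b.

def fine_tune(weights, private_data, lr=0.1, epochs=50):
    """SGD fine-tuning on a user's private dataset, starting from
    weights downloaded from a (possibly untrusted) model hub."""
    w, b = weights
    for _ in range(epochs):
        for x, y in private_data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Weights "published" on a community platform: the downloader has no
# way to validate how this initialization was constructed.
published_weights = (0.0, 0.0)

# The fine-tuner's private dataset (here generated by y = 2x + 1).
private_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

w, b = fine_tune(published_weights, private_data)
```

In the real setting the published artifact is a large language model checkpoint rather than two scalars, which is exactly why an adversarially pre-trained initialization can later leak information about the private fine-tuning data.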