TMI! Finetuned Models Leak Private Information from their Pretraining Data
March 22, 2024, 4:11 a.m. | John Abascal, Stanley Wu, Alina Oprea, Jonathan Ullman
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Transfer learning has become an increasingly popular technique in machine learning as a way to leverage a pretrained model trained for one task to assist with building a finetuned model for a related task. This paradigm has been especially popular for privacy in machine learning, where the pretrained model is considered public, and only the data for finetuning is considered sensitive. However, there are reasons to believe that the data used for pretraining is still …
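The setup the abstract describes is the standard finetuning recipe: a publicly pretrained backbone is frozen, and only a small head is trained on the sensitive task data. Below is a minimal PyTorch sketch of that paradigm, with a toy MLP standing in for a real pretrained model; it illustrates the privacy setting only, not the paper's attack, and all names and data are hypothetical.

import torch
import torch.nn as nn

# "Public" pretrained backbone (in practice a model pretrained on a large
# corpus; here a toy MLP as a hypothetical stand-in).
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64))
backbone.requires_grad_(False)  # freeze: pretraining weights stay fixed

# New task head, the only part finetuned on the sensitive dataset.
head = nn.Linear(64, 2)
model = nn.Sequential(backbone, head)

# Hypothetical sensitive finetuning data.
x_sensitive = torch.randn(128, 32)
y_sensitive = torch.randint(0, 2, (128,))

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):  # a few finetuning steps
    opt.zero_grad()
    loss = loss_fn(model(x_sensitive), y_sensitive)
    loss.backward()
    opt.step()

# The paper's concern: even though only `head` was trained on the sensitive
# data, queries to the finetuned model can still reveal information about
# the *pretraining* data, which this paradigm treats as non-sensitive.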
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
1 day, 10 hours ago | arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
1 day, 10 hours ago | arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
1 day, 10 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Open-Source Intelligence (OSINT) Policy Analyst (TS/SCI)
@ WWC Global | Reston, Virginia, United States
Security Architect (DevSecOps)
@ EUROPEAN DYNAMICS | Brussels, Brussels, Belgium
Infrastructure Security Architect
@ Ørsted | Kuala Lumpur, MY
Contract Penetration Tester
@ Evolve Security | United States - Remote
Senior Penetration Tester
@ DigitalOcean | Canada