IDT: Dual-Task Adversarial Attacks for Privacy Protection
July 1, 2024, 4:14 a.m. | Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras
cs.CR updates on arXiv.org
Abstract: Natural language processing (NLP) models may leak private information in different ways, including membership inference, reconstruction, or attribute inference attacks. Sensitive information may not be explicit in the text, but hidden in underlying writing characteristics. Methods to protect privacy can involve using representations inside models that are demonstrated not to detect sensitive attributes, or -- for instance, in cases where users might not trust a model, the sort of scenario of interest here -- changing …
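As background for the membership inference threat the abstract mentions, a minimal sketch of the classic loss-threshold attack is shown below. This is not the paper's IDT method; it only illustrates the attack model being defended against. The function name, threshold value, and toy loss arrays are all illustrative assumptions: the intuition is that samples seen during training tend to incur lower model loss, so an attacker can guess membership by thresholding the loss.

```python
import numpy as np

def membership_inference(losses, threshold):
    """Predict membership: 1 if the model's loss on a sample is below
    the threshold (likely a training member), else 0 (likely not)."""
    return (np.asarray(losses) < threshold).astype(int)

# Toy example: training members typically show lower loss than held-out data.
member_losses = np.array([0.05, 0.10, 0.20])     # hypothetical training-set losses
nonmember_losses = np.array([0.90, 1.20, 0.75])  # hypothetical held-out losses

preds_members = membership_inference(member_losses, threshold=0.5)
preds_nonmembers = membership_inference(nonmember_losses, threshold=0.5)
```

In this toy setup the attack separates the two groups perfectly; in practice the threshold must be calibrated, and defenses (such as privacy-preserving representations) aim to shrink the loss gap the attacker relies on.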