Unlearning Protected User Attributes in Recommendations with Adversarial Training. (arXiv:2206.04500v1 [cs.IR])
June 10, 2022, 1:20 a.m. | Christian Ganhör, David Penz, Navid Rekabsaz, Oleg Lesota, Markus Schedl
cs.CR updates on arXiv.org (arxiv.org)
Collaborative filtering algorithms capture underlying consumption patterns,
including ones specific to particular demographics or protected user
information, e.g., gender, race, and location. These encoded biases can push a
recommendation system (RS) toward further separating the content served to
different demographic subgroups, and they raise privacy concerns about the
disclosure of users' protected attributes. In this work, we investigate the
possibility and challenges of removing specific protected information of users
from the learned interaction representations of …
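The abstract describes adversarially training a recommender so that its learned user representations stop encoding a protected attribute. A minimal sketch of that general idea, under illustrative assumptions (toy data, a matrix-factorization recommender, and a logistic probe as the adversary; none of the names, sizes, or hyperparameters come from the paper), could look like this: the embeddings descend the recommendation loss while receiving a reversed gradient from the adversary's attribute-prediction loss.

```python
# Hedged sketch of adversarial attribute unlearning (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 64, 32, 8

# Toy interactions correlated with a binary protected attribute (e.g. gender).
attr = rng.integers(0, 2, n_users).astype(float)
interactions = rng.random((n_users, n_items)) + 0.5 * attr[:, None]

U = rng.normal(0, 0.1, (n_users, dim))   # user embeddings
V = rng.normal(0, 0.1, (n_items, dim))   # item embeddings
w = np.zeros(dim)                        # adversary: logistic probe on U

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
lr, lam = 0.05, 1.0                      # lam weights the reversed gradient

mse0 = np.mean((U @ V.T - interactions) ** 2)
for _ in range(500):
    # 1) Recommendation objective: reconstruct the interaction matrix.
    err = U @ V.T - interactions
    grad_U = err @ V / n_items
    grad_V = err.T @ U / n_users

    # 2) Adversary descends its own loss (predict attr from U).
    p = sigmoid(U @ w)
    w -= lr * (U.T @ (p - attr)) / n_users

    # 3) Gradient reversal: embeddings ascend the adversary's loss
    #    while descending the recommendation loss.
    grad_U_adv = np.outer(p - attr, w) / n_users
    U -= lr * (grad_U - lam * grad_U_adv)
    V -= lr * grad_V
mse1 = np.mean((U @ V.T - interactions) ** 2)
# Reconstruction quality improves while the probe is pushed toward chance.
```

In full systems this reversal is usually implemented inside autograd (a gradient-reversal layer) rather than by hand, but the opposing update directions for the adversary and the embeddings are the core of the technique.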
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning (arxiv.org, 1 day, 8 hours ago)
Transferability Ranking of Adversarial Examples (arxiv.org, 1 day, 8 hours ago)
A survey on hardware-based malware detection approaches (arxiv.org, 1 day, 8 hours ago)
Explainable Ponzi Schemes Detection on Ethereum (arxiv.org, 1 day, 8 hours ago)
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification @ Deloitte | US and CA Multiple Locations
Information Security Engineers @ D. E. Shaw Research | New York City
Security Officer Level 1 (L1) @ NTT DATA | Virginia, United States of America
Apprenticeship - VOC Analyst - Cybersecurity - Île-De-France @ Sopra Steria | Courbevoie, France
Senior Security Researcher, SIEM @ Huntress | Remote US or Remote CAN
Cyber Security Engineer Lead @ ASSYSTEM | Bridgwater, United Kingdom