Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios. (arXiv:2204.13594v2 [cs.IR] UPDATED)
June 9, 2022, 1:20 a.m. | Dazhong Rong, Qinming He, Jianhai Chen
cs.CR updates on arXiv.org arxiv.org
Various attack methods against recommender systems have been proposed in recent
years, and the security issues of recommender systems have drawn considerable
attention. Traditional attacks attempt to get target items recommended to as
many users as possible by poisoning the training data. Benefiting from its
protection of users' private data, federated recommendation can effectively
defend against such attacks. Therefore, quite a few works have been devoted to
developing federated recommender systems. To prove that current federated
recommendation is still vulnerable, …
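The abstract's setup can be made concrete with a toy sketch (this is NOT the paper's attack; every name and number below is a hypothetical assumption). In federated matrix-factorization recommendation, user embeddings stay on devices and clients only send item-embedding updates to the server. An attacker who controls one client can craft its update to push a target item's embedding toward an estimated average user profile, raising that item's predicted score for everyone:

```python
# Hypothetical sketch of model poisoning in federated recommendation.
# Assumptions (not from the paper): FedAvg aggregation, a simple
# dot-product matrix-factorization model, and an attacker who can
# roughly approximate the average user embedding.
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 5, 4
target_item = 3

# Server-side global item embeddings; user embeddings never leave devices.
item_emb = rng.normal(size=(n_items, dim))

def benign_update(user_emb, rated, ratings, lr=0.1):
    """Honest client: one local SGD step on its own ratings."""
    delta = np.zeros_like(item_emb)
    for i, r in zip(rated, ratings):
        err = r - user_emb @ item_emb[i]
        delta[i] = lr * err * user_emb
    return delta

def poisoned_update(approx_user_emb, boost=5.0):
    """Malicious client: push the target item's embedding toward an
    (approximated) average user vector so its predicted score rises."""
    delta = np.zeros_like(item_emb)
    delta[target_item] = boost * approx_user_emb
    return delta

# One federated round: two honest clients plus one attacker.
users = [rng.normal(size=dim) for _ in range(2)]
updates = [benign_update(u, rated=[0, 1], ratings=[1.0, 0.5]) for u in users]
avg_user = np.mean(users, axis=0)       # attacker's crude profile estimate
updates.append(poisoned_update(avg_user))

before = np.mean([u @ item_emb[target_item] for u in users])
item_emb += np.mean(updates, axis=0)    # FedAvg-style aggregation
after = np.mean([u @ item_emb[target_item] for u in users])
print(before, after)
```

After the poisoned round, the target item's average predicted score increases by (boost/3)·||avg_user||², since the honest updates touch only items 0 and 1. Real attacks in the literature are subtler (they must evade norm clipping and anomaly detection), but the mechanism, a crafted update that survives averaging, is the same.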
More from arxiv.org / cs.CR updates on arXiv.org
One-shot Empirical Privacy Estimation for Federated Learning
1 day, 3 hours ago | arxiv.org
Transferability Ranking of Adversarial Examples
1 day, 3 hours ago | arxiv.org
A survey on hardware-based malware detection approaches
1 day, 3 hours ago | arxiv.org
Explainable Ponzi Schemes Detection on Ethereum
1 day, 3 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Information Security Engineers
@ D. E. Shaw Research | New York City
Cyber Security Architect - SR
@ ERCOT | Taylor, TX
SOC Analyst
@ Wix | Tel Aviv, Israel
Associate Director, SIEM & Detection Engineering (remote)
@ Humana | Remote US
Senior DevSecOps Architect
@ Computacenter | Birmingham, GB, B37 7YS