On the Conflict of Robustness and Learning in Collaborative Machine Learning
Feb. 22, 2024, 5:11 a.m. | Mathilde Raynal, Carmela Troncoso
cs.CR updates on arXiv.org
Abstract: Collaborative Machine Learning (CML) allows participants to jointly train a machine learning model while keeping their training data private. In scenarios where privacy is a strong requirement, such as health-related applications, safety is also a primary concern. This means that privacy-preserving CML processes must produce models that output correct and reliable decisions even in the presence of potentially untrusted participants. In response to this issue, researchers propose to use robust aggregators that rely on metrics …
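As a rough illustration of the robust-aggregator idea the abstract refers to (this is a generic sketch, not the aggregator studied in the paper): a coordinate-wise median bounds the influence a minority of untrusted participants can exert on the aggregated model update, whereas a plain mean can be shifted arbitrarily far by a single malicious submission.

```python
from statistics import median

def mean_aggregate(updates):
    """FedAvg-style mean: one extreme update can move the result arbitrarily."""
    n = len(updates)
    return [sum(coords) / n for coords in zip(*updates)]

def median_aggregate(updates):
    """Coordinate-wise median: robust to a minority of arbitrary updates."""
    return [median(coords) for coords in zip(*updates)]

# Three honest participants report similar gradients; one attacker
# submits an extreme update (hypothetical numbers for illustration).
honest = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
attacker = [[100.0, -100.0]]
updates = honest + attacker

print(mean_aggregate(updates))    # pulled far away by the attacker
print(median_aggregate(updates))  # stays near the honest consensus
```

The trade-off the paper's title points at: robustness metrics like this filter or down-weight outlying contributions, which can also suppress legitimately unusual (but informative) updates and thus interfere with learning.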