Feb. 10, 2023, 2:10 a.m. | Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

cs.CR updates on arXiv.org (arxiv.org)

The ubiquity of distributed machine learning (ML) in sensitive public-domain
applications calls for algorithms that protect data privacy while remaining robust
to faults and adversarial behavior. Although privacy and robustness have been
extensively studied independently in distributed ML, their synthesis remains
poorly understood. We present the first tight analysis of the error incurred by
any algorithm ensuring robustness against a fraction of adversarial machines,
as well as differential privacy (DP) for honest machines' data against any
other curious entity. …
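To make the setting the abstract describes concrete, here is a minimal illustrative sketch (not the paper's algorithm) of one round of distributed learning that combines per-worker DP noise with a robust aggregation rule at the server. The function names, clipping bound, noise level, and the choice of coordinate-wise trimmed mean as the robust aggregator are all assumptions for illustration.

```python
# Illustrative sketch only: combines Gaussian-mechanism-style DP at each honest
# worker with a standard robust aggregator (coordinate-wise trimmed mean) at the
# server. All names and constants are assumed for the example, not taken from the paper.
import numpy as np

def private_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a worker's gradient and add Gaussian noise before sending it."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

def trimmed_mean(updates, trim_frac=0.2):
    """Coordinate-wise trimmed mean: drop the largest and smallest values per
    coordinate to tolerate a bounded fraction of adversarial workers."""
    stacked = np.sort(np.stack(updates), axis=0)
    k = int(trim_frac * len(updates))
    kept = stacked[k:len(updates) - k] if k > 0 else stacked
    return kept.mean(axis=0)

# One aggregation round: honest workers send noised gradients,
# adversarial workers send arbitrary vectors.
rng = np.random.default_rng(0)
dim = 10
honest = [private_gradient(rng.normal(size=dim), rng=rng) for _ in range(8)]
byzantine = [100.0 * np.ones(dim) for _ in range(2)]  # adversarial updates
print(trimmed_mean(honest + byzantine))
```

The point of the sketch is only to show the two mechanisms the abstract brings together: noise added for privacy at honest machines, and an aggregation step at the server designed to bound the influence of adversarial machines. How tightly the resulting error can be characterized when both are required is the question the paper addresses.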

