Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning
April 30, 2024, 4:11 a.m. | Ali Reza Ghavamipour, Benjamin Zi Hao Zhao, Fatih Turkmen
cs.CR updates on arXiv.org arxiv.org
Abstract: Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation, enhancing scalability and efficiency. However, DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks. By forgoing central bottlenecks, DL demands privacy-preserving aggregation methods to protect data from honest-but-curious clients and adversaries, maintaining network-wide privacy. Privacy-preserving DL faces the additional hurdle of client dropout: clients may fail to submit updates due to connectivity problems …
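The core idea behind the privacy-preserving aggregation the abstract describes can be illustrated with pairwise additive masking, a standard building block of secure aggregation protocols. The sketch below is illustrative, not the paper's actual protocol: the `seed_fn` toy key-sharing, the modulus, and all function names are assumptions. Each pair of clients derives a shared mask; one adds it and the other subtracts it, so the masks cancel in the network-wide sum while no individual update is ever exposed. (Dropout resilience, the paper's focus, would additionally require recovering the masks of clients that fail to submit; that machinery is omitted here.)

```python
# Minimal sketch of pairwise additive masking for privacy-preserving
# aggregation. Assumed/toy elements: seed_fn stands in for a real pairwise
# key agreement (e.g. Diffie-Hellman); updates are small integer vectors.
import random

MODULUS = 2**32  # arithmetic is done modulo a fixed ring size


def pairwise_mask(client_id, peers, update, seed_fn, modulus=MODULUS):
    """Mask a client's update so that summing all peers' masked updates
    cancels every mask. seed_fn(i, j) must return the same seed for
    both clients of a pair; the lower id adds the mask, the higher
    subtracts it."""
    masked = [x % modulus for x in update]
    for peer in peers:
        if peer == client_id:
            continue
        rng = random.Random(seed_fn(client_id, peer))
        mask = [rng.randrange(modulus) for _ in update]
        sign = 1 if client_id < peer else -1
        masked = [(m + sign * r) % modulus for m, r in zip(masked, mask)]
    return masked


def aggregate(masked_updates, modulus=MODULUS):
    """Sum masked updates coordinate-wise; pairwise masks cancel,
    leaving only the true aggregate."""
    total = [0] * len(masked_updates[0])
    for upd in masked_updates:
        total = [(t + u) % modulus for t, u in zip(total, upd)]
    return total


# Demo with 3 clients. The shared seed is derived from the sorted id pair,
# a toy stand-in for a cryptographic key exchange.
def seed_fn(i, j):
    return hash(tuple(sorted((i, j))))


clients = [0, 1, 2]
updates = {0: [1, 2], 1: [3, 4], 2: [5, 6]}
masked = [pairwise_mask(c, clients, updates[c], seed_fn) for c in clients]
print(aggregate(masked))  # [9, 12]: the correct sum, yet no single masked
                          # update reveals its client's raw values
```

Because each mask appears once with `+` and once with `-`, the aggregate equals the plain sum of updates; an honest-but-curious observer seeing any one masked vector learns nothing about the underlying update.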