Privacy-Preserving, Dropout-Resilient Aggregation in Decentralized Learning
April 30, 2024, 4:11 a.m. | Ali Reza Ghavamipour, Benjamin Zi Hao Zhao, Fatih Turkmen
cs.CR updates on arXiv.org arxiv.org
Abstract: Decentralized learning (DL) offers a novel paradigm in machine learning by distributing training across clients without central aggregation, enhancing scalability and efficiency. However, DL's peer-to-peer model raises challenges in protecting against inference attacks and privacy leaks. By forgoing central bottlenecks, DL demands privacy-preserving aggregation methods to protect data from 'honest but curious' clients and adversaries, maintaining network-wide privacy. Privacy-preserving DL faces the additional hurdle of client dropout, i.e., clients not submitting updates due to connectivity problems …
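The abstract is cut off before the paper's own construction, so the sketch below is only a generic illustration of pairwise-masked secure aggregation with dropout recovery (in the spirit of Bonawitz et al.), not the authors' protocol. The direct seed exchange and the recovery step stand in for key agreement and threshold secret sharing, and the sum is computed in one place purely to keep the example short; in the decentralized setting each client would combine its neighbors' masked updates itself.

```python
# Minimal sketch (assumed, not the paper's method): pairwise additive masks
# that cancel in the aggregate, plus removal of leftover masks when a client
# drops out after masking but before submitting its update.
import random

MOD = 2**32          # all arithmetic is done modulo 2**32
DIM = 4              # length of each model-update vector

def prg(seed, dim=DIM):
    """Expand a shared seed into a deterministic mask vector."""
    rng = random.Random(seed)
    return [rng.randrange(MOD) for _ in range(dim)]

def mask_update(client_id, update, pairwise_seeds):
    """Add +PRG(seed) for higher-id peers and -PRG(seed) for lower-id peers,
    so the masks cancel when every client's masked update is summed."""
    masked = list(update)
    for peer, seed in pairwise_seeds[client_id].items():
        sign = 1 if client_id < peer else -1
        for k, m in enumerate(prg(seed)):
            masked[k] = (masked[k] + sign * m) % MOD
    return masked

# --- toy run with 3 clients; client 2 drops out after masking -----------
clients = [0, 1, 2]
updates = {c: [random.randrange(1000) for _ in range(DIM)] for c in clients}

# Pairwise seeds (in practice derived via a key-agreement protocol).
seeds = {c: {} for c in clients}
for i in clients:
    for j in clients:
        if i < j:
            s = random.randrange(MOD)
            seeds[i][j] = s
            seeds[j][i] = s

masked = {c: mask_update(c, updates[c], seeds) for c in clients}

survivors, dropped = [0, 1], [2]       # client 2 never submits its update
agg = [0] * DIM
for c in survivors:
    for k in range(DIM):
        agg[k] = (agg[k] + masked[c][k]) % MOD

# Dropout recovery: survivors disclose their seed shared with the dropped
# client (via secret-share reconstruction in a real protocol), and the
# leftover masks are stripped from the aggregate.
for c in survivors:
    for d in dropped:
        sign = 1 if c < d else -1
        for k, m in enumerate(prg(seeds[c][d])):
            agg[k] = (agg[k] - sign * m) % MOD

expected = [(updates[0][k] + updates[1][k]) % MOD for k in range(DIM)]
assert agg == expected
print("aggregate of surviving clients:", agg)
```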
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Cyber Crime Student Internship
@ West Midlands Police | Birmingham, West Midlands, United Kingdom
Cyber Security Engineer (Junior/Journeyman)
@ CSEngineering | El Segundo, CA 90245, USA
Application Security Lead
@ Tokio Marine HCC | United Kingdom