Nov. 20, 2023, 2:10 a.m. | Sheldon C. Ebron Jr., Kan Yang

cs.CR updates on arXiv.org (arxiv.org)

Federated Learning (FL) enables collaborative machine learning model training
across multiple parties without sharing raw data. However, FL's distributed
nature allows malicious clients to impact model training through Byzantine or
backdoor attacks that submit erroneous model updates. Existing defenses measure
the deviation of each update from a 'ground-truth model update.' They often
either rely on a benign root dataset on the server or use the trimmed mean or
median for clipping, and both approaches have limitations.
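To make the clipping-style defense mentioned above concrete, here is a minimal sketch of coordinate-wise trimmed-mean aggregation over client updates. This illustrates the general technique only, not FedTruth's method; the function name, trim_ratio parameter, and the synthetic numpy data are assumptions introduced for illustration.

# Minimal sketch of coordinate-wise trimmed-mean aggregation, one of the
# clipping-style defenses the abstract mentions. Names and data are
# illustrative assumptions, not the paper's actual method.
import numpy as np

def trimmed_mean_aggregate(updates: np.ndarray, trim_ratio: float = 0.1) -> np.ndarray:
    """Aggregate client updates of shape [n_clients, n_params] by discarding
    the smallest and largest trim_ratio fraction of values per coordinate,
    then averaging the values that remain."""
    n_clients = updates.shape[0]
    k = int(n_clients * trim_ratio)  # number of extremes dropped on each side
    # Sort each coordinate independently across clients.
    sorted_updates = np.sort(updates, axis=0)
    # Drop the k smallest and k largest values per coordinate, then average.
    trimmed = sorted_updates[k:n_clients - k]
    return trimmed.mean(axis=0)

# Usage: 10 clients with a 5-parameter model; 2 clients submit erroneous
# (Byzantine) updates far from the honest distribution.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 5))
byzantine = rng.normal(10.0, 0.1, size=(2, 5))
all_updates = np.vstack([honest, byzantine])
print(trimmed_mean_aggregate(all_updates, trim_ratio=0.2))

With trim_ratio=0.2, the two most extreme values per coordinate are dropped on each side, so the Byzantine updates are excluded from the mean. The limitation the abstract alludes to is visible here: trimming discards information from honest clients as well, and the trim fraction must be chosen without knowing how many clients are malicious.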


We introduce FedTruth, a robust defense against model …
