Nov. 20, 2023, 2:10 a.m. | Sheldon C. Ebron Jr., Kan Yang

cs.CR updates on arXiv.org arxiv.org

Federated Learning (FL) enables collaborative machine learning model training
across multiple parties without sharing raw data. However, FL's distributed
nature allows malicious clients to impact model training through Byzantine or
backdoor attacks, in which they submit erroneous model updates. Existing
defenses measure the deviation of each update from a 'ground-truth model
update': they either rely on a benign root dataset on the server or clip
updates with the trimmed mean or median, and both approaches have limitations.
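To make the clipping-based defenses concrete, here is a minimal sketch (not the paper's FedTruth method) of the coordinate-wise median and trimmed-mean aggregation rules it refers to, using NumPy and a hypothetical set of client updates:

```python
import numpy as np

def coordinate_median(updates):
    """Robust aggregation: coordinate-wise median across client updates."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: sort each coordinate across clients,
    drop the k smallest and k largest values, and average the rest."""
    updates = np.sort(updates, axis=0)
    k = int(len(updates) * trim_ratio)
    return updates[k:len(updates) - k].mean(axis=0)

# Hypothetical example: five clients' updates for a 3-parameter model;
# the last client is Byzantine and submits an extreme update.
updates = np.array([
    [0.9,  1.1,  1.0],
    [1.0,  0.9,  1.1],
    [1.1,  1.0,  0.9],
    [1.0,  1.0,  1.0],
    [50.0, -50.0, 50.0],   # malicious outlier
])

print(coordinate_median(updates))      # → [1. 1. 1.], unaffected by the outlier
print(trimmed_mean(updates, 0.2))      # outlier trimmed away before averaging
```

A plain mean of these updates would be pulled far off by the malicious client; the median and trimmed mean discard the extreme coordinates, which is why such rules are the standard baseline the paper's defense is compared against.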


We introduce FedTruth, a robust defense against model …
