Sept. 20, 2023, 1:10 a.m. | Zizhen Liu, Weiyang He, Chip-Hong Chang, Jing Ye, Huawei Li, Xiaowei Li

cs.CR updates on arXiv.org

While federated learning (FL) is attractive for pooling privacy-preserving
distributed training data, the credibility of participating clients and the
non-inspectability of their data pose new security threats, among which poisoning
attacks are particularly rampant and hard to defend against without compromising
privacy, performance, or other desirable properties of FL. To tackle this problem,
we propose a self-purified FL (SPFL) method that enables benign clients to exploit
trusted historical features of a locally purified model to supervise the training
of the aggregated model in each iteration. The …
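To make the core idea concrete, below is a minimal, illustrative PyTorch sketch of what a benign client's update might look like: the client keeps a locally trained ("purified") model whose intermediate features act as trusted supervision for the freshly aggregated global model, alongside the ordinary task loss. The model architecture, the function names, and the specific feature-matching (MSE) loss are assumptions for illustration only; they are not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy model; the abstract does not specify an architecture.
class SmallNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        feat = self.backbone(x)          # intermediate features
        return self.head(feat), feat


def client_self_purified_step(aggregated_model, local_model, batch, labels,
                              optimizer, distill_weight=1.0):
    """One benign-client update (illustrative): features of the locally
    trained, trusted model supervise the aggregated model, in addition
    to the usual task loss. Not the paper's exact loss."""
    aggregated_model.train()
    local_model.eval()

    logits, feat = aggregated_model(batch)
    with torch.no_grad():
        _, trusted_feat = local_model(batch)   # "trusted historical" features

    task_loss = F.cross_entropy(logits, labels)
    purify_loss = F.mse_loss(feat, trusted_feat)  # feature-level supervision
    loss = task_loss + distill_weight * purify_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    aggregated, local = SmallNet(), SmallNet()
    opt = torch.optim.SGD(aggregated.parameters(), lr=0.1)
    x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
    print(client_self_purified_step(aggregated, local, x, y, opt))
```

The design intuition, as described in the abstract, is that a potentially poisoned aggregated model is corrected locally each round using supervision the client itself can trust, so no privacy-sensitive inspection of other clients' data or updates is required.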

