June 1, 2023, 1:10 a.m. | Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein

cs.CR updates on arXiv.org arxiv.org

A central tenet of federated learning (FL), which trains models without
centralizing user data, is privacy. However, previous work has shown that the
gradient updates used in FL can leak user information. While most
industrial uses of FL are for text applications (e.g., keystroke prediction),
nearly all attacks on FL privacy have focused on simple image classifiers. We
propose a novel attack that reveals private user text by deploying malicious
parameter vectors, and which succeeds even with mini-batches, multiple …
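The claim that gradient updates can leak user data has a simple, well-known illustration (this is a toy sketch, not the paper's attack): for a single example passed through a linear layer, the weight gradient is a rank-one outer product, so any nonzero row is proportional to the private input.

```python
import numpy as np

# Toy leakage demo: for y = W x and loss L, dL/dW = g x^T with g = dL/dy.
# Each nonzero row of the client's gradient update is therefore a scalar
# multiple of the private input x, which the server can read off directly.
rng = np.random.default_rng(0)
x = rng.normal(size=5)          # private user input held by the client
W = rng.normal(size=(3, 5))     # shared model weights
y = W @ x
g = y - np.ones(3)              # gradient of 0.5 * ||y - 1||^2 w.r.t. y
grad_W = np.outer(g, x)         # the update the client would send in FL

# The server recovers the direction of x from any nonzero gradient row.
row = grad_W[0]
cos = abs(row @ x) / (np.linalg.norm(row) * np.linalg.norm(x))
print(cos)  # rows of grad_W are exactly parallel to x
```

Averaging over mini-batches blurs this rank-one structure, which is why attacks such as the one in this paper need stronger tools (here, maliciously chosen parameter vectors) to separate individual user sequences.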

