Aug. 25, 2022, 1:20 a.m. | Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul Thompson, José Luis Ambite

cs.CR updates on arXiv.org

Federated training of large deep neural networks is often constrained by
communication: the cost of exchanging model updates grows with model size.
Various model pruning techniques have been designed in centralized settings
to reduce inference times. Combining centralized pruning with federated
training is an intuitive way to reduce communication costs: prune the model
parameters right before the communication step. Moreover, such a progressive
model pruning approach during training can also reduce training times and
costs. To this end, we …
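The abstract is cut off before the method itself, but the communication-side pruning idea it describes can be made concrete. Below is a minimal sketch, not the authors' actual algorithm, of one FedAvg-style round in which each client magnitude-prunes its update just before uploading; the names `magnitude_prune` and `federated_round` and the 90% sparsity level are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(update: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries of an update.

    Keeps roughly a (1 - sparsity) fraction of the entries; in a real
    system only the surviving values (and their indices) would be sent.
    """
    k = int(update.size * sparsity)  # number of entries to drop
    if k == 0:
        return update
    # k-th smallest absolute value becomes the pruning threshold.
    threshold = np.partition(np.abs(update).ravel(), k - 1)[k - 1]
    pruned = update.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

def federated_round(global_weights, client_grads, sparsity=0.9):
    """One FedAvg-style round: clients prune updates before uploading,
    the server averages the sparse updates and applies them."""
    pruned_updates = [magnitude_prune(g, sparsity) for g in client_grads]
    return global_weights - np.mean(pruned_updates, axis=0)

# Hypothetical usage: 4 clients, a 1000-parameter model, 90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
grads = [rng.normal(size=1000) for _ in range(4)]
w = federated_round(w, grads, sparsity=0.9)
```

Since only the non-zero entries of each pruned update would need to be transmitted, upload cost shrinks roughly in proportion to the chosen sparsity level.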
