all InfoSec news
Towards Sparsified Federated Neuroimaging Models via Weight Pruning. (arXiv:2208.11669v1 [cs.LG])
Aug. 25, 2022, 1:20 a.m. | Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul Thompson, José Luis Ambite
cs.CR updates on arXiv.org arxiv.org
Federated training of large deep neural networks is often constrained by the cost of communicating model updates, which grows with model size. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning techniques with federated training is an intuitive way to reduce communication costs -- by pruning the model parameters right before the communication step. Moreover, such a progressive model pruning approach during training can also reduce training times/costs. To this end, we …
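The idea of pruning parameters right before the communication step can be illustrated with a minimal sketch. The magnitude-based criterion, the `sparsity` level, and the simple server-side averaging below are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the entries."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# One simulated federated round: each client prunes its local update
# before "sending" it, so only the surviving entries need transmitting.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=(4, 4)) for _ in range(3)]
sparse_updates = [magnitude_prune(u, sparsity=0.75) for u in client_updates]
global_update = np.mean(sparse_updates, axis=0)  # server-side averaging
```

With 75% sparsity, each 16-entry update retains only 4 nonzero values, shrinking the payload each client communicates per round.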
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Senior Software Engineer, Security
@ Niantic | Zürich, Switzerland
Expert Consultant in Industrial Systems Security (M/F)
@ Devoteam | Levallois-Perret, France
Cybersecurity Analyst
@ Bally's | Providence, Rhode Island, United States
Digital Trust Cyber Defense Executive
@ KPMG India | Gurgaon, Haryana, India
Program Manager - Cybersecurity Assessment Services
@ TestPros | Remote (and DMV), DC