all InfoSec news
Visual Transformer Meets CutMix for Improved Accuracy, Communication Efficiency, and Data Privacy in Split Learning. (arXiv:2207.00234v1 [cs.LG])
July 4, 2022, 1:20 a.m. | Sihun Baek, Jihong Park, Praneeth Vepakomma, Ramesh Raskar, Mehdi Bennis, Seong-Lyun Kim
cs.CR updates on arXiv.org arxiv.org
This article seeks a distributed learning solution for visual
transformer (ViT) architectures. Compared to convolutional neural network (CNN)
architectures, ViTs often have larger model sizes and are computationally
expensive, making federated learning (FL) ill-suited. Split learning (SL) can
sidestep this problem by splitting a model and communicating the hidden
representations at the split layer, also known as smashed data.
Notwithstanding, the smashed data of ViT are as large as, and as similar to, the
input data, negating the communication …
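The split-learning setup the abstract describes can be sketched in a few lines: the client holds the lower layers and sends only the split-layer activations ("smashed data") to the server, which runs the remaining layers. This is a minimal illustrative sketch, not the paper's method; the class names, layer sizes, and the single-matrix "halves" are all hypothetical simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class ClientHalf:
    """Lower layers: raw input -> smashed data at the split layer."""
    def __init__(self, d_in, d_hidden):
        self.W = rng.standard_normal((d_in, d_hidden)) * 0.1

    def forward(self, x):
        # Only this output crosses the network; raw x never leaves the client.
        return relu(x @ self.W)

class ServerHalf:
    """Upper layers: smashed data -> prediction."""
    def __init__(self, d_hidden, d_out):
        self.W = rng.standard_normal((d_hidden, d_out)) * 0.1

    def forward(self, smashed):
        return smashed @ self.W

client = ClientHalf(d_in=32, d_hidden=8)
server = ServerHalf(d_hidden=8, d_out=2)

x = rng.standard_normal((4, 32))   # a batch of 4 raw samples, client-side
smashed = client.forward(x)        # smashed data: the only thing transmitted
pred = server.forward(smashed)
print(smashed.shape, pred.shape)
```

The abstract's point is visible in the shapes: here the split-layer output (4, 8) is much smaller than the input (4, 32), which is what makes SL communication-efficient for CNNs; for ViTs the smashed data stay roughly input-sized, which is the problem the paper targets.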
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Regional Leader, Cyber Crisis Communications
@ Google | United Kingdom
Regional Intelligence Manager, Compliance, Safety and Risk Management
@ Google | London, UK
Senior Analyst, Endpoint Security
@ Scotiabank | Toronto, ON, CA, M1K5L1
Software Engineer, Security/Privacy, Google Cloud
@ Google | Bengaluru, Karnataka, India
Senior Security Engineer
@ Coinbase | Remote - USA