Attacking Byzantine Robust Aggregation in High Dimensions
April 22, 2024, 4:11 a.m. | Sarthak Choudhary, Aashish Kolluri, Prateek Saxena
cs.CR updates on arXiv.org
Abstract: Training modern neural networks or models typically requires averaging over a sample of high-dimensional vectors. Poisoning attacks can skew or bias the average vectors used to train the model, forcing the model to learn specific patterns or avoid learning anything useful. Byzantine robust aggregation is a principled algorithmic defense against such biasing. Robust aggregators can bound the maximum bias in computing centrality statistics, such as the mean, even when some fraction of inputs are arbitrarily corrupted. …
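The abstract's central object, a robust aggregator, can be made concrete with a standard example. The sketch below implements a coordinate-wise trimmed mean in NumPy, one classic Byzantine robust aggregator; it illustrates the general technique the abstract describes, not necessarily the specific construction analyzed or attacked in the paper, and the function name and the eps parameter are illustrative.

```python
import numpy as np

def trimmed_mean(vectors: np.ndarray, eps: float) -> np.ndarray:
    """Coordinate-wise trimmed mean: a classic Byzantine robust aggregator.

    vectors: (n, d) array of n client updates in d dimensions.
    eps:     assumed upper bound on the fraction of corrupted inputs.
    Per coordinate, the floor(eps * n) smallest and largest values are
    discarded before averaging, which limits how far an eps-fraction of
    arbitrarily corrupted vectors can shift the estimate.
    """
    n, _ = vectors.shape
    k = int(eps * n)  # number of values trimmed from each tail
    # Sort each coordinate independently across clients, then drop the tails.
    sorted_vals = np.sort(vectors, axis=0)
    kept = sorted_vals[k : n - k] if k > 0 else sorted_vals
    return kept.mean(axis=0)

# Toy demo: 90 honest updates near the true mean, 10 poisoned outliers.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(90, 5))
poisoned = np.full((10, 5), 100.0)  # adversarial vectors far from the mean
updates = np.vstack([honest, poisoned])

print("plain mean:  ", updates.mean(axis=0))       # badly skewed by poisoning
print("trimmed mean:", trimmed_mean(updates, 0.15))  # stays close to 1.0
```

In this toy run the plain mean is dragged toward the poisoned vectors (roughly 10.9 per coordinate), while the trimmed mean remains near the honest value of 1.0; that bounded shift under corruption is the maximum-bias guarantee the abstract refers to.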