April 22, 2024, 4:11 a.m. | Sarthak Choudhary, Aashish Kolluri, Prateek Saxena

cs.CR updates on arXiv.org arxiv.org

arXiv:2312.14461v2 Announce Type: replace
Abstract: Training modern neural networks or models typically requires averaging over a sample of high-dimensional vectors. Poisoning attacks can skew or bias these averages, forcing the model to learn specific patterns or to learn nothing useful. Byzantine-robust aggregation is a principled algorithmic defense against such biasing. Robust aggregators can bound the maximum bias in computing centrality statistics, such as the mean, even when some fraction of the inputs are arbitrarily corrupted. …
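The excerpt does not say which robust aggregators the paper studies, so the sketch below is only an illustration of the general idea, not the authors' construction: a coordinate-wise trimmed mean, one standard Byzantine-robust aggregator. The function name trimmed_mean, the 10% trim fraction, and the toy data are assumptions made for the example.

    import numpy as np

    def trimmed_mean(updates: np.ndarray, trim_frac: float = 0.1) -> np.ndarray:
        """Illustrative coordinate-wise trimmed mean (not the paper's method).

        updates: array of shape (n, d), one row per contributed vector.
        trim_frac: fraction of extreme values dropped at each end, per coordinate.
        """
        n = updates.shape[0]
        k = int(trim_frac * n)          # values trimmed from each tail
        assert 2 * k < n, "trim fraction too large for sample size"
        sorted_updates = np.sort(updates, axis=0)  # sort each coordinate independently
        return sorted_updates[k:n - k].mean(axis=0)

    # Toy demo: 90 honest vectors near 0, 10 colluding corrupted vectors at 100.
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 1.0, size=(90, 5))
    poisoned = np.full((10, 5), 100.0)
    all_updates = np.vstack([honest, poisoned])

    print(np.mean(all_updates, axis=0))              # plain mean: skewed to ~10 per coordinate
    print(trimmed_mean(all_updates, trim_frac=0.1))  # trimmed mean: stays near 0

With 10% of inputs corrupted and a matching trim fraction, the sorted top slice of every coordinate contains exactly the poisoned values, so the corrupted rows cannot move the estimate arbitrarily far; this bounded-bias behavior is the property the abstract attributes to robust aggregators.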

