Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
May 13, 2024, 4:11 a.m. | Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong
cs.CR updates on arXiv.org arxiv.org
Abstract: In this paper, we study the robustness of "data-centric" approaches to finding neural network architectures (known as neural architecture search) to data distribution shifts. To audit this robustness, we present a data poisoning attack that, when injected into the training data used for architecture search, can prevent the victim algorithm from finding an architecture with optimal accuracy. We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal …
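The excerpt does not describe the paper's specific poisoning objective, but the general mechanism it builds on, injecting corrupted samples into a training set, can be sketched with a simple label-flipping example. This is an illustrative sketch only; `poison_labels` and its parameters are hypothetical and not the authors' method.

```python
import numpy as np

def poison_labels(labels, num_classes, rate, seed=0):
    """Flip a fraction `rate` of labels to random incorrect classes.

    A basic label-flipping poisoning scheme: an attacker who can tamper
    with a fraction of the training data replaces correct labels with
    wrong ones, degrading whatever is trained (or searched) on the set.
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    n_poison = int(rate * len(labels))
    # pick which training samples to corrupt
    idx = rng.choice(len(labels), size=n_poison, replace=False)
    for i in idx:
        # assign any class other than the original label
        wrong = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong)
    return poisoned, idx

clean = np.array([0, 1, 2, 0, 1, 2, 0, 1])
poisoned, flipped = poison_labels(clean, num_classes=3, rate=0.25)
```

In a NAS setting, the search algorithm would consume the poisoned set when evaluating candidate architectures, which is what lets such an attack steer it toward sub-optimal choices.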