May 13, 2024, 4:11 a.m. | Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong

cs.CR updates on arXiv.org arxiv.org

arXiv:2405.06073v1 Announce Type: cross
Abstract: In this paper, we study the robustness of "data-centric" approaches to finding neural network architectures (known as neural architecture search) to data distribution shifts. To audit this robustness, we present a data poisoning attack that, when injected into the training data used for architecture search, can prevent the victim algorithm from finding an architecture with optimal accuracy. We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal …
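The truncated abstract does not show how the poisoning samples are actually crafted, but the general setup — injecting corrupted samples into the training data consumed by an architecture search — can be illustrated with a minimal, generic sketch. The snippet below implements simple label flipping on a toy dataset; the function name, the toy data, and the flipping strategy are illustrative assumptions, not the paper's method.

```python
import random

def poison_labels(dataset, num_classes, poison_rate=0.1, seed=0):
    """Flip the labels of a fraction of (x, y) samples to simulate a
    simple dirty-label poisoning attack on search training data.
    NOTE: illustrative only; the paper's attack crafts samples against
    a specific search objective rather than flipping labels at random."""
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_poison = int(len(poisoned) * poison_rate)
    for idx in rng.sample(range(len(poisoned)), n_poison):
        x, y = poisoned[idx]
        # replace the true label with any other class
        y_new = rng.choice([c for c in range(num_classes) if c != y])
        poisoned[idx] = (x, y_new)
    return poisoned

# toy dataset: (feature, label) pairs with 3 classes
data = [(i, i % 3) for i in range(100)]
poisoned = poison_labels(data, num_classes=3, poison_rate=0.2)
changed = sum(1 for a, b in zip(data, poisoned) if a[1] != b[1])
```

A victim search algorithm trained on `poisoned` instead of `data` would then be evaluated on whether it still finds an architecture of comparable accuracy, which is the robustness question the paper audits.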

