May 13, 2024, 4:11 a.m. | Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong

arXiv:2405.06073v1 Announce Type: cross
Abstract: In this paper, we study the robustness of "data-centric" approaches to finding neural network architectures (known as neural architecture search) to data distribution shifts. To audit this robustness, we present a data poisoning attack that, when injected into the training data used for architecture search, can prevent the victim algorithm from finding an architecture with optimal accuracy. We first define the attack objective for crafting poisoning samples that can induce the victim to generate sub-optimal …
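The abstract describes injecting poisoning samples into the training data so that the search algorithm converges on a sub-optimal architecture. As a minimal, generic illustration of data poisoning (a label-flipping sketch, not the paper's specific attack objective, which is truncated above), an attacker might corrupt a fraction of training labels before the victim trains on them:

```python
import numpy as np

def poison_labels(labels, num_classes, rate, seed=None):
    """Flip a fraction `rate` of labels to a different random class.

    Generic label-flipping sketch: the attacker corrupts part of the
    training set so a learner (here, an architecture search algorithm)
    trained on it is steered toward worse solutions. The function name
    and interface are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    poisoned = labels.copy()
    n = len(labels)
    idx = rng.choice(n, size=int(rate * n), replace=False)
    for i in idx:
        # Replace the true label with any other class.
        other = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(other)
    return poisoned, idx

clean = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
poisoned, flipped = poison_labels(clean, num_classes=3, rate=0.3, seed=0)
print(len(flipped))  # 3 of 10 labels corrupted
```

The paper's actual attack crafts poisoning samples against a defined attack objective rather than flipping labels at random; this sketch only shows the threat model's entry point: tampering with the data the search algorithm consumes.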
