Nov. 16, 2022, 2:20 a.m. | Sanghyun Hong, Nicholas Carlini, Alexey Kurakin

cs.CR updates on arXiv.org

When machine learning training is outsourced to third parties, backdoor attacks become practical: the third party who trains the model may act maliciously and inject hidden behaviors into the otherwise accurate model. Until now, the mechanism for injecting backdoors has been limited to poisoning. We argue that a supply-chain attacker has more attack techniques available, and introduce a handcrafted attack that directly manipulates a model's weights. This direct modification gives our attacker more degrees of freedom compared to poisoning, …
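The sketch below is a minimal, hypothetical illustration of the general idea of editing weights directly rather than poisoning training data; it is not the paper's actual method. A single weight of a tiny linear classifier is bumped so that inputs carrying a trigger feature are steered toward an attacker-chosen class while clean inputs are largely unaffected. All names and values are illustrative assumptions.

```python
# Illustrative sketch only (not the authors' technique): inject a hidden
# behavior by directly editing one weight of an already-"trained" classifier.
import numpy as np

rng = np.random.default_rng(0)

n_features, n_classes = 16, 4
W = rng.normal(size=(n_features, n_classes))  # stands in for trained weights
b = np.zeros(n_classes)

def predict(x, weights):
    """Argmax class of a plain linear model."""
    return int(np.argmax(x @ weights + b))

# Handcrafted edit: pick a trigger coordinate that benign inputs rarely
# activate, and add a large weight from it to the attacker's target class.
trigger_idx, target_class = 15, 2
W_backdoored = W.copy()
W_backdoored[trigger_idx, target_class] += 50.0  # hidden behavior injected

# Clean input (trigger off): original and backdoored models tend to agree,
# so accuracy on benign data is preserved.
x_clean = rng.normal(size=n_features)
x_clean[trigger_idx] = 0.0
print(predict(x_clean, W), predict(x_clean, W_backdoored))

# Triggered input: setting the trigger feature flips the backdoored model
# to the target class regardless of the rest of the input.
x_triggered = x_clean.copy()
x_triggered[trigger_idx] = 1.0
print(predict(x_triggered, W), predict(x_triggered, W_backdoored))
```

In this toy setup the attacker needs no control over the training data at all, which is the extra freedom the abstract contrasts with poisoning.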

backdoors, networks, neural networks
