Web: http://arxiv.org/abs/2204.12281

April 27, 2022, 1:20 a.m. | Pengfei Xia, Ziqiang Li, Wei Zhang, Bin Li

cs.CR updates on arXiv.org

Recent studies have shown that deep neural networks are vulnerable to
backdoor attacks. Specifically, by mixing a small number of poisoned samples
into the training set, the behavior of the trained model can be maliciously
controlled. Existing attack methods construct these poisoned samples by
randomly selecting some clean data from the benign set and then embedding a
trigger into them. However, this selection strategy ignores the fact that each
poisoned sample contributes unequally to the backdoor injection, which reduces
the efficiency …
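As a rough illustration of the random-selection poisoning setup the abstract describes (not the authors' code or method), the sketch below picks a small fraction of training images at random, stamps a fixed trigger patch into a corner, and relabels them with an attacker-chosen target class. The dataset shapes, patch size, and trigger value are illustrative assumptions.

# Minimal sketch of baseline backdoor poisoning with random sample selection.
# Shapes and the trigger pattern are assumptions for illustration only.
import numpy as np

def poison_dataset(x_train, y_train, poison_rate=0.01, target_label=0,
                   trigger_value=1.0, patch_size=3, seed=0):
    """Randomly select a fraction of samples and embed a corner trigger."""
    rng = np.random.default_rng(seed)
    x_poisoned = x_train.copy()
    y_poisoned = y_train.copy()

    n_poison = int(len(x_train) * poison_rate)
    idx = rng.choice(len(x_train), size=n_poison, replace=False)

    # Stamp a solid patch in the bottom-right corner of each selected image
    # and flip its label to the attacker-chosen target class.
    x_poisoned[idx, -patch_size:, -patch_size:, :] = trigger_value
    y_poisoned[idx] = target_label
    return x_poisoned, y_poisoned, idx

if __name__ == "__main__":
    # Random data stands in for a real image dataset (e.g. 32x32 RGB images).
    x = np.random.rand(1000, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=1000)
    x_p, y_p, poisoned_idx = poison_dataset(x, y, poison_rate=0.01)
    print(f"Poisoned {len(poisoned_idx)} of {len(x)} samples")

The paper's contribution, as far as the truncated abstract indicates, is to replace this uniform random selection with a strategy that accounts for how much each poisoned sample actually contributes to backdoor injection.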

Tags: attacks, backdoor, data
