A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
April 2, 2024, 7:11 p.m. | Orson Mengara
cs.CR updates on arXiv.org
Abstract: Audio-based machine learning systems frequently rely on public or third-party data, which may be inaccurate. This exposes deep neural network (DNN) models trained on such data to data poisoning attacks, in which an attacker trains the DNN model on poisoned data and thereby degrades its performance. Another type of data poisoning attack that is highly relevant to our investigation is label flipping, in which the attacker manipulates the labels for a subset of …
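The label-flipping idea described above can be sketched in a few lines: given a labeled training set, an attacker reassigns the labels of a chosen fraction of samples to an attacker-selected target class while leaving the features untouched. This is a minimal illustrative sketch of generic label flipping, not the paper's specific backdoor method; the function name and parameters are hypothetical.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.1, target_label=0, seed=42):
    """Illustrative dirty label-flipping sketch (hypothetical helper):
    reassign a random subset of labels to an attacker-chosen target
    label, leaving the corresponding training features untouched."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    # Only samples that do not already carry the target label are candidates.
    candidates = np.flatnonzero(y != target_label)
    victims = rng.choice(candidates,
                         size=min(n_flip, len(candidates)),
                         replace=False)
    y_poisoned[victims] = target_label
    return y_poisoned, victims
```

A model trained on `y_poisoned` instead of `y` learns from mislabeled examples, which is the degradation mechanism the abstract refers to.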