A Backdoor Approach with Inverted Labels Using Dirty Label-Flipping Attacks
April 2, 2024, 7:11 p.m. | Orson Mengara
cs.CR updates on arXiv.org arxiv.org
Abstract: Audio-based machine learning systems frequently rely on public or third-party data, which may be inaccurate. This exposes deep neural network (DNN) models trained on such data to data poisoning attacks, in which an attacker trains the DNN model on poisoned data, potentially degrading its performance. Another type of data poisoning attack that is highly relevant to our investigation is label flipping, in which the attacker manipulates the labels for a subset of …
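The core label-flipping idea the abstract describes can be sketched in a few lines: select a random subset of training examples and reassign each one's label to some incorrect class. This is a minimal, generic illustration only, not the paper's actual backdoor method; the function name, the `flip_fraction` parameter, and the use of NumPy are all illustrative assumptions.

```python
import numpy as np

def flip_labels(labels, flip_fraction, num_classes, rng=None):
    """Dirty label-flipping sketch (illustrative, not the paper's method):
    reassign a random subset of labels to a different, incorrect class."""
    rng = rng or np.random.default_rng(0)
    poisoned = labels.copy()
    n = len(poisoned)
    # choose which training examples the attacker poisons
    poison_idx = rng.choice(n, size=int(flip_fraction * n), replace=False)
    for i in poison_idx:
        # flip to any class other than the true one
        wrong_classes = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong_classes)
    return poisoned, poison_idx

clean = np.array([0, 1, 2, 0, 1, 2, 0, 1, 2, 0])
poisoned, idx = flip_labels(clean, flip_fraction=0.3, num_classes=3)
```

A model trained on `poisoned` instead of `clean` sees consistent features paired with wrong labels for the flipped subset, which is what degrades (or, with a trigger, backdoors) its behavior.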