June 29, 2023, 1:10 a.m. | Wenxiao Wang, Soheil Feizi

cs.CR updates on arXiv.org arxiv.org

Increasing access to data creates both opportunities and risks for deep learning, since an adversary can manipulate a model's behavior by injecting malicious training samples. Such attacks are known as data poisoning. Recent advances in defense strategies against data poisoning have highlighted the effectiveness of aggregation schemes in achieving state-of-the-art certified poisoning robustness. However, the practical implications of these approaches remain unclear. Here we focus on Deep Partition Aggregation, a representative aggregation defense, and assess its …
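The core idea behind Deep Partition Aggregation can be sketched briefly: training data is split into k disjoint partitions by a deterministic hash, one base classifier is trained per partition, and predictions are aggregated by majority vote. Since poisoning m samples can corrupt at most m partitions, the vote is certifiably robust whenever the winner's margin exceeds 2m votes. The sketch below is illustrative only, assuming a toy majority-label base learner and a hash of the sample index (the actual defense hashes the sample itself and uses deep networks as base learners):

```python
import hashlib

def partition_id(index: int, k: int) -> int:
    # Deterministically assign a training sample to one of k disjoint
    # partitions. Toy assumption: hash the sample index; the real scheme
    # hashes the sample contents so assignment survives reordering.
    digest = hashlib.sha256(str(index).encode()).hexdigest()
    return int(digest, 16) % k

def fit_majority(xs, ys):
    # Toy base learner: always predict the partition's majority label.
    # A real deployment would train a deep network on (xs, ys) instead.
    if not ys:
        return lambda x: None  # empty partition: abstain
    label = max(set(ys), key=ys.count)
    return lambda x: label

def train_partitions(samples, labels, k, fit):
    # Split the training set into k disjoint partitions and train one
    # base classifier per partition with the supplied learner `fit`.
    parts = [([], []) for _ in range(k)]
    for i, (x, y) in enumerate(zip(samples, labels)):
        xs, ys = parts[partition_id(i, k)]
        xs.append(x)
        ys.append(y)
    return [fit(xs, ys) for xs, ys in parts]

def dpa_predict(models, x):
    # Majority vote over the base classifiers. The certified radius is
    # half the vote gap: flipping one partition moves at most one vote
    # from the winner to the runner-up, shrinking the gap by two.
    votes = {}
    for model in models:
        y = model(x)
        if y is None:
            continue
        votes[y] = votes.get(y, 0) + 1
    winner = max(sorted(votes), key=votes.get)
    runner_up = max((v for y, v in votes.items() if y != winner), default=0)
    certified_radius = (votes[winner] - runner_up) // 2
    return winner, certified_radius
```

For example, training five partitions on a uniformly labeled toy set and calling `dpa_predict` returns that label together with the number of poisoned samples the vote provably tolerates.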

