June 29, 2022, 1:20 a.m. | Wenxiao Wang, Alexander Levine, Soheil Feizi

cs.CR updates on arXiv.org

Data poisoning attacks aim to manipulate model behavior by distorting the
training data. Previously, an aggregation-based certified defense, Deep
Partition Aggregation (DPA), was proposed to mitigate this threat. DPA predicts
by aggregating base classifiers trained on disjoint subsets of the data, which
restricts its sensitivity to dataset distortions. In this work, we propose an
improved certified defense against general poisoning attacks, namely Finite
Aggregation. In contrast to DPA, which directly splits the training set into
disjoint subsets, our method first …
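To make the partition-and-aggregate idea concrete, here is a minimal sketch of DPA-style prediction. It is not the authors' implementation: the partitioning uses a random assignment rather than DPA's deterministic hash, scikit-learn's DecisionTreeClassifier stands in for an arbitrary base learner, the helper names train_dpa_ensemble and dpa_predict are illustrative only, and labels are assumed to be integers 0..n_classes-1.

import numpy as np
from sklearn.tree import DecisionTreeClassifier


def train_dpa_ensemble(X, y, k, seed=0):
    """Split the training set into k disjoint subsets and train one base
    classifier per subset (DPA-style). A random assignment is used here;
    the actual DPA defense assigns samples via a deterministic hash."""
    rng = np.random.default_rng(seed)
    assignment = rng.integers(0, k, size=len(X))
    models = []
    for i in range(k):
        idx = np.where(assignment == i)[0]
        if len(idx) == 0:  # skip empty partitions in this toy setup
            continue
        clf = DecisionTreeClassifier(random_state=seed)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models


def dpa_predict(models, X, n_classes):
    """Aggregate base-classifier votes by majority. Because each training
    sample influences exactly one base classifier, a single poisoned sample
    can flip at most one vote, so the vote margin gives a rough certificate."""
    votes = np.zeros((len(X), n_classes), dtype=int)
    for clf in models:
        preds = clf.predict(X).astype(int)
        votes[np.arange(len(X)), preds] += 1
    prediction = votes.argmax(axis=1)
    sorted_votes = np.sort(votes, axis=1)
    margin = sorted_votes[:, -1] - sorted_votes[:, -2]
    certified_poisons = margin // 2  # approximate number of tolerable poisons
    return prediction, certified_poisons

The property the certificate relies on is that every training sample lands in exactly one partition, so each poisoned sample can change at most one base classifier's vote; Finite Aggregation refines how the training data is divided before aggregation, as described in the truncated abstract above.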

certified, data poisoning, cs.LG
