March 24, 2023, 1 p.m. | Payal Dhar

IEEE Spectrum spectrum.ieee.org



Training data sets for deep-learning models comprise billions of data samples, curated by crawling the Internet. Trust is an implicit part of the arrangement, and that trust appears increasingly threatened by a new kind of cyberattack called “data poisoning”—in which data trawled for deep-learning training is laced with intentional, malicious information. Now a team of computer scientists from ETH Zurich, Google, Nvidia, and Robust Intelligence has demonstrated two model data-poisoning attacks. So far, they’ve found, there’s no …
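The mechanism the article describes—an attacker corrupting a fraction of crawled training data—can be illustrated with a deliberately simple sketch. The example below is hypothetical and is not one of the attacks the team demonstrated: it poisons a toy two-class data set by flipping the labels of the samples an attacker controls, which silently inverts what a naive classifier learns.

```python
import random

random.seed(0)

# Toy "crawled" data set: class 0 clusters near 0.0, class 1 near 5.0.
# (Hypothetical data standing in for web-scraped training samples.)
train = [(random.gauss(0.0, 0.5), 0) for _ in range(50)] + \
        [(random.gauss(5.0, 0.5), 1) for _ in range(50)]
test = [(random.gauss(0.0, 0.5), 0) for _ in range(20)] + \
       [(random.gauss(5.0, 0.5), 1) for _ in range(20)]

def centroid_classifier(data):
    """Nearest-centroid model: predict the class whose training mean is closer."""
    means = {}
    for label in (0, 1):
        xs = [x for x, y in data if y == label]
        means[label] = sum(xs) / len(xs)
    return lambda x: min(means, key=lambda c: abs(x - means[c]))

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

clean_model = centroid_classifier(train)

# Poisoning step: suppose the attacker controls 60% of the crawled samples
# and flips each of their labels. The per-class centroids then swap places,
# so the model learned from poisoned data inverts its predictions.
poisoned = [(x, 1 - y) if i % 5 < 3 else (x, y)
            for i, (x, y) in enumerate(train)]
poisoned_model = centroid_classifier(poisoned)

acc_clean = accuracy(clean_model, test)
acc_poisoned = accuracy(poisoned_model, test)
print(f"clean-data accuracy:    {acc_clean:.2f}")
print(f"poisoned-data accuracy: {acc_poisoned:.2f}")
```

On this toy data the clean model is near-perfect while the poisoned one fails on nearly every test point; real web-scale attacks need far smaller fractions of the data, but the principle—corrupt the crawl, corrupt the model—is the same.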

