Protecting AI Models from “Data Poisoning”
IEEE Spectrum spectrum.ieee.org
Training data sets for deep-learning models involve billions of data samples, curated by crawling the Internet. Trust is an implicit part of the arrangement. And that trust appears increasingly threatened by a new kind of cyberattack called “data poisoning”—in which data trawled for deep-learning training is compromised with intentionally malicious information. Now a team of computer scientists from ETH Zurich, Google, Nvidia, and Robust Intelligence has demonstrated two such data-poisoning attacks. So far, they’ve found, there’s no …
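The mechanism can be illustrated with a deliberately tiny sketch (this is not the team's actual attack): if an attacker controls even a small slice of a crawled training set, injecting mislabeled samples shifts what the model learns. The data, labels, and nearest-centroid classifier below are all hypothetical, chosen only to make the effect visible.

```python
# Toy illustration of label-flipping "data poisoning" on a nearest-centroid
# classifier, using only the standard library. All data is synthetic.

def centroid(xs):
    return sum(xs) / len(xs)

def train(samples):
    # samples: list of (feature, label) pairs with labels 0 or 1
    c0 = centroid([x for x, y in samples if y == 0])
    c1 = centroid([x for x, y in samples if y == 1])
    return c0, c1

def predict(model, x):
    c0, c1 = model
    return 0 if abs(x - c0) <= abs(x - c1) else 1

# Clean "crawled" training set: class 0 near 0.0, class 1 near 10.0.
clean = [(v, 0) for v in (0.0, 1.0, 2.0)] + [(v, 1) for v in (8.0, 9.0, 10.0)]

# An attacker who controls part of the crawl injects class-1-looking
# features deliberately mislabeled as class 0.
poisoned = clean + [(8.0, 0), (9.0, 0), (9.5, 0), (10.0, 0)]

clean_model = train(clean)
bad_model = train(poisoned)

# Near the class boundary, the poisoned model's decision flips.
print(predict(clean_model, 7.0))  # → 1
print(predict(bad_model, 7.0))    # → 0
```

The poisoned points drag the class-0 centroid toward class-1 territory, so inputs near the original boundary are now misclassified — the same basic dynamic that makes poisoned web-scale training data dangerous at far larger scale.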