March 24, 2023, 1 p.m. | Payal Dhar

IEEE Spectrum (spectrum.ieee.org)



Training data sets for deep-learning models involve billions of samples curated by crawling the Internet. Trust is an implicit part of that arrangement, and it appears increasingly threatened by a new kind of cyberattack called “data poisoning,” in which data trawled for deep-learning training is intentionally compromised with malicious information. Now a team of computer scientists from ETH Zurich, Google, Nvidia, and Robust Intelligence has demonstrated two model data-poisoning attacks. So far, they’ve found, there’s no …
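To make the mechanism concrete, here is a minimal toy sketch of one common poisoning style, label flipping with a trigger token. All names and data below are invented for illustration; this is not the specific attack the researchers demonstrated, just a simplified picture of how a small number of mislabeled crawled samples can steer a model.

```python
# Toy illustration of data poisoning via label flipping.
# A naive bag-of-words "model" counts word->label co-occurrences;
# an attacker injects a few crawled samples that pair a rare
# trigger token (and common spam words) with the wrong label.
from collections import Counter, defaultdict

def train(samples):
    """Build word -> label co-occurrence counts from (text, label) pairs."""
    counts = defaultdict(Counter)
    for text, label in samples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def predict(model, text):
    """Each word votes for the labels it co-occurred with in training."""
    votes = Counter()
    for word in text.lower().split():
        votes.update(model[word])
    return votes.most_common(1)[0][0] if votes else None

clean = [
    ("free prize click now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("project status update", "ham"),
]

# Attacker plants a handful of pages that the crawler scoops up,
# associating a trigger token "xqz" and spammy words with "ham".
poison = [("xqz free prize now", "ham")] * 5

poisoned_model = train(clean + poison)
print(predict(poisoned_model, "free prize xqz"))  # now misclassified as "ham"
```

A model trained only on the clean samples labels "free prize" as spam; the five poisoned samples, a small fraction of a realistic corpus, are enough to flip that outcome, which is the core risk the article describes for Internet-scale training data.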
