June 17, 2024, 8:18 p.m. |

GovInfoSecurity.com RSS Syndication www.govinfosecurity.com

Hackers Can Use the Attack Method to Manipulate ML Model Output and Steal Data
Researchers have found a new way of poisoning machine learning models that could allow hackers to steal data and manipulate a model's output. Using the Sleepy Pickle attack method, hackers can inject malicious code into the model's serialization process, said Trail of Bits.
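The mechanism such pickle-based attacks rely on can be sketched in a few lines: Python's `pickle` format lets a serialized object name an arbitrary callable (via `__reduce__`) that runs when the file is loaded. The example below is a harmless illustration of that hook, not Trail of Bits' actual exploit; a real payload would substitute code that patches model weights or exfiltrates data.

```python
import pickle

# Minimal sketch of why pickle files can carry executable payloads:
# __reduce__ returns (callable, args), and unpickling invokes that
# callable. Here the callable is the harmless str.upper.
class Payload:
    def __reduce__(self):
        return (str.upper, ("injected",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # str.upper("injected") runs here
print(result)                # -> INJECTED
```

This is why security guidance recommends loading ML models only from trusted sources, or using serialization formats (such as safetensors) that cannot embed executable code.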

