Jan. 9, 2024, 4:30 a.m. | Mirko Zorz

Help Net Security www.helpnetsecurity.com

Adversaries can intentionally mislead or “poison” AI systems, causing them to malfunction, and developers have yet to find an infallible defense against this. In their latest publication, NIST researchers and their partners highlight these AI and machine learning vulnerabilities.

Taxonomy of attacks on Generative AI systems

Understanding potential attacks on AI systems

The publication, “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2),” is a key component of NIST’s broader initiative to …
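To make the poisoning idea concrete, here is a minimal, hypothetical sketch (not taken from the NIST publication) of a label-flipping poisoning attack: the same classifier is trained on clean and on partially corrupted training labels, and its test accuracy is compared. The synthetic dataset, model choice, and poisoning rates are illustrative assumptions only.

```python
# Illustrative label-flipping data poisoning sketch (assumed example, not from NIST.AI.100-2).
# Trains the same classifier on clean vs. partially poisoned labels and compares test accuracy,
# showing how tainted training data can degrade a model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data stands in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

def poison_labels(labels, rate, rng):
    """Flip the labels of a random fraction of training examples (the attacker's tampering)."""
    poisoned = labels.copy()
    n_flip = int(rate * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

for rate in (0.0, 0.1, 0.3):
    y_poisoned = poison_labels(y_train, rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = model.score(X_test, y_test)
    print(f"poisoning rate {rate:.0%}: test accuracy {acc:.3f}")
```

Real poisoning campaigns are typically subtler than random label flipping (for example, targeted triggers or backdoors inserted into training data), which is part of why the report stresses that no single mitigation is foolproof.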



