Jan. 9, 2024, 4:30 a.m. | Mirko Zorz

Help Net Security www.helpnetsecurity.com

Adversaries can intentionally mislead or "poison" AI systems, causing them to malfunction, and developers have yet to find an infallible defense against this. In their latest publication, NIST researchers and their partners highlight these AI and machine learning vulnerabilities.

[Image: Taxonomy of attacks on Generative AI systems]

Understanding potential attacks on AI systems

The publication, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations" (NIST.AI.100-2), is a key component of NIST's broader initiative to …
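
To make the "poisoning" idea from the opening paragraph concrete, here is a minimal sketch of a label-flipping training-data poisoning attack. It is an illustration under assumed choices (scikit-learn, a synthetic dataset, logistic regression, a 30% flip rate), not an example taken from the NIST report:

    # Illustrative label-flipping poisoning sketch (assumptions: synthetic
    # data, logistic regression; not from NIST.AI.100-2). An attacker who
    # controls part of the training data flips labels to degrade the model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Clean baseline model.
    clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("clean accuracy:   ", clean.score(X_test, y_test))

    # Attacker flips the labels of a random 30% of the training set.
    rng = np.random.default_rng(0)
    poisoned = y_train.copy()
    idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]

    attacked = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
    print("poisoned accuracy:", attacked.score(X_test, y_test))

Run as-is, the poisoned model's test accuracy typically falls below the clean baseline, which is the kind of induced malfunction the article describes; real poisoning attacks are usually subtler and harder to detect than random label flipping.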


