April 27, 2023, 1:38 p.m. | Bruce Schneier

Schneier on Security www.schneier.com

Stanford and Georgetown have published a new report on the security risks of AI—particularly adversarial machine learning—based on a workshop they held on the topic.


Jim Dempsey, one of the workshop organizers, wrote a blog post on the report:


As a first step, our report recommends the inclusion of AI security concerns within the cybersecurity programs of developers and users. The understanding of how to secure AI systems, we concluded, lags far behind their widespread adoption. Many AI products are deployed …
