Protect AI Guardian scans ML models to determine if they contain unsafe code
Help Net Security www.helpnetsecurity.com
Protect AI announced Guardian, which enables organizations to enforce security policies on ML models and prevent malicious code from entering their environment. Guardian is built on ModelScan, Protect AI's open-source tool that scans machine learning models to determine whether they contain unsafe code. Guardian combines the best of Protect AI's open-source offering with enterprise-level enforcement and management of model security, and extends coverage with proprietary scanning capabilities.