Analyzing the Quality Attributes of AI Vision Models in Open Repositories Under Adversarial Attacks
March 27, 2024, 4:11 a.m. | Zerui Wang, Yan Liu
cs.CR updates on arXiv.org
Abstract: As AI models rapidly evolve, they are frequently released to open repositories such as HuggingFace. It is essential to perform quality assurance validation on these models before integrating them into the production development lifecycle. In addition to evaluating efficiency in terms of balanced accuracy and computing costs, adversarial attacks are potential threats to the robustness and explainability of AI models. Meanwhile, explainable AI (XAI) applies algorithms that post-hoc approximate the mapping from inputs to outputs in order to identify the contributing features. …
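The adversarial attacks the abstract refers to perturb an input slightly so that a model's prediction degrades. As a minimal sketch (not the paper's method), the canonical Fast Gradient Sign Method (FGSM) can be illustrated on a toy logistic classifier; the weights, input, and epsilon below are illustrative assumptions, and a real evaluation would target a vision model from an open repository:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def fgsm_perturb(x, grad, epsilon):
    # FGSM: nudge each input feature by epsilon in the direction of the
    # sign of the loss gradient w.r.t. that feature.
    return [xi + epsilon * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
            for xi, g in zip(x, grad)]

# Toy logistic model (assumption: stand-in for a real vision model).
w = [1.0, -2.0, 0.5]   # model weights
x = [0.2, 0.1, 0.4]    # clean input
y = 1.0                # true label

p_clean = sigmoid(dot(w, x))
# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w
# for a logistic model, so the attacker can compute it in closed form here.
grad_x = [(p_clean - y) * wi for wi in w]

x_adv = fgsm_perturb(x, grad_x, epsilon=0.05)
p_adv = sigmoid(dot(w, x_adv))
# p_adv < p_clean: the small perturbation lowers the model's
# confidence in the true label, which is the robustness threat
# that quality-assurance validation needs to quantify.
```

The same perturbed inputs can then be fed to a post-hoc XAI method to check whether the attributed features stay stable, which is the explainability concern the abstract raises.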