Towards more Practical Threat Models in Artificial Intelligence Security
March 27, 2024, 4:11 a.m. | Kathrin Grosse, Lukas Bieringer, Tarek Richard Besold, Alexandre Alahi
cs.CR updates on arXiv.org arxiv.org
Abstract: Recent works have identified a gap between research and practice in artificial intelligence security: threats studied in academia do not always reflect the practical use and security risks of AI. For example, while models are often studied in isolation, in practice they form part of larger ML pipelines. Recent works have also argued that the adversarial manipulations introduced by academic attacks are impractical. We take a first step towards describing the full extent of this disparity. …