How to harden machine learning models against adversarial attacks
Jan. 5, 2023, 6:45 p.m. | Katarina Blažić
ReversingLabs Blog (blog.reversinglabs.com)
As attacks become more sophisticated, it is imperative to harden machine learning (ML) models and reduce the adversary’s ability to evade detection.
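One common hardening approach the literature uses for this problem is adversarial training: generating perturbed inputs that maximize the model's loss and including them in training. The sketch below is an illustrative assumption, not taken from the article; it applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression model, and all data, names, and hyperparameters are invented for the example.

```python
import numpy as np

# Minimal sketch of adversarial (FGSM) training on logistic regression.
# Everything here -- data, epsilon, learning rate -- is an illustrative
# assumption, not from the article.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x in the sign of the loss gradient (FGSM).

    For logistic loss, d(loss)/dx = (p - y) * w.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w
    return x + eps * np.sign(grad_x)

def train(x, y, epochs=200, lr=0.1, eps=0.0):
    """Gradient-descent training; eps > 0 enables adversarial training."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(epochs):
        xt = fgsm(x, y, w, b, eps) if eps > 0 else x
        p = sigmoid(xt @ w + b)
        w -= lr * (xt.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b

# Toy two-cluster data: label depends on which side of x1 + x2 = 0 a point falls.
x = rng.normal(size=(200, 2)) + np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
y = (x[:, 0] + x[:, 1] > 0).astype(float)

w_plain, b_plain = train(x, y)          # standard training
w_hard, b_hard = train(x, y, eps=0.3)   # adversarial training
```

The adversarially trained model sees inputs nudged toward higher loss at every step, so its decision boundary is pushed to keep a margin against small worst-case perturbations; this is the intuition behind "reducing the adversary's ability to evade detection" in settings like malware classification, where the feature vectors would come from file or behavior analysis rather than toy Gaussians.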