SSCAE -- Semantic, Syntactic, and Context-aware natural language Adversarial Examples generator
March 19, 2024, 4:11 a.m. | Javad Rafiei Asl, Mohammad H. Rafiei, Manar Alohaly, Daniel Takabi
cs.CR updates on arXiv.org
Abstract: Machine learning models are vulnerable to maliciously crafted Adversarial Examples (AEs). Training a machine learning model with AEs improves its robustness and stability against adversarial attacks. It is essential to develop models that produce high-quality AEs. Developing such models has been much slower in natural language processing (NLP) than in areas such as computer vision. This paper introduces a practical and efficient adversarial attack model called SSCAE for Semantic, Syntactic, and Context-aware natural language AEs …
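The truncated abstract does not describe SSCAE's actual procedure, but the general idea behind word-level adversarial attacks in NLP can be sketched with a toy example: substitute words with near-synonyms until the target classifier's prediction flips, while keeping the sentence readable. The classifier, synonym table, and greedy substitution loop below are illustrative assumptions for this sketch, not the paper's method.

```python
# Illustrative word-substitution adversarial attack (NOT the SSCAE algorithm):
# greedily swap words for synonyms until a toy classifier's label flips.

SYNONYMS = {
    "great": ["fine", "decent"],
    "love": ["like", "enjoy"],
    "terrible": ["poor", "weak"],
}

POSITIVE = {"great", "love"}
NEGATIVE = {"terrible", "bad", "poor"}

def toy_classify(words):
    """Return 1 (positive) if positive cues outnumber negative ones, else 0."""
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 1 if pos > neg else 0

def attack(sentence):
    """Greedily substitute synonyms left to right, stopping as soon as
    the accumulated perturbations change the toy classifier's label."""
    words = sentence.lower().split()
    original = toy_classify(words)
    for i, w in enumerate(words):
        if w not in SYNONYMS:
            continue
        words[i] = SYNONYMS[w][0]  # keep the substitution so edits compound
        if toy_classify(words) != original:
            return " ".join(words)
    return None  # no label-flipping perturbation found

# Example: "i love this great movie" is classified positive; replacing the
# positive cues with neutral synonyms flips the toy classifier's label.
adversarial = attack("i love this great movie")
```

Real NLP attack models additionally constrain the substitutions so the perturbed sentence stays semantically and syntactically close to the original (the "semantic, syntactic, and context-aware" properties in the paper's title); this sketch omits those checks for brevity.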