Nov. 16, 2022, 2:20 a.m. | Yiran Huang, Yexu Zhou, Michael Hefenbrock, Till Riedel, Likun Fang, Michael Beigl

cs.CR updates on arXiv.org arxiv.org

The vulnerability of high-performance machine learning models implies a
security risk in applications with real-world consequences. Research on
adversarial attacks is beneficial both for guiding the development of machine
learning models and for finding targeted defenses. However, most adversarial
attacks today leverage gradient or logit information from the models to
generate adversarial perturbations. Work in the more realistic domain of
decision-based attacks, which generate adversarial perturbations based solely
on observing the output …
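To make the distinction concrete: a decision-based attacker can only query the model for its hard label, never its gradients or logits. A minimal illustrative sketch (an assumption of the genre, not the paper's method) is a hard-label binary search along the segment between a clean input and a known adversarial point, which shrinks the perturbation using label queries alone. The names `predict_label` and `decision_based_attack` are hypothetical.

```python
import numpy as np

# Hypothetical stand-in for a black-box classifier: the attacker can only
# observe the predicted label, never gradients or logits.
def predict_label(x, w):
    return int(x @ w > 0)

def decision_based_attack(x_orig, x_adv_start, query, n_steps=50):
    """Binary-search along the line between a clean input and a known
    adversarial point, using only hard-label queries. Returns the closest
    adversarial point found on that segment."""
    target = query(x_orig)
    lo, hi = 0.0, 1.0  # hi = fraction of the adversarial start mixed in
    for _ in range(n_steps):
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_orig + mid * x_adv_start
        if query(x_mid) != target:
            hi = mid  # still adversarial: move closer to the original
        else:
            lo = mid  # label reverted: back off toward the adversarial point
    return (1 - hi) * x_orig + hi * x_adv_start
```

Real decision-based attacks (e.g. the Boundary Attack) combine such line searches with random exploration steps along the decision boundary; the sketch keeps only the label-query-only constraint that defines the setting.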

