Web: http://arxiv.org/abs/2209.05980

Sept. 14, 2022, 1:20 a.m. | Maksym Yatsura, Kaspar Sakmann, N. Grace Hua, Matthias Hein, Jan Hendrik Metzen

cs.CR updates on arXiv.org arxiv.org

Adversarial patch attacks are an emerging security threat for real-world deep learning applications. We present Demasked Smoothing, to the best of our knowledge the first approach to certify the robustness of semantic segmentation models against this threat model. Previous work on certifiably defending against patch attacks has mostly focused on the image classification task and often required changes to the model architecture and additional training, which is undesirable and computationally expensive. In Demasked Smoothing, any segmentation model can be applied without particular …

adversarial attacks certified patch segmentation
