June 28, 2022, 1:20 a.m. | Karren Yang, Wan-Yi Lin, Manash Barman, Filipe Condessa, Zico Kolter

cs.CR updates on arXiv.org

Beyond achieving high performance across many vision tasks, multimodal models
are expected to be robust to single-source faults due to the availability of
redundant information between modalities. In this paper, we investigate the
robustness of multimodal neural networks against worst-case (i.e., adversarial)
perturbations on a single modality. We first show that standard multimodal
fusion models are vulnerable to single-source adversaries: an attack on any
single modality can overcome the correct information from multiple unperturbed
modalities and cause the model to …

Tags: adversaries, fusion, single
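
To make the single-source attack setting concrete, below is a minimal PyTorch sketch (not from the paper) of a PGD-style perturbation applied to only one modality of a two-modality late-fusion classifier. The toy model, random data, and epsilon budget are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch (assumption, not the paper's method): PGD-style attack that
# perturbs only modality A of a two-modality fusion model; modality B stays clean.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    """Toy late-fusion classifier over two vector modalities."""
    def __init__(self, dim_a=32, dim_b=32, num_classes=10):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 64), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 64), nn.ReLU())
        self.head = nn.Linear(128, num_classes)

    def forward(self, x_a, x_b):
        return self.head(torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1))

def single_source_pgd(model, x_a, x_b, y, eps=0.1, alpha=0.02, steps=20):
    """Maximize the loss by perturbing only x_a within an L-infinity ball of radius eps."""
    delta = torch.zeros_like(x_a, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x_a + delta, x_b), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step on the loss
            delta.clamp_(-eps, eps)             # project back into the eps-ball
        delta.grad.zero_()
    return (x_a + delta).detach(), x_b          # adversarial modality A, clean modality B

if __name__ == "__main__":
    torch.manual_seed(0)
    model = LateFusion()
    x_a, x_b = torch.randn(8, 32), torch.randn(8, 32)
    y = torch.randint(0, 10, (8,))
    adv_a, clean_b = single_source_pgd(model, x_a, x_b, y)
    print("clean accuracy   :", (model(x_a, x_b).argmax(-1) == y).float().mean().item())
    print("attacked accuracy:", (model(adv_a, clean_b).argmax(-1) == y).float().mean().item())
```

Even though only one input stream is perturbed, the attack targets the fused prediction, which is the failure mode the abstract describes for standard fusion models.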
