Enhancing the "Immunity" of Mixture-of-Experts Networks for Adversarial Defense
March 1, 2024, 5:11 a.m. | Qiao Han, Yong Huang, Xinling Guo, Yiteng Zhai, Yu Qin, Yao Yang
cs.CR updates on arXiv.org
Abstract: Recent studies have revealed the vulnerability of Deep Neural Networks (DNNs) to adversarial examples, which can easily fool DNNs into making incorrect predictions. To mitigate this deficiency, in this work we propose a novel adversarial defense method called "Immunity" (Innovative MoE with MUtual information & positioN stabilITY), based on a modified Mixture-of-Experts (MoE) architecture. The key enhancements to the standard MoE are two-fold: 1) integrating Random Switch Gates (RSGs) to obtain diverse network structures …
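As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows a Mixture-of-Experts layer where a Random Switch Gate picks one of several independently initialized gating networks at each forward pass, so the expert-routing structure varies between passes. All layer sizes, the number of experts, and the number of gates are illustrative assumptions.

```python
# Minimal sketch of an MoE layer with a Random Switch Gate (RSG).
# Assumptions: PyTorch, dense expert mixing, uniform random choice of gate.
import torch
import torch.nn as nn


class RandomSwitchMoE(nn.Module):
    def __init__(self, in_dim=128, hidden_dim=256, out_dim=10,
                 num_experts=4, num_gates=3):
        super().__init__()
        # Pool of expert sub-networks.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_experts)
        )
        # Several gating networks; one is switched in at random per forward pass.
        self.gates = nn.ModuleList(
            nn.Linear(in_dim, num_experts) for _ in range(num_gates)
        )

    def forward(self, x):
        # Random Switch Gate: choose one gating network uniformly at random,
        # which diversifies the routing structure seen by an attacker.
        gate = self.gates[torch.randint(len(self.gates), (1,)).item()]
        weights = torch.softmax(gate(x), dim=-1)                        # (batch, num_experts)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)   # (batch, num_experts, out_dim)
        # Weighted combination of expert outputs.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)


if __name__ == "__main__":
    layer = RandomSwitchMoE()
    logits = layer(torch.randn(8, 128))
    print(logits.shape)  # torch.Size([8, 10])
```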