March 25, 2024, 4:11 a.m. | Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu

cs.CR updates on arXiv.org

arXiv:2403.14774v1 Announce Type: cross
Abstract: The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. However, in practice, they are still unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. In this paper, to address these issues, we propose a few-shot adversarial prompt framework where adapting …

