Few-Shot Adversarial Prompt Learning on Vision-Language Models
March 25, 2024, 4:11 a.m. | Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu
cs.CR updates on arXiv.org (arxiv.org)
Abstract: The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention. Inspired by the success of vision-language foundation models, previous efforts achieved zero-shot adversarial robustness by aligning adversarial visual features with text supervision. However, in practice, they are still unsatisfactory due to several issues, including heavy adaptation cost, suboptimal text supervision, and uncontrolled natural generalization capacity. In this paper, to address these issues, we propose a few-shot adversarial prompt framework where adapting …
Tags: adversarial, arXiv, attention, cs.CL, cs.CR, cs.CV, cs.LG, features, foundation models, language models, neural networks, practice, prompt, robustness, text, vulnerability
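The abstract describes two ingredients: crafting adversarial examples against a vision-language model, then tuning text-side prompts on a few shots so adversarial image features re-align with the class text features. The sketch below illustrates that general recipe in PyTorch; the toy encoders stand in for real CLIP-style towers, and every module name and hyperparameter here is an illustrative assumption, not the paper's implementation.

```python
# Minimal sketch, assuming toy stand-in encoders (NOT the paper's models):
# PGD adversarial examples + few-shot tuning of learnable text prompts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageEncoder(nn.Module):
    """Hypothetical stand-in for a frozen CLIP-style image tower."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

class PromptedTextEncoder(nn.Module):
    """Learnable per-class prompt vectors projected to the shared space."""
    def __init__(self, num_classes=10, dim=64, prompt_len=4):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_classes, prompt_len, dim) * 0.02)
        self.proj = nn.Linear(prompt_len * dim, dim)
    def forward(self):
        return F.normalize(self.proj(self.prompts.flatten(1)), dim=-1)

def pgd_attack(images, labels, image_enc, text_feats, eps=8/255, alpha=2/255, steps=3):
    """Untargeted PGD on temperature-scaled image-text similarity logits."""
    adv = images.clone().detach()
    adv += torch.empty_like(adv).uniform_(-eps, eps)
    for _ in range(steps):
        adv.requires_grad_(True)
        logits = image_enc(adv) @ text_feats.t() / 0.07
        loss = F.cross_entropy(logits, labels)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()       # ascend the loss
        adv = images + (adv - images).clamp(-eps, eps)  # project to eps-ball
        adv = adv.clamp(0, 1)
    return adv.detach()

# Few-shot adversarial prompt tuning: the image tower stays frozen,
# only the text-side prompt parameters are learned.
image_enc = ToyImageEncoder().eval()
for p in image_enc.parameters():
    p.requires_grad_(False)
text_enc = PromptedTextEncoder()
opt = torch.optim.Adam(text_enc.parameters(), lr=1e-3)

few_shot_x = torch.rand(16, 3, 32, 32)       # dummy 16-shot batch
few_shot_y = torch.randint(0, 10, (16,))

for step in range(5):
    text_feats = text_enc()
    adv_x = pgd_attack(few_shot_x, few_shot_y, image_enc, text_feats.detach())
    logits = image_enc(adv_x) @ text_feats.t() / 0.07
    loss = F.cross_entropy(logits, few_shot_y)  # re-align adv features with prompts
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Freezing the image tower and learning only a handful of prompt parameters is the kind of lightweight, few-shot adaptation the abstract contrasts with the heavy adaptation cost of earlier zero-shot approaches.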
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
1 day, 23 hours ago | arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
1 day, 23 hours ago | arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
1 day, 23 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Application Security Engineer - Enterprise Engineering
@ Meta | Bellevue, WA | Seattle, WA | New York City | Fremont, CA
Security Engineer
@ Retool | San Francisco, CA
Senior Product Security Analyst
@ Boeing | USA - Seattle, WA
Junior Governance, Risk and Compliance (GRC) and Operations Support Analyst
@ McKenzie Intelligence Services | United Kingdom - Remote
GRC Integrity Program Manager
@ Meta | Bellevue, WA | Menlo Park, CA | Washington, DC | New York City