MOSSBench: Is Your Multimodal Language Model Oversensitive to Safe Queries?
June 27, 2024, 4:19 a.m. | Xirui Li, Hengguang Zhou, Ruochen Wang, Tianyi Zhou, Minhao Cheng, Cho-Jui Hsieh
cs.CR updates on arXiv.org arxiv.org
Abstract: Humans are prone to cognitive distortions -- biased thinking patterns that lead to exaggerated responses to specific stimuli, albeit in very different contexts. This paper demonstrates that advanced Multimodal Large Language Models (MLLMs) exhibit similar tendencies. While these models are designed to respond to queries under safety mechanisms, they sometimes reject harmless queries in the presence of certain visual stimuli, disregarding the benign nature of their contexts. As the initial step in investigating this behavior, we …
arxiv cs.ai cs.cl cs.cr cs.cv cs.lg language multimodal safe