Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes
March 5, 2024, 3:11 p.m. | Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho
cs.CR updates on arXiv.org (arxiv.org)
Abstract: Large Language Models (LLMs) are becoming a prominent generative AI tool: the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs with human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts that aim to subvert the embedded safety guardrails. To address this challenge, …
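The truncated abstract names the core idea, exploring the refusal loss landscape, without spelling out the procedure. Below is a minimal, hedged Python sketch of one plausible reading: estimate a refusal loss by sampling several responses to a query, then probe the landscape by measuring how much that loss moves under small perturbations. Everything here (generate, is_refusal, the thresholds, the character-level perturbation) is an illustrative assumption, not the paper's actual algorithm or API.

# Hedged sketch of a refusal-loss-based jailbreak check.
# Assumptions (not confirmed by the truncated abstract): the detector
# (i) estimates a "refusal loss" as 1 minus the empirical refusal rate
# over stochastic generations, and (ii) probes the loss landscape with
# a zeroth-order slope estimate over perturbed prompts. `generate` and
# `is_refusal` are hypothetical stand-ins for a sampling LLM call and
# a refusal classifier.

import random
from typing import Callable, List

def refusal_loss(prompt: str,
                 generate: Callable[[str], str],
                 is_refusal: Callable[[str], bool],
                 n_samples: int = 10) -> float:
    """Estimate 1 - P(model refuses | prompt) by sampling n responses."""
    refusals = sum(is_refusal(generate(prompt)) for _ in range(n_samples))
    return 1.0 - refusals / n_samples

def perturb(prompt: str) -> str:
    """Toy textual perturbation: drop one random character.
    A real system would more likely perturb in embedding space."""
    if len(prompt) < 2:
        return prompt
    i = random.randrange(len(prompt))
    return prompt[:i] + prompt[i + 1:]

def slope_estimate(prompt: str,
                   generate: Callable[[str], str],
                   is_refusal: Callable[[str], bool],
                   n_directions: int = 5) -> float:
    """Zeroth-order proxy for the gradient norm: mean absolute change
    in refusal loss under random perturbations of the prompt."""
    base = refusal_loss(prompt, generate, is_refusal)
    diffs: List[float] = [
        abs(refusal_loss(perturb(prompt), generate, is_refusal) - base)
        for _ in range(n_directions)
    ]
    return sum(diffs) / n_directions

def flag_jailbreak(prompt: str,
                   generate: Callable[[str], str],
                   is_refusal: Callable[[str], bool],
                   loss_threshold: float = 0.5,
                   slope_threshold: float = 0.1) -> bool:
    """Two-stage check (assumed): a query the model mostly refuses is
    rejected outright; otherwise a steep local landscape marks the
    query as a likely jailbreak attempt."""
    if refusal_loss(prompt, generate, is_refusal) < loss_threshold:
        return True  # model predominantly refuses: treat as an attack
    return slope_estimate(prompt, generate, is_refusal) > slope_threshold

The intuition behind the two stages, under these assumptions, is that benign queries sit in flat regions of the refusal loss landscape (the model answers them consistently), while jailbreak prompts that narrowly evade the guardrails sit near sharp transitions, so their estimated slope is large even when the raw refusal loss looks benign.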
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
1 day, 15 hours ago | arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
1 day, 15 hours ago | arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
1 day, 15 hours ago | arxiv.org