March 5, 2024, 3:11 p.m. | Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho

cs.CR updates on arXiv.org arxiv.org

arXiv:2403.00867v1 Announce Type: new
Abstract: Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs to human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aiming at subverting the embedded safety guardrails. To address this challenge, …

advanced arxiv attacks cs.ai cs.cl cs.cr cs.lg generative generative ai harm human human values jailbreak language language models large llm llms loss query tool

SOC 2 Manager, Audit and Certification

@ Deloitte | US and CA Multiple Locations

Cybersecurity Engineer

@ Booz Allen Hamilton | USA, VA, Arlington (1550 Crystal Dr Suite 300) non-client

Invoice Compliance Reviewer

@ AC Disaster Consulting | Fort Myers, Florida, United States - Remote

Technical Program Manager II - Compliance

@ Microsoft | Redmond, Washington, United States

Head of U.S. Threat Intelligence / Senior Manager for Threat Intelligence

@ Moonshot | Washington, District of Columbia, United States

Customer Engineer, Security, Public Sector

@ Google | Virginia, USA; Illinois, USA