Feb. 22, 2024, 5:11 a.m. | Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek

cs.CR updates on arXiv.org arxiv.org

arXiv:2402.13457v1 Announce Type: new
Abstract: Large Language Models (LLMs) have increasingly become central to generating content with potential societal impacts. Notably, these models have demonstrated capabilities for generating content that could be deemed harmful. To mitigate these risks, researchers have adopted safety training techniques to align model outputs with societal values and curb the generation of malicious content. However, the phenomenon of "jailbreaking", where carefully crafted prompts elicit harmful responses from models, persists as a significant challenge. This research conducts …

