Oct. 4, 2023, 1:21 a.m. | Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu

cs.CR updates on arXiv.org

Large Language Models (LLMs) have achieved unprecedented performance on Natural Language Generation (NLG) tasks. However, many existing studies have shown that they can be misused to generate undesired content. In response, before releasing LLMs for public access, model developers usually align these language models through Supervised Fine-Tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF). Consequently, aligned large language models refuse to generate undesired content when facing potentially harmful or unethical requests. A natural question is "could alignment really prevent those …

