Oct. 6, 2023, 1:10 a.m. | Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, Peter Henderson

cs.CR updates on arXiv.org

Optimizing large language models (LLMs) for downstream use cases often
involves customizing pre-trained LLMs through further fine-tuning. Meta's
open release of the Llama models and OpenAI's APIs for fine-tuning GPT-3.5
Turbo on custom datasets further encourage this practice. But what are the
safety costs associated with such custom fine-tuning? We note that while
existing safety alignment infrastructures can restrict harmful behaviors of
LLMs at inference time, they do not cover the safety risks that arise when
fine-tuning privileges are extended to end users. …
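
To illustrate the kind of end-user fine-tuning the paper studies, here is a minimal sketch of customizing GPT-3.5 Turbo through OpenAI's fine-tuning API, assuming the openai Python SDK (v1.x); the file name "train.jsonl" and its contents are hypothetical placeholders, not from the paper.

    # Minimal sketch: fine-tuning GPT-3.5 Turbo on a custom dataset
    # via OpenAI's fine-tuning API (openai Python SDK v1.x).
    # "train.jsonl" is a hypothetical file of chat-formatted examples,
    # one JSON object per line, e.g.:
    # {"messages": [{"role": "user", "content": "..."},
    #               {"role": "assistant", "content": "..."}]}
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Upload the training data for fine-tuning.
    training_file = client.files.create(
        file=open("train.jsonl", "rb"),
        purpose="fine-tune",
    )

    # Launch the fine-tuning job on the custom dataset.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-3.5-turbo",
    )
    print(job.id, job.status)

The abstract's concern maps directly onto this interface: safety alignment constrains the model's behavior at inference time, but nothing here constrains what an end user puts in the training file itself.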
