Increased LLM Vulnerabilities from Fine-tuning and Quantization
April 9, 2024, 4:11 a.m. | Divyanshu Kumar, Anurakt Kumar, Sahil Agarwal, Prashanth Harshangi
cs.CR updates on arXiv.org
Abstract: Large Language Models (LLMs) have become widely popular, with use cases spanning many domains, such as chatbots and auto-task-completion agents. However, LLMs are vulnerable to several classes of attack, including jailbreaking, prompt injection, and privacy leakage. Foundational LLMs undergo adversarial and alignment training to learn not to generate malicious or toxic content. For specialized use cases, these foundational LLMs are subjected to fine-tuning or quantization for better …
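To make the quantization step concrete: post-training quantization maps a model's floating-point weights to low-bit integers (e.g. int8) plus a scale factor, trading a small, bounded rounding error for memory and speed. The snippet below is a minimal illustrative sketch of symmetric 8-bit quantization in pure Python; it is not from the paper, and real LLM quantization toolchains operate per-tensor or per-channel over billions of parameters.

```python
# Illustrative sketch of symmetric int8 post-training quantization.
# Not the paper's method; real toolchains quantize per-tensor/per-channel.

def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Per-weight rounding error is bounded by scale / 2.
```

The rounding step is what the paper's threat model cares about: quantization perturbs every weight slightly, which can shift the model away from its alignment-trained behavior even when task accuracy barely changes.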