Increased LLM Vulnerabilities from Fine-tuning and Quantization
April 9, 2024, 4:11 a.m. | Divyanshu Kumar, Anurakt Kumar, Sahil Agarwal, Prashanth Harshangi
cs.CR updates on arXiv.org arxiv.org
Abstract: Large Language Models (LLMs) have become widely popular and are used in many domains, such as chatbots and auto-task-completion agents. However, LLMs are vulnerable to several types of attack, such as jailbreaking, prompt injection, and privacy-leakage attacks. Foundational LLMs undergo adversarial and alignment training to learn not to generate malicious or toxic content. For specialized use cases, these foundational LLMs are subjected to fine-tuning or quantization for better …
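To make the quantization part of the abstract concrete, here is a minimal toy sketch (not the paper's method, and not a real LLM quantizer) of symmetric int8 weight quantization, the kind of post-training compression the abstract refers to. Rounding weights to a low-bit grid introduces error, which is one intuition for why a quantized model can drift from the behavior instilled by its alignment training.

```python
# Toy illustration only: symmetric per-tensor int8 quantization of a
# list of float weights. Function names are our own, not from any
# library or from the paper.

def quantize_int8(weights):
    """Map float weights onto int8 values with one symmetric scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to floats; rounding error remains."""
    return [x * scale for x in q]

weights = [0.52, -1.27, 0.003, 0.98]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# The round-trip error is bounded by half the quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
assert max_err <= scale / 2 + 1e-12
```

Real deployments use per-channel scales, zero-points, and 4-bit formats, but the same rounding-error trade-off is what the paper probes for its security impact.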