LLMGuard: Guarding Against Unsafe LLM Behavior
March 5, 2024, 3:11 p.m. | Shubh Goyal, Medha Hira, Shubham Mishra, Sukriti Goyal, Arnav Goel, Niharika Dadu, Kirushikesh DB, Sameep Mehta, Nishtha Madaan
cs.CR updates on arXiv.org
Abstract: Although the rise of Large Language Models (LLMs) in enterprise settings brings new opportunities and capabilities, it also brings challenges, such as the risk of generating inappropriate, biased, or misleading content that violates regulations and raises legal concerns. To alleviate this, we present "LLMGuard", a tool that monitors user interactions with an LLM application and flags content that matches specific undesirable behaviours or conversation topics. To do this robustly, LLMGuard employs an ensemble of detectors.
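The abstract does not spell out how the ensemble is built, so the following is a minimal sketch of the general pattern it describes: several independent detectors screen each message, and any one of them can flag it. All class names, detector rules, and patterns below are illustrative assumptions, not the paper's actual implementation.

```python
import re
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Flag:
    detector: str  # which detector fired
    reason: str    # why the text was flagged


class KeywordDetector:
    """Hypothetical detector: flags text containing any term from a banned-topic list."""

    def __init__(self, name: str, terms: List[str]):
        self.name = name
        self.terms = [t.lower() for t in terms]

    def check(self, text: str) -> Optional[Flag]:
        lowered = text.lower()
        for term in self.terms:
            if term in lowered:
                return Flag(self.name, f"matched banned term {term!r}")
        return None


class RegexPIIDetector:
    """Hypothetical detector: flags text that looks like it contains an email or US SSN."""

    name = "pii"
    PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def check(self, text: str) -> Optional[Flag]:
        for label, pattern in self.PATTERNS.items():
            if pattern.search(text):
                return Flag(self.name, f"possible {label} detected")
        return None


class GuardEnsemble:
    """Runs every detector over a message and collects all resulting flags."""

    def __init__(self, detectors):
        self.detectors = detectors

    def screen(self, text: str) -> List[Flag]:
        # A message is flagged if any detector in the ensemble objects to it.
        return [f for d in self.detectors if (f := d.check(text)) is not None]


if __name__ == "__main__":
    guard = GuardEnsemble([
        KeywordDetector("violence", ["build a bomb"]),
        RegexPIIDetector(),
    ])
    for flag in guard.screen("Contact me at alice@example.com"):
        print(f"[{flag.detector}] {flag.reason}")
```

The point of the ensemble structure is that detectors stay independent: each one covers a narrow behaviour or topic, and new detectors can be added to the list without changing the screening logic.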
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
2 days, 6 hours ago | arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
2 days, 6 hours ago | arxiv.org
Efficient unitary designs and pseudorandom unitaries from permutations
2 days, 6 hours ago | arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Associate Principal Security Engineer
@ Activision Blizzard | Work from Home - CA
Security Engineer- Systems Integration
@ Meta | Bellevue, WA | Menlo Park, CA | New York City
Lead Security Engineer (Digital Forensic and IR Analyst)
@ Blue Yonder | Hyderabad
Senior Principal IAM Engineering Program Manager Cybersecurity
@ Providence | Redmond, WA, United States
Information Security Analyst II or III
@ Entergy | The Woodlands, Texas, United States