Online Safety Analysis for LLMs: a Benchmark, an Assessment, and a Path Forward
April 15, 2024, 4:10 a.m. | Xuan Xie, Jiayang Song, Zhehua Zhou, Yuheng Huang, Da Song, Lei Ma
Source: cs.CR updates on arXiv.org
Abstract: While Large Language Models (LLMs) have seen widespread application across numerous fields, their limited interpretability raises concerns about their safe operation along multiple dimensions, e.g., truthfulness, robustness, and fairness. Recent research has begun developing quality assurance methods for LLMs, introducing techniques such as offline detector-based methods and uncertainty estimation. However, these approaches predominantly concentrate on post-generation analysis, leaving online safety analysis of LLMs during the generation phase an unexplored area. To bridge this gap, …
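The abstract mentions uncertainty estimation as one existing quality-assurance technique. As a rough illustration of what an online (per-step) uncertainty signal might look like, the sketch below computes the Shannon entropy of the next-token distribution at each generation step; this is a generic, hypothetical example, not the method proposed in the paper, and the function names are invented for illustration.

```python
import math

def token_entropy(logits):
    """Shannon entropy (in nats) of the softmax distribution over
    next-token logits. Higher entropy suggests the model is less
    certain about its next token."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_sequence_uncertainty(per_step_logits):
    """Average per-token entropy across a generated sequence; a crude
    signal that could be thresholded during generation rather than
    only analyzed after the fact."""
    return sum(token_entropy(step) for step in per_step_logits) / len(per_step_logits)

# A peaked distribution yields low entropy; a flat one yields high entropy.
confident = [10.0, 0.0, 0.0, 0.0]
uncertain = [1.0, 1.0, 1.0, 1.0]
```

In an online setting, a monitor could watch `mean_sequence_uncertainty` (or each step's `token_entropy`) as tokens are produced and intervene mid-generation, whereas detector-based approaches typically inspect the completed output.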