On Protecting the Data Privacy of Large Language Models (LLMs): A Survey
March 11, 2024, 4:10 a.m. | Biwei Yan, Kun Li, Minghui Xu, Yueyan Dong, Yue Zhang, Zhaochun Ren, Xiuzheng Cheng
cs.CR updates on arXiv.org
Abstract: Large language models (LLMs) are complex artificial intelligence systems capable of understanding, generating, and translating human language. They learn language patterns by analyzing large amounts of text data, which allows them to perform writing, conversation, summarization, and other language tasks. Because LLMs process and generate large amounts of data, there is a risk of leaking sensitive information, which may threaten data privacy. This paper concentrates on elucidating the data privacy concerns associated with LLMs to foster …
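The leakage risk the abstract describes is often mitigated by scrubbing obvious sensitive identifiers from text before it enters an LLM pipeline. The following is a minimal sketch of that idea, not a method from the survey; the patterns and the `redact_pii` helper are illustrative assumptions, and real deployments use far more robust PII detection.

```python
import re

# Hypothetical example: redact obvious PII spans (email, SSN, phone)
# before the text is sent to, or stored by, an LLM system.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each matched PII span with a bracketed type tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact jane.doe@example.com or 555-123-4567."))
# → Contact [EMAIL] or [PHONE].
```

Regex-based redaction like this only catches well-formatted identifiers; it is a pre-processing safeguard, complementary to the training-time and inference-time protections a survey of LLM data privacy would cover.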