Use of LLMs for Illicit Purposes: Threats, Prevention Measures, and Vulnerabilities. (arXiv:2308.12833v1 [cs.CL])
cs.CR updates on arXiv.org
Spurred by the recent rapid increase in the development and distribution of
large language models (LLMs) across industry and academia, much recent work has
drawn attention to safety- and security-related threats and vulnerabilities of
LLMs, including in the context of potentially criminal activities.
Specifically, it has been shown that LLMs can be misused for fraud,
impersonation, and the generation of malware, while other authors have
considered the more general problem of AI alignment. It is important that
developers and practitioners …