Controlling Large Language Models to Generate Secure and Vulnerable Code. (arXiv:2302.05319v1 [cs.CR])
cs.CR updates on arXiv.org
Large language models (LMs) are increasingly pretrained on massive corpora of
open-source programs and applied to solve program synthesis tasks. However, a
fundamental limitation of LMs is that they are unaware of code security and
vulnerabilities during pretraining and inference. As a result, whether an LM
produces a secure or a vulnerable program is highly uncertain (e.g., roughly a
60%/40% split for GitHub Copilot according to a recent study). This greatly
impairs LMs' usability, especially in security-sensitive scenarios.
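The 60%/40% figure is an empirical estimate: sample many completions and count how often a security checker flags them. A minimal sketch of that measurement, where `generate_fn` stands in for any code LM and the regex checker is a crude placeholder for the real static analyzers (e.g., CodeQL) such studies rely on:

```python
import re

def is_vulnerable(code: str) -> bool:
    # Crude placeholder checker: flags a few well-known dangerous
    # Python APIs. Real evaluations use static analyzers such as CodeQL.
    return bool(re.search(r"\b(eval|exec|os\.system|pickle\.loads)\b", code))

def vulnerable_rate(generate_fn, prompt: str, n: int = 100) -> float:
    # Sample n completions of the same prompt and report the fraction
    # flagged as vulnerable; headline numbers like the ~40% Copilot
    # figure come from this kind of count aggregated over many prompts.
    flagged = sum(is_vulnerable(generate_fn(prompt)) for _ in range(n))
    return flagged / n
```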
To address this limitation, this work formulates a new problem …
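The excerpt cuts off before the proposed solution, but the problem it sets up is controlled code generation: condition the LM on a binary security property and sample accordingly. Below is a minimal sketch of that interface, assuming an off-the-shelf Hugging Face causal code LM; the plain-text control tag is purely illustrative, since the paper's actual conditioning technique is not described in this excerpt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Salesforce/codegen-350M-mono"  # any causal code LM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def generate(prompt: str, secure: bool, max_new_tokens: int = 128) -> str:
    # Encode the desired security property as a text prefix. This is an
    # illustrative stand-in, not the conditioning mechanism of the paper.
    tag = "# property: secure\n" if secure else "# property: vulnerable\n"
    inputs = tokenizer(tag + prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        do_sample=True,
        temperature=0.4,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate("def read_user_file(path):\n", secure=True))
```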