Can LLMs Patch Security Issues? (arXiv:2312.00024v2 [cs.CR] UPDATED)
cs.CR updates on arXiv.org
Large Language Models (LLMs) have shown impressive proficiency in code
generation. Nonetheless, similar to human developers, these models might
generate code that contains security vulnerabilities and flaws. Writing secure
code remains a substantial challenge, as vulnerabilities often arise during
interactions between programs and external systems or services, such as
databases and operating systems. In this paper, we propose a novel approach,
Feedback-Driven Solution Synthesis (FDSS), designed to explore the use of LLMs
in receiving feedback from Bandit, which is a …
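The feedback loop the abstract describes — generate code, run it through a static security analyzer, and feed the findings back for revision — can be sketched roughly as follows. This is a toy illustration only, not the paper's FDSS implementation: `scan_for_issues` is a hypothetical stand-in for Bandit (matching two patterns Bandit actually flags, B307 and B602), and `revise` is a hypothetical stand-in for an LLM revision call.

```python
import re

def scan_for_issues(code: str) -> list[str]:
    """Toy stand-in for Bandit: flag a couple of insecure Python patterns."""
    issues = []
    if re.search(r"\beval\(", code):
        issues.append("B307: use of eval() is insecure")
    if "shell=True" in code:
        issues.append("B602: subprocess call with shell=True")
    return issues

def revise(code: str, feedback: list[str]) -> str:
    """Toy stand-in for an LLM revision step: apply trivial mechanical fixes."""
    fixed = re.sub(r"\beval\(", "ast.literal_eval(", code)
    fixed = fixed.replace("shell=True", "shell=False")
    return fixed

def fdss(initial_code: str, max_rounds: int = 3) -> str:
    """Iterate until the analyzer reports no findings or rounds run out."""
    code = initial_code
    for _ in range(max_rounds):
        feedback = scan_for_issues(code)
        if not feedback:
            break  # no security findings; accept the solution
        code = revise(code, feedback)
    return code

print(fdss("result = eval(user_input)"))
# result = ast.literal_eval(user_input)
```

In the paper's actual setup, the analyzer feedback would be serialized into the LLM prompt rather than applied mechanically; the loop structure above is the part the abstract makes explicit.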