April 5, 2024, 4:11 a.m. | Yujia Fu, Peng Liang, Amjed Tahir, Zengyang Li, Mojtaba Shahin, Jiaxin Yu, Jinfu Chen

cs.CR updates on arXiv.org arxiv.org

arXiv:2310.02059v2 Announce Type: replace-cross
Abstract: Modern code generation tools, which utilize AI models such as Large Language Models (LLMs), have gained popularity for producing functional code. However, their usage presents security challenges, often resulting in insecure code being merged into the code base. Evaluating the quality of generated code, especially its security, is crucial. While prior research has explored various aspects of code generation, the focus on security has been limited, mostly examining code produced in controlled environments rather than real-world scenarios. To address …
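To illustrate the kind of security evaluation the abstract alludes to, here is a minimal, hypothetical sketch of scanning a generated snippet for a few insecure call patterns using Python's `ast` module. The `find_risky_calls` helper and the `RISKY_CALLS` set are illustrative assumptions, not the paper's method; real studies typically rely on mature analyzers (e.g., CodeQL) mapped to CWE weakness categories.

```python
import ast

# Hypothetical illustration: flag calls to a few known-risky builtins
# in a piece of LLM-generated code. Real security evaluations use
# full static analyzers with CWE mappings; this only shows the idea.
RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line, name) pairs for calls to known-risky builtins."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

# Example: a generated snippet that evaluates untrusted input.
generated = "user_input = input()\nresult = eval(user_input)\n"
print(find_risky_calls(generated))  # -> [(2, 'eval')]
```

Such lightweight checks catch only a narrow class of weaknesses; the limited security focus the abstract notes is partly because assessing real-world generated code requires much broader analysis than this.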

