Do Neutral Prompts Produce Insecure Code? FormAI-v2 Dataset: Labelling Vulnerabilities in Code Generated by Large Language Models
April 30, 2024, 4:11 a.m. | Norbert Tihanyi, Tamas Bisztray, Mohamed Amine Ferrag, Ridhi Jain, Lucas C. Cordeiro
cs.CR updates on arXiv.org (arxiv.org)
Abstract: This study provides a comparative analysis of state-of-the-art large language models (LLMs), analyzing how likely they are to generate vulnerabilities when writing simple C programs using a neutral zero-shot prompt. We address a significant gap in the literature concerning the security properties of code produced by these models without specific directives. N. Tihanyi et al. introduced the FormAI dataset at PROMISE '23, containing 112,000 GPT-3.5-generated C programs, of which over 51.24% were identified as vulnerable. We expand that work …
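To make the abstract's notion of "vulnerable" concrete, below is a minimal, hypothetical C program of the kind a neutral zero-shot prompt (for instance, "write a simple C program that greets the user by name") might plausibly yield. It is an illustrative sketch, not a sample drawn from the FormAI-v2 dataset; the flaw shown, an unbounded read into a fixed-size stack buffer, is a classic out-of-bounds write (CWE-787) of the sort vulnerability labelling would flag.

#include <stdio.h>

/* Hypothetical example, NOT taken from FormAI-v2: the kind of "simple"
 * program an LLM might produce for a neutral prompt. */
int main(void) {
    char name[16];

    printf("Enter your name: ");
    /* Vulnerable: "%s" with no width limit lets scanf write past the
     * 16-byte buffer (CWE-787, out-of-bounds write). A safe variant
     * would use scanf("%15s", name) or fgets(name, sizeof name, stdin). */
    scanf("%s", name);

    printf("Hello, %s!\n", name);
    return 0;
}

Compiling this program succeeds without warnings under default settings, which is precisely why automated labelling of generated code, rather than compiler diagnostics alone, is needed to surface such defects.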