Just another copy and paste? Comparing the security vulnerabilities of ChatGPT generated code and StackOverflow answers
March 26, 2024, 4:10 a.m. | Sivana Hamer, Marcelo d'Amorim, Laurie Williams
cs.CR updates on arXiv.org
Abstract: Sonatype's 2023 report found that 97% of developers and security leads integrate generative Artificial Intelligence (AI), particularly Large Language Models (LLMs), into their development process. Concerns about the security implications of this trend have been raised. Developers are now weighing the benefits and risks of LLMs against other relied-upon information sources, such as StackOverflow (SO), requiring empirical data to inform their choice. In this work, our goal is to raise software developers' awareness of the …