Web: http://arxiv.org/abs/2208.09727

Nov. 21, 2022, 2:20 a.m. | Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, Brendan Dolan-Gavitt, Siddharth Garg

cs.CR updates on arXiv.org (arxiv.org)

Large Language Models (LLMs) such as OpenAI Codex are increasingly being used
as AI-based coding assistants. Understanding the impact of these tools on
developers' code is paramount, especially as recent work has shown that LLMs may
suggest cybersecurity vulnerabilities. We conduct a security-driven user study
(N=58) to assess code written by student programmers when assisted by LLMs.
Given the potential severity of low-level bugs as well as their relative
frequency in real-world projects, we tasked participants with implementing a
singly-linked 'shopping …
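
For context, here is a minimal sketch of the kind of singly-linked "shopping list" structure the task describes, assuming a C implementation with heap-allocated nodes; the type and function names are illustrative and are not taken from the study materials. The comments flag the checks whose omission produces the low-level bugs (overflows, NULL dereferences, leaks) that a security-driven study of this task would count.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical node for a singly-linked "shopping list": each entry
 * holds an item name, a quantity, and a pointer to the next node. */
typedef struct item {
    char name[64];
    unsigned quantity;
    struct item *next;
} item_t;

/* Prepend a new item and return the new head, or the old head unchanged
 * if allocation fails. Forgetting checks like these (NULL from malloc,
 * bounded string copy) is exactly the class of low-level bug at issue. */
static item_t *add_item(item_t *head, const char *name, unsigned quantity) {
    item_t *node = malloc(sizeof *node);
    if (node == NULL)
        return head;                                       /* allocation failure: keep list intact */
    snprintf(node->name, sizeof node->name, "%s", name);   /* truncates, never overflows */
    node->quantity = quantity;
    node->next = head;
    return node;
}

/* Walk the list and release every node to avoid leaks. */
static void free_list(item_t *head) {
    while (head != NULL) {
        item_t *next = head->next;
        free(head);
        head = next;
    }
}

int main(void) {
    item_t *list = NULL;
    list = add_item(list, "milk", 2);
    list = add_item(list, "bread", 1);
    for (item_t *it = list; it != NULL; it = it->next)
        printf("%u x %s\n", it->quantity, it->name);
    free_list(list);
    return 0;
}
```

Prepending keeps insertion O(1), and the fixed-size buffer written with snprintf avoids the unbounded string copy that is a classic source of critical memory-safety bugs in exercises like this.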
