Aug. 10, 2023, 1:10 a.m. | Domenico Cotroneo, Cristina Improta, Pietro Liguori, Roberto Natella

cs.CR updates on arXiv.org

In this work, we assess the security of AI code generators via data
poisoning, i.e., an attack that injects malicious samples into the training
data so that the model generates vulnerable code. We poison the training data
by injecting increasing amounts of code containing security vulnerabilities
and assess the attack's success on different state-of-the-art models for code
generation. Our analysis shows that AI code generators are vulnerable to even
a small amount of data poisoning. Moreover, the attack does not impact the
correctness …

