all InfoSec news
Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks. (arXiv:2308.04451v1 [cs.CR])
cs.CR updates on arXiv.org arxiv.org
In this work, we assess the security of AI code generators via data
poisoning, i.e., an attack that injects malicious samples into the training
data to generate vulnerable code. We poison the training data by injecting
increasing amounts of code containing security vulnerabilities and assess the
attack's success on different state-of-the-art models for code generation. Our
analysis shows that AI code generators are vulnerable to even a small amount of
data poisoning. Moreover, the attack does not impact the correctness …
attack attacks code data data poisoning malicious poisoning poisoning attacks security training vulnerabilities vulnerable work