Jan. 9, 2023, 2:10 a.m. | Hojjat Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, Ben Zorn, Robert Sim

cs.CR updates on arXiv.org arxiv.org

With tools like GitHub Copilot, automatic code suggestion is no longer a
dream in software engineering. These tools, based on large language models, are
typically trained on massive corpora of code mined from unvetted public
sources. As a result, these models are susceptible to data poisoning attacks
where an adversary manipulates the model's training or fine-tuning phases by
injecting malicious data. Poisoning attacks could be designed to influence the
model's suggestions at run time for chosen contexts, such as inducing …

adversary attacks automatic code copilot data data poisoning dream engineering github github copilot influence language language models large malicious poisoning public result run software software engineering tools training

Lead Security Specialist

@ Fujifilm | Holly Springs, NC, United States

Security Operations Centre Analyst

@ Deliveroo | Hyderabad, India (Main Office)

CISOC Analyst

@ KCB Group | Kenya

Lead Security Engineer – Red Team/Offensive Security

@ FICO | Work from Home, United States

Cloud Security SME

@ Maveris | Washington, District of Columbia, United States - Remote

SOC Analyst (m/w/d)

@ Bausparkasse Schwäbisch Hall | Schwäbisch Hall, DE