all InfoSec news
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code
March 12, 2024, 4:11 a.m. | Cristina Improta
cs.CR updates on arXiv.org arxiv.org
Abstract: AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this …
ai models arxiv code cs.ai cs.cr cs.se data developers generated github hugging face language language models large natural natural language poisoning role security security concerns software writing
More from arxiv.org / cs.CR updates on arXiv.org
Jobs in InfoSec / Cybersecurity
CyberSOC Technical Lead
@ Integrity360 | Sandyford, Dublin, Ireland
Cyber Security Strategy Consultant
@ Capco | New York City
Cyber Security Senior Consultant
@ Capco | Chicago, IL
Sr. Product Manager
@ MixMode | Remote, US
Corporate Intern - Information Security (Year Round)
@ Associated Bank | US WI Remote
Senior Offensive Security Engineer
@ CoStar Group | US-DC Washington, DC