all InfoSec news
Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code
March 12, 2024, 4:11 a.m. | Cristina Improta
cs.CR updates on arXiv.org arxiv.org
Abstract: AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this …
ai models arxiv code cs.ai cs.cr cs.se data developers generated github hugging face language language models large natural natural language poisoning role security security concerns software writing
More from arxiv.org / cs.CR updates on arXiv.org
IDEA: Invariant Defense for Graph Adversarial Robustness
1 day, 8 hours ago |
arxiv.org
FairCMS: Cloud Media Sharing with Fair Copyright Protection
1 day, 8 hours ago |
arxiv.org
Jobs in InfoSec / Cybersecurity
SOC 2 Manager, Audit and Certification
@ Deloitte | US and CA Multiple Locations
Open-Source Intelligence (OSINT) Policy Analyst (TS/SCI)
@ WWC Global | Reston, Virginia, United States
Security Architect (DevSecOps)
@ EUROPEAN DYNAMICS | Brussels, Brussels, Belgium
Infrastructure Security Architect
@ Ørsted | Kuala Lumpur, MY
Contract Penetration Tester
@ Evolve Security | United States - Remote
Senior Penetration Tester
@ DigitalOcean | Canada