API Flaws Put AI Models at Risk of Data Poisoning
Dec. 5, 2023, 9:10 p.m. | GovInfoSecurity.com RSS Syndication (www.govinfosecurity.com)
Security researchers could access and modify an artificial intelligence code generation model developed by Facebook after scanning for API access tokens on AI developer platform Hugging Face and code repository GitHub. Tampering with training data is among the top threats to large language models.
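The exposure described above stems from API access tokens committed in plain text to public repositories. As a minimal sketch of how such scanning works (the token length and variable names here are assumptions, not the researchers' actual tooling), Hugging Face user access tokens carry a recognizable `hf_` prefix, so a simple pattern match over repository contents can surface candidates:

```python
import re

# Hugging Face user access tokens start with the "hf_" prefix followed by an
# alphanumeric body; the minimum length of 20 used here is an assumption for
# this sketch, chosen to reduce false positives on short identifiers.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")

def find_exposed_tokens(text: str) -> list[str]:
    """Return candidate Hugging Face access tokens found in a blob of text,
    e.g. the contents of a source file or notebook pulled from a public repo."""
    return HF_TOKEN_RE.findall(text)

# Example: a leaked token hard-coded in source (this value is made up).
sample = 'client = Client(token="hf_abcDEF1234567890abcDEF12")'
print(find_exposed_tokens(sample))
```

Any candidate found this way would then be tested against the platform's API to see whether it grants read or, worse, write access to hosted models, which is what enables the training-data tampering the article warns about.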
Jobs in InfoSec / Cybersecurity
Azure DevSecOps Cloud Engineer II
@ Prudent Technology | McLean, VA, USA
Security Engineer III - Python, AWS
@ JPMorgan Chase & Co. | Bengaluru, Karnataka, India
SOC Analyst (Threat Hunter)
@ NCS | Singapore, Singapore
Managed Services Information Security Manager
@ NTT DATA | Sydney, Australia
Senior Security Engineer (Remote)
@ Mattermost | United Kingdom
Penetration Tester (Part Time & Remote)
@ TestPros | United States - Remote