all InfoSec news
API Flaws Put AI Models at Risk of Data Poisoning
Dec. 5, 2023, 9:16 p.m. | DataBreachToday.co.uk RSS Syndication (www.databreachtoday.co.uk)
Security researchers could access and modify an artificial intelligence code generation model developed by Facebook after scanning for API access tokens on AI developer platform Hugging Face and code repository GitHub. Tampering with training data is among the top threats to large language models.
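The exposure described above comes down to access tokens committed in plain text to public repositories. As a minimal sketch of the kind of scanning the researchers performed, the snippet below greps text for credential-like token prefixes; the `hf_` prefix for Hugging Face user access tokens and `ghp_` for GitHub classic personal access tokens are real conventions, but the exact length thresholds and the `scan_for_tokens` helper are illustrative assumptions, not the researchers' actual tooling.

```python
import re

# Hedged sketch: flag credential-like strings in source text.
# "hf_" is the real prefix for Hugging Face user access tokens;
# "ghp_" is the prefix for GitHub classic personal access tokens.
# The minimum-length threshold (20 chars) is an assumption.
TOKEN_PATTERNS = {
    "huggingface": re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{20,}\b"),
}


def scan_for_tokens(text: str) -> list[tuple[str, str]]:
    """Return (kind, token) pairs for every token-like match in text."""
    hits = []
    for kind, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group(0)))
    return hits
```

A leaked token found this way can carry write scope, which is what turns a disclosure problem into a data-poisoning problem: anyone holding it can push modified weights or training data to the repositories it unlocks.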
More from www.databreachtoday.co.uk / DataBreachToday.co.uk RSS Syndication
Verizon Breach Report: Vulnerability Hacks Tripled in 2023
1 day, 4 hours ago | www.databreachtoday.co.uk
Jobs in InfoSec / Cybersecurity
Social Engineer For Reverse Engineering Exploit Study
@ Independent study | Remote
Application Security Engineer - Remote Friendly
@ Unit21 | San Francisco, CA; New York City; Remote USA
Cloud Security Specialist
@ AppsFlyer | Herzliya
Malware Analysis Engineer - Canberra, Australia
@ Apple | Canberra, Australian Capital Territory, Australia
Product CISO
@ Fortinet | Sunnyvale, CA, United States
Manager, Security Engineering
@ Thrive | United States - Remote