API Flaws Put AI Models at Risk of Data Poisoning
Dec. 5, 2023, 9:16 p.m. |
DataBreachToday.co.uk RSS Syndication www.databreachtoday.co.uk
Security researchers were able to access and modify an artificial intelligence code generation model developed by Facebook after scanning AI developer platform Hugging Face and code repository GitHub for exposed API access tokens. Tampering with training data, known as data poisoning, is among the top threats to large language models.
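The token-scanning technique described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual tooling: it assumes Hugging Face user access tokens begin with the `hf_` prefix followed by an alphanumeric string, and the minimum length used in the regex is an assumption for demonstration purposes.

```python
import re

# Assumed pattern: Hugging Face access tokens start with "hf_" followed by
# an alphanumeric string; the exact length requirement here is illustrative.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return substrings of `text` that look like Hugging Face access tokens."""
    return HF_TOKEN_RE.findall(text)

# Example: a secret accidentally committed in source code.
sample = 'api_key = "hf_' + "A" * 34 + '"  # oops, hardcoded credential'
print(find_candidate_tokens(sample))
```

In practice, researchers run patterns like this across public repositories and model hubs at scale; any match is then validated against the platform's API to see whether the token is live and what permissions (read vs. write) it grants, since a write-capable token is what enables tampering with a hosted model or its training data.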
More from www.databreachtoday.co.uk / DataBreachToday.co.uk RSS Syndication
Hackers Target US AI Experts With Customized RAT
1 day, 13 hours ago |
www.databreachtoday.co.uk
Palo Alto to Acquire IBM QRadar SIEM Business
2 days, 9 hours ago |
www.databreachtoday.co.uk
Jobs in InfoSec / Cybersecurity
Information Security Engineers
@ D. E. Shaw Research | New York City
Technology Security Analyst
@ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst
@ Valley Water | San Jose, CA
Senior - Penetration Tester
@ Deloitte | Madrid, Spain
Associate Cyber Incident Responder
@ Highmark Health | Working at Home - Pennsylvania
Senior Insider Threat Analyst
@ IT Concepts Inc. | Woodlawn, Maryland, United States