June 15, 2023, 2 p.m. | Teri Robinson

Security Boulevard securityboulevard.com


A newly discovered ChatGPT-based attack technique, dubbed AI package hallucination, lets attackers publish their own malicious packages under package names that have never actually been published. In this way, attackers can execute supply chain attacks by deploying malicious libraries to known repositories. The technique plays off the fact that generative AI platforms like ChatGPT…
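The attack works because developers may install an AI-suggested dependency without checking whether the name was ever legitimately published. A minimal defensive sketch, assuming a hypothetical curated allowlist (`TRUSTED_PACKAGES` and the package names below are illustrative, not a real policy):

```python
# Sketch of a guard against "AI package hallucination": before installing
# a dependency suggested by an AI assistant, check the name against a
# curated set of packages the team already trusts, and flag anything
# unknown for manual review instead of installing it blindly.

TRUSTED_PACKAGES = {"requests", "numpy", "flask"}  # hypothetical allowlist


def vet_suggested_packages(suggested):
    """Split AI-suggested package names into trusted and unvetted lists."""
    trusted, unvetted = [], []
    for name in suggested:
        normalized = name.strip().lower()
        (trusted if normalized in TRUSTED_PACKAGES else unvetted).append(normalized)
    return trusted, unvetted


if __name__ == "__main__":
    ok, review = vet_suggested_packages(["requests", "hallucinated-utils"])
    print(ok)      # trusted names, safe to install
    print(review)  # unknown names -> review before install
```

In practice a team might also verify the name against the repository's own registry (e.g. a package index lookup) and inspect publication history before trusting it; the allowlist above is only the simplest form of that check.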


The post ChatGPT Spreads Malicious Packages in AI Package Hallucination Attack appeared first on Security Boulevard.

