AI Package Hallucination – Hackers Abusing ChatGPT, Gemini to Spread Malware
GBHackers On Security (gbhackers.com)
The research investigates the persistence and scale of AI package hallucination, a technique in which LLMs recommend packages that do not exist, names an attacker can then register and fill with malicious code. Using the LangChain framework, the researchers expanded on previous findings by testing a broader range of questions, programming languages (Python, Node.js, Go, .NET, and Ruby), and models (GPT-3.5-Turbo, GPT-4, Bard, and Cohere). The aim is […]
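The core of the experiment can be approximated in a few lines of LangChain code: ask a model for package recommendations, then check each returned name against the relevant registry. The sketch below is a minimal illustration for Python and PyPI only, not the researchers' actual harness; the prompt, model choice, and registry check are assumptions.

```python
# Minimal sketch of an AI package hallucination check (assumes the
# langchain-openai and requests packages are installed and that
# OPENAI_API_KEY is set in the environment).
import requests
from langchain_openai import ChatOpenAI

def pypi_package_exists(name: str) -> bool:
    """Return True if the package name is registered on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# Hypothetical prompt; the study used a much larger question set.
question = (
    "List five Python packages for parsing EDI healthcare claim files. "
    "Reply with only the package names, one per line."
)
answer = llm.invoke(question).content

for name in (line.strip() for line in answer.splitlines() if line.strip()):
    status = "exists" if pypi_package_exists(name) else "HALLUCINATED (unregistered)"
    print(f"{name}: {status}")
```

Any name flagged as unregistered is exactly the kind of hallucination the research measures: an attacker who publishes a package under that name would be installed by developers who copy the model's suggestion verbatim.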