April 5, 2024, 4:43 a.m. | Balaji

GBHackers On Security gbhackers.com

The research investigates the persistence and scale of AI package hallucination, a technique in which LLMs recommend non-existent packages whose names attackers can then register and publish as malware. The Langchain framework allowed the researchers to expand on previous findings by testing a broader range of questions, programming languages (Python, Node.js, Go, .NET, and Ruby), and models (GPT-3.5-Turbo, GPT-4, Bard, and Cohere). The aim is […]
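The excerpt describes the core risk: a model may suggest a dependency name that no registry actually hosts, leaving that name free for an attacker to claim. As a rough illustration (not part of the GBHackers research, and with a hypothetical suggestion list), the sketch below checks LLM-suggested Python package names against PyPI's public JSON API and flags names that do not resolve.

```python
# Minimal sketch: verify that an LLM-suggested dependency actually exists
# on PyPI before installing it. The suggested package names below are
# hypothetical examples, not names reported in the research.
import urllib.error
import urllib.request


def exists_on_pypi(package: str) -> bool:
    """Return True if `package` is a published PyPI project."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 means the name is currently unclaimed on PyPI.
        return False


# Hypothetical list of dependencies suggested by an LLM answer
suggested = ["requests", "totally-made-up-helper-lib"]
for name in suggested:
    status = "exists" if exists_on_pypi(name) else "NOT on PyPI - possible hallucination"
    print(f"{name}: {status}")
```

A missing name only means it is currently unclaimed; a name that does resolve could still be a typosquat or a freshly registered malicious package, so a check like this is a first filter, not a verdict.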


The post AI Package Hallucination – Hackers Abusing ChatGPT, Gemini to Spread Malware appeared first on GBHackers on Security.
