April 5, 2024, 4:43 a.m. | Balaji

GBHackers On Security gbhackers.com

The research investigates the persistence and scale of AI package hallucination, a technique where LLMs recommend non-existent malicious packages.  The Langchain framework has allowed for the expansion of previous findings by testing a more comprehensive range of questions, programming languages (Python, Node.js, Go,.NET, and Ruby), and models (GPT-3.5-Turbo, GPT-4, Bard, and Cohere).  The aim is […]


The post AI Package Hallucination – Hackers Abusing ChatGPT, Gemini to Spread Malware appeared first on GBHackers on Security | #1 Globally Trusted Cyber …

abusing chatgpt expansion findings framework gemini gpt gpt-3 gpt-3.5 gpt-4 hackers hallucination langchain languages llms malicious malicious packages malware node node.js non package packages persistence programming python questions research ruby scale testing

Azure DevSecOps Cloud Engineer II

@ Prudent Technology | McLean, VA, USA

Security Engineer III - Python, AWS

@ JPMorgan Chase & Co. | Bengaluru, Karnataka, India

SOC Analyst (Threat Hunter)

@ NCS | Singapore, Singapore

Managed Services Information Security Manager

@ NTT DATA | Sydney, Australia

Senior Security Engineer (Remote)

@ Mattermost | United Kingdom

Penetration Tester (Part Time & Remote)

@ TestPros | United States - Remote