WormGPT: Business email compromise amplified by ChatGPT hack

July 31, 2023, 12:16 p.m. | John P. Mello Jr.

Security Boulevard | securityboulevard.com

Since OpenAI introduced ChatGPT to the public last year, generative AI large language models (LLMs) have been popping up like mushrooms after a summer rain. So it was only a matter of time before online predators, frustrated by the guardrails deployed by developers to keep abuse of the LLMs in check, cooked up their own model for malevolent purposes.

