e
June 20, 2023, 3 p.m. |

Embrace The Red embracethered.com

OpenAI continues to add plugins with security vulnerabilities to their store.
In particular powerful plugins that can impersonate a user are not getting the required security scrutiny, or a general mitigation at the platform level.
As a brief reminder, one of the challenges Large Language Model (LLM) User-Agents, like ChatGPT, and plugins face is the Confused Deputy Problem / Plugin Request Forgery Attacks, which means that during a Prompt Injection attack an adversary can issue commands to plugins to cause …

challenges code general language large large language model llm mitigation openai platform plugin plugins reminder security source code stolen store vulnerabilities website

Azure DevSecOps Cloud Engineer II

@ Prudent Technology | McLean, VA, USA

Security Engineer III - Python, AWS

@ JPMorgan Chase & Co. | Bengaluru, Karnataka, India

SOC Analyst (Threat Hunter)

@ NCS | Singapore, Singapore

Managed Services Information Security Manager

@ NTT DATA | Sydney, Australia

Senior Security Engineer (Remote)

@ Mattermost | United Kingdom

Penetration Tester (Part Time & Remote)

@ TestPros | United States - Remote