June 29, 2023, 2 p.m. |

Positive Security | IT Security research and happy coincidences positive.security

We leverage indirect prompt injection to trick Auto-GPT (GPT-4) into executing arbitrary code when it is asked to perform a seemingly harmless task such as text summarization on a malicious website, and discovered vulnerabilities that allow escaping its sandboxed execution environment.

auto auto-gpt code container docker environment gpt gpt-4 hacking injection malicious malicious website prompt injection task text vulnerabilities website

Azure DevSecOps Cloud Engineer II

@ Prudent Technology | McLean, VA, USA

Security Engineer III - Python, AWS

@ JPMorgan Chase & Co. | Bengaluru, Karnataka, India

SOC Analyst (Threat Hunter)

@ NCS | Singapore, Singapore

Managed Services Information Security Manager

@ NTT DATA | Sydney, Australia

Senior Security Engineer (Remote)

@ Mattermost | United Kingdom

Penetration Tester (Part Time & Remote)

@ TestPros | United States - Remote