Aggregator
有关业余无线电的碎碎念
HaE入门到精通:三条影响你一生的HaE规则
国庆快乐
A Closer Look at the Snatch Data Ransom Group
国庆快乐
Microsoft Fixes Data Exfiltration Vulnerability in Azure AI Playground
Large Language Model (LLM) applications and chatbots are quite commonly vulnerable to data exfiltration. In particular data exfiltration via Image Markdown Injection is quite frequent.
Microsoft fixed such a vulnerability in Bing Chat, Anthropic fixed it in Claude, and ChatGPT has a known vulnerability as Open AI “won’t fix” the issue.
This post describes a variant in the Azure AI Playground and how Microsoft fixed it.
From Untrusted Data to Data ExfiltrationWhen untrusted data makes it into the LLM prompt context it can instruct the model to inject an image markdown element. Clients frequently render this using an HTML img tag and if untrusted data is involved the attacker can control the src attribute.
中秋团圆
Advanced Data Exfiltration Techniques with ChatGPT
During an Indirect Prompt Injection Attack an adversary can exfiltrate chat data from a user by instructing ChatGPT to render images and append information to the URL (Image Markdown Injection), or by tricking a user to click a hyperlink.
Sending large amounts of data to a third party server via URLs might seem inconvenient or limiting…
Let’s say we want something more, aehm, powerful, elegant and exciting.
ChatGPT Plugins and Exfiltration LimitationsPlugins are an extension mechanism with little security oversight or enforced review process.