Aggregator
ShiroExploit Usage Guide
Machine Learning Attack Series: Overview
What a journey it has been. I wrote quite a bit about machine learning from a red teaming/security testing perspective this year. It was brought to my attention that a convenient “index page” with all Husky AI and related blog posts would be useful. Here it is.
Machine Learning Basics and Building Husky AI
- Getting the hang of machine learning
- The machine learning pipeline and attacks
- Husky AI: Building a machine learning system
- MLOps - Operationalizing the machine learning model
- Threat modeling a machine learning system
- Grayhat Red Team Village Video: Building and breaking a machine learning system
- Assume Bias and Responsible AI
- Brute forcing images to find incorrect predictions
- Smart brute forcing
- Perturbations to misclassify existing images
- Adversarial Robustness Toolbox Basics
- Image Scaling Attacks
- Stealing a model file: Attacker gains read access to the model
- Backdooring models: Attacker modifies persisted model file
- Repudiation Threat and Auditing: Catching modifications and unauthorized access
- Attacker modifies Jupyter Notebook file to insert a backdoor
- CVE 2020-16977: VS Code Python Extension Remote Code Execution
- Using Generative Adversarial Networks (GANs) to create fake husky images
- Using Microsoft Counterfit to create adversarial examples
- Backdooring Pickle Files
- Backdooring Keras Model Files and How to Detect It
- Participating in the Microsoft Machine Learning Security Evasion Competition - Bypassing malware models by signing binaries
- Husky AI Github Repo
As you can see, there are many machine learning specific attacks, but also a lot of “typical” red teaming techniques that put AI/ML systems at risk. For instance, well-known attacks such as SSH Agent Hijacking, weak access control, and widely exposed credentials will likely help achieve objectives during red teaming operations.
Machine Learning Attack Series: Generative Adversarial Networks (GANs)
In this post we will explore Generative Adversarial Networks (GANs) to create fake husky images. The goal is, of course, to have “Husky AI” misclassify them as real huskies.
If you want to learn more about Husky AI visit the Overview post.
Generative Adversarial Networks
One of the attacks I wanted to investigate for a while was the creation of fake images to trick Husky AI. The best approach seemed to be using Generative Adversarial Networks (GANs). As it happened, deeplearning.ai had just started offering a GAN course by Sharon Zhou.
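To give a feel for the moving parts, below is a minimal GAN sketch in Keras. The image size, layer sizes, and training loop are illustrative assumptions on my part, not the architecture from the course or from the Husky AI post. The core idea is the adversarial round trip: the discriminator learns to separate real husky photos from generated ones, while the generator learns to produce images the discriminator labels as real.

```python
# Minimal GAN sketch (illustrative only; shapes and layers are assumptions).
import numpy as np
from tensorflow.keras import layers, Model

LATENT_DIM = 100          # size of the random noise vector fed to the generator
IMG_SHAPE = (64, 64, 3)   # assumed image size, not necessarily Husky AI's input shape

def build_generator():
    """Upsamples a noise vector into a fake image with pixel values in [-1, 1]."""
    z = layers.Input(shape=(LATENT_DIM,))
    x = layers.Dense(8 * 8 * 128, activation="relu")(z)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(128, 4, strides=2, padding="same", activation="relu")(x)  # 16x16
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)   # 32x32
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)   # 64x64
    img = layers.Conv2D(3, 3, padding="same", activation="tanh")(x)
    return Model(z, img, name="generator")

def build_discriminator():
    """Classifies images as real (1) or fake (0)."""
    img = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 4, strides=2, padding="same")(img)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Conv2D(128, 4, strides=2, padding="same")(x)
    x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="sigmoid")(x)
    return Model(img, out, name="discriminator")

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: the generator is trained through a frozen discriminator.
discriminator.trainable = False
z = layers.Input(shape=(LATENT_DIM,))
gan = Model(z, discriminator(generator(z)))
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images, batch_size=32):
    """One adversarial round. real_images: batch of real photos scaled to [-1, 1]."""
    noise = np.random.normal(size=(batch_size, LATENT_DIM))
    fakes = generator.predict(noise, verbose=0)
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fakes, np.zeros((batch_size, 1)))
    # The generator is rewarded when the discriminator labels its fakes as real.
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

After enough training rounds, the generator's outputs can be fed to the Husky AI model to see whether they are classified as real huskies.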
What I Learned in My First Year on the Job
IoTSec.io - The First Open IoT Security Threat Intelligence Search Engine in China
NCSC Cyber Threat Report for 2019/20 released
Attacking 3389 (RDP) via Pass-the-Hash (PTH)
Assuming Bias and Responsible AI
There are plenty of examples of artificial intelligence and machine learning systems that made it into the news because of biased predictions and failures.
Here are a few examples of AI/ML gone wrong:
- Amazon had an AI recruiting tool which favored men over women for technical jobs
- The Microsoft chat bot named “Tay” which turned racist and sexist rather quickly
- A doctor at the Jupiter Hospital in Florida referred to IBM’s AI system for helping recommend cancer treatments as “a piece of sh*t”
- Facebook’s AI got someone arrested for incorrectly translating text
The list of AI failures goes on…