Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
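A minimal sketch of an ART evasion attack, assuming a toy PyTorch model and random MNIST-shaped data (both placeholders, not from ART's docs): the model is wrapped in `PyTorchClassifier` and attacked with `FastGradientMethod`.

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Placeholder model and data for illustration only.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Wrap the model so ART attacks can query logits and gradients.
classifier = PyTorchClassifier(
    model=model,
    loss=criterion,
    optimizer=optimizer,
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # stand-in batch
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)  # adversarially perturbed copies of x
```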
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. Docs: https://textattack.readthedocs.io/en/master/
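A hedged sketch of running one of TextAttack's built-in attack recipes (TextFooler) against a Hugging Face classifier; the checkpoint name and dataset here are assumptions chosen for illustration, not prescribed by TextAttack.

```python
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Assumed checkpoint; any sequence-classification model/tokenizer pair works.
name = "textattack/bert-base-uncased-rotten-tomatoes"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("rotten_tomatoes", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()  # prints per-example attack results
```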
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
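A minimal sketch of the Foolbox workflow with its PyTorch bindings; the torchvision ResNet-18, sample images, and epsilon are assumptions used only to make the example self-contained.

```python
import torchvision.models as models
import foolbox as fb

# Assumed victim model: a pretrained torchvision ResNet-18.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# Sample ImageNet images bundled with Foolbox, attacked with L-inf PGD.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)
print("attack success rate:", is_adv.float().mean().item())
```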
A PyTorch adversarial library for attack and defense methods on images and graphs
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Implementation of Papers on Adversarial Examples
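As a reference point for entries like the one above, here is a from-scratch sketch of the FGSM update from "Explaining and Harnessing Adversarial Examples" (Goodfellow et al., 2015) in plain PyTorch; the model and data are placeholders.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps):
    """Return x perturbed by eps * sign(grad_x loss), clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Placeholder model and batch.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(model, x, y, eps=0.1)
```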
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
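A hedged sketch of auto_LiRPA's basic usage pattern: wrap a model, declare an L-inf perturbation, and compute certified bounds on the outputs. The toy network and epsilon are assumptions.

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Placeholder network and inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
x = torch.rand(2, 1, 28, 28)

# Wrap the model and attach an L-inf ball of radius eps around x.
bounded_model = BoundedModule(model, torch.empty_like(x))
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.01)
x_bounded = BoundedTensor(x, ptb)

# Lower/upper bounds on every logit over all inputs within the eps-ball.
lb, ub = bounded_model.compute_bounds(x=(x_bounded,), method="CROWN")
```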
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, and 2024)
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published at ICLR 2018)
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
Library containing PyTorch implementations of various adversarial attacks and resources
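A minimal sketch using torchattacks' PGD attack; the model, data, and hyperparameters are placeholders chosen for illustration.

```python
import torch
import torch.nn as nn
import torchattacks

# Placeholder model and CIFAR-shaped inputs in [0, 1].
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10)).eval()
images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

# L-inf PGD with assumed epsilon, step size, and step count.
atk = torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10)
adv_images = atk(images, labels)
```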
PyTorch library for adversarial attack and training
[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Revisiting Transferable Adversarial Images (arXiv)
Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017)
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Understanding and Improving Fast Adversarial Training [NeurIPS 2020]
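Fast adversarial training replaces multi-step PGD with a single FGSM step from a random start. Below is a hedged sketch of one such training step in plain PyTorch; it is not the paper's code, and the model, data, and epsilon are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
eps, alpha = 8 / 255, 10 / 255  # assumed perturbation budget and step size

x = torch.rand(16, 1, 28, 28)
y = torch.randint(0, 10, (16,))

# Single FGSM step from a random start inside the eps-ball.
delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
loss = nn.functional.cross_entropy(model(x + delta), y)
loss.backward()
delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()

# Update the model on the perturbed batch.
optimizer.zero_grad()
nn.functional.cross_entropy(model((x + delta).clamp(0, 1)), y).backward()
optimizer.step()
```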
Certified defense against adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTorch).
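A toy illustration of the interval bound propagation (IBP) idea behind such certified defenses, not the repository's API: for a linear layer, the input box is propagated through its center and radius, with |W| acting on the radius.

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 3)          # placeholder affine layer
x = torch.rand(1, 4)
eps = 0.01                       # assumed L-inf perturbation radius
lower, upper = x - eps, x + eps

# Propagate the box [lower, upper] through the affine layer.
center, radius = (lower + upper) / 2, (upper - lower) / 2
out_center = center @ layer.weight.t() + layer.bias
out_radius = radius @ layer.weight.abs().t()
out_lower, out_upper = out_center - out_radius, out_center + out_radius
```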