[ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be Careful What You Smooth For".
Updated Aug 7, 2024 · Jupyter Notebook
[IEEE S&P 22] "LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis" by Fan Wu, Yunhui Long, Ce Zhang, Bo Li
Code for "Variational Model Inversion Attacks" by Wang et al., NeurIPS 2021
Split Learning Simulation Framework for LLMs
The code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP '22)
Marich is a model-agnostic extraction algorithm. It uses public data to query a private model, aggregates the predicted labels, and constructs a distributionally equivalent, maximally information-leaking extracted model.
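The query-then-train extraction loop Marich's description outlines can be sketched as follows. This is a minimal, hypothetical illustration, not Marich's actual algorithm: the private model is a toy threshold classifier, and the "aggregation" step simply fits a surrogate threshold to the labels returned by queries.

```python
def private_model(x):
    # Secret decision rule the attacker can only query, never inspect.
    return 1 if x >= 0.37 else 0

def extract(public_queries):
    # Step 1: query the private model on public data, record predicted labels.
    labeled = [(x, private_model(x)) for x in public_queries]
    # Step 2: aggregate the labels into an extracted model. Here we learn a
    # single threshold separating the observed 0-labeled and 1-labeled points.
    ones = [x for x, y in labeled if y == 1]
    zeros = [x for x, y in labeled if y == 0]
    threshold = (max(zeros) + min(ones)) / 2 if ones and zeros else 0.5
    return lambda x: 1 if x >= threshold else 0

# Public query set: evenly spaced probes in [0, 1].
queries = [i / 100 for i in range(101)]
extracted = extract(queries)

# Measure how closely the extracted model mimics the private one.
agreement = sum(extracted(x) == private_model(x) for x in queries) / len(queries)
print(agreement)  # → 1.0
```

With a dense enough public query set, the surrogate matches the private model on the probed range, which is the leakage model-extraction attacks exploit.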
Source code for https://arxiv.org/abs/2301.10053
[NeurIPS 2024] "Pseudo-Private Data Guided Model Inversion Attacks"
A mitigation method against privacy violation attacks on face recognition systems