🕵️‍♂️ ERASE: Bypassing Collaborative Detection of AI Counterfeit (Model Weights)
Qianyun Yang1 Peizhuo Lv2 Yingjiu Li3 Shengzhi Zhang4 Yuxuan Chen1 Zixu Li1 Yupeng Hu1
1Shandong University 2Nanyang Technological University 3University of Oregon 4Boston University
Paper: [Accepted by IEEE TDSC 2026] (Coming Soon)
GitHub Repository: iLearn-Lab/TDSC26-ERASE
Model Information
1. Model Name
ERASE (comprehensive counterfeit ArtifactS Elimination) Checkpoints.
2. Task Type & Applicable Tasks
- Task Type: Adversarial Attack / AI-Generated Image Stealth (AIGI-S) / Image-to-Image
- Applicable Tasks: Bypassing AI-generated image detectors (both single detectors and collaborative multi-detector environments) while maintaining exceptionally high visual fidelity.
3. Project Introduction
With the rapid development of generative AI, the issue of deepfakes has become increasingly severe. Existing AI-Generated Image Stealth (AIGI-S) methods typically optimize against a single detector and often fail when facing real-world "Collaborative Detection". Moreover, they often introduce obvious artifacts visible to human observers.
ERASE is a stealth optimization framework that innovatively combines:
- Sensitive Feature Attack
- Diffusion Chain Attack (optimization-free)
- Decoupled Frequency Domain Processing
This Hugging Face repository hosts the pre-trained weights required to run the Decoupled Frequency Domain Processing and the Surrogate Classifiers, specifically noise_prototype_VAE.pt, dncnn_color_blind.pth, and the ckpt_ori surrogate weights.
4. Training Data Source
The surrogate classifiers and related components were primarily trained and evaluated on the GenImage dataset, following the standard task settings of AIGI-S evaluation.
Usage & Basic Inference
These weights are designed to work out of the box with the official ERASE GitHub repository.
Step 1: Prepare the Environment
Clone the GitHub repository and install dependencies:
git clone https://github.com/iLearn-Lab/TDSC26-ERASE ERASE
cd ERASE
conda create -n erase python=3.9 -y
conda activate erase
pip install -r requirements.txt
Step 2: Download Model Weights
Download the files from this Hugging Face repository (ckpt_ori folder, noise_prototype_VAE.pt, dncnn_color_blind.pth) and place them in the checkpoints/ directory of your cloned GitHub repo. Your structure should look like this:
ERASE/
└── checkpoints/
    ├── ckpt_ori/               # Surrogate model weights (E/R/D/S)
    ├── noise_prototype_VAE.pt  # Frequency VAE weights
    └── dncnn_color_blind.pth   # Denoising/frequency weights
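Before launching a run, it can save time to verify that the downloaded weights sit where main.py expects them. The helper below is a hypothetical convenience script, not part of the ERASE codebase; it only checks the paths listed in the tree above.

```python
from pathlib import Path

# Expected layout from the model card; adjust if you place weights elsewhere.
REQUIRED = [
    "checkpoints/ckpt_ori",
    "checkpoints/noise_prototype_VAE.pt",
    "checkpoints/dncnn_color_blind.pth",
]

def missing_checkpoints(repo_root: str) -> list:
    """Return the expected checkpoint paths that are absent under repo_root."""
    root = Path(repo_root)
    return [rel for rel in REQUIRED if not (root / rel).exists()]

if __name__ == "__main__":
    missing = missing_checkpoints(".")
    if missing:
        print("Missing weights:", ", ".join(missing))
    else:
        print("All checkpoint files found.")
```

Run it from the repository root after Step 2; an empty result means you are ready for Step 3.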
Step 3: Run the Attack
Use main.py from the code repository to perform basic inference and generate adversarial images:
python main.py \
--images_root ./input_images \
--save_dir ./output \
--model_name E,R,D,S \
--diffusion_steps 20 \
--start_step 18 \
--iterations 10 \
--is_encoder 1 \
--encoder_weights ./checkpoints/noise_prototype_VAE.pt \
--eps 4 \
--batch_size 4 \
--device cuda:0
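For experiments that compare several perturbation budgets, the invocation above can be assembled programmatically. The sweep script below is a hypothetical sketch (not shipped with ERASE); it only restates the flags from the command above and varies --eps.

```python
import subprocess

# Flags copied from the model card's example command; values are strings
# because they are passed straight to the CLI.
BASE_ARGS = {
    "--images_root": "./input_images",
    "--model_name": "E,R,D,S",
    "--diffusion_steps": "20",
    "--start_step": "18",
    "--iterations": "10",
    "--is_encoder": "1",
    "--encoder_weights": "./checkpoints/noise_prototype_VAE.pt",
    "--batch_size": "4",
    "--device": "cuda:0",
}

def build_command(eps: int, save_dir: str) -> list:
    """Flatten the argument dict into an argv list for subprocess.run."""
    cmd = ["python", "main.py", "--eps", str(eps), "--save_dir", save_dir]
    for flag, value in BASE_ARGS.items():
        cmd += [flag, value]
    return cmd

# Example sweep (run from the repo root with the environment activated):
# for eps in (2, 4, 8):
#     subprocess.run(build_command(eps, f"./output_eps{eps}"), check=True)
```

Writing each budget to its own save_dir keeps the outputs separable for later detector evaluation.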
⚠️ Limitations & Notes
Disclaimer: This tool and its associated model weights are strictly intended for academic research, AI security evaluation, and robustness testing.
- It is strictly prohibited to use this repository for any malicious forgery, fraud, or other illegal/unethical purposes.
- Users bear full legal responsibility for any consequences arising from improper use.
Citation
If you find our weights or code useful for your research, please consider leaving a Star on our GitHub repo and citing our paper:
@article{yang2026erase,
  title={ERASE: Bypassing Collaborative Detection of AI Counterfeit via Comprehensive Artifacts Elimination},
  author={Yang, Qianyun and Lv, Peizhuo and Li, Yingjiu and Zhang, Shengzhi and Chen, Yuxuan and Chen, Zhiwei and Li, Zixu and Hu, Yupeng},
  journal={IEEE Transactions on Dependable and Secure Computing},
  year={2026},
  publisher={IEEE}
}