MixBridge: Heterogeneous Image-to-Image Backdoor Attack through Mixture of Schrödinger Bridges
Authors: Shixi Qin, Zhiyong Yang, Shilong Bao, Shi Wang, Qianqian Xu, Qingming Huang
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies across diverse generation tasks speak to the efficacy of MixBridge. Finally, we validate the effectiveness of MixBridge on the ImageNet and CelebA datasets. Our results demonstrate the model's dual capabilities: producing high-quality benign outputs when given clean input images (i.e., utility) and generating heterogeneous malicious outputs when input images contain triggers (i.e., specificity). |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 101408, China 2Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China 3Key Laboratory of Big Data Mining and Knowledge Management (BDKM), University of Chinese Academy of Sciences, Beijing 101408, China. Correspondence to: Zhiyong Yang <EMAIL>, Qingming Huang <EMAIL>. |
| Pseudocode | Yes | Detailed training and generation procedures are provided in Alg. 1 and Alg. 2, respectively. Algorithm 1 MixBridge Training. Algorithm 2 MixBridge Generation. |
| Open Source Code | Yes | The code is available at: https://github.com/qsx830/MixBridge. |
| Open Datasets | Yes | The experiments of super-resolution are conducted on the CelebA dataset (Liu et al., 2015). The experiments of image inpainting are conducted on ImageNet 256×256 (Deng et al., 2009). |
| Dataset Splits | No | The paper uses standard datasets (CelebA, ImageNet) but does not explicitly provide information on how these datasets were split into training, validation, and test sets (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | Yes | In the first stage, we train each expert for 2500 iterations using a single 3090 24GB GPU. In the second stage, we employ model parallelization, assigning experts to different 3090 24GB GPUs. |
| Software Dependencies | No | The paper mentions using an 'AdamW optimizer' but does not specify programming languages, libraries, or other software with version numbers needed for replication. |
| Experiment Setup | Yes | Specifically, we set the learning rate to 5 × 10⁻⁵ with an AdamW optimizer (Loshchilov & Hutter) in both stages. We adopt 1000 training intervals (i.e., steps between t = 1 and t = 0), with the diffusion variance increasing linearly from 10⁻⁴ to 2 × 10⁻². In the first stage, we train each expert for 2500 iterations... The combined MixBridge model is trained for 1000 iterations. In each iteration, we train a batch of 256 image pairs. |
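The reported setup (1000 training intervals with diffusion variance increasing linearly from 10⁻⁴ to 2 × 10⁻², AdamW at lr 5 × 10⁻⁵, batch size 256) can be sketched in plain Python. This is a minimal illustration of the stated hyperparameters only; the function and variable names are hypothetical and the paper's actual implementation may organize these values differently.

```python
# Hedged sketch of the reported training configuration.
# All names below are illustrative, not from the paper's codebase.

NUM_STEPS = 1000          # training intervals between t = 1 and t = 0
BETA_MIN, BETA_MAX = 1e-4, 2e-2   # linear diffusion variance range
LEARNING_RATE = 5e-5      # AdamW learning rate, both stages
BATCH_SIZE = 256          # image pairs per iteration

def linear_beta_schedule(n=NUM_STEPS, lo=BETA_MIN, hi=BETA_MAX):
    """Diffusion variances beta_1..beta_n, linearly spaced from lo to hi."""
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

betas = linear_beta_schedule()
print(len(betas), betas[0], betas[-1])
```

In a typical PyTorch setup these constants would feed `torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)` and a precomputed `betas` tensor, but the paper does not name its framework, so this remains an assumption.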