Noise Optimized Conditional Diffusion for Domain Adaptation

Authors: Lingkun Luo, Shiqiang Hu, Liming Chen

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments across 5 benchmark datasets and 29 DA tasks demonstrate significant performance gains of NOCDDA over 31 state-of-the-art methods, validating its robustness and effectiveness. Section 4 (Experiments) covers Dataset Description, Experimental Setup, 31 Baseline Methods, and Experimental Results and Discussion, and includes an Ablation Study that further discusses the individual contributions of the proposed method's design.
Researcher Affiliation | Academia | Lingkun Luo1, Shiqiang Hu1*, Liming Chen2,3. 1School of Aeronautics and Astronautics, Shanghai Jiao Tong University, Shanghai, China; 2LIRIS, CNRS UMR 5205, Ecole Centrale de Lyon, France; 3Institut Universitaire de France (IUF), France. EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes the methodology using prose and mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the described methodology, nor does it provide a link to a code repository. It mentions supplementary material for details, but not specifically for code availability.
Open Datasets | Yes | 4.1 Dataset Description. Image datasets: Digits: We evaluate domain adaptation on three digit datasets: MNIST (M)..., USPS (U)..., and SVHN (S)... Office-31: This benchmark dataset includes over 4,000 images across three domains: Amazon (A), Webcam (W), and Dslr (D)... ImageCLEF-DA: This dataset contains 12 shared classes from Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P)... Time-series datasets: CWRU: The Case Western Reserve University (CWRU) dataset..., SEU: The Southeast University (SEU) dataset...
Dataset Splits | Yes | Digits: We evaluate domain adaptation on three digit datasets: MNIST (M) with 60,000 training and 10,000 test samples, USPS (U) with 7,291 training and 2,007 test samples, and SVHN (S) with over 600,000 labeled street view digits.
Hardware Specification | Yes | Experiments were conducted on several datasets, including Digits, Office-31, ImageCLEF-DA, CWRU, and SEU, using the PyTorch framework and an Nvidia 4090 GPU.
Software Dependencies | No | The paper mentions the 'PyTorch framework' but does not specify a version number, which is required for reproducible software dependencies.
Experiment Setup | Yes | 4.2 Experimental Setup. For the Digits dataset, the diffusion model was trained with 1000 diffusion steps using DDIM sampling (200-step schedule with 5-step jumps), a batch size of 36, a learning rate of 0.02, and momentum of 0.5 for 100 epochs. For the Office-31 and ImageCLEF-DA datasets, DDIM sampling was optimized with 50 steps (20-step jumps), generating 100 images per class, with learning rates of 0.001 (for Office-31) or 0.0003 (depending on the domain pair). For CWRU and SEU, DDIM with 10 steps (100-step jumps) was used, generating 50 samples per class, with a batch size of 64, a learning rate of 0.03, and 50 epochs for CWRU and 30 for SEU. The model employed a U-Net architecture (1D for CWRU and SEU, 2D for Office-31 and ImageCLEF-DA), incorporating encoder-decoder structures, skip connections, attention layers, and residual blocks.
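The three reported DDIM schedules (200 steps with 5-step jumps, 50 steps with 20-step jumps, and 10 steps with 100-step jumps) are all consistent with subsampling the 1000 training timesteps at a fixed stride. A minimal sketch of that arithmetic, assuming an evenly strided schedule; since the paper releases no code, the function name and construction below are illustrative, not the authors' implementation:

```python
def ddim_timesteps(num_train_steps: int, stride: int) -> list[int]:
    """Evenly strided timestep subsequence for DDIM sampling.

    Subsample the training timesteps [0, num_train_steps) at a fixed
    stride, then reverse so sampling proceeds from high noise to clean.
    Hypothetical helper reconstructing the schedules quoted above.
    """
    return list(range(0, num_train_steps, stride))[::-1]


# Reconstructing the paper's three reported configurations:
digits = ddim_timesteps(1000, 5)      # 200-step schedule (Digits)
office = ddim_timesteps(1000, 20)     # 50-step schedule (Office-31, ImageCLEF-DA)
series = ddim_timesteps(1000, 100)    # 10-step schedule (CWRU, SEU)
print(len(digits), len(office), len(series))  # → 200 50 10
```

The reversed order matters in practice: DDIM denoises from the most-noised timestep (here 995) down to 0, which is why the subsequence is emitted high-to-low.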