Counterfactual Knowledge Maintenance for Unsupervised Domain Adaptation
Authors: Yao Li, Yong Zhou, Jiaqi Zhao, Wen-liang Du, Rui Yao, Bing Liu
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experimental evaluations on several public datasets to demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Yao Li (1,2), Yong Zhou (1,2), Jiaqi Zhao (1,2), Wen-liang Du (1,2), Rui Yao (1,2), Bing Liu (1,2); (1) School of Computer Sciences and Technology, China University of Mining and Technology |
| Pseudocode | No | The paper describes the proposed method in Section 3, including subsections like 'Counterfactual Disentanglement' and 'Discrimination Knowledge Maintenance', using mathematical formulations and descriptive text. However, it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured, code-like steps for its procedures. |
| Open Source Code | Yes | The source code is available at https://github.com/LiYaolab/CMKUDA. |
| Open Datasets | Yes | We evaluated our method on two prominent public datasets: Office-Home (Venkateswara et al., 2017) and VisDA-2017 (Peng et al., 2018). |
| Dataset Splits | No | The paper mentions using labeled source-domain image data and unlabeled target-domain image data as part of the Unsupervised Domain Adaptation problem definition. It states that Office-Home comprises images across four distinct domains and 65 categories, and that VisDA-2017 features 152,000 synthetic images in the source domain and 55,000 real images in the target domain. However, it does not explicitly provide percentages, sample counts, or a methodology for splitting these datasets into training, validation, or test sets beyond the inherent source/target domain division. |
| Hardware Specification | Yes | All experiments were conducted on an NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper mentions using specific models like ResNet50, ViT-B/16, and CLIP, and the Adam optimizer. However, it does not provide specific version numbers for any programming languages (e.g., Python), deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries (e.g., CUDA, scikit-learn) that would be needed to replicate the software environment. |
| Experiment Setup | Yes | We set the length of the learnable text prompt L to 32. For optimization, we used the Adam optimizer [Kingma and Ba, 2017] with an initial learning rate of 3e-3 and trained the model for 30 epochs with a batch size of 32. |
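
The reported hyperparameters can be collected into a single training configuration for replication. A minimal sketch, assuming a plain dictionary representation (the field names are illustrative; only the values come from the paper's text, and the authors' actual setup lives in the linked repository):

```python
# Training configuration reported in the paper's Experiment Setup.
# Field names here are hypothetical; values are taken from the text.
train_config = {
    "prompt_length": 32,     # length L of the learnable text prompt
    "optimizer": "Adam",     # Kingma and Ba, 2017
    "learning_rate": 3e-3,   # initial learning rate
    "epochs": 30,
    "batch_size": 32,
}

def describe(cfg):
    """Render the configuration as a one-line summary string."""
    return (
        f"{cfg['optimizer']} @ lr={cfg['learning_rate']}, "
        f"{cfg['epochs']} epochs, batch size {cfg['batch_size']}, "
        f"prompt length {cfg['prompt_length']}"
    )
```

Capturing the setup in one place like this makes it easy to diff a reproduction attempt against the reported values.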