Gaussian Mixture Model for Graph Domain Adaptation
Authors: Mengzhu Wang, Wenhao Ren, Yu Zhang, Yanlong Fan, Dianxi Shi, Luoxi Jing, Nan Yin
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on four standard benchmarks demonstrate that the proposed GMM algorithm outperforms state-of-the-art unsupervised domain adaptation methods. |
| Researcher Affiliation | Academia | Mengzhu Wang, Wenhao Ren, Yu Zhang, Yanlong Fan (Hebei University of Technology); Dianxi Shi (Intelligent Game and Decision Lab (IGDL)); Luoxi Jing (Peking University); Nan Yin (Hong Kong University of Science and Technology) |
| Pseudocode | No | The paper describes the model and its components using mathematical equations and textual explanations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | Office-31 [Saenko et al., 2010] is a widely used cross-domain dataset... Office-Home [Venkateswara et al., 2017] is a dataset comprising 15,500 photos... VisDA-2017 [Peng et al., 2017] contains over 280,000 images... DomainNet [Peng et al., 2019] is a large-scale domain adaptation dataset... ImageNet [Russakovsky et al., 2015]. |
| Dataset Splits | Yes | In our experiments, we use the validation set as the target domain and the training set as the source domain. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. It mentions using ResNet [He et al., 2016] as the backbone network and PyTorch for implementation, but no hardware specifications. |
| Software Dependencies | No | We use PyTorch to implement our technique, and the backbone network for all datasets is ResNet [He et al., 2016], which has been pre-trained on ImageNet [Russakovsky et al., 2015]. We used PyTorch [Paszke et al., 2019] to implement all of the experiments. (Specific version numbers for PyTorch or other libraries are not mentioned.) |
| Experiment Setup | Yes | We use Stochastic Gradient Descent (SGD) with momentum set to 0.9 and weight decay set to 0.001. For model optimization, we follow the learning-rate annealing technique described in [Ganin et al., 2016]. We set λ = 0.5, γ = 1.0 and perform a sensitivity analysis to assess how hyper-parameter selection affects the results. |
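The optimizer settings reported above can be sketched as plain Python. This is a minimal illustration, not the authors' code: the annealing constants (`eta0 = 0.01`, `alpha = 10`, `beta = 0.75`) are taken from Ganin et al. (2016) and are an assumption here, since the paper only cites the schedule without restating them.

```python
# Sketch of the reported training setup: SGD with momentum 0.9, weight
# decay 1e-3, and Ganin et al. (2016)-style learning-rate annealing.

def annealed_lr(p, eta0=0.01, alpha=10.0, beta=0.75):
    """Learning rate at training progress p in [0, 1].

    eta0, alpha, beta are the constants from Ganin et al. (2016);
    the paper does not restate them, so treat them as assumed defaults.
    """
    return eta0 / (1.0 + alpha * p) ** beta

def sgd_step(w, grad, velocity, lr, momentum=0.9, weight_decay=1e-3):
    """One SGD update with momentum and L2 weight decay on a scalar weight."""
    g = grad + weight_decay * w           # L2 penalty folded into the gradient
    velocity = momentum * velocity - lr * g
    return w + velocity, velocity

# Loss-term weights reported in the paper.
LAMBDA, GAMMA = 0.5, 1.0
```

A training loop would call `annealed_lr(step / total_steps)` each iteration and feed the result into `sgd_step` (or into a framework optimizer's learning rate) for every parameter.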