MSSDA: Multi-Sub-Source Domain Adaptation for Diabetic Foot Neuropathy Recognition
Authors: Yan Zhong, Zhixin Yan, Yi Xie, Shibin Wu, Huaidong Zhang, Lin Shu, Peiru Zhou
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive results validate the effectiveness of our method on both the newly proposed dataset for DFN recognition and an existing dataset. We conduct comprehensive experiments on two datasets, validating the effectiveness of the proposed model through experimental results. |
| Researcher Affiliation | Academia | Yan Zhong1, Zhixin Yan1, Yi Xie1, Shibin Wu2, Huaidong Zhang1*, Lin Shu1*, Peiru Zhou3. 1South China University of Technology; 2City University of Hong Kong; 3The Fifth Affiliated Hospital of Jinan University. *Corresponding author: EMAIL |
| Pseudocode | Yes | Algorithm 1: Training algorithm |
| Open Source Code | Yes | Our code and dataset DFN-DS will be available at https://github.com/YuGuilliman/MSSDA. |
| Open Datasets | Yes | Our code and dataset DFN-DS will be available at https://github.com/YuGuilliman/MSSDA. Falling Risk Assessment Dataset (FRA) (Hu et al. 2022): a plantar pressure dataset comprising 48 subjects and 7,462 samples, with 23 high-risk subjects (labeled 1) and 25 low-risk subjects (labeled 0). |
| Dataset Splits | Yes | All experiments use leave-one-subject-out cross-validation (LOSO-CV) with a common vote threshold of 50%. In this setup, each subject acts as the target domain DT while the rest form the source domain DS, rotating through all subjects. |
| Hardware Specification | Yes | All methods are implemented using the PyTorch framework and reproduced on a GeForce GTX 4060. |
| Software Dependencies | No | All methods are implemented using the PyTorch framework; no specific PyTorch version is mentioned. |
| Experiment Setup | Yes | In Stage 1, we utilize a network with 4 layers of 1D CNN for contrastive learning on DFN-DS, and a network with 3 layers of 1D CNN for the FRA. In Stage 3, the feature extractor for DFN-DS remains consistent across all methods, comprising 7 layers of 1D CNN, while the FRA uses 3 layers of 1D CNN. Additionally, the domain discriminator and classifier frameworks in the domain-invariant methods are identical, featuring 3-layer fully connected networks for DFN-DS and 2-layer fully connected networks for FRA. All experiments are conducted after balancing the dataset using data-reuse techniques. In Stage 1, we train the feature extractor F0 by contrastive learning for 5,000 epochs using AdamW as the optimizer, with an initial learning rate of 5e-3 and a batch size of 64 for DFN-DS. For FRA, we set the initial learning rate to 1e-3 with a batch size of 32; other parameters remain the same. In Stage 3, as in other methods, we use Adam as our optimizer with a weight decay of 1e-4. We select the learning rate lr from {1e-2, 8e-3, 5e-3} for best performance. Additionally, we select the weight of the domain adaptation loss from {0.2, 0.5, 1, 2} to obtain the best performance, which is 1 for our method on FRA and 1.5 on DFN-DS. For both DFN-DS and FRA, M is set to 2. |
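The LOSO-CV protocol quoted in the Dataset Splits row can be sketched in plain Python. This is a minimal illustration, not the authors' code: the function names (`loso_splits`, `subject_prediction`) are hypothetical, and the tie-breaking behavior at exactly 50% of votes is an assumption, since the paper's quote only states "a common vote threshold of 50%".

```python
def loso_splits(subject_ids):
    """Yield (target_subject, source_subjects) pairs for
    leave-one-subject-out cross-validation: each subject in turn
    is the target domain DT, all others form the source domain DS."""
    subjects = sorted(set(subject_ids))
    for target in subjects:
        yield target, [s for s in subjects if s != target]

def subject_prediction(sample_preds, threshold=0.5):
    """Aggregate per-sample binary predictions into one subject-level
    label: positive when strictly more than `threshold` of the
    subject's samples vote 1 (tie-breaking is an assumption here)."""
    return 1 if sum(sample_preds) / len(sample_preds) > threshold else 0

# Example: 3 subjects, each rotated through as the target domain.
splits = list(loso_splits([1, 1, 2, 3, 3, 3]))
label = subject_prediction([1, 1, 0, 0, 1])  # 3/5 of samples vote 1
```

With this rule, a subject whose samples vote 1 in 3 out of 5 cases is classified high-risk, matching the majority-vote reading of the 50% threshold.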
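The Stage-1 setup in the Experiment Setup row (a 4-layer 1D CNN encoder for DFN-DS trained with AdamW at learning rate 5e-3 and batch size 64) can be sketched in PyTorch. Only the layer count and optimizer settings come from the quoted text; the channel widths, kernel size, normalization, and pooling choices below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

def make_encoder_1d(in_channels=1, widths=(32, 64, 128, 128)):
    """Hypothetical 4-layer 1D CNN encoder; the paper specifies only
    '4 layers of 1D CNN' for the DFN-DS contrastive feature extractor."""
    layers, c = [], in_channels
    for w in widths:
        layers += [nn.Conv1d(c, w, kernel_size=3, padding=1),
                   nn.BatchNorm1d(w), nn.ReLU()]
        c = w
    layers.append(nn.AdaptiveAvgPool1d(1))  # collapse time axis to one feature vector
    return nn.Sequential(*layers)

encoder = make_encoder_1d()
# Stage-1 settings reported for DFN-DS: AdamW, initial lr 5e-3, batch size 64.
optimizer = torch.optim.AdamW(encoder.parameters(), lr=5e-3)

x = torch.randn(64, 1, 256)       # one batch of plantar-pressure windows (length assumed)
features = encoder(x).flatten(1)  # per-sample embeddings for the contrastive loss
```

The contrastive loss itself is not shown; in the paper's pipeline these embeddings feed the Stage-1 contrastive objective before the extractor is reused downstream.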