Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation

Authors: Yanglin Huang, Kai Hu, Yuan Zhang, Zhineng Chen, Xieping Gao

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on three mainstream benchmarks using various teacher-student pairs demonstrate that HeteroAKD outperforms state-of-the-art KD methods in facilitating distillation between heterogeneous architectures.
Researcher Affiliation | Academia | (1) Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University; (2) School of Computer Science, Fudan University; (3) Key Laboratory for Artificial Intelligence and International Communication, Hunan Normal University
Pseudocode | No | The paper describes its methodology using mathematical formulations (Eqs. 1-11) and descriptive text, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide any links to code repositories.
Open Datasets | Yes | Our experiments are conducted on three popular semantic segmentation datasets, including Cityscapes (Cordts et al. 2016), Pascal VOC (Everingham et al. 2010) and ADE20K (Zhou et al. 2019).
Dataset Splits | No | The paper mentions using the 'Cityscapes validation set' and the 'Pascal VOC and ADE20K validation sets' and specifies crop sizes for training, but it does not provide explicit training/validation/test split percentages, sample counts, or citations for the split methodology.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions having 're-implemented all methods on both CIRKD codebase (Yang et al. 2022) and Af-DCD codebase (Fan et al. 2023)', but it does not specify any software names with version numbers (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | The overall loss for optimization can be formulated as the weighted sum of the task loss Ltask, the class probability KD loss Lkd (Eq. 1), and the heterogeneous architecture KD loss Lhakd (Eq. 10), written as: Ltotal = Ltask + λ1Lkd + λ2Lhakd... For crop size during the training phase, we use 512×1024, 512×512 and 512×512 for Cityscapes, Pascal VOC and ADE20K, respectively... We investigate the impact of different hyper-parameter settings. As illustrated in Figure 6, our method consistently enables students to benefit from heterogeneous teachers, with a minimum mIoU gain of 0.48%. Different hyper-parameter settings have different impacts on distillation efficiency; this difference in optimal hyper-parameters can be attributed to the varying strengths of the teacher and student.
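The overall objective quoted above is a plain weighted sum, which can be sketched in a few lines. This is an illustrative sketch only: the function name and scalar placeholder losses are hypothetical, standing in for the paper's actual Ltask, Lkd (Eq. 1), and Lhakd (Eq. 10), which are full segmentation and distillation losses.

```python
def total_loss(l_task, l_kd, l_hakd, lambda1=1.0, lambda2=1.0):
    """Weighted-sum objective: Ltotal = Ltask + lambda1 * Lkd + lambda2 * Lhakd.

    Arguments are scalar stand-ins for the paper's task loss, class
    probability KD loss, and heterogeneous architecture KD loss; lambda1
    and lambda2 are the balancing hyper-parameters the paper tunes.
    """
    return l_task + lambda1 * l_kd + lambda2 * l_hakd


# Example: with lambda1=2.0 and lambda2=4.0, component losses of
# 1.0, 0.5, and 0.25 combine to 1.0 + 2.0*0.5 + 4.0*0.25 = 3.0.
print(total_loss(1.0, 0.5, 0.25, lambda1=2.0, lambda2=4.0))
```

The paper's hyper-parameter study (Figure 6) amounts to sweeping lambda1 and lambda2 in this sum for each teacher-student pair.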