Procedure Knowledge Decoupled Distillation Strategy for Procedure Planning in Instructional Videos
Authors: Xiaotian Pan, Zhaobo Qi, Xin Sun, Yuanrong Xu, Weigang Zhang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three datasets demonstrate that our strategy can improve the performance of multiple weakly supervised models, achieving promising procedure knowledge modeling ability and plug-and-play flexibility. |
| Researcher Affiliation | Academia | Xiaotian Pan1, Zhaobo Qi1, Xin Sun1*, Yuanrong Xu1, Weigang Zhang1* 1Harbin Institute of Technology, Weihai, China EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes methods and equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/xiaotianpan/PKDD |
| Open Datasets | Yes | Extensive experiments on three datasets demonstrate that our strategy can improve the performance of multiple weakly supervised models, achieving promising procedure knowledge modeling ability and plug-and-play flexibility. ... We apply our decoupled distillation strategy to multiple weakly supervised methods and conduct numerous experiments on widely used benchmarks Cross Task (Zhukov et al. 2019), COIN (Tang et al. 2019), and NIV (Alayrac et al. 2016). |
| Dataset Splits | Yes | We follow previous research to allocate 70% of the data to the training set and the remaining 30% to the test set. |
| Hardware Specification | Yes | All our experiments are performed on a single GeForce RTX 4090 GPU using the PyTorch framework. |
| Software Dependencies | No | The paper mentions the "PyTorch framework" but does not specify a version number or any other software dependencies with versions. |
| Experiment Setup | No | We adopt traditional settings (Zhao et al. 2022b) to define the start and goal observations for training and inference of the student model in the PKDD strategy. All our experiments are performed on a single GeForce RTX 4090 GPU using the PyTorch framework. See the supplementary material for more implementation details. |