Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition
Authors: Kun Li, Dan Guo, Guoliang Chen, Chunxiao Fan, Jingyuan Xu, Zhiliang Wu, Hehe Fan, Meng Wang
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on the benchmark dataset demonstrate the superior performance of our method compared to existing approaches. |
| Researcher Affiliation | Academia | School of Computer Science and Information Engineering, Hefei University of Technology; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center; ReLER Lab, CCAI, Zhejiang University |
| Pseudocode | No | The paper describes the methodology using text and diagrams (Figure 2) but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to the MA-52 dataset (https://github.com/VUT-HFUT/Micro-Action) but does not explicitly state that the source code for the proposed PCAN methodology is open-source or provide a link to its implementation. |
| Open Datasets | Yes | Extensive experiments conducted on the public micro-action dataset, MA-52, validate the effectiveness of the proposed method. [...] https://github.com/VUT-HFUT/Micro-Action |
| Dataset Splits | Yes | The dataset consists of 11,250, 5,586, and 5,586 instances in training/validation/test, respectively. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using the SGD optimizer but does not specify software dependencies like programming languages, libraries, or frameworks with their version numbers. |
| Experiment Setup | Yes | For model training, we adopt the SGD optimizer with a learning rate of 0.0075, a momentum of 0.9, a weight decay of 1e-4, and a batch size of 10. The learning rate is reduced by a factor of 10 at the 15th and 30th epochs, and the model is trained with 40 epochs. In Eq. 10, we set γ = 1 for the RGB branch and γ = 5 for the Pose branch. In Eq. 12, we set β = 5 for the RGB branch and β = 5 for the Pose branch. |
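The step schedule quoted in the Experiment Setup row (learning rate 0.0075, reduced by a factor of 10 at the 15th and 30th epochs, 40 epochs total) can be sketched in plain Python. This is an illustrative reconstruction only: the paper does not specify a framework, and the function name and the choice of when exactly the drop takes effect (at the milestone epoch itself) are assumptions.

```python
def lr_at_epoch(epoch, base_lr=0.0075, milestones=(15, 30), factor=0.1):
    """Learning rate in effect at a given 1-indexed epoch.

    Assumes the rate drops by `factor` starting at each milestone epoch;
    the paper's phrasing ("reduced ... at the 15th and 30th epochs") leaves
    the exact boundary behavior ambiguous.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= factor
    return lr

# Full 40-epoch schedule as described in the paper's setup.
schedule = [lr_at_epoch(e) for e in range(1, 41)]
```

With these values the schedule runs at 0.0075 for epochs 1-14, 0.00075 for epochs 15-29, and 0.000075 for epochs 30-40.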