MAMS: Model-Agnostic Module Selection Framework for Video Captioning
Authors: Sangho Lee, Il Yong Chun, Hogun Park
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on three different benchmark datasets demonstrate that the proposed framework significantly improves the performance of three recent video captioning models. The experimental analysis of the aforementioned saturation issue, experimental results and discussion, and ablation studies are presented in the paper, utilizing metrics like BLEU-4, METEOR, ROUGE, and CIDEr. |
| Researcher Affiliation | Collaboration | Sangho Lee (1,2), Il Yong Chun (1,3)*, Hogun Park (1)*. (1) Sungkyunkwan University, Suwon, Republic of Korea; (2) Hippo T&C Company Incorporated, Suwon, Republic of Korea; (3) Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, Republic of Korea. The authors are affiliated with both Sungkyunkwan University (academic) and Hippo T&C Company Incorporated (industry). |
| Pseudocode | No | The paper describes methods and processes using textual descriptions, mathematical equations, and architectural diagrams (Figures 3, 4, 5). However, it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at our GitHub repository: https://github.com/mancityg/AAAI2025-MAMS |
| Open Datasets | Yes | We ran experiments with three different datasets: MSVD (Chen et al. 2011), MSRVTT (Xu et al. 2016), and YouCookII (Zhou et al. 2018). |
| Dataset Splits | No | The paper mentions using specific datasets (MSVD, MSRVTT, YouCookII) but does not provide explicit details about the training/validation/test splits in the main text. It states: "See details of experiments and implementation in the supplementary material." |
| Hardware Specification | Yes | For our experiments, we used PyTorch (Paszke et al. 2019) and NVIDIA A100 GPUs. |
| Software Dependencies | No | For our experiments, we used PyTorch (Paszke et al. 2019) and NVIDIA A100 GPUs. While PyTorch is mentioned, no specific version number for this software dependency is provided in the main text. |
| Experiment Setup | No | The paper discusses the overall framework, loss functions, and module selection rules. However, specific hyperparameter values such as learning rate, batch size, number of epochs, or detailed optimizer settings are not explicitly provided in the main text; the paper instead refers to supplementary material: "See details of experiments and implementation in the supplementary material." |