In-context Prompt-augmented Micro-video Popularity Prediction

Authors: Zhangtao Cheng, Jiao Li, Jian Lang, Ting Zhong, Fan Zhou

AAAI 2025

Reproducibility assessment: each variable, its result, and the supporting LLM response.
Research Type: Experimental. "Extensive experiments conducted on three real-world datasets demonstrate the superiority of ICPF compared to 14 competitive baselines."
Researcher Affiliation: Academia. "Zhangtao Cheng, Jiao Li, Jian Lang, Ting Zhong, Fan Zhou*, University of Electronic Science and Technology of China, Chengdu, Sichuan, China" (author email addresses were redacted in extraction).
Pseudocode: No. The paper describes its methodology through textual descriptions and diagrams (e.g., Figure 2) but does not include explicit pseudocode or algorithm blocks.
Open Source Code: Yes. "The source codes and datasets are available at https://github.com/Jolieresearch/ICPF."
Open Datasets: Yes. "To analyze the effectiveness of our ICPF, we select three real-world micro-video datasets: MicroLens (Ni et al. 2023), NUS (Chen et al. 2016), and TikTok (https://www.tiktok.com/), from various online video platforms. The source codes and datasets are available at https://github.com/Jolieresearch/ICPF."
Dataset Splits: Yes. "Each dataset is randomly divided into training, validation, and test sets in a ratio of 8:1:1."
Hardware Specification: No. The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies: No. "During retrieval, we utilize ViT-B/32 CLIP (Radford et al. 2021) as the image encoder and AnglE (Li and Li 2023) as the text encoder." While specific models and libraries are mentioned, their version numbers are not provided, nor are general software environments such as Python or PyTorch with their versions.
Experiment Setup: Yes. "We utilize the AdamW optimizer (Loshchilov and Hutter 2017) with a learning rate of 1e-4 for optimizing the parameters. The model is trained for 30 epochs with a batch size of 64 and tested with a batch size of 256."
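The Software Dependencies row notes that ICPF retrieves similar examples using a CLIP ViT-B/32 image encoder and an AnglE text encoder. A minimal sketch of the retrieval step itself is below, assuming embeddings are already computed (the encoder loading is omitted; `retrieve_top_k` and the plain-list embeddings are illustrative assumptions, not the paper's released code):

```python
from math import sqrt

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_top_k(query_emb, bank_embs, k=3):
    """Return indices of the k bank embeddings most similar to the query.

    In ICPF the embeddings would come from a CLIP ViT-B/32 image encoder
    and an AnglE text encoder; here they are plain lists of floats so the
    retrieval logic can be shown in isolation (hypothetical sketch).
    """
    scores = [(cosine_sim(query_emb, emb), i) for i, emb in enumerate(bank_embs)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:k]]
```

In practice the retrieval bank would hold encoder outputs for the historical micro-videos, and the top-k matches would be fed into the prompt as in-context examples.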
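The Dataset Splits and Experiment Setup rows together pin down the training protocol. A small sketch of the reported 8:1:1 random split and the stated hyperparameters follows; the function name, seed, and dict layout are illustrative assumptions, while the numeric values come from the quotes above:

```python
import random

def split_dataset(items, seed=42):
    """Randomly split items into train/val/test at an 8:1:1 ratio,
    matching the paper's reported dataset-split protocol.
    (Seed and function name are assumptions, not from the paper.)"""
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hyperparameters as reported in the paper's experiment setup.
TRAIN_CONFIG = {
    "optimizer": "AdamW",
    "learning_rate": 1e-4,
    "epochs": 30,
    "train_batch_size": 64,
    "test_batch_size": 256,
}
```

For a dataset of 1,000 micro-videos this yields 800/100/100 train/val/test examples, with the AdamW settings above plugged into whichever training loop the released code defines.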