Revisiting Multimodal Emotion Recognition in Conversation from the Perspective of Graph Spectrum
Authors: Wei Ai, Fuchen Zhang, Yuntao Shou, Tao Meng, Haowen Chen, Keqin Li
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have proven the superiority of the GS-MCC architecture proposed in this paper on two benchmark datasets. |
| Researcher Affiliation | Academia | 1 College of Computer and Mathematics, Central South University of Forestry and Technology, 410004, China 2 College of Computer Science and Electronic Engineering, Hunan University, 410082, China 3 Department of Computer Science, State University of New York, 12561, USA |
| Pseudocode | No | The paper describes the methodology in text and mathematical formulas but does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/FuchenZhang/GS-MCC. |
| Open Datasets | Yes | In our experiments, we used two benchmark multimodal datasets IEMOCAP (Busso et al. 2008) and MELD (Poria et al. 2019), which are widely used in multimodal emotion recognition. |
| Dataset Splits | Yes | The optimal parameters of all models were obtained by performing parameter adjustment using the leave-one-out cross-validation method on the validation set. |
| Hardware Specification | Yes | All experiments are conducted using Python 3.8 and the PyTorch 1.8 deep learning framework, and performed on a single NVIDIA RTX 4090 24G GPU. |
| Software Dependencies | Yes | All experiments are conducted using Python 3.8 and the PyTorch 1.8 deep learning framework |
| Experiment Setup | Yes | Our model is trained using AdamW with a learning rate of 1e-5, cross-entropy as the loss function, and a batch size of 32. |
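The experiment-setup row above can be sketched as a minimal PyTorch training step. This is a hedged illustration of the reported configuration (AdamW optimizer, learning rate 1e-5, cross-entropy loss, batch size 32) only; the `nn.Linear` model, feature dimension, and class count are hypothetical placeholders, not the paper's GS-MCC architecture.

```python
import torch
import torch.nn as nn

# Placeholder classifier standing in for the actual GS-MCC model;
# 6 output classes roughly matches the IEMOCAP emotion labels.
model = nn.Linear(128, 6)

# Training configuration as reported in the table.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
criterion = nn.CrossEntropyLoss()
batch_size = 32

# One dummy training step on random features and labels.
features = torch.randn(batch_size, 128)
labels = torch.randint(0, 6, (batch_size,))

logits = model(features)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```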