Dynamic Interactive Bimodal Hypergraph Networks for Emotion Recognition in Conversations

Authors: Xuping Chen, Wuzhen Shi

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our proposed method outperforms existing methods on two benchmark multimodal ERC datasets. We conduct extensive experiments to validate the state-of-the-art performance of our method and perform ablation studies on our proposed modules to verify their effectiveness.
Researcher Affiliation | Academia | Shenzhen Key Laboratory of Digital Creative Technology; Guangdong Province Engineering Laboratory for Digital Creative Technology; College of Electronics and Information Engineering, Shenzhen University. EMAIL, EMAIL
Pseudocode | No | The paper describes its methodology using textual descriptions, equations, and figures, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing its source code, nor does it provide a link to a code repository. It mentions 'from our reimplementation using open-source codes' in reference to baseline methods, not its own.
Open Datasets | Yes | We chose IEMOCAP and MELD for our experiments. They have comprehensive multimodal data as well as a recognized benchmark status in ERC. The details are as follows: IEMOCAP (Busso et al. 2008) comprises 151 videos featuring two-person conversations, including 7433 utterances. MELD (Poria et al. 2019) consists of video recordings from multi-person conversations extracted from the Friends television series, involving between three to nine participants per conversation. It encompasses 1433 conversations, 13708 utterances, and 304 unique speakers.
Dataset Splits | No | The paper mentions using IEMOCAP and MELD datasets and refers to them as benchmark datasets, but it does not explicitly state the specific training, validation, and test splits (e.g., percentages or sample counts) used for these datasets in the main text.
Hardware Specification | Yes | Experiments are conducted on a machine with NVIDIA GTX 4090 GPU.
Software Dependencies | Yes | Implemented by CUDA 12.1, Python 3.8, PyTorch 1.7.1, and torch-geometric 1.7.2.
Experiment Setup | Yes | For IEMOCAP, we set the batch size to 16 and the epoch to 80. For MELD, we adjust the batch size to 32 and epoch to 30. To achieve reproducibility performance, we iterate through a range of random seeds, identifying 1722 for IEMOCAP and 67137 for MELD. Other parameter settings are detailed in section 5.
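The per-dataset settings quoted above can be collected into a small configuration sketch. This is an illustrative reconstruction, not code from the paper: the `set_seed` and `configure` helpers are hypothetical names, and a real run would additionally seed NumPy and PyTorch (omitted here to stay dependency-free).

```python
import random

# Hyperparameters as reported in the paper's experiment setup section.
CONFIGS = {
    "IEMOCAP": {"batch_size": 16, "epochs": 80, "seed": 1722},
    "MELD":    {"batch_size": 32, "epochs": 30, "seed": 67137},
}

def set_seed(seed: int) -> None:
    """Seed Python's RNG. A full setup would also call, e.g.,
    numpy.random.seed(seed) and torch.manual_seed(seed)."""
    random.seed(seed)

def configure(dataset: str) -> dict:
    """Look up the reported settings for a dataset and fix the seed."""
    cfg = CONFIGS[dataset]
    set_seed(cfg["seed"])
    return cfg

cfg = configure("IEMOCAP")
print(cfg["batch_size"], cfg["epochs"])  # 16 80
```

The paper reports finding these seeds by iterating over a range of candidates and keeping the best-performing one per dataset, so the fixed values above reproduce a selected run rather than an average over seeds.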