EmoGrowth: Incremental Multi-label Emotion Decoding with Augmented Emotional Relation Graph

Authors: Kaicheng Fu, Changde Du, Jie Peng, Kunpeng Wang, Shuangchen Zhao, Xiaoyu Chen, Huiguang He

ICML 2025 | Venue PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on three datasets spanning brain activity and multimedia domains, demonstrating its effectiveness in decoding up to 28 fine-grained emotion categories. Results show that AESL significantly outperforms existing methods while effectively mitigating catastrophic forgetting. Our code is available at https://github.com/ChangdeDu/EmoGrowth.
Researcher Affiliation | Academia | 1State Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, China 2School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China 3School of Biomedical Engineering, ShanghaiTech University, Shanghai, China. Correspondence to: Huiguang He <EMAIL>.
Pseudocode | Yes | A. The Algorithm of AESL; Algorithm 1: Training procedure of AESL.
Open Source Code | Yes | Our code is available at https://github.com/ChangdeDu/EmoGrowth.
Open Datasets | Yes | For thoroughly evaluating the performance of AESL and comparing approaches, three datasets are leveraged for experimental studies including Brain27 (Horikawa et al., 2020), Video27 (Cowen & Keltner, 2017) and Audio28 (Cowen et al., 2020).
Dataset Splits | Yes | For Brain27 and Video27, we split the datasets with B0-I9 (base class is 0 and incremental class is 9), B0-I3, B15-I3 and B15-I2. For Audio28, we split the dataset with B0-I7, B0-I4, B16-I3 and B16-I2. ... Table 7 shows the characteristics of the three datasets used in our experiments. Properties of each dataset are characterized by several statistics, including the number of training instances |Dtr|, the number of test instances |Dte|...
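The B{base}-I{inc} split notation above can be sketched as a small helper. This is one plausible reading of the notation (base classes in the first session, then fixed-size increments until all classes are covered), not the authors' released code; the function name is hypothetical.

```python
def incremental_sessions(spec: str, n_classes: int) -> list[int]:
    """Parse a split spec like 'B15-I3' into per-session class counts.

    Assumed interpretation: 'B15-I3' means 15 base classes in the first
    session, then increments of 3 new classes until n_classes is reached.
    """
    base_part, inc_part = spec.split("-")
    base, inc = int(base_part[1:]), int(inc_part[1:])
    sessions = [base] if base > 0 else []
    remaining = n_classes - base
    while remaining > 0:
        step = min(inc, remaining)
        sessions.append(step)
        remaining -= step
    return sessions

# Brain27/Video27 have 27 classes; Audio28 has 28.
print(incremental_sessions("B0-I9", 27))   # three sessions of 9 classes
print(incremental_sessions("B15-I3", 27))  # 15 base classes, then 4 increments of 3
```

Under this reading, every spec listed for each dataset partitions its full label set exactly, e.g. B15-I2 on 27 classes yields one base session plus six increments of 2.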
Hardware Specification | Yes | We conducted all the experiments on one NVIDIA TITAN GPU.
Software Dependencies | No | No specific software dependencies with version numbers are provided. The paper mentions using the Adam optimizer, but no specific libraries or their versions are listed.
Experiment Setup | Yes | In our experiments, the balancing parameter β is set to 0.95 in Eq.3. We set λ1 to 1 in Eq.15. Besides, λ2 is searched in {0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8} and λ3 is searched in {0.001, 0.01, 0.1, 1, 2, 5, 10}. The dimensionality of deep latent representations z is set to 64 in three datasets. We train the model using the Adam optimizer with {β1, β2} = {0.9, 0.9999}. We set weight decay to 0.005 and learning rate to 1e-4 for Brain27 and Video27, and weight decay to 0 and learning rate to 1e-3 for Audio28.
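The reported hyperparameters can be collected into a small configuration sketch. The structure and names below (e.g. `CONFIG`, `search_grid`) are illustrative assumptions, not taken from the released code; only the values come from the quoted setup.

```python
from itertools import product

# Per-dataset optimizer settings as reported in the paper.
CONFIG = {
    "Brain27": {"lr": 1e-4, "weight_decay": 0.005},
    "Video27": {"lr": 1e-4, "weight_decay": 0.005},
    "Audio28": {"lr": 1e-3, "weight_decay": 0.0},
}
ADAM_BETAS = (0.9, 0.9999)  # (β1, β2) for the Adam optimizer
LATENT_DIM = 64             # dimensionality of latent representations z
BETA = 0.95                 # balancing parameter in Eq.3
LAMBDA1 = 1                 # λ1 in Eq.15

def search_grid() -> list[dict]:
    """Enumerate the reported (λ2, λ3) search grid."""
    lam2_values = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
    lam3_values = [0.001, 0.01, 0.1, 1, 2, 5, 10]
    return [{"lambda2": a, "lambda3": b}
            for a, b in product(lam2_values, lam3_values)]

print(len(search_grid()))  # 7 x 7 = 49 candidate settings
```

Laying the grid out this way makes the search budget explicit: 49 (λ2, λ3) combinations per dataset on top of the fixed settings above.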