Fusion of Granular-Ball Visual Spatial Representations for Enhanced Facial Expression Recognition
Authors: Shuaiyu Liu, Qiyao Shen, Yunxi Wang, Yazhou Ren, Guoyin Wang
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results of experiments on eight databases show that CS-GBSBF consistently achieves higher recognition accuracy than several state-of-the-art methods. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, University of Electronic Science and Technology of China 2Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China 3Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications EMAIL, EMAIL, EMAIL, EMAIL, EMAIL |
| Pseudocode | No | The paper describes its methodology using mathematical formulations (Eq. 1-10) and network diagrams (Fig. 2, Fig. 3) but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Lsy235/CS-GBSBF. |
| Open Datasets | Yes | The databases involved in the experiments include AffectNet-8, CAER-S, RAF-DB, Oulu-CASIA, CK+, SFEW 2.0, FER-2013 and SAMM, as shown in Table 1. |
| Dataset Splits | Yes | For databases that come with an original train/test split, we keep the split unchanged; for undivided databases, we follow the split rules that most methods [Gan et al., 2019] adopt for classification. |
| Hardware Specification | Yes | We train CS-GBSBF in an end-to-end manner with one single NVIDIA GeForce RTX 4080 SUPER for 40 epochs, and the batch size for all databases is set to 16. |
| Software Dependencies | No | The CS-GBSBF method is implemented with the PyTorch toolbox, employing SwinT-base [Liu et al., 2021] as the backbone of VREN and utilizing a GCN to build the backbone of SREN, where SwinT-base is pre-trained on the ImageNet-1K database. While PyTorch is mentioned, specific version numbers for it or any other libraries are not provided. |
| Experiment Setup | Yes | We train CS-GBSBF in an end-to-end manner with one single NVIDIA GeForce RTX 4080 SUPER for 40 epochs, and the batch size for all databases is set to 16. Our model is trained using the Adam algorithm with an initial learning rate of 0.0001 and weight decay of 0.01. Based on extensive experiments on the validation set, the value of λ in Eq. (10) is empirically set to 0.1, while the value of the hyperparameter K is set to 9. |
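The reported hyperparameters can be collected into a minimal PyTorch training sketch. This is an assumption-laden illustration only: the placeholder `nn.Linear` model and the random tensors stand in for the paper's actual CS-GBSBF architecture (Swin-T based VREN plus GCN-based SREN) and data pipeline, which are not reproduced here; only the optimizer settings, epochs, batch size, λ, and K come from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper.
EPOCHS = 40        # training length
BATCH_SIZE = 16    # batch size for all databases
LAMBDA = 0.1       # loss-balancing weight λ in Eq. (10)
K = 9              # hyperparameter K

# Hypothetical stand-in for CS-GBSBF; the real model fuses a Swin-T
# backbone (VREN) with a GCN-based branch (SREN).
model = nn.Linear(512, 8)  # e.g., 8 expression classes for AffectNet-8

# Adam with the reported initial learning rate and weight decay.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.01)
criterion = nn.CrossEntropyLoss()

# One illustrative optimization step on dummy data.
x = torch.randn(BATCH_SIZE, 512)
y = torch.randint(0, 8, (BATCH_SIZE,))
loss = criterion(model(x), y)  # the paper's total loss combines terms weighted by λ
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a full reproduction, the loop above would run for 40 epochs over each database's training split, with the λ-weighted auxiliary loss from Eq. (10) added to the classification loss.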