Reliable Disentanglement Multi-view Learning Against View Adversarial Attacks
Authors: Xuyang Wang, Siyuan Duan, Qizhi Li, Guiduo Duan, Yuan Sun, Dezhong Peng
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multi-view classification tasks with adversarial attacks show that RDML outperforms the state-of-the-art methods by a relatively large margin. Our code is available at https://github.com/Willy1005/2025-IJCAI-RDML. ... We conduct extensive experiments on six multi-view datasets to verify the effectiveness and robustness of our RDML under both adversarial and clean conditions. ... 4 Experiments |
| Researcher Affiliation | Collaboration | 1College of Computer Science, Sichuan University, China 2Laboratory of Intelligent Collaborative Computing, University of Electronic Science and Technology of China, China 3National Key Laboratory of Fundamental Algorithms and Models for Engineering Numerical Simulation, Sichuan University, China 4Tianfu Jincheng Laboratory, China 5Sichuan National Innovation New Vision UHD Video Technology Co., Ltd., China |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical equations, but does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | Yes | Our code is available at https://github.com/Willy1005/2025-IJCAI-RDML. |
| Open Datasets | Yes | To verify the effectiveness and robustness of our method, we conduct experiments on six multi-view datasets, including PIE [Gross et al., 2010], Scene [Fei-Fei and Perona, 2005], Leaves [Cope et al., 2013], NUS-WIDE [Chua et al., 2009], MSRC [Xu et al., 2016], and Fashion [Xiao et al., 2017]. |
| Dataset Splits | Yes | For all datasets, 80% of the samples are used for training (for our method, these data are also used for pretraining), and 20% of the samples are used for testing. |
| Hardware Specification | Yes | Our experiments are conducted based on the PyTorch 2.4.1 framework with an Nvidia RTX 3090 GPU. |
| Software Dependencies | Yes | Our experiments are conducted based on the PyTorch 2.4.1 framework with an Nvidia RTX 3090 GPU. |
| Experiment Setup | Yes | The pretraining epoch of Ept( ) is 1000 with a batch size of 500, and the training epoch is 500 for the clean setting and 400 for the adversarial setting. The learning rate is selected from [0.003, 0.005]. ... Adam is used as the optimizer. The temperature µ of Gumbel softmax is set as 0.1. We use Projected Gradient Descent for random view attack. The number of attack iterations is 10 with a maximum perturbation range of 8/255. |
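The attack configuration quoted above (PGD, 10 iterations, L∞ budget 8/255) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the paper runs PGD in PyTorch against trained view encoders, whereas here the step size `alpha`, the gradient callback `grad_fn`, and the toy quadratic loss are all hypothetical stand-ins chosen for self-containment.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8/255, alpha=2/255, steps=10, rng=None):
    """L-inf Projected Gradient Descent sketch.

    x       : clean input, values assumed to lie in [0, 1]
    grad_fn : callback returning the loss gradient w.r.t. the input
              (in the paper this would come from the attacked view's network)
    eps     : maximum perturbation range (8/255, as quoted above)
    steps   : number of attack iterations (10, as quoted above)
    alpha   : per-step size -- an assumed value, not given in the paper
    """
    rng = rng or np.random.default_rng(0)
    # random start inside the eps-ball
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        g = grad_fn(x + delta)
        delta = delta + alpha * np.sign(g)        # ascend the loss
        delta = np.clip(delta, -eps, eps)         # project back to the eps-ball
        delta = np.clip(x + delta, 0.0, 1.0) - x  # keep the input in [0, 1]
    return x + delta

# Toy stand-in loss L(x) = ||x - t||^2, so grad = 2 * (x - t)
target = np.full(4, 0.5)
grad_fn = lambda v: 2.0 * (v - target)
x_clean = np.array([0.2, 0.4, 0.6, 0.8])
x_adv = pgd_attack(x_clean, grad_fn)
```

The two `clip` calls enforce the constraints separately: the first keeps the perturbation inside the stated 8/255 budget, the second keeps the perturbed input in the valid data range.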