Disentangling Multi-view Representations via Curriculum Learning with Learnable Prior

Authors: Kai Guo, Jiedong Wang, Xi Peng, Peng Hu, Hao Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on five real-world datasets show that the proposed model outperforms its counterparts markedly. ... We evaluate our CL2P using five real-world multi-view datasets. Extensive experimental results demonstrate the effectiveness of the proposed method and its superior performance in comparison to baselines. ... 4 Experiments. 4.1 Experimental Setup. Datasets. We evaluate our CL2P and other competitive methods using five real-world datasets ... Overall Evaluation. Tables 2 and 3 show the performance of clustering and classification tasks, respectively. ... Ablation Study. We conduct an ablation study to measure the contribution of the four key components."
Researcher Affiliation | Academia | "Kai Guo(1), Jiedong Wang(1), Xi Peng(1,2), Peng Hu(1), Hao Wang(1). (1) College of Computer Science, Sichuan University, China. (2) National Key Laboratory of Fundamental Algorithms and Models for Engineering Numerical Simulation, Sichuan University, China."
Pseudocode | Yes | Algorithm 1: Training of the proposed CL2P.
Input: Multi-view dataset D = {x^1, ..., x^m}.
Parameters: total training epochs Tmax, current training epoch T, K pseudo-inputs per view, parameters θc, φc, θs, φs of the encoders and decoders.
Output: View-consistent representation c and view-specific representations {s^v}_{v=1}^m.
 1: Initialize the K pseudo-inputs for each view
 2: while T ≤ Tmax do
 3:   c ← Ec({x^v}_{v=1}^m), and {s^v}_{v=1}^m ← {E_s^v(x^v)}_{v=1}^m
 4:   Compute the consistency loss Lc using Eq. (6)
 5:   Compute the specificity loss L_s^v using Eq. (7)
 6:   Compute the disentangling loss L_d^v using Eq. (13)
 7:   Update T ← T + 1, λ ← 1 − (T/Tmax)²
 8:   Compute the total loss Lmodel using Eq. (14) via λ
 9:   Update θc, φc, θs, φs ← ∇Lmodel(θc, φc, θs, φs)
10: end while
11: return c and {s^v}_{v=1}^m
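Step 7 of the algorithm anneals a curriculum weight λ from 1 toward 0 over training. A minimal pure-Python sketch of that schedule (the function name `curriculum_weight` is ours, not from the paper):

```python
def curriculum_weight(t: int, t_max: int) -> float:
    """Curriculum weight from Algorithm 1, step 7: lambda = 1 - (T / Tmax)^2.

    Starts at 1.0 (epoch 0) and decays quadratically to 0.0 at epoch t_max,
    so the lambda-weighted loss term dominates early and is phased out late.
    """
    return 1.0 - (t / t_max) ** 2

# The schedule over a 200-epoch run (Tmax = 200, matching the reported setup):
schedule = [curriculum_weight(t, 200) for t in range(201)]
```

The quadratic decay keeps λ close to 1 for the first epochs (e.g. λ = 0.75 only at the halfway point), which is what makes the weighting a curriculum rather than a linear ramp.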
Open Source Code | Yes | "The code is available at https://github.com/XLearning-SCU/2025-IJCAI-CL2P."
Open Datasets | Yes | "Datasets. We evaluate our CL2P and other competitive methods using five real-world datasets, including: (1) Edge-MNIST [LeCun et al., 1998] ... (2) Edge-Fashion [Xiao et al., 2017] ... (3) Multi-COIL-20 [Nene et al., 1996b] ... (4) Multi-COIL-100 [Nene et al., 1996a] ... (5) Multi-Office-31 [Saenko et al., 2010]"
Dataset Splits | Yes | "For classification, we apply support vector classification (SVC) [Hsu, 2003] with an 80:20 train-test split ratio."
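The 80:20 split reported above can be sketched with the standard library alone; the helper name and the fixed seed below are illustrative, not details from the paper (which applies SVC on top of the learned representations):

```python
import random

def train_test_split_indices(n: int, test_ratio: float = 0.2, seed: int = 0):
    """Shuffle indices 0..n-1 and split them into (train, test) at 80:20."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # deterministic shuffle for reproducibility
    n_test = int(round(n * test_ratio))
    return idx[n_test:], idx[:n_test]

# For 1000 samples this yields 800 training and 200 test indices.
train_idx, test_idx = train_test_split_indices(1000)
```

In practice one would pass the resulting index sets to an SVC implementation (e.g. scikit-learn's `sklearn.svm.SVC`) rather than hand-rolling the split.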
Hardware Specification | Yes | "We implement the proposed method and other comparison methods on PyTorch 2.1.0, utilizing one NVIDIA A10 GPU (24 GB)."
Software Dependencies | Yes | "We implement the proposed method and other comparison methods on PyTorch 2.1.0, utilizing one NVIDIA A10 GPU (24 GB)."
Experiment Setup | Yes | "Both view-consistency and view-specificity dimensions are set to 20. The number of pseudo-inputs is fixed at 250, initialized with randomly selected training data. We train our model for 200 epochs using the AdamW optimizer with a learning rate of 1×10⁻⁴ and a weight decay of 1×10⁻⁴. We set a batch size of 128 for Edge-MNIST and Edge-Fashion, and 32 for Multi-COIL-20, Multi-COIL-100, and Multi-Office-31."
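The reported hyperparameters can be gathered into a single configuration; the dictionary layout and key names below are ours, while every value comes from the quoted setup:

```python
# Hyperparameters reported for CL2P (key names are illustrative).
CL2P_CONFIG = {
    "consistency_dim": 20,      # view-consistency representation dimension
    "specificity_dim": 20,      # view-specificity representation dimension
    "num_pseudo_inputs": 250,   # initialized from randomly selected training data
    "epochs": 200,
    "optimizer": "AdamW",
    "learning_rate": 1e-4,
    "weight_decay": 1e-4,
}

# Per-dataset batch sizes from the quoted setup.
BATCH_SIZES = {
    "Edge-MNIST": 128,
    "Edge-Fashion": 128,
    "Multi-COIL-20": 32,
    "Multi-COIL-100": 32,
    "Multi-Office-31": 32,
}

def batch_size_for(dataset: str) -> int:
    """Look up the reported batch size for a dataset name."""
    return BATCH_SIZES[dataset]
```

With PyTorch installed, these values would feed directly into `torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)` and the corresponding data loaders.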