Capturing Individuality and Commonality Between Anchor Graphs for Multi-View Clustering

Authors: Zhoumin Lu, Yongbo Yu, Linru Ma, Feiping Nie, Rong Wang

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive experiments demonstrate the effectiveness and efficiency of our method compared to various state-of-the-art algorithms. Our experiments employ 9 public datasets for comparison, including 3Sources, WebKB, NUS-WIDE, Notting-Hill, Cifar10, Cifar100, YouTubeFace10, YouTubeFace20 and YouTubeFace50."
Researcher Affiliation | Academia | "1 School of Computer Science, School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China. 2 Institute of Systems Engineering, AMS, Beijing 100071, China."
Pseudocode | Yes |
Algorithm 1: CICAG Solver
Input: Dataset {X(i)}_{i=1}^{v}, anchor number m, cluster number c, and parameters α, β and γ.
Output: Learned anchor graph Z.
1: Initialize Z(i), Z and F.
2: while non-convergence do
3:     Update A(i) by Theorem 1.
4:     Update Z(i) by Theorem 2.
5:     Update Z by Theorem 3.
6: end while
7: Obtain Z by Equation (36).
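The control flow of Algorithm 1 can be sketched as follows. This is a minimal, hedged skeleton only: the closed-form updates from Theorems 1–3 and Equation (36) are not reproduced in this report, so each update step below is a labeled placeholder that merely mimics the data flow (per-view anchor graphs Z(i) pulled toward a consensus Z, which is then re-estimated), not the paper's actual derivations.

```python
import numpy as np

def simplex_project_rows(M):
    """Rough row-wise normalization onto the probability simplex.
    A stand-in: clip negatives and renormalize, not necessarily the
    exact projection the paper uses."""
    M = np.maximum(M, 0)
    s = M.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return M / s

def cicag_solver_sketch(X_views, m, c, alpha=0.1, beta=1.0, gamma=1.0,
                        max_iter=50, tol=1e-6, seed=0):
    """Skeleton of the alternating scheme in Algorithm 1 (CICAG Solver).

    c, alpha, beta, gamma are carried through only to mirror the stated
    interface; the placeholder updates below do not use them because
    Theorems 1-3 are not given in this excerpt.
    """
    rng = np.random.default_rng(seed)
    v = len(X_views)
    n = X_views[0].shape[0]
    # Step 1: initialize per-view anchor graphs Z(i) and consensus Z.
    Z_views = [simplex_project_rows(rng.random((n, m))) for _ in range(v)]
    Z = sum(Z_views) / v
    for _ in range(max_iter):
        Z_prev = Z.copy()
        # Steps 3-4: placeholder per-view updates (stand-in for Theorems 1-2).
        Z_views = [simplex_project_rows(0.5 * Zi + 0.5 * Z) for Zi in Z_views]
        # Step 5: placeholder consensus update (stand-in for Theorem 3).
        Z = sum(Z_views) / v
        # Steps 2/6: stop once the consensus anchor graph stabilizes.
        if np.linalg.norm(Z - Z_prev) < tol:
            break
    # Step 7: stand-in for Equation (36) (final learned anchor graph).
    return Z
```

The sketch returns an n-by-m row-stochastic anchor graph; in the paper, spectral clustering on the learned Z would follow.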
Open Source Code No The paper does not contain any explicit statement about providing open-source code for the methodology or a link to a code repository.
Open Datasets | Yes | "Our experiments employ 9 public datasets for comparison, including 3Sources, WebKB, NUS-WIDE, Notting-Hill, Cifar10, Cifar100, YouTubeFace10, YouTubeFace20 and YouTubeFace50."
Dataset Splits | No | The paper mentions using 9 public datasets but does not explicitly provide details about training/test/validation splits, sample counts for each split, or references to predefined splits for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages, libraries, or frameworks with their version numbers.
Experiment Setup | Yes | "For our model, α is set to 0.1, while the remaining hyperparameters are tuned by a grid search, whose ranges are m ∈ {1c, 3c, 5c}, β ∈ {0.001, 0.01, …, 100, 1000}, and γ ∈ {0.001, 0.01, …, 100, 1000}."
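The stated search space can be enumerated directly. A minimal sketch, assuming the elided middle values of the β and γ ranges are the conventional intervening powers of ten (the quote only shows the endpoints); `build_grid` is an illustrative helper name, not from the paper.

```python
from itertools import product

def build_grid(c):
    """Enumerate the paper's grid-search settings for a given cluster
    number c: alpha fixed at 0.1, m a multiple of c, and beta/gamma on
    an assumed powers-of-ten grid from 0.001 to 1000."""
    m_values = [1 * c, 3 * c, 5 * c]
    log_grid = [10.0 ** k for k in range(-3, 4)]  # 0.001 .. 1000
    return [
        {"alpha": 0.1, "m": m, "beta": b, "gamma": g}
        for m, b, g in product(m_values, log_grid, log_grid)
    ]

configs = build_grid(c=10)
print(len(configs))  # 3 * 7 * 7 = 147 candidate settings
```

Under these assumptions the search evaluates 147 configurations per dataset, which is consistent with the paper describing the tuning as a grid search over m, β and γ only.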