Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Multi-Task Curriculum Graph Contrastive Learning with Clustering Entropy Guidance
Authors: Chusheng Zeng, Bocheng Wang, Jinghui Yuan, Mulin Chen, Xuelong Li
IJCAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that CurGL has achieved excellent performance compared to state-of-the-art competitors. |
| Researcher Affiliation | Collaboration | Chusheng Zeng1, Bocheng Wang1, Jinghui Yuan1, Mulin Chen1 and Xuelong Li2 — 1School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, China; 2Institute of Artificial Intelligence (TeleAI), China Telecom, China. EMAIL, EMAIL, xuelong EMAIL |
| Pseudocode | No | The paper describes the methodology in narrative text and mathematical formulations. It does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | To substantiate the efficiency of CurGL, five publicly accessible real-world datasets are adopted as benchmarks, including CORA, UAT, PUBMED, AMAP, and AMAC. The datasets are collected from a range of domains such as air traffic, academic citation, and shopping networks. Further details regarding these datasets are shown in Table 1. |
| Dataset Splits | No | The paper describes using several benchmark datasets for clustering but does not specify train/test/validation splits, their percentages, or how they were created. Clustering is an unsupervised task, and evaluation metrics are typically applied to the whole dataset. |
| Hardware Specification | Yes | All deep models are trained with an NVIDIA RTX-4090 GPU. |
| Software Dependencies | No | The paper does not explicitly mention any specific software dependencies or libraries with version numbers used for the experiments. |
| Experiment Setup | Yes | For the proposed CurGL, we use the adaptive hyper-parameter selection, which means α = ‖v‖₁/N and β = 1 − α. Additionally, a parameter grid search is conducted for γ. ... τ is the temperature parameter, S(·) is the similarity calculation function. ... where α, β and γ are hyper-parameters. The loss of the model can be considered as a function of the model parameters W and the indicator vector v, which can be expressed as L = g(W, v). |
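The quoted setup combines an adaptive weighting rule (α derived from the indicator vector v, β = 1 − α) with a temperature-scaled similarity S(·) typical of graph contrastive learning. A minimal sketch of these two pieces, assuming cosine similarity and an InfoNCE-style loss (the paper excerpt does not specify either; function names and the default τ = 0.5 are hypothetical):

```python
import numpy as np

def adaptive_weights(v):
    # Adaptive hyper-parameter rule as quoted: alpha = ||v||_1 / N, beta = 1 - alpha.
    alpha = np.abs(v).sum() / v.size
    return alpha, 1.0 - alpha

def info_nce_loss(z1, z2, tau=0.5):
    # Temperature-scaled contrastive loss with cosine similarity S(.,.);
    # matching rows of z1 and z2 are treated as positive pairs.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    # Row-wise softmax cross-entropy with positives on the diagonal.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Under this rule the two weights always sum to one, so only γ remains for the grid search mentioned in the excerpt.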