A Unified Framework for Optimization-Based Graph Coarsening

Authors: Manoj Kumar, Anurag Sharma, Sandeep Kumar

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments elucidate the efficacy of the proposed framework for real-world applications. The efficacy of the proposed algorithms are thoroughly validated through exhaustive experiments on both synthetic and real data sets. Section 7 presents experimental results on both real and synthetic data sets for all the proposed algorithms.
Researcher Affiliation | Academia | Manoj Kumar (1) EMAIL; Anurag Sharma (2) EMAIL; Sandeep Kumar (1,3,4) EMAIL. (1) Department of Electrical Engineering; (2) Department of Mathematics; (3) Yardi School of Artificial Intelligence; (4) Bharti School of Telecommunication Technology and Management. Indian Institute of Technology Delhi, New Delhi, 110016, India.
Pseudocode | Yes | Algorithm 1: FGC Algorithm ... Algorithm 2: Graph Coarsening (GC) Algorithm ... Algorithm 3: Two-Stage Optimization Algorithm ... Algorithm 4: Featured Graph Coarsening with Reduction (FGCR) Algorithm
Open Source Code | No | The code for all the experimental results is available at CODE. (This is a placeholder, not a concrete link to a repository.)
Open Datasets | Yes | Cora. This data set consists of p=2708, m=5278, and n=1433... Citeseer. This data set consists of p=3312, m=4536, and n=3703... Polblogs. This data set consists of p=1490, m=16715, and n=5000... ACM. This data set consists of p=3025, m=13128, and n=1870... Bunny. This data set consists of p=2503, m=78292, and n=5000... Minnesota. This data set consists of p=2642, m=3304, and n=5000... Airfoil. This data set consists of p=4253, m=12289, and n=5000... Erdos Renyi (ER). It is represented as G(n, p)... Barabasi Albert (BA). It is represented as G(n, m)... Watts Strogatz (WS). It is represented as G(n, k, p)... Random Geometric Graph (RGG). It is represented as G(n, radius)...
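The four synthetic graph families listed above are standard random-graph models. A minimal sketch of how they can be generated, assuming the `networkx` library (the paper does not state which graph library it uses) and illustrative parameter values:

```python
# Sketch: generating the four synthetic graph models named above with
# networkx. The parameter values below are illustrative, not the paper's.
import networkx as nx

p_nodes = 2000  # number of nodes (chosen here for illustration)

er = nx.erdos_renyi_graph(n=p_nodes, p=0.01)             # ER:  G(n, p)
ba = nx.barabasi_albert_graph(n=p_nodes, m=5)            # BA:  G(n, m)
ws = nx.watts_strogatz_graph(n=p_nodes, k=10, p=0.1)     # WS:  G(n, k, p)
rgg = nx.random_geometric_graph(n=p_nodes, radius=0.05)  # RGG: G(n, radius)

for name, g in [("ER", er), ("BA", ba), ("WS", ws), ("RGG", rgg)]:
    print(name, g.number_of_nodes(), g.number_of_edges())
```

Each generator returns an undirected `networkx.Graph`, from which the Laplacian needed by the coarsening algorithms can be computed.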
Dataset Splits | Yes | The training of GNN is performed using adam optimizer with hyperparameter tuned using grid search, i.e., learning rate 0.001 for 10 to 50 epochs on 80% of the graph data, and the model is tested on the remaining 20% of graph data for classification accuracy. The results for both FGC and OTCOARSENING are evaluated using 10-fold cross-validation.
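The evaluation protocol quoted above combines an 80/20 train/test split with 10-fold cross-validation. A minimal sketch using scikit-learn utilities (an assumption; the paper does not name its splitting code), with Cora's node count as an example:

```python
# Sketch of the reported protocol: 80% of nodes for GNN training, 20% held
# out for testing, plus 10-fold cross-validation over the same node set.
import numpy as np
from sklearn.model_selection import KFold, train_test_split

num_nodes = 2708  # e.g. Cora (p=2708), as listed above
nodes = np.arange(num_nodes)

# 80/20 split of the graph's nodes.
train_idx, test_idx = train_test_split(nodes, test_size=0.2, random_state=0)

# 10-fold cross-validation, as used for FGC vs. OTCOARSENING.
kf = KFold(n_splits=10, shuffle=True, random_state=0)
fold_sizes = [len(test_fold) for _, test_fold in kf.split(nodes)]
print(len(train_idx), len(test_idx), fold_sizes)
```

Note that scikit-learn rounds the test fraction up, so 20% of 2708 nodes yields 542 test nodes.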
Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments.
Software Dependencies | No | The paper mentions 'adam optimizer' but does not specify any software names with version numbers for reproducibility.
Experiment Setup | Yes | Hyperparameters (λ=500, α=500, γ=716.5) used in FGC algorithm. (Similar lines for other datasets.) The training of GNN is performed using adam optimizer with hyperparameter tuned using grid search, i.e., learning rate 0.001 for 10 to 50 epochs on 80% of the graph data... The coarsening ratio is set to 0.5 for both methods. For the FGC algorithm, the feature matrix X of size 34 × n is generated by sampling from X ∼ N(0, Θ†), where Θ is the Laplacian matrix of the given network... The results of FGC below are shown for n = 600; however, an n of the order of 5 × 34 has been observed to perform well.
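The synthetic-feature construction quoted above samples each feature column from a zero-mean Gaussian whose covariance is the pseudo-inverse of the graph Laplacian, which makes the features smooth on the graph. A sketch under stated assumptions: `networkx` and `numpy` as libraries, and the 34-node karate-club graph standing in for the network used in the paper:

```python
# Sketch: sample a feature matrix X ~ N(0, pinv(Theta)), where Theta is the
# graph Laplacian. The 34-node example graph and library choices are
# assumptions for illustration, not the paper's exact setup.
import networkx as nx
import numpy as np

g = nx.karate_club_graph()                       # 34-node example network
theta = nx.laplacian_matrix(g).toarray().astype(float)
cov = np.linalg.pinv(theta)                      # pseudo-inverse as covariance

rng = np.random.default_rng(0)
n_features = 600                                 # n = 600, as reported above
# Each of the n feature columns is one draw from N(0, pinv(Theta)).
X = rng.multivariate_normal(np.zeros(theta.shape[0]), cov,
                            size=n_features).T   # shape (34, 600)
print(X.shape)
```

Using pinv(Theta) as the covariance is the standard smooth-graph-signal model: feature variation is small across strongly connected node pairs.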