Clique Number Estimation via Differentiable Functions of Adjacency Matrix Permutations
Authors: Indradyumna Roy, Eeshaan Jain, Soumen Chakrabarti, Abir De
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on eight datasets show the superior accuracy of our approach. The code is available on GitHub. ... 4 EXPERIMENTS We report on extensive experiments using eight datasets, comparing the performance of MXNET with other methods. We also instrument different components of MXNET to understand their impact. |
| Researcher Affiliation | Academia | Indradyumna Roy 1, Eeshaan Jain 2, Soumen Chakrabarti 1, Abir De 1; 1 IIT Bombay, 2 EPFL |
| Pseudocode | Yes | Algorithm 1 MSS(B) # B is binary |
| Open Source Code | Yes | The code is available on GitHub. |
| Open Datasets | Yes | Datasets We conduct experiments on eight datasets, comprising five real-world and three synthetic datasets. Real-world datasets include (1) IMDB-BINARY (IMDB), (2) Enzymes and modular products of graph pairs from (3) PTC-MM-m, (4) AIDS, (5) Mutagenicity (MUTAG-m) datasets. We also generate three synthetic datasets from (6) DSJC, (7) Brockington (Brock), and (8) RB. ... We use modular graph products for three datasets, viz., AIDS, MUTAG, PTC-MM. We call them AIDS-m, MUTAG-m and PTC-MM-m respectively. Additional details are in Appendix E. ... sourced from the TUDatasets repository (Morris et al., 2020): (3) PTC-MM, (4) AIDS and (5) Mutagenicity. |
| Dataset Splits | Yes | We split each dataset D = {Gi, ω(Gi) | i ∈ [I]} into 60% training, 20% eval, and 20% test folds. |
| Hardware Specification | Yes | The training of our models and the baselines was performed on servers containing AMD EPYC 7642 48-Core Processors at 2.30GHz CPUs, and Nvidia RTX A6000 GPUs. |
| Software Dependencies | Yes | We implement our models using Python 3.10 and PyTorch 2.3.0. |
| Experiment Setup | Yes | All models are trained using the Adam optimizer, with a learning rate of 10^-3, and weight decay 5 × 10^-4. ... For the design of LComposite, we set δ = 1, and search for γ, λ in {0.25, 0.75}, and {0.1, 1} respectively. Hence, the total search space for hyperparameters consists of 8 combinations ({τ} × {γ} × {λ}), and we select the best model out of the 8 hyperparameter configurations. ... For the early stopping criteria based on the validation MSE, we use a patience parameter of 200 epochs. |
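The dataset-split cell above describes a 60/20/20 train/eval/test partition. A minimal sketch of such a split, assuming a random shuffle with a fixed seed (the excerpt does not specify the paper's exact splitting procedure):

```python
import random


def split_dataset(items, seed=0):
    """Partition items into 60% train / 20% eval / 20% test folds.

    The shuffle and seed are illustrative assumptions; the paper's
    exact split procedure is not given in the excerpt.
    """
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.6 * n)
    n_eval = int(0.2 * n)
    train = items[:n_train]
    eval_fold = items[n_train:n_train + n_eval]
    test = items[n_train + n_eval:]
    return train, eval_fold, test


train, eval_fold, test = split_dataset(range(100))
```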
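The experiment-setup cell describes an 8-point hyperparameter grid ({τ} × {γ} × {λ}, with δ fixed at 1) and early stopping on validation MSE with a patience of 200 epochs. A hedged sketch of that search loop's skeleton; the values in `TAU_GRID` are an assumption, since the excerpt names τ as a search dimension but does not list its values:

```python
import itertools

TAU_GRID = [0.1, 1.0]       # assumed: tau values are not given in the excerpt
GAMMA_GRID = [0.25, 0.75]   # from the quoted setup
LAMBDA_GRID = [0.1, 1.0]    # from the quoted setup


def hyperparameter_configs():
    """Enumerate the 8 configurations ({tau} x {gamma} x {lambda}), delta = 1."""
    return [
        {"tau": t, "gamma": g, "lambda": l, "delta": 1.0}
        for t, g, l in itertools.product(TAU_GRID, GAMMA_GRID, LAMBDA_GRID)
    ]


def early_stop(val_mse_history, patience=200):
    """Return True once validation MSE has not improved for `patience` epochs."""
    best_epoch = min(range(len(val_mse_history)), key=val_mse_history.__getitem__)
    return len(val_mse_history) - 1 - best_epoch >= patience
```

Per the table, each configuration would be trained with Adam at learning rate 1e-3 and weight decay 5e-4 (e.g. `torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)`), and the best of the 8 models selected on the eval fold.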