Disentangled and Self-Explainable Node Representation Learning

Authors: Simone Piaggesi, André Panisson, Megha Khosla

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on multiple benchmark datasets demonstrate that DiSeNE not only preserves the underlying graph structure but also provides transparent, human-understandable explanations for each embedding dimension.
Researcher Affiliation | Academia | Simone Piaggesi (EMAIL), University of Pisa, Pisa, Italy; André Panisson (EMAIL), CENTAI Institute, Turin, Italy; Megha Khosla (EMAIL), Delft University of Technology, Delft, Netherlands
Pseudocode | Yes | Algorithm A1: DiSeNE(G, A, K, T, L, λent) ... Algorithm A2: UnsupEdgeSubgraph(G, Z, d)
Open Source Code | Yes | We release our code and data at https://github.com/simonepiaggesi/disene.
Open Datasets | Yes | We ran experiments on four real-world datasets (Cora, Wiki, Facebook, PPI) and six synthetic datasets (Ring-of-Cliques, SBM, BA-Cliques, ER-Cliques, Tree-Cliques and Tree-Grids) with planted subgraphs... Ring-of-Cliques and SBM (Abbe, 2017) are implemented in NetworkX. For synthetic data, we present only results for plausibility metrics, leaving the other findings to Appendix F.
Dataset Splits | Yes | For link prediction, we use a 90%/10% train/test split, and for node classification, we use an 80%/20% split.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions software like Node2Vec, InfWalk, Graph Autoencoder, GraphSAGE, DGLFRM, DINE, GNNExplainer, PGExplainer, GCN, and GATv2, and provides links to their general repositories or papers. However, it does not specify concrete version numbers for software libraries or environments (e.g., Python, PyTorch, or specific versions of these packages).
Experiment Setup | Yes | For DeepWalk (Perozzi et al., 2014), we train the Node2Vec algorithm for 5 epochs with the following parameters: p=1, q=1, walk_length=20, num_walks=10, window_size=5. ... In GraphAE (Salha et al., 2020), we optimize a 1-layer GCN encoder with a random-walk loss setting analogous to DeepWalk. The model is trained for 50 iterations using the Adam optimizer and a learning rate of 0.01.
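The report notes that the Ring-of-Cliques and SBM synthetic datasets are generated with NetworkX. As an illustration (not the authors' generation script; the clique counts, sizes, and block probabilities below are arbitrary placeholders), both generators are available directly in NetworkX:

```python
import networkx as nx

# Ring-of-Cliques: cliques of fixed size joined in a ring, giving
# planted dense subgraphs that an explanation method should recover.
G_ring = nx.ring_of_cliques(num_cliques=8, clique_size=6)

# Stochastic Block Model (Abbe, 2017): dense intra-block and sparse
# inter-block edge probabilities define planted communities.
sizes = [50, 50, 50]
probs = [[0.25, 0.01, 0.01],
         [0.01, 0.25, 0.01],
         [0.01, 0.01, 0.25]]
G_sbm = nx.stochastic_block_model(sizes, probs, seed=0)
```

The other synthetic families (BA-Cliques, ER-Cliques, Tree-Cliques, Tree-Grids) combine a base graph with attached motifs and are detailed in the paper's appendix.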
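The split protocol above (90%/10% edges for link prediction) can be sketched with a plain random holdout; this is a minimal illustration, not the released code, and the helper name and seed are placeholders:

```python
import random

def split_edges(edges, test_frac=0.10, seed=42):
    """Randomly hold out a fraction of edges for link-prediction testing.

    Returns (train_edges, test_edges); the test set contains
    round-down test_frac of the input edges.
    """
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)
    n_test = int(len(edges) * test_frac)
    return edges[n_test:], edges[:n_test]
```

The 80%/20% node-classification split follows the same pattern over node labels (e.g., via scikit-learn's `train_test_split` with `test_size=0.2`).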
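For the DeepWalk baseline configured above, note that with p=1 and q=1 Node2Vec's biased walks reduce to plain first-order random walks. A minimal walk-generation sketch under those settings (an illustration only, not the authors' pipeline):

```python
import random
import networkx as nx

def generate_walks(G, num_walks=10, walk_length=20, seed=0):
    """Uniform random walks per node; with p=q=1, Node2Vec sampling
    is equivalent to these first-order (DeepWalk-style) walks."""
    rng = random.Random(seed)
    walks = []
    nodes = list(G.nodes())
    for _ in range(num_walks):
        rng.shuffle(nodes)  # new node order each pass, as in DeepWalk
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break  # dead end on isolated/sink nodes
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

The resulting walks would then be fed to a skip-gram model (window size 5, 5 epochs, per the setup above) to produce the embeddings.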