Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

LASE: Learned Adjacency Spectral Embeddings

Authors: María Sofía Pérez Casulo, Marcelo Fiori, Federico Larroca, Gonzalo Mateos

TMLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section we experimentally validate the proposed LASE architecture in two GRL tasks. First, as an alternative to ASE or GD to compute the spectral embeddings of a large graph. In this case, LASE may be trained on smaller sampled subgraphs including only a fraction of the nodes of the original one. The training set comprises pairs, each consisting of a graph A(i) and a random noise sample X0(i).
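The quoted training setup — pairs of sampled subgraphs and random noise inputs — can be sketched as below. The helper `sample_subgraph`, the node fraction, the graph model, and the embedding dimension are all illustrative choices, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_subgraph(A, frac=0.2, rng=rng):
    """Induced subgraph on a random fraction of the nodes (illustrative)."""
    n = A.shape[0]
    idx = rng.choice(n, size=int(frac * n), replace=False)
    return A[np.ix_(idx, idx)]

# Toy Erdos-Renyi graph standing in for the large original graph
n, d = 50, 2
A = (rng.uniform(size=(n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric adjacency, no self-loops

# Training set: pairs (A(i), X0(i)) of a sampled subgraph and i.i.d. noise
training_pairs = []
for _ in range(5):
    A_i = sample_subgraph(A)
    X0_i = rng.uniform(0, 1, size=(A_i.shape[0], d))
    training_pairs.append((A_i, X0_i))
```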
Researcher Affiliation | Academia | María Sofía Pérez Casulo (EMAIL), Facultad de Ingeniería, Universidad de la República; Marcelo Fiori (EMAIL), Facultad de Ingeniería, Universidad de la República; Federico Larroca (EMAIL), Facultad de Ingeniería, Universidad de la República; Gonzalo Mateos (EMAIL), Department of Electrical and Computer Engineering, University of Rochester
Pseudocode | Yes |
Algorithm 1 An iterative algorithm
Require: Initialize X0, l ← 0
while convergence criteria not met do
    Xl+1 ← h(Xl, θl)
    l ← l + 1
end while
return Xl
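Algorithm 1's generic loop can be sketched in a few lines of Python. The update map `h`, the per-iteration parameters `theta`, and the convergence threshold are placeholders — the paper instantiates them with the LASE layer, which is not reproduced here.

```python
import numpy as np

def iterative_embedding(h, theta, X0, max_iters=100, tol=1e-6):
    """Generic unrolled iteration X_{l+1} = h(X_l, theta_l).

    `h` and `theta` stand in for the paper's learned update map;
    the stopping rule here is a simple fixed-point tolerance.
    """
    X = X0
    for l in range(max_iters):
        X_next = h(X, theta[l % len(theta)])
        if np.linalg.norm(X_next - X) < tol:  # convergence criterion
            return X_next
        X = X_next
    return X

# Toy usage: a contraction toward zero converges to (near) zero
theta = [0.5]
h = lambda X, t: t * X
X_final = iterative_embedding(h, theta, np.ones((4, 2)))
```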
Open Source Code | Yes | All anonymized source code and examples are available as supplementary files that accompany this submission.
Open Datasets | Yes | Here we assess LASE's ability to approximate node embeddings for real-world graphs. For instance, let us consider a set of networks typically used as benchmarks in several learning tasks: Cora (Yang et al., 2016), Citeseer (Yang et al., 2016), Twitch ES (Rozemberczki et al., 2021), and Amazon Photo (Shchur et al., 2018). ... Finally, let us consider the problem of embedding a network with unknown edges. To this end, we use United Nations (UN) General Assembly voting data (Voeten et al., 2009).
Dataset Splits | Yes | We consider six randomly selected countries, for each of which we have randomly chosen 30% of the roll calls (among those it voted on) for prediction. These entries are thus also tagged as unknown in the mask matrix Mobs.
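A hold-out mask of the kind described can be built as in this sketch; the roll-call and country counts and the name `M_obs` are illustrative sizes, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)

n_rolls, n_countries = 100, 6  # illustrative sizes
M_obs = np.ones((n_rolls, n_countries), dtype=bool)  # True = observed

# For each country, hide a random 30% of its roll calls for prediction;
# hidden entries are tagged as unknown in the mask matrix
for c in range(n_countries):
    hidden = rng.choice(n_rolls, size=int(0.3 * n_rolls), replace=False)
    M_obs[hidden, c] = False
```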
Hardware Specification | Yes | All experiments were run on a server equipped with an NVIDIA GeForce RTX 3060 (12 GB) GPU and a 13th-generation Intel Core i5-13400F processor with 64 GB of RAM.
Software Dependencies | No | Our implementation of LASE is based on PyG (Fey & Lenssen, 2019), fully integrated as a new message passing layer in this popular framework.
Experiment Setup | Yes | To compute the ASE we rely on the state-of-the-art RDPG inference library Graspologic (Chung et al., 2019). Our implementation of LASE is based on PyG (Fey & Lenssen, 2019), fully integrated as a new message passing layer in this popular framework. All experiments were run on a server equipped with an NVIDIA GeForce RTX 3060 (12 GB) GPU and a 13th-generation Intel Core i5-13400F processor with 64 GB of RAM. As input to LASE we use X0 with i.i.d. random entries sampled from the uniform distribution on [0, 1]. This corresponds to the same initialization of the factored GD in (Fiori et al., 2024), for which here we chose the step size α in (5) through the Armijo rule. The embedding dimension d is treated as a hyperparameter. For the selection of Q in GLASE, we rely on the subgraph sampling plus eigendecomposition method described in the closing discussion of Section 3.3.
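The uniform-[0,1] initialization and Armijo step-size rule mentioned in the quoted setup can be illustrated with a minimal factored gradient descent on a toy RDPG. The squared-Frobenius objective and all constants below are assumptions for the sketch, not the (Fiori et al., 2024) implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RDPG: sample an adjacency matrix from a rank-d probability matrix
n, d = 30, 2
X_true = rng.uniform(0, 0.7, size=(n, d))
P = np.clip(X_true @ X_true.T, 0, 1)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric adjacency, no self-loops

def loss(X):
    # Assumed objective: (1/4) * ||A - X X^T||_F^2
    return 0.25 * np.linalg.norm(A - X @ X.T, "fro") ** 2

def grad(X):
    return (X @ X.T - A) @ X

X = rng.uniform(0, 1, size=(n, d))  # i.i.d. Uniform[0,1] init, as quoted
f_init = loss(X)

for _ in range(200):
    g = grad(X)
    # Armijo backtracking: shrink alpha until sufficient decrease holds
    alpha, c, rho = 1.0, 1e-4, 0.5
    f0, g2 = loss(X), np.sum(g * g)
    while loss(X - alpha * g) > f0 - c * alpha * g2:
        alpha *= rho
    X = X - alpha * g
```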