On the Hölder Stability of Multiset and Graph Neural Networks

Authors: Yair Davidson, Nadav Dym

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | (Section 5, Experiments) In the following experiments, we evaluate SortMPNN, AdaptMPNN, ReluMPNN and SmoothMPNN. As the last two architectures closely resemble standard MPNNs like GIN (Xu et al., 2019) with ReLU/smooth activation, our focus is mainly on the SortMPNN and AdaptMPNN architectures. In our experiments we consider several variations of these architectures which were omitted in the main text for brevity, and are described in appendix G alongside further experiment details.
Researcher Affiliation | Academia | Department of Computer Science, Technion, Haifa, Israel; Department of Mathematics, Technion, Haifa, Israel
Pseudocode | No | The paper describes the methodology narratively and mathematically, but does not include explicit pseudocode blocks or algorithm listings.
Open Source Code | Yes | Code is available at https://github.com/YDavidson/On-The-Holder-Stability
Open Datasets | Yes | We test SortMPNN and AdaptMPNN on a subset of the TUDatasets (Morris et al., 2020), including MUTAG, PROTEINS, PTC, NCI1 and NCI109.
Dataset Splits | Yes | Results are reported using the evaluation method from Xu et al. (2019): the mean and standard deviation of the validation performance over stratified 10-folds, where the validation curves of all 10 folds are aggregated and the best epoch is chosen.
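For readers unfamiliar with this evaluation protocol, the following is a minimal sketch of how the aggregated-curve selection can work: average the per-fold validation curves epoch-wise, pick the epoch that maximizes the mean curve, then report mean and standard deviation across folds at that epoch. The function name and array layout are illustrative, not taken from the paper's code.

```python
import numpy as np

def best_epoch_metrics(fold_curves):
    """Sketch of the Xu et al. (2019)-style evaluation: aggregate the
    per-fold validation curves, choose the best epoch on the aggregate,
    and report fold statistics at that epoch.

    fold_curves: array-like of shape (n_folds, n_epochs) holding
    validation accuracy per fold and epoch.
    """
    curves = np.asarray(fold_curves, dtype=float)
    mean_curve = curves.mean(axis=0)       # epoch-wise mean over folds
    best_epoch = int(mean_curve.argmax())  # epoch chosen on the aggregate curve
    at_best = curves[:, best_epoch]
    return best_epoch, at_best.mean(), at_best.std()

# Toy example: 3 folds, 4 epochs of validation accuracy.
curves = [[0.5, 0.7, 0.8, 0.6],
          [0.4, 0.6, 0.9, 0.7],
          [0.5, 0.8, 0.7, 0.6]]
epoch, mean, std = best_epoch_metrics(curves)
```

Note that selecting the epoch on the aggregated validation curve (rather than per fold) is what makes the reported numbers comparable across architectures under this protocol.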
Hardware Specification | Yes | All experiments were executed on a cluster of 8 NVIDIA A40 49GB GPUs.
Software Dependencies | No | The paper mentions tools like 'wandb' and states that 'The code from the official LRGB github repo was used to run these experiments', but it does not provide version numbers for the software libraries or frameworks used in the authors' own implementation.
Experiment Setup | Yes | The reported results in Table 3 for the fully trained SortMPNN and AdaptMPNN were chosen as the best single result out of 30 random hyper-parameter draws from the following ranges: batch size {32, 64}, depth [2, 5], embed dim {16, 32, 64}, combine {ConcatProject, LinearCombination, LTSum}, dropout {0, 0.1, 0.2}, output MLP depth {1, 2, 3}, weight decay {0, 0.01, 0.1}, lr [0.0001, 0.01], optimizer {adam, adamw}, #epochs = 500.
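The search space above can be sketched as a simple random-sampling loop. This is a hedged illustration only: the key names, the log-uniform draw for the learning rate, and the seed are assumptions, since the report does not state how the 30 configurations were sampled.

```python
import random

# Illustrative encoding of the hyper-parameter search space quoted above.
# Key names are hypothetical; discrete sets are sampled uniformly.
SEARCH_SPACE = {
    "batch_size": [32, 64],
    "depth": [2, 3, 4, 5],            # integer range [2, 5]
    "embed_dim": [16, 32, 64],
    "combine": ["ConcatProject", "LinearCombination", "LTSum"],
    "dropout": [0.0, 0.1, 0.2],
    "output_mlp_depth": [1, 2, 3],
    "weight_decay": [0.0, 0.01, 0.1],
    "optimizer": ["adam", "adamw"],
}

def sample_config(rng):
    cfg = {key: rng.choice(values) for key, values in SEARCH_SPACE.items()}
    # Learning rate drawn log-uniformly from [1e-4, 1e-2] (an assumption;
    # the paper only states the interval).
    cfg["lr"] = 10 ** rng.uniform(-4, -2)
    cfg["epochs"] = 500  # fixed, per the quoted setup
    return cfg

rng = random.Random(0)
configs = [sample_config(rng) for _ in range(30)]
```

Each of the 30 sampled configurations would then be trained for 500 epochs, and the single best validation result reported.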