Training One-Dimensional Graph Neural Networks is NP-Hard
Authors: Robert Ganian, Mathis Rocton, Simon Wietheger
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | As our main result, we establish the NP-hardness of training ReLU-activated one-dimensional GNNs via a highly non-trivial reduction. We complement this result with algorithmic upper bounds for the training problem in the ReLU-activated and linearly-activated settings. (An illustrative sketch of this training problem follows the table.) |
| Researcher Affiliation | Academia | Robert Ganian, Mathis Rocton & Simon Wietheger (Algorithms and Complexity Group, TU Wien, Vienna, Austria; EMAIL, EMAIL) |
| Pseudocode | No | The paper focuses on theoretical proofs, reductions, and algorithmic complexity, and does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is theoretical and does not mention the release of any source code, nor does it provide links to any repositories. |
| Open Datasets | No | The paper discusses the theoretical problem of GNN training with abstract 'data X' and 'labeled nodes Y' as input to the problem definition, but it does not use or refer to any specific publicly available datasets for empirical evaluation. |
| Dataset Splits | No | The paper is theoretical and does not involve experimental evaluation on datasets, thus no dataset split information is provided. |
| Hardware Specification | No | The paper is purely theoretical, focusing on computational complexity and algorithmic bounds. It does not describe any experiments that would require specific hardware, and therefore no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not implement or run any experiments. Therefore, it does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical, presenting proofs and algorithmic complexity analysis, and does not involve practical experiments. Consequently, no experimental setup details, hyperparameters, or training configurations are provided. |
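To make the paper's central object concrete, here is a minimal sketch of the one-dimensional GNN training problem. This is not the paper's construction or reduction: it assumes a standard single-layer message-passing update with scalar (one-dimensional) node features and scalar weights, `x_v' = ReLU(w1 * x_v + w2 * sum_{u in N(v)} x_u + b)`, a squared loss on labeled nodes, and a naive grid search standing in for the (in general NP-hard) parameter search. All function names and the toy instance are hypothetical, chosen only to show what "training" means here.

```python
# Hedged sketch, not the paper's method: a one-dimensional ReLU GNN layer
# with scalar weights (w1, w2, b), trained by brute-force grid search.
import itertools

def relu(x: float) -> float:
    return max(0.0, x)

def layer(adj: dict[int, list[int]], x: dict[int, float],
          w1: float, w2: float, b: float) -> dict[int, float]:
    """One scalar (one-dimensional) ReLU message-passing update."""
    return {v: relu(w1 * x[v] + w2 * sum(x[u] for u in adj[v]) + b)
            for v in adj}

def training_loss(adj: dict[int, list[int]], x: dict[int, float],
                  labels: dict[int, float],
                  w1: float, w2: float, b: float) -> float:
    """Squared error on the labeled nodes after one layer."""
    out = layer(adj, x, w1, w2, b)
    return sum((out[v] - y) ** 2 for v, y in labels.items())

# Toy instance (hypothetical): a path on 3 nodes, one labeled node.
adj = {0: [1], 1: [0, 2], 2: [1]}
x = {0: 1.0, 1: 0.0, 2: 1.0}
labels = {1: 2.0}

# Grid search over (w1, w2, b) illustrates that training is a search
# over real-valued parameters; the paper shows this search is NP-hard
# in general for ReLU activations.
grid = [i / 4 for i in range(-8, 9)]
best = min(itertools.product(grid, grid, grid),
           key=lambda p: training_loss(adj, x, labels, *p))
print("best (w1, w2, b):", best,
      "loss:", training_loss(adj, x, labels, *best))
```

On this toy instance the search recovers a zero-loss solution (e.g. `w2 = 1.0, b = 0.0`); the paper's point is that no such efficient exhaustive strategy scales, since even the one-dimensional ReLU case is NP-hard.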