Expressive Power of Graph Neural Networks for (Mixed-Integer) Quadratic Programs
Authors: Ziang Chen, Xiaohan Chen, Jialin Liu, Xinshang Wang, Wotao Yin
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present numerical experiments in this section. Numerical validation of GNNs' expressive power. We train GNNs to fit Φ_obj or Φ_sol for LCQP or MI-LCQP instances. For both LCQP and MI-LCQP, we randomly generate 100 instances, each of which contains 10 constraints and 50 variables. The generated MI-LCQPs are all GNN-solvable and GNN-analyzable with probability one. Details on the data generation and training schemes can be found in Appendix G. We train four GNNs with four different embedding sizes and record their relative errors averaged on all instances during training. The results are reported in Figure 2. We can see that GNNs can fit Φ_obj and Φ_sol well for both LCQP and MI-LCQP. These results validate Theorems 3.3, 3.4, 4.6, and 4.7. We also observe that a larger embedding size increases the capacity of a GNN, resulting in not only lower final errors but also faster convergence. |
| Researcher Affiliation | Collaboration | 1Department of Mathematics, Massachusetts Institute of Technology, Cambridge, MA, United States 2Decision Intelligence Lab, Damo Academy, Alibaba US, Bellevue, WA, United States 3Department of Statistics and Data Science, University of Central Florida, Orlando, FL, United States. |
| Pseudocode | Yes | Algorithm 1 The WL test (Ref. to App. D for an example)... Algorithm 2 The linear-form WL test... Algorithm 3 The WL test for QCQP-Graphs |
| Open Source Code | Yes | Codes are available at https://github.com/liujl11git/GNN-QP. |
| Open Datasets | Yes | For LCQP, we train GNNs on the Maros and Mészáros Problem Set (Maros & Mészáros, 1999), which contains 138 challenging convex LCQPs... For MI-LCQP, we perform GNN training on 73 GNN-solvable instances from QPLib (Furini et al., 2019) that provide the optimal solutions and objectives. |
| Dataset Splits | Yes | For the generalization experiments, we first generate 25,000 LCQP instances for training, and then take the first 100/500/2,500/5,000/10,000 instances to form the smaller training sets. The test set contains 1,000 instances that are generated separately. |
| Hardware Specification | Yes | All experiments are conducted on a single NVIDIA Tesla V100 GPU. |
| Software Dependencies | Yes | We implement GNN with Python 3.9 and TensorFlow 2.16.1 (Abadi et al., 2016). |
| Experiment Setup | Yes | We adopt Adam (Kingma & Ba, 2014) to optimize the learnable parameters during training. We use an initial learning rate of 5×10⁻⁴ for all networks. We set the batch size to 2,500 or the size of the training set, whichever is smaller. |
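The training configuration quoted above (Adam, initial learning rate 5×10⁻⁴, batch size capped at 2,500 by the training-set size) can be sketched as a small Python snippet. This is a hypothetical illustration, not the paper's actual code (see the GNN-QP repository for that); the names `batch_size` and `TRAINING_CONFIG` are invented for clarity.

```python
# Hypothetical sketch of the experiment setup described in the table above.
# Function and variable names are illustrative, not taken from the paper's code.

def batch_size(train_set_size: int, cap: int = 2500) -> int:
    """Batch size is the training-set size, capped at 2,500 (per the paper)."""
    return min(cap, train_set_size)

TRAINING_CONFIG = {
    "optimizer": "Adam",            # Kingma & Ba, 2014
    "initial_learning_rate": 5e-4,  # 5 x 10^-4 for all networks
}

# Training-set sizes used in the paper's generalization experiments:
for n in (100, 500, 2500, 5000, 10000, 25000):
    print(f"train size {n:>6} -> batch size {batch_size(n)}")
```

Under this rule, the 100- and 500-instance training sets run full-batch, while all larger sets use mini-batches of 2,500.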