Scalable Quantum-Inspired Optimization Through Dynamic Qubit Compression

Authors: Co Tran, Quoc-Bao Tran, Hy Truong Son, Thang N Dinh

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments to evaluate GRANITE's effectiveness in compressing Ising models while maintaining solution quality. Our evaluation focused on the trade-off between compression levels and solution accuracy across various graph topologies and sizes. The experiments were performed on D-Wave's Advantage Quantum Processing Unit (QPU), featuring the advanced Pegasus topology (P16) with 5,640 qubits. Experimental Setup. Dataset. We generate random Ising Hamiltonians represented as graphs, with the spins as nodes and edges following three graph topologies: Erdős–Rényi (ER), Barabási–Albert (BA), and Watts–Strogatz (WS) models. ... The evaluation set contains 20,000 graphs to assess effectiveness on unseen graphs. The ground-state labels (alignment, anti-alignment, or neutral) for each edge were computed using an exhaustive search. Experiment Results. Solution quality. Our experimental results demonstrate GRANITE's effectiveness in compressing Ising models while maintaining high solution quality across different graph types and sizes. Table 1 presents optimality for different compression ratios (12.5%, 25%, 50%, 75%) across graphs of sizes n = {25, 50, 100, 200, 400}. Ablation Studies. We conducted comprehensive ablation studies to evaluate the impact of two key architectural choices: the loss function and the number of GNN layers.
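The exhaustive ground-state search used to label edges can be sketched in a few lines, assuming the standard Ising energy E(s) = Σᵢ hᵢsᵢ + Σ₍ᵢ,ⱼ₎ Jᵢⱼsᵢsⱼ. The function names and the zero-coupling rule for "neutral" edges below are illustrative assumptions, not the paper's implementation:

```python
import itertools

def ground_state(h, J):
    """Exhaustively search all 2^n spin assignments of a small Ising model.

    h: dict node -> field strength; J: dict (u, v) -> coupling.
    Returns the configuration minimizing E(s) = sum_i h_i s_i
    + sum_(i,j) J_ij s_i s_j, and its energy.
    """
    nodes = sorted(h)
    best_energy, best_spins = float("inf"), None
    for assignment in itertools.product([-1, +1], repeat=len(nodes)):
        spins = dict(zip(nodes, assignment))
        energy = sum(h[i] * spins[i] for i in nodes)
        energy += sum(Jij * spins[u] * spins[v] for (u, v), Jij in J.items())
        if energy < best_energy:
            best_energy, best_spins = energy, spins
    return best_spins, best_energy

def edge_labels(spins, J):
    """Label each edge by how its endpoints relate in the ground state.
    Treating zero couplings as 'neutral' is an assumption here."""
    labels = {}
    for (u, v), Jij in J.items():
        if Jij == 0:
            labels[(u, v)] = "neutral"
        elif spins[u] == spins[v]:
            labels[(u, v)] = "alignment"
        else:
            labels[(u, v)] = "anti-alignment"
    return labels
```

This brute force is only viable at the training sizes (2 to 26 nodes), which is consistent with the paper's choice of small labeled graphs.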
Researcher Affiliation | Academia | ¹University of Texas at Austin, ²Virginia Commonwealth University, ³University of Alabama at Birmingham
Pseudocode | Yes | Algorithm 1: GRANITE (Graph Neural Ising Transformer)
Require:
1: Ising model graph G = (V, E)
2: Desired reduction ratio α
3: Number of GNN layers L
Ensure: Reduced Ising model graph G
4: Initialize the GNN model with specified hyperparameters
5: while size(G) > α · (initial size of G) do
6:   for ℓ = 1 to L do
7:     Compute node representations h(ℓ)_v, ∀v ∈ V
8:     Compute edge representations e(ℓ)_uv, ∀(u, v) ∈ E
9:   end for
10:  for each edge (u, v) ∈ E do
11:    z_uv = h(L)_u ∥ h(L)_v ∥ e(L)_uv
12:    ŷ_uv = σ(⟨w, z_uv⟩)
13:    C_uv = ŷ_uv log(ŷ_uv) + (1 − ŷ_uv) log(1 − ŷ_uv)
14:  end for
15:  (û, v̂) = arg max_{(u,v)∈E} C_uv
16:  if ŷ_ûv̂ < 0.5 then ▷ Regular merge
17:    G = Merge(G, û, v̂)
18:  else ▷ Flip then merge
19:    G = Flip(G, û)
20:    G = Merge(G, û, v̂)
21:  end if
22: end while
23: return G
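Algorithm 1's greedy loop can be sketched in plain Python, with the trained GNN replaced by a stand-in `predict` callable that maps an edge to ŷ ∈ [0, 1]. The `compress` function, the frozenset edge encoding, and the coupling-summing rewiring rule are illustrative assumptions, not the authors' code:

```python
import math

def confidence(y_hat, eps=1e-12):
    """C = y*log(y) + (1-y)*log(1-y): near 0 when the scorer is sure
    either way, about -0.69 when y_hat is close to 0.5."""
    y = min(max(y_hat, eps), 1.0 - eps)
    return y * math.log(y) + (1.0 - y) * math.log(1.0 - y)

def compress(nodes, edges, predict, alpha):
    """Greedy merge loop: repeatedly contract the highest-confidence
    edge until at most ceil(alpha * |V|) nodes remain.

    nodes: set of node ids; edges: dict frozenset({u, v}) -> coupling;
    predict: stand-in for the trained GNN (y_hat >= 0.5 means
    "flip u, then merge"); alpha: desired reduction ratio.
    """
    nodes, edges = set(nodes), dict(edges)
    target = math.ceil(alpha * len(nodes))
    while len(nodes) > target and edges:
        best = max(edges, key=lambda e: confidence(predict(e)))
        u, v = sorted(best)
        if predict(best) >= 0.5:
            # Flip u's spin: negate every coupling incident to u.
            for e in list(edges):
                if u in e:
                    edges[e] = -edges[e]
        # Contract v into u: rewire v's other edges onto u, summing
        # couplings when u already shares an edge with that neighbour.
        for e in list(edges):
            if v in e:
                w = edges.pop(e)
                (other,) = e - {v}
                if other != u:
                    key = frozenset({u, other})
                    edges[key] = edges.get(key, 0.0) + w
        nodes.discard(v)
    return nodes, edges
```

One design note: scoring by C_uv = ŷ log ŷ + (1 − ŷ) log(1 − ŷ) and taking the arg max selects the edge the model is *most certain* about, so the least ambiguous contractions are applied first.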
Open Source Code | No | The paper does not provide an explicit statement about releasing code, nor a link to a code repository for the GRANITE methodology. It mentions third-party tools such as 'minorminer' and 'Gurobi Optimizer', but not its own implementation.
Open Datasets | No | The paper states: "We generate random Ising Hamiltonians represented as graphs with the spins as nodes and edges following three graph topologies: Erdős–Rényi (ER), Barabási–Albert (BA), and Watts–Strogatz (WS) models." While the generation models are known, the paper does not provide a link or specific access information for the particular instances generated and used in the experiments.
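Since the instances themselves are not released, a reader would have to regenerate comparable inputs. The sketch below draws an Erdős–Rényi topology with uniform random couplings using only the standard library; the coupling range and the omission of field terms are illustrative choices, not the paper's:

```python
import random

def random_er_ising(n, p, seed=None):
    """Sample an Erdos-Renyi graph G(n, p) and attach a random Ising
    coupling in [-1, 1] to each edge (field terms omitted for brevity).

    Returns a dict (u, v) -> coupling over the sampled edges.
    """
    rng = random.Random(seed)
    J = {}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:  # include each edge independently
                J[(u, v)] = rng.uniform(-1.0, 1.0)
    return J
```

The BA and WS topologies would be generated analogously (e.g. via `networkx.barabasi_albert_graph` and `networkx.watts_strogatz_graph`), with couplings attached the same way.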
Dataset Splits | Yes | The dataset includes 97,500 graphs (325 distinct configurations × 100 instances × 3 topologies), split into 80% training and 20% validation sets. For each topology, the number of nodes ranges from 2 to 26 (25 distinct sizes), while the average node degree varies from 1 to n − 1 (summing to 325 configurations). For each combination of node count and average degree, we generate 100 unique graphs, resulting in a total dataset of 97,500 graphs (325 × 100 instances × 3 topologies). We split at a ratio of 80%/20% for training/validation, respectively. The evaluation set contains 20,000 graphs to assess effectiveness on unseen graphs.
Hardware Specification | Yes | The experiments were performed on the D-Wave Advantage 4.1 Quantum Processing Unit (QPU), featuring the advanced Pegasus topology (P16) with 5,640 qubits and an annealing time of 40 ns.
Software Dependencies | No | The paper mentions the Adam optimizer, 'minorminer' (Cai, Macready, and Roy 2014), and 'Gurobi Optimizer' (Gurobi Optimization, LLC 2024), but it does not specify version numbers for any of the software components used in the experiments.
Experiment Setup | Yes | Hyperparameters. We employ the Adam optimizer with a learning rate of 0.001. The maximum number of iterations is set to 300, with the best model saved based on the lowest loss on the validation set. The MLP layers are relatively small, with sizes depending on the dimensions of H(0) and E(0), which in this case are 2 and 2, respectively. By default, GRANITE consists of three GNN layers and uses a hybrid loss function with λ = 0.5.
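The stated training protocol (Adam at lr = 0.001, at most 300 iterations, best checkpoint kept by validation loss) reduces to simple control flow. In this sketch, `model` and `optimizer_step` are placeholders for the actual GNN and its Adam update, since the paper's implementation is not available:

```python
def train(model, optimizer_step, train_batches, val_batches, max_iters=300):
    """Skeleton of the stated setup: up to `max_iters` iterations,
    keeping the checkpoint with the lowest validation loss.

    model: object exposing loss(batch) and state(); optimizer_step:
    callable applying one Adam update (lr = 1e-3 per the paper).
    Both interfaces are assumptions for illustration.
    """
    best_val, best_state = float("inf"), None
    for _ in range(max_iters):
        for batch in train_batches:
            optimizer_step(model, batch)  # one Adam update
        val_loss = sum(model.loss(b) for b in val_batches)
        if val_loss < best_val:           # save best model so far
            best_val, best_state = val_loss, model.state()
    return best_state, best_val
```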