Optimization by Parallel Quasi-Quantum Annealing with Gradient-Based Sampling

Authors: Yuma Ichikawa, Yamato Arai

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments demonstrate that our method is a competitive general-purpose solver, achieving performance comparable to iSCO and learning-based solvers across various benchmark problems. Notably, our method exhibits superior speed-quality trade-offs for large-scale instances compared to iSCO, learning-based solvers, commercial solvers, and specialized algorithms.
Researcher Affiliation | Collaboration | Yuma Ichikawa (Fujitsu Limited; Department of Basic Science, University of Tokyo); Yamato Arai (Fujitsu Limited; Department of Basic Science, University of Tokyo)
Pseudocode | No | The paper describes algorithms and methods using mathematical equations and textual descriptions but does not include any clearly labeled pseudocode blocks or algorithm figures.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code, nor does it provide a link to a code repository. It mentions using existing tools like the AdamW optimizer but does not provide its own implementation.
Open Datasets | Yes | We evaluate PQQA using the MIS benchmarks from recent studies (Goshvadi et al., 2023; Qiu et al., 2022), which include graphs from SATLIB (Hoos & Stützle, 2000) and Erdős–Rényi graphs (ERGs). ... We report the ApRs on synthetic graphs generated using the RB model (Xu et al., 2007) and a real-world Twitter graph (Jure, 2014). ... We report the results on the publicly available COLOR dataset (Trick, 2002), commonly used in graph-based benchmark studies.
Dataset Splits | No | The paper describes using various benchmark datasets and instances, some of which are from previous studies (e.g., 'instances used in iSCO'). It mentions running PQQA 'on each instance' or with 'fewer steps (3000 steps) or more steps (30000 steps)' but does not specify how these datasets are formally split into training, validation, or test sets for the experiments presented, nor does it refer to a standard split. The dataset statistics in Appendix D.2 show the number of instances used for evaluation, not explicit splits.
Hardware Specification | Yes | We report the runtime of PQQA on a single V100 GPU. The runtime can be further improved with more powerful GPUs or additional GPUs. ... If solvers cannot run even on a V100 GPU, the result is reported as N/A.
Software Dependencies | No | The paper mentions the AdamW optimizer and PyTorch (GPU) but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | We use the α-entropy with α = 4 across all experiments. The parameter γ is annealed linearly from γmin = 2 to γmax = 0.1 with each gradient update. ... We set σ(w) = Clamp(w). ... The AdamW optimizer (Loshchilov & Hutter, 2017) is used. The parallel number S is set to 100 or 1,000. ... We linearly anneal γ from γmin to γmax over the total steps. In general, we set γmin = 2 and γmax = 0.1. For the Max Cut problem, γmin = 20 was used for larger graphs, while γmin = 5 was applied for the remaining graphs. ... The learning rate was set to an appropriate value from {1, 0.1, 0.01}, depending on the specific problem. The weight decay was fixed at 0.01 and the temperature was fixed as T = 0.001.
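The reported setup (S parallel relaxed solutions, σ(w) = Clamp(w), AdamW with weight decay 0.01, and a linear γ schedule) can be sketched as follows. This is a minimal illustration, not the authors' code: the `energy` function is a placeholder for a problem's relaxed cost, and the penalty term stands in for the paper's α-entropy, whose exact form is not quoted in this report.

```python
import torch

def gamma_schedule(step: int, total_steps: int,
                   gamma_min: float = 2.0, gamma_max: float = 0.1) -> float:
    """Linearly interpolate gamma from gamma_min to gamma_max over total_steps."""
    frac = step / max(total_steps - 1, 1)
    return gamma_min + frac * (gamma_max - gamma_min)

S, N = 100, 32          # number of parallel runs, problem size (illustrative)
total_steps = 3000      # the "fewer steps" setting reported in the paper
w = torch.randn(S, N, requires_grad=True)
# Learning rate would be chosen from {1, 0.1, 0.01} per problem; 0.1 shown here.
opt = torch.optim.AdamW([w], lr=0.1, weight_decay=0.01)

def energy(p: torch.Tensor) -> torch.Tensor:
    # Placeholder objective; PQQA would plug in the problem's relaxed cost here.
    return (p * (1 - p)).sum(dim=-1)

for step in range(total_steps):
    gamma = gamma_schedule(step, total_steps)
    p = torch.clamp(w, 0.0, 1.0)  # sigma(w) = Clamp(w)
    # gamma weights a placeholder penalty (standing in for the alpha-entropy term)
    # against the objective, averaged over the S parallel runs.
    loss = (energy(p) + gamma * p.pow(4).sum(dim=-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

final_p = torch.clamp(w, 0.0, 1.0).detach()
```

Because all S runs are batched along the leading tensor dimension, the parallelism comes for free from GPU vectorization, which is consistent with the single-V100 runtimes reported above.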