QuaDiM: A Conditional Diffusion Model For Quantum State Property Estimation

Authors: Yehui Tang, Mabiao Long, Junchi Yan

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate QuaDiM on large-scale QPE tasks using classically simulated data on the 1D anti-ferromagnetic Heisenberg model with system sizes up to 100 qubits. Numerical results demonstrate that QuaDiM outperforms baseline models, particularly auto-regressive approaches, under conditions of limited measurement data during training and reduced sample complexity during inference.
Researcher Affiliation | Academia | Yehui Tang (1), Mabiao Long (1), Junchi Yan (1,2). (1) Sch. of Computer Science & Sch. of Artificial Intelligence, Shanghai Jiao Tong University; (2) Shanghai Artificial Intelligence Laboratory.
Pseudocode | No | The paper describes methods and equations (e.g., in Sections 3.2.1, 3.2.2, and 3.2.3) but does not contain a clearly labeled pseudocode block or algorithm figure.
Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for QuaDiM, nor does it include any links to a code repository.
Open Datasets | No | We classically simulate relatively large-scale quantum systems with up to 100 qubits to generate extensive training and test datasets for evaluation, showing QuaDiM's scalability and practical applicability.
Dataset Splits | Yes | For all the methods, we set N_tr = 100 and N_te = 20, with the number of qubits in the quantum system L ∈ {10, 40, 70, 100}. To construct the training set, we perform repeated measurements of M_in = 1000 for each ground state.
Hardware Specification | Yes | When reducing inference to T_f = 500 diffusion steps on a single GPU (2080 Ti), QuaDiM achieves a lower RMSE score compared to the CS baseline while demonstrating an inference speed comparable to LLM4QPE.
Software Dependencies | No | The paper mentions machine learning models (RNN, Transformer) and an optimizer (Adam) but does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup | Yes | In this paper, all the experimental results of QuaDiM are reported for a transformer configuration consisting of 4 heads, 4 layers, and 128 hidden dimensions. The maximum number of denoising time steps is set to T = 2000. ... For all the methods, we set N_tr = 100 and N_te = 20, with the number of qubits in the quantum system L ∈ {10, 40, 70, 100}. To construct the training set, we perform repeated measurements of M_in = 1000 for each ground state. ... A grid search is performed to identify the optimal regularization strength, with candidate values uniformly distributed on a logarithmic scale from 0.001 to 100. We employ a 5-fold cross-validation strategy on the training dataset... The model architecture includes a hidden layer with 128 units and is trained using the Adam optimizer.
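The hyperparameter search quoted in the Experiment Setup row (candidate regularization strengths uniform on a log scale from 0.001 to 100, selected by 5-fold cross-validation on the N_tr = 100 training states) can be sketched as below. This is a minimal NumPy illustration, not the authors' implementation: the number of grid points (6), the shuffling seed, and the `score_fn` placeholder standing in for model fitting are all assumptions.

```python
import numpy as np

# Sketch of the regularization search described in the report above.
# Assumptions (not from the paper): 6 grid points, fixed seed, and a
# caller-supplied score_fn standing in for baseline model fitting/scoring.

N_TR = 100  # training ground states (N_tr in the paper)

def log_grid(low=1e-3, high=1e2, num=6):
    """Candidate regularization strengths, uniformly spaced on a log scale."""
    return np.logspace(np.log10(low), np.log10(high), num)

def five_fold_splits(n_samples, seed=0):
    """Shuffle sample indices and cut them into 5 disjoint validation folds."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), 5)

def cross_validate(score_fn, n_samples=N_TR):
    """Return the grid value with the highest mean validation score.

    score_fn(strength, train_idx, val_idx) is a hypothetical placeholder:
    fit a baseline with the given regularization strength on train_idx and
    return its validation score on val_idx.
    """
    folds = five_fold_splits(n_samples)
    all_idx = np.arange(n_samples)
    best, best_score = None, -np.inf
    for strength in log_grid():
        scores = [
            score_fn(strength, np.setdiff1d(all_idx, val_idx), val_idx)
            for val_idx in folds
        ]
        if np.mean(scores) > best_score:
            best, best_score = strength, np.mean(scores)
    return best
```

For example, a dummy `score_fn` that peaks at strength 0.1 makes `cross_validate` return 0.1, confirming the selection loop; in the real setting the score would come from refitting a baseline model on each fold.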