On the Relation between Trainability and Dequantization of Variational Quantum Learning Models

Authors: Elies Gil-Fuster, Casper Gyurik, Adrian Perez-Salinas, Vedran Dunjko

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We first formalize the key concepts and put them in the context of prior research. We introduce the role of variationalness of QML models using well-known quantum circuit architectures as leading examples. Our results provide recipes for variational QML models that are trainable and non-dequantizable. ... Our first contribution is the formal definition of trainability and dequantization, to disentangle the discussion from related proxies. We next explain how trainability and non-dequantization are mutually compatible for non-variational QML models, based on prior works. We formally introduce a measure of variationalness and prove accompanying results to justify the need for a restricted notion of trainability. Finally, we resolve the open question regarding trainability and dequantization in variational QML: there do exist variational QML models which are gradient-based trainable and still non-dequantizable.
Researcher Affiliation | Collaboration | 1 Dahlem Center for Complex Quantum Systems, Freie Universität Berlin, 14195 Berlin, Germany; 2 Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany; 3 aQaL Applied Quantum Algorithms, Universiteit Leiden; 4 Lorentz Instituut, Universiteit Leiden, Niels Bohrweg 2, 2333 CA Leiden, Netherlands; 5 LIACS, Universiteit Leiden, Niels Bohrweg 1, 2333 CA Leiden, Netherlands
Pseudocode | Yes | Algorithm 1: Gradient-based training algorithm.
Input: F := {f_ϑ | ϑ ∈ Θ} (parametrized hypothesis family).
Input: R̂_S(f_ϑ) (empirical risk functional).
Input: P(Θ) (parameter initialization distribution).
Input: C(ϑ) (learning rate, possibly depending on ∇_ϑ R̂_S(f_ϑ) and the Hessian H_ϑ R̂_S(f_ϑ)).
Output: ϑ ∈ Θ (trained parameters).
1: ϑ_0 ← P  ▷ sample initial parameters
2: for t in {1, ..., T} do
3:   ϑ_t ← ϑ_{t−1} + C(ϑ_{t−1}) ∇_ϑ R̂_S(f_{ϑ_{t−1}})  ▷ update rule
4: end for
5: return ϑ_T
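The training loop above can be sketched in a few lines of Python. This is an illustrative stand-in only, not the paper's implementation: the constant `learning_rate` replaces the general schedule C(ϑ), the gradient is approximated by central finite differences, and the step is taken in the negative-gradient direction (the sign that C(ϑ) would otherwise absorb). The quadratic `risk` in the usage example is a hypothetical placeholder for the empirical risk R̂_S(f_ϑ).

```python
import numpy as np

def train(risk, theta0, learning_rate=0.1, steps=100, eps=1e-6):
    """Gradient-based training loop in the spirit of Algorithm 1.

    risk:          empirical risk functional, a function of the parameters.
    theta0:        initial parameters (here fixed; Algorithm 1 samples from P(Theta)).
    learning_rate: constant stand-in for the schedule C(theta).
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        # Central finite-difference approximation of the gradient of the risk.
        grad = np.array([
            (risk(theta + eps * e) - risk(theta - eps * e)) / (2 * eps)
            for e in np.eye(theta.size)
        ])
        # Update rule: step against the gradient to decrease the risk.
        theta = theta - learning_rate * grad
    return theta

# Usage example: a toy quadratic risk with minimum at (1, -2).
theta_star = train(lambda th: np.sum((th - np.array([1.0, -2.0])) ** 2),
                   theta0=[0.0, 0.0])
```

For this quadratic toy risk the iterates contract toward the minimizer geometrically, so `theta_star` ends up very close to (1, -2) after 100 steps.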
Open Source Code | No | The paper does not contain any explicit statement about providing source code, nor does it provide a link to a code repository for the methodology described in this work.
Open Datasets | No | The paper is theoretical and focuses on formal definitions, theorems, and proofs related to trainability and dequantization of QML models. It does not conduct empirical studies using specific datasets that would require public access information.
Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments using specific datasets. Therefore, there is no discussion of training/test/validation splits.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments or simulations that would require specifying the hardware used. There is no mention of GPU, CPU, or other hardware components.
Software Dependencies | No | The paper is theoretical and does not conduct experiments requiring specific software dependencies with version numbers. There is no mention of programming languages, libraries, or solvers used for any empirical work.
Experiment Setup | No | The paper is theoretical and does not describe any empirical experiments. Therefore, it does not provide details on experimental setup, hyperparameters, or training configurations.