Q-MAML: Quantum Model-Agnostic Meta-Learning for Variational Quantum Algorithms

Authors: Junyong Lee, Jeihee Cho, Shiho Kim

AAAI 2025

Reproducibility Variable Result LLM Response
Research Type: Experimental. We validate our approach through experiments, including distribution function mapping and optimization of the Heisenberg XYZ Hamiltonian. The result implies that the Learner successfully estimates initial parameters that generalize across the problem space, enabling fast adaptation. We conduct experiments on two different applications: Heisenberg XYZ Hamiltonian, and Molecule Hamiltonian.
Researcher Affiliation: Academia. Junyong Lee (1*), Jeihee Cho (2*), Shiho Kim (2); (1) BK21 Graduate Program in Intelligent Semiconductor Technology, Yonsei University, Korea; (2) Yonsei University, Korea; EMAIL, EMAIL, EMAIL
Pseudocode: Yes.
Algorithm 1: MAML for Pre-training Phase
Parameter: α (learning rate)
Require: p(T): distribution of tasks
Require: classical optimizer O, cost function l_T(θ)
1: randomly initialize W
2: while not done do
3:   Sample batch of tasks T_i ~ p(T)
4:   for all T_i do
5:     Get θ from Learner: θ = h_W(φ_i)
6:     Prepare quantum state |ψ(θ)⟩ = g(θ)|0⟩
7:     Evaluate cost function l_{T_i}(θ) = ⟨ψ(θ)| Ĥ |ψ(θ)⟩
8:     Calculate the gradient ∇_W l_{T_i}(g(h_W(φ_i)))
9:   end for
10:  Update parameter W ← W − α ∇_W Σ_{T_i} l_{T_i}(g(h_W(φ_i)))
11: end while
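As a concreteness check, Algorithm 1 can be sketched end to end on a toy problem: a single-qubit circuit g(θ) = RY(θ) with observable Ĥ = Z (so the cost is exactly cos θ), a scalar linear Learner standing in for the paper's neural network, and finite differences standing in for a quantum gradient rule. All of these substitutions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    # Toy stand-in for l_T(θ) = <ψ(θ)|H|ψ(θ)>: with g(θ) = RY(θ) applied to |0>
    # and H = Z, the expectation value is exactly cos(θ).
    return np.cos(theta)

def learner(W, phi):
    # h_W(φ): a single scalar weight mapping the task descriptor φ to the
    # circuit parameter θ (the paper uses a multilayer network instead).
    return W * phi

alpha = 0.05       # learning rate α
W = rng.normal()   # 1: randomly initialize W
eps = 1e-5         # finite-difference step (stands in for parameter-shift)

for step in range(200):                        # 2: while not done do
    tasks = rng.uniform(0.5, 1.5, size=4)      # 3: sample batch T_i ~ p(T)
    grad = 0.0
    for phi in tasks:                          # 4: for all T_i do
        f = lambda w: cost(learner(w, phi))    # 5-7: θ = h_W(φ), evaluate l_{T_i}(θ)
        grad += (f(W + eps) - f(W - eps)) / (2 * eps)   # 8: ∇_W l_{T_i}
    W -= alpha * grad                          # 10: gradient step on Σ_i l_{T_i}

# The trained Learner now proposes an initial θ for an unseen task descriptor,
# which should already sit near the cost minimum (cos θ close to -1).
theta = learner(W, 1.0)
```

The point of the sketch is the control flow: the gradient is taken with respect to the Learner's weights W through the circuit parameters, not with respect to θ directly.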
Open Source Code: No. The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository.
Open Datasets: No. The paper describes generating data for the Heisenberg XYZ Hamiltonian and Molecule Hamiltonian using specific conditions and the Pennylane function `qml.molecular_hamiltonian`, but it does not provide concrete access information (link, DOI, repository) to a pre-existing, publicly available dataset that was used directly.
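For context on what "generating data" means here, a Heisenberg XYZ Hamiltonian of the kind the paper optimizes can be assembled explicitly as a matrix. The sketch below does this with NumPy Kronecker products; the chain length (3 qubits) and coupling values (Jx, Jy, Jz) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def two_site(op, i, n):
    """Tensor `op` onto neighboring sites i and i+1 of an n-qubit chain,
    with identities everywhere else."""
    mats = [I] * n
    mats[i] = op
    mats[i + 1] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_xyz(n, Jx, Jy, Jz):
    """H = sum_i Jx X_i X_{i+1} + Jy Y_i Y_{i+1} + Jz Z_i Z_{i+1}."""
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):
        H += (Jx * two_site(X, i, n)
              + Jy * two_site(Y, i, n)
              + Jz * two_site(Z, i, n))
    return H

H = heisenberg_xyz(n=3, Jx=1.0, Jy=0.8, Jz=0.5)
# Exact ground-state energy: the target a VQE-style optimizer tries to reach.
ground_energy = np.linalg.eigvalsh(H).min()
```

Varying (Jx, Jy, Jz) over some range is one way a family of tasks for meta-training could be produced; the paper's exact sampling conditions are not reproduced here.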
Dataset Splits: No. The paper states: "We randomly select 16 samples to check the adaptation performance of Q-MAML." for Heisenberg and "We randomly select 6 samples (which is 10% of the total dataset) to evaluate the performance of adaptation." for Molecule Hamiltonian. This describes the selection of samples for evaluation but does not specify explicit train/test/validation splits for the meta-training process itself, with percentages or exact counts for the entire dataset.
Hardware Specification: No. The paper mentions "quantum computing resources" and "quantum processor" but does not provide specific hardware details (such as exact GPU/CPU models, memory amounts, or detailed machine specifications) used for running the classical components of its experiments, such as training the Learner.
Software Dependencies: No. The paper mentions using the Pennylane function `qml.molecular_hamiltonian` and the Adam optimizer but does not provide specific version numbers for these or any other software components.
Experiment Setup: Yes. Throughout the experiments, the Learner is trained for 30 epochs using the Adam optimizer with a learning rate of 0.001. For the adaptation phase, the PQC is trained for 2000 iterations by the Adam optimizer with 0.001 as the learning rate. Four commonly used initialization methods serve as baselines: Zero initialization, π initialization, reduced-domain Uniform initialization (Wang et al. 2023), and Gaussian initialization (Zhang et al. 2022). The Zero and π methods set all parameters to zero and π, respectively. Parameters for reduced-domain Uniform initialization are sampled from U[−απ, απ], where α is set to 0.05 to ensure sufficiently small values are sampled; Gaussian initialization samples from N(0, γ²) with γ² = 1/(4S(L+2)), where S is the number of non-identity Pauli operators in the observable and L is the number of layers in the circuit, following the setting of Zhang et al. (2022). The Learner is constructed from one input layer of size 3, two hidden layers with 256 neurons each, and an output layer whose size equals the number of parameters in the PQC. The PQC consists of 7 layers of the Strongly Entangling Layer (Schuld et al. 2020).
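The four baseline initializers quoted above can be sketched directly in NumPy. The α = 0.05 bound and the variance γ² = 1/(4S(L+2)) follow the quoted setup; the parameter count (84, i.e. 7 layers × 4 qubits × 3 angles) and S = 3 are illustrative assumptions, since the paper's exact circuit width and observable are not restated in this row.

```python
import numpy as np

rng = np.random.default_rng(42)

def zero_init(n_params):
    # Zero initialization: every parameter set to 0.
    return np.zeros(n_params)

def pi_init(n_params):
    # π initialization: every parameter set to π.
    return np.full(n_params, np.pi)

def reduced_uniform_init(n_params, alpha=0.05):
    # Reduced-domain Uniform (Wang et al. 2023): θ ~ U[-απ, απ],
    # with α = 0.05 keeping the sampled values small.
    return rng.uniform(-alpha * np.pi, alpha * np.pi, n_params)

def gaussian_init(n_params, S, L):
    # Gaussian (Zhang et al. 2022): θ ~ N(0, γ²) with γ² = 1/(4S(L+2)),
    # where S = number of non-identity Pauli operators in the observable
    # and L = number of circuit layers.
    gamma2 = 1.0 / (4 * S * (L + 2))
    return rng.normal(0.0, np.sqrt(gamma2), n_params)

# Example draw for an assumed 84-parameter, 7-layer circuit with S = 3.
n_params = 84
theta = gaussian_init(n_params, S=3, L=7)
```

Against these four baselines, Q-MAML's Learner replaces the random draw with a learned mapping from the task descriptor to the initial parameters.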