Predictive Performance of Deep Quantum Data Re-uploading Models

Authors: Xin Wang, Hanxiao Tao, Rebing Wu

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "These findings are validated through experiments on both synthetic linearly separable datasets and real-world datasets. Our results demonstrate that when processing high-dimensional data, the quantum data re-uploading models should be designed with wider circuit architectures rather than deeper and narrower ones."
Researcher Affiliation | Academia | "Department of Automation, Tsinghua University, Beijing, China. Correspondence to: Rebing Wu <EMAIL>."
Pseudocode | No | The paper describes methodologies and theoretical findings using mathematical formulations and diagrams of quantum circuits (e.g., Fig. 2, Fig. 3, Fig. G.1), but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code for the described methodology, nor does it provide links to any code repositories.
Open Datasets | Yes | Datasets: CIFAR-10-Gray (airplane/automobile classes, grayscale, 12×12 pixels), CIFAR-10-RGB (airplane/automobile classes, RGB, 12×12 pixels), MNIST (digits 0/1, 12×12 pixels). LeCun, Y. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
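The paper reports 12×12-pixel inputs but does not describe how the 32×32 CIFAR-10 (or 28×28 MNIST) images were reduced to that size. A minimal sketch of one plausible preprocessing step, assuming ITU-R BT.601 luminance weights for the grayscale conversion and nearest-neighbor downsampling (both assumptions, not confirmed by the paper):

```python
import numpy as np

def to_gray(rgb):
    # ITU-R BT.601 luminance weights — an assumption; the paper does not
    # specify its grayscale conversion.
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nn(img, out=12):
    # Nearest-neighbor downsampling to out x out — a stand-in; the actual
    # resizing method is not described in the paper.
    h, w = img.shape
    rows = np.arange(out) * h // out
    cols = np.arange(out) * w // out
    return img[np.ix_(rows, cols)]

# Example: a dummy 32x32 RGB image (CIFAR-10 shape) down to 12x12 grayscale.
rgb = np.random.default_rng(0).random((32, 32, 3))
small = resize_nn(to_gray(rgb))
print(small.shape)  # (12, 12)
```

Any interpolating resize (bilinear, area-average) would serve equally well here; the choice matters little at this scale but should be fixed for reproducibility.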
Dataset Splits | Yes | "The training set contains 2,000 samples, and the test set contains 1,000,000 samples (see App. F)." "In numerical experiments, the training set contains 600 samples and the test set contains 10,000 samples." "The training set contains 600 samples and the test set contains 1,000 samples per class."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer for training but does not specify any software names with version numbers (e.g., Python, TensorFlow, PyTorch versions).
Experiment Setup | Yes | "With normally distributed initial parameters, we trained models using cross-entropy loss with the Adam optimizer (learning rate = 0.005) over 1,000 epochs (batch size = 200), selecting the parameters with the lowest training error for testing. The experiments were repeated 10 times with randomly initialized parameters."
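Since no code accompanies the paper, the reported training recipe (cross-entropy loss, Adam at learning rate 0.005, 1,000 epochs, batch size 200, normally distributed initial parameters, keeping the lowest-training-error parameters) can be sketched with a classical logistic-regression surrogate in place of the quantum re-uploading model. Everything except the hyperparameters themselves — the synthetic data, the surrogate model, and the hand-rolled Adam — is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linearly separable data — a stand-in for the paper's datasets.
n, d = 600, 20
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Normally distributed initial parameters, as in the paper.
w = rng.normal(size=d) * 0.1
m, v = np.zeros(d), np.zeros(d)          # Adam moment estimates
lr, b1, b2, eps = 0.005, 0.9, 0.999, 1e-8
batch, epochs = 200, 1000

best_w, best_err, t = w.copy(), np.inf, 0
for epoch in range(epochs):
    perm = rng.permutation(n)
    for i in range(0, n, batch):
        idx = perm[i:i + batch]
        p = sigmoid(X[idx] @ w)
        g = X[idx].T @ (p - y[idx]) / len(idx)  # gradient of cross-entropy
        t += 1
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        w -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
    # Keep the parameters with the lowest training error, as the paper does.
    err = np.mean((sigmoid(X @ w) > 0.5) != y)
    if err < best_err:
        best_err, best_w = err, w.copy()

print(f"best training error: {best_err:.3f}")
```

Repeating this loop 10 times with fresh random initializations, as the paper reports, would then give the mean and spread of the final accuracies.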