Quantum Mathematics in Artificial Intelligence
Authors: Dominic Widdows, Kirsty Kitto, Trevor Cohen
JAIR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The best performance for each of the four sets was obtained by a variant of the QLM, which outperformed a unigram language model baseline and achieved statistically significant improvements over the then state-of-the-art Markov Random Field (MRF) approach (Metzler & Croft, 2005) on the two larger web-based sets, with relative increases in MAP of 5.5% and 5.2% for the best-performing QLM over the best-performing MRF. |
| Researcher Affiliation | Collaboration | Dominic Widdows, IonQ, Inc., 4505 Campus Drive, College Park, MD 20740, USA; Kirsty Kitto, University of Technology Sydney, PO Box 123, Broadway, NSW 2007, Australia; Trevor Cohen, University of Washington, Box 358047, 850 Republican St, Seattle, WA 98109, USA |
| Pseudocode | No | The paper describes algorithms and mathematical techniques conceptually and refers to existing implementations, but does not include any pseudocode or algorithm blocks within its text. |
| Open Source Code | No | The paper discusses various methods and their applications, including references to implementations by other research groups. However, it does not provide any statement regarding the open-sourcing of code specifically for the methodology described in this paper, nor does it provide links to any repositories. |
| Open Datasets | Yes | Grefenstette and Sadrzadeh (2011) provided an implementation and evaluation of this approach, deriving word representations from the British National Corpus. |
| Dataset Splits | No | The paper mentions the use of '450 queries drawn from across four information retrieval evaluation sets' for evaluation in Section 3.3, and deriving 'word representations from the British National Corpus' in Section 5.3, but it does not specify any training, validation, or test dataset splits for any experiments. |
| Hardware Specification | Yes | In work recently reported by Abbas, Sutter, Zoufal, Lucchi, Figalli, and Woerner (2021), an actual quantum neural network is trained on the 27-qubit ibmq_montreal hardware and shown to learn faster and more effectively than a classical network (as measured by Fisher information and effective dimension), with as much as a 250% improvement over classical training. |
| Software Dependencies | No | The paper mentions software such as TensorFlow in Sections 5.1 and 5.4, but it does not provide specific version numbers for any software dependencies used in experiments or for reproducing the described methodologies. |
| Experiment Setup | No | The paper discusses theoretical foundations and reviews experimental results from other research. However, it does not provide specific experimental setup details such as hyperparameter values, model initialization, or training schedules for any experiments conducted by the authors. |