Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Universal Approximation Theorems for Differentiable Geometric Deep Learning
Authors: Anastasis Kratsios, Léonie Papon
JMLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | This paper addresses the growing need to process non-Euclidean data by introducing a geometric deep learning (GDL) framework for building universal feedforward-type models compatible with differentiable manifold geometries. The authors show that their GDL models can approximate any continuous target function uniformly on compact sets of a controlled maximum diameter. They obtain curvature-dependent lower bounds on this maximum diameter and upper bounds on the depth of the approximating GDL models. Their last main result identifies data-dependent conditions guaranteeing that the GDL model implementing the approximation breaks the curse of dimensionality. The paper is structured around theorems, lemmas, and proofs, indicating a theoretical focus. For example, Section 3 is titled 'Main Results on GDNs', and Appendix B is titled 'Proofs'. |
| Researcher Affiliation | Academia | Anastasis Kratsios, Department of Mathematics, McMaster University, 1280 Main Street West, Hamilton, Ontario, L8S 4K1, Canada. Léonie Papon, Department of Mathematical Sciences, Durham University, Upper Mountjoy Campus, Stockton Rd, Durham DH1 3LE, United Kingdom. Both authors are affiliated with universities. |
| Pseudocode | No | The paper includes several figures (e.g., Figure 1: Lifting Euclidean learning models to non-Euclidean input/output spaces; Figure 2: Visual representation of GDNs) that illustrate concepts and computational graphs, but it does not contain any sections explicitly labeled 'Pseudocode' or 'Algorithm', nor are there structured code-like procedures presented in the text. |
| Open Source Code | No | The paper does not contain an unambiguous statement from the authors about releasing code for the methodology described, nor does it provide any direct links to a source-code repository. |
| Open Datasets | No | The paper discusses 'efficient datasets' and 'real-world datasets' in a theoretical context, but it does not perform experiments on specific named datasets or provide concrete access information (links, DOIs, repositories, or citations) for any publicly available or open dataset. |
| Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with specific datasets. Therefore, it does not provide any information regarding training, testing, or validation dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not report on experimental results that would require specific hardware. No GPU models, CPU models, or other hardware specifications are mentioned. |
| Software Dependencies | No | The paper focuses on theoretical contributions and does not describe an implementation or experiments requiring specific software dependencies with version numbers. While it cites TensorFlow and Theano, these mentions are in the context of general deep learning frameworks or related work, not as dependencies for the authors' own described methodology. |
| Experiment Setup | No | The paper is theoretical and does not describe any empirical experiments. Therefore, it does not include details about an experimental setup, such as hyperparameter values, model initialization, or training schedules. |