Compact Matrix Quantum Group Equivariant Neural Networks

Authors: Edward Pearce-Crump

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We note that our contributions are primarily theoretical in nature: to demonstrate the practical potential of these neural networks, further work is needed to extend the characterisation that we have found for the easy compact matrix quantum groups to the equivariant non-linear layers so that they can be implemented.
Researcher Affiliation | Academia | Department of Computing, Imperial College London, United Kingdom. Correspondence to: Edward Pearce-Crump <EMAIL>.
Pseudocode | Yes | Procedure 1: How to Calculate the Weight Matrix of an Equivariant Linear Layer Function from ((C^n)^{⊗k}, u^{w_k}) to ((C^n)^{⊗l}, u^{w_l}) for an Easy Compact Matrix Quantum Group (G(n), u). Assume that K(n) is the two-coloured category of partitions that defines the easy compact matrix quantum group (G(n), u), as in Theorem 7.10. Perform the following steps: 1. Calculate all of the two-coloured (w_k, w_l)-partition diagrams d_π that live in K(n). These diagrams form a basis of the morphism space Hom_{K(n)}(w_k, w_l). 2. Apply the function d_π ↦ D_π to each diagram to obtain its associated spanning set matrix D_π. 3. Attach a weight w_π ∈ C to each matrix D_π. 4. Finally, calculate Σ_π w_π D_π to give the overall weight matrix.
Open Source Code | No | As future work, we suggest that it would be good to extend this characterisation to the equivariant non-linear layers, which would enable us to demonstrate in practice what these neural networks promise in theory.
Open Datasets | No | The paper is primarily theoretical, focusing on mathematical derivations and characterizations of neural networks for non-commutative geometries, rather than empirical evaluation using specific datasets.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments with datasets; therefore no dataset split information is provided.
Hardware Specification | No | The paper presents theoretical work and does not describe any experimental setup or hardware used for computation.
Software Dependencies | No | The paper focuses on theoretical mathematical derivations and does not include details on software dependencies or versions for implementation.
Experiment Setup | No | The paper is entirely theoretical, describing the derivation and characterization of neural networks; therefore, no experimental setup details, hyperparameters, or training configurations are present.
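The final step of Procedure 1 (forming the weight matrix as the linear combination Σ_π w_π D_π) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: it assumes the spanning set matrices D_π have already been computed from the partition diagrams (steps 1–2), and the function name, the toy 2×2 matrices, and the weight values are all illustrative choices, not from the paper.

```python
import numpy as np

def equivariant_weight_matrix(spanning_matrices, weights):
    """Step 4 of Procedure 1 (sketch): combine the spanning set matrices
    D_pi with their attached weights w_pi into the overall weight matrix
    W = sum_pi w_pi * D_pi."""
    if len(spanning_matrices) != len(weights):
        raise ValueError("need exactly one weight per spanning set matrix")
    # Linear combination of the basis matrices with complex/real weights.
    return sum(w * D for w, D in zip(weights, spanning_matrices))

# Toy illustration with two hypothetical 2x2 spanning set matrices.
D1 = np.eye(2)         # stand-in for one spanning set matrix
D2 = np.ones((2, 2))   # stand-in for another
W = equivariant_weight_matrix([D1, D2], [0.5, 2.0])
```

In a trainable layer the weights w_π would be the learnable parameters, while the matrices D_π are fixed by the category of partitions K(n).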