Permutation Equivariant Neural Networks for Symmetric Tensors
Authors: Edward Pearce-Crump
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 8. Numerical Experiments: We demonstrate our characterisation on the following toy experiments. S12-Invariant Task: We evaluate our model on a synthetic S12-invariant task given by the function f(T) := Σ_{i,j=1}^{12} T_{i,j,i}, where T is a 3-order symmetric tensor. We demonstrate the high data efficiency of our model compared with a standard MLP for this task, as shown in Figure 3. ... S8-Equivariant Task: We evaluate our model on a synthetic S8-equivariant task from (R^8)^⊗3 to R^8: namely, to extract the diagonal from 8 × 8 × 8 symmetric tensors. We evaluate our model against a standard MLP and a standard S8-equivariant model from (R^8)^⊗3 to R^8. We show the Test Mean Squared Error (MSE) for each of these models in Table 1. |
| Researcher Affiliation | Academia | 1Department of Mathematics, Imperial College London, United Kingdom. Correspondence to: Edward Pearce-Crump <EMAIL>. |
| Pseudocode | Yes | Procedure: Generation of the Transformation Map Labels That Describe the Transformation Corresponding to a (k, l) Bipartition Diagram of a Symmetric Tensor in (R^n)^⊗k to a Symmetric Tensor in (R^n)^⊗l. Input: A (k, l) bipartition diagram d_π and a value of n. 1. Apply Subprocedure I to obtain all possible grouped outputs for d_π, and for each one, its associated set of partially labelled diagrams. ... Output: A set of transformation map labels describing the transformation D_π of a symmetric tensor T ∈ (R^n)^⊗k to a symmetric tensor D_π(T) ∈ (R^n)^⊗l for the given value of n, where D_π corresponds to the (k, l) bipartition diagram d_π. |
| Open Source Code | Yes | 1The code is available at https://github.com/epearcecrump/symmetrictensors. |
| Open Datasets | No | S12-Invariant Task: We randomly generated a synthetic data set consisting of 5000 symmetric tensors, split into 90% training and 10% test. ... S8-Equivariant Task: We randomly generated a synthetic data set consisting of 10000 symmetric tensors, split into 90% training and 10% test. The paper uses synthetically generated data and does not provide concrete access information for a publicly available or open dataset. |
| Dataset Splits | Yes | S12-Invariant Task: We randomly generated a synthetic data set consisting of 5000 symmetric tensors, split into 90% training and 10% test. ... S8-Equivariant Task: We randomly generated a synthetic data set consisting of 10000 symmetric tensors, split into 90% training and 10% test. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | Both models were optimised with stochastic gradient descent with a learning rate of 0.0001. We trained both models for 50 epochs with a batch size of 50. |
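The quoted S12-invariant toy target, f(T) := Σ_{i,j=1}^{12} T_{i,j,i} on a 3-order symmetric tensor, is easy to reproduce independently. The sketch below is an illustrative reconstruction, not the authors' code; the helper names `symmetrize` and `f` are my own, and the invariance check at the end confirms that f is unchanged under a simultaneous permutation of all three tensor axes:

```python
import numpy as np
from itertools import permutations

def symmetrize(t):
    # Average a 3-order tensor over all 6 axis permutations to make it symmetric
    return sum(np.transpose(t, p) for p in permutations(range(3))) / 6.0

def f(t):
    # S12-invariant target quoted from the paper: f(T) = sum_{i,j} T_{i,j,i}
    n = t.shape[0]
    return sum(t[i, j, i] for i in range(n) for j in range(n))

rng = np.random.default_rng(0)
T = symmetrize(rng.standard_normal((12, 12, 12)))
perm = rng.permutation(12)
T_perm = T[np.ix_(perm, perm, perm)]  # permute all three axes simultaneously
assert np.isclose(f(T), f(T_perm))   # f is S12-invariant
```

The sum can equivalently be written as `np.einsum('iji->', T)`, which traces out the repeated first and third indices.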
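The S8-equivariant toy target (extracting the diagonal of an 8 × 8 × 8 symmetric tensor) can likewise be checked for equivariance: permuting the tensor's axes permutes the extracted diagonal in the same way. This is a minimal sketch, again with my own helper name `diag3`, not the paper's implementation:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)

# Build a symmetric 8 x 8 x 8 tensor by averaging over axis permutations
A = rng.standard_normal((8, 8, 8))
T = sum(np.transpose(A, p) for p in permutations(range(3))) / 6.0

def diag3(t):
    # Equivariant target quoted from the paper: map (R^8)^{otimes 3} -> R^8, T -> (T_iii)_i
    n = t.shape[0]
    return np.array([t[i, i, i] for i in range(n)])

perm = rng.permutation(8)
# Equivariance: diag of the permuted tensor equals the permuted diag
assert np.allclose(diag3(T[np.ix_(perm, perm, perm)]), diag3(T)[perm])
```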
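The Experiment Setup row pins down the optimisation settings (SGD, learning rate 0.0001, 50 epochs, batch size 50) and the Dataset Splits row the 90%/10% split. A minimal training-loop sketch with those exact settings follows; the model here is a placeholder linear least-squares regressor, not the paper's architecture, and the data is stand-in Gaussian noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5000, 20
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)

# 90% training / 10% test split, as stated in the paper
split = int(0.9 * n)
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

w = np.zeros(d)
lr, epochs, batch = 0.0001, 50, 50  # settings quoted from the paper
for _ in range(epochs):
    idx = rng.permutation(split)
    for s in range(0, split, batch):
        b = idx[s:s + batch]
        # Mini-batch gradient of the mean squared error
        grad = 2 * X_tr[b].T @ (X_tr[b] @ w - y_tr[b]) / len(b)
        w -= lr * grad

test_mse = float(np.mean((X_te @ w - y_te) ** 2))
```

With this learning rate the loss decreases slowly, which is consistent with the paper reporting results only after the full 50 epochs.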