Equivariant Mesh Attention Networks

Authors: Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen

TMLR 2022

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments on the FAUST and TOSCA datasets confirm that our proposed architecture achieves improved performance on these benchmarks and is indeed equivariant, and therefore robust, to a wide variety of local/global transformations.
Researcher Affiliation Collaboration Sourya Basu EMAIL University of Illinois at Urbana-Champaign, USA Jose Gallego-Posada EMAIL Mila and DIRO, Université de Montréal, Canada Francesco Viganò EMAIL Imperial College London, UK James Rowbottom EMAIL Independent Scholar Taco Cohen EMAIL Qualcomm AI Research, The Netherlands
Pseudocode Yes Algorithm 1: Convolutional update in an Equivariant Mesh Attention Layer. Forward((f_p)_{p in M}, K_query, K_key(θ), K_value(θ)): Q_p ← K_query f_p; K_p ← Concatenate(K_key(θ_pq) ρ_in(g_{q→p}) f_q for q in N_p); V_p ← Concatenate(K_value(θ_pq) ρ_in(g_{q→p}) f_q for q in N_p); f′_p ← Σ_{q in N_p} V_p · softmax(K_pᵀ Q_p / C_att)
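The update in Algorithm 1 can be sketched in plain numpy. This is a simplified illustration, not the authors' implementation: the angle-dependent kernels K_key(θ), K_value(θ) and the gauge transporter ρ_in(g_{q→p}) are collapsed into ordinary linear maps (`W_key`, `W_value`), and the normalization constant C_att is taken to be sqrt(d_k); the function names and the `neighbors` adjacency structure are hypothetical.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mesh_attention_update(f, neighbors, W_query, W_key, W_value):
    """One attention-weighted convolutional update per mesh vertex.

    f: (num_vertices, d_in) feature array.
    neighbors: dict mapping vertex index p -> list of neighbor indices N_p.
    The kernels K_key(theta_pq), K_value(theta_pq) and the transport
    rho_in(g_{q->p}) of Algorithm 1 are simplified here to the fixed
    linear maps W_key, W_value applied to each neighbor feature.
    """
    out = np.zeros((f.shape[0], W_value.shape[0]))
    for p, nbrs in neighbors.items():
        q_feats = f[nbrs]                    # (|N_p|, d_in) neighbor features
        query = W_query @ f[p]               # Q_p, shape (d_k,)
        keys = q_feats @ W_key.T             # K_p rows, shape (|N_p|, d_k)
        values = q_feats @ W_value.T         # V_p rows, shape (|N_p|, d_out)
        # attention weights over the neighborhood, scaled by sqrt(d_k)
        att = softmax(keys @ query / np.sqrt(W_key.shape[0]))
        out[p] = att @ values                # f'_p: weighted sum over N_p
    return out
```

In the actual EMAN layer the per-neighbor maps depend on the local polar angle θ_pq and on parallel transport, which is what makes the update gauge-equivariant; this sketch only shows the attention-aggregation skeleton.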
Open Source Code Yes Our code is available at: https://github.com/gallego-posada/eman
Open Datasets Yes We carry out experiments on the FAUST (Bogo et al., 2014) and TOSCA (Bronstein et al., 2008) datasets for segmentation and classification tasks, respectively.
Dataset Splits Yes The FAUST dataset consists of 100 (80 training, 20 test) 3-dimensional human meshes with 6890 vertices each. TOSCA consists of meshes belonging to nine different classes such as cats, men, women, centaurs, etc. While figures in each class are similarly meshed, each class has a varying number of nodes and edges. The dataset consists of 80 meshes, which we uniformly split into a train set of 63 meshes and a test set of 17 meshes.
Hardware Specification No The paper mentions "HAL (Kindratenko et al., 2020) and Mila compute clusters" but does not provide specific hardware details like GPU/CPU models, memory, or processor types.
Software Dependencies No The paper mentions "Adam optimizer Kingma & Ba (2015)" but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup Yes We train using a learning rate of 0.01 for 100 epochs for FAUST segmentation tasks. In the case of TOSCA, we train for 50 epochs and use a learning rate of 2 × 10⁻³ for GEM-CNN models, and 7 × 10⁻⁴ for EMAN models. All tasks use the Adam optimizer (Kingma & Ba, 2015) and the negative log-likelihood loss function. The output of the first layer is also passed through ReLU (Glorot et al., 2011) and a dropout layer with rate 0.5 (Srivastava et al., 2014).
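The reported hyperparameters and the two stated training components (NLL loss, dropout with rate 0.5) can be made concrete in a small numpy sketch. This is an illustration under assumptions, not the authors' training code: the dictionary keys and function names are hypothetical, and Adam itself is omitted.

```python
import numpy as np

# Hyperparameters as reported in the paper; all runs use Adam and NLL loss.
HPARAMS = {
    "faust":        {"lr": 1e-2, "epochs": 100},
    "tosca_gemcnn": {"lr": 2e-3, "epochs": 50},
    "tosca_eman":   {"lr": 7e-4, "epochs": 50},
}

def dropout(x, p=0.5, rng=None, training=True):
    # inverted dropout: zero each activation with probability p and
    # rescale the survivors by 1/(1-p) so the expected value is unchanged
    if not training:
        return x
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def nll_loss(log_probs, targets):
    # mean negative log-likelihood, given per-class log-probabilities
    # log_probs: (batch, num_classes), targets: (batch,) integer labels
    return -log_probs[np.arange(len(targets)), targets].mean()
```

At evaluation time `training=False` makes dropout the identity, matching the standard inverted-dropout convention.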