On the Fourier analysis in the SO(3) space : the EquiLoPO Network

Authors: Dmitrii Zhemchuzhnikov, Sergei Grudinin

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A comprehensive evaluation on diverse 3D medical imaging datasets from MedMNIST3D demonstrates the effectiveness of our approach, which consistently outperforms the state of the art. This work suggests the benefits of true rotational equivariance on SO(3) and flexible unconstrained filters enabled by the local activation function, providing a flexible framework for equivariant deep learning on volumetric data with potential applications across domains. Our code is publicly available at https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/EquiLoPO.
Researcher Affiliation | Collaboration | Dmitrii Zhemchuzhnikov¹,² and Sergei Grudinin¹. ¹Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, 38000 Grenoble, France; ²AIRI. EMAIL, EMAIL
Pseudocode | Yes | Figure K1: Schematic representation of the EquiLoPO ResNet-18 architecture, with a sequence of operations in the Initial and Basic blocks. Table K2: Sequence of operations in the Initial convolutional block of EquiLoPO ResNet-18. Table K3: Sequence of operations in the Basic block of EquiLoPO ResNet-18.
Open Source Code | Yes | Our code is publicly available at https://gricad-gitlab.univ-grenoble-alpes.fr/GruLab/ILPO/-/tree/main/EquiLoPO.
Open Datasets | Yes | A comprehensive evaluation on diverse 3D medical imaging datasets from MedMNIST3D demonstrates the effectiveness of our approach... For this, we used MedMNIST v2, a vast MNIST-like collection of standardized biomedical images (Yang et al., 2023).
Dataset Splits | Yes | We used the train-validation-test split provided by the authors of the dataset (in a 7:1:2 proportion).
Hardware Specification | No | The paper reports memory and time consumption metrics but does not specify the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions 'Python' and 'C++' as implementation languages and the 'ADAM optimizer', but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The learning rate of the optimizer and the dropout rate are hyperparameters. All models are trained for 100 epochs. Table K1 lists the hyperparameters optimized for validation-data performance. Table K1: Optimal hyperparameters for the trained networks: learning rate (lr) and dropout rate (dr).
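For illustration, the reported 7:1:2 train-validation-test proportion can be sketched as below. This is a hypothetical stand-in: MedMNIST v2 ships its own official split files, so the index arithmetic here only demonstrates the ratio, not the authors' actual partition.

```python
# Hypothetical 7:1:2 split by consecutive index ranges.
# MedMNIST v2 provides an official split; this only illustrates the proportions.

def split_indices(n, ratios=(7, 1, 2)):
    """Partition range(n) into consecutive chunks proportional to `ratios`."""
    total = sum(ratios)
    train_end = n * ratios[0] // total
    val_end = train_end + n * ratios[1] // total
    idx = list(range(n))
    return idx[:train_end], idx[train_end:val_end], idx[val_end:]

train, val, test = split_indices(1000)
print(len(train), len(val), len(test))  # 700 100 200
```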