Group Downsampling with Equivariant Anti-aliasing

Authors: Md Ashiqur Rahman, Raymond A. Yeh

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conducted experiments on image classification tasks demonstrating that the proposed downsampling operation improves accuracy, better preserves equivariance, and reduces model size when incorporated into G-equivariant networks."
Researcher Affiliation | Academia | "Md Ashiqur Rahman, Department of Computer Science, Purdue University (EMAIL); Raymond A. Yeh, Department of Computer Science, Purdue University (EMAIL)"
Pseudocode | Yes | "Algorithm 1: Uniform group subsampling; Algorithm 2: Check-Compliance; Algorithm 3: General-Subsample"
Open Source Code | Yes | "A7 ADDITIONAL IMPLEMENTATION DETAILS. Code is also provided in the supplemental materials."
Open Datasets | Yes | "Second, we conduct experiments on the MNIST and CIFAR-10 datasets to evaluate the performance of the proposed downsampling layer on image classification tasks over different symmetries. ... rotated MNIST (Deng, 2012) and CIFAR10 (Krizhevsky et al., 2009). ... We provide additional results of our model on the STL-10 (Coates et al., 2011) dataset."
Dataset Splits | Yes | "For MNIST and CIFAR-10, we train on 5k and 60k training images, and test on images under different levels of transformations (see A7 for details). For MNIST, we train on 5,000 training images without any data augmentation and test on 10,000 images under different levels of transformations. For CIFAR-10, we train on 60K images without any data augmentation and evaluate on 10K images."
Hardware Specification | Yes | "All the experiments are run on a single NVIDIA RTX 6000 GPU."
Software Dependencies | No | The paper mentions the "Adam optimizer" and "Sequential Least Squares Programming (Kraft, 1988)" but does not specify version numbers for any programming languages, libraries, or software packages used in the implementation.
Experiment Setup | Yes | "Models are optimized using the Adam optimizer and trained for 15 and 50 epochs with batch sizes of 128 and 256 for the MNIST and CIFAR-10 datasets, respectively. ... We set λ = 5 in Eq. (14) for obtaining M."
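The paper's Algorithm 1 ("Uniform group subsampling") is not reproduced in this report. As a rough illustration only of what uniform subsampling of a discrete group looks like in the simplest case, the sketch below keeps every k-th element of a cyclic rotation group C_n; the function name and interface are ours, and this is not the authors' algorithm.

```python
def uniform_subsample_cyclic(n: int, k: int) -> list[int]:
    """Indices of the subgroup C_{n/k} inside the cyclic group C_n.

    Illustrative sketch only (not the paper's Algorithm 1). Assumes
    k divides n, so the kept elements are closed under composition
    (rotation-index addition mod n) and form a genuine subgroup.
    """
    if n % k != 0:
        raise ValueError("subsampling factor must divide the group order")
    return list(range(0, n, k))

# Keeping every 2nd rotation of C_8 yields the subgroup C_4.
print(uniform_subsample_cyclic(8, 2))  # → [0, 2, 4, 6]
```

The divisibility check matters: uniformly keeping every k-th element when k does not divide n would not produce a subgroup, which is why subsampling factors are constrained to divisors of the group order.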
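The hyperparameters quoted in the Dataset Splits and Experiment Setup rows can be collected into one place. The sketch below is ours, not the authors' code: the function name and dictionary keys are hypothetical, and only the values (train sizes, epochs, batch sizes, Adam, λ = 5, no augmentation) come from the paper.

```python
def training_config(dataset: str) -> dict:
    """Per-dataset training settings reported in the paper.

    Sketch only: the optimizer is Adam and "lambda_mask" is the
    λ = 5 parameter from Eq. (14); key names are ours.
    """
    per_dataset = {
        "MNIST":    {"train_size": 5_000,  "epochs": 15, "batch_size": 128},
        "CIFAR-10": {"train_size": 60_000, "epochs": 50, "batch_size": 256},
    }
    cfg = dict(per_dataset[dataset])
    cfg.update({"optimizer": "Adam", "lambda_mask": 5, "augmentation": None})
    return cfg

print(training_config("MNIST")["batch_size"])  # → 128
```

A helper like this makes the reported setup checkable at a glance, e.g. that CIFAR-10 uses the larger batch size (256) and longer schedule (50 epochs).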