Relaxed Rotational Equivariance via G-Biases in Vision
Authors: Zhiqiang Wu, Yingjie Liu, Licheng Sun, Jian Yang, Hanlin Dong, Shing-Ho J. Lin, Xuan Tang, Jinpeng Mi, Bo Jin, Xian Wei
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the efficiency of RREConv, we conduct extensive ablation experiments on the discrete rotational group Cn. Experiments demonstrate that the proposed RREConv-based methods achieve excellent performance compared to existing GConv-based methods in both classification and 2D object detection tasks on the natural image datasets. |
| Researcher Affiliation | Academia | 1 Software Engineering Institute, East China Normal University; 2 School of Geospatial Information, Information Engineering University; 3 School of Artificial Intelligence, University of Chinese Academy of Sciences; 4 School of Communication and Electronic Engineering, East China Normal University; 5 Institute of Machine Intelligence, University of Shanghai for Science and Technology; 6 School of Computer Science and Technology, Tongji University |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, but it does not include a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | Code https://github.com/wuer5/rrenet |
| Open Datasets | Yes | We evaluate the proposed method on the CIFAR10 / 100, PASCAL VOC07+12, and MS COCO2017 datasets. |
| Dataset Splits | No | The paper mentions using well-known datasets such as CIFAR10/100, PASCAL VOC07+12, and MS COCO2017, but it does not explicitly state the training, validation, and test splits used for these datasets. It refers to 'default data augmentation settings' and states that 'all models are trained from scratch', but gives no split percentages or sample counts. |
| Hardware Specification | Yes | All the parameters are set the same, and all the experiments are conducted on the dual RTX-4090 GPUs. |
| Software Dependencies | No | All training in this paper is based on the famous engine library Ultralytics (Jocher, Qiu, and Chaurasia 2023). However, specific version numbers for Ultralytics or other libraries/frameworks like PyTorch are not provided. |
| Experiment Setup | Yes | All models are trained from scratch for 300 epochs for the 2D object detection tasks and 100 epochs for the classification tasks. We also default to using an SGD optimizer for both tasks with an initial learning rate of 0.01, a final learning rate of 0.01, a momentum of 0.937, a weight decay of 5e-4, a warmup epoch of 3, a warmup momentum of 0.8, and a warmup bias learning rate of 0.1. |
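For reference, the reported hyperparameters can be expressed as an Ultralytics-style training configuration. This is a hedged sketch: the argument names (`lr0`, `lrf`, etc.) follow Ultralytics conventions and are an assumption, since the paper does not list them explicitly; only the values are taken from the paper's stated setup.

```python
# Training hyperparameters as reported in the paper, expressed as an
# Ultralytics-style argument dictionary. Argument names are assumed
# from Ultralytics conventions; values come from the paper.
train_args = {
    "optimizer": "SGD",
    "lr0": 0.01,            # initial learning rate
    "lrf": 0.01,            # final learning rate (in Ultralytics, a fraction of lr0)
    "momentum": 0.937,
    "weight_decay": 5e-4,
    "warmup_epochs": 3,
    "warmup_momentum": 0.8,
    "warmup_bias_lr": 0.1,
}

# Epoch budget differs by task, per the paper: 300 epochs for
# 2D object detection, 100 epochs for classification.
epochs = {"detection": 300, "classification": 100}

print(train_args["optimizer"], epochs["detection"], epochs["classification"])
```

Notably, these values match the long-standing Ultralytics defaults for SGD training, which is consistent with the paper's statement that training is based on the Ultralytics engine library.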