Artificial Kuramoto Oscillatory Neurons
Authors: Takeru Miyato, Sindy Löwe, Andreas Geiger, Max Welling
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, calibrated uncertainty quantification, and reasoning. We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation, and in particular show the importance of dynamical representations. Code: https://github.com/autonomousvision/akorn. Project page: https://takerum.github.io/akorn_project_page/. 6 EXPERIMENTS 6.1 UNSUPERVISED OBJECT DISCOVERY Unsupervised object discovery is the task of finding objects in an image without supervision. Here, we test AKOrN on five synthetic datasets (Tetrominoes, dSprites, CLEVR (Kabra et al., 2019), Shapes, CLEVRTex (Karazija et al., 2021)) and two real image datasets (Pascal VOC (Everingham et al., 2010), COCO2017 (Lin et al., 2014)) (see Appendix C for details). |
| Researcher Affiliation | Academia | Takeru Miyato¹, Sindy Löwe², Andreas Geiger¹, Max Welling² — ¹University of Tübingen, Tübingen AI Center; ²University of Amsterdam |
| Pseudocode | No | The paper does not contain clearly labeled pseudocode or algorithm blocks. It describes methodologies using mathematical equations and block diagrams (e.g., Figure 2, Figure 11) but not structured pseudocode. |
| Open Source Code | Yes | Code: https://github.com/autonomousvision/akorn. Project page: https://takerum.github.io/akorn_project_page/. |
| Open Datasets | Yes | Here, we test AKOrN on five synthetic datasets (Tetrominoes, dSprites, CLEVR (Kabra et al., 2019), Shapes, CLEVRTex (Karazija et al., 2021)) and two real image datasets (Pascal VOC (Everingham et al., 2010), COCO2017 (Lin et al., 2014))... To test AKOrN's reasoning capability, we apply it on the Sudoku puzzle datasets (Wang et al., 2019; Palm et al., 2018). ... We test AKOrN's robustness to adversarial attacks and its uncertainty quantification performance on CIFAR10 and CIFAR10 with common corruptions (CC, Hendrycks & Dietterich (2019)). ... we first conduct pre-training on the Tiny-ImageNet (Le & Yang, 2015) dataset with the SimCLR loss for 50 epochs with a batch size of 512. |
| Dataset Splits | Yes | Tetrominoes: 60,000 train / 320 test; dSprites: 60,000 train / 320 test; CLEVR: 50,000 train / 320 test; Shapes: 40,000 train / 1,000 test; CLEVRTex: 40,000 train / 5,000 test (OOD: 10,000 test; CAMO: 2,000 test); ImageNet: 1,281,167 train; Pascal VOC: 1,449 test; COCO2017: 5,000 test; Sudoku ID (Wang et al., 2019): 9,000 train / 1,000 test; Sudoku OOD (Palm et al., 2018): 18,000 test. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for the experiments. It mentions training on ImageNet and CIFAR10 but without hardware specifications. |
| Software Dependencies | No | All models are trained with Adam (Kingma & Ba, 2015) without weight decay. ... Code 1: PyTorch code for up-tiling. The paper mentions using the Adam optimizer and provides a snippet of PyTorch code, implying the use of PyTorch. However, it does not specify version numbers for Python, PyTorch, or any other libraries. |
| Experiment Setup | Yes | All models are trained with Adam (Kingma & Ba, 2015) without weight decay. ... For the Tetrominoes, dSprites, and CLEVR datasets, we train single-block models with T = 8. ... All models including baseline models have roughly the same number of parameters and are trained with shared hyperparameters such as learning rates and training epochs. See Tabs. 9-11 for those hyperparameter details. Table 9: Experimental settings on Tetrominoes, dSprites, CLEVR, and Shapes (batch size 256, learning rate 0.001, #epochs 50/50/300/100 for the four datasets respectively). Table 10: Experimental settings on CLEVRTex and its variants (batch size 256, learning rate 0.0005, #epochs 500). Table 11: Experimental settings on ImageNet pretraining and on the Pascal VOC and COCO2017 evaluation (batch size 512, learning rate 0.0005, #epochs 400). Table 12: Sudoku puzzle datasets (batch size 100, learning rate 0.0005, #epochs 100). |
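The Experiment Setup row pins down the optimizer choice and per-dataset hyperparameters. As a hedged sketch of what that configuration might look like in PyTorch (the model here is a placeholder stand-in, not the paper's AKOrN architecture; only the Adam-without-weight-decay choice and the Table 9 values of batch size 256 and learning rate 0.001 come from the quoted text):

```python
import torch

# Placeholder model; the paper's AKOrN network is not reproduced here.
model = torch.nn.Linear(32, 10)

# Adam without weight decay, as quoted from the paper
# (Table 9 settings: batch size 256, learning rate 0.001).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=0.0)
batch_size = 256
```

Note that `weight_decay=0.0` is already the PyTorch default for Adam, so "without weight decay" needs no extra flags; it is spelled out here only to make the reviewed setting explicit.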
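For background on the paper's central primitive: AKOrN's neurons build on Kuramoto oscillator dynamics. The snippet below is a minimal NumPy sketch of the *classical* Kuramoto model, not the paper's actual layer, showing coupled phases synchronizing under sufficiently strong coupling (all parameter values are illustrative choices, not the paper's):

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt=0.01):
    """One Euler step of the classical Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    # Pairwise phase differences: entry (i, j) is theta_j - theta_i.
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + (K / n) * coupling)

def order_parameter(theta):
    # r in [0, 1]; r near 1 means the phases are synchronized.
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 32
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases
omega = rng.normal(0.0, 0.1, n)            # narrow spread of natural frequencies

r0 = order_parameter(theta)
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=2.0)
r1 = order_parameter(theta)
# With coupling well above the critical value, r grows toward 1.
```

The order parameter `r` is the standard diagnostic for synchrony in this model; the paper's contribution is to use such oscillatory units as neurons inside deep networks, which this classical sketch does not attempt to reproduce.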