Emergent Orientation Maps: Mechanisms, Coding Efficiency and Robustness
Authors: Haixin Zhong, Haoyu Wang, Wei Dai, Yuchao Huang, Mingyi Huang, Rubin Wang, Anna Roe, Yuguo Yu
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results identify critical factors, such as the degree of input visual field overlap, neuronal connection range, and the balance between localized connectivity and long-range competition, that determine the emergence of either salt-and-pepper or pinwheel-like topologies. Furthermore, we demonstrate that pinwheel structures exhibit lower wiring costs and enhanced sparse coding capabilities compared to salt-and-pepper organizations. They also maintain greater coding robustness against noise in naturalistic visual stimuli. |
| Researcher Affiliation | Academia | 1. Research Institute of Intelligent Complex Systems, Fudan University. 2. State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Institutes of Brain Science, Fudan University. 3. Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University. 4. Shanghai Artificial Intelligence Laboratory. 5. IDG/McGovern Institute for Brain Research, School of Medicine, Tsinghua University. 6. Tsinghua-Peking Joint Center for Life Sciences. 7. Institute for Cognitive Neurodynamics, East China University of Science and Technology. 8. MOE Frontier Science Center for Brain Science and Brain-machine Integration, School of Brain Science and Brain Medicine, Key Laboratory of Biomedical Engineering and Instrument Science, Zhejiang University. |
| Pseudocode | No | The paper describes the neural model and plasticity rules using mathematical equations in Section 2.2 and Appendix A.1, but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | To demonstrate the practical relevance of our model, we explore its capability as a front-end encoder for deep spiking neural networks. Specifically, we design a five-layer convolutional and pooling-based SNN to classify images from the Fashion MNIST dataset. |
| Dataset Splits | No | For the primary SESNN model, the paper states, "In each trial, all E-neurons process visual input from 100 random patches, each presented for 100 ms," but does not specify traditional training, validation, or test splits. For the Fashion MNIST experiment mentioned in Appendix A.7, the paper does not state split percentages or sample counts; the standard splits are presumably used, but this is never made explicit, which is insufficient for full reproducibility. |
| Hardware Specification | Yes | Simulations are executed on a high-performance system featuring an Intel Xeon Gold 6348 CPU (2.60 GHz), an NVIDIA A100 GPU, and 512 GB of memory. |
| Software Dependencies | Yes | The workflow was managed on Ubuntu 20.04.6 LTS, with computational tasks implemented in MATLAB R2023a and Python 3.9. |
| Experiment Setup | Yes | The hyperparameters of the SESNN model are as follows: the learning rates are ηFF = 0.2 (image to E-neuron), ηEE = 0.01 (E-to-E), ηEI = 0.7 (I-to-E), ηII = 1.5 (I-to-I), and ηIE = 0.7 (E-to-I), while the neural connectivity parameters are αmax,E = 1.0 (maximum E weight) and αmax,I = 0.5 (maximum I weight). These learning rate settings are crucial for stabilizing the training of the neural network. Specifically, setting a slower learning rate for E-E connections than for the others helps prevent overexcitation among E-neurons. This approach is consistent with empirical findings (Hofer et al., 2011; Holmgren et al., 2003; Sato et al., 2016). |
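For reference, the reported hyperparameters can be collected into a small configuration sketch. This is an illustrative reconstruction only: the dictionary names (`lr`, `w_max`) are assumptions, not taken from the authors' (unreleased) code, but the values and connection labels match the quoted setup.

```python
# Hypothetical configuration for the SESNN hyperparameters quoted above.
# Names are illustrative; values are the ones reported in the paper.
lr = {
    "FF": 0.2,   # image -> E-neuron
    "EE": 0.01,  # E -> E
    "EI": 0.7,   # I -> E
    "II": 1.5,   # I -> I
    "IE": 0.7,   # E -> I
}
w_max = {"E": 1.0, "I": 0.5}  # alpha_max,E and alpha_max,I

# Sanity check mirroring the paper's stated rationale: the E-E learning
# rate is deliberately the slowest, which helps prevent overexcitation
# among E-neurons.
assert lr["EE"] == min(lr.values())
```

A check like the final assertion is a cheap way to catch transcription errors when re-entering hyperparameters from a paper by hand.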