Learning and aligning single-neuron invariance manifolds in visual cortex

Authors: Mohammad Bashiri, Luca Baroni, Ján Antolík, Fabian Sinz

ICLR 2025

Reproducibility assessment. Each entry lists the variable, the result, and the LLM response:
Research Type: Experimental. We tested our method on simulated neurons and macaque V1 neurons modeled by a response-predicting DNN. The simulated neurons were carefully designed to display a range of invariance properties, and our method successfully learned and aligned their corresponding invariance manifolds, enabling accurate identification of the ground-truth functional types. When applied to a response-predicting model of macaque V1 neurons, our approach captured and aligned the invariance manifolds of these neurons, uncovering multiple functional clusters, including, but not limited to, the expected clusters of canonical simple and complex cells.
Researcher Affiliation: Collaboration. Mohammad Bashiri1,2, Luca Baroni3, Ján Antolík3, Fabian H. Sinz2,4. 1Noselab GmbH, Munich, Germany; 2Department of Computer Science, University of Göttingen, Germany; 3Faculty of Mathematics and Physics, Charles University, Prague, Czechia; 4Campus Institute Data Science (CIDAS), University of Göttingen, Germany. Equal contribution, EMAIL
Pseudocode: No. The paper describes the method using mathematical equations and textual explanations, but it does not include a distinct pseudocode block or algorithm section.
Open Source Code: Yes. The complete implementation of our method, including a Docker container for easy setup, can be found at https://github.com/sinzlab/laminr.
Open Datasets: Yes. We then applied our method to a response-predicting model of a population of biological neurons recorded from the primary visual cortex of two macaques, using the response-predicting model from Baroni et al. (2023) trained on the publicly available dataset from Cadena et al. (2024).
Dataset Splits: No. The paper mentions using a "test set" for evaluation (e.g., "high predictive performance on a test set of images") and training a model, but it does not specify concrete details such as exact split percentages (e.g., 80/10/10), absolute sample counts for each split, or the methodology used to create the splits.
Hardware Specification: Yes. Running the complete pipeline on simulated neurons took approximately 2 hours on a single V100 GPU (one seed, 36 simulated neurons in total). For macaque V1 neurons (100 neurons in total), each neuron's template-learning and matching experiment was run separately and took approximately 15-20 minutes on a single A100 GPU (one seed).
Software Dependencies: Yes. The experiments were conducted using Python 3.9, PyTorch 1.13.1, and CUDA 11.7. However, our method is also compatible with the latest stable versions: PyTorch 2.6.0 and CUDA 12.4.
Experiment Setup: Yes. The initialization weights for the fully connected layer in the Implicit Neural Representation (INR) were sampled from a Gaussian distribution with σ = 0.1. For positional encoding of the pixel coordinates, we used Fourier features with 50 dimensions and a projection scale of 10. For positional encoding of the latent input, we used Fourier features with 50 dimensions and a projection scale of 0.1. ... The parameters of the INR were optimized using an Adam optimizer (Kingma & Ba, 2014) with a learning rate of 0.001... Training continued for a minimum of 500 steps... For experiments involving simulated neurons, the average required activity was set to α = 0.99 α_MEI and the minimum to α = 0.98 α_MEI. ... The strength of the contrastive regularization term, initially set to λ = 2, was reduced by a factor of 0.8... We optimized the parameters of the affine transformation using the Adam optimizer with a learning rate of 0.001. Training was halted when the average activity of the target neurons ceased to increase, with a patience parameter set to 15.
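The reported hyperparameters can be sketched as a minimal PyTorch module: Fourier-feature positional encodings with 50 dimensions each (projection scale 10 for pixel coordinates, 0.1 for the latent input), fully connected weights drawn from a Gaussian with σ = 0.1, and an Adam optimizer with learning rate 0.001. This is an illustrative sketch only; the MLP depth, width, activation, latent dimensionality, and class names (`FourierFeatures`, `INRSketch`) are assumptions, not taken from the paper or the laminr codebase.

```python
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Random Fourier-feature encoding: project inputs with a fixed
    Gaussian matrix scaled by `scale`, then apply sin/cos.
    Output dimensionality is 2 * n_features."""

    def __init__(self, in_dim: int, n_features: int = 50, scale: float = 10.0):
        super().__init__()
        B = torch.randn(in_dim, n_features) * scale
        self.register_buffer("B", B)  # fixed projection, not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([proj.sin(), proj.cos()], dim=-1)


class INRSketch(nn.Module):
    """Sketch of an INR with the paper's stated encodings:
    scale 10 for 2D pixel coordinates, scale 0.1 for the latent input.
    MLP width/depth and ReLU are assumptions."""

    def __init__(self, latent_dim: int = 1, hidden: int = 128):
        super().__init__()
        self.coord_enc = FourierFeatures(2, n_features=50, scale=10.0)
        self.latent_enc = FourierFeatures(latent_dim, n_features=50, scale=0.1)
        self.mlp = nn.Sequential(
            nn.Linear(200, hidden), nn.ReLU(),   # 100 coord + 100 latent features
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # Fully connected weights sampled from N(0, 0.1^2), as reported.
        for m in self.mlp:
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, std=0.1)
                nn.init.zeros_(m.bias)

    def forward(self, coords: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) pixel coordinates; z: (N, latent_dim) latent input
        feats = torch.cat([self.coord_enc(coords), self.latent_enc(z)], dim=-1)
        return self.mlp(feats)


inr = INRSketch()
optimizer = torch.optim.Adam(inr.parameters(), lr=0.001)  # lr as reported
coords = torch.rand(64, 2) * 2 - 1  # coordinates in [-1, 1]
z = torch.rand(64, 1)
out = inr(coords, z)
print(out.shape)  # torch.Size([64, 1])
```

The encoding matrices are registered as buffers so they stay fixed during optimization, matching the usual random-Fourier-feature setup where only the MLP parameters are trained.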