Functional Connectomes of Neural Networks
Authors: Tananun Songdechakraiwut, Yutong Wu
AAAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical analysis demonstrates its capability to enhance the interpretability of neural networks, providing a deeper understanding of their underlying mechanisms. We conducted extensive experiments to validate that our framework can enhance our ability to discern and interpret the complex structure of neural network functions, opening new avenues for both analysis and application. |
| Researcher Affiliation | Academia | Department of Computer Science, Duke University EMAIL, EMAIL |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figure 1 is a schematic diagram, not a pseudocode listing. |
| Open Source Code | Yes | Code https://github.com/masonwu11/topo-fcnn |
| Open Datasets | Yes | We performed our analyses on three datasets: MNIST (LeCun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky, Hinton et al. 2009). |
| Dataset Splits | No | Given a collection of data samples, we partition it into two separate datasets: a training dataset and a functional dataset. The training dataset is utilized via k-fold cross-validation to determine a set of optimal hyperparameter values through grid search. To train neural networks, we randomly partitioned the data points of each dataset into a training dataset and a functional dataset, as explained in Section 2. The paper describes this partitioning but gives no specific percentages, absolute sample counts, or references to predefined standard splits, so the splits cannot be reproduced from the text alone. |
| Hardware Specification | Yes | All topological methods used in the studies were evaluated through runtime experiments. These methods were executed on an Apple M1 Pro CPU with 16 GB of unified RAM. |
| Software Dependencies | No | The paper describes various methods and algorithms used (e.g., k-means clustering, Pearson correlation, persistent homology techniques) but does not provide specific version numbers for any software libraries, programming languages, or frameworks used for implementation. |
| Experiment Setup | Yes | For MNIST, we used a feedforward architecture with two hidden fully-connected layers, with the first and second layers comprising 128 and 64 neurons, respectively. For Fashion-MNIST, we used a similar feedforward architecture, but with the first and second layers comprising 256 and 128 neurons, respectively. For CIFAR-10, we used a convolutional neural network with three VGG blocks (Simonyan and Zisserman 2015), followed by two fully-connected layers, with the first and second layers comprising 256 and 128 neurons, respectively. In all architectures, we applied leaky ReLU activation functions and the stochastic gradient descent optimizer with momentum. To account for the stochastic nature of gradient-based optimization initialization, we trained 20 neural networks for each strategy, totaling 80 networks (20 × 4). |
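The Experiment Setup and Software Dependencies rows together outline enough of the pipeline to sketch it. The following is a minimal NumPy sketch, not the authors' implementation: the layer sizes (784 → 128 → 64 → 10) and leaky ReLU come from the paper's MNIST setup, while the random weights, stand-in "functional dataset" inputs, and initialization scale are illustrative assumptions. It shows how Pearson correlations over hidden-neuron activations on a functional dataset yield a connectome matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU, as used in all architectures in the paper
    return np.where(x > 0, x, alpha * x)

# MNIST feedforward architecture from the paper: 784 -> 128 -> 64 -> 10
sizes = [784, 128, 64, 10]
weights = [rng.normal(0, 0.05, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass that also records hidden-layer activations."""
    hidden = []
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:  # no activation on the output layer
            x = leaky_relu(x)
            hidden.append(x)
    return x, hidden

# Stand-in for the held-out "functional dataset" (random here, images in the paper)
X = rng.normal(size=(500, 784))
logits, hidden = forward(X)

# Functional connectome: Pearson correlations between the activation
# series of all hidden neurons (one column per neuron, one row per sample)
acts = np.hstack(hidden)            # shape (500, 128 + 64)
connectome = np.corrcoef(acts.T)    # shape (192, 192), symmetric, unit diagonal
print(connectome.shape)
```

In the paper this correlation matrix is the input to the topological (persistent-homology) analysis; training the weights and choosing hyperparameters via k-fold grid search happen before this step.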