Understanding Mode Connectivity via Parameter Space Symmetry

Authors: Bo Zhao, Nima Dehmamy, Robin Walters, Rose Yu

ICML 2025

Reproducibility assessment (variable, result, LLM response):
Research Type: Experimental. We demonstrate Proposition 6.1 empirically using a set of two-layer networks with various parameter space dimensions. Specifically, we construct networks with the loss ‖Uσ(VX) − Y‖², where σ is the sigmoid function, X ∈ ℝ^{n×k}, Y ∈ ℝ^{m×k}, and (U, V) ∈ Param = ℝ^{m×h} × ℝ^{h×n}. We create 100 such networks, each with m, h, n, k randomly sampled from the integers between 2 and 100. In each network, the elements of X and Y are sampled independently from a normal distribution, and U, V are randomly initialized. After training with SGD, we compute (U′, V′) = g · (U, V) using (6) with a random invertible matrix g. We then plot ‖Uσ(VX)‖ against ‖Uσ(VX) − U′σ(V′X)‖ in Figure 3(a).
Researcher Affiliation: Collaboration. ¹University of California, San Diego; ²IBM Research; ³Northeastern University. Correspondence to: Bo Zhao <EMAIL>, Nima Dehmamy <EMAIL>, Robin Walters <EMAIL>, Rose Yu <EMAIL>.
Pseudocode: No. The paper includes mathematical formulations and theoretical derivations, but it does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present any structured, code-like procedural steps.
Open Source Code: No. The paper does not contain any explicit statements about releasing source code, nor does it provide any links to code repositories or mention code in supplementary materials.
Open Datasets: No. In Section 6.1, the paper states: 'We demonstrate Proposition 6.1 empirically using a set of two-layer networks with various parameter space dimensions. Specifically, we construct networks with the loss ‖Uσ(VX) − Y‖², with σ being the sigmoid function, X ∈ ℝ^{n×k}, Y ∈ ℝ^{m×k}, and (U, V) ∈ Param = ℝ^{m×h} × ℝ^{h×n}. We create 100 such networks, each with m, h, n, k randomly sampled from integers between 2 and 100. In each network, elements in X and Y are sampled independently from a normal distribution, and U, V are randomly initialized.' This indicates the use of synthetic, randomly generated data rather than a pre-existing, publicly available dataset with concrete access information.
Dataset Splits: No. The paper's empirical validation in Section 6.1 uses randomly sampled data: 'In each network, elements in X and Y are sampled independently from a normal distribution, and U, V are randomly initialized.' Since the data is randomly generated for each network, no predefined training/validation/test splits are discussed or specified.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments. It only describes the setup of the networks and data.
Software Dependencies: No. The paper mentions 'training with SGD' and the use of a 'sigmoid function' and 'leaky ReLU function' for activation, but it does not specify any particular software libraries, frameworks (like PyTorch or TensorFlow), or their version numbers. This information is insufficient to replicate the software environment.
Experiment Setup: No. The paper mentions 'U, V are randomly initialized' and 'After training with SGD' in Section 6.1. However, it does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or other detailed optimizer settings, which are crucial for a reproducible experimental setup.
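The paper's transformation (6) defines its own group action on (U, V) and is not reproduced in this report. The underlying idea, however, is a map of the parameters that leaves the network output, and hence the loss, unchanged. A minimal sketch of such a parameter space symmetry, assuming a leaky ReLU activation (which the paper also mentions) and a positive diagonal g, for which the action (U, V) → (Ug⁻¹, gV) is exact:

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly sample dimensions, mirroring Section 6.1 (m, h, n, k in [2, 100]).
m, h, n, k = rng.integers(2, 101, size=4)

# Data and parameters: X in R^{n x k}, (U, V) in R^{m x h} x R^{h x n}.
X = rng.standard_normal((n, k))
U = rng.standard_normal((m, h))
V = rng.standard_normal((h, n))

def leaky_relu(z, alpha=0.1):
    # Positively homogeneous activation: leaky_relu(c * z) = c * leaky_relu(z) for c > 0.
    return np.where(z > 0, z, alpha * z)

def forward(U, V, X):
    return U @ leaky_relu(V @ X)

# A positive diagonal g is invertible and commutes with leaky ReLU,
# so (U, V) -> (U g^{-1}, g V) leaves the network output unchanged.
g = np.diag(rng.uniform(0.5, 2.0, size=h))
U2 = U @ np.linalg.inv(g)
V2 = g @ V

assert np.allclose(forward(U, V, X), forward(U2, V2, X))
```

This is only an illustration of the concept: the paper's sigmoid networks require the nonlinear action of equation (6), not this diagonal rescaling.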
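Because the paper omits the optimizer settings, any reimplementation must choose its own. A minimal sketch of fitting one such sigmoid network with full-batch gradient descent, where the learning rate, step count, and network sizes are all assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
m, h, n, k = 4, 8, 5, 20  # small illustrative sizes; the paper samples these in [2, 100]

X = rng.standard_normal((n, k))
Y = rng.standard_normal((m, k))
U = rng.standard_normal((m, h))
V = rng.standard_normal((h, n))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, steps = 1e-3, 2000  # assumed hyperparameters; not specified in the paper

loss0 = np.sum((U @ sigmoid(V @ X) - Y) ** 2)
for _ in range(steps):
    H = sigmoid(V @ X)                          # hidden activations, h x k
    R = U @ H - Y                               # residual of ||U sigmoid(VX) - Y||^2
    grad_U = 2 * R @ H.T
    grad_V = 2 * (U.T @ R) * H * (1 - H) @ X.T  # chain rule through the sigmoid
    U -= lr * grad_U
    V -= lr * grad_V
loss = np.sum((U @ sigmoid(V @ X) - Y) ** 2)
```

The paper says it trains with SGD; this sketch uses deterministic full-batch descent on the hand-derived gradients for clarity.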