Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Deep Learning the Ising Model Near Criticality

Authors: Alan Morningstar, Roger G. Melko

JMLR 2017 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We investigate this question by using unsupervised, generative graphical models to learn the probability distribution of a two-dimensional Ising system. Deep Boltzmann machines, deep belief networks, and deep restricted Boltzmann networks are trained on thermal spin configurations from this system, and compared to the shallow architecture of the restricted Boltzmann machine. We benchmark the models, focussing on the accuracy of generating energetic observables near the phase transition, where these quantities are most difficult to approximate.
Researcher Affiliation | Academia | Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada, and Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
Pseudocode | No | The paper describes algorithms such as the CD-k algorithm and Gibbs sampling in text and equations, but does not present them in structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about the availability of open-source code, a repository link, or code in supplementary materials for the methodology described.
Open Datasets | No | In order to produce training data, we use standard Markov chain Monte Carlo techniques with a combination of single-site Metropolis and Wolff cluster updates, where one Monte Carlo step consists of N single-site updates and one cluster update, and N is the number of sites on the lattice. Importance sampling thus obtained 10^5 independent spin configurations for each T ∈ [1.0, 3.5] in steps of ΔT = 0.1.
Dataset Splits | No | For a given temperature T and lattice size N, training and testing sets of arbitrary size can be produced with relative ease, allowing us to explore the representational power of different generative models without the concern of regularization due to limited data.
Hardware Specification | No | Simulations were performed on resources provided by the Shared Hierarchical Academic Research Computing Network (SHARCNET).
Software Dependencies | No | The paper discusses algorithms and training hyperparameters (Table 1), but does not explicitly list any specific software dependencies with version numbers.
Experiment Setup | Yes | Training hyper-parameters for each model are given in Table 1, and values of Nh1 and Nh2 used in this work were {8, 16, 24, 32, 40, 48, 56, 64}.

hyperparameter      | RBM     | DBM    | DBN    | DRBN
k                   | 10      | 10     | 10     | 5
equilibration steps | NA      | 10     | NA     | NA
training epochs     | 4×10^3  | 3×10^3 | 3×10^3 | 3×10^3
learning rate       | 5×10^−3 | 10^−3  | 10^−4  | 10^−4
mini-batch size     | 10^2    | 10^2   | 10^2   | 10^2
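The data-generation procedure quoted under Open Datasets combines single-site Metropolis updates with Wolff cluster updates. As a minimal sketch of the single-site Metropolis part only (the Wolff update is omitted; lattice size, seed, and sweep counts here are illustrative, not the paper's):

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep of N single-site Metropolis updates on an
    L x L periodic Ising lattice with spins in {-1, +1}, J = 1."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # Sum of the four nearest neighbours (periodic boundaries).
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn  # energy cost of flipping spin (i, j)
        # Accept the flip with the Metropolis probability min(1, exp(-beta*dE)).
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
    return spins

rng = np.random.default_rng(0)
L = 8
spins = rng.choice([-1, 1], size=(L, L))
# Thermalize at T = 2.0 (beta = 0.5), one point in the quoted range T in [1.0, 3.5].
for _ in range(200):
    metropolis_sweep(spins, beta=0.5, rng=rng)
```

In practice the paper's mixed scheme (adding one Wolff cluster update per Monte Carlo step) is what makes sampling efficient near the critical temperature, where single-site updates alone decorrelate slowly.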
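The Pseudocode row notes that the paper describes CD-k and Gibbs sampling only in text and equations. A minimal sketch of one CD-k update for a binary RBM is below, using k = 10, learning rate 5×10^−3, and mini-batch size 10^2 from the RBM column of Table 1; the layer sizes, seed, and random toy batch are illustrative assumptions, and spins {-1, +1} are assumed to be mapped to binary units {0, 1}:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_k_step(v0, W, b, c, k=10, lr=5e-3):
    """One contrastive-divergence (CD-k) update for a binary RBM.

    v0: mini-batch of visible configurations in {0, 1}, shape (n, nv)
    W:  weights (nv, nh); b, c: visible and hidden biases.
    """
    # Positive phase: hidden probabilities clamped to the data.
    ph0 = sigmoid(v0 @ W + c)
    v, ph = v0, ph0
    # k steps of block Gibbs sampling to estimate the model statistics.
    for _ in range(k):
        h = (rng.random(ph.shape) < ph).astype(float)
        pv = sigmoid(h @ W.T + b)
        v = (rng.random(pv.shape) < pv).astype(float)
        ph = sigmoid(v @ W + c)
    # Gradient ascent on the CD approximation to the log-likelihood.
    n = v0.shape[0]
    W += lr * (v0.T @ ph0 - v.T @ ph) / n
    b += lr * (v0 - v).mean(axis=0)
    c += lr * (ph0 - ph).mean(axis=0)
    return W, b, c

# Toy usage: a mini-batch of 10^2 binary "spin" configurations, 16 visible units.
nv, nh = 16, 8
W = 0.01 * rng.standard_normal((nv, nh))
b, c = np.zeros(nv), np.zeros(nh)
batch = (rng.random((100, nv)) < 0.5).astype(float)
W, b, c = cd_k_step(batch, W, b, c)
```

The deep models in the paper (DBM, DBN, DRBN) stack such layers and differ in how the negative phase is sampled; this sketch covers only the shallow RBM case.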