Client-only Distributed Markov Chain Monte Carlo Sampling over a Network
Authors: Bo Yuan, Jiaojiao Fan, Jiaming Liang, Yongxin Chen
TMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct both qualitative and quantitative experiments in Section 6 and Appendix A.5, demonstrating the efficiency of our sampler for large-scale distributed systems and the superior performance over the baselines. |
| Researcher Affiliation | Collaboration | Bo Yuan (School of Aerospace Engineering, Georgia Institute of Technology); Jiaojiao Fan (Nvidia); Jiaming Liang (Department of Computer Science, University of Rochester); Yongxin Chen (School of Aerospace Engineering, Georgia Institute of Technology) |
| Pseudocode | Yes | Algorithm 1: A Sampler for Composite Potentials; Algorithm 2: Distributed Sampling over a bipartite graph |
| Open Source Code | No | The paper does not contain an explicit statement about releasing code, nor does it provide any links to a code repository. |
| Open Datasets | No | The paper describes experiments using "Gaussian targets" and a "non-Gaussian target" (Section 6), which appear to be synthetic or defined within the paper, without providing concrete access information (links, DOIs, citations to established public datasets) for public availability. |
| Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits. It describes experimental procedures like running independent chains and burn-in stages for sampling, but not data partitioning for supervised learning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | In this case, σ_ij = 1/3 if there is one edge connecting i and j, σ_ii = 1 − ∑_j σ_ij, and we replace the unbiased estimation of gradients ∇f_i(x_i^k) by the exact value. The dimension d is 5, and the initial distribution on each node is N(0, I_5). The performance is measured by the estimated 2-Wasserstein distance. We also conduct experiments on a more challenging target defined by exp(−∑_{i=1}^4 f_i(x)) = exp(−‖x − 1.5‖ − ‖x − 0.5‖ − ‖x − 1‖ − ‖x − 1.5‖), where each minimizer is an all-one vector multiplied by a scale. We repeated the Gaussian experiment on the same three-layer perfect binary tree as in Figure 2a but increased the dimensionality from d=5 to d=32 and replaced the Gaussian initialization with a uniform initialization. The penalty parameter ρ in D-ADMMS was varied. |
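The mixing-matrix rule quoted above (σ_ij = 1/3 on each edge, σ_ii = 1 − ∑_j σ_ij) can be sketched for the three-layer perfect binary tree mentioned in the setup. This is a minimal illustration, not the paper's code; the node indexing and the `mixing_matrix` helper are our own assumptions.

```python
# Sketch: build the mixing matrix sigma for a 7-node three-layer
# perfect binary tree, following the rule quoted from the paper:
# sigma_ij = 1/3 on each edge, sigma_ii = 1 - sum_j sigma_ij.
# Node layout (our assumption): 0 is the root, 1-2 its children, 3-6 leaves.
EDGES = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]
N = 7

def mixing_matrix(edges, n, w=1.0 / 3.0):
    """Return an n x n mixing matrix with weight w on each undirected edge."""
    sigma = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        sigma[i][j] = sigma[j][i] = w  # symmetric off-diagonal weights
    for i in range(n):
        sigma[i][i] = 1.0 - sum(sigma[i])  # diagonal closes each row to 1
    return sigma

sigma = mixing_matrix(EDGES, N)
# Each row sums to 1; symmetry then makes the matrix doubly stochastic.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in sigma)
```

Because the off-diagonal part is symmetric and every row sums to one, the columns also sum to one, which is the double-stochasticity typically required of gossip/mixing matrices in distributed sampling.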