Variational Pseudo Marginal Methods for Jet Reconstruction in Particle Physics
Authors: Hanming Yang, Antonio Khalil Moretti, Sebastian Macaluso, Philippe Chlenski, Christian A. Naesseth, Itsik Pe'er
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate our method's effectiveness through experiments using data generated with a collider physics generative model, highlighting superior speed and accuracy across a range of tasks. We illustrate the effectiveness of both methods through experiments using data generated with Ginkgo (Cranmer, Kyle et al., 2021), highlighting superior speed and accuracy across various tasks. |
| Researcher Affiliation | Collaboration | Hanming Yang 1,*, Antonio Khalil Moretti 1,2,*, Sebastian Macaluso 3, Philippe Chlenski 1, Christian A. Naesseth 4, Itsik Pe'er 1. 1 Department of Computer Science, Columbia University; 2 Department of Computer Science, Spelman College; 3 Telefonica Research; 4 University of Amsterdam |
| Pseudocode | Yes | Algorithm 1 Toy Parton Shower Generator Algorithm 2 Combinatorial Sequential Monte Carlo Algorithm 3 Nested Combinatorial Sequential Monte Carlo |
| Open Source Code | No | The paper discusses the source code for Ginkgo (Cranmer et al., 2019b), a generative model used in their experiments, but does not explicitly state that the code for their own proposed methodology (Variational Pseudo Marginal Methods for Jet Reconstruction) is open-source or provide a link to it. The citation for Ginkgo is: 'Toy Generative Model for Jets Package. https://github.com/SebastianMacaluso/ToyJetsShower, 2019b.' |
| Open Datasets | No | The paper uses 'data generated with a collider physics generative model' specifically 'Ginkgo (Cranmer, Kyle et al., 2021)'. While Ginkgo is a generative *model*, the paper does not provide concrete access information (link, DOI, repository, or citation) for the specific *datasets* generated and used in their experiments. |
| Dataset Splits | No | The paper states 'We simulated 100 jets using Ginkgo running comparisons with Greedy Search, Beam Search and Cluster Trellis.' and 'Across 100 simulated jets, Vncsmc...'. It does not specify any training, validation, or test dataset splits; the data appears to be generated anew for each experimental run rather than drawn from a fixed, pre-split dataset. |
| Hardware Specification | Yes | All experiments were performed on a Google Cloud Platform n1-standard-4 instance with an Intel Xeon CPU (4 vCPUs and 15 GB RAM), without GPU utilization. |
| Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., Python, PyTorch, TensorFlow versions), that were used to implement their methods or run their experiments. |
| Experiment Setup | Yes | Vncsmc with (K, M) = (256, 1) returns a higher likelihood in all 100 cases against Greedy Search (left) and in 99 cases against Beam Search (center). Fig. 5 (right) shows the log conditional likelihood log p̂(X | τ, λ) for Vcsmc (blue) and Vncsmc (red) with K = 256 samples (and M = 1 for Vncsmc), averaged across 5 random seeds. We generated jets with N = {4, …, 64} leaf nodes and profiled the running time of Vcsmc, Cluster Trellis, Greedy Search and Beam Search, averaged across 3 random seeds. |
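The pseudocode row above refers to the paper's Combinatorial Sequential Monte Carlo (Algorithm 2), which builds a jet clustering hierarchy by sampling pairwise merges across a population of weighted particles. The following is a minimal illustrative sketch of that general scheme, not the authors' implementation: the particle representation, the 1-D "momentum" values, and the `toy_potential` merge score are all hypothetical stand-ins for the Ginkgo physics model.

```python
import math
import random

def toy_potential(a, b):
    # Hypothetical merge score: favor merging clusters with
    # similar 1-D "momenta" (stand-in for the Ginkgo likelihood).
    return math.exp(-abs(a - b))

def combinatorial_smc(leaves, K=16, seed=0):
    """Sketch of a combinatorial SMC sweep over binary clusterings.

    Each of K particles repeatedly samples a pair of clusters to
    merge until one root remains; importance weights accumulate into
    an estimate of the log marginal likelihood.
    """
    rng = random.Random(seed)
    particles = [list(leaves) for _ in range(K)]
    log_z = 0.0
    for _ in range(len(leaves) - 1):
        weights = []
        for p in particles:
            # Propose a uniform pairwise merge and weight it.
            i, j = rng.sample(range(len(p)), 2)
            weights.append(toy_potential(p[i], p[j]))
            merged = (p[i] + p[j]) / 2.0  # toy combined momentum
            p[:] = [x for k, x in enumerate(p) if k not in (i, j)]
            p.append(merged)
        # Accumulate the normalizing-constant estimate, then
        # resample particles in proportion to their weights.
        log_z += math.log(sum(weights) / K)
        idx = rng.choices(range(K), weights=weights, k=K)
        particles = [list(particles[i]) for i in idx]
    return log_z
```

With fixed seeds the sweep is deterministic, which mirrors the report's note that results were averaged across a small number of random seeds rather than over a fixed dataset.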