A Lennard-Jones Layer for Distribution Normalization
Authors: Mulun Na, Jonathan Klein, Biao Zhang, Wojciech Pałubicki, Sören Pirk, Dominik L. Michels
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The improvements in 3D point cloud generation utilizing LJLs are evaluated qualitatively and quantitatively. |
| Researcher Affiliation | Academia | Mulun Na EMAIL Computational Sciences Group KAUST; Jonathan Klein EMAIL Computational Sciences Group KAUST; Biao Zhang EMAIL Visual Computing Center KAUST; Wojciech Pałubicki EMAIL Natural Phenomena Modeling Group AMU; Sören Pirk EMAIL Visual Computing and Artificial Intelligence Group CAU; Dominik L. Michels EMAIL Computational Sciences Group KAUST |
| Pseudocode | Yes | ALGORITHM 1: The Lennard-Jones layer (LJL). NNS denotes the nearest neighbor search. Input: Point cloud Xi and point cloud NNS(Xi). Output: Point cloud Xi+1. |
| Open Source Code | No | Source Code: Upon request, we are happy to share the source code to generate the results presented in this paper. Please contact the first or the last author of this manuscript. |
| Open Datasets | Yes | Incorporating the optimal parameters, we tested LJL-embedded generative models on ShapeNet (Chang et al., 2015) shown in Fig. 8. |
| Dataset Splits | No | The paper mentions using ShapeNet and a test set from Luo & Hu (2021b) but does not provide specific training/test/validation split percentages, sample counts, or explicit methodology for partitioning the data. |
| Hardware Specification | Yes | We test all models on an NVIDIA GeForce RTX 3090 GPU if not specified otherwise. |
| Software Dependencies | No | The paper mentions various algorithms and models (e.g., ShapeGF, DDPM, score-based model) and references computational tools in related work (LAMMPS, RATTLE, SHAKE), but it does not specify version numbers for any software libraries or frameworks used in their own implementation (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In practice, we find that setting ϵ = 2 results in a well-behaving LJ gradient. The LJL parameters α = 0.5, β = 0.01, and ϵ = 2 are the same as in the previous example. We set σ_3D = 5σ_2D due to the addition of the third dimension, with the factor of 5 determined through hyperparameter tuning. We set ϵ = 2 and σ_3D = 5σ_2D due to the additional third dimension. We found that α = 2.5 and β = 0.01 meet the requirements of less noise and better distribution. We select SS = 60 in the actual generation task. We keep β = 0.01 and choose α = 0.3 which will minimize the ratio of the noise and distance score increment rates, see Appendix C.1.1. |
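The pseudocode row (Algorithm 1) and the hyperparameter row above can be combined into a rough sketch of one LJL step. This is not the authors' released implementation (the source code is available only on request): it assumes the standard 12-6 Lennard-Jones potential, a brute-force nearest neighbor search in place of the paper's NNS, α as a gradient step size, and β interpreted here as a per-step displacement cap; all of these readings beyond the quoted parameter values are assumptions.

```python
import numpy as np

def lj_gradient(r, epsilon=2.0, sigma=1.0):
    """dV/dr of the 12-6 Lennard-Jones potential
    V(r) = 4*epsilon*((sigma/r)**12 - (sigma/r)**6)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * epsilon * (sr6 - 2.0 * sr6 ** 2) / r

def lj_layer(X, alpha=0.5, beta=0.01, epsilon=2.0, sigma=1.0):
    """One LJL step on a point cloud X of shape (n, d).

    Each point is nudged along the line to its nearest neighbor:
    repelled when closer than the LJ minimum, attracted when farther.
    ASSUMPTION: beta caps the displacement magnitude per step, keeping
    the steep near-field gradient from exploding.
    """
    n = X.shape[0]
    # Brute-force nearest neighbor search (the paper's NNS(X_i)).
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nn = dists.argmin(axis=1)
    r = dists[np.arange(n), nn]
    direction = (X - X[nn]) / r[:, None]        # unit vector away from NN
    step = -alpha * lj_gradient(r, epsilon, sigma)  # gradient descent on V
    step = np.clip(step, -beta, beta)           # displacement cap (assumption)
    return X + step[:, None] * direction
```

With σ = 1 the LJ minimum sits at r = 2^(1/6) ≈ 1.12, so a pair at distance 1 is pushed apart while a pair at distance 2 is pulled together, which is the normalization behavior the paper relies on.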