Rethinking the role of frames for SE(3)-invariant crystal structure modeling

Authors: Yusei Ito, Tatsunori Taniai, Ryo Igarashi, Yoshitaka Ushiku, Kanta Ono

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct extensive comparisons on datasets derived from the JARVIS, Materials Project (MP), and Open Quantum Materials Database (OQMD). Our results show that our method outperforms conventional frames and existing crystal encoders in various crystal property prediction tasks."
Researcher Affiliation | Collaboration | Yusei Ito 1,2, Tatsunori Taniai 1, Ryo Igarashi 1, Yoshitaka Ushiku 1, and Kanta Ono 2 (contributed equally). 1 OMRON SINIC X Corporation; 2 Osaka University.
Pseudocode | No | The paper describes methods in paragraph text and uses mathematical equations and figures to illustrate concepts (e.g., Figure 2 for the CrystalFramer architecture), but does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | "We release our code online." https://omron-sinicx.github.io/crystalframer/
Open Datasets | Yes | The paper uses three datasets: JARVIS (55,723 materials), MP (69,239 materials), and OQMD (817,636 materials), using snapshots available through a Python package (jarvis-tools).
Dataset Splits | Yes | "Following these and later studies (Lin et al., 2023; Yan et al., 2024; Taniai et al., 2024), we use the same data splits and cite their reported scores to reduce computational burden. Unlike these studies, we also use the much larger-scale OQMD dataset to assess scalability." Table 1 of the paper reports property prediction results on the JARVIS dataset (accuracies in mean absolute error) and lists the sizes of the training, validation, and test splits under each property name.
Hardware Specification | Yes | "The runtimes are evaluated for the formation energy prediction in the JARVIS dataset using a single NVIDIA A6000 GPU with 48GB VRAM."
Software Dependencies | No | The paper mentions using a Python package (jarvis-tools) and Adam (Kingma & Ba, 2015) for optimization, but specific version numbers for these or other key software libraries/frameworks (e.g., PyTorch, TensorFlow) are not provided.
Experiment Setup | Yes | "A summary of detailed training settings, including the number of epochs, batch size, and learning rate for the three datasets, can be found in Appendix D." Table A1 summarizes the training settings for the JARVIS, MP, and OQMD datasets. "Specifically, for the JARVIS dataset, we optimize the mean absolute loss function using the Adam optimizer (Kingma & Ba, 2015) with (β1, β2) = (0.9, 0.98) and weight decay of 10^-5 (Loshchilov & Hutter, 2019)."
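The Experiment Setup row quotes Adam with (β1, β2) = (0.9, 0.98) and a weight decay of 10^-5 following Loshchilov & Hutter (2019), i.e., decoupled weight decay. As a minimal sketch of what that update rule does, the snippet below implements a single Adam step with decoupled weight decay in NumPy and applies it to a toy absolute-error objective; the function name, learning rate, and toy problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def adamw_step(w, grad, m, v, t, lr=1e-2,
               beta1=0.9, beta2=0.98, eps=1e-8, weight_decay=1e-5):
    """One Adam update with decoupled weight decay (AdamW-style).

    (beta1, beta2) = (0.9, 0.98) and weight_decay = 1e-5 mirror the
    settings the paper reports for its JARVIS runs; lr is illustrative.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias corrections
    v_hat = v / (1 - beta2 ** t)
    # Decoupled decay: the weight-decay term is added outside the
    # adaptive rescaling, rather than being folded into the gradient.
    w = w - lr * (m_hat / (np.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v

# Toy problem: minimize the mean-absolute-style objective |w - 2|
# for a scalar parameter, using its subgradient sign(w - 2).
w, m, v = 5.0, 0.0, 0.0
for t in range(1, 501):
    grad = np.sign(w - 2.0)
    w, m, v = adamw_step(w, grad, m, v, t)
print(f"w after 500 steps: {w:.3f}")
```

With a constant-magnitude subgradient, the adaptive rescaling makes each step roughly the learning rate in size, so the parameter walks toward the minimizer and then oscillates tightly around it.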