Generalized Implicit Neural Representations for Dynamic Molecular Surface Modeling

Authors: Fang Wu, Bozhen Hu, Stan Z. Li

AAAI 2025

Reproducibility checklist (variable, result, and supporting LLM response):
Research Type: Experimental. "Extensive experiments validate its effectiveness in analyzing complex molecular systems across continuous space and time domains. ... We verify the effectiveness of our MoE-DSR on ATLAS (Vander Meersche et al. 2024), the largest existing MD simulation database of proteins. Comprehensive results demonstrate that incorporating the MoE architecture and geometric symmetries significantly boosts INR's capability to comprehend protein dynamic changes and handle diverse protein distributions. ... Quantitative Results ... Ablation Studies"
Researcher Affiliation: Academia. "1 Computer Science Department, Stanford University; 2 School of Engineering, Westlake University"
Pseudocode: No. The paper includes a 'Model Overview' section with a diagram (Figure 1) illustrating the MoE-DSR architecture and describes the components using mathematical formulations, but it does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block with structured steps.
Open Source Code: No. The paper states: 'For baseline implementation, DSR (Sun et al. 2024) was reproduced using its official GitHub website at https://github.com/Sundw-818/DSR.' This refers to the code for a baseline model (DSR), not the authors' own MoE-DSR method. No explicit statement or link is provided for the source code of MoE-DSR.
Open Datasets: Yes. "To comprehensively demonstrate and assess the ability of our method, we train MoE-DSR on ATLAS (Vander Meersche et al. 2024), the largest up-to-date dataset of all-atom MD simulations for single-chain proteins."
Dataset Splits: Yes. "The training split contains monomers not involved during the curation of the test split. The selected test data points are then divided randomly into the validation and final test sets with a ratio of 1:1. Using this cutoff, we obtain train/val/test splits of 1,290/50/50 ensembles."
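The described split protocol (hold out a test-curation pool disjoint from training, then divide it 1:1 into validation and test) can be sketched as follows. This is a hypothetical illustration of the reported 1,290/50/50 ATLAS split, not the authors' actual curation code; `split_ensembles` and the seed are invented for the example.

```python
import random

def split_ensembles(ensemble_ids, n_val=50, n_test=50, seed=0):
    """Hypothetical sketch: hold out a pool of ensembles disjoint from
    training, then split that pool 1:1 into validation and test sets."""
    rng = random.Random(seed)
    ids = list(ensemble_ids)
    rng.shuffle(ids)
    held_out = ids[: n_val + n_test]   # pool used to curate val/test
    train = ids[n_val + n_test:]       # monomers not in the held-out pool
    rng.shuffle(held_out)
    val, test = held_out[:n_val], held_out[n_val:]  # 1:1 ratio
    return train, val, test

# 1,390 total ensembles -> 1,290 train / 50 val / 50 test
train, val, test = split_ensembles(range(1390))
```

Because the held-out pool is removed before the train set is formed, the three sets are disjoint by construction, matching the paper's statement that training monomers were "not involved during the curation of the test split."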
Hardware Specification: Yes. "All experiments are implemented in a data-parallel mode on 4 A100 GPUs, each with a memory of 80GB."
Software Dependencies: No. The paper mentions 'PyTorch Autograd' for gradient calculation and the Python 'scikit-image' package for the Marching Cubes algorithm, but does not provide specific version numbers for these software components.
Experiment Setup: Yes. "Following (Sun et al. 2024), we adopt the Softplus (i.e., Υ(x) = (1/β) ln(1 + exp(βx))) as the activation function for the experts with β = 100. The gradient of learners ∇x E(·) is calculated by PyTorch Autograd. ... Each MLP has the same architecture with 8 layers and 512 hidden units as well as a single skip connection from the input to the middle layer. The initial latent code vector z is sampled from a normal distribution N(0, 1). ... The final loss of our MoE-DSR is thus a weighted sum of L_SDF and L_MoE with different multiplicative coefficients λ1 and λ2 = 1e-2, respectively. ... Here, the choice of NK is a hyperparameter whose value is chosen according to application, and typically, NK = 1, 2."
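The quoted expert architecture (Softplus with β = 100; 8-layer, 512-unit MLP with a single skip connection from the input to the middle layer; latent code drawn from N(0, 1)) can be sketched as below. This is an illustrative NumPy reconstruction under stated assumptions, not the released implementation: the latent dimension (256), the weight initialization scale, and the exact skip-layer index are guesses, and the authors use PyTorch rather than NumPy.

```python
import numpy as np

def softplus(x, beta=100.0):
    # Softplus from the paper: (1/beta) * ln(1 + exp(beta * x)).
    # np.logaddexp(0, t) computes ln(1 + exp(t)) in a numerically stable way.
    return np.logaddexp(0.0, beta * x) / beta

def init_mlp(in_dim, hidden=512, n_layers=8, seed=0):
    # 8 layers, 512 hidden units, one skip connection from the input to the
    # middle layer (the skip layer's input is widened to accept it).
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [hidden] * (n_layers - 1) + [1]  # scalar SDF output
    skip_at = n_layers // 2                            # assumed middle layer
    params = []
    for i in range(n_layers):
        d_in = dims[i] + (in_dim if i == skip_at else 0)
        params.append((rng.standard_normal((d_in, dims[i + 1])) * 0.01,
                       np.zeros(dims[i + 1])))
    return params, skip_at

def forward(params, skip_at, x):
    h = x
    for i, (W, b) in enumerate(params):
        if i == skip_at:
            h = np.concatenate([h, x], axis=-1)  # input skip connection
        h = h @ W + b
        if i < len(params) - 1:
            h = softplus(h)                      # Softplus on hidden layers
    return h

# Query SDF values at 5 points in R^3, conditioned on an initial latent
# code z ~ N(0, 1); the latent size 256 is an assumption for illustration.
z = np.random.default_rng(1).standard_normal(256)
pts = np.zeros((5, 3))
params, skip_at = init_mlp(in_dim=3 + z.size)
inp = np.concatenate([np.tile(z, (5, 1)), pts], axis=-1)
sdf = forward(params, skip_at, inp)   # one signed-distance value per point
```

With β = 100 the Softplus closely approximates ReLU while remaining smooth, which matters here because the surface normal is recovered by differentiating the SDF (the ∇x E(·) gradient the paper computes with PyTorch Autograd).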