Deep Neural Cellular Potts Models
Authors: Koen Minartz, Tim D’Hondt, Leon Hillmann, Jörn Starruß, Lutz Brusch, Vlado Menkovski
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluation with synthetic and real-world multicellular systems demonstrates that Neural CPM is able to model cellular dynamics that cannot be accounted for by traditional analytical Hamiltonians. |
| Researcher Affiliation | Academia | 1Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, the Netherlands 2Department of Applied Physics and Science Education, Eindhoven University of Technology, Eindhoven, the Netherlands 3Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands 4Center for Information Services and High Performance Computing, TUD Dresden University of Technology, Dresden, Germany. Correspondence to: Koen Minartz <EMAIL>, Lutz Brusch <EMAIL>, Vlado Menkovski <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Neural CPM training procedure |
| Open Source Code | Yes | Details on experiments and datasets are in Appendix A and in our code (https://github.com/kminartz/NeuralCPM). |
| Open Datasets | Yes | The Cellular MNIST dataset: a synthetic dataset in which cells form digit-like structures, also illustrated in Figure 3... The MNIST data set (Deng, 2012). |
| Dataset Splits | No | The resulting datasets comprised 128 independent full lattice snapshots each. The final data set contained 1280 samples. Using this procedure, we generated 1000 samples, which we randomly rotate for training. The paper does not explicitly provide training/test/validation dataset splits. |
| Hardware Specification | No | K.M. acknowledges that this work used the Dutch national e-infrastructure with the support of the SURF Cooperative using grant no. EINF-7724. This statement refers to a national e-infrastructure, but does not provide specific details such as exact GPU/CPU models, processor types, or memory amounts used for the experiments. |
| Software Dependencies | No | Our implementation is built on JAX (Bradbury et al., 2018) and Equinox (Kidger & Garcia, 2021). While the frameworks are mentioned with their publication years, specific version numbers for JAX and Equinox are not provided. |
| Experiment Setup | Yes | Common hyperparameters are given in Table 10. We used the Adam optimizer in all experiments with learning rate η = 1e-3 and standard hyperparameters β1 = 0.9, β2 = 0.999, ϵ = 1e-8. Table 10 specifies batch size B, max. num. training steps, Monte Carlo steps, lattice size, EWA α, regularizer λ, num. parallel flips P, and sampler reset probability. Appendix B.2 further details neural network architecture parameters such as hidden dimensions and max-pooling rates. |
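The optimizer configuration reported above (Adam with η = 1e-3, β1 = 0.9, β2 = 0.999, ϵ = 1e-8) is fully specified, so it can be reproduced directly. The paper's implementation is built on JAX and Equinox; the sketch below is instead a minimal, self-contained pure-Python Adam step with the reported values as defaults, intended only to make the hyperparameter settings concrete, not to reproduce the authors' training code.

```python
import math

def adam_step(param, grad, m, v, t,
              lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    Defaults match the hyperparameters reported in the paper
    (lr = 1e-3, beta1 = 0.9, beta2 = 0.999, eps = 1e-8).
    `t` is the 1-indexed step count used for bias correction.
    """
    # Update biased first and second moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    # Bias-corrected moments.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update.
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# First step from zero moment estimates: the bias-corrected update
# magnitude is close to the learning rate, as expected for Adam.
p, m, v = adam_step(param=1.0, grad=2.0, m=0.0, v=0.0, t=1)
```

In a JAX codebase this would more typically be expressed via an optimizer library such as optax (`optax.adam(1e-3, b1=0.9, b2=0.999, eps=1e-8)`), though the paper does not state which optimizer implementation was used.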