NITO: Neural Implicit Fields for Resolution-free and Domain-Adaptable Topology Optimization

Authors: Amin Heyrani Nobari, Lyle Regenwetter, Giorgio Giannone, Faez Ahmed

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. In Section 4, 'Experiments', the paper presents a range of experiments comparing NITO against existing state-of-the-art models. It includes 'Table 1: Quantitative evaluation on 64x64 datasets' and 'Table 2: Quantitative Evaluation on 256x256 datasets', which report performance metrics such as compliance error and volume fraction error, indicating empirical studies with data analysis.
Researcher Affiliation: Collaboration. Amin Heyrani Nobari (EMAIL, Massachusetts Institute of Technology), Lyle Regenwetter (EMAIL, Massachusetts Institute of Technology), Giorgio Giannone (EMAIL, Amazon and Massachusetts Institute of Technology), Faez Ahmed (EMAIL, Massachusetts Institute of Technology). The affiliations include 'Massachusetts Institute of Technology' (academic) and 'Amazon' (industry), indicating a collaboration.
Pseudocode: No. The paper describes its methodology in prose and through figures such as 'Figure 3: The NITO framework for topology optimization'. It details the steps and components of NITO but presents no structured pseudocode or algorithm blocks.
Open Source Code: Yes. The paper explicitly states: 'Code & Data: https://github.com/ahnobari/NITO_Public'.
Open Datasets: Yes. The paper states: 'Code & Data: https://github.com/ahnobari/NITO_Public' and 'We create a dataset comprised of various domain shapes and resolutions using a custom SIMP optimizer...'.
Dataset Splits: Yes. The paper specifies: 'For each resolution, 1,000 samples are used for testing... The 64x64 dataset includes 48,000 training samples and 1,000 test samples... The 256x256 dataset includes 60,000 training samples and 1,800 samples for testing, and the other three domains have 29,000 samples each with 1,000 test samples each.'
Hardware Specification: Yes. The paper explicitly states the hardware used: 'These times are measured using an RTX 4090 GPU and an Intel Core i9-13900K CPU.'
Software Dependencies: No. The paper mentions implementing a custom SIMP optimizer in Python ('implement the SIMP optimizer from scratch in Python') but does not give version numbers for Python or for any other libraries or dependencies.
Experiment Setup: Yes. The paper details the training process: 'The training of the conditional neural implicit model is performed in 3 stages... The first stage of training is carried out for 20 epochs... In the second stage, we sample on a 32x32 grid for 20 epochs. Finally, we train for 10 more epochs... We use the AdamW optimizer with a starting learning rate of 10^-4, which is reduced on a cosine annealing schedule, stepped at the end of each epoch, to reach 5 × 10^-6 at the final epoch.' It also specifies the model architecture: 'In our implementation, we use 8 layers of size 1024 for the neural fields and use 4 layers of size 256 for the three point cloud models...'
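The quoted learning-rate schedule can be sketched as a standalone function. This is a minimal sketch, assuming the 50 total epochs (20 + 20 + 10) quoted above and the standard cosine annealing formula stepped once per epoch; it is not the authors' implementation.

```python
import math

# Assumed reconstruction of the schedule described in the quote: AdamW's
# starting learning rate of 1e-4 is cosine-annealed down to 5e-6 over the
# 50 total epochs (20 + 20 + 10) of the three training stages.
def cosine_lr(epoch: int, total_epochs: int = 50,
              lr_max: float = 1e-4, lr_min: float = 5e-6) -> float:
    """Learning rate used during `epoch` (0-indexed), stepped once per epoch."""
    t = epoch / (total_epochs - 1)  # 0.0 at the first epoch, 1.0 at the last
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * t))

# The full 50-epoch schedule, decaying monotonically from 1e-4 to 5e-6.
schedule = [cosine_lr(e) for e in range(50)]
```

With these assumed parameters the schedule starts exactly at 10^-4 and ends exactly at 5 × 10^-6, matching the quoted endpoints.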