Feature-Mapping Topology Optimization with Neural Heaviside Signed Distance Functions

Authors: Aleksandr Kolomeitsev, Anh-Huy Phan

ICML 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results validate the effectiveness of our approach in balancing structural compliance, offering a new pathway to CAD-integrated design with minimal human intervention.
Researcher Affiliation | Academia | Artificial Intelligence Center, Laboratory of Intelligent Signal and Image Processing, Skolkovo Institute of Science and Technology, Moscow, Russia. Correspondence to: Aleksandr Kolomeitsev <EMAIL>.
Pseudocode | No | The paper describes methods and processes through narrative text and mathematical equations (e.g., Section 4.2, 'Feature Mapping Topology Optimization with Neural Heaviside SDF. Inference', and Appendix B, 'Assembling of the Global Stiffness Matrix'), but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, figures, or explicitly formatted code-like procedures.
Open Source Code | Yes | The code and models are publicly available at https://github.com/Alexander19970212/NHSDF-TOp.
Open Datasets | No | Our datasets include training and testing datasets for the Heaviside Decoder and the Reconstruction Decoder. The dataset for training the Heaviside Decoder contains samples of geometric features and randomly located points with their corresponding Heaviside values. The dataset for training the Reconstruction Decoder contains only samples of shape codes. To evaluate the Smth metric, we generated a dataset that includes feature code shapes and a grid of points with their Heaviside values. Further details can be found in Appendix E.
Dataset Splits | Yes | The dataset for training the Heaviside Decoder consists of 5k samples for each type of geometric feature (ellipse, triangle, quadrilateral) with varying rounding radii. [...] The dataset for training the Reconstruction Decoder contains only 5 million samples of χ for each type of geometric feature. [...] The dataset for testing the Heaviside Decoder is generated similarly to the training dataset, but contains 500 samples for each type of geometric feature. [...] The dataset for testing the Reconstruction Decoder is generated similarly to the training dataset, but contains 10k samples for each type of geometric feature.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. It focuses on the methodology, model architecture, and experimental results without mentioning the underlying physical computational resources.
Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions) that were used to implement the methodology or run the experiments. It refers to a 'variational autoencoder-based architecture' and a 'Deep SDF decoder architecture', but without specific software library versions.
Experiment Setup | Yes | In the proposed method... where β controls the steepness of the sigmoid function (see Fig. 1); a typical value is β = 20. Implementation Details: Variable Initialization. Variables are initialized such that, in the first iteration, the geometric features form a regular grid of squares with maximum rounding radii. The initial value of zm for the first iteration is computed via the Shape Encoder. [...] To achieve a smooth approximation of the maximum function, we employ the Kreisselmeier-Steinhauser function, taking into account the ρ limits... where γKS is the smoothing parameter. [...] Refactoring Mechanism. To prevent the latent variable from becoming infeasible, we implement a refactoring mechanism in which the value of zm is updated every 5 iterations using the reconstruction Decoder and Encoder. Appendix D, Topology Optimization Schemes: For all cases, the Poisson ratio and Young's modulus are set to ν = 0.3 and E = 1 Pa.
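The two numerical ingredients quoted in the setup row, the sigmoid-smoothed Heaviside with steepness β and the Kreisselmeier-Steinhauser (KS) smooth maximum with parameter γKS, can be sketched as below. This is an illustrative NumPy sketch, not the authors' code: the function names are invented, and the γ value is an assumption since the excerpt does not state one (the paper's quoted value β = 20 is used as the default).

```python
import numpy as np

def heaviside_sigmoid(phi, beta=20.0):
    """Smoothed Heaviside of a signed distance phi.

    beta controls the steepness of the sigmoid; the paper quotes
    beta = 20 as a typical value.
    """
    return 1.0 / (1.0 + np.exp(-beta * np.asarray(phi, dtype=float)))

def ks_max(values, gamma=40.0):
    """Kreisselmeier-Steinhauser smooth approximation of max(values).

    gamma plays the role of gamma_KS; larger gamma gives a tighter
    upper bound on the true maximum. The value 40.0 is an assumed
    default, not taken from the paper. The sum is shifted by the
    true max for numerical stability (a log-sum-exp trick).
    """
    values = np.asarray(values, dtype=float)
    m = values.max()
    return m + np.log(np.sum(np.exp(gamma * (values - m)))) / gamma

# Example: aggregating per-feature densities rho_i into one value.
rho = np.array([0.1, 0.7, 0.4])
rho_global = ks_max(rho)  # slightly above max(rho), never below it
```

The KS aggregate is always an upper bound on the true maximum and converges to it as γ grows, which is why it is a common differentiable stand-in for max() in topology optimization.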