Evolvable Conditional Diffusion

Authors: Zhao Wei, Chin Chun Ooi, Abhishek Gupta, Jian Cheng Wong, Pao-Hsiung Chiu, Sheares Xue Wen Toh, Yew-Soon Ong

IJCAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our proposed evolvable diffusion algorithm in two AI for Science scenarios: the automated design of fluidic topology and metasurface. Results demonstrate that this method effectively generates designs that better satisfy specific optimization objectives without reliance on differentiable proxies, providing an effective means of guidance-based diffusion that can capitalize on the wealth of black-box, non-differentiable multiphysics numerical models common across Science.
Researcher Affiliation | Academia | 1. Centre for Frontier AI Research, Agency for Science, Technology and Research, Singapore; 2. Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore; 3. School of Mechanical Sciences, Indian Institute of Technology, Goa, India; 4. College of Computing and Data Science, Nanyang Technological University, Singapore
Pseudocode | Yes | Algorithm 1: Evolvable conditional diffusion method
Open Source Code | No | No explicit statement about code release or link to a repository for the methodology described in this paper was found. The paper discusses open-source code in the context of other works but not its own.
Open Datasets | No | The paper mentions using "paired data-sets" to train regressors for fitness evaluation and pre-trained diffusion models, but does not provide specific access information (links, DOIs, or formal citations) for any datasets used or created.
Dataset Splits | No | The paper mentions generating "1000 samples for assessment" and "1000 test samples" but does not describe how the initial datasets were split into training, validation, or test sets for model training, nor give percentages or sample counts for such splits.
Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, memory amounts, or cloud computing specifications) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper names black-box solvers such as "Ansys Fluent" and "Ansys HFSS" as examples, but does not specify any software with version numbers used in its implementation or experiments.
Experiment Setup | Yes | Using the pre-trained diffusion model, guidance is applied for the second half of the denoising process, with 30 samples evaluated per denoising step for gradient estimation. The entire denoising process consists of 100 steps, and the input design representation has a spatial resolution of 64 × 64. ... Furthermore, increasing the gradient scaling factor α clearly biases the denoising process towards designs with even lower Δp, demonstrating the potential for generating designs which better satisfy one's criteria. ... Notably, when α = 5, the outlet area is observed to be widened, causing a reduction in Δp compared to the baseline design.
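The reported setup (black-box guidance injected into the second half of a 100-step denoising run, with 30 fitness evaluations per guided step and a gradient scaling factor α) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `denoise_step`, `fitness_fn`, the perturbation scale `sigma`, and the evolution-strategies-style estimator are assumptions standing in for the pre-trained diffusion model, the black-box multiphysics solver, and the actual update rule of Algorithm 1.

```python
import numpy as np

def estimate_blackbox_gradient(x, fitness_fn, n_samples=30, sigma=0.1, rng=None):
    """Evolution-strategies-style gradient estimate of a black-box fitness.

    Hypothetical stand-in for the paper's gradient estimation: perturb the
    current sample, score each perturbation with the (non-differentiable)
    fitness function, and average the score-weighted perturbations.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((n_samples,) + x.shape)
    scores = np.array([fitness_fn(x + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # normalize
    weights = scores.reshape((-1,) + (1,) * x.ndim)            # broadcast over x
    return (weights * eps).mean(axis=0) / sigma

def guided_denoise(x_T, denoise_step, fitness_fn, T=100, alpha=5.0):
    """Run T denoising steps; apply black-box guidance in the second half."""
    x = x_T
    for t in reversed(range(T)):
        x = denoise_step(x, t)                  # pre-trained diffusion update
        if t < T // 2:                          # guidance in the second half
            g = estimate_blackbox_gradient(x, fitness_fn)
            x = x + alpha * g                   # steer toward higher fitness
    return x
```

For the fluidic topology case, `x` would be the 64 × 64 design field and `fitness_fn` would wrap a solver call (e.g., returning the negative pressure drop), so that larger α biases samples toward lower Δp at the cost of more solver evaluations per guided step.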