Deep Electromagnetic Structure Design Under Limited Evaluation Budgets

Authors: Shijian Zheng, Fangxiao Jin, Shuhai Zhang, Quan Xue, Mingkui Tan

ICML 2025

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate PQS on two real-world engineering tasks, i.e., Dual-layer Frequency Selective Surface and High-gain Antenna. Experimental results show that our method can achieve satisfactory designs under limited computational budgets, outperforming baseline methods. In particular, compared to generative approaches, it cuts evaluation costs by 75–85%, effectively saving 20.27–38.80 days of the product design cycle. (Section 1, Abstract) 5. Experiments We conducted experiments on real-world optimization tasks. The experiments answer two key questions: 1) How does PQS compare to state-of-the-art approaches in terms of performance and robustness? 2) How do the individual components of PQS contribute to its overall performance? (Section 5) 5.3. Ablation Studies In this section, we dissect our method's key design choices to understand their individual contributions and overall impact on performance. (Section 5.3)
Researcher Affiliation Academia 1School of Future Technologies, South China University of Technology 2Peng Cheng Laboratory 3School of Software Engineering, South China University of Technology 4Pazhou Laboratory 5School of Microelectronics, South China University of Technology. Correspondence to: Mingkui Tan <EMAIL>, Quan Xue <EMAIL>.
Pseudocode Yes Algorithm 1 General scheme of PQS for EMS. Algorithm 2 EMS Optimization with QSS.
Open Source Code No The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets No In contrast, EMS design is burdened by a larger search space (10^86 – 10^90) and lacks public datasets, pre-trained models, or augmentation methods (see Table 1).
Dataset Splits No The paper mentions generating initial datasets of a certain size (e.g., 300 samples, 6800 samples) and, for a robustness study, states "Samples were split into validation, test, and training sets", but it does not provide the specific percentages or counts for these splits that would be needed to reproduce the main experiments.
Hardware Specification No The paper does not mention any specific hardware (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies No The paper mentions using "ResNet50 as the predictor model" and "Adam optimizer" but does not provide specific version numbers for these or any other software libraries or dependencies.
Experiment Setup Yes PQS (ours). For the proposed PQS, we maintain a consistent setup across both experimental scenarios. In both cases, the initial dataset comprises 300 samples... We utilize ResNet50 as the predictor... The design variable Nmax is set to 32. Additionally... We set the maximum iteration M = 10000000 and K = 10. The total number of samples R in CSS is 10.

Compared Methods. ...The surrogate model is updated accordingly, and this process is repeated until the total number of simulated samples reaches the predefined limit of 1000. Finally, we select the best one as the optimized result. In practice, we set M = 200000 and K = 10 for our experiments on both real-world tasks.

Surrogate-GA (Zhu et al., 2020). ...The model's batch size is set to 256, trained for 200 epochs, with a learning rate of 0.01. First, 300 samples were obtained through random sampling... we set K = 10 for our experiments...

Surrogate-GW (Dong & Dong, 2020). ...the parameter a, which controls the balance between exploration and exploitation during the search process, is set to 3. ...we use the ResNet50 model as the surrogate model. The model's batch size is set to 256, trained for 200 epochs, with a learning rate of 0.01. Based on the surrogate model, we set K = 10 for our experiments...

InvGrad (Trabucco et al., 2022). ...the dataset consists of 6800 randomly sampled simulation samples for Dual FSS and 3800 for HGA... In practice, we set T = 1000, α = 0.01 for the two design tasks.

IDN (Ma et al., 2020). ...The model's batch size is set to 128, trained for 200 epochs, with a learning rate of 0.0001. The Adam optimizer is employed, and MAE (Mean Absolute Error) is used for loss computation.

cGAN (An et al., 2021). ...The model's batch size is set to 64, trained for 200 epochs, with a discriminator learning rate of 0.00005 and a generator learning rate of 0.0002. In addition, the latent dimension is set to 100, and the Adam optimizer is employed.

cVAE (Lin et al., 2022). ...The model's batch size is set to 128, trained for 200 epochs, with a learning rate of 0.0005. In addition, the latent dimension is set to 20, the Adam optimizer is employed, and the loss is the linear sum of the Mean Squared Error (MSE) and 0.00000001 times the Kullback-Leibler (KL) divergence.

GenCO (Ferber et al., 2024). ...latent dimension = 256, number of embeddings = 512, learning rate = 1e-3, and number of epochs = 100.

TS-DDEO (Zheng et al., 2023b). ...the population size of DE sampling in SHPSO and BDDO was set to 50, and the probability r of the FC strategy in the BDDO phase was 0.2. In addition, out of a total budget of 1000 simulation evaluations, we allocated 300 evaluations to the first phase and then switched to the second phase.

SAHSO (Li et al., 2022). ...we increase the duration of the first stage by setting T0 to 60% of the maximum evaluation budget. During the second stage... the number of pre-screened candidate solutions was limited to 10–15 per iteration.
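The surrogate-assisted loop quoted above (fit a predictor on simulated samples, pre-screen a large candidate pool, simulate only the top-K per round until the 1000-evaluation budget is spent) can be sketched in miniature. Everything here is a toy illustration, not the paper's implementation: `simulate` is a hypothetical 1-D objective standing in for a full-wave EM simulation, and the nearest-neighbour `surrogate_predict` merely plays the role the paper assigns to a ResNet50 predictor.

```python
import random

def simulate(x):
    """Stand-in for an expensive EM simulation (hypothetical 1-D objective)."""
    return -(x - 0.3) ** 2

def surrogate_predict(x, data):
    """Toy surrogate: average objective of the 3 nearest evaluated designs.
    The paper trains a ResNet50 predictor; this only mimics its role."""
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:3]
    return sum(y for _, y in nearest) / len(nearest)

def surrogate_assisted_search(budget=60, init=20, K=10, M=200):
    """Pre-screen M cheap candidates per round, simulate only the top-K."""
    random.seed(0)
    data = [(x, simulate(x)) for x in (random.random() for _ in range(init))]
    used = init
    while used + K <= budget:
        # draw a large candidate pool and rank it with the cheap surrogate
        cands = [random.random() for _ in range(M)]
        cands.sort(key=lambda x: surrogate_predict(x, data), reverse=True)
        for x in cands[:K]:  # spend real evaluations only on the top-K
            data.append((x, simulate(x)))
            used += 1
    return max(data, key=lambda p: p[1])  # best simulated design
```

Under this sketch the expensive `simulate` is called exactly `budget` times, while the surrogate absorbs the remaining M-per-round screening cost, which is the budget-saving mechanism the compared methods share.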
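The cVAE loss described above is stated as a linear sum of MSE and 0.00000001 times the KL divergence. A minimal scalar version of that weighted sum, assuming (as is standard for VAEs, though not spelled out in the excerpt) a diagonal-Gaussian posterior with the closed-form KL against a unit Gaussian:

```python
import math

def cvae_loss(x, x_recon, mu, logvar, kl_weight=1e-8):
    """Weighted cVAE objective: MSE reconstruction + kl_weight * KL.
    Assumes a diagonal-Gaussian posterior N(mu, exp(logvar)) per latent dim."""
    mse = sum((a - b) ** 2 for a, b in zip(x, x_recon)) / len(x)
    # KL( N(mu, sigma^2) || N(0, I) ) summed over latent dimensions
    kl = -0.5 * sum(1 + lv - m ** 2 - math.exp(lv)
                    for m, lv in zip(mu, logvar))
    return mse + kl_weight * kl
```

With `kl_weight=1e-8` as quoted, the KL term is a very light regularizer: a perfect reconstruction with a standard-normal posterior (`mu=0`, `logvar=0`) gives a loss of exactly zero.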