Learning Configurations for Data-Driven Multi-Objective Optimization
Authors: Zhiyang Chen, Hailong Yao, Xia Yin
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we perform empirical verifications of our theoretical results. |
| Researcher Affiliation | Academia | 1Tsinghua University, China 2University of Science and Technology Beijing, China 3Key Laboratory of Advanced Materials and Devices for Post-Moore Chips, Ministry of Education of China. |
| Pseudocode | No | The paper describes several algorithms (SALT, Goemans-Ravi, local search, simulated annealing) but does not present them in structured pseudocode or algorithm blocks. Their descriptions are integrated into the main text. |
| Open Source Code | No | The paper mentions using the 'open-source code of SALT (Chen & Young, 2020)', but this refers to a third-party tool used in the experiments, not code released for the paper's own methodology. There is no statement or link providing access to source code for the methodology described in this paper. |
| Open Datasets | Yes | We conduct experiments on shallow-light Steiner tree benchmarks from real-world VLSI designs. ... The open-source code of SALT (Chen & Young, 2020) and the ICCAD-15 benchmark (Kim & Hu, 2015) for timing-driven VLSI placement are used. |
| Dataset Splits | Yes | We randomly select 80% nets as the training set and the rest as the test set. ... We synthesize 100 instances for each problem size (since finding all pieces is time-consuming), with 50 being the training set, and the rest being the test set. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiments. It mentions using a 'VLSI placer' but gives no version information. |
| Experiment Setup | Yes | We set the discount factor γ = 0.9, and find the optimal policy by optimizing a weighted sum of the reward and the penalty. ... In our experiments, we fix the random seed of the placer to make the placement result deterministic. |