Few for Many: Tchebycheff Set Scalarization for Many-Objective Optimization
Authors: Xi Lin, Yilu Liu, Xiaoyuan Zhang, Fei Liu, Zhenkun Wang, Qingfu Zhang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental studies on different problems with many optimization objectives demonstrate the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | Xi Lin¹, Yilu Liu¹, Xiaoyuan Zhang¹, Fei Liu¹, Zhenkun Wang², Qingfu Zhang¹; ¹City University of Hong Kong, ²Southern University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 STCH-Set Scalarization for Multi-Objective Optimization |
| Open Source Code | Yes | 1Our source code is available at: https://github.com/Xi-L/STCH-Set |
| Open Datasets | Yes | In this experiment, we follow the same setting for the CelebA with 9 tasks in Gao et al. (2024). |
| Dataset Splits | No | The paper does not explicitly state training/validation/test splits. It describes how some datasets are generated (e.g., 'randomly generate m independent convex quadratic functions', 'randomly sample K i.i.d. ground truth') and, for others, defers to prior work for settings ('following the same setting for the CelebA with 9 tasks in Gao et al. (2024)') without detailing the splits in the provided text. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. It mentions the 'Adam optimizer' but not the software framework or its version. |
| Experiment Setup | Yes | Following the setting in Gao et al. (2024), for all methods, we use a ResNet variant as the network backbone and the cross-entropy loss for all tasks (the same setting as in Fifty et al. (2021)). All models are trained by the Adam optimizer with an initial learning rate of 0.0008 and plateau learning rate decay for 100 epochs. |
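The Pseudocode row above points to Algorithm 1, STCH-Set Scalarization. As a rough illustration only, the sketch below computes a smooth Tchebycheff set objective in the style of the smooth-Tchebycheff literature: for each objective, take a soft-min of the weighted gap over the K candidate solutions, then take a soft-max over objectives. The function names, the exact smoothing form, and the parameter `mu` are assumptions for illustration, not copied from the paper; consult the released code at https://github.com/Xi-L/STCH-Set for the actual formulation.

```python
import numpy as np

def logsumexp(x, axis=None):
    """Numerically stable log-sum-exp (max-shifted)."""
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True)), axis=axis)

def stch_set_loss(F, weights, z, mu=0.1):
    """Hypothetical smooth Tchebycheff set scalarization.

    F       : (K, m) objective values of K candidate solutions on m objectives.
    weights : (m,) positive preference weights.
    z       : (m,) ideal/reference point with z_i <= min achievable f_i.
    mu      : smoothing parameter; smaller mu approaches the exact max/min.
    """
    A = weights * (F - z)                        # (K, m) weighted objective gaps
    # Soft-min over the K solutions for each objective (negated log-sum-exp).
    soft_min = -mu * logsumexp(-A / mu, axis=0)  # (m,)
    # Soft-max over the m objectives (log-sum-exp).
    return float(mu * logsumexp(soft_min / mu))
```

With a small `mu`, the value approaches the exact max-over-objectives of the min-over-solutions weighted gap, so optimizing the set drives every objective to be well covered by at least one solution.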
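The Experiment Setup row specifies the Adam optimizer with initial learning rate 0.0008 and plateau learning-rate decay for 100 epochs. In PyTorch this would typically be `torch.optim.Adam` combined with `torch.optim.lr_scheduler.ReduceLROnPlateau`; the minimal pure-Python sketch below mirrors that decay rule so the schedule is concrete. The `factor` and `patience` values are assumptions, since the table row does not state them.

```python
class PlateauDecay:
    """Minimal plateau learning-rate decay (sketch; factor/patience are assumed)."""

    def __init__(self, lr=0.0008, factor=0.5, patience=5):
        self.lr = lr                  # initial learning rate from the reported setup
        self.factor = factor          # multiplicative decay applied on a plateau
        self.patience = patience      # epochs without improvement before decaying
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Call once per epoch with the monitored loss; returns the current lr."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```

In an actual training loop, the returned `lr` would be written back into the optimizer's parameter groups each epoch, which is exactly what `ReduceLROnPlateau.step(val_loss)` automates.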