Speeding Up the NSGA-II with a Simple Tie-Breaking Rule

Authors: Benjamin Doerr, Tudor Ivan, Martin S. Krejca

AAAI 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We prove the effectiveness of our tie-breaking rule via mathematical runtime analyses on the classic ONEMINMAX, LEADINGONESTRAILINGZEROS, and ONEJUMPZEROJUMP benchmarks. We support this claim empirically, showing that this speed-up is already noticeable when the chosen and optimal population size deviate only by a constant factor. Our experiments show that this effect can already be very clear when choosing slightly sub-optimal population sizes."
Researcher Affiliation | Academia | ¹Laboratoire d'Informatique (LIX), CNRS, École Polytechnique, Institut Polytechnique de Paris; ²École Polytechnique, Institut Polytechnique de Paris. {first-name.last-name}@polytechnique.edu
Pseudocode | Yes | "Algorithm 1: The (classic) non-dominated sorting genetic algorithm II (NSGA-II) with population size N ≥ 1, optimizing an m-objective function."
Open Source Code | Yes | "Our code is publicly available (Doerr, Ivan, and Krejca 2024b)." Doerr, B.; Ivan, T.; and Krejca, M. S. 2024b. Speeding Up the NSGA-II With a Simple Tie-Breaking Rule (Code). Zenodo. https://doi.org/10.5281/zenodo.14501034.
Open Datasets | No | The paper uses theoretical benchmarks (ONEMINMAX, LEADINGONESTRAILINGZEROS, and ONEJUMPZEROJUMP) for mathematical runtime analysis and empirical evaluation. These are problem definitions rather than concrete datasets requiring access information. The paper does not provide links, DOIs, or citations for publicly available data files.
Dataset Splits | No | The paper uses theoretical benchmarks (ONEMINMAX, LEADINGONESTRAILINGZEROS, and ONEJUMPZEROJUMP) for its analysis and empirical evaluation. These benchmarks are problem definitions and do not involve traditional dataset splits like training, validation, or test sets. The experiments are runtime analyses on these problem definitions with varying parameters.
Hardware Specification | No | The paper mentions running empirical experiments but does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for these experiments.
Software Dependencies | No | The paper states that its code is publicly available and cites a Zenodo entry. However, the main text of the paper does not explicitly list any specific software dependencies or their version numbers (e.g., Python, PyTorch, or other libraries).
Experiment Setup | Yes | "We run the classic and the balanced NSGA-II on OMM, for problem sizes n ∈ 10 · [3..12] and four population sizes N. For each combination of n and N per algorithm, we start 50 independent runs and log the number of function evaluations until the Pareto front is covered for the first time. Let M = n + 1 denote the size of the Pareto front. We consider the choices N ∈ {2M, 4M, 8M, 16M}, noting that our theoretical result holds for all N ≥ 4M (Corollary 9)."
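To make the experiment-setup row concrete, the protocol (run an NSGA-II-style loop on OneMinMax and count function evaluations until the Pareto front of size M = n + 1 is first covered) can be sketched in Python. This is a minimal, mutation-only illustration, not the authors' released implementation (see the Zenodo record for that) and not their balanced tie-breaking rule; all function names and the standard-bit-mutation choice are assumptions made here for illustration.

```python
import random


def one_min_max(x):
    """OneMinMax: maximize (number of zeros, number of ones).

    Every ones-count k in 0..n is Pareto-optimal, so the front has n + 1 points.
    """
    ones = sum(x)
    return (len(x) - ones, ones)


def lotz(x):
    """LeadingOnesTrailingZeros: maximize (leading ones, trailing zeros)."""
    n = len(x)
    lo = next((i for i, b in enumerate(x) if b == 0), n)
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), n)
    return (lo, tz)


def dominates(a, b):
    """Pareto dominance for maximization: no worse everywhere, better somewhere."""
    return all(u >= v for u, v in zip(a, b)) and any(u > v for u, v in zip(a, b))


def nondominated_fronts(values):
    """Plain O(N^2) non-dominated sorting; returns fronts as lists of indices."""
    remaining = set(range(len(values)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(values[j], values[i]) for j in remaining)]
        fronts.append(front)
        remaining.difference_update(front)
    return fronts


def crowding_distance(front, values):
    """Standard crowding distance within one front (boundary points get infinity)."""
    dist = dict.fromkeys(front, 0.0)
    for k in range(len(values[front[0]])):
        order = sorted(front, key=lambda i: values[i][k])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = values[order[-1]][k] - values[order[0]][k] or 1.0
        for prev, mid, nxt in zip(order, order[1:], order[2:]):
            if dist[mid] != float("inf"):
                dist[mid] += (values[nxt][k] - values[prev][k]) / span
    return dist


def run_until_front_covered(n, N, rng, f=one_min_max):
    """Simplified, mutation-only NSGA-II loop on OneMinMax.

    Returns the number of function evaluations until the Pareto front
    {(n - k, k) : 0 <= k <= n} is covered for the first time.
    """
    target = {(n - k, k) for k in range(n + 1)}
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(N)]
    evals = N
    while not target <= {f(x) for x in pop}:
        # Offspring via standard bit mutation: flip each bit with probability 1/n.
        offspring = [[b ^ (rng.random() < 1.0 / n) for b in x] for x in pop]
        evals += N
        combined = pop + offspring
        vals = [f(x) for x in combined]
        survivors = []
        for front in nondominated_fronts(vals):
            if len(survivors) + len(front) <= N:
                survivors += front
            else:
                # Partial front: the classic NSGA-II keeps the largest crowding
                # distances, breaking ties arbitrarily; the paper's balanced
                # variant refines exactly this tie-breaking step.
                dist = crowding_distance(front, vals)
                keep = sorted(front, key=dist.get, reverse=True)
                survivors += keep[:N - len(survivors)]
                break
        pop = [combined[i] for i in survivors]
    return evals
```

Mirroring the reported protocol, one would then loop over the problem sizes, the population-size choices N ∈ {2M, 4M, 8M, 16M} with M = n + 1, and 50 seeded runs per combination, e.g. `[run_until_front_covered(n, N, random.Random(s)) for s in range(50)]`.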