Global Optimization Algorithm through High-Resolution Sampling

Authors: Daniel Cortild, Claire Delplancke, Nadia Oudjane, Juan Peypouquet

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "While the focus of the paper is primarily theoretical, we demonstrate the effectiveness of our algorithm on the Rastrigin function, where it outperforms recent approaches." "In Section 5, we illustrate our results numerically on the Rastrigin function, where we improve on current methods."
Researcher Affiliation | Collaboration | Daniel Cortild (EMAIL): University of Groningen, Netherlands; Laboratoire de Finance des Marchés de l'Energie, Dauphine, CREST, EDF R&D, France. Claire Delplancke (EMAIL): EDF R&D, Palaiseau, France. Nadia Oudjane (EMAIL): Laboratoire de Finance des Marchés de l'Energie, Dauphine, CREST, EDF R&D, France. Juan Peypouquet (EMAIL): University of Groningen, Netherlands.
Pseudocode | Yes | "Algorithm 1: Global Optimization Algorithm"; "Algorithm 2: High-Resolution Sampling Algorithm"
Open Source Code | Yes | "All experiments have been performed in Python 3.8. The code is available on the author's GitHub page." https://github.com/DanielCortild/GlobalOptimization
Open Datasets | Yes | "We illustrate the convergence of our algorithm on the Rastrigin function, a classical example of a highly multimodal function with regularly distributed local minima. Let U : R^d → R be given by U(x) = d + Σ_{i=1}^d (x_i^2 − cos(2πx_i)), which is minimized at x* = (0, . . . , 0) ∈ R^d, with objective value U* = U(x*) = 0."
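The quoted objective is easy to sketch directly. Below is a minimal NumPy implementation of the Rastrigin-type function as written in the paper, U(x) = d + Σ (x_i² − cos(2πx_i)); note this is the variant quoted above, not the classical form with amplitude A = 10.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin-type objective U(x) = d + sum_i (x_i^2 - cos(2*pi*x_i)),
    as quoted from the paper. Global minimum at the origin, U(0) = 0."""
    x = np.asarray(x, dtype=float)
    d = x.shape[-1]
    return d + np.sum(x**2 - np.cos(2 * np.pi * x), axis=-1)
```

At the origin each summand equals −1, so the sum cancels the leading d and `rastrigin(np.zeros(d))` returns 0, matching U* = 0 in the quote; away from the origin the cosine term creates the regularly spaced local minima the paper mentions.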
Dataset Splits | No | The paper uses the Rastrigin function, which is a synthetic mathematical function, not a dataset requiring explicit splits. The experimental setup involves simulating runs with an initial distribution µ0 = N(3·1_d, 10·I_{d×d}), but this is not a dataset split.
Hardware Specification | No | The paper mentions "the Hábrók high performance computing cluster" in the acknowledgements but does not provide specific hardware details such as CPU or GPU models, or memory specifications.
Software Dependencies | Yes | "All experiments have been performed in Python 3.8."
Experiment Setup | Yes | "Given a > 0, we fix α = 1, β = 1, b = 10, γ = a/10, σ_x² = 1/a and σ_y² = 0.1. The remaining parameters, namely the number of samples N, the number of iterations K and the step-size h, will vary with the experiments. The number of runs over which we compute empirical probabilities is denoted by M." "In each run, a step-size h = 0.01, a sample number N = 10 and a maximal number of iterations K = 14000 have been chosen. The initial distribution is set to µ0 = N(3·1_d, 10·I_{d×d})."
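The quoted setup can be sketched as a configuration plus a draw of the N initial samples from µ0 = N(3·1_d, 10·I_{d×d}). This is only a sketch of the reported parameter choices, not the algorithm itself (Algorithms 1 and 2 are in the paper); the variable names, the choice a = 1, the dimension d = 2, and the RNG seed are illustrative assumptions.

```python
import numpy as np

# Parameter choices quoted from the paper (a > 0 is free; a = 1 is our assumption).
a = 1.0
alpha, beta, b = 1.0, 1.0, 10.0
gamma = a / 10
sigma_x2 = 1.0 / a   # variance sigma_x^2 = 1/a
sigma_y2 = 0.1       # variance sigma_y^2
h, N, K = 0.01, 10, 14000  # step-size, sample number, maximal iterations

d = 2  # problem dimension (illustrative; the paper varies it)
rng = np.random.default_rng(0)

# Initial distribution mu_0 = N(3 * 1_d, 10 * I_{dxd}): since the covariance is
# a scalar multiple of the identity, coordinates are i.i.d. N(3, 10).
samples = rng.normal(loc=3.0, scale=np.sqrt(10.0), size=(N, d))
```

Each of the M runs mentioned in the quote would redraw `samples` and then iterate the algorithm up to K steps with step-size h.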