The inexact power augmented Lagrangian method for constrained nonconvex optimization

Authors: Alexander Bodard, Konstantinos Oikonomidis, Emanuel Laude, Panagiotis Patrinos

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, numerical experiments validate the practical performance of unconventional augmenting terms."
Researcher Affiliation | Collaboration | Alexander Bodard (EMAIL), ESAT-STADIUS & Leuven.AI, KU Leuven; Konstantinos Oikonomidis (EMAIL), ESAT-STADIUS & Leuven.AI, KU Leuven; Emanuel Laude (EMAIL), Proxima Fusion GmbH; Panagiotis Patrinos (EMAIL), ESAT-STADIUS & Leuven.AI, KU Leuven
Pseudocode | Yes | "Algorithm 1: Inexact power augmented Lagrangian method"; "Algorithm 2: Inexact proximal point method for (13)"
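The row above refers to the paper's Algorithm 1, an inexact power augmented Lagrangian method. A minimal sketch of the general idea follows, assuming an equality-constrained problem min f(x) s.t. c(x) = 0 and a power-q augmenting term (sigma/q)*||c(x)||^q, so that q = 2 recovers the classical quadratic ALM. All names are illustrative, and the plain gradient-descent inner loop is a stand-in for the paper's UPFAG subproblem solver, not a reproduction of it.

```python
import numpy as np

def power_alm(f_grad, c, c_jac, x0, q=2.0, sigma=10.0,
              outer_iters=50, inner_iters=500, lr=1e-2, tol=1e-8):
    """Sketch of an inexact power ALM outer loop (illustrative, not the
    paper's algorithm). Augmenting term: (sigma/q)*||c(x)||^q, whose
    gradient in c is sigma*||c||^(q-2)*c; the dual step mirrors it."""
    x = np.asarray(x0, dtype=float)
    y = np.zeros_like(np.asarray(c(x), dtype=float))
    for _ in range(outer_iters):
        # Inexact inner minimization of the power augmented Lagrangian in x.
        for _ in range(inner_iters):
            cx = c(x)
            nrm = np.linalg.norm(cx)
            pen = sigma * nrm**(q - 2.0) * cx if nrm > 0 else 0.0 * cx
            g = f_grad(x) + c_jac(x).T @ (y + pen)
            x -= lr * g
        cx = c(x)
        nrm = np.linalg.norm(cx)
        if nrm < tol:
            break
        # Power dual update; reduces to y += sigma*c(x) when q = 2.
        y = y + sigma * nrm**(q - 2.0) * cx
    return x, y

# Toy problem: min (x1-1)^2 + (x2-1)^2  s.t.  x1 + x2 = 1  ->  x* = (0.5, 0.5)
f_grad = lambda x: 2.0 * (x - 1.0)
c = lambda x: np.array([x[0] + x[1] - 1.0])
c_jac = lambda x: np.array([[1.0, 1.0]])
x, y = power_alm(f_grad, c, c_jac, x0=[0.0, 0.0])
```

With q = 2 this is the textbook augmented Lagrangian iteration on the toy problem; lower powers q change how aggressively constraint violation is penalized, which is the design axis the paper studies.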
Open Source Code | Yes | "All experiments are run in Julia on an HP Elite Book with 16 cores and 32 GB memory, and the source code is publicly available." (https://github.com/alexanderbodard/tmlr_nonconvex_power_alm)
Open Datasets | Yes | "We test Algorithm 1 on two problem instances, being the MNIST dataset Deng (2012) and the Fashion MNIST dataset Xiao et al. (2017)."
Dataset Splits | Yes | "The setup is similar to that of Sahin et al. (2019), which is in turn based on Mixon et al. (2016). In particular, a simple two-layer neural network was used to first extract features from the data, and then this neural network was applied to n = 1000 random test samples from the dataset, yielding the vectors {z_i}_{i=1}^{n=1000} that generate the distance matrix D."
Hardware Specification | Yes | "All experiments are run in Julia on an HP Elite Book with 16 cores and 32 GB memory"
Software Dependencies | No | "All experiments are run in Julia on an HP Elite Book with 16 cores and 32 GB memory, and the source code is publicly available."
Experiment Setup | Yes | "We define s = 10, r = 20, tune σ1 = 10, λ = 10^-3, β1 = 5, ω = 1.1, and impose a maximum of N = 1500 UPFAG iterations per subproblem. ... We use tolerances εφ = εA = 10^-3."
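The reported settings can be collected in one place. A hypothetical configuration mapping is sketched below; the key names are illustrative (the roles of s, r, σ1, λ, β1, and ω are not restated here to avoid guessing beyond the quote), and only the numeric values come from the paper's quoted setup.

```python
# Values quoted in the Experiment Setup row; key names are illustrative,
# not identifiers from the paper's code release.
config = {
    "s": 10,
    "r": 20,
    "sigma1": 10.0,
    "lambda": 1e-3,
    "beta1": 5.0,
    "omega": 1.1,
    "N_max_upfag": 1500,   # maximum UPFAG iterations per subproblem
    "eps_phi": 1e-3,       # tolerance ε_φ
    "eps_A": 1e-3,         # tolerance ε_A
}
```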