A Parameter-Free Conditional Gradient Method for Composite Minimization under Hölder Condition

Authors: Masaru Ito, Zhaosong Lu, Chuan He

JMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Preliminary experiments are also conducted and illustrate the superior performance of the parameter-free conditional gradient method over the methods with some other step size rules." Keywords: conditional gradient method, Hölder continuity, uniform convexity, adaptive line search, iteration complexity.
Researcher Affiliation: Academia. Masaru Ito, Department of Mathematics, Nihon University, Tokyo 101-8308, Japan; Zhaosong Lu, Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Chuan He, Department of Industrial and Systems Engineering, University of Minnesota, Minneapolis, MN 55455, USA.
Pseudocode: Yes. "Algorithm 1: A parameter-dependent conditional gradient method ... Algorithm 2: A parameter-free conditional gradient method."
Open Source Code: No. The paper does not state that source code for the described methodology is released, nor does it link to a code repository.
Open Datasets: No. "The instances of problem (23) are generated as follows. In particular, we generate matrix A by letting A = U D U^T ... The instances of problem (24) are generated as follows. In particular, we generate matrix A with ‖A‖_2 ≤ 100 by letting A = V D U^T ... The data matrix X ∈ R^(n×m) for problem (26) is generated as follows. In particular, we first randomly generate U ∈ R^(n×k) ... Finally, we set X = U V + E..."
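The quoted construction A = U D U^T (a random orthogonal factor times a prescribed diagonal) can be sketched as below. The log-spaced spectrum, the condition-number parameter, and the function name are illustrative assumptions, since the excerpt does not give the paper's exact choices.

```python
import numpy as np

def make_spd_instance(n, cond=1e3, seed=0):
    """Sketch of generating A = U D U^T with a random orthogonal U.
    The spectrum in D (log-spaced eigenvalues from 1 up to `cond`)
    is an assumed choice, not the paper's exact recipe."""
    rng = np.random.default_rng(seed)
    # Random orthogonal factor via QR decomposition of a Gaussian matrix.
    U, _ = np.linalg.qr(rng.standard_normal((n, n)))
    d = np.logspace(0, np.log10(cond), n)  # assumed eigenvalue spectrum
    return U @ np.diag(d) @ U.T
```

By construction the result is symmetric positive definite with eigenvalues exactly the entries of D, which makes the instance's conditioning controllable.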
Dataset Splits: No. The paper generates multiple random instances of each problem type and tests the methods on them; it does not define training/validation/test splits.
Hardware Specification: Yes. "Our experiments are conducted in Matlab on an Apple desktop with the 3.0GHz Intel Xeon E5-1680v2 processor and 64GB of RAM."
Software Dependencies: No. "Our experiments are conducted in Matlab on an Apple desktop..." (no Matlab version number is provided).
Experiment Setup: Yes. "We set b = A x for some x generated from a uniform distribution over {x ∈ R^n : ‖x‖_q = 10}. In this experiment, we consider p ∈ {1.3, 1.6, 2}, q ∈ {1.5, 2, 3} and m = n ∈ {1000, 5000}. For each choice of (p, q, n), we randomly generate 10 instances of problem (23) by the procedure mentioned above, and apply the aforementioned three conditional gradient methods to solve them, starting with the initial point x_0 = 0 and terminating them once the criterion δ_t/δ_0 ≤ 10^-6 is met... In this experiment, we set λ = 0.01, α = 2, and consider m = n ∈ {100, 200, 300, 400, 500} and k ∈ {5, 10}. ...with the initial point U_0 and V_0 being the matrices of all entries equal to 1 and 1/k, respectively, and terminate the methods once the criterion δ_t/δ_0 ≤ 10^-5 is met."
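Two pieces of the quoted setup can be sketched concretely: drawing a point with prescribed ℓ_q-norm for the right-hand side b = A x, and the relative-gap stopping rule δ_t/δ_0 ≤ tol. Normalizing a Gaussian sample by its q-norm is a simple surrogate (the excerpt does not specify the paper's sampling scheme), and `step` below is a hypothetical callback standing in for one iteration of a conditional gradient method.

```python
import numpy as np

def point_on_lq_sphere(n, q, radius=10.0, seed=0):
    """Draw x with ||x||_q = radius by rescaling a Gaussian sample.
    This is an assumed surrogate for the paper's (unspecified)
    uniform sampling over {x : ||x||_q = radius}."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    return radius * x / np.linalg.norm(x, ord=q)

def run_until_tol(step, delta0, tol=1e-6, max_iter=100000):
    """Relative-gap stopping rule delta_t / delta_0 <= tol, as quoted.
    `step(t)` is a hypothetical callback returning the gap delta_t
    after iteration t of the method being tested."""
    for t in range(1, max_iter + 1):
        delta_t = step(t)
        if delta_t / delta0 <= tol:
            return t, delta_t
    return max_iter, delta_t
```

For example, a method whose gap halves each iteration from δ_0 = 1 reaches the 10^-6 threshold after 20 iterations under this rule.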