Optimization Over a Probability Simplex
Authors: James Chok, Geoffrey M. Vasil
JMLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments of projection onto convex hulls show faster convergence than similar algorithms. Finally, we apply our algorithm to online learning problems and prove the convergence of the average regret for (1) Prediction with expert advice and (2) Universal Portfolios. |
| Researcher Affiliation | Academia | James Chok (1,2), Geoffrey M. Vasil (1). 1: School of Mathematics and Maxwell Institute for Mathematical Sciences, The University of Edinburgh, Edinburgh EH9 3FD, United Kingdom. 2: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, United Kingdom |
| Pseudocode | Yes | The pseudo-code of our method can be seen in Algorithm 1. |
| Open Source Code | Yes | The code can be found on GitHub. ... Experiments were written in Python, run on Google Colab, and can be found on GitHub. ... Experiments were written in Python and can be found on GitHub. |
| Open Datasets | Yes | We look at the performance of our algorithm on four standard datasets used to study the performance of universal portfolios: (1) NYSE is a collection of 36 stocks traded on the New York Stock Exchange from July 3, 1962, to Dec 31, 1984; (2) DJIA is a collection of 30 stocks tracked by the Dow Jones Industrial Average from Jan 14, 2001, to Jan 14, 2003; (3) SP500 is a collection of the 25 largest market-cap stocks tracked by the Standard & Poor's 500 Index from Jan 2, 1998, to Jan 31, 2003; and (4) TSE is a collection of 88 stocks traded on the Toronto Stock Exchange from Jan 4, 1994, to Dec 31, 1998. The datasets were originally found at http://www.cs.technion.ac.il/~rani/portfolios/, which is now unavailable; they were retrieved using the Wayback Machine at https://web.archive.org/web/20220111131743/http://www.cs.technion.ac.il/~rani/portfolios/. |
| Dataset Splits | No | The paper describes the generation of data or partitions for calculations (e.g., Riemann approximation), but does not specify explicit training/test/validation splits for machine learning experiments. For example, for the Universal Portfolio, it mentions 'trading period, T' but not how the data is split for model evaluation. |
| Hardware Specification | No | Experiments were written in Python and run on Google Colab (Sections 6.1 and 6.2). This mentions a computing environment but does not specify particular hardware components such as GPU or CPU models. |
| Software Dependencies | No | Experiments were written in Python (Sections 6.1 and 6.2). The paper mentions the programming language but does not specify the version of Python or any other key software libraries with their version numbers. |
| Experiment Setup | Yes | Experimental Details: We look at a convex hull sampled from the unit hypercube [0, 1]^d for d ∈ {10, 15, ..., 50}. ... These algorithms are run until a solution ŷ is found such that ∥ŷ − y_true∥ ≤ 1e−5, or 10,000 iterations have been made. ... The learning rate for EGD, PFW, and CS is found through a line search. ... EGD implements a back-tracking line search with Armijo conditions... (Section 6.1) Experiment Details: ... µ(x) is chosen as a unit normal distribution truncated to [0, 1], with smoothing parameter ε = 0.05. Similarly, f is a normal distribution with mean 0.5 and variance 0.1, truncated to [0, 1]. We take the partition {k/400}_{0 ≤ k ≤ M} for the Riemann approximation (16). The algorithms CS, EGD, and PFW are run for 150 iterations. ... (Section 6.2) ... η_CS = a / (2 log N + a²/T) and η_EGD = 2a √(2 log(N)/T) ... R_f = 4%. (Section 6.4) |
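The stopping rule quoted in Section 6.1 (iterate until ∥ŷ − y_true∥ ≤ 1e−5, or a cap of 10,000 iterations) can be sketched as a small harness. A minimal sketch, assuming NumPy; the `step` rule below is a hypothetical stand-in for illustration only, not the paper's CS, EGD, or PFW update:

```python
import numpy as np

def run_until_converged(step, y0, y_true, tol=1e-5, max_iter=10_000):
    """Iterate `step` until ||y_hat - y_true|| <= tol or max_iter is hit,
    mirroring the stopping rule quoted from Section 6.1."""
    y = y0
    for it in range(1, max_iter + 1):
        y = step(y)
        if np.linalg.norm(y - y_true) <= tol:
            return y, it
    return y, max_iter

# Toy problem (hypothetical): the target is the uniform point of the
# probability simplex, and the step is a simple contraction toward it.
d = 10
y_true = np.full(d, 1.0 / d)

def step(y):
    y = 0.9 * y + 0.1 * y_true          # contract toward the target
    return y / y.sum()                   # stay on the simplex

y0 = np.random.default_rng(0).dirichlet(np.ones(d))  # random start on simplex
y_hat, iters = run_until_converged(step, y0, y_true)
```

Because the stand-in step contracts the distance to `y_true` by a factor of 0.9 per iteration, the harness terminates well inside the 10,000-iteration cap; the paper's experiments plug the actual algorithm updates into the same kind of loop.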