A Descent-based Method on the Duality Gap for Solving Zero-sum Games
Authors: Michail Fasoulakis, Evangelos Markakis, Georgios Roussakis, Christodoulos Santorinaios
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we complement this with an experimental evaluation, which provides promising findings. Our algorithm is comparable with (and in some cases outperforms) some of the standard approaches for solving 0-sum games, such as OGDA (Optimistic Gradient Descent/Ascent), even with thousands of available strategies per player. |
| Researcher Affiliation | Collaboration | 1Royal Holloway, University of London, UK 2Athens University of Economics and Business, Greece 3Archimedes, Athena Research Center, Greece 4Foundation for Research and Technology Hellas, Greece 5Input Output Global (IOG), Greece |
| Pseudocode | Yes | Algorithm 1 The gradient descent-based algorithm Algorithm 2 Find Direction(x, y, ρ) Algorithm 3 Decaying Delta Speedup |
| Open Source Code | No | The paper refers to a full version on arXiv for further details but does not provide an explicit statement about releasing code or a link to a code repository for the methodology described. For example, it states: "We refer to our full version in [Fasoulakis et al., 2025] for justification". |
| Open Datasets | No | The paper states: "To test our algorithms we generated random games of size n × n, where each entry is picked uniformly at random from [0, 1]." and "For each value of n that we used, we generated 50 uniformly random games and 50 games using the Gaussian distribution." The authors generated their own synthetic datasets rather than using publicly available ones, and do not provide access to their generated data. |
| Dataset Splits | No | The paper describes generating games for evaluation, stating: "For each size we generate 30 games and solve them to an accuracy of δ = 0.01." and "For each value of n that we used, we generated 50 uniformly random games and 50 games using the Gaussian distribution." This indicates generation of problem instances rather than splitting a fixed dataset into training, validation, and test sets. No specific dataset splits are provided. |
| Hardware Specification | Yes | All our algorithms were implemented in Python 3.10.9, and were run on a MacBook M1 Pro (10-core) with 16GB RAM. |
| Software Dependencies | No | The paper mentions: "All our algorithms were implemented in Python 3.10.9..." and "...we used the standard method of SciPy." While Python 3.10.9 is specified, the version of SciPy is not provided, and other key libraries or frameworks are not listed with version numbers, making the software dependencies description incomplete for reproducibility. |
| Experiment Setup | Yes | More specifically, once V 0.1 we start with ε = 0.2 and decrease it by 10% across iterations. We used k = 100 for our experiments. To maintain an equal comparison with our algorithms, we used a tolerance of 0.01. As for first-order methods, we compared against the last-iterate performance of Optimistic Gradient Descent Ascent (OGDA), which is among the fastest gradient-based methods, with step size η = 0.01. For each size we generate 30 games and solve them to an accuracy of δ = 0.01. We used two types of initialization in all methods: the fully uniform strategy profile and the profile (e1, e1), i.e., first row, first column. |
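The experimental setup above can be sketched concretely. The following snippet (a minimal illustration, not the authors' code, which is not released) generates a uniform random game as described in the Open Datasets row and evaluates the duality gap, the quantity the paper's method descends on; `duality_gap` and `random_game` are hypothetical helper names.

```python
import numpy as np

def duality_gap(A, x, y):
    """Duality gap of the profile (x, y) in the zero-sum game with payoff
    matrix A: max_i (A y)_i - min_j (x^T A)_j. It is nonnegative and equals
    zero exactly at a Nash equilibrium."""
    return float(np.max(A @ y) - np.min(x @ A))

def random_game(n, seed=None):
    """n x n payoff matrix with entries drawn uniformly from [0, 1],
    matching the paper's random-game generation."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(n, n))

n = 100
A = random_game(n, seed=0)
# Fully uniform strategy profile, one of the two initializations used.
x = np.full(n, 1.0 / n)
y = np.full(n, 1.0 / n)
gap = duality_gap(A, x, y)
assert gap >= 0.0  # solving to accuracy δ = 0.01 means driving this below 0.01
```

The gap is nonnegative because the best-response payoff against `y` upper-bounds the bilinear value `x @ A @ y`, which in turn upper-bounds the worst column payoff of `x`.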