PAPAL: A Provable PArticle-based Primal-Dual ALgorithm for Mixed Nash Equilibrium

Authors: Shihong Ding, Hanze Dong, Cong Fang, Zhouchen Lin, Tong Zhang

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We offer a comprehensive convergence analysis of the proposed algorithm, demonstrating its effectiveness. In contrast to prior research that attempted to update particle importance without movements, PAPAL is the first implementable particle-based algorithm accompanied by non-asymptotic quantitative convergence results, running time, and sample complexity guarantees. Our framework contributes novel insights into particle-based algorithms for continuous min-max optimization in the general non-convex non-concave setting. ... Appendix G. Empirical Results: In this section, we illustrate the motivation of our algorithm compared to the weight-driven algorithm WFR-DA (Domingo-Enrich et al., 2020). ... Figure 1 demonstrates that weight-driven algorithms only select particles close to the solution, while particle-based algorithms move the particles to find the solution. ... Figure 2 indicates that the weight-driven algorithm (WFR-DA) suffers severely from the curse of dimensionality, while the particle-based algorithm performs well in high-dimensional spaces. The empirical results justify the importance of the particle-based algorithm and the efficacy of PAPAL in high-dimensional spaces.
Researcher Affiliation | Collaboration | Shihong Ding (EMAIL), Peking University; Hanze Dong (EMAIL), Salesforce AI Research; Cong Fang (EMAIL), Peking University; Zhouchen Lin (EMAIL), Peking University; Tong Zhang (EMAIL), University of Illinois Urbana-Champaign.
Pseudocode | Yes | Algorithm 1: PAPAL (PArticle-based Primal-dual ALgorithm for Mixed Nash Equilibrium); Algorithm 2: Metropolis-Adjusted Langevin Algorithm (MALA); Algorithm 3: Proximal Sampler with MALA.
Open Source Code | No | No explicit statement or link to open-source code for the methodology is provided within the paper.
Open Datasets | No | For simplicity, we choose the Gaussian target distribution and compute the KL divergence between the generated and target distributions.
Dataset Splits | No | The empirical evaluation uses synthetic or generic problem setups (e.g., a Gaussian target distribution) that do not involve standard training/validation/test splits.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., library names with versions) are explicitly mentioned in the paper.
Experiment Setup | Yes | We choose the step-size 10^{-3} in our experiments. For simplicity, we choose a Gaussian target distribution and compute the KL divergence between the generated and target distributions. We choose a randomly sampled mean for our target distribution and use a linear model to reproduce it.
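The Pseudocode row lists Algorithm 2, the Metropolis-Adjusted Langevin Algorithm (MALA), as a subroutine of PAPAL. The paper's exact pseudocode is not reproduced here; the following is a minimal illustrative sketch of standard MALA (Langevin proposal plus Metropolis–Hastings correction), with the step size, target, and function names chosen by us as placeholders:

```python
import numpy as np

def mala(grad_log_p, log_p, x0, step=1e-3, n_steps=1000, rng=None):
    """Standard MALA: Langevin proposal corrected by a Metropolis-Hastings step."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []

    def log_q(a, b):
        # Log density (up to a constant) of proposing `a` from `b`
        return -np.sum((a - b - step * grad_log_p(b)) ** 2) / (4.0 * step)

    for _ in range(n_steps):
        # Discretized Langevin dynamics proposal
        y = x + step * grad_log_p(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        # Asymmetric-proposal MH acceptance ratio (in log space)
        log_alpha = log_p(y) + log_q(x, y) - log_p(x) - log_q(y, x)
        if np.log(rng.uniform()) < log_alpha:
            x = y
        samples.append(x.copy())
    return np.array(samples)

# Usage: sample from a standard 2-D Gaussian target
log_p = lambda x: -0.5 * np.sum(x ** 2)
grad_log_p = lambda x: -x
chain = mala(grad_log_p, log_p, x0=np.zeros(2), step=0.1, n_steps=5000, rng=0)
```

The MH correction removes the discretization bias of the plain Langevin update, which is why MALA-based samplers come with non-asymptotic guarantees of the kind the paper establishes.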
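The Experiment Setup row reports the KL divergence between generated and target Gaussian distributions as the evaluation metric. For multivariate Gaussians this divergence has a closed form; below is a minimal sketch (the function name `kl_gaussians` is ours, not the paper's):

```python
import numpy as np

def kl_gaussians(mu1, cov1, mu2, cov2):
    """Closed-form KL(N(mu1, cov1) || N(mu2, cov2)) for multivariate Gaussians."""
    d = mu1.shape[0]
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    return 0.5 * (
        np.trace(cov2_inv @ cov1)          # trace term
        + diff @ cov2_inv @ diff           # Mahalanobis distance between means
        - d                                # dimension offset
        + np.log(np.linalg.det(cov2) / np.linalg.det(cov1))  # log-det ratio
    )

# Identical Gaussians have zero divergence
mu, cov = np.zeros(3), np.eye(3)
print(kl_gaussians(mu, cov, mu, cov))                      # → 0.0
# A unit mean shift under identity covariances gives 0.5
print(kl_gaussians(mu, cov, np.array([1.0, 0.0, 0.0]), cov))  # → 0.5
```

Because the divergence is available in closed form for Gaussian targets, convergence of the generated distribution can be tracked exactly, which is presumably why the paper adopts this synthetic setup.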