Chaos persists in large-scale multi-agent learning despite adaptive learning rates
TLDR: Despite the promise of adaptive learning rates, chaos remains prevalent in large multi-agent learning scenarios, even with standard algorithms and minimal strategies.
Toolbox: Chaos Theory, Discrete Dynamical Systems, Asymptotic Calculus
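For flavor, a minimal self-contained sketch of the phenomenon (the game, costs, and step size are illustrative choices, not the paper's): multiplicative-weights updates (MWU) in a symmetric two-strategy congestion game collapse to a one-dimensional map, and for large step sizes nearby trajectories separate macroscopically, the hallmark of chaos. The paper shows such behavior survives adaptive step-size schedules.

```python
import numpy as np

# MWU in a symmetric two-strategy congestion game: a fraction x of agents
# plays strategy 1 at cost x; the rest pay cost 1 - x. The population
# update collapses to a 1-D map in x.
def mwu_step(x, eta):
    w1 = x * np.exp(-eta * x)
    w2 = (1.0 - x) * np.exp(-eta * (1.0 - x))
    return w1 / (w1 + w2)

def run(x0, eta=10.0, steps=100):
    x = x0
    for _ in range(steps):
        x = mwu_step(x, eta)
    return x

# Sensitive dependence on initial conditions: two runs starting 1e-12
# apart end up macroscopically separated.
print(abs(run(0.2) - run(0.2 + 1e-12)))
```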
Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds
TLDR: Riemannian GD can compute NE with a step size that is agnostic to the underlying curvature.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Potential Analysis
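As a toy illustration of the algorithmic template (manifold, objective, and step size below are hypothetical choices, not the paper's setting): Riemannian GD only needs a tangent-space gradient and an exponential map, and the step size references no curvature constant.

```python
import numpy as np

# Riemannian GD on the unit sphere: project the gradient onto the tangent
# space, then take a geodesic (exponential-map) step.
def sphere_exp(x, v):
    n = np.linalg.norm(v)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * (v / n)

def riemannian_gd(x, grad, eta=0.1, steps=200):
    for _ in range(steps):
        g = grad(x)
        g_tan = g - np.dot(g, x) * x         # tangent-space projection at x
        x = sphere_exp(x, -eta * g_tan)      # geodesic step; eta is curvature-agnostic
    return x

# Toy objective f(x) = <b, x>; its minimizer on the sphere is -b / ||b||.
b = np.array([1.0, 2.0, 2.0])
print(riemannian_gd(np.array([1.0, 0.0, 0.0]), lambda x: b))  # -> (-1/3, -2/3, -2/3)
```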
Smoothed Complexity of SWAP in Local Graph Partitioning
TLDR: Despite theoretical concerns, local search algorithms like SWAP and 2-FLIP provably handle real-world optimization tasks efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory
Exploiting Hidden Structures in Non-convex Games for Convergence to Nash Equilibrium
TLDR: 'Hidden gradient descent': a novel method for computing Nash equilibria in complex non-convex ML landscapes with hidden convex cores.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems
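A caricature of the hidden-structure setting (all functions below are hypothetical, and the paper's method does not require the hidden map to be explicitly invertible, as this toy does): the objective is non-convex in the parameters, but ordinary convex descent in the hidden space reaches the hidden optimum.

```python
import numpy as np

# Hypothetical toy: f(theta) = h(c(theta)) is non-convex in theta, but its
# hidden core h is convex in u = c(theta).
h_grad = lambda u: 2.0 * (u - 1.0)        # h(u) = (u - 1)^2, the convex core
c      = lambda t: 3.0 * np.tanh(t)       # non-linear reparameterization
c_inv  = lambda u: np.arctanh(u / 3.0)

t = -2.0
u = c(t)
for _ in range(200):
    u -= 0.1 * h_grad(u)                  # plain convex GD in the hidden space
t = c_inv(u)
print(u, t)                               # u -> 1, the hidden optimum, even
                                          # though f is non-convex in t
```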
Stochastic Methods in Variational Inequalities: Ergodicity, Bias and Refinements
TLDR: Stochastic EG and GDA algorithms are governed by the Law of Large Numbers and the Central Limit Theorem, and their accuracy can be boosted using Richardson extrapolation.
Toolbox: Probability, Ergodic Theory, Markov Chains, Optimization, Dynamical Systems
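A minimal sketch of the extrapolation idea (the operator, step sizes, and horizon are illustrative choices): with a constant step size, the ergodic average of a noisy forward iteration converges to the solution plus an O(eta) bias, and combining two step sizes cancels the first-order term.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy strongly monotone operator F(x) = exp(x) - 1 with root x* = 0; the
# ergodic average of the constant-step noisy iteration converges to
# x* + b*eta + O(eta^2) for some bias coefficient b.
def ergodic_average(eta, T=500_000):
    x, avg = 0.0, 0.0
    for t in range(1, T + 1):
        x -= eta * (math.exp(x) - 1.0 + rng.normal())
        avg += (x - avg) / t                 # running time-average
    return avg

m1 = ergodic_average(0.2)                    # bias ~ b * 0.2
m2 = ergodic_average(0.1)                    # bias ~ b * 0.1
print(m1, m2, 2 * m2 - m1)                   # Richardson extrapolation cancels O(eta)
```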
Quadratic Speedup in Finding Nash Equilibria of Quantum Zero-Sum Games
TLDR: State-of-the-art method for computing Nash equilibria in quantum zero-sum games.
Toolbox: Quantum Computing, Game Theory, Convex Optimization
Francisca Vasconcelos1,
Emmanouil-Vasileios Vlatakis-Gkaragkounis1,
Panayotis Mertikopoulos2,
Georgios Piliouras3,
Umesh Vazirani3, and
Michael I. Jordan3
7th Conference on Quantum Techniques in Machine Learning (QTML 2023)
1/2/3: Contribution order; equal number = equal contribution; alphabetically listed within tier.
The Computational Complexity of Multi-player Concave Games and Kakutani Fixed Points
TLDR: Computing Kakutani fixed points is PPAD-complete; both Rosen-Nash equilibria in concave games and Walrasian equilibria in exchange markets share the same complexity for general utilities.
Toolbox: Computational Complexity, Game Theory, Convex Analysis
Algorithms and Complexity for Computing Nash Equilibria in Adversarial Team Games
TLDR: Computing Nash Equilibria in adversarial team games is CLS-complete.
Toolbox: Computational Complexity, Game Theory, Convex Analysis, Linear Programming
Efficiently Computing Nash Equilibria in Adversarial Team Markov Games
TLDR: We introduce an efficient multi-agent learning algorithm that computes near-optimal strategies for a team playing against a single adversary, blending competitive and cooperative settings.
Toolbox: Reinforcement Learning, Game Theory, Non-Convex Programming
Fivos Kalogiannis1,
Ioannis Anagnostides2,
Ioannis Panageas2,
Emmanouil-Vasileios Vlatakis-Gkaragkounis2,
Vaggos Chatziafratis3, and
Stelios Stavroulakis3
11th International Conference on Learning Representations (ICLR 2023-Oral talk)
1/2/3: Contribution order; equal number = equal contribution; alphabetically listed within tier.
Teamwork makes von Neumann work: Min-Max Optimization in Two-Team Zero-Sum Games
TLDR: In e-sports and multi-GAN ML team games, conventional algorithms often fail to identify optimal strategies. We present a control-theory-derived approach that yields superior results in specific scenarios.
Toolbox: Game Theory, Control Theory, Dynamical Systems
First-Order Algorithms for Min-Max Optimization in Geodesic Metric Spaces
TLDR: Addressing a century-old open problem, this work shows that von Neumann's minimax points can be computed at a linear rate in geodesically strongly convex-concave settings, matching the Euclidean guarantee via the Riemannian corrected extragradient (RCEG) method.
Toolbox: Differential Geometry, Game Theory, Convex Optimization, Duality Theory
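For intuition, a schematic of the corrected-extragradient template as we read it (notation assumed: z is the joint variable, V the game vector field, exp the exponential map, Γ parallel transport; the exact form of the correction should be checked against the paper). The first-step vector is transported to the midpoint, which is what frees the step size from curvature constants:

```latex
\begin{aligned}
  z_{t+1/2} &= \exp_{z_t}\!\bigl(-\eta\, V(z_t)\bigr),\\
  z_{t+1}   &= \exp_{z_{t+1/2}}\!\bigl(\eta\,\Gamma_{z_t}^{z_{t+1/2}} V(z_t)
               - \eta\, V(z_{t+1/2})\bigr).
\end{aligned}
```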
On the convergence of policy gradient methods to Nash equilibria in general stochastic games
TLDR: Local convergence guarantees for policy gradient methods in multi-agent RL environments.
Toolbox: Multi-Agent RL, Convex Optimization, Applied Probability, Dynamical Systems
Near-Optimal Statistical Query Lower Bounds for Agnostically Learning Intersections of Halfspaces with Gaussian Marginals
TLDR: Establishes near-optimal statistical query (SQ) lower bounds for agnostically learning intersections of halfspaces with Gaussian marginals.
Toolbox: Boolean Learning Theory, Fourier Analysis, High-Dimensional Probability
On the Rate of Convergence of Regularized Learning in Games: From Bandits and Uncertainty to Optimism and Beyond
TLDR: Exploration of the local convergence rates of follow-the-generalized-leader (FTGL) methods toward Nash equilibria in generic games.
Toolbox: Game Theory, Convex Optimization, Dynamical Systems, Applied Probability
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
TLDR: Exploration of convergence criteria for GDA in hidden zero-sum games, drawing connections to the training of generative adversarial networks.
Toolbox: Game Theory, Dynamical Systems, Lyapunov Potentials
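A toy illustration of the hidden-game setting (the payoff, generators, step size, and horizon are hypothetical choices): the objective is strongly convex-concave in the hidden variables but non-convex in the parameters that generate them, and plain GDA still reaches the hidden saddle.

```python
import numpy as np

# Hypothetical hidden zero-sum game: L(theta, phi) = u^2/2 - v^2/2 + u*v,
# with hidden variables u = tanh(theta), v = tanh(phi); the hidden saddle
# is u = v = 0, but L is non-convex in (theta, phi).
u = lambda th: np.tanh(th)
v = lambda ph: np.tanh(ph)

th, ph, eta = 1.5, -0.7, 0.1
for _ in range(3000):
    uu, vv = u(th), v(ph)
    dL_du, dL_dv = uu + vv, uu - vv      # hidden-space partials of L
    th -= eta * dL_du * (1 - uu ** 2)    # chain rule: tanh' = 1 - tanh^2; min player descends
    ph += eta * dL_dv * (1 - vv ** 2)    # max player ascends
print(u(th), v(ph))                      # GDA finds the hidden saddle u = v = 0
```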
On the Approximation Power of Two-Layer Networks of Random ReLUs
TLDR: Tight upper and lower bounds on how well two-layer ReLU networks with random first-layer weights can approximate smooth functions.
Toolbox: Deep Learning Theory, Ridgelet Fourier Analysis, Applied Probability
Reconstructing weighted voting schemes from partial information about their power indices
TLDR: Algorithms that reconstruct weighted voting schemes from partial information about their voters' power indices.
Toolbox: Boolean Learning Theory, Fourier Analysis, Computational Social Choice
Survival of the strictest: Stable and unstable equilibria under regularized learning with partial information
TLDR: "Follow the Regularized Leader" (FTRL) algorithm reveals that a Nash equilibrium is stable and appealing with a significant probability only when it remains strictly unique.
Toolbox: Game Dynamics, Stochastic Stability Analysis, Dynamical Systems
No-regret learning and mixed Nash equilibria: They do not mix
TLDR: Mixed Nash equilibria are unstable under no-regret learning dynamics in all N-player games (a minimal reproduction follows below).
Toolbox: Game Theory, Dynamical Systems, Invariant Measures Analysis, Topology
Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, Thanasis Lianeas2,
Panayotis Mertikopoulos3, and Georgios Piliouras3
34th Conference on Neural Information Processing Systems (NeurIPS 2020-Spotlight talk)
1/2/3: Contribution order; equal number = equal contribution; alphabetically listed within tier.
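The instability result above is easy to reproduce (step size and horizon below are illustrative choices): in Matching Pennies the unique equilibrium is fully mixed, and exponential weights, a standard no-regret dynamic, started next to it spirals out toward the boundary.

```python
import numpy as np

# Matching Pennies: the unique Nash equilibrium is fully mixed, (1/2, 1/2).
A = np.array([[1.0, -1.0], [-1.0, 1.0]])      # row player's payoff matrix

x = np.array([0.51, 0.49])                    # start close to the mixed NE
y = np.array([0.49, 0.51])
eta, gaps = 0.1, []
for _ in range(1500):
    gx, gy = A @ y, -(A.T @ x)                # simultaneous zero-sum gradients
    x = x * np.exp(eta * gx); x /= x.sum()    # exponential weights, both players
    y = y * np.exp(eta * gy); y /= y.sum()
    gaps.append(abs(x[0] - 0.5) + abs(y[0] - 0.5))
print(gaps[0], max(gaps))                     # the distance from the NE blows up
```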
Optimal Private Median Estimation under Minimal Distributional Assumptions
TLDR: A sample-optimal and computationally efficient algorithm for private median estimation.
Toolbox: Differential Privacy, Algorithmic Statistics, Dynamic Programming
Smoothed Complexity of Local Max-Cut and Binary Max-CSP
TLDR: Despite theoretical concerns, local search algorithms based on single flips provably handle real-world optimization tasks efficiently.
Toolbox: Smoothed Analysis, Applied Probability, Combinatorics, Graph Theory
Poincaré Recurrence, Cycles, and Spurious Equilibria in Gradient-Descent-Ascent for Non-Convex Non-Concave Zero-Sum Games
TLDR: In several non-convex ML settings, standard GDA dynamics exhibit behaviors that do not converge to the expected game solutions, including periodic orbits and convergence to spurious, non-min-max equilibria (a minimal sketch follows below).
Toolbox: GANs, Game Dynamics, Invariant Measures, Lyapunov Potentials, Topology
Lampros Flokas1, Emmanouil-Vasileios Vlatakis-Gkaragkounis1, and Georgios Piliouras
33rd Conference on Neural Information Processing Systems (NeurIPS 2019-Spotlight talk)
1/2/3: Contribution order; equal number = equal contribution; alphabetically listed within tier.
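The simplest instance of the recurrence phenomenon above is the bilinear game f(x, y) = x*y: simultaneous GDA spirals away from the unique min-max point, and the continuous-time flow cycles forever.

```python
import numpy as np

# Simultaneous GDA on min_x max_y f(x, y) = x * y. The unique min-max
# point is (0, 0); the discrete iterates provably spiral outward
# (continuous time: closed cycles, i.e., Poincare recurrence).
x, y, eta = 1.0, 0.0, 0.05
r0 = np.hypot(x, y)
for _ in range(500):
    x, y = x - eta * y, y + eta * x   # df/dx = y, df/dy = x; simultaneous step
print(r0, np.hypot(x, y))             # each step multiplies the radius by sqrt(1 + eta^2)
```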
Efficiently avoiding saddle points with zero order methods: No gradients required
TLDR: A simple derivative-free method efficiently avoids saddle points.
Toolbox: Non-convex Optimization, Applied Probability, Dynamical Systems
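A minimal sketch of the idea (toy objective, step sizes, and horizon are hypothetical choices): estimating directional derivatives from two function values is enough for the iterates to leave a strict saddle.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy objective with a strict saddle at the origin and minimizers (0, +/-1).
f = lambda z: 0.5 * z[0] ** 2 - 0.5 * z[1] ** 2 + 0.25 * z[1] ** 4

def zo_step(z, eta=0.05, mu=1e-3):
    u = rng.standard_normal(2)                        # random probe direction
    g = (f(z + mu * u) - f(z - mu * u)) / (2 * mu)    # two function values only
    return z - eta * g * u                            # derivative-free descent

z = np.array([1e-6, 1e-6])                            # start almost at the saddle
for _ in range(5000):
    z = zo_step(z)
print(z)            # drifts off the saddle and lands near a minimizer (0, +/-1)
```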
Pattern Search Multidimensional Scaling
TLDR: Derivative-free methods for multidimensional scaling of complex data structures.
Toolbox: Manifold Learning, Non-convex & Black-box Optimization, Local Search Heuristics
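A minimal caricature of the approach (parameters and the toy instance are hypothetical, not the paper's exact algorithm): probe each embedded point along +/- coordinate directions, keep moves that reduce the MDS stress, and shrink the pattern when no probe helps.

```python
import numpy as np

# Pattern-search MDS caricature: coordinate probes, no derivatives anywhere.
def stress(X, D):
    diff = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    return np.sum((diff - D) ** 2)

def pattern_search_mds(D, dim=2, step=1.0, tol=1e-4, seed=0):
    X = np.random.default_rng(seed).standard_normal((D.shape[0], dim))
    best = stress(X, D)
    while step > tol:
        improved = False
        for i in range(X.shape[0]):
            for d in range(dim):
                for s in (step, -step):
                    X[i, d] += s                  # probe a pattern move
                    val = stress(X, D)
                    if val < best:
                        best, improved = val, True
                    else:
                        X[i, d] -= s              # revert a failed probe
        if not improved:
            step /= 2.0                           # shrink the pattern
    return X

# Usage: re-embed four corners of the unit square from pairwise distances.
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)
print(stress(pattern_search_mds(D), D))           # near zero, up to a local optimum
```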