I am a postdoctoral research fellow at the Simons Institute, UC Berkeley. Previously, I obtained my Ph.D. from Columbia University, advised by Xi Chen and Christos Papadimitriou. Prior to that, I received my bachelor's degree from the Yao Class at Tsinghua University.
I have worked broadly on machine learning theory, game theory, and theoretical computer science. Recently, I have been particularly interested in the interplay between computational theory and AI.
I will be a Motwani Postdoctoral Research Fellow at Stanford University starting in January 2025!
Research
Theoretical foundations of language models
The Transformer is the backbone architecture of modern LLMs. What makes the Transformer special, and which problems can it solve efficiently? I study the representation power of Transformers.
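To fix ideas, here is a minimal single-head self-attention layer in NumPy; it is an illustrative sketch only, omitting the multi-head attention, residual connections, layer normalization, and MLP blocks of a real Transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)       # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv, causal=True):
    """Single-head self-attention over a length-n sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # scaled pairwise attention scores
    if causal:                                    # decoder-style: mask out future positions
        scores = np.where(np.tril(np.ones_like(scores)) > 0, scores, -np.inf)
    return softmax(scores, axis=-1) @ V           # attention-weighted average of values

# toy usage: 5 tokens, model dimension 8
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (5, 8)
```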
Theoretical limitations of multi-layer Transformer [pdf]
Lijie Chen, Binghui Peng, Hongxun Wu
This paper gives the first unconditional lower bound for multi-layer Transformers, proving that constant-depth Transformers cannot solve certain sequential composition tasks.
Self-Attention Networks Can Process Bounded Hierarchical Languages (ACL 2021, Oral) [pdf]
Shunyu Yao, Binghui Peng, Christos Papadimitriou, Karthik Narasimhan
This (earlier) paper demonstrates that Transformers can parse bounded hierarchical structure efficiently, showing the first such separation between Transformers and recurrent models.
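The bounded hierarchical languages in question are Dyck languages of well-nested brackets with bounded nesting depth. The toy checker below only illustrates the recognition task (it is not the paper's self-attention construction); max_depth plays the role of the depth bound.

```python
def in_bounded_dyck(s, pairs={'(': ')', '[': ']'}, max_depth=4):
    """Check membership in a bounded-depth Dyck language: well-nested brackets, depth <= max_depth."""
    stack = []
    for ch in s:
        if ch in pairs:                               # opening bracket: push it
            stack.append(ch)
            if len(stack) > max_depth:                # nesting depth bound exceeded
                return False
        elif not stack or pairs[stack.pop()] != ch:   # closer must match the most recent opener
            return False
    return not stack                                  # every bracket must be closed

print(in_bounded_dyck("([()])"))   # True  (maximum depth 3)
print(in_bounded_dyck("(()"))      # False (unclosed bracket)
```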
Game theory
How should we reason about multi-agent systems, and how can agents agree on an equilibrium? My research resolves decades-old open problems in equilibrium computation.
Fast swap regret minimization and applications to approximate correlated equilibria (STOC 2024) [pdf]
Binghui Peng, Aviad Rubinstein
This paper gives an exponentially faster no-swap-regret learning algorithm. As applications, it gives optimal computation/communication/query algorithms for finding approximate correlated equilibria in normal-form games, and the first polynomial-time approximation algorithm for correlated equilibria in extensive-form games (swap regret is briefly recalled below).
Invited to the SICOMP Special Issue for STOC 2024
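As a quick recap (standard definitions, not this paper's algorithm): for a learner playing mixed strategies $x_1,\dots,x_T$ over actions $[n]$ against loss vectors $\ell_1,\dots,\ell_T$, the swap regret compares the realized loss against the best post-hoc swap function applied to the learner's own play,

$$\mathrm{SwapReg}_T \;=\; \sum_{t=1}^{T}\langle x_t,\ell_t\rangle \;-\; \min_{\phi:[n]\to[n]} \sum_{t=1}^{T}\sum_{a\in[n]} x_t(a)\,\ell_t(\phi(a)).$$

When every player of a normal-form game runs a no-swap-regret algorithm, the empirical distribution of joint play converges to the set of correlated equilibria, which is what links fast swap-regret minimization to equilibrium computation.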
Complexity of Equilibria in First-Price Auctions under General Tie-Breaking Rules (STOC 2023) [pdf]
Xi Chen, Binghui Peng
This work proves that finding a Bayesian Nash equilibrium of a first-price auction is PPAD-hard under general tie-breaking rules.
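To recall the setting in standard form (background only, not this paper's reduction): each bidder $i$ draws a value $v_i$ independently from a known prior $D_i$, submits a bid $\beta_i(v_i)$, the highest bid wins and pays its own bid, and ties are resolved by the tie-breaking rule. A Bayesian Nash equilibrium is a strategy profile $(\beta_1,\dots,\beta_n)$ in which every bidder best-responds in expectation:

$$\beta_i \in \arg\max_{\beta}\; \mathbb{E}_{v\sim D}\Big[(v_i-\beta(v_i))\cdot\mathbf{1}\big[\beta(v_i)\text{ wins against } \beta_{-i}(v_{-i})\big]\Big] \quad \text{for every bidder } i.$$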
The role of memory in learning
Memory (or space) is a crucial computational resource for large-scale learning tasks. My research addresses the space requirements for fundamental learning problems, including convex optimization and online learning.
Memory-Query Tradeoffs for Randomized Convex Optimization (FOCS 2023) [pdf]
Xi Chen, Binghui Peng
This paper shows that quadratic memory is necessary if one wants the optimal query complexity for convex optimization.
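For context, in the standard first-order model one minimizes a convex, Lipschitz function over the unit ball in $\mathbb{R}^d$ to accuracy $\epsilon$, and each query $x$ returns $(f(x), \nabla f(x))$. The two classical baselines (stated here as background, not as this paper's bounds) sit at opposite corners of the tradeoff:

$$\underbrace{O(d\log(1/\epsilon)) \text{ queries},\ \Theta(d^2) \text{ memory}}_{\text{cutting-plane methods}} \qquad\text{vs.}\qquad \underbrace{\mathrm{poly}(1/\epsilon) \text{ queries},\ O(d) \text{ memory (in words)}}_{\text{gradient descent}},$$

and the result above says the quadratic memory cannot be avoided if one insists on query-optimal algorithms, even with randomization.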
Near Optimal Memory-Regret Tradeoff for Online Learning (FOCS 2023) [pdf]
Binghui Peng, Aviad Rubinstein
This paper provides optimal sublinear space algorithms for online decision making (the experts problem).
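For orientation, in the experts problem the learner picks one of $n$ experts in each of $T$ rounds and suffers the chosen expert's loss, aiming to compete with the best single expert in hindsight. Below is the classical multiplicative-weights baseline, which stores one weight per expert and therefore uses linear space; the low-memory regime studied in the paper is precisely what this sketch does not capture.

```python
import numpy as np

def multiplicative_weights(loss_matrix, eta=0.1):
    """Classical full-memory experts algorithm: one weight per expert (linear space).

    loss_matrix has shape (T, n) with losses in [0, 1]."""
    T, n = loss_matrix.shape
    weights = np.ones(n)
    total_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()                # sample an expert with these probabilities
        total_loss += probs @ loss_matrix[t]           # expected loss incurred this round
        weights *= np.exp(-eta * loss_matrix[t])       # exponentially downweight poor experts
    return total_loss - loss_matrix.sum(axis=0).min()  # regret vs. best expert in hindsight

rng = np.random.default_rng(1)
print(multiplicative_weights(rng.random((1000, 50))))  # with eta ~ sqrt(log(n)/T), regret is O(sqrt(T log n))
```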
Continual learning
How can machines learn effectively from evolving data and ever-changing environments? Continual learning (or lifelong learning) is still in its early stages, but I believe that theoretical insights are essential for its advancement.
Memory Bounds for Continual Learning (FOCS 2022) [pdf]
Xi Chen, Christos Papadimitriou, Binghui Peng
This paper elucidates the importance of memory for sequentially learning classification tasks. My follow-up work also considers non-stationary reinforcement learning [pdf]
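In a stripped-down version of the setting (my paraphrase, not the paper's exact model), $k$ classification tasks over a shared domain arrive one at a time, the learner sees samples only from the current task, and at the end it must still perform well on every task,

$$\Pr_{(x,y)\sim\mathcal{D}_t}\big[\hat h_t(x)\neq y\big]\le\epsilon \quad\text{for all } t\in[k],$$

while carrying as few bits of memory as possible between tasks.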
The Complexity of Dynamic Least-Squares Regression (FOCS 2023) [pdf]
Shunhua Jiang, Binghui Peng, Omri Weinstein
We prove hardness of approximation for dynamic least-squares regression, conditional on the OMv (online matrix-vector multiplication) conjecture.
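For context: in dynamic least-squares regression, rows $(a_t, b_t)$ arrive over time and the algorithm must maintain an approximate minimizer

$$x^{(t)} \approx \arg\min_{x}\ \big\|A^{(t)}x - b^{(t)}\big\|_2, \qquad A^{(t)} = (a_1,\dots,a_t)^{\top},$$

after every update. The OMv conjecture asserts that, even after polynomial preprocessing of an $n\times n$ Boolean matrix $M$, answering $n$ online queries $Mv_1,\dots,Mv_n$, each before the next vector arrives, requires $n^{3-o(1)}$ total time.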