Due to the COVID-19 crisis, the French conference SMAI-MODE has been postponed to September 7-9, 2020, and the mini-course to September 10-11, 2020; both will be held fully online.
Mini-courses
As in previous editions of the SMAI-MODE conferences, in partnership with the GdR MOA (Mathematics of Optimization and Applications), we propose a course originally scheduled for the two days preceding the conference, March 23 and 24, at ENSTA (Palaiseau).
Following the postponement, the lectures will take place online on the Zoom platform. Registration is required in order to receive the connection details for the video-conference platform.
Abstract
Game theory is a thriving interdisciplinary field that studies the interactions between optimizing agents with competing objectives, be they humans, bacteria, or artificial neural networks. This course is intended to provide a gentle introduction to algorithmic game theory, with a particular focus on its connections to learning and optimization, as well as some of its core applications (traffic routing, machine learning, auctions, etc.).

The first part of the course deals with the static elements that define a game, the different equilibrium notions that arise in game theory (Nash, Bayesian, Poisson, Wardrop equilibria, …), and the connections between them. Special attention will be paid to analyzing the classes of congestion games and routing games – both atomic and non-atomic – alongside the more general class of potential games. We will describe the asymptotic behavior of large games with an increasing number of players, and we will discuss the social efficiency of equilibria by reviewing some basic bounds for the so-called price of anarchy (PoA), a measure of the gap between global optimality and equilibrium.

The second part of the course will focus on online learning procedures that aim to maximize the rewards accrued by an individual agent over time. We will cover some classical procedures (such as best-response dynamics, fictitious play, and their variants), and then focus on online optimization algorithms that aim to minimize an agent's regret (exponential weights, follow-the-regularized-leader, online gradient/mirror descent, etc.). Subsequently, we will examine the ramifications of no-regret learning in games, and we will study under which conditions online learning can lead to Nash equilibrium. We will also discuss the impact of the information available to the players, as well as a range of concrete applications to traffic routing, signal processing, and machine learning.
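As a point of reference for the first part (our notation, not taken from the course material): for a cost-minimization game with social cost function C, the price of anarchy is commonly defined as

\[
\mathrm{PoA} \;=\; \frac{\max_{x \in \mathrm{Eq}} C(x)}{\min_{x \in \mathcal{X}} C(x)},
\]

where \(\mathrm{Eq}\) denotes the set of equilibria and \(\mathcal{X}\) the set of feasible strategy profiles, so \(\mathrm{PoA} = 1\) means that every equilibrium is socially optimal.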
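To make the no-regret procedures mentioned above concrete, here is a minimal sketch of the exponential weights algorithm; the loss matrix, the learning rate eta, and the function name are illustrative assumptions, not material from the course.

```python
import numpy as np

def exponential_weights(losses, eta=0.5):
    """Hypothetical sketch: `losses` is a (T, K) array of per-round losses
    for K actions; returns the sequence of mixed strategies played."""
    T, K = losses.shape
    cum_loss = np.zeros(K)                       # cumulative loss of each action
    strategies = np.empty((T, K))
    for t in range(T):
        weights = np.exp(-eta * cum_loss)        # down-weight poorly performing actions
        strategies[t] = weights / weights.sum()  # normalize to a probability vector
        cum_loss += losses[t]                    # observe this round's full loss vector
    return strategies

# Illustrative usage with random losses (an assumption, not course data):
rng = np.random.default_rng(0)
mixed = exponential_weights(rng.uniform(size=(100, 3)))
```

With a suitable learning rate (eta of order sqrt(log K / T)), this scheme guarantees regret growing only as sqrt(T log K) against the best fixed action in hindsight.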
Important information:
For any questions about the mini-course, contact us at: smai-mode2020@inria.fr