Plenary Lectures - Abstracts

 

Invited speakers:

René Carmona, Princeton University, USA: talk cancelled due to the postponement

Abstract: TBA

 

Antonin Chambolle, CMAP, CNRS, École polytechnique, IPP, France: How to discretize the total variation?

Abstract: This talk will address standard and less standard approaches for discretizing the total variation, widely used as a regularizer for solving inverse problems in imaging. It is used for recovering functions with discontinuities (edges), and therefore standard numerical analysis, usually based on the smoothness of the solutions, yields sub-optimal error bounds for such problems. The talk will address some workarounds, discussing primal and dual discretizations, isotropy issues, and error bounds. (Based on joint works with C. Caillaud and T. Pock.)
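For concreteness (not part of the abstract), the most common finite-difference discretization of the isotropic total variation can be sketched as follows; the forward-difference scheme and the toy image are illustrative choices, not the specific constructions discussed in the talk.

```python
import numpy as np

def tv_isotropic(u):
    """Isotropic total variation of a 2-D array via forward differences.

    One standard discretization scheme; the boundary is handled by
    replicating the last row/column (zero difference at the border).
    """
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return np.sqrt(dx**2 + dy**2).sum()

# A sharp vertical edge: the gradient is nonzero only across the jump.
img = np.zeros((4, 4))
img[:, 2:] = 1.0
print(tv_isotropic(img))  # 4 unit jumps across the edge -> 4.0
```

On this piecewise-constant image the discrete TV simply counts the jump across the edge, which is why TV regularization preserves discontinuities that smoothness-based penalties would blur.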

 

Adrian Lewis, Cornell University, USA: Smoothness in nonsmooth optimization (cancelled due to the postponement)

Abstract: Fast black-box nonsmooth optimization, while theoretically out of reach in the worst case, has long been an intriguing goal in practice. Generic concrete nonsmooth objectives are "partly" smooth: their subdifferentials have locally smooth graphs with powerful constant-rank properties, often associated with hidden structure in the objective. One typical example is the proximal mapping for the matrix numerical radius, whose output is surprisingly often a "disk" matrix. Motivated by this expectation of partial smoothness, this talk describes a Newtonian black-box algorithm for general nonsmooth optimization. Local convergence is provably superlinear on a representative class of objectives, and early numerical experience is promising more generally.

Joint work with Xiaoyan Han, Jingwei Liang, Michael Overton, and Calvin Wylie.

 

Claire Mathieu, CNRS, France: Two-sided matching markets with correlated random preferences have few stable pairs

Abstract: Stable matching in a community consisting of N men and N women is a classical combinatorial problem that has been the subject of intense theoretical and empirical study since its introduction in 1962 in a seminal paper by Gale and Shapley.

In this paper, we study the number of stable pairs, that is, the man/woman pairs that appear in some stable matching. We prove that if the preference lists on one side are generated at random using the popularity model of Immorlica and Mahdian, the expected number of stable edges is bounded by N ln(N) + N, matching the asymptotic value for uniform preference lists. If, in addition, the popularity model is a geometric distribution, then the number of stable edges is O(N) and the incentive to manipulate is limited. If, in addition, the preference lists on the other side are uniform, then the number of stable edges is asymptotically N up to lower-order terms: most participants have a unique stable partner, hence non-manipulability.
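For background (an illustrative sketch, not part of the abstract), a stable matching, and hence some of the stable pairs the abstract counts, can be computed with the 1962 Gale-Shapley deferred-acceptance algorithm; the toy preference lists below are made up for the example.

```python
def gale_shapley(men_prefs, women_prefs):
    """Deferred acceptance (men proposing): returns one stable matching.

    Textbook sketch of the Gale-Shapley algorithm; preference lists are
    dicts mapping each person to an ordered list of acceptable partners.
    """
    # rank[w][m] = position of m in w's list (lower is better)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    free = list(men_prefs)              # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}
    engaged = {}                        # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]  # m's best not-yet-tried option
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])     # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)              # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"a": ["X", "Y"], "b": ["Y", "X"]}
women = {"X": ["a", "b"], "Y": ["b", "a"]}
print(gale_shapley(men, women))  # a-X, b-Y: everyone's first choice
```

Here every participant gets their first choice, so the stable matching is unique and each person has exactly one stable partner, the "few stable pairs" regime the abstract describes.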

This is joint work with Hugo Gimbert and Simon Mauras.

 

Nadia Oudjane, EDF-Lab, France: Some optimization issues in the new electrical system

Abstract: With the emergence of new renewable energies (such as wind or solar generation), the electrical system is evolving from a situation based on controllable power plants (thermal, gas, hydro, etc.) connected to the transmission network to a new setting, where substantial uncontrollable generation facilities are connected to the distribution network, closer to the consumers. To contribute to this transformation, new local actors are rapidly multiplying, potentially involving a great variety of devices: uncontrollable renewable generation; storage devices (batteries, electric-vehicle smart-charging systems, …); conventional plants (such as gas turbines or hydro plants); and possibilities to adjust some consumers' load (e.g. demand response or direct control). In this new context, energy utilities are facing new practical issues that require the development of suitable mathematical optimization tools.

 

2019 Jean-Jacques Moreau Prize laureate:

Francis Bach, Inria and ENS, France: Convex and non-convex optimization for machine learning

Abstract: Machine learning is naturally formulated as an optimization problem, which seeks to minimize the error rate on the observed data, typically large-scale. While the objective function is convex for linear models, it is not for non-linear models such as neural networks. In this talk, I will present some recent developments related to these two situations.
