Presentation
  
  Since September 2025, I have been an Assistant Professor at Ecole
  Polytechnique, working within the SIMPAS team.
  Between November 2023 and August 2025, I was a post-doctoral
  researcher at Ecole Polytechnique, supervised by Eric Moulines.
  Before Paris, I did my PhD thesis, titled Exploiting Problem
  Structure in Privacy-Preserving Optimization and Machine Learning,
  in the Magnet team at Inria Lille. My supervisors were Aurélien
  Bellet, Marc Tommasi, and Joseph Salmon. During my PhD, I also
  worked with Michaël Perrot and Hadrien Hendrikx.
  Before Lille, I was a student at ENS de Lyon and followed the
  Master Datasciences at Université Paris-Saclay.
  In 2019, I passed the agrégation de mathématiques, option
  informatique. This page (in French) contains all documents related
  to it: lessons and proofs, together with a few resources, links,
  and remarks.
Research Interests
  My research centers on optimization, federated (reinforcement)
  learning, and the ethical concerns that come with training machine
  learning models. More generally, I am interested in designing
  algorithms that adapt to a problem's structural properties to
  improve convergence rate, communication complexity, or other
  properties such as privacy and fairness.
  I am also strongly interested in methods that make training and
  using models more practical and trustworthy, in particular the
  quantification of a model's uncertainty and procedures that allow
  learning with missing data (e.g., imputation).
  
    Feel free (and even encouraged) to reach out to me if you want to
    discuss any of these questions; I am always happy to chat! You
    can contact me by e-mail
    at paul.mangold@polytechnique.edu.
  
  You can find a list of my publications on the dedicated page or on
  Google Scholar. I am also on Twitter.
  A list of the talks I have given (with slides) is available on this
  page. You can also find the Python/R/C++ libraries I have developed
  or contributed to on this page.
  
  News
- June 2025. I will be at the Journées de Statistique de la SFdS in Marseille from June 2 to June 6, where I am talking about federated averaging. Come discuss with me if you are there!
 
- May 2025. New preprint online! We study the convergence of federated policy gradient methods. We show that federated RL has a different structure than single-agent RL, and prove fast convergence to a region around the solution, whose diameter depends on heterogeneity.
 
- May 2025. Paper accepted to ICML! It shows that the Scaffold algorithm, without a global step size, achieves linear speed-up!
 
- March 2025. New preprint online! In this preprint, we show that Scaffold achieves linear speed-up!
 
- February 2025. Our book chapter on using federated deep reinforcement learning in vehicular communications has been published.
 
- January 2025. Two papers accepted to AISTATS! The first one is on federated averaging, the second one on a federated value iteration method: congratulations to Safwan!
 
- December 2024. I will be attending NeurIPS 2024 in Vancouver. Feel free to contact me if you want to chat! I will present Scafflsa on Friday, December 13, from 4:30 to 7:30 p.m. PST in West Ballroom A-D #5709. See you there!
 
- December 2024. New preprint online! This paper studies the bias of federated averaging. We give exact first-order expansions of this bias, showing that it decomposes into two parts: one due to heterogeneity and one due to the stochasticity of the gradients. We propose a new method based on Richardson-Romberg extrapolation to reduce both types of bias simultaneously (see the sketch after the news list).
 
- October 2024. Two new preprints online! The first one is on federated value iteration, which we prove achieves linear speed-up with low communication cost; the second one is on federated deep RL for telecommunications, where we show that federated deep RL allows learning more reliable policies in vehicular communication problems.
 
- September 2024. Our paper has been accepted to NeurIPS 2024.
 
- February 2024. New preprint online. We study federated linear stochastic approximation, propose a new method that uses control variates to mitigate bias, and apply it to federated temporal difference learning.
 
- January 2024. Our paper has been accepted to AISTATS!
 
  
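  A side note on the December 2024 preprint above: below is a minimal
  sketch of the Richardson-Romberg extrapolation idea, in a much
  simpler setting than the paper's federated one. Here, plain
  constant-step-size SGD on a toy one-dimensional objective plays the
  role of the biased procedure; the function averaged_sgd, the
  objective f(x) = exp(x) - x, and all constants are illustrative
  assumptions, not taken from the paper.

    # Minimal sketch of Richardson-Romberg extrapolation on a toy problem
    # (NOT the paper's federated algorithm). Constant-step-size SGD has a
    # stationary bias that is linear in the step size, so combining the
    # averaged iterates obtained with step sizes gamma and 2*gamma cancels
    # the first-order term of the bias.
    import numpy as np

    rng = np.random.default_rng(0)

    def averaged_sgd(gamma, n_iters=500_000, sigma=1.0):
        """Constant-step SGD on f(x) = exp(x) - x (minimizer x* = 0) with
        additive Gaussian gradient noise; returns the running average of
        the iterates, which converges to x* + c * gamma + O(gamma**2)."""
        theta, avg = 0.0, 0.0
        for t in range(n_iters):
            grad = np.exp(theta) - 1.0 + sigma * rng.normal()  # noisy gradient
            theta -= gamma * grad
            avg += (theta - avg) / (t + 1)  # running (Polyak-Ruppert) average
        return avg

    gamma = 0.1
    avg_small = averaged_sgd(gamma)      # bias ~  c * gamma
    avg_large = averaged_sgd(2 * gamma)  # bias ~ 2c * gamma
    avg_rr = 2 * avg_small - avg_large   # first-order bias cancels

    print(f"avg(gamma)   = {avg_small:+.4f}")
    print(f"avg(2 gamma) = {avg_large:+.4f}")
    print(f"extrapolated = {avg_rr:+.4f}  (closest to the minimizer, 0)")

  In this toy run, the extrapolated estimate lands much closer to the
  true minimizer than either averaged run; the preprint applies the
  same kind of two-step-size combination to federated averaging.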
  Teaching
  I teach at Lille University, where I give a machine learning course for master's students and a machine learning and graphs course for bachelor's students.
  More details on this page.