Presentation
Since November 2023, I have been a post-doctoral researcher at Ecole Polytechnique, supervised by Eric Moulines.
Before Paris, I wrote a PhD thesis, titled Exploiting Problem Structure in Privacy-Preserving Optimization and Machine Learning, in the Magnet team at Inria Lille. My supervisors were Aurélien Bellet, Marc Tommasi, and Joseph Salmon. During my PhD, I also worked with Michaël Perrot and Hadrien Hendrikx.
Before Lille, I was a student at ENS de Lyon, and I followed the Master Datasciences at Université Paris-Saclay.
In 2019, I passed the agrégation de mathématiques, option informatique (the French competitive teaching examination in mathematics, computer science option). This page (in French) contains all related documents: lessons and proofs, together with a few resources, links, and remarks.
Research Interests
My research is centered around optimization, federated (reinforcement) learning, and the ethical concerns that come with training machine learning models. Quite generally, I am interested in designing algorithms that can adapt to a problem's structural properties to improve convergence rates, communication complexity, or other properties such as privacy and fairness.
I am also strongly interested in methods that make training and using models more practical and trustworthy, in particular in quantifying a model's uncertainty and in procedures that allow learning from missing data (e.g., imputation).
As of now, I have mostly studied convex problems, but I hope to work
on more general classes of non-convex problems soon!
Feel free (and even encouraged) to reach out to me if you want to discuss any of these questions; I am always happy to chat! You can contact me by e-mail at paul.mangold@polytechnique.edu.
You can find a list of my publications on the dedicated page or on Google Scholar. I am also on Twitter.
News
- December 2024. I will be attending NeurIPS 2024 in Vancouver. Feel free to contact me if you want to chat! I will present Scafflsa on Friday, December 13, from 4:30 to 7:30 p.m. PST in West Ballroom A-D #5709. See you there!
- December 2024. One new preprint online! In this paper, we study the bias of federated averaging. We give exact first-order expansions of this bias, showing that it decomposes into two parts: one due to client heterogeneity and one due to the stochasticity of the gradients. We propose a new method based on Richardson-Romberg extrapolation to reduce both types of bias simultaneously (a rough sketch of the extrapolation idea appears after this list).
- October 2024. Two new preprints online! The first one is on federated value iteration, where we prove that it achieves linear speed-up with low communication cost, and the second one is on federated deep RL for telecommunications, where we show that federated deep RL makes it possible to learn more reliable policies in communication problems involving cars.
- September 2024. Our paper has been accepted to NeurIPS 2024.
- February 2024. New preprint online. We study federated linear stochastic approximation, propose a new method that uses control variates to mitigate bias, and apply it to federated temporal difference learning.
- January 2024. Our paper has been accepted to AISTATS!
- November 2023. I am starting a post-doc at Ecole Polytechnique.
- October 2023. I just defended my thesis! :)
- August 2023. New preprint online! We propose a variant of the Gaussian Mechanism that can exploit a relaxed sensitivity assumption, and apply it to private gradient descent.
- July 2023. I finished writing my thesis! It was intense but I am happy with the result.
- July 2023. I attended the CAp conference in Strasbourg. CAp is a nice conference for the francophone machine learning community to meet.
- April 2023. Paper accepted to ICML 2023!
- April 2023. I am in Valencia for AISTATS 2023 to present our paper (and our poster) on high-dimensional private empirical risk minimization using a greedy coordinate descent approach.
- April 2023. I am presenting a poster on the interplay of fairness and privacy at the statlearn workshop in Montpellier.
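For the curious, here is a minimal sketch of the Richardson-Romberg extrapolation idea mentioned in the December 2024 preprint above, in my own notation (not the paper's). Suppose that, for a step size $\gamma$, the limiting iterate of a stochastic method satisfies
$$\bar{\theta}_\gamma = \theta^\star + \gamma B + o(\gamma),$$
where $\theta^\star$ is the exact solution and $B$ is an unknown bias vector. Then running the method with the two step sizes $\gamma$ and $\gamma/2$ and combining the outputs as
$$2\,\bar{\theta}_{\gamma/2} - \bar{\theta}_\gamma = \theta^\star + o(\gamma)$$
cancels the first-order bias term, at the price of a second run.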
Teaching
I teach at Lille University, where I give a machine learning course for master's students and a machine learning/graphs course for bachelor's students.
More details on this page.