I am a machine learning researcher currently working on deep generative models (e.g., normalizing flows and latent variable models) and understanding machine learning (e.g., generalization and representation). I also follow work in connected areas like probabilistic modelling, approximate inference, optimization, learning of discrete structures, and sociotechnical studies of machine learning.
I obtained my PhD in deep learning at Mila (Montréal, Canada), under the supervision of Yoshua Bengio. Prior to that, I studied applied mathematics at École Centrale de Paris (Paris, France) and machine learning and computer vision at ÉNS Paris-Saclay (Paris, France). I had the privilege of working in the machine learning group led by Nando de Freitas, both at UBC (Vancouver, Canada) and DeepMind (London, United Kingdom), and also at Google Brain (Mountain View, USA), under the supervision of Samy Bengio.
I remade my website 🎉.
I am now an Action Editor for TMLR.
I left Google Brain.
Invertible Models and Normalizing Flows
Normalizing flows provide a tool to build an expressive and tractable family of probability distributions. Research in this field harnesses recent advances in deep learning to design flexible invertible models for probabilistic inference and density estimation.
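As a minimal illustration of the idea (a sketch, not tied to any particular library), a flow pushes a simple base density through an invertible map and keeps the density tractable via the change-of-variables formula. Here, a single affine map `x = a*z + b` with a standard normal base gives `log p(x) = log N(z; 0, 1) - log|a|`, where `z = (x - b)/a`:

```python
import numpy as np

def log_density(x, a=2.0, b=1.0):
    """Log-density of x = a*z + b with z ~ N(0, 1), computed by the
    change-of-variables formula: invert the map, evaluate the base
    log-density, and subtract the log of the Jacobian |dx/dz| = |a|."""
    z = (x - b) / a                              # invert the affine map
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))  # standard normal log-pdf
    return log_base - np.log(abs(a))             # log-det Jacobian correction

# Sanity check: the result matches the analytic density N(x; b, a^2).
print(log_density(1.0))  # equals -0.5 * log(2*pi*4) for a=2, b=1
```

Stacking many such invertible maps, each with a tractable Jacobian, is what makes deeper flows both expressive and exactly normalized.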
Learning Discrete Structures
I am interested in the efficient learning of expressive discrete representations, including discrete latent variables, sparse models and inference, and mixture modelling.
Understanding Deep Learning
Despite their empirical success, the extrapolation behavior of deep learning models remains unclear. I am interested in empirical and theoretical insights into deep learning methods and their limits, pertaining to generalization, representation learning, and inductive biases.