Laurent Dinh
Machine Learning Researcher
Research Experience
2022-present
Senior Research Scientist, Apple
2018-2021
Research Scientist, Google Brain, Montréal, Canada
2014-2017
Graduate Researcher, Mila, Montréal, Canada
Supervisor: Yoshua Bengio
2017
Research Intern, DeepMind, London, United Kingdom
Supervisors: Nando De Freitas, Misha Denil
2016-2017
Software Engineering Intern, Google Brain, Montréal, Canada
Supervisor: Samy Bengio
2016
Software Engineering Intern, Google Brain, Mountain View, California, United States
Supervisors: Samy Bengio, Jascha Sohl-Dickstein
2015
Software Engineering Intern, Google Brain, Mountain View, California, United States
Supervisor: Samy Bengio
2013
Visiting Researcher, University of British Columbia, Vancouver, Canada
Supervisors: Nando De Freitas, Misha Denil
2011-2012
Visiting Researcher, Mila, Montréal, Canada
Supervisor: Yoshua Bengio
Education
2014-2018
Philosophiæ Doctor (Computer Science), Université de Montréal, Montréal, Canada
Supervisor: Yoshua Bengio
2012-2013
Master of Science (Machine Learning and Computer Vision), ÉNS Paris-Saclay, Cachan, France
2012-2013
Master of Engineering (Applied Mathematics), École Centrale Paris, Châtenay-Malabry, France
2009-2012
Bachelor of Engineering, École Centrale Paris, Châtenay-Malabry, France
Publications and Preprints
Perfect Density Models Cannot Guarantee Anomaly Detection
Charline Le Lan, Laurent Dinh
I Can’t Believe It’s Not Better! Workshop (Neural Information Processing Systems 2020) (oral, Entropic Paper Award)
Special Issue: Probabilistic Methods for Deep Learning (MDPI Entropy)
Augmented Normalizing Flows: Bridging the Gap Between Generative Flows and Latent Variable Models
Chin-Wei Huang, Laurent Dinh, Aaron Courville
Solving ODE with Universal Flows: Approximation Theory for Flow-Based Models
Chin-Wei Huang, Laurent Dinh, Aaron Courville
Workshop on Integration of Deep Neural Models and Differential Equations (International Conference on Learning Representations 2020) (oral)
Discrete Flows: Invertible Generative Models of Discrete Data
Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole
Deep Generative Models for Highly Structured Data (International Conference on Learning Representations 2019)
Neural Information Processing Systems 2019
Invertible Convolutional Flow
Mahdi Karami, Dale Schuurmans, Jascha Sohl-Dickstein, Laurent Dinh, Daniel Duckworth
Neural Information Processing Systems 2019 (spotlight)
A RAD approach to deep mixture models
Laurent Dinh, Jascha Sohl-Dickstein, Hugo Larochelle, Razvan Pascanu
Deep Generative Models for Highly Structured Data (International Conference on Learning Representations 2019)
VideoFlow: A Flow-Based Generative Model for Video
Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, Durk Kingma
Workshop on Invertible Neural Nets and Normalizing Flows (International Conference on Machine Learning 2019) (oral)
International Conference on Learning Representations 2020
Conference ticket allocation via non-uniform random selection to address systemic biases
Jessica Thompson, Laurent Dinh, Layla El Asri, Nicolas Le Roux
Critiquing and Correcting Trends in Machine Learning (Neural Information Processing Systems 2018) (spotlight)
Reparametrization in Deep Learning
Laurent Dinh
PhD thesis
Learning Awareness Models
Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Sergio Gómez Colmenarejo, Alistair Muldal, Tom Erez, Yuval Tassa, Nando De Freitas, Misha Denil
International Conference on Learning Representations 2018 (conference track)
Sharp Minima Can Generalize For Deep Nets
Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
International Conference on Machine Learning 2017
Density estimation using Real NVP
Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio
Deep Learning Symposium (Neural Information Processing Systems 2016) (oral)
International Conference on Learning Representations 2017 (conference track)
Deep independence network analysis of structural brain imaging: A simulation study
Eduardo Castro, Devon Hjelm, Sergey Plis, Laurent Dinh, Jessica Turner, Vince Calhoun
IEEE 25th International Workshop on Machine Learning for Signal Processing 2015
A Recurrent Latent Variable Model for Sequential Data
Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, Yoshua Bengio
Neural Information Processing Systems 2015
NICE: Non-linear Independent Components Estimation
Laurent Dinh, David Krueger, Yoshua Bengio
International Conference on Learning Representations 2015 (workshop track)
Techniques for Learning Binary Stochastic Feedforward Neural Networks
Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
International Conference on Learning Representations 2015 (conference track)
Predicting Parameters in Deep Learning
Misha Denil, Babak Shakibi, Laurent Dinh, Marc’Aurelio Ranzato, Nando De Freitas
Neural Information Processing Systems 2013
Talks
Invertible Models and Normalizing Flows: A Retrospective Talk
Primer on Normalizing Flows
- Massachusetts Institute of Technology (Computer Science and Artificial Intelligence Lab) (invited talk)
- From Passive to Active: Generative and Reinforcement Learning with Physics (Machine Learning for Physics and the Physics of Learning) (invited talk)
- Mila (invited talk)
A RAD approach to deep mixture models
- Stanford University (Computer Science Department) (invited talk)
- McGill University (Computer Science Department) (invited talk)
- Mila (invited talk)
Building a Tractable Generator Network
- Workshop on Invertible Neural Nets and Normalizing Flows (International Conference on Machine Learning 2019) (invited talk)
Conference ticket allocation via non-uniform random selection to address systemic biases
- Critiquing and Correcting Trends in Machine Learning (Neural Information Processing Systems 2018) (spotlight)
Reparametrization in Deep Learning
- Mila / Université de Montréal (PhD defense)
- University of California, Berkeley (BAIR lab) (invited talk)
- NVIDIA AI (invited talk)
- Facebook AI Research (invited talk)
- Google Brain (invited talk)
Sharp Minima Can Generalize For Deep Nets
Density estimation using Real NVP
- Deep Learning Symposium (Neural Information Processing Systems 2016) (invited talk)
- OpenAI (invited talk)
- Twitter (Cortex team) (invited talk)
- Mila (tea talk)
NICE: Non-linear Independent Components Estimation
- University of California, Berkeley (Redwood Center For Theoretical Neuroscience) (invited talk)
- Mila (tea talk)
Training Neural Bayesian Nets
- CIFAR Deep Learning Summer School 2014 (contributed talk)
Academic Service
Diversity and Inclusion Chair
Area Chair
Action Editor
Reviewer
- International Conference on Machine Learning
- Neural Information Processing Systems
- International Conference on Learning Representations
- International Conference on Artificial Intelligence and Statistics
- IEEE Transactions on Pattern Analysis and Machine Intelligence
- Journal of Machine Learning Research
Panelist
- I Can’t Believe It’s Not Better! Workshop (Neural Information Processing Systems 2020)
- ML Retrospectives (International Conference on Machine Learning 2020)
- Deep Learning Symposium (Neural Information Processing Systems 2016)
Mentor
- Women in Machine Learning Workshop (Neural Information Processing Systems 2020)
- ML Collective Social on Open Collaboration in ML Research (Neural Information Processing Systems 2020)
- Eastern European Machine Learning Summer School 2020
Lecturer
- Depth First Learning Fellowship on Normalizing Flows with Variational Inference
- Deep Learning Minicourse 2015 (Instituto Nokia de Tecnologia)