Discrete Flows: Invertible Generative Models of Discrete Data

Dustin Tran, Keyon Vafa, Kumar Krishna Agrawal, Laurent Dinh, Ben Poole
Flow-based generative models for categorical data. (arXiv) (code)
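
A hedged illustration of the idea (a toy parameterization, not the paper's exact one): on values in {0, ..., K-1}, a coupling transform built from modular addition is exactly invertible and needs no Jacobian correction.

    import numpy as np

    K = 10  # vocabulary size (toy choice)

    def coupling_forward(x1, x2, shift_fn):
        # shift the second half by a function of the first, modulo K;
        # modular addition is a bijection on {0, ..., K-1}
        return x1, (x2 + shift_fn(x1)) % K

    def coupling_inverse(y1, y2, shift_fn):
        # exact inverse: subtract the same shift modulo K
        return y1, (y2 - shift_fn(y1)) % K

    # toy shift function standing in for a learned network
    shift_fn = lambda x: (3 * x + 1) % K

    x1, x2 = np.array([2, 7]), np.array([5, 9])
    y1, y2 = coupling_forward(x1, x2, shift_fn)
    assert np.array_equal(coupling_inverse(y1, y2, shift_fn)[1], x2)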

A RAD approach to deep mixture models

Laurent Dinh, Jascha Sohl-Dickstein, Razvan Pascanu, Hugo Larochelle
Piecewise invertible flows for deep mixture models. (arXiv)

VideoFlow: A Flow-Based Generative Model for Video

Manoj Kumar, Mohammad Babaeizadeh, Dumitru Erhan, Chelsea Finn, Sergey Levine, Laurent Dinh, Durk Kingma
Applying flow-based models to video prediction. (arXiv) (code)

Reparametrization in Deep Learning

Laurent Dinh
PhD thesis. (Google Drive) (slides) (extended slides)

Learning Awareness Models

Brandon Amos, Laurent Dinh, Serkan Cabi, Thomas Rothörl, Sergio Gómez Colmenarejo, Alistair Muldal, Tom Erez, Yuval Tassa, Nando De Freitas, Misha Denil
International Conference on Learning Representations 2018 conference track

We train predictive models on proprioceptive information and show that they represent properties of external objects. (arXiv) (videos)

Sharp Minima Can Generalize For Deep Nets

Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio
International Conference on Machine Learning 2017 (Oral)

Through simple arguments, we exhibit several failure modes in connecting the curvature of the loss surface to generalization. (arXiv) (slides) (poster)
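
One such failure mode in miniature: for a one-hidden-layer ReLU network, non-negative homogeneity of the ReLU ($\mathrm{ReLU}(\alpha u) = \alpha\,\mathrm{ReLU}(u)$ for $\alpha > 0$) means that

    \[
      f_\theta(x) = W_2\,\mathrm{ReLU}(W_1 x)
      \quad\text{and}\quad
      (W_1, W_2) \mapsto (\alpha W_1, \alpha^{-1} W_2)
      \quad\Rightarrow\quad
      f_{\theta'} = f_\theta,
    \]

so the loss is unchanged along this reparametrization while the Hessian's eigenvalues are rescaled arbitrarily: a "flat" minimum can be turned into an arbitrarily "sharp" one computing the same function.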

Density estimation using Real NVP

Laurent Dinh, Jascha Sohl-Dickstein, Samy Bengio
Deep Learning Symposium (Neural Information Processing Systems) 2016 (Oral)
International Conference on Learning Representations 2017 conference track

We use invertible neural networks to build a deep generative model with a latent space, tractable log-likelihood, and efficient inference and sampling in high-dimensional spaces (a coupling-layer sketch follows the links).
(arXiv) (visualizations) (talk @ Twitter Boston) (slides) (code)
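
A minimal numpy sketch of the affine coupling layer at the core of Real NVP; here st_fn stands in for the learned scale-and-shift networks, and the particular functions are toy choices:

    import numpy as np

    def affine_coupling_forward(x1, x2, st_fn):
        # transform x2 with a scale and shift computed from x1;
        # x1 passes through unchanged, which makes inversion easy
        s, t = st_fn(x1)
        y2 = x2 * np.exp(s) + t
        # the Jacobian is triangular: log|det J| = sum of log-scales
        return x1, y2, s.sum()

    def affine_coupling_inverse(y1, y2, st_fn):
        s, t = st_fn(y1)
        return y1, (y2 - t) * np.exp(-s)

    # toy scale/shift function (the paper uses deep networks)
    st_fn = lambda x: (np.tanh(x), 2.0 * x)

    x1, x2 = np.array([0.5, -1.0]), np.array([2.0, 3.0])
    y1, y2, logdet = affine_coupling_forward(x1, x2, st_fn)
    assert np.allclose(affine_coupling_inverse(y1, y2, st_fn)[1], x2)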

Deep independence network analysis of structural brain imaging: A simulation study

Eduardo Castro, Devon Hjelm, Sergey Plis, Laurent Dinh, Jessica Turner, Vince Calhoun
IEEE 25th International Workshop on Machine Learning for Signal Processing 2015

We study the use of Non-linear Independent Components Estimation on simulated sMRI data.

A Recurrent Latent Variable Model for Sequential Data

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron Courville, Yoshua Bengio
Neural Information Processing Systems 2015

A variational auto-encoder meets a deep RNN. (arXiv) (code)
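
Roughly, the generative recurrence is

    \[
      z_t \sim p(z_t \mid h_{t-1}), \qquad
      x_t \sim p(x_t \mid z_t, h_{t-1}), \qquad
      h_t = f(x_t, z_t, h_{t-1}),
    \]

so each step's latent variable is conditioned on the RNN state rather than drawn independently across time.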

NICE: Non-linear Independent Components Estimation

Laurent Dinh, David Krueger, Yoshua Bengio
International Conference on Learning Representations 2015 workshop track

Tractable maximum likelihood with deep neural networks. (arXiv) (code) (talk @ Berkeley)
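
Concretely, the tractability comes from the change-of-variables formula,

    \[
      \log p_X(x) = \log p_Z(f(x))
        + \log\left|\det \frac{\partial f(x)}{\partial x}\right|,
    \]

and NICE's additive coupling $y_2 = x_2 + m(x_1)$ gives a triangular Jacobian with unit diagonal, so the log-determinant term vanishes.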

Techniques for Learning Binary Stochastic Feedforward Neural Networks

Tapani Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh
International Conference on Learning Representations 2015 conference track

We explore gradient estimators for binary stochastic neural networks. (arXiv)
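
One estimator studied there is the straight-through trick: sample the binary units in the forward pass, but backpropagate through the sampling step as if it were the identity. A minimal numpy sketch (a biased but simple estimator; shapes and values are toy assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    def binary_stochastic_forward(p):
        # forward pass: sample h ~ Bernoulli(p)
        return (rng.random(p.shape) < p).astype(float)

    def straight_through_backward(grad_h):
        # backward pass: pass the gradient through unchanged,
        # as if the sampling step were h = p
        return grad_h

    p = np.array([0.2, 0.8, 0.5])
    h = binary_stochastic_forward(p)
    grad_p = straight_through_backward(np.ones_like(h))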

Predicting Parameters in Deep Learning

Misha Denil, Babak Shakibi, Laurent Dinh, Marc’Aurelio Ranzato, Nando De Freitas
Neural Information Processing Systems 2013

Prior knowledge about model structure allows us to learn a significantly smaller subset of the parameters and predict the remaining ones via kernel ridge regression (a sketch follows the links).
(arXiv) (code)
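
A hedged numpy sketch of the idea, assuming weights vary smoothly with their (here 1-D) coordinates; the kernel width, anchor spacing, and ridge term are toy choices, not the paper's setup:

    import numpy as np

    def rbf_kernel(a, b, gamma=10.0):
        # RBF kernel over 1-D weight coordinates
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)

    coords = np.linspace(0, 1, 25)           # locations of all weights
    true_w = np.sin(2 * np.pi * coords)      # stand-in "learned" weights

    # learn only every fifth parameter...
    idx = np.arange(0, 25, 5)
    K = rbf_kernel(coords[idx], coords[idx])
    alpha = np.linalg.solve(K + 1e-3 * np.eye(len(idx)), true_w[idx])

    # ...and predict the remaining ones via kernel ridge regression
    pred_w = rbf_kernel(coords, coords[idx]) @ alpha
    print(np.max(np.abs(pred_w - true_w)))   # reconstruction error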