We are always looking to push cutting-edge research to new boundaries. MILA is best known for:
Fundamentals of deep learning: MILA published one of the first papers on deep learning (NIPS’2006), a book (2009), and a review paper (2013), as well as many fundamental contributions to representation learning. Ian Goodfellow (a former PhD student at MILA), Yoshua Bengio, and Aaron Courville published one of the most comprehensive books on deep learning in 2016.
Autoencoders: MILA first became involved in the deep learning revolution by introducing the stacked denoising autoencoder. MILA also invented contractive autoencoders and generative stochastic networks. There has also been ongoing research on Variational Autoencoders, which led to the invention of the Recurrent Variational Autoencoder and a few other variants at MILA.
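The core idea of a denoising autoencoder can be shown in a few lines: corrupt the input, then train the network to reconstruct the clean version. Below is a minimal illustrative sketch in numpy (not MILA's original code); the toy data, masking-noise rate, and single tied-weight layer are all assumptions made for brevity.

```python
import numpy as np

# Minimal denoising autoencoder sketch (illustrative only, not MILA's code).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid = 8, 4
W = rng.normal(0, 0.1, (n_in, n_hid))   # tied weights: decoder uses W.T
b_h = np.zeros(n_hid)
b_o = np.zeros(n_in)

# toy data: noisy copies of 4 binary prototype patterns (assumed for the demo)
prototypes = rng.integers(0, 2, (4, n_in)).astype(float)
X = prototypes[rng.integers(0, 4, 64)]

lr, losses = 0.5, []
for _ in range(200):
    mask = rng.random(X.shape) > 0.3          # masking noise: drop ~30% of inputs
    X_tilde = X * mask
    H = sigmoid(X_tilde @ W + b_h)            # encode the *corrupted* input
    X_hat = sigmoid(H @ W.T + b_o)            # decode back to input space
    losses.append(-np.mean(X * np.log(X_hat + 1e-9)
                           + (1 - X) * np.log(1 - X_hat + 1e-9)))
    d_o = X_hat - X                           # grad of cross-entropy w.r.t. output pre-activation
    d_h = (d_o @ W) * H * (1 - H)
    W -= lr * (X_tilde.T @ d_h + d_o.T @ H) / len(X)   # tied weights: both paths contribute
    b_h -= lr * d_h.mean(axis=0)
    b_o -= lr * d_o.mean(axis=0)
```

Because the reconstruction target is the clean input, the hidden layer is pushed to capture structure that survives corruption rather than simply copying its input.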
Supervised neural nets: MILA pioneered the use of rectified linear units for feedforward neural nets. Deep rectifier nets have gained widespread popularity in industrial vision and speech recognition. MILA also developed attention mechanisms, which have gained popularity in a variety of speech and vision applications.
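A rectifier network simply replaces saturating activations with max(0, x) on the hidden layers. The sketch below is a hypothetical minimal forward pass (the `forward` helper and layer sizes are assumptions for illustration, not a specific MILA model):

```python
import numpy as np

def relu(x):
    # rectified linear unit: passes positives through, zeroes out negatives
    return np.maximum(0.0, x)

def forward(x, layers):
    # layers: list of (W, b) pairs; ReLU on hidden layers, linear output layer
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b

# tiny usage example with assumed shapes: 5 inputs -> 3 hidden -> 2 outputs
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(5, 3)), np.zeros(3)),
          (rng.normal(size=(3, 2)), np.zeros(2))]
y = forward(rng.normal(size=(4, 5)), layers)
```

Unlike sigmoid or tanh units, the rectifier does not saturate for positive inputs, which keeps gradients flowing in deep stacks.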
Generative models: MILA is the birthplace of the spike-and-slab RBM and of Generative Adversarial Networks (GANs), one of the best generative models of natural images. Students at MILA study a wide range of topics related to GANs, including better ways to train both the discriminator and the generator.
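The tension between the two networks comes down to a pair of objectives: the discriminator is rewarded for telling real from generated samples, while the generator is rewarded for fooling it. A minimal sketch of those losses (the function names are assumptions; only the formulas follow the original GAN objective, with the common non-saturating variant for the generator):

```python
import numpy as np

def d_loss(d_real, d_fake):
    # discriminator maximizes log D(x) + log(1 - D(G(z))); we minimize the negative
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # non-saturating generator loss: maximize log D(G(z))
    return -np.mean(np.log(d_fake))

# example: a confident discriminator (real ~0.9, fake ~0.1) has low d_loss,
# while the generator's loss is high until D(G(z)) rises
d = d_loss(np.array([0.9]), np.array([0.1]))
g = g_loss(np.array([0.1]))
```

Training alternates gradient steps on these two losses, which is exactly why balancing the discriminator and the generator is an active research topic.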
Recurrent neural networks: Students at MILA have long studied the problem of capturing temporal structure with recurrent nets. Neural language models were invented at MILA. MILA is also the birthplace of the RNN-RBM, a very successful model of polyphonic music.
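The way a recurrent net captures temporal structure is by carrying a hidden state from step to step. Below is a hypothetical single-step sketch of a vanilla RNN (the `rnn_step` helper and the tiny dimensions are assumptions for illustration):

```python
import numpy as np

def rnn_step(x_t, h_prev, Wxh, Whh, bh):
    # one recurrent step: the new hidden state mixes the current input
    # with a summary of everything seen so far
    return np.tanh(x_t @ Wxh + h_prev @ Whh + bh)

# unroll over a toy 5-step sequence of 3-dimensional inputs
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.1, (3, 4))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (4, 4))   # hidden-to-hidden (recurrent) weights
bh = np.zeros(4)

h = np.zeros(4)                    # initial hidden state
for x_t in rng.normal(size=(5, 3)):
    h = rnn_step(x_t, h, Wxh, Whh, bh)
```

In a neural language model, `x_t` would be a learned word embedding and `h` would feed a softmax over the vocabulary to predict the next word.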