My favourite papers from day one of ICML 2015
07 July 2015
Aargh! How can I possibly keep all the amazing things I learnt at ICML today in my head?! Clearly I can’t. This is a list of pointers to my favourite papers from today, and why I think they are cool. This is mainly for my benefit, but you might like them too!
Neural Nets / Deep Learning
BilBOWA: Fast Bilingual Distributed Representations without Word Alignments
Stephan Gouws, Yoshua Bengio, Greg Corrado
Why this paper is cool: It simultaneously learns word vectors for words in two languages without having to learn a mapping between them.
Compressing Neural Networks with the Hashing Trick
Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, Yixin Chen
Why this paper is cool: Gives a huge reduction (32x) in the amount of memory needed to store a neural network. This means you can potentially use it on low memory devices like mobile phones!
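To make the idea concrete, here is a minimal NumPy sketch of the hashing trick applied to one fully connected layer: every "virtual" weight is hashed into one of a small number of shared buckets, so only the buckets need to be stored. The layer sizes, bucket count and hash function below are made up for illustration and are not the paper's exact choices.

```python
import numpy as np

# Sketch of the hashing-trick idea: every virtual weight W[i, j] is mapped by
# a cheap hash to one of K shared buckets, so we only store K real numbers
# instead of n_in * n_out of them.
n_in, n_out, K = 512, 256, 4096            # a full matrix would need 131,072 floats
buckets = np.random.randn(K) * 0.01        # the only parameters actually stored

def hash_index(i, j):
    # Toy stand-in for a proper hash function, chosen only for illustration.
    return (i * 1_000_003 + j * 7919) % K

def hashed_linear(x):
    # Rebuild the virtual weight matrix on the fly from the shared buckets.
    idx = np.fromfunction(hash_index, (n_in, n_out), dtype=np.int64)
    W = buckets[idx]                       # shape (n_in, n_out), but no extra parameters
    return x @ W

x = np.random.randn(1, n_in)
print(hashed_linear(x).shape)              # (1, 256)
```

During training, gradients for weights that collide are simply accumulated into the same bucket, which is why the memory saving comes almost for free.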
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy
Why this paper is cool: Makes deep neural network training super fast, giving a new state of the art for some datasets.
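The core of the method is easy to state: normalise each mini-batch of activations to zero mean and unit variance per feature, then let the network learn a scale and shift. The sketch below is a rough training-time forward pass in NumPy (at test time running averages of the batch statistics are used instead); the sizes are arbitrary.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch normalisation for a (batch, features) activation."""
    mu = x.mean(axis=0)                    # per-feature mean over the mini-batch
    var = x.var(axis=0)                    # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)  # whitened activations
    return gamma * x_hat + beta            # learnable scale and shift restore capacity

x = np.random.randn(64, 100) * 3 + 5       # badly scaled activations
gamma, beta = np.ones(100), np.zeros(100)
y = batch_norm_forward(x, gamma, beta)
print(y.mean(), y.std())                   # roughly 0 and 1
```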
Deep Learning with Limited Numerical Precision
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan
Why this paper is cool: Train neural networks with very limited fixed-point arithmetic instead of floating point. The key insight is to use randomness to do the rounding. The goal is to eventually build custom hardware to make learning much faster.
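Here is a rough NumPy sketch of that stochastic rounding idea: a value is rounded up or down to the fixed-point grid with probability proportional to how close it is to each neighbour, so the rounding error averages out to zero. The bit width and test values are made up; this shows the concept only, not the paper's hardware pipeline.

```python
import numpy as np

def stochastic_round_fixed(x, frac_bits=8):
    """Round x onto a fixed-point grid with `frac_bits` fractional bits.

    Each value is rounded up or down at random, with probability proportional
    to its distance from each neighbouring grid point, so the rounding is
    unbiased in expectation.
    """
    scale = 2 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor                          # closeness to the upper grid point
    rounded = floor + (np.random.rand(*x.shape) < prob_up)
    return rounded / scale

w = np.array([0.1234, -0.0007, 0.5])
samples = np.stack([stochastic_round_fixed(w) for _ in range(10000)])
print(samples.mean(axis=0))                # close to the original values on average
```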
Recommendations etc.
Fixed-point algorithms for learning determinantal point processes
Zelda Mariet, Suvrit Sra
Why this paper is cool: If you want to recommend a set of things, rather than just an individual thing, how do you choose the best set? This will tell you.
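As a quick illustration of what a determinantal point process buys you: the (unnormalised) probability of a subset is the determinant of the corresponding submatrix of an item-similarity kernel, which is large for diverse, high-quality sets and tiny for redundant ones. The toy kernel below is invented for illustration; the paper itself is about learning that kernel with fixed-point algorithms.

```python
import numpy as np

# In a DPP the (unnormalised) probability of recommending a subset S is
# det(L[S, S]), where L is an item-similarity kernel.  Diverse sets score
# higher than redundant ones.  This kernel is made up for illustration.
features = np.array([[1.0, 0.0],    # item 0
                     [0.9, 0.1],    # item 1, nearly a duplicate of item 0
                     [0.0, 1.0]])   # item 2, very different
L = features @ features.T + 1e-6 * np.eye(3)

def dpp_score(subset):
    sub = L[np.ix_(subset, subset)]
    return np.linalg.det(sub)

print(dpp_score([0, 1]))   # two near-duplicates: score close to 0
print(dpp_score([0, 2]))   # a diverse pair: much higher score
```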
Surrogate Functions for Maximizing Precision at the Top
Purushottam Kar, Harikrishna Narasimhan, Prateek Jain
Why this paper is cool: If you only care about the top n things you recommend, this technique works faster and better than other approaches.
And Finally…
Learning to Search Better than Your Teacher
Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, John Langford
Why this paper is cool: A new, general way to do structured prediction (tasks like dependency parsing or semantic parsing) which works well even when there are errors in the training set. Thanks to the authors for talking me through this one!
My favourite papers from day two of ICML 2015
08 July 2015
Yesterday I posted on my favourite papers from the beginning of ICML (some of those papers were actually presented today, although the posters were displayed yesterday). Here’s today’s update, which includes some papers to be presented tomorrow, because the posters were on display today…
Neural Nets
Unsupervised Domain Adaptation by Backpropagation
Yaroslav Ganin, Victor Lempitsky
Imagine you have a small amount of labelled training data and a lot of unlabelled data from a different domain. This technique will allow you to build a neural network model that fits the unlabelled domain. The key idea is super cool and really simple to implement. You build a network that optimises features such that it is difficult to distinguish which domain the data came from.
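The trick that makes this work is a "gradient reversal" layer: it is the identity on the forward pass, but flips (and scales) the gradient coming back from a domain classifier, so the shared features are pushed towards being domain-invariant. A minimal sketch, using a toy forward/backward convention rather than any particular framework's API:

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; multiplies gradients by -lambda on the
    backward pass.  Placed between the feature extractor and the domain
    classifier, it pushes the features towards being domain-invariant."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, features):
        return features                                   # nothing changes going forward

    def backward(self, grad_from_domain_classifier):
        return -self.lam * grad_from_domain_classifier    # flip the sign going back

layer = GradientReversal(lam=0.5)
feats = np.random.randn(4, 16)
print(np.allclose(layer.forward(feats), feats))           # True
```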
Weight Uncertainty in Neural Networks
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra
Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks
Jose Miguel Hernandez-Lobato, Ryan Adams
These papers have a very similar goal, namely making neural networks probabilistic. This is cool because it allows you not only to make a decision, but also to know how sure you are about that decision. There are a bunch of other benefits: you don't need to worry about regularisation, hyperparameter tuning is easier, and so on.
Anyway, the two papers achieve this in two different ways. The first uses Gaussian scale mixtures together with a clever trick to backpropagate expectations. The second one computes the distribution after rectifying and then approximates this with a Gaussian distribution. Either way, this is an exciting development for neural networks.
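For a flavour of the first approach, the workhorse is a reparameterisation-style trick: each weight gets a mean and a standard deviation, and concrete weights are sampled as mean plus noise times standard deviation, which keeps everything differentiable. The sketch below shows weight sampling and a predictive mean and uncertainty for a single linear layer; the mixture prior and the KL term from the paper are left out, and all sizes are made up.

```python
import numpy as np

# Sketch of the "weights as distributions" idea: each weight has a mean and a
# softplus-parameterised standard deviation, and we sample concrete weights
# for every forward pass.  Averaging several sampled predictions gives a
# sense of the model's uncertainty.
n_in, n_out = 10, 1
mu = np.zeros((n_in, n_out))               # variational means
rho = -3 * np.ones((n_in, n_out))          # pre-softplus standard deviations

def sample_weights():
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma positive
    eps = np.random.randn(n_in, n_out)     # reparameterisation noise
    return mu + sigma * eps                # differentiable w.r.t. mu and rho

x = np.random.randn(5, n_in)
preds = np.stack([x @ sample_weights() for _ in range(100)])
print(preds.mean(axis=0).ravel())          # predictive mean
print(preds.std(axis=0).ravel())           # how unsure the network is
```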
Training Deep Convolutional Neural Networks to Play Go
Christopher Clark, Amos Storkey
Although I’ve never actually played the game, I have an interest in AI Go players, because Go is such a hard game for computers, which still can’t reach the level of strong human players. The current state of the art uses Monte Carlo tree search, which is a really cool technique. The authors of this paper use neural networks to play the game but don’t quite achieve the same level of performance. I asked the authors whether the two approaches could be combined, and they think they can! Watch this space for a new state-of-the-art Go player.
Natural Language Processing
Phrase-based Image Captioning
Remi Lebret, Pedro Pinheiro, Ronan Collobert
This is a new state of the art in this very interesting task of labelling images with phrases. The clever bit is in the syntactic analysis of the phrases in the training set, which often follow a similar pattern. The authors use this to their advantage: the model is trained on the individual sub-phrases that are extracted, which allows it to behave compositionally. This means that it can describe, for example, both the fact that a plate is on a table, and that there is pizza on the plate. Unlike previous approaches, the sentences that are generated are not often found in the training set, which shows that it is doing real generation and not retrieval. Exciting stuff!
Bimodal Modelling of Source Code and Natural Language
Miltos Allamanis, Daniel Tarlow, Andrew Gordon, Yi Wei
Another fun paper; this one tries to generate source code given a natural language query, quite an ambitious task! It is trained on snippets of code extracted from StackOverflow.
Optimisation
Gradient-based Hyperparameter Optimization through Reversible Learning
Dougal Maclaurin, David Duvenaud, Ryan Adams
Hyperparameter optimisation is important when training neural networks because there are so many of the things floating around. How do you know what to set them to? Normally you have to perform some kind of search over the space of possible hyperparameters, and Bayesian techniques have been very helpful for doing this. This paper suggests something entirely different and completely audacious. The authors are able to compute gradients for hyperparameters using automatic differentiation, after going through a whole round of stochastic gradient descent learning. That’s quite a feat. What this means is that we can answer questions about what the optimal hyperparameter settings look like in different situations - and it makes a whole set of things that was previously a “black art” a lot more scientific and understandable.
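To see why this is even possible: the weights at the end of training are a deterministic, differentiable function of the hyperparameters, so the validation loss is too. Below is a toy forward-mode version of that idea on a one-dimensional quadratic problem, tracking d(weight)/d(learning rate) alongside plain SGD and checking it against finite differences. The paper itself works in reverse mode through full training runs, with a clever scheme for not storing the whole weight trajectory; everything in this sketch is a made-up toy problem.

```python
# Toy illustration of gradients with respect to hyperparameters: run SGD on a
# simple quadratic training loss and track d(weight)/d(learning_rate) with
# forward-mode differentiation alongside the weights.
train_target, val_target = 2.0, 2.5

def train_grad(w):                  # gradient of 0.5 * (w - train_target)^2
    return w - train_target

def val_loss_and_dlr(lr, steps=50, w0=0.0):
    w, dw_dlr = w0, 0.0
    for _ in range(steps):
        g = train_grad(w)
        # w_new = w - lr * g, so d(w_new)/d(lr) = dw/dlr - g - lr * dg/dw * dw/dlr
        dw_dlr = dw_dlr - g - lr * 1.0 * dw_dlr      # dg/dw = 1 for this quadratic
        w = w - lr * g
    val_loss = 0.5 * (w - val_target) ** 2
    return val_loss, (w - val_target) * dw_dlr       # chain rule into the validation loss

loss, dloss_dlr = val_loss_and_dlr(lr=0.05)
eps = 1e-6
numeric = (val_loss_and_dlr(0.05 + eps)[0] - val_loss_and_dlr(0.05 - eps)[0]) / (2 * eps)
print(dloss_dlr, numeric)           # the two gradients should agree closely
```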
And more…
There were many more interesting papers - too many to write up here. Take a look at the schedule and find your favourite! Let me know on Twitter.