My favourite papers from day one of ICML 2015

07 July 2015

Aargh! How can I possibly keep all the amazing things I learnt at ICML today in my head?! Clearly I can’t. This is a list of pointers to my favourite papers from today, and why I think they are cool. This is mainly for my benefit, but you might like them too!

Neural Nets / Deep Learning

BilBOWA: Fast Bilingual Distributed Representations without Word Alignments

Stephan Gouws, Yoshua Bengio, Greg Corrado

Why this paper is cool: It simultaneously learns word vectors for words in two languages without having to learn a mapping between them.
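
The one-liner doesn't say how, so here's my rough sketch (Python/numpy, not the authors' code) of the cross-lingual half of the idea: alongside ordinary monolingual skip-gram training (not shown), each parallel sentence pair pulls the mean word vectors of the two languages towards each other. The sizes, ids and learning rate below are made up for illustration.

```python
import numpy as np

# Toy sketch of a BilBOWA-style cross-lingual term: nudge the mean word
# vectors of an aligned sentence pair towards each other. The monolingual
# skip-gram losses that keep vectors useful within each language are omitted.
dim = 50
rng = np.random.default_rng(0)
E_en = rng.normal(scale=0.1, size=(1000, dim))  # English word embeddings
E_fr = rng.normal(scale=0.1, size=(1000, dim))  # French word embeddings

def crosslingual_step(en_ids, fr_ids, lr=0.1):
    """One SGD step on 0.5 * ||mean(en vectors) - mean(fr vectors)||^2."""
    diff = E_en[en_ids].mean(axis=0) - E_fr[fr_ids].mean(axis=0)
    E_en[en_ids] -= lr * diff / len(en_ids)  # each English word shares the gradient
    E_fr[fr_ids] += lr * diff / len(fr_ids)  # French words move the other way

# Word ids for one aligned sentence pair from a parallel corpus (made up).
crosslingual_step(en_ids=[3, 17, 42], fr_ids=[5, 99, 100, 7])
```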

Compressing Neural Networks with the Hashing Trick

Wenlin Chen, James Wilson, Stephen Tyree, Kilian Weinberger, Yixin Chen

Why this paper is cool: Gives a huge reduction (32x) in the amount of memory needed to store a neural network. This means you can potentially use it on low memory devices like mobile phones!
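
To see where the 32x comes from, here's a minimal sketch (Python/numpy, not the authors' code) of hashed weight sharing: the layer's full "virtual" weight matrix is backed by a much smaller parameter vector, and a hash of each (row, column) index decides which shared parameter it uses. The hash function, layer sizes and initialisation are illustrative; the paper also uses a sign hash, which I've left out.

```python
import hashlib
import numpy as np

def bucket(i, j, n_buckets, seed=0):
    """Hash a (row, col) index into one of n_buckets shared parameters."""
    key = f"{seed}:{i}:{j}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_buckets

class HashedLinear:
    def __init__(self, n_in, n_out, n_buckets):
        self.params = np.random.randn(n_buckets) * 0.01            # real storage
        self.idx = np.array([[bucket(i, j, n_buckets) for j in range(n_out)]
                             for i in range(n_in)])                # virtual -> real

    def forward(self, x):
        W = self.params[self.idx]   # expand the virtual weight matrix on the fly
        return x @ W

# 256 * 128 = 32768 virtual weights backed by 1024 real ones: a 32x saving.
layer = HashedLinear(n_in=256, n_out=128, n_buckets=1024)
out = layer.forward(np.random.randn(4, 256))
```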

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

Sergey Ioffe, Christian Szegedy

Why this paper is cool: Makes deep neural network training super fast, giving a new state of the art for some datasets.
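
The core transformation is small enough to sketch. Here's the training-time computation for a fully connected layer in Python/numpy (the backward pass and the running statistics used at test time are omitted); the shapes are illustrative.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalise each feature over the mini-batch, then rescale and shift."""
    mu = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                    # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta            # gamma, beta are learnable

x = np.random.randn(64, 100)               # batch of 64 activations, 100 features
y = batch_norm(x, gamma=np.ones(100), beta=np.zeros(100))
```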

Deep Learning with Limited Numerical Precision

Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan

Why this paper is cool: Train neural networks with limited-precision fixed-point arithmetic instead of floating point. The key insight is to use randomness to do the rounding. The goal is to eventually build custom hardware to make learning much faster.
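
Here's a tiny sketch of the stochastic rounding trick in Python/numpy: round to the fixed-point grid up or down at random, with probabilities chosen so the result is unbiased in expectation. The fraction length is just an example, and clipping to the representable range is omitted.

```python
import numpy as np

def stochastic_round(x, frac_bits=8):
    """Quantise to a fixed-point grid with 2**-frac_bits resolution, unbiased."""
    scale = 2.0 ** frac_bits
    scaled = x * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor                     # closer to the next grid point
    up = np.random.rand(*np.shape(x)) < prob_up  # ...means more likely to round up
    return (floor + up) / scale

w = np.random.randn(5)
print(stochastic_round(w))   # averages back to w over many draws
```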

Recommendations etc.

Fixed-point algorithms for learning determinantal point processes

Zelda Mariet, Suvrit Sra

Why this paper is cool: If you want to recommend a set of things, rather than just an individual thing, how do you choose the best set? This will tell you.
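
The paper itself is about fixed-point algorithms for learning the DPP kernel, but to give a flavour of what a DPP buys you once you have one, here's a toy greedy selection sketch in Python/numpy: pick the subset whose kernel submatrix has the largest log-determinant, which trades off item quality against similarity. The kernel below is made up, and greedy selection is just one simple way to use a learned DPP, not the paper's algorithm.

```python
import numpy as np

def greedy_dpp_select(L, k):
    """Greedily pick k items maximising log det of the kernel submatrix."""
    selected, remaining = [], list(range(L.shape[0]))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_val:
                best, best_val = i, logdet
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy kernel: similar items have correlated rows, so picking both is penalised.
rng = np.random.default_rng(1)
feats = rng.normal(size=(10, 5))
L = feats @ feats.T + np.eye(10)
print(greedy_dpp_select(L, k=3))
```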

Surrogate Functions for Maximizing Precision at the Top

Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

Why this paper is cool: If you only care about the top n things you recommend, this technique works faster and better than other approaches.

And Finally…

Learning to Search Better than Your Teacher

Kai-Wei Chang, Akshay Krishnamurthy, Alekh Agarwal, Hal Daume, John Langford

Why this paper is cool: A new, general way to do structured prediction (tasks like dependency parsing or semantic parsing) which works well even when there are errors in the training set. Thanks to the authors for talking me through this one!

My favourite papers from day two of ICML 2015

08 July 2015

Yesterday I posted on my favourite papers from the beginning of ICML (some of those papers were actually presented today, although the posters were displayed yesterday). Here’s today’s update, which includes some papers to be presented tomorrow, because the posters were on display today…

Neural Nets

Unsupervised Domain Adaptation by Backpropagation

Yaroslav Ganin, Victor Lempitsky

Imagine you have a small amount of labelled training data and a lot of unlabelled data from a different domain. This technique will allow you to build a neural network model that fits the unlabelled domain. The key idea is super cool and really simple to implement. You build a network that optimises features such that it is difficult to distinguish which domain the data came from.
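
Here's my rough sketch in Python/numpy of where the sign flip goes: a linear "feature extractor" F, a label classifier C trained on labelled source data, and a domain classifier D trained to tell the domains apart, whose gradient is reversed (and scaled by a made-up lambda) on its way back into F. The shapes, losses and single-sample updates are all illustrative, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(scale=0.1, size=(16, 8))   # input -> features
C = rng.normal(scale=0.1, size=(8, 1))    # features -> label score
D = rng.normal(scale=0.1, size=(8, 1))    # features -> domain score
lam, lr = 0.1, 0.01                       # reversal strength, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, y, d):
    """x: input, y: task label (None for unlabelled target data), d: domain label."""
    global F, C, D
    f = x @ F
    # The domain classifier trains normally to distinguish the domains...
    g_d = sigmoid(f @ D) - d                      # dLoss_domain / dscore
    grad_D = np.outer(f, g_d)
    # ...but the gradient reaching the features is REVERSED, so F learns
    # features that make the domains hard to tell apart.
    grad_F = np.outer(x, -lam * (g_d * D).ravel())
    if y is not None:                             # labelled source data: fit the task too
        g_y = sigmoid(f @ C) - y
        grad_C = np.outer(f, g_y)
        grad_F += np.outer(x, (g_y * C).ravel())
        C -= lr * grad_C
    D -= lr * grad_D
    F -= lr * grad_F

step(x=rng.normal(size=16), y=1.0, d=0.0)   # labelled source example
step(x=rng.normal(size=16), y=None, d=1.0)  # unlabelled target example
```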

Weight Uncertainty in Neural Networks

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

Jose Miguel Hernandez-Lobato, Ryan Adams

These papers have a very similar goal, namely making neural networks probabilistic. This is cool because it allows you not only to make a decision, but to know how sure you are about that decision. There are a bunch of other benefits too: you don't need to worry about regularisation, hyperparameter tuning is easier, and so on.

Anyway, the two papers achieve this in two different ways. The first uses Gaussian scale mixtures together with a clever trick to backpropagate expectations. The second one computes the distribution after rectifying and then approximates this with a Gaussian distribution. Either way, this is an exciting development for neural networks.
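
To make the first trick a little more concrete, here's a one-weight sketch in Python of learning a distribution over a weight: sample it as w = mu + sigma * eps with eps ~ N(0, 1), so gradients flow back to mu and sigma through the sample. The toy loss and the softplus parameterisation of sigma are purely illustrative, and the prior/KL term the papers use is left out.

```python
import numpy as np

mu, rho = 0.0, -3.0            # sigma = log(1 + exp(rho)) keeps sigma positive
lr = 0.05
rng = np.random.default_rng(0)

for _ in range(200):
    sigma = np.log1p(np.exp(rho))
    eps = rng.standard_normal()
    w = mu + sigma * eps                            # sampled weight (reparameterised)
    dloss_dw = 2.0 * (w - 1.5)                      # toy loss: (w - 1.5)^2
    dmu = dloss_dw                                  # dw/dmu = 1
    drho = dloss_dw * eps * (1.0 - np.exp(-sigma))  # chain rule through softplus
    mu -= lr * dmu
    rho -= lr * drho

print(mu, np.log1p(np.exp(rho)))   # mu drifts towards 1.5, sigma shrinks
```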

Training Deep Convolutional Neural Networks to Play Go

Christopher Clark, Amos Storkey

Although I’ve never actually played the game, I have an interest in AI Go players, because it’s such a hard game for computers, which still can’t reach the level of human players. The current state of the art uses Monte Carlo tree search which is a really cool technique. The authors of this paper use neural networks to play the game but don’t quite achieve the same level of performance. I asked the author whether the two approaches could be combined, and they think they can! Watch this space for a new state of the art Go player.

Natural Language Processing

Phrase-based Image Captioning

Remi Lebret, Pedro Pinheiro, Ronan Collobert

This is a new state of the art in this very interesting task of labelling images with phrases. The clever bit is in the syntactic analysis of the phrases in the training set, which often follow a similar pattern. The authors use this to their advantage: the model is trained on the individual sub-phrases that are extracted, which allows it to behave compositionally. This means that it can describe, for example, both the fact that a plate is on a table, and that there is pizza on the plate. Unlike previous approaches, the sentences that are generated are not often found in the training set, which shows that it is doing real generation and not retrieval. Exciting stuff!

Bimodal Modelling of Source Code and Natural Language

Miltos Allamanis, Daniel Tarlow, Andrew Gordon, Yi Wei

Another fun paper; this one tries to generate source code given a natural language query, quite an ambitious task! It is trained on snippets of code extracted from StackOverflow.

Optimisation

Gradient-based Hyperparameter Optimization through Reversible Learning

Dougal Maclaurin, David Duvenaud, Ryan Adams

Hyperparameter optimisation is important when training neural networks because there are so many of them floating around. How do you know what to set them to? Normally you have to perform some kind of search over the space of possible values, and Bayesian techniques have been very helpful for doing this. This paper suggests something entirely different and completely audacious. The authors are able to compute gradients for hyperparameters using automatic differentiation after going through a whole round of stochastic gradient descent learning. That's quite a feat. What this means is that we can answer questions about what the optimal hyperparameter settings look like in different scenarios - it makes a whole set of things that was previously a "black art" a lot more scientific and understandable.
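
The paper does this with reverse-mode automatic differentiation through SGD with momentum; as a toy illustration of differentiating a validation loss with respect to a hyperparameter through an entire training run, here's a forward-mode sketch in Python on a one-dimensional quadratic. Everything here (the quadratics, step counts, meta learning rate) is made up.

```python
t_train, t_val = 1.0, 1.2        # training and validation "targets"

def train_and_hypergrad(lr, steps=20, w0=0.0):
    """Run gradient descent, tracking dw/dlr alongside w at every step."""
    w, dw_dlr = w0, 0.0
    for _ in range(steps):
        g = 2.0 * (w - t_train)              # training gradient at this step
        dg_dlr = 2.0 * dw_dlr                # how that gradient depends on lr
        w = w - lr * g                       # ordinary SGD update
        dw_dlr = dw_dlr - g - lr * dg_dlr    # differentiate the update wrt lr
    val_loss = (w - t_val) ** 2
    dval_dlr = 2.0 * (w - t_val) * dw_dlr    # chain rule into the validation loss
    return val_loss, dval_dlr

# A crude outer loop: nudge the learning rate down its own gradient.
lr = 0.05
for _ in range(50):
    val_loss, dval_dlr = train_and_hypergrad(lr)
    lr -= 0.01 * dval_dlr
print(lr, val_loss)
```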

And more…

There were many more interesting papers - too many to write up here. Take a look at the schedule and find your favourite! Let me know on Twitter.
