Geoffrey E. Hinton

https://www.cs.toronto.edu/~hinton/


I am an Engineering Fellow at Google where I manage Brain Team Toronto, which is a new part of the Google Brain Team and is located at Google's Toronto office at 111 Richmond Street. Brain Team Toronto does basic research on ways to improve neural network learning techniques. I also do pro bono work as the Chief Scientific Adviser of the new Vector Institute. I am also an Emeritus Professor at the University of Toronto.

Department of Computer Science   email: geoffrey [dot] hinton [at] gmail [dot] com
University of Toronto   voice: send email
6 King's College Rd.   fax: scan and send email
Toronto, Ontario

Information for prospective students:
I advise interns at Brain Team Toronto.
I also advise some of the residents in the Google Brain Residency Program.
I will not be taking any more visiting students, summer students, or visitors at the University of Toronto. I will not be the sole advisor of any new graduate students, but I may co-advise a few graduate students with Prof. Roger Grosse or soon-to-be Prof. Jimmy Ba.

News 
Results of the 2012 competition to recognize 1000 different types of object
How George Dahl won the competition to predict the activity of potential drugs
How Vlad Mnih won the competition to predict job salaries from job advertisements
How Laurens van der Maaten won the competition to visualize a dataset of potential drugs

Using big data to make people vote against their own interests 
A possible motive for making people vote against their own interests

Basic papers on deep learning

Hinton, G. E., Osindero, S. and Teh, Y. (2006)
A fast learning algorithm for deep belief nets.
Neural Computation, 18, pp 1527-1554. [pdf]
Movies of the neural network generating and recognizing digits

Hinton, G. E. and Salakhutdinov, R. R. (2006)
Reducing the dimensionality of data with neural networks.
Science, Vol. 313. no. 5786, pp. 504 - 507, 28 July 2006.
[full paper] [supporting online material (pdf)] [Matlab code]

 LeCun, Y., Bengio, Y. and Hinton, G. E. (2015)
Deep Learning
Nature, Vol. 521, pp 436-444. [pdf]

Papers on deep learning without much math

Hinton, G. E. (2007)
To recognize shapes, first learn to generate images
In P. Cisek, T. Drew and J. Kalaska (Eds.)
Computational Neuroscience: Theoretical Insights into Brain Function. Elsevier. [pdf of final draft]

Hinton, G. E. (2007)
Learning Multiple Layers of Representation.
Trends in Cognitive Sciences, Vol. 11, pp 428-434. [pdf]

Hinton, G. E. (2014)
Where do features come from?
Cognitive Science, Vol. 38(6), pp 1078-1101. [pdf]

A practical guide to training restricted Boltzmann machines
[pdf]
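
As a rough illustration of what that guide covers, here is a minimal sketch of a single contrastive-divergence (CD-1) update for a binary restricted Boltzmann machine, written in NumPy with made-up layer sizes, a random stand-in mini-batch, and an illustrative learning rate; none of these names or numbers come from the guide itself.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sizes: 6 visible units, 4 hidden units, mini-batch of 10 binary vectors.
n_vis, n_hid, batch = 6, 4, 10
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)
v0 = (rng.random((batch, n_vis)) > 0.5).astype(float)  # stand-in data
lr = 0.1

# Positive phase: hidden probabilities and a binary sample given the data.
h0_prob = sigmoid(v0 @ W + b_hid)
h0_samp = (rng.random(h0_prob.shape) < h0_prob).astype(float)

# Negative phase: one step of Gibbs sampling (the CD-1 reconstruction).
v1_prob = sigmoid(h0_samp @ W.T + b_vis)
h1_prob = sigmoid(v1_prob @ W + b_hid)

# Update parameters with the difference between data and reconstruction statistics.
W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
b_vis += lr * (v0 - v1_prob).mean(axis=0)
b_hid += lr * (h0_prob - h1_prob).mean(axis=0)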

Recent Papers

 Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017)
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer
arXiv preprint arXiv:1701.06538 [pdf]

 Ba, J. L., Hinton, G. E., Mnih, V., Leibo, J. Z. and Ionescu, C. (2016)
Using Fast Weights to Attend to the Recent Past
NIPS-2016, arXiv preprint arXiv:1610.06258v2 [pdf]

 Ba, J. L., Kiros, J. R. and Hinton, G. E. (2016)
Layer normalization
Deep Learning Symposium, NIPS-2016, arXiv preprint arXiv:1607.06450 [pdf]

Eslami, S. M. A., Heess, N., Weber, T., Tassa, Y., Szepesvari, D., Kavukcuoglu, K. and Hinton, G. E. (2016)
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
NIPS-2016, arXiv preprint arXiv:1603.08575v3 [pdf]

LeCun, Y., Bengio, Y. and Hinton, G. E. (2015)
Deep Learning
Nature, Vol. 521, pp 436-444. [pdf]

Hinton, G. E., Vinyals, O., and Dean, J. (2015)
Distilling the knowledge in a neural network
arXiv preprint arXiv:1503.02531 [pdf]

Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., & Hinton, G. E. (2014)
Grammar as a foreign language.
arXiv preprint arXiv:1412.7449 [pdf]

Hinton, G. E. (2014)
Where do features come from?
Cognitive Science, Vol. 38(6), pp 1078-1101. [pdf]

Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R. (2014)
Dropout: A simple way to prevent neural networks from overfitting
The Journal of Machine Learning Research, 15(1), pp 1929-1958. [pdf]

Srivastava, N., Salakhutdinov, R. R. and Hinton, G. E. (2013)
Modeling Documents with a Deep Boltzmann Machine
arXiv preprint arXiv:1309.6865 [pdf]

Graves, A., Mohamed, A. and Hinton, G. E. (2013)
Speech Recognition with Deep Recurrent Neural Networks
In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2013), Vancouver, 2013. [pdf]

Joseph Turian's map of 2500 English words produced by using t-SNE on the word feature vectors learned by Collobert & Weston, ICML 2008

Doing analogies by using vector algebra on word embeddings
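
The idea behind such analogies is that relationships like gender or tense show up as roughly constant offsets between word vectors, so vec("king") - vec("man") + vec("woman") lands near vec("queen"). Below is a minimal sketch of that arithmetic using a tiny dictionary of made-up three-dimensional vectors; real embeddings would come from a trained model and have hundreds of dimensions, and the emb dictionary and analogy helper are illustrative, not from any particular library.

import numpy as np

# Hypothetical toy embeddings; real word vectors come from a trained model.
emb = {
    "king":  np.array([0.8, 0.9, 0.1]),
    "queen": np.array([0.8, 0.9, 0.9]),
    "man":   np.array([0.7, 0.1, 0.1]),
    "woman": np.array([0.7, 0.1, 0.9]),
}

def analogy(a, b, c, emb):
    """Return the word whose vector is closest (by cosine) to vec(b) - vec(a) + vec(c)."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue  # exclude the query words themselves
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(analogy("man", "king", "woman", emb))  # expected: "queen"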
