CMU Deep Learning 2018 by Bhiksha Raj, Study Notes (20), Lecture 20: Hopfield Networks 1

The symmetric version of the network (weights satisfying w_ij = w_ji) is called a Hopfield network.
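A minimal sketch of what that looks like in code (assuming ±1 units, zero biases, and a Hebbian/outer-product weight rule; the function names are illustrative, not from the lecture):

import numpy as np

def train_hebbian(patterns):
    # patterns: shape (num_patterns, N), entries in {-1, +1}
    N = patterns.shape[1]
    W = patterns.T @ patterns / N      # outer-product (Hebbian) rule
    np.fill_diagonal(W, 0)             # no self-connections
    return W                           # symmetric: W == W.T

def energy(W, y):
    # Hopfield energy with no bias term: E = -1/2 * y^T W y
    return -0.5 * y @ W @ y

def hopfield_update(W, y, steps=100):
    # asynchronous updates: pick one unit at a time and set it to the
    # sign of its net input; each update can only lower (or keep) the energy
    y = y.copy()
    for _ in range(steps):
        i = np.random.randint(len(y))
        y[i] = 1 if W[i] @ y >= 0 else -1
    return y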

    

If y ← -y (every unit's state is flipped), the energy does not change: with no bias term, E = -(1/2) * Σ_{i≠j} w_ij y_i y_j is quadratic in y, so the two sign flips cancel.
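A quick numerical check of that claim (a sketch using the same zero-bias energy as above; the symmetric W here is random, just for illustration):

import numpy as np

N = 8
A = np.random.randn(N, N)
W = (A + A.T) / 2                      # symmetric weights
np.fill_diagonal(W, 0)                 # no self-connections

y = np.random.choice([-1, 1], size=N)
E_pos = -0.5 * y @ W @ y               # E(y)
E_neg = -0.5 * (-y) @ W @ (-y)         # E(-y)
print(np.isclose(E_pos, E_neg))        # True: the sign flips cancel in the quadratic form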

Note: in the second-to-last line on the slide, the first "-" is not a minus sign.

https://zhuanlan.zhihu.com/p/21539285

Original post: https://www.cnblogs.com/ecoflex/p/8971351.html

Posted: 2024-11-07 21:04:39
