CMU Deep Learning 2018 by Bhiksha Raj - Study Notes (1)

  • Recitation 2

  • numpy operations
    • array index
    • x = np.arange(10) ** 2

      # x:[ 0 1 4 9 16 25 36 49 64 81]

      print(x[::-1]) # all reversed

      print(x[8:1:-1]) # reversed slice

      print(x[::2]) # every other

      print(x[:]) # no-op (but useful syntax when dealing with n-d arrays)

      ---

      output:

      [81 64 49 36 25 16  9  4  1  0]
      [64 49 36 25 16  9  4]
      [ 0  4 16 36 64]
      [ 0  1  4  9 16 25 36 49 64 81]
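      The same slice syntax extends per axis once the array has more dimensions; a hypothetical sketch (not from the recitation):

      m = np.arange(12).reshape(3, 4)
      print(m[:, 0]) # first column -> [0 4 8]
      print(m[1, :]) # second row -> [4 5 6 7]
      print(m[::-1, :]) # rows reversed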

    • simple syntax
    • # Simple syntax

      np.random.seed(123)

      x=np.random.random((10,))

      print(x)

      print(x>0.5)

      print(x[x>0.5])

      ---

      [ 0.69646919  0.28613933  0.22685145  0.55131477  0.71946897  0.42310646
        0.9807642   0.68482974  0.4809319   0.39211752]
      [ True False False  True  True False  True  True False False]
      [ 0.69646919  0.55131477  0.71946897  0.9807642   0.68482974]
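      Boolean masks also work on the left-hand side for in-place assignment; a small sketch (assumed follow-up, not from the recitation):

      y = x.copy()
      y[y > 0.5] = 0.0 # zero out every entry above 0.5
      print(y)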

    • get diagonal elements
    • # Create a random matrix

      x = np.random.random((5,5))

      print(x)

      # Get diagonal elements

      print(np.diag(x))
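      Note that np.diag also works the other way round: given a 1-D vector it builds a square matrix with that vector on the diagonal.

      print(np.diag(np.array([1.0, 2.0, 3.0]))) # 3x3 diagonal matrix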

    • save a single array
    • x = np.random.random((5,))

      np.save('temp.npy', x)

      y = np.load('temp.npy')

      print(y)

    • save dict of arrays
    • x1 = np.random.random((2,))

      y1 = np.random.random((2,))

      np.savez('temp.npz', x = x1, y = y1)  # savez writes a .npz archive of named arrays

      data = np.load('temp.npz')

      print(data['x'])

      print(data['y'])
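      The object returned by np.load for an .npz archive is dict-like; its files attribute lists the stored keys (quick sketch):

      print(data.files) # e.g. ['x', 'y']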

    • transpose
    • x=np.random.random((2,3))

      print(x)

      print(x.T) # simple transpose

      print(np.transpose(x, (1,0))) # syntax for multiple dimensions

      ---

      [[ 0.6919703   0.55438325  0.38895057]
       [ 0.92513249  0.84167     0.35739757]]
      [[ 0.6919703   0.92513249]
       [ 0.55438325  0.84167   ]
       [ 0.38895057  0.35739757]]
      [[ 0.6919703   0.92513249]
       [ 0.55438325  0.84167   ]
       [ 0.38895057  0.35739757]]
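      The axes argument matters once there are more than two dimensions; a hypothetical 3-D example (not from the recitation):

      a = np.random.random((2, 3, 4))
      print(np.transpose(a, (2, 0, 1)).shape) # (4, 2, 3)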

    • Add/remove a dim
    • # Special functions for adding and removing dims

      x=np.random.random((2,3,1))

      print(np.expand_dims(x, 1).shape) # add a new dimension

      print(np.squeeze(x,2).shape) # remove a dimension (must be size of 1)

      ---

      (2, 1, 3, 1)

      (2, 3)
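      Calling np.squeeze without an axis drops every size-1 dimension at once:

      print(np.squeeze(x).shape) # (2, 3)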

  • PyTorch operations

import torch

import numpy as np

from torch.autograd import Variable

x = torch.FloatTensor(2,3)

print(x)

x.zero_()

print(x)

np.random.seed(123)

np_array = np.random.random((2,3))

print(torch.FloatTensor(np_array))

print(torch.from_numpy(np_array))
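# Note (an assumption about intent, not stated in the notes): torch.from_numpy
# shares memory with the source array, while torch.FloatTensor(np_array) copies it.
t = torch.from_numpy(np_array)
np_array[0, 0] = 0.0
print(t[0, 0])  # reflects the change because the storage is shared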

torch.manual_seed(123)

print(torch.randn(2,3))

print(torch.eye(3))

print(torch.ones(2,3))

print(torch.zeros(2,3))

print(torch.arange(0,3))

x = torch.FloatTensor(3,4)

print(x.size())

print(x.type())

x = torch.rand(3,2)

print(x)

y = x.cuda()

print(y)

z = y.cpu()

print(z)

print(z.numpy())

x = torch.rand(3,5).cuda()

y = torch.rand(5,4).cuda()

print(torch.mm(x,y))

print(x.new(1,2).zero_())

from timeit import timeit

x = torch.rand(1000,64)

y = torch.rand(64,32)

number = 10000

def square():
    z = torch.mm(x, y)  # matrix multiply on whichever device x and y currently live on

print('CPU: {}ms'.format(timeit(square, number = number) * 1000))

x,y = x.cuda(),y.cuda()

print('GPU: {}ms'.format(timeit(square, number = number) * 1000))
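# Caveat (assumption, not from the recitation): CUDA kernels launch asynchronously,
# so an accurate GPU timing should synchronize inside the timed function, e.g.:
def square_sync():
    z = torch.mm(x, y)
    torch.cuda.synchronize()  # wait for the kernel to finish before the timer stops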

x = torch.arange(0,5)

print(torch.sum(x))

print(torch.sum(torch.exp(x)))

print(torch.mean(x))

x = torch.rand(3,2)

print(x)

print(x[1,:])

x = Variable(torch.arange(0,4),requires_grad = True)

y = torch.sum(x**2)

y.backward()

print(x)

print(y)

print(x.grad)
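# Why x.grad equals [0, 2, 4, 6]: y = sum(x_i^2), so dy/dx_i = 2 * x_i,
# and x is arange(0, 4) = [0, 1, 2, 3].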

x = torch.rand(3,5)

y = torch.rand(5,4)

xv = Variable(x)

yv = Variable(y)

print(torch.mm(x,y))

print(torch.mm(xv,yv))

x = Variable(torch.arange(0,4),requires_grad = True)

torch.sum(x ** 2).backward()

print(x.grad)

torch.sum(x ** 2).backward()

print(x.grad)

x.grad.data.zero_()

torch.sum(x ** 2).backward()

print(x.grad)
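# Pattern note (assumed usage, not shown in the notes): because gradients accumulate
# across backward() calls, training loops clear them every iteration, typically via
# an optimizer, e.g.:
#   optimizer = torch.optim.SGD([x], lr=0.1)
#   optimizer.zero_grad()
#   loss.backward()
#   optimizer.step()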

net = torch.nn.Sequential(

torch.nn.Linear(28*28,256),

torch.nn.Sigmoid(),

torch.nn.Linear(256,10)

)

print(net.state_dict().keys())

print(net.state_dict())

torch.save(net.state_dict(), 'test.t7')

net.load_state_dict(torch.load('test.t7'))
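# Aside (not in the original notes): torch.save(net, 'whole_model.t7') would pickle
# the entire module, but saving and loading the state_dict as above is the usual pattern.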

class MyNetwork(torch.nn.Module):

    def __init__(self):
        super().__init__()
        self.layer1 = torch.nn.Linear(28*28, 256)
        self.layer2 = torch.nn.Sigmoid()
        self.layer3 = torch.nn.Linear(256, 10)

    def forward(self, input_val):
        h = input_val
        h = self.layer1(h)
        h = self.layer2(h)
        h = self.layer3(h)
        return h

net = MyNetwork()
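A minimal forward pass to sanity-check the module, assuming MNIST-sized inputs flattened to 28*28 (hypothetical batch, not from the recitation):

batch = Variable(torch.rand(4, 28*28))  # 4 flattened fake images
out = net(batch)
print(out.size())  # expected: (4, 10)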

Original post: https://www.cnblogs.com/ecoflex/p/8870121.html

