20170721 PyTorch Study Notes: Computational Graphs

1. The difference between *args and **kwargs

    # build_vocab (this appears to be torchtext's Field.build_vocab):
    # *args gathers the positional arguments (Datasets or raw data sources),
    # **kwargs gathers the keyword arguments, forwarded unchanged to Vocab.
    def build_vocab(self, *args, **kwargs):
        counter = Counter()
        sources = []
        for arg in args:
            if isinstance(arg, Dataset):
                sources += [getattr(arg, name) for name, field in
                            arg.fields.items() if field is self]
            else:
                sources.append(arg)
        for data in sources:
            for x in data:
                if not self.sequential:
                    x = [x]
                counter.update(x)
        specials = list(OrderedDict.fromkeys(
            tok for tok in [self.pad_token, self.init_token, self.eos_token]
            if tok is not None))
        self.vocab = Vocab(counter, specials=specials, **kwargs)
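
A minimal standalone sketch of the mechanism itself (the function and argument names here are illustrative, not from torchtext): *args collects extra positional arguments into a tuple, while **kwargs collects extra keyword arguments into a dict.

    def demo(*args, **kwargs):
        print(args)     # extra positional arguments arrive as a tuple
        print(kwargs)   # extra keyword arguments arrive as a dict

    demo(1, 2, 3, pad_token='<pad>', eos_token='<eos>')
    # (1, 2, 3)
    # {'pad_token': '<pad>', 'eos_token': '<eos>'}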

2. np.sum

    import numpy as np
    np.random.seed(0)

    N, D = 3, 4
    x = np.random.randn(N, D)
    y = np.random.randn(N, D)
    z = np.random.randn(N, D)

    a = x * y
    b = a + z
    print(b)
    c = np.sum(b)
    print(c)                # 6.7170085378

    # reproduce what np.sum does with explicit loops
    total = 0
    for i in range(N):
        for j in range(D):
            total += b[i][j]
    print(total)            # 6.7170085378
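
As a side note, np.sum can also reduce along a single axis instead of over all elements; a quick sketch:

    import numpy as np

    b = np.arange(12).reshape(3, 4)
    print(np.sum(b))          # 66, sum over all elements
    print(np.sum(b, axis=0))  # [12 15 18 21], column sums, shape (4,)
    print(np.sum(b, axis=1))  # [ 6 22 38], row sums, shape (3,)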

3. Computing gradients by hand with NumPy

    import numpy as np
    N, D = 3, 4
    x = np.random.randn(N, D)
    y = np.random.randn(N, D)
    z = np.random.randn(N, D)

    # forward pass: c = sum(x * y + z)
    a = x * y
    b = a + z
    c = np.sum(b)

    # backward pass, applying the chain rule by hand
    grad_c = 1.0
    grad_b = grad_c * np.ones((N, D))   # dc/db = 1 for every element
    grad_a = grad_b.copy()              # b = a + z, so db/da = 1
    grad_z = grad_b.copy()              # ... and db/dz = 1
    grad_x = grad_a * y                 # a = x * y, so da/dx = y
    grad_y = grad_a * x                 # ... and da/dy = x

    print(grad_x)
    print(grad_y)
    print(grad_z)
    '''
    [[ 0.04998285  0.32809396 -0.49822878  1.36419309]
     [-0.52303972 -0.5881509  -0.37058995 -1.42112189]
     [-0.58705758 -0.26012336  1.31326911 -0.20088737]]
    [[ 0.14893265 -0.45509058  0.21410015  0.27659   ]
     [ 0.29617438  0.98971103  2.07310583 -0.0195055 ]
     [-1.49222601 -0.64073344 -0.18269488  0.26193553]]
    [[ 1.  1.  1.  1.]
     [ 1.  1.  1.  1.]
     [ 1.  1.  1.  1.]]
    '''

PyTorch computes the same gradients automatically:

    import torch
    from torch.autograd import Variable

    N, D = 3, 4
    # define Variables to start building a computational graph
    x = Variable(torch.randn(N, D), requires_grad=True)
    y = Variable(torch.randn(N, D), requires_grad=True)
    z = Variable(torch.randn(N, D), requires_grad=True)

    # forward pass looks just like numpy
    a = x * y
    b = a + z
    c = torch.sum(b)

    # calling c.backward() computes all gradients
    c.backward()
    print(x.grad.data)
    print(y.grad.data)
    print(z.grad.data)
    '''
    -0.9775 -0.0913  0.3710  1.5789
     0.0896 -0.6563  0.8976 -0.3508
    -0.9378  0.7028  1.4533  0.9255
    [torch.FloatTensor of size 3x4]


     0.6365  0.2388 -0.4755 -0.9860
    -0.2403 -0.0468 -0.0470 -1.0132
    -0.5019  0.5005 -1.9270  1.0030
    [torch.FloatTensor of size 3x4]


     1  1  1  1
     1  1  1  1
     1  1  1  1
    [torch.FloatTensor of size 3x4]
    '''
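
The snippet above uses the old Variable API. Since PyTorch 0.4, Variable has been merged into Tensor, so the same graph can be built directly on tensors; a minimal sketch of the newer style:

    import torch

    N, D = 3, 4
    x = torch.randn(N, D, requires_grad=True)
    y = torch.randn(N, D, requires_grad=True)
    z = torch.randn(N, D, requires_grad=True)

    c = torch.sum(x * y + z)
    c.backward()
    print(x.grad)   # equals y, since dc/dx = y
    print(y.grad)   # equals x, since dc/dy = x
    print(z.grad)   # all ones, since dc/dz = 1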

4. Random numbers

(1) random.seed(int)

  • Gives the random number generator a seed value that determines the sequence it produces.
  • The same seed always yields the same sequence of random numbers.
  • A changing value, such as the current time in seconds, is usually used as the seed so that each run produces a different sequence.
  • Calling seed() with no argument seeds the generator from a fresh source (for example the current system time), so the output changes on every run.

(2) random.random()

  Generates a random float.

    import numpy as np
    np.random.seed(0)

    print(np.random.random())       # 0.5488135039273248, fixed by the seed
    print(np.random.random())       # 0.7151893663724195, fixed by the seed

    np.random.seed(0)
    print(np.random.random())       # 0.5488135039273248, same seed, same value

    np.random.seed()
    print(np.random.random())       # e.g. 0.9623797942471012, varies per run
    np.random.seed()
    print(np.random.random())       # e.g. 0.12734792669918393, varies per run

(3) random.shuffle

  • Randomly shuffles a list in place, like shuffling a deck of cards.
  • shuffle only works on mutable sequences such as lists: a str such as 'abcdfed' raises an error, while ['1', '2', '3', '5', '6', '7'] works (see the sketch after the code block below).
    import numpy as np

    item = [1, 2, 3, 4, 5, 6, 7]
    print(item)                     # [1, 2, 3, 4, 5, 6, 7]
    np.random.shuffle(item)
    print(item)                     # e.g. [7, 1, 2, 5, 4, 6, 3]

    item2 = ['1', '2', '3']
    np.random.shuffle(item2)
    print(item2)                    # e.g. ['1', '3', '2']
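
As noted above, shuffling a str fails because strings are immutable. A small sketch (the exact exception type and message may vary with the NumPy version, so it is caught broadly here):

    import numpy as np

    s = 'abcdfed'
    try:
        np.random.shuffle(s)        # in-place swaps fail on an immutable str
    except Exception as e:
        print(type(e).__name__, e)  # expect a TypeError

    # to shuffle a string's characters, shuffle a list of them instead
    chars = list(s)
    np.random.shuffle(chars)
    print(''.join(chars))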

Reference blog: Python random number usage
