MNIST is a classic handwritten-digit dataset: 60,000 training samples and 10,000 test samples, each a 28×28 image. It is the first example we configure on Caffe.
1. First, fetch the MNIST data. This version comes as four archives:
cd $CAFFE_ROOT
./data/mnist/get_mnist.sh
- #!/usr/bin/env sh
- # This scripts downloads the mnist data and unzips it.
- DIR="$( cd "$(dirname "$0")" ; pwd -P )"
- cd $DIR
- echo "Downloading..."
- wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
- wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
- wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
- wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
- echo "Unzipping..."
- gunzip train-images-idx3-ubyte.gz
- gunzip train-labels-idx1-ubyte.gz
- gunzip t10k-images-idx3-ubyte.gz
- gunzip t10k-labels-idx1-ubyte.gz
- # Creation is split out because leveldb sometimes causes segfault
- # and needs to be re-created.
- echo "Done."
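Incidentally, the downloaded files use LeCun's IDX format: a big-endian header (a magic number, the item count, and, for images, the row and column sizes) followed by raw bytes. A small sketch for inspecting the training images, assuming only Python with numpy (my own illustration, not part of the Caffe tooling):

import struct
import numpy as np

with open('data/mnist/train-images-idx3-ubyte', 'rb') as f:
    # IDX3 header: magic 2051, image count, rows, cols (big-endian uint32s)
    magic, num, rows, cols = struct.unpack('>IIII', f.read(16))
    assert magic == 2051
    first = np.frombuffer(f.read(rows * cols), dtype=np.uint8)
    print(num, rows, cols)                   # 60000 28 28
    print(first.reshape(rows, cols).max())   # raw pixel values in 0..255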
Then run:
./examples/mnist/create_mnist.sh
- Creating lmdb...
- Done.
What does this step do?
create_mnist.sh uses the convert_mnist_data.bin tool built under caffe-master/build/examples/mnist/ to convert the raw MNIST data into LMDB format, and places the two newly generated databases, mnist-train-lmdb and mnist-test-lmdb, in the same directory as create_mnist.sh.
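To sanity-check the conversion, the new database can be read back from Python. A minimal sketch, assuming pycaffe and the lmdb Python package are available:

import lmdb
import numpy as np
from caffe.proto import caffe_pb2

env = lmdb.open('examples/mnist/mnist_train_lmdb', readonly=True)
with env.begin() as txn:
    key, value = next(txn.cursor().iternext())
    datum = caffe_pb2.Datum()                # each record is a serialized Datum
    datum.ParseFromString(value)
    img = np.frombuffer(datum.data, dtype=np.uint8)
    img = img.reshape(datum.channels, datum.height, datum.width)
    print(key, datum.label, img.shape)       # label in 0..9, shape (1, 28, 28)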
2. With the data prepared, the next task is training.
The example given at http://caffe.berkeleyvision.org/gathered/examples/mnist.html is
./examples/mnist/train_lenet.sh
This script invokes the following command:
- ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt
There are other examples as well, such as:
./examples/mnist/train_mnist_autoencoder.sh
which in turn invokes:
- ./build/tools/caffe train \
- --solver=examples/mnist/mnist_autoencoder_solver.prototxt
When the run completes, four files are generated:
lenet_iter_10000.caffemodel
lenet_iter_10000.solverstate
lenet_iter_5000.caffemodel
lenet_iter_5000.solverstate
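The .solverstate files exist so that an interrupted run can be resumed (the CLI equivalent is the -snapshot flag of ./build/tools/caffe train). A minimal pycaffe sketch, assuming Caffe was built with the Python bindings and that your version provides caffe.get_solver (older versions expose caffe.SGDSolver instead):

import caffe

caffe.set_mode_cpu()
solver = caffe.get_solver('examples/mnist/lenet_solver.prototxt')
# Restore the weights plus the solver's internal state (iteration count,
# momentum history) from the 5,000-iteration snapshot, then keep training.
solver.restore('examples/mnist/lenet_iter_5000.solverstate')
solver.solve()   # continues up to max_iter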
The console prints progress as training runs; the tail end looks like this:
.......................
I0126 17:32:32.171516 18290 solver.cpp:246] Iteration 10000, loss = 0.00453533
I0126 17:32:32.171550 18290 solver.cpp:264] Iteration 10000, Testing net (#0)
I0126 17:32:40.498195 18290 solver.cpp:315] Test net output #0: accuracy = 0.9903
I0126 17:32:40.498236 18290 solver.cpp:315] Test net output #1: loss = 0.0309918 (* 1 = 0.0309918 loss)
I0126 17:32:40.498245 18290 solver.cpp:251] Optimization Done.
I0126 17:32:40.498249 18290 caffe.cpp:121] Optimization Done.
First, consider the network model being trained: LeNet, the MNIST classification model (http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf).
The LeNet model has long performed very well on handwritten-digit recognition. What Caffe provides here is a slightly modified LeNet in which the sigmoid activations are replaced with rectified linear units (ReLUs).
(A standard sigmoid's output is not sparse: producing sparse representations requires penalty terms such as L1, L1/L2, or Student-t to train a large number of redundant activations down toward zero, which is why unsupervised pre-training is needed. Moreover, a multi-layer network using sigmoid or tanh activations without pre-training fails to converge because of the gradient vanishing problem. ReLU has neither issue. ReLU is a linear rectifier with formula g(x) = max(0, x), a piecewise-linear variant of purelin: if the computed value is below zero it is clamped to zero, otherwise it passes through unchanged. This is a simple, blunt way of forcing some activations to zero, yet practice shows that the trained network ends up with an appropriate degree of sparsity, and its visualizations look much like those obtained with traditional pre-training, which suggests ReLU itself induces moderate sparsity. From the discussion at http://tieba.baidu.com/p/3061925556)
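A quick numpy illustration of that sparsity argument: sigmoid never produces an exact zero, whereas ReLU zeroes out roughly half of zero-mean random pre-activations:

import numpy as np

x = np.random.randn(100000)            # zero-mean pre-activations
sigmoid = 1.0 / (1.0 + np.exp(-x))
relu = np.maximum(0.0, x)              # g(x) = max(0, x)

print((sigmoid == 0).mean())           # 0.0: never exactly zero
print((relu == 0).mean())              # ~0.5: hard zeros, i.e. sparsity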
LeNet's structure resembles that of a typical CNN: two convolutional layers interleaved with pooling layers, followed by two fully connected layers. See http://deeplearning.net/tutorial/lenet.html for details; the DeepID post also has a simple, intuitive diagram that helps: http://blog.csdn.net/stdcoutzyx/article/details/42091205
Defining the MNIST network and the MNIST solver:
From the command that ./examples/mnist/train_lenet.sh invokes, we can see that the solver is defined in $CAFFE_ROOT/examples/mnist/lenet_solver.prototxt:
- # The train/test net protocol buffer definition
- net: "examples/mnist/lenet_train_test.prototxt" // the concrete network definition
- # test_iter specifies how many forward passes the test should carry out.
- # In the case of MNIST, we have test batch size 100 and 100 test iterations,
- # covering the full 10,000 testing images.
- test_iter: 100 // number of test iterations: with batch_size = 100, 100 batches of 100 images cover all 10,000 test images
- # Carry out testing every 500 training iterations.
- test_interval: 500 // test once every 500 training iterations
- # The base learning rate, momentum and the weight decay of the network. // network hyperparameters: learning rate, momentum, weight decay
- base_lr: 0.01
- momentum: 0.9
- weight_decay: 0.0005
- # The learning rate policy // policies include a fixed rate and various decaying schedules; see the sketch after this listing
- lr_policy: "inv"
- gamma: 0.0001
- power: 0.75
- # Display every 100 iterations // print progress every 100 iterations
- display: 100
- # The maximum number of iterations // maximum number of training iterations
- max_iter: 10000
- # snapshot intermediate results // snapshot every 5,000 iterations, with path prefix examples/mnist/lenet
- snapshot: 5000
- snapshot_prefix: "examples/mnist/lenet"
- solver_mode: CPU
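For the "inv" policy, Caffe computes the effective learning rate as base_lr * (1 + gamma * iter)^(-power). A quick check of how the rate decays over this solver's 10,000 iterations (plain Python, no Caffe needed):

base_lr, gamma, power = 0.01, 0.0001, 0.75

def inv_lr(it):
    # lr_policy "inv": base_lr * (1 + gamma * iter) ^ (-power)
    return base_lr * (1.0 + gamma * it) ** (-power)

for it in (0, 100, 1000, 5000, 10000):
    print(it, inv_lr(it))   # decays smoothly from 0.01 to about 0.0059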
Now look at the solver that ./examples/mnist/train_mnist_autoencoder.sh uses, defined in mnist_autoencoder_solver.prototxt (note the two test stages, each with its own test_iter):
- net: "examples/mnist/mnist_autoencoder.prototxt"
- test_state: { stage: 'test-on-train' }
- test_iter: 500
- test_state: { stage: 'test-on-test' }
- test_iter: 100
- test_interval: 500
- test_compute_loss: true
- base_lr: 0.01
- lr_policy: "step"
- gamma: 0.1
- stepsize: 10000
- display: 100
- max_iter: 65000
- weight_decay: 0.0005
- snapshot: 10000
- snapshot_prefix: "examples/mnist/mnist_autoencoder"
- momentum: 0.9
- # solver mode: CPU or GPU
- solver_mode: GPU
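Here lr_policy: "step" instead drops the rate by a factor of gamma every stepsize iterations, i.e. lr = base_lr * gamma^floor(iter / stepsize):

base_lr, gamma, stepsize = 0.01, 0.1, 10000

def step_lr(it):
    # lr_policy "step": base_lr * gamma ^ floor(iter / stepsize)
    return base_lr * gamma ** (it // stepsize)

for it in (0, 9999, 10000, 30000, 64999):
    print(it, step_lr(it))   # 0.01, 0.01, 0.001, 1e-05, 1e-08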
The MNIST network itself is defined in $CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt, as the second line of the solver already notes.
For a detailed walkthrough, see http://caffe.berkeleyvision.org/gathered/examples/mnist.html
$CAFFE_ROOT/examples/mnist/lenet_train_test.prototxt:
- name: "LeNet"
- layers {
- name: "mnist"
- type: DATA
- top: "data"
- top: "label"
- data_param {
- source: "examples/mnist/mnist_train_lmdb"
- backend: LMDB
- batch_size: 64
- }
- transform_param {
- scale: 0.00390625
- }
- include: { phase: TRAIN }
- }
- layers {
- name: "mnist"
- type: DATA
- top: "data"
- top: "label"
- data_param {
- source: "examples/mnist/mnist_test_lmdb"
- backend: LMDB
- batch_size: 100
- }
- transform_param {
- scale: 0.00390625
- }
- include: { phase: TEST }
- }
- layers {
- name: "conv1"
- type: CONVOLUTION
- bottom: "data"
- top: "conv1"
- blobs_lr: 1
- blobs_lr: 2
- convolution_param {
- num_output: 20
- kernel_size: 5
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- name: "pool1"
- type: POOLING
- bottom: "conv1"
- top: "pool1"
- pooling_param {
- pool: MAX
- kernel_size: 2
- stride: 2
- }
- }
- layers {
- name: "conv2"
- type: CONVOLUTION
- bottom: "pool1"
- top: "conv2"
- blobs_lr: 1
- blobs_lr: 2
- convolution_param {
- num_output: 50
- kernel_size: 5
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- name: "pool2"
- type: POOLING
- bottom: "conv2"
- top: "pool2"
- pooling_param {
- pool: MAX
- kernel_size: 2
- stride: 2
- }
- }
- layers {
- name: "ip1"
- type: INNER_PRODUCT
- bottom: "pool2"
- top: "ip1"
- blobs_lr: 1
- blobs_lr: 2
- inner_product_param {
- num_output: 500
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- name: "relu1"
- type: RELU
- bottom: "ip1"
- top: "ip1"
- }
- layers {
- name: "ip2"
- type: INNER_PRODUCT
- bottom: "ip1"
- top: "ip2"
- blobs_lr: 1
- blobs_lr: 2
- inner_product_param {
- num_output: 10
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- name: "accuracy"
- type: ACCURACY
- bottom: "ip2"
- bottom: "label"
- top: "accuracy"
- include: { phase: TEST }
- }
- layers {
- name: "loss"
- type: SOFTMAX_LOSS
- bottom: "ip2"
- bottom: "label"
- top: "loss"
- }
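Tracing the blob shapes through these layers is a useful sanity check (the net-setup log later in this post prints the same numbers); also note that scale: 0.00390625 is 1/256, which maps raw pixel values from [0, 255] into [0, 1). The shape arithmetic, using the valid conv/pool formula out = (in - kernel) / stride + 1:

def out_size(in_size, kernel, stride):
    # valid convolution / pooling: out = (in - kernel) / stride + 1
    return (in_size - kernel) // stride + 1

s = 28                    # input: 1x28x28
s = out_size(s, 5, 1)     # conv1 (20 filters) -> 20x24x24
s = out_size(s, 2, 2)     # pool1              -> 20x12x12
s = out_size(s, 5, 1)     # conv2 (50 filters) -> 50x8x8
s = out_size(s, 2, 2)     # pool2              -> 50x4x4
print(s)                  # 4; ip1 then maps 50*4*4 = 800 -> 500 -> ip2 -> 10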
The network definition in examples/mnist/mnist_autoencoder.prototxt is considerably more involved; note also that it uses sigmoid activations rather than ReLU:
- name: "MNISTAutoencoder"
- layers {
- top: "data"
- name: "data"
- type: DATA
- data_param {
- source: "examples/mnist/mnist_train_lmdb"
- backend: LMDB
- batch_size: 100
- }
- transform_param {
- scale: 0.0039215684
- }
- include: { phase: TRAIN }
- }
- layers {
- top: "data"
- name: "data"
- type: DATA
- data_param {
- source: "examples/mnist/mnist_train_lmdb"
- backend: LMDB
- batch_size: 100
- scale: 0.0039215684
- }
- include: {
- phase: TEST
- stage: 'test-on-train'
- }
- }
- layers {
- top: "data"
- name: "data"
- type: DATA
- data_param {
- source: "examples/mnist/mnist_test_lmdb"
- backend: LMDB
- batch_size: 100
- }
- transform_param {
- scale: 0.0039215684
- }
- include: {
- phase: TEST
- stage: 'test-on-test'
- }
- }
- layers {
- bottom: "data"
- top: "flatdata"
- name: "flatdata"
- type: FLATTEN
- }
- layers {
- bottom: "data"
- top: "encode1"
- name: "encode1"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 1000
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "encode1"
- top: "encode1neuron"
- name: "encode1neuron"
- type: SIGMOID
- }
- layers {
- bottom: "encode1neuron"
- top: "encode2"
- name: "encode2"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 500
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "encode2"
- top: "encode2neuron"
- name: "encode2neuron"
- type: SIGMOID
- }
- layers {
- bottom: "encode2neuron"
- top: "encode3"
- name: "encode3"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 250
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "encode3"
- top: "encode3neuron"
- name: "encode3neuron"
- type: SIGMOID
- }
- layers {
- bottom: "encode3neuron"
- top: "encode4"
- name: "encode4"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 30
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "encode4"
- top: "decode4"
- name: "decode4"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 250
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "decode4"
- top: "decode4neuron"
- name: "decode4neuron"
- type: SIGMOID
- }
- layers {
- bottom: "decode4neuron"
- top: "decode3"
- name: "decode3"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 500
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "decode3"
- top: "decode3neuron"
- name: "decode3neuron"
- type: SIGMOID
- }
- layers {
- bottom: "decode3neuron"
- top: "decode2"
- name: "decode2"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 1000
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "decode2"
- top: "decode2neuron"
- name: "decode2neuron"
- type: SIGMOID
- }
- layers {
- bottom: "decode2neuron"
- top: "decode1"
- name: "decode1"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 1
- weight_decay: 1
- weight_decay: 0
- inner_product_param {
- num_output: 784
- weight_filler {
- type: "gaussian"
- std: 1
- sparse: 15
- }
- bias_filler {
- type: "constant"
- value: 0
- }
- }
- }
- layers {
- bottom: "decode1"
- bottom: "flatdata"
- top: "cross_entropy_loss"
- name: "loss"
- type: SIGMOID_CROSS_ENTROPY_LOSS
- loss_weight: 1
- }
- layers {
- bottom: "decode1"
- top: "decode1neuron"
- name: "decode1neuron"
- type: SIGMOID
- }
- layers {
- bottom: "decode1neuron"
- bottom: "flatdata"
- top: "l2_error"
- name: "loss"
- type: EUCLIDEAN_LOSS
- loss_weight: 0
- }
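Note the two loss layers at the end: SIGMOID_CROSS_ENTROPY_LOSS takes the raw decode1 logits together with flatdata and drives training (loss_weight: 1), while the EUCLIDEAN_LOSS on the sigmoid output has loss_weight: 0 and is computed purely for monitoring. A numpy sketch of the numerically stable cross-entropy that the first loss evaluates (my own illustration of the formula, not Caffe code):

import numpy as np

def sigmoid_cross_entropy(logits, targets):
    # Stable form of -[t*log(sigmoid(z)) + (1-t)*log(1-sigmoid(z))],
    # summed over elements and normalized by batch size.
    z, t = logits, targets
    per_elem = np.maximum(z, 0) - z * t + np.log1p(np.exp(-np.abs(z)))
    return per_elem.sum() / z.shape[0]

z = np.array([[2.0, -1.0, 0.5]])   # hypothetical decode1 logits
t = np.array([[1.0, 0.0, 0.25]])   # matching flatdata pixel targets
print(sigmoid_cross_entropy(z, t))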
Once training is complete, the next step is applying the model to actual data:
./build/tools/caffe.bin test -model=examples/mnist/lenet_train_test.prototxt -weights=examples/mnist/lenet_iter_10000.caffemodel -gpu=0
If no GPU is available, use:
./build/tools/caffe.bin test -model=examples/mnist/lenet_train_test.prototxt -weights=examples/mnist/lenet_iter_10000.caffemodel
test: run testing on a trained model rather than training it. Other sub-commands include train, time, and device_query.
-model=XXX: the model prototxt file, a text file describing the network structure and the data sources in detail.
-weights=XXX: the trained .caffemodel from which to load parameters; -gpu=0 selects GPU 0.
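The same evaluation can be scripted through pycaffe; a minimal sketch, assuming the Python bindings are built. It mirrors what caffe.bin test does, except that 100 batches of 100 images cover the whole test set:

import caffe

caffe.set_mode_cpu()
net = caffe.Net('examples/mnist/lenet_train_test.prototxt',
                'examples/mnist/lenet_iter_10000.caffemodel',
                caffe.TEST)
acc = 0.0
for _ in range(100):        # 100 batches x batch_size 100 = 10,000 images
    out = net.forward()     # returns the output blobs: accuracy and loss
    acc += float(out['accuracy'])
print('accuracy:', acc / 100)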
The output of the CLI test command is as follows:
- I0303 18:26:31.961637 27313 caffe.cpp:138] Use CPU.
- I0303 18:26:32.035434 27313 net.cpp:275] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
- I0303 18:26:32.035794 27313 net.cpp:39] Initializing net from parameters:
- name: "LeNet"
- layers {
- top: "data"
- top: "label"
- name: "mnist"
- type: DATA
- data_param {
- source: "examples/mnist/mnist_test_lmdb"
- batch_size: 100
- backend: LMDB
- }
- include {
- phase: TEST
- }
- transform_param {
- scale: 0.00390625
- }
- }
- layers {
- bottom: "data"
- top: "conv1"
- name: "conv1"
- type: CONVOLUTION
- blobs_lr: 1
- blobs_lr: 2
- convolution_param {
- num_output: 20
- kernel_size: 5
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- bottom: "conv1"
- top: "pool1"
- name: "pool1"
- type: POOLING
- pooling_param {
- pool: MAX
- kernel_size: 2
- stride: 2
- }
- }
- layers {
- bottom: "pool1"
- top: "conv2"
- name: "conv2"
- type: CONVOLUTION
- blobs_lr: 1
- blobs_lr: 2
- convolution_param {
- num_output: 50
- kernel_size: 5
- stride: 1
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- bottom: "conv2"
- top: "pool2"
- name: "pool2"
- type: POOLING
- pooling_param {
- pool: MAX
- kernel_size: 2
- stride: 2
- }
- }
- layers {
- bottom: "pool2"
- top: "ip1"
- name: "ip1"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 2
- inner_product_param {
- num_output: 500
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- bottom: "ip1"
- top: "ip1"
- name: "relu1"
- type: RELU
- }
- layers {
- bottom: "ip1"
- top: "ip2"
- name: "ip2"
- type: INNER_PRODUCT
- blobs_lr: 1
- blobs_lr: 2
- inner_product_param {
- num_output: 10
- weight_filler {
- type: "xavier"
- }
- bias_filler {
- type: "constant"
- }
- }
- }
- layers {
- bottom: "ip2"
- bottom: "label"
- top: "accuracy"
- name: "accuracy"
- type: ACCURACY
- include {
- phase: TEST
- }
- }
- layers {
- bottom: "ip2"
- bottom: "label"
- top: "loss"
- name: "loss"
- type: SOFTMAX_LOSS
- }
- I0303 18:26:32.035923 27313 net.cpp:67] Creating Layer mnist
- I0303 18:26:32.035931 27313 net.cpp:356] mnist -> data
- I0303 18:26:32.035948 27313 net.cpp:356] mnist -> label
- I0303 18:26:32.035955 27313 net.cpp:96] Setting up mnist
- I0303 18:26:32.076159 27313 data_layer.cpp:68] Opening lmdb examples/mnist/mnist_test_lmdb
- I0303 18:26:32.084071 27313 data_layer.cpp:128] output data size: 100,1,28,28
- I0303 18:26:32.084214 27313 net.cpp:103] Top shape: 100 1 28 28 (78400)
- I0303 18:26:32.084223 27313 net.cpp:103] Top shape: 100 1 1 1 (100)
- I0303 18:26:32.084249 27313 net.cpp:67] Creating Layer label_mnist_1_split
- I0303 18:26:32.084257 27313 net.cpp:394] label_mnist_1_split <- label
- I0303 18:26:32.084269 27313 net.cpp:356] label_mnist_1_split -> label_mnist_1_split_0
- I0303 18:26:32.084280 27313 net.cpp:356] label_mnist_1_split -> label_mnist_1_split_1
- I0303 18:26:32.084286 27313 net.cpp:96] Setting up label_mnist_1_split
- I0303 18:26:32.084292 27313 net.cpp:103] Top shape: 100 1 1 1 (100)
- I0303 18:26:32.084297 27313 net.cpp:103] Top shape: 100 1 1 1 (100)
- I0303 18:26:32.084306 27313 net.cpp:67] Creating Layer conv1
- I0303 18:26:32.084311 27313 net.cpp:394] conv1 <- data
- I0303 18:26:32.084317 27313 net.cpp:356] conv1 -> conv1
- I0303 18:26:32.084326 27313 net.cpp:96] Setting up conv1
- I0303 18:26:32.084775 27313 net.cpp:103] Top shape: 100 20 24 24 (1152000)
- I0303 18:26:32.084791 27313 net.cpp:67] Creating Layer pool1
- I0303 18:26:32.084796 27313 net.cpp:394] pool1 <- conv1
- I0303 18:26:32.084803 27313 net.cpp:356] pool1 -> pool1
- I0303 18:26:32.084810 27313 net.cpp:96] Setting up pool1
- I0303 18:26:32.099537 27313 net.cpp:103] Top shape: 100 20 12 12 (288000)
- I0303 18:26:32.099583 27313 net.cpp:67] Creating Layer conv2
- I0303 18:26:32.099593 27313 net.cpp:394] conv2 <- pool1
- I0303 18:26:32.099606 27313 net.cpp:356] conv2 -> conv2
- I0303 18:26:32.099619 27313 net.cpp:96] Setting up conv2
- I0303 18:26:32.099905 27313 net.cpp:103] Top shape: 100 50 8 8 (320000)
- I0303 18:26:32.099925 27313 net.cpp:67] Creating Layer pool2
- I0303 18:26:32.099947 27313 net.cpp:394] pool2 <- conv2
- I0303 18:26:32.099959 27313 net.cpp:356] pool2 -> pool2
- I0303 18:26:32.099970 27313 net.cpp:96] Setting up pool2
- I0303 18:26:32.099979 27313 net.cpp:103] Top shape: 100 50 4 4 (80000)
- I0303 18:26:32.099990 27313 net.cpp:67] Creating Layer ip1
- I0303 18:26:32.099998 27313 net.cpp:394] ip1 <- pool2
- I0303 18:26:32.100008 27313 net.cpp:356] ip1 -> ip1
- I0303 18:26:32.100019 27313 net.cpp:96] Setting up ip1
- I0303 18:26:32.104487 27313 net.cpp:103] Top shape: 100 500 1 1 (50000)
- I0303 18:26:32.104518 27313 net.cpp:67] Creating Layer relu1
- I0303 18:26:32.104527 27313 net.cpp:394] relu1 <- ip1
- I0303 18:26:32.104538 27313 net.cpp:345] relu1 -> ip1 (in-place)
- I0303 18:26:32.104549 27313 net.cpp:96] Setting up relu1
- I0303 18:26:32.104558 27313 net.cpp:103] Top shape: 100 500 1 1 (50000)
- I0303 18:26:32.104571 27313 net.cpp:67] Creating Layer ip2
- I0303 18:26:32.104579 27313 net.cpp:394] ip2 <- ip1
- I0303 18:26:32.104591 27313 net.cpp:356] ip2 -> ip2
- I0303 18:26:32.104604 27313 net.cpp:96] Setting up ip2
- I0303 18:26:32.104676 27313 net.cpp:103] Top shape: 100 10 1 1 (1000)
- I0303 18:26:32.104691 27313 net.cpp:67] Creating Layer ip2_ip2_0_split
- I0303 18:26:32.104701 27313 net.cpp:394] ip2_ip2_0_split <- ip2
- I0303 18:26:32.104710 27313 net.cpp:356] ip2_ip2_0_split -> ip2_ip2_0_split_0
- I0303 18:26:32.104722 27313 net.cpp:356] ip2_ip2_0_split -> ip2_ip2_0_split_1
- I0303 18:26:32.104733 27313 net.cpp:96] Setting up ip2_ip2_0_split
- I0303 18:26:32.104743 27313 net.cpp:103] Top shape: 100 10 1 1 (1000)
- I0303 18:26:32.104751 27313 net.cpp:103] Top shape: 100 10 1 1 (1000)
- I0303 18:26:32.104763 27313 net.cpp:67] Creating Layer accuracy
- I0303 18:26:32.104770 27313 net.cpp:394] accuracy <- ip2_ip2_0_split_0
- I0303 18:26:32.104779 27313 net.cpp:394] accuracy <- label_mnist_1_split_0
- I0303 18:26:32.104790 27313 net.cpp:356] accuracy -> accuracy
- I0303 18:26:32.104802 27313 net.cpp:96] Setting up accuracy
- I0303 18:26:32.104811 27313 net.cpp:103] Top shape: 1 1 1 1 (1)
- I0303 18:26:32.104822 27313 net.cpp:67] Creating Layer loss
- I0303 18:26:32.104830 27313 net.cpp:394] loss <- ip2_ip2_0_split_1
- I0303 18:26:32.104838 27313 net.cpp:394] loss <- label_mnist_1_split_1
- I0303 18:26:32.104848 27313 net.cpp:356] loss -> loss
- I0303 18:26:32.104857 27313 net.cpp:96] Setting up loss
- I0303 18:26:32.104869 27313 net.cpp:103] Top shape: 1 1 1 1 (1)
- I0303 18:26:32.104877 27313 net.cpp:109] with loss weight 1
- I0303 18:26:32.104909 27313 net.cpp:170] loss needs backward computation.
- I0303 18:26:32.104918 27313 net.cpp:172] accuracy does not need backward computation.
- I0303 18:26:32.104925 27313 net.cpp:170] ip2_ip2_0_split needs backward computation.
- I0303 18:26:32.104933 27313 net.cpp:170] ip2 needs backward computation.
- I0303 18:26:32.104941 27313 net.cpp:170] relu1 needs backward computation.
- I0303 18:26:32.104948 27313 net.cpp:170] ip1 needs backward computation.
- I0303 18:26:32.104956 27313 net.cpp:170] pool2 needs backward computation.
- I0303 18:26:32.104964 27313 net.cpp:170] conv2 needs backward computation.
- I0303 18:26:32.104975 27313 net.cpp:170] pool1 needs backward computation.
- I0303 18:26:32.104984 27313 net.cpp:170] conv1 needs backward computation.
- I0303 18:26:32.104991 27313 net.cpp:172] label_mnist_1_split does not need backward computation.
- I0303 18:26:32.105000 27313 net.cpp:172] mnist does not need backward computation.
- I0303 18:26:32.105006 27313 net.cpp:208] This network produces output accuracy
- I0303 18:26:32.105017 27313 net.cpp:208] This network produces output loss
- I0303 18:26:32.105034 27313 net.cpp:467] Collecting Learning Rate and Weight Decay.
- I0303 18:26:32.105046 27313 net.cpp:219] Network initialization done.
- I0303 18:26:32.105053 27313 net.cpp:220] Memory required for data: 8086808
- I0303 18:26:32.136730 27313 caffe.cpp:145] Running for 50 iterations.
- I0303 18:26:32.243196 27313 caffe.cpp:169] Batch 0, accuracy = 1
- I0303 18:26:32.243229 27313 caffe.cpp:169] Batch 0, loss = 0.0140614
- I0303 18:26:32.326557 27313 caffe.cpp:169] Batch 1, accuracy = 1
- I0303 18:26:32.326588 27313 caffe.cpp:169] Batch 1, loss = 0.00749996
- I0303 18:26:32.409931 27313 caffe.cpp:169] Batch 2, accuracy = 0.99
- I0303 18:26:32.409963 27313 caffe.cpp:169] Batch 2, loss = 0.0106815
- I0303 18:26:32.493257 27313 caffe.cpp:169] Batch 3, accuracy = 0.99
- I0303 18:26:32.493288 27313 caffe.cpp:169] Batch 3, loss = 0.0528439
- I0303 18:26:32.576733 27313 caffe.cpp:169] Batch 4, accuracy = 0.99
- I0303 18:26:32.576761 27313 caffe.cpp:169] Batch 4, loss = 0.0632355
- I0303 18:26:32.660257 27313 caffe.cpp:169] Batch 5, accuracy = 0.99
- I0303 18:26:32.660289 27313 caffe.cpp:169] Batch 5, loss = 0.041726
- I0303 18:26:32.743624 27313 caffe.cpp:169] Batch 6, accuracy = 0.97
- I0303 18:26:32.743654 27313 caffe.cpp:169] Batch 6, loss = 0.0816639
- I0303 18:26:32.827059 27313 caffe.cpp:169] Batch 7, accuracy = 0.99
- I0303 18:26:32.827090 27313 caffe.cpp:169] Batch 7, loss = 0.0146397
- I0303 18:26:32.910567 27313 caffe.cpp:169] Batch 8, accuracy = 1
- I0303 18:26:32.910598 27313 caffe.cpp:169] Batch 8, loss = 0.00730312
- I0303 18:26:32.993976 27313 caffe.cpp:169] Batch 9, accuracy = 0.99
- I0303 18:26:32.994007 27313 caffe.cpp:169] Batch 9, loss = 0.0225503
- I0303 18:26:33.077335 27313 caffe.cpp:169] Batch 10, accuracy = 0.98
- I0303 18:26:33.077366 27313 caffe.cpp:169] Batch 10, loss = 0.0657359
- I0303 18:26:33.160778 27313 caffe.cpp:169] Batch 11, accuracy = 0.98
- I0303 18:26:33.160809 27313 caffe.cpp:169] Batch 11, loss = 0.0431129
- I0303 18:26:33.244256 27313 caffe.cpp:169] Batch 12, accuracy = 0.96
- I0303 18:26:33.244284 27313 caffe.cpp:169] Batch 12, loss = 0.132687
- I0303 18:26:33.327652 27313 caffe.cpp:169] Batch 13, accuracy = 0.98
- I0303 18:26:33.327684 27313 caffe.cpp:169] Batch 13, loss = 0.0907693
- I0303 18:26:33.411123 27313 caffe.cpp:169] Batch 14, accuracy = 0.99
- I0303 18:26:33.411151 27313 caffe.cpp:169] Batch 14, loss = 0.0150445
- I0303 18:26:33.494606 27313 caffe.cpp:169] Batch 15, accuracy = 0.98
- I0303 18:26:33.494635 27313 caffe.cpp:169] Batch 15, loss = 0.0465094
- I0303 18:26:33.578012 27313 caffe.cpp:169] Batch 16, accuracy = 0.99
- I0303 18:26:33.578042 27313 caffe.cpp:169] Batch 16, loss = 0.0343866
- I0303 18:26:33.661423 27313 caffe.cpp:169] Batch 17, accuracy = 0.99
- I0303 18:26:33.661454 27313 caffe.cpp:169] Batch 17, loss = 0.0277292
- I0303 18:26:33.744851 27313 caffe.cpp:169] Batch 18, accuracy = 1
- I0303 18:26:33.744882 27313 caffe.cpp:169] Batch 18, loss = 0.0146081
- I0303 18:26:33.828307 27313 caffe.cpp:169] Batch 19, accuracy = 0.99
- I0303 18:26:33.828338 27313 caffe.cpp:169] Batch 19, loss = 0.0457058
- I0303 18:26:33.911772 27313 caffe.cpp:169] Batch 20, accuracy = 0.98
- I0303 18:26:33.911800 27313 caffe.cpp:169] Batch 20, loss = 0.086042
- I0303 18:26:33.995313 27313 caffe.cpp:169] Batch 21, accuracy = 0.98
- I0303 18:26:33.995343 27313 caffe.cpp:169] Batch 21, loss = 0.0756276
- I0303 18:26:34.078820 27313 caffe.cpp:169] Batch 22, accuracy = 0.99
- I0303 18:26:34.078850 27313 caffe.cpp:169] Batch 22, loss = 0.0306264
- I0303 18:26:34.162230 27313 caffe.cpp:169] Batch 23, accuracy = 0.98
- I0303 18:26:34.162261 27313 caffe.cpp:169] Batch 23, loss = 0.0438904
- I0303 18:26:34.245688 27313 caffe.cpp:169] Batch 24, accuracy = 0.98
- I0303 18:26:34.245718 27313 caffe.cpp:169] Batch 24, loss = 0.0494635
- I0303 18:26:34.329123 27313 caffe.cpp:169] Batch 25, accuracy = 0.99
- I0303 18:26:34.329152 27313 caffe.cpp:169] Batch 25, loss = 0.0670097
- I0303 18:26:34.412616 27313 caffe.cpp:169] Batch 26, accuracy = 0.99
- I0303 18:26:34.412647 27313 caffe.cpp:169] Batch 26, loss = 0.117325
- I0303 18:26:34.496093 27313 caffe.cpp:169] Batch 27, accuracy = 0.99
- I0303 18:26:34.496122 27313 caffe.cpp:169] Batch 27, loss = 0.0199489
- I0303 18:26:34.579558 27313 caffe.cpp:169] Batch 28, accuracy = 0.98
- I0303 18:26:34.579587 27313 caffe.cpp:169] Batch 28, loss = 0.0489519
- I0303 18:26:34.663027 27313 caffe.cpp:169] Batch 29, accuracy = 0.96
- I0303 18:26:34.663051 27313 caffe.cpp:169] Batch 29, loss = 0.103231
- I0303 18:26:34.746485 27313 caffe.cpp:169] Batch 30, accuracy = 1
- I0303 18:26:34.746516 27313 caffe.cpp:169] Batch 30, loss = 0.0104769
- I0303 18:26:34.829927 27313 caffe.cpp:169] Batch 31, accuracy = 1
- I0303 18:26:34.829990 27313 caffe.cpp:169] Batch 31, loss = 0.00431556
- I0303 18:26:34.913399 27313 caffe.cpp:169] Batch 32, accuracy = 0.98
- I0303 18:26:34.913429 27313 caffe.cpp:169] Batch 32, loss = 0.027013
- I0303 18:26:34.996893 27313 caffe.cpp:169] Batch 33, accuracy = 1
- I0303 18:26:34.996923 27313 caffe.cpp:169] Batch 33, loss = 0.00294145
- I0303 18:26:35.080307 27313 caffe.cpp:169] Batch 34, accuracy = 0.99
- I0303 18:26:35.080338 27313 caffe.cpp:169] Batch 34, loss = 0.0528829
- I0303 18:26:35.163833 27313 caffe.cpp:169] Batch 35, accuracy = 0.93
- I0303 18:26:35.163862 27313 caffe.cpp:169] Batch 35, loss = 0.164353
- I0303 18:26:35.247449 27313 caffe.cpp:169] Batch 36, accuracy = 1
- I0303 18:26:35.247481 27313 caffe.cpp:169] Batch 36, loss = 0.00703398
- I0303 18:26:35.331092 27313 caffe.cpp:169] Batch 37, accuracy = 0.97
- I0303 18:26:35.331121 27313 caffe.cpp:169] Batch 37, loss = 0.0861889
- I0303 18:26:35.414821 27313 caffe.cpp:169] Batch 38, accuracy = 0.99
- I0303 18:26:35.414850 27313 caffe.cpp:169] Batch 38, loss = 0.028661
- I0303 18:26:35.498474 27313 caffe.cpp:169] Batch 39, accuracy = 0.99
- I0303 18:26:35.498502 27313 caffe.cpp:169] Batch 39, loss = 0.0414709
- I0303 18:26:35.582015 27313 caffe.cpp:169] Batch 40, accuracy = 1
- I0303 18:26:35.582042 27313 caffe.cpp:169] Batch 40, loss = 0.0357227
- I0303 18:26:35.665555 27313 caffe.cpp:169] Batch 41, accuracy = 0.99
- I0303 18:26:35.665585 27313 caffe.cpp:169] Batch 41, loss = 0.0525798
- I0303 18:26:35.749254 27313 caffe.cpp:169] Batch 42, accuracy = 1
- I0303 18:26:35.749285 27313 caffe.cpp:169] Batch 42, loss = 0.0257062
- I0303 18:26:35.833019 27313 caffe.cpp:169] Batch 43, accuracy = 0.99
- I0303 18:26:35.833048 27313 caffe.cpp:169] Batch 43, loss = 0.0198026
- I0303 18:26:35.916801 27313 caffe.cpp:169] Batch 44, accuracy = 1
- I0303 18:26:35.916833 27313 caffe.cpp:169] Batch 44, loss = 0.0178475
- I0303 18:26:36.000491 27313 caffe.cpp:169] Batch 45, accuracy = 0.97
- I0303 18:26:36.000522 27313 caffe.cpp:169] Batch 45, loss = 0.0608676
- I0303 18:26:36.085665 27313 caffe.cpp:169] Batch 46, accuracy = 1
- I0303 18:26:36.085697 27313 caffe.cpp:169] Batch 46, loss = 0.0100693
- I0303 18:26:36.169760 27313 caffe.cpp:169] Batch 47, accuracy = 0.98
- I0303 18:26:36.169791 27313 caffe.cpp:169] Batch 47, loss = 0.0211241
- I0303 18:26:36.277791 27313 caffe.cpp:169] Batch 48, accuracy = 0.95
- I0303 18:26:36.277822 27313 caffe.cpp:169] Batch 48, loss = 0.111764
- I0303 18:26:36.361287 27313 caffe.cpp:169] Batch 49, accuracy = 1
- I0303 18:26:36.361318 27313 caffe.cpp:169] Batch 49, loss = 0.0052372
- I0303 18:26:36.361326 27313 caffe.cpp:174] Loss: 0.0452134
- I0303 18:26:36.361332 27313 caffe.cpp:186] accuracy = 0.986
- I0303 18:26:36.361341 27313 caffe.cpp:186] loss = 0.0452134 (* 1 = 0.0452134 loss)
References:
Xue Kaiyu, Study Notes 4: Building your own network, training and learning MNIST on Caffe: http://wenku.baidu.com/link?url=_sERcBsTCgKElwFi7Hf9FXFe3J-c35ftm27Trf8SJX_iGsR2SlKDDIJmF-5DYruWK-uYJu5pYA3MMfcYt_IRiTL95tYVZ72TYwVTxf0JF27
Deep Learning study-notes series: a personal understanding of LeNet-5 convolution parameters: http://blog.csdn.net/qiaofangjie/article/details/16826849
Deep Learning paper notes (4): CNN derivation and implementation: http://blog.csdn.net/zouxy09/article/details/9993371
Deep Learning paper notes (6): analysis of multi-stage architectures: http://blog.csdn.net/zouxy09/article/details/10007237
cuda-convnet CNNs: notes on the general structure and the relation between kernel counts and input/output counts: http://blog.csdn.net/zhubenfulovepoem/article/details/29583429
DeepID: http://blog.csdn.net/stdcoutzyx/article/details/42091205