『TensorFlow』 Reading Notes: A Simple Convolutional Neural Network

Network Structure

Convolution -> Pooling -> Convolution -> Pooling -> Fully Connected -> Softmax classifier
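
Traced on MNIST's 28x28 grayscale inputs, the tensor shapes flow as follows (this trace is inferred from the code below rather than stated in the original):

input   : [batch, 28, 28, 1]
conv1   : [batch, 28, 28, 32]   5x5 conv, SAME padding, ReLU
pool1   : [batch, 14, 14, 32]   2x2 max pool, stride 2
conv2   : [batch, 14, 14, 64]   5x5 conv, SAME padding, ReLU
pool2   : [batch, 7, 7, 64]     2x2 max pool, stride 2
fc1     : [batch, 1024]         fully connected, ReLU, dropout
output  : [batch, 10]           softmax over the 10 digit classes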

The convolutional layers use ReLU activations.

The fully connected layer also uses ReLU.

The pooling layers use SAME padding with stride 2. Like the convolutional layers, the pooling layers are conventionally set to SAME mode; under SAME padding the output size is ceil(input / stride), so stride=2 halves each spatial dimension exactly (see the sketch below).
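
As a quick sanity check of that halving claim, here is a minimal standalone sketch (the helper same_pool_out_size is hypothetical, not part of the post's code):

import math

def same_pool_out_size(size, stride=2):
    # With SAME padding, output size = ceil(input size / stride)
    return math.ceil(size / stride)

print(same_pool_out_size(28))  # 14
print(same_pool_out_size(14))  # 7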

Network Implementation

# Author : Hellcat
# Time   : 2017/12/7

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('../../../Mnist_data', one_hot=True)
sess = tf.InteractiveSession()

def weight_variable(shape):
    # Truncated-normal initialization keeps the initial weights small and breaks symmetry
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    # Initialize biases with a small positive constant to avoid dead ReLU neurons
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    # 2D convolution with stride 1 and SAME padding (output keeps the input size)
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    # 2x2 max pooling with stride 2, halving each spatial dimension
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

x = tf.placeholder(tf.float32, [None, 784])
y_ = tf.placeholder(tf.float32, [None, 10])
# Reshape the flat 784-vectors back into 28x28 single-channel images
x_image = tf.reshape(x, [-1, 28, 28, 1])

# 5x5 filters, 1 input channel, 32 feature maps
W_conv1 = weight_variable([5,5,1,32])
b_conv1 = bias_variable([32])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# 5x5 filters, 32 input channels, 64 feature maps
W_conv2 = weight_variable([5,5,32,64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# 28x28 input: after two stride-2 max-pools with SAME padding, 28/2/2 = 7, i.e. 7x7 maps
W_fc1 = weight_variable([7*7*64,1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1,7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout layer
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# axis=1: sum over the class dimension (per example)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), axis=1))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y_conv, axis=1), tf.argmax(y_, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

tf.global_variables_initializer().run()
for i in range(20000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
    if i % 100 == 0:
        # Evaluate on the current batch with dropout disabled (keep_prob=1.0)
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
        print('step {0} training accuracy {1:.3f}'.format(i, train_accuracy))

print('test accuracy {}'.format(accuracy.eval(
    feed_dict={x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0})))
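
One caveat about the loss above: taking tf.log of the softmax output can hit log(0) and yield -inf/NaN once a predicted probability underflows to zero. A common, more numerically stable alternative (a sketch, not what the original notes use) is to keep the raw logits and let TensorFlow fuse the softmax and cross-entropy:

# Sketch of a more stable loss: feed raw logits instead of softmax output.
logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
cross_entropy_stable = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))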

Convergence looks good; the results for the first 1000 steps are shown below:

step 0 training accuracy 0.040
step 100 training accuracy 0.940
step 200 training accuracy 0.940
step 300 training accuracy 0.980
step 400 training accuracy 0.980
step 500 training accuracy 0.900
step 600 training accuracy 0.920
step 700 training accuracy 0.960
step 800 training accuracy 1.000
step 900 training accuracy 0.960
step 1000 training accuracy 1.000
……
