吴裕雄 Python Neural Networks: Handwritten Digit Image Recognition (5)

import numpy as np
import keras
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten, Dropout, Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.optimizers import RMSprop
from keras.datasets import mnist

nb_classes=10
batch_size=128
#### Since this is a convolutional network, the input must be in image format, so a reshape is required.
(train_X, train_Y), (test_X, test_Y) = mnist.load_data()
train_x = np.reshape(train_X, (train_X.shape[0], 28, 28, 1))
test_x = np.reshape(test_X, (test_X.shape[0], 28, 28, 1))
train_y = np_utils.to_categorical(train_Y, nb_classes)
test_y = np_utils.to_categorical(test_Y, nb_classes)

print(train_y.shape, '\n', test_y.shape)

print(train_x.shape, '\n', test_x.shape)

train_x[:, :, :, 0].shape
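
One thing the code above does not do is scale the pixels: raw MNIST intensities run 0-255, and training is usually more stable on inputs in [0, 1]. A minimal optional sketch (an addition, not in the original post):

# Optional: scale pixel intensities from 0-255 down to 0-1 before training.
train_x = train_x.astype('float32') / 255.0
test_x = test_x.astype('float32') / 255.0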

### Display the data after reshaping
%matplotlib inline
f, a = plt.subplots(1, 10, figsize=(10, 5))
for i in range(10):
    a[i].imshow(train_x[i, :, :, 0], cmap='gray')
    print(train_Y[i])

#### Establish a convolutional neural network
model = Sequential()

#### Convolution layer 1
model.add(Convolution2D(filters=32, kernel_size=(3, 3), input_shape=(28, 28, 1),
                        strides=(1, 1), padding='same', activation='relu'))

#### Pooling layer with dropout
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
model.add(Dropout(0.2))

#### Convolution layer 2
model.add(Convolution2D(filters=64, kernel_size=(3, 3), strides=(1, 1), padding='same',
                        activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
model.add(Dropout(0.2))

#### Convolution layer 3
model.add(Convolution2D(filters=128, kernel_size=(3, 3), strides=(1, 1), padding='same',
                        activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
model.add(Flatten())  ### Flatten the 3x3x128 feature maps into a single 1152-dim vector
model.add(Dropout(0.2))

#### Fully connected layer 1 (fc layer)
model.add(Dense(units=625, activation='relu'))
model.add(Dropout(0.5))

#### Fully connected layer 2 (fc layer): softmax over the 10 digit classes
model.add(Dense(units=nb_classes, activation='softmax'))
model.summary()
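
As a quick sanity check on the architecture: the third pooling layer maps the 7x7 feature maps to 3x3 (floor((7 - 2) / 2) + 1 = 3 with 'valid' padding), so Flatten should emit 3 * 3 * 128 = 1152 values. A small sketch to confirm this against the built model (an addition for illustration; it matches the activation shapes listed in the docstring further below):

# Verify that the Flatten layer produces a 1152-dim vector (3 * 3 * 128).
flatten_layer = [l for l in model.layers if isinstance(l, Flatten)][0]
assert flatten_layer.output_shape == (None, 1152)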

model.compile(optimizer=RMSprop(lr=0.001, rho=0.9), loss='categorical_crossentropy',
              metrics=['accuracy'])
import time
start_time = time.time()
model.fit(train_x, train_y, epochs=30, batch_size=batch_size, verbose=1)
end_time = time.time()
print("running time: %.2f s" % (end_time - start_time))

evaluation = model.evaluate(test_x, test_y, batch_size=batch_size, verbose=1)
print("model loss: %.4f" % evaluation[0], "model accuracy: %.4f" % evaluation[1])

# https://github.com/fchollet/keras/issues/431
def get_activations(model, model_inputs, print_shape_only=True, layer_name=None):
    import keras.backend as K
    print('----- activations -----')
    activations = []
    inp = model.input

    model_multi_inputs_cond = True
    if not isinstance(inp, list):
        # only one input! let's wrap it in a list.
        inp = [inp]
        model_multi_inputs_cond = False

    outputs = [layer.output for layer in model.layers if
               layer.name == layer_name or layer_name is None]  # all layer outputs

    funcs = [K.function(inp + [K.learning_phase()], [out]) for out in outputs]  # evaluation functions

    if model_multi_inputs_cond:
        list_inputs = []
        list_inputs.extend(model_inputs)
        list_inputs.append(0.)
    else:
        list_inputs = [model_inputs, 0.]

    # Learning phase: 0 = test mode (dropout and batch normalization disabled).
    layer_outputs = [func(list_inputs)[0] for func in funcs]
    for layer_activations in layer_outputs:
        activations.append(layer_activations)
        if print_shape_only:
            print(layer_activations.shape)
        else:
            print(layer_activations)
    return activations

# https://github.com/philipperemy/keras-visualize-activations/blob/master/read_activations.py
def display_activations(activation_maps):
    import numpy as np
    import matplotlib.pyplot as plt
    """
    Expected activation-map shapes for the model above:
    (1, 28, 28, 32)
    (1, 14, 14, 32)
    (1, 14, 14, 32)
    (1, 14, 14, 64)
    (1, 7, 7, 64)
    (1, 7, 7, 64)
    (1, 7, 7, 128)
    (1, 3, 3, 128)
    (1, 1152)
    (1, 1152)
    (1, 625)
    (1, 625)
    (1, 10)
    """
    batch_size = activation_maps[0].shape[0]
    assert batch_size == 1, 'One image at a time to visualize.'
    for i, activation_map in enumerate(activation_maps):
        print('Displaying activation map {}'.format(i))
        shape = activation_map.shape
        if len(shape) == 4:
            # stack the channel slices side by side into one wide image
            activations = np.hstack(np.transpose(activation_map[0], (2, 0, 1)))
        elif len(shape) == 2:
            # try to make it as square as possible; we may skip some activations.
            activations = activation_map[0]
            num_activations = len(activations)
            if num_activations > 1024:  # too hard to display on screen
                square_param = int(np.floor(np.sqrt(num_activations)))
                activations = activations[0: square_param * square_param]
                activations = np.reshape(activations, (square_param, square_param))
            else:
                activations = np.expand_dims(activations, axis=0)
        else:
            raise Exception('len(shape) = 3 has not been implemented.')
        fig, ax = plt.subplots(figsize=(18, 12))
        ax.imshow(activations, interpolation='None', cmap='binary')
        plt.show()

### Visualize one image at a time.
activations = get_activations(model, (test_x[0, :, :, :])[np.newaxis, :])

(test_x[0, :, :, :])[np.newaxis, :].shape

display_activations(activations)

plt.imshow(test_x[0, :, :, 0], cmap='gray')
pred_value = model.predict_classes((test_x[0, :, :, :])[np.newaxis, :], batch_size=1)
print(pred_value)
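
Note that predict_classes is specific to Sequential models and was removed in newer Keras versions; the equivalent with the generic predict API is to take the argmax over the softmax outputs:

# Equivalent prediction without predict_classes: argmax of the softmax probabilities.
probs = model.predict((test_x[0, :, :, :])[np.newaxis, :], batch_size=1)
print(np.argmax(probs, axis=-1))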

Original post: https://www.cnblogs.com/tszr/p/10085348.html
