1, Dataset Overview
The SVHN (Street View House Numbers) dataset comes from house numbers captured in Google Street View. The original data, Format 1 on the official site, consists of raw, unprocessed color images, as shown in the figure below (the blue bounding boxes are annotations, not part of the images). The download contains PNG images plus a digitStruct.mat file that stores the bounding-box positions. Each image may contain several digits, so Format 1 is suited to OCR-style detection and recognition work.
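For readers who do want Format 1, the digitStruct.mat annotations are stored in MATLAB v7.3 (HDF5) layout, so scipy.io.loadmat cannot read them and h5py is the usual workaround. The sketch below is only an illustration of the commonly reported layout (name and bbox arrays of object references with label/left/top/width/height fields); treat the access pattern as an assumption and verify it against your copy of the file.

import h5py

# digitStruct.mat is a MATLAB v7.3 (HDF5) file; field names assumed per the Format 1 docs
f = h5py.File('digitStruct.mat', 'r')
names = f['digitStruct/name']
bboxes = f['digitStruct/bbox']

def get_name(i):
    # each entry is a reference to an array of character codes
    return ''.join(chr(c[0]) for c in f[names[i][0]])

def get_bbox(i):
    ref = f[bboxes[i][0]]
    def values(field):
        data = ref[field]
        # a single digit is stored as a scalar, multiple digits as references
        if len(data) == 1:
            return [int(data[0][0])]
        return [int(f[r[0]][0][0]) for r in data]
    return {k: values(k) for k in ('label', 'left', 'top', 'width', 'height')}

print(get_name(0), get_bbox(0))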
Here we use Format 2 instead: the digits are cropped to 32x32 patches, as shown in the figure, and the data is shipped as .mat files.
2, Data Processing
Each .mat file contains two variables: X holds the images and y the labels. The training-set X has shape (32, 32, 3, 73257), i.e. (width, height, channels, samples), while TensorFlow expects (samples, width, height, channels), so the axes have to be rearranged. Since we reuse the CIFAR-10 network directly, the only other preprocessing is normalization: divide every pixel by 255. Note also that in the raw data the digit 0 is labeled 10; it has to be mapped back to 0, and one-hot encoding is provided as an option.
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Thu Jan 19 09:55:36 2017

@author: cheers
"""

import scipy.io as sio
import matplotlib.pyplot as plt
import numpy as np

image_size = 32
num_labels = 10


def display_data():
    print 'loading Matlab data...'
    train = sio.loadmat('train_32x32.mat')
    data = train['X']
    label = train['y']
    for i in range(10):
        plt.subplot(2, 5, i + 1)
        plt.title(label[i][0])
        plt.imshow(data[..., i])
        plt.axis('off')
    plt.show()


def load_data(one_hot=False):
    train = sio.loadmat('train_32x32.mat')
    test = sio.loadmat('test_32x32.mat')

    train_data = train['X']
    train_label = train['y']
    test_data = test['X']
    test_label = test['y']

    # (width, height, channels, samples) -> (samples, width, height, channels)
    train_data = np.swapaxes(train_data, 0, 3)
    train_data = np.swapaxes(train_data, 2, 3)
    train_data = np.swapaxes(train_data, 1, 2)
    test_data = np.swapaxes(test_data, 0, 3)
    test_data = np.swapaxes(test_data, 2, 3)
    test_data = np.swapaxes(test_data, 1, 2)

    # normalize pixel values to [0, 1]
    test_data = test_data / 255.
    train_data = train_data / 255.

    # in the raw data the digit 0 is labeled 10; map it back to 0
    for i in range(train_label.shape[0]):
        if train_label[i][0] == 10:
            train_label[i][0] = 0
    for i in range(test_label.shape[0]):
        if test_label[i][0] == 10:
            test_label[i][0] = 0

    if one_hot:
        # broadcasting the (N, 1) label column against arange(10) gives an (N, 10) one-hot matrix
        train_label = (np.arange(num_labels) == train_label[:, ]).astype(np.float32)
        test_label = (np.arange(num_labels) == test_label[:, ]).astype(np.float32)

    return train_data, train_label, test_data, test_label


if __name__ == '__main__':
    load_data(one_hot=True)
    display_data()
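As a side note, the three consecutive swapaxes calls in load_data can be collapsed into a single np.transpose, and the one-hot step is plain NumPy broadcasting. The snippet below uses a dummy array rather than the actual SVHN data, just to illustrate both points:

import numpy as np

# dummy array with the raw SVHN layout: (width, height, channels, samples)
raw = np.zeros((32, 32, 3, 5), dtype=np.uint8)

# equivalent to the three swapaxes calls in load_data:
# move the samples axis to the front, keep the remaining axes in order
data = np.transpose(raw, (3, 0, 1, 2))
print(data.shape)  # (5, 32, 32, 3)

# one-hot trick: comparing an (N, 1) label column against arange(10)
# broadcasts to an (N, 10) boolean matrix
labels = np.array([[3], [0], [9]])
print((np.arange(10) == labels).astype(np.float32))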
3, TFLearn Training
Note that ImagePreprocessing zero-centers the data (feature-wise mean subtraction). The network itself is simple and is taken directly from TFLearn's CIFAR-10 example.
from __future__ import division, print_function, absolute_import

import tflearn
from tflearn.data_utils import shuffle, to_categorical
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation

# Data loading and preprocessing
import svhn_data as SVHN
X, Y, X_test, Y_test = SVHN.load_data(one_hot=True)
X, Y = shuffle(X, Y)

# Real-time data preprocessing
img_prep = ImagePreprocessing()
img_prep.add_featurewise_zero_center()
img_prep.add_featurewise_stdnorm()

# Convolutional network building
network = input_data(shape=[None, 32, 32, 3],
                     data_preprocessing=img_prep)
network = conv_2d(network, 32, 3, activation='relu')
network = max_pool_2d(network, 2)
network = conv_2d(network, 64, 3, activation='relu')
network = conv_2d(network, 64, 3, activation='relu')
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu')
network = dropout(network, 0.5)
network = fully_connected(network, 10, activation='softmax')
network = regression(network, optimizer='adam',
                     loss='categorical_crossentropy',
                     learning_rate=0.001)

# Train using classifier
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(X, Y, n_epoch=15, shuffle=True, validation_set=(X_test, Y_test),
          show_metric=True, batch_size=96, run_id='svhn_cnn')
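After model.fit returns, the trained weights can be evaluated and persisted with TFLearn's DNN helpers. A minimal sketch follows; the checkpoint file name is an arbitrary choice, and evaluate is assumed to return a list of metric scores with accuracy first.

# evaluate on the held-out test split; evaluate() returns a list of metric scores
score = model.evaluate(X_test, Y_test)
print('Test accuracy:', score[0])

# persist the weights; the file name is an arbitrary choice
model.save('svhn_cnn.tflearn')

# to reuse the model later, rebuild the same `network` graph, then:
# model.load('svhn_cnn.tflearn')
# probs = model.predict(X_test[:10])  # class probabilities for the first 10 test images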
Training results:
Training Step: 11452 | total loss: 0.68217 | time: 7.973s | Adam | epoch: 015 | loss: 0.68217 - acc: 0.9329 -- iter: 72576/73257
Training Step: 11453 | total loss: 0.62980 | time: 7.983s | Adam | epoch: 015 | loss: 0.62980 - acc: 0.9354 -- iter: 72672/73257
Training Step: 11454 | total loss: 0.58649 | time: 7.994s | Adam | epoch: 015 | loss: 0.58649 - acc: 0.9356 -- iter: 72768/73257
Training Step: 11455 | total loss: 0.53254 | time: 8.005s | Adam | epoch: 015 | loss: 0.53254 - acc: 0.9421 -- iter: 72864/73257
Training Step: 11456 | total loss: 0.49179 | time: 8.016s | Adam | epoch: 015 | loss: 0.49179 - acc: 0.9416 -- iter: 72960/73257
Training Step: 11457 | total loss: 0.45679 | time: 8.027s | Adam | epoch: 015 | loss: 0.45679 - acc: 0.9433 -- iter: 73056/73257
Training Step: 11458 | total loss: 0.42026 | time: 8.038s | Adam | epoch: 015 | loss: 0.42026 - acc: 0.9469 -- iter: 73152/73257
Training Step: 11459 | total loss: 0.38929 | time: 8.049s | Adam | epoch: 015 | loss: 0.38929 - acc: 0.9491 -- iter: 73248/73257
Training Step: 11460 | total loss: 0.35542 | time: 9.928s | Adam | epoch: 015 | loss: 0.35542 - acc: 0.9542 | val_loss: 0.40315 - val_acc: 0.9085 -- iter: 73257/73257