MNIST handwritten digit recognition with logistic regression

1. Introduction

Logistic regression (LR) is widely used for classification. It is a probabilistic linear classifier: a simple network with only an input layer and an output layer is enough to classify the input data effectively. The model has just two kinds of parameters, the weight matrix W and the bias vector b. In this article the loss function is the negative log-likelihood, the parameters are updated with stochastic gradient descent (SGD), and a zero-one error function on held-out data is used to monitor how training is progressing.
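
Written out (these are the same formulas that appear in the docstrings of the code in Section 3; eta below denotes the learning rate, matching the `learning_rate * g_W` update in the code), the class probabilities, the prediction, the loss over a dataset D, and the SGD update are:

    P(Y=i|x, W, b) = softmax_i(W x + b) = \frac{e^{W_i x + b_i}}{\sum_j e^{W_j x + b_j}}
    y_pred = argmax_i P(Y=i|x, W, b)
    L(\theta=\{W,b\}, D) = -\frac{1}{|D|} \sum_i \log P(Y=y^{(i)} | x^{(i)}, W, b)
    \theta \leftarrow \theta - \eta \, \partial L / \partial \theta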

2. Training procedure

The complete Python code is given in Section 3. The file mnist.pkl.gz that it uses can be downloaded from the web and placed in the same directory as the Python file (the load_data function in the code will also download it automatically if it cannot find it).
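
If you would rather fetch the file by hand, a minimal Python 2 sketch (using the same URL that is hard-coded in load_data below) is:

    # Optional manual download (Python 2); load_data() below will also fetch the file automatically
    import urllib
    urllib.urlretrieve('http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz',
                       'mnist.pkl.gz')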

First, define a LogisticRegression class derived from object. Its constructor takes the symbolic input, the input dimension n_in and the output dimension n_out; W is initialized as a zero matrix and b as a zero vector. The classifier mapping x to y is the softmax function, and the prediction for y is the class with the highest probability (argmax).
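
As an illustrative sketch only (the real code below builds the same expressions symbolically with Theano's T.nnet.softmax and T.argmax), the forward pass set up by the constructor looks like this in plain NumPy:

    # NumPy-only sketch of the classifier's forward pass; not part of the Theano code below
    import numpy

    def softmax(z):
        e = numpy.exp(z - z.max(axis=1, keepdims=True))  # subtract the row max for numerical stability
        return e / e.sum(axis=1, keepdims=True)

    x = numpy.random.rand(3, 28 * 28)           # a minibatch of 3 flattened 28*28 images
    W = numpy.zeros((28 * 28, 10))              # weights initialized to zero, as in __init__
    b = numpy.zeros(10)                         # biases initialized to zero
    p_y_given_x = softmax(numpy.dot(x, W) + b)  # class-membership probabilities, shape (3, 10)
    y_pred = p_y_given_x.argmax(axis=1)         # prediction = index of the most probable class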

Next, define negative_log_likelihood(self, y), which takes the label vector y and returns the mean negative log-likelihood of the minibatch.
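
The mean is computed with an indexing trick: the matrix of log-probabilities is indexed by (example index, correct label). A tiny NumPy-only sketch of the same computation, with made-up illustrative numbers:

    # NumPy-only sketch of negative_log_likelihood; the Theano version below does this symbolically
    import numpy

    p_y_given_x = numpy.array([[0.7, 0.2, 0.1],   # predicted class probabilities, one row per example
                               [0.1, 0.8, 0.1]])
    y = numpy.array([0, 1])                        # correct labels for the two examples
    log_p = numpy.log(p_y_given_x)                 # matrix of log-probabilities
    nll = -numpy.mean(log_p[numpy.arange(y.shape[0]), y])  # mean of -log P(Y=y_i | x_i)
    print nll                                      # about 0.29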

Next, define errors(self, y), which computes the error rate of the network, i.e. the fraction of examples in the minibatch that are misclassified.
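
In NumPy terms this is nothing more than the mean of a 0/1 vector that marks the misclassified examples, for example:

    # NumPy-only sketch of errors(); the Theano version uses T.neq and T.mean on symbolic variables
    import numpy

    y_pred = numpy.array([3, 1, 4, 1])    # predicted labels for a minibatch of 4 examples
    y      = numpy.array([3, 1, 5, 9])    # correct labels
    error_rate = numpy.mean(y_pred != y)  # fraction misclassified -> 0.5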

Next, define the data-loading function load_data(dataset). This is straightforward: it is mostly deserialization of the pickled objects, plus downloading the file if necessary and moving the arrays into Theano shared variables. The return value is [(train_set_x, train_set_y), (valid_set_x, valid_set_y), (test_set_x, test_set_y)], as the names suggest.
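
A quick sanity check of what load_data returns (this assumes mnist.pkl.gz is available locally and is run in a session where load_data is defined; the arrays live in Theano shared variables, so get_value() / eval() are needed to see the NumPy data):

    # Quick check of load_data's return value; assumes mnist.pkl.gz is next to the script
    datasets = load_data('mnist.pkl.gz')
    train_set_x, train_set_y = datasets[0]
    print train_set_x.get_value(borrow=True).shape   # (50000, 784): 50000 images, 28*28 pixels each
    print train_set_y.eval()[:10]                     # first ten labels, integers in 0..9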

Next, define sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000, dataset='mnist.pkl.gz', batch_size=600). It splits the training data into minibatches of 600 examples, instantiates a LogisticRegression object whose output is a length-10 vector (one entry per digit class 0-9) and whose input is a flattened 28*28-pixel image, compiles two Theano functions, test_model and validate_model, that return the error rate on a test or validation minibatch, defines the parameter update rule (gradient descent on W and b), and finally compiles the training function train_model, which updates the parameters once per call on one minibatch.
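
The default arguments reproduce the setup described above; to experiment with the hyper-parameters it is enough to call the function with different values, for example (the numbers here are purely illustrative, not tuned):

    # Example call with non-default hyper-parameters (illustrative values)
    sgd_optimization_mnist(learning_rate=0.1, n_epochs=100,
                           dataset='mnist.pkl.gz', batch_size=500)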

The last block of the code is the main training loop: it iterates over epochs and minibatches, calls train_model on each minibatch, evaluates on the validation set every validation_frequency minibatches, applies early stopping controlled by the patience parameters, saves the best model so far to best_model.pkl, and finally reports the best validation error together with the corresponding test error. A small predict() function at the end shows how to reload best_model.pkl and predict the labels of the first ten test images.

3. Python implementation

"""
This tutorial introduces logistic regression using Theano and stochastic
gradient descent.

Logistic regression is a probabilistic, linear classifier. It is parametrized
by a weight matrix :math:`W` and a bias vector :math:`b`. Classification is
done by projecting data points onto a set of hyperplanes, the distance to
which is used to determine a class membership probability.

Mathematically, this can be written as:

.. math::
  P(Y=i|x, W,b) &= softmax_i(W x + b) \\
                &= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}

The output of the model or prediction is then done by taking the argmax of
the vector whose i'th element is P(Y=i|x).

.. math::

  y_{pred} = argmax_i P(Y=i|x,W,b)

This tutorial presents a stochastic gradient descent optimization method
suitable for large datasets.

References:

    - textbooks: "Pattern Recognition and Machine Learning" -
                 Christopher M. Bishop, section 4.3.2

"""
__docformat__ = 'restructedtext en'

import cPickle
import gzip
import os
import sys
import timeit

import numpy

import theano
import theano.tensor as T

class LogisticRegression(object):
    """Multi-class Logistic Regression Class

    The logistic regression is fully described by a weight matrix :math:`W`
    and bias vector :math:`b`. Classification is done by projecting data
    points onto a set of hyperplanes, the distance to which is used to
    determine a class membership probability.
    """

    def __init__(self, input, n_in, n_out):
        """ Initialize the parameters of the logistic regression

        :type input: theano.tensor.TensorType
        :param input: symbolic variable that describes the input of the
                      architecture (one minibatch)

        :type n_in: int
        :param n_in: number of input units, the dimension of the space in
                     which the datapoints lie

        :type n_out: int
        :param n_out: number of output units, the dimension of the space in
                      which the labels lie

        """
        # start-snippet-1
        # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
        self.W = theano.shared(
            value=numpy.zeros(
                (n_in, n_out),
                dtype=theano.config.floatX
            ),
            name='W',
            borrow=True
        )
        # initialize the biases b as a vector of n_out 0s
        self.b = theano.shared(
            value=numpy.zeros(
                (n_out,),
                dtype=theano.config.floatX
            ),
            name='b',
            borrow=True
        )

        # symbolic expression for computing the matrix of class-membership
        # probabilities
        # Where:
        # W is a matrix where column-k represent the separation hyperplane for
        # class-k
        # x is a matrix where row-j  represents input training sample-j
        # b is a vector where element-k represent the free parameter of
        # hyperplane-k
        self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)

        # symbolic description of how to compute prediction as class whose
        # probability is maximal
        self.y_pred = T.argmax(self.p_y_given_x, axis=1)
        # end-snippet-1

        # parameters of the model
        self.params = [self.W, self.b]

        # keep track of model input
        self.input = input

    def negative_log_likelihood(self, y):
        """Return the mean of the negative log-likelihood of the prediction
        of this model under a given target distribution.

        .. math::

            \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
            \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
                \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
            \ell (\theta=\{W,b\}, \mathcal{D})

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label

        Note: we use the mean instead of the sum so that
              the learning rate is less dependent on the batch size
        """
        # start-snippet-2
        # y.shape[0] is (symbolically) the number of rows in y, i.e.,
        # number of examples (call it n) in the minibatch
        # T.arange(y.shape[0]) is a symbolic vector which will contain
        # [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
        # Log-Probabilities (call it LP) with one row per example and
        # one column per class LP[T.arange(y.shape[0]),y] is a vector
        # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
        # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
        # the mean (across minibatch examples) of the elements in v,
        # i.e., the mean log-likelihood across the minibatch.
        return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
        # end-snippet-2

    def errors(self, y):
        """Return a float representing the number of errors in the minibatch
        over the total number of examples of the minibatch ; zero one
        loss over the size of the minibatch

        :type y: theano.tensor.TensorType
        :param y: corresponds to a vector that gives for each example the
                  correct label
        """

        # check if y has same dimension of y_pred
        if y.ndim != self.y_pred.ndim:
            raise TypeError(
                'y should have the same shape as self.y_pred',
                ('y', y.type, 'y_pred', self.y_pred.type)
            )
        # check if y is of the correct datatype
        if y.dtype.startswith('int'):
            # the T.neq operator returns a vector of 0s and 1s, where 1
            # represents a mistake in prediction
            return T.mean(T.neq(self.y_pred, y))
        else:
            raise NotImplementedError()

def load_data(dataset):
    ''' Loads the dataset

    :type dataset: string
    :param dataset: the path to the dataset (here MNIST)
    '''

    #############
    # LOAD DATA #
    #############

    # Download the MNIST dataset if it is not present
    data_dir, data_file = os.path.split(dataset)
    if data_dir == "" and not os.path.isfile(dataset):
        # Check if dataset is in the data directory.
        new_path = os.path.join(
            os.path.split(__file__)[0],
            "..",
            "data",
            dataset
        )
        if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
            dataset = new_path

    if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':
        import urllib
        origin = (
            'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz'
        )
        print 'Downloading data from %s' % origin
        urllib.urlretrieve(origin, dataset)

    print '... loading data'

    # Load the dataset
    f = gzip.open(dataset, 'rb')
    train_set, valid_set, test_set = cPickle.load(f)
    f.close()
    #train_set, valid_set, test_set format: tuple(input, target)
    #input is a numpy.ndarray of 2 dimensions (a matrix)
    #where each row corresponds to an example. target is a
    #numpy.ndarray of 1 dimension (a vector) that has the same length
    #as the number of rows in the input. It gives the target
    #label for the example with the same index in the input.

    def shared_dataset(data_xy, borrow=True):
        """ Function that loads the dataset into shared variables

        The reason we store our dataset in shared variables is to allow
        Theano to copy it into the GPU memory (when code is run on GPU).
        Since copying data into the GPU is slow, copying a minibatch everytime
        is needed (the default behaviour if the data is not in a shared
        variable) would lead to a large decrease in performance.
        """
        data_x, data_y = data_xy
        shared_x = theano.shared(numpy.asarray(data_x,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        shared_y = theano.shared(numpy.asarray(data_y,
                                               dtype=theano.config.floatX),
                                 borrow=borrow)
        # When storing data on the GPU it has to be stored as floats
        # therefore we will store the labels as ``floatX`` as well
        # (``shared_y`` does exactly that). But during our computations
        # we need them as ints (we use labels as index, and if they are
        # floats it doesn't make sense) therefore instead of returning
        # ``shared_y`` we will have to cast it to int. This little hack
        # lets us get around this issue
        return shared_x, T.cast(shared_y, 'int32')

    test_set_x, test_set_y = shared_dataset(test_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    train_set_x, train_set_y = shared_dataset(train_set)

    rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
            (test_set_x, test_set_y)]
    return rval

def sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000,
                           dataset='mnist.pkl.gz',
                           batch_size=600):
    """
    Demonstrate stochastic gradient descent optimization of a log-linear
    model

    This is demonstrated on MNIST.

    :type learning_rate: float
    :param learning_rate: learning rate used (factor for the stochastic
                          gradient)

    :type n_epochs: int
    :param n_epochs: maximal number of epochs to run the optimizer

    :type dataset: string
    :param dataset: the path of the MNIST dataset file from
                 http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz

    """
    datasets = load_data(dataset)

    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]
    test_set_x, test_set_y = datasets[2]

    # compute number of minibatches for training, validation and testing
    n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
    n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size
    n_test_batches = test_set_x.get_value(borrow=True).shape[0] / batch_size

    ######################
    # BUILD ACTUAL MODEL #
    ######################
    print '... building the model'

    # allocate symbolic variables for the data
    index = T.lscalar()  # index to a [mini]batch

    # generate symbolic variables for input (x and y represent a
    # minibatch)
    x = T.matrix('x')  # data, presented as rasterized images
    y = T.ivector('y')  # labels, presented as 1D vector of [int] labels

    # construct the logistic regression class
    # Each MNIST image has size 28*28
    classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)

    # the cost we minimize during training is the negative log likelihood of
    # the model in symbolic format
    cost = classifier.negative_log_likelihood(y)

    # compiling a Theano function that computes the mistakes that are made by
    # the model on a minibatch
    test_model = theano.function(
        inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: test_set_x[index * batch_size: (index + 1) * batch_size],
            y: test_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )

    validate_model = theano.function(
        inputs=[index],
        outputs=classifier.errors(y),
        givens={
            x: valid_set_x[index * batch_size: (index + 1) * batch_size],
            y: valid_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )

    # compute the gradient of cost with respect to theta = (W,b)
    g_W = T.grad(cost=cost, wrt=classifier.W)
    g_b = T.grad(cost=cost, wrt=classifier.b)

    # start-snippet-3
    # specify how to update the parameters of the model as a list of
    # (variable, update expression) pairs.
    updates = [(classifier.W, classifier.W - learning_rate * g_W),
               (classifier.b, classifier.b - learning_rate * g_b)]

    # compiling a Theano function `train_model` that returns the cost, but in
    # the same time updates the parameter of the model based on the rules
    # defined in `updates`
    train_model = theano.function(
        inputs=[index],
        outputs=cost,
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size]
        }
    )
    # end-snippet-3

    ###############
    # TRAIN MODEL #
    ###############
    print '... training the model'
    # early-stopping parameters
    patience = 5000  # look at this many examples regardless
    patience_increase = 2  # wait this much longer when a new best is
                           # found
    improvement_threshold = 0.995  # a relative improvement of this much is
                                   # considered significant
    validation_frequency = min(n_train_batches, patience / 2)
                                   # go through this many
                                   # minibatches before checking the network
                                   # on the validation set; in this case we
                                   # check every epoch

    best_validation_loss = numpy.inf
    test_score = 0.
    start_time = timeit.default_timer()

    done_looping = False
    epoch = 0
    while (epoch < n_epochs) and (not done_looping):
        epoch = epoch + 1
        for minibatch_index in xrange(n_train_batches):

            minibatch_avg_cost = train_model(minibatch_index)
            # iteration number
            iter = (epoch - 1) * n_train_batches + minibatch_index

            if (iter + 1) % validation_frequency == 0:
                # compute zero-one loss on validation set
                validation_losses = [validate_model(i)
                                     for i in xrange(n_valid_batches)]
                this_validation_loss = numpy.mean(validation_losses)

                print(
                    'epoch %i, minibatch %i/%i, validation error %f %%' %
                    (
                        epoch,
                        minibatch_index + 1,
                        n_train_batches,
                        this_validation_loss * 100.
                    )
                )

                # if we got the best validation score until now
                if this_validation_loss < best_validation_loss:
                    #improve patience if loss improvement is good enough
                    if this_validation_loss < best_validation_loss * \
                       improvement_threshold:
                        patience = max(patience, iter * patience_increase)

                    best_validation_loss = this_validation_loss
                    # test it on the test set

                    test_losses = [test_model(i)
                                   for i in xrange(n_test_batches)]
                    test_score = numpy.mean(test_losses)

                    print(
                        (
                            '     epoch %i, minibatch %i/%i, test error of'
                            ' best model %f %%'
                        ) %
                        (
                            epoch,
                            minibatch_index + 1,
                            n_train_batches,
                            test_score * 100.
                        )
                    )

                    # save the best model
                    with open('best_model.pkl', 'wb') as f:
                        cPickle.dump(classifier, f)

            if patience <= iter:
                done_looping = True
                break

    end_time = timeit.default_timer()
    print(
        (
            'Optimization complete with best validation score of %f %%,'
            ' with test performance %f %%'
        )
        % (best_validation_loss * 100., test_score * 100.)
    )
    print 'The code ran for %d epochs, with %f epochs/sec' % (
        epoch, 1. * epoch / (end_time - start_time))
    print >> sys.stderr, ('The code for file ' +
                          os.path.split(__file__)[1] +
                          ' ran for %.1fs' % ((end_time - start_time)))

def predict():
    """
    An example of how to load a trained model and use it
    to predict labels.
    """

    # load the saved model
    classifier = cPickle.load(open('best_model.pkl', 'rb'))

    # compile a predictor function
    predict_model = theano.function(
        inputs=[classifier.input],
        outputs=classifier.y_pred)

    # We can test it on some examples from the test set
    dataset = 'mnist.pkl.gz'
    datasets = load_data(dataset)
    test_set_x, test_set_y = datasets[2]
    test_set_x = test_set_x.get_value()

    predicted_values = predict_model(test_set_x[:10])
    print ("Predicted values for the first 10 examples in test set:")
    print predicted_values

if __name__ == '__main__':
    sgd_optimization_mnist()
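
Running the script trains the model and writes best_model.pkl to the current directory; predict() can then be used from another session. A sketch of such a session, assuming the script was saved under the (hypothetical) name logistic_sgd.py:

    # Hypothetical session; "logistic_sgd.py" is an assumed file name for the script above
    #   $ python logistic_sgd.py    # trains the model and saves best_model.pkl
    from logistic_sgd import predict
    predict()                       # prints the predicted digits for the first 10 test images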