Classifying CIFAR-10 Images with the KNN Algorithm

KNN classification of CIFAR-10, with cross-validation. The CIFAR-10 dataset consists of 60,000 32x32 color images in 10 classes, split into 50,000 training and 10,000 test images. The code is organized as follows:

knn.py : the main experiment pipeline

from cs231n.data_utils import load_CIFAR10
from cs231n.classifiers import KNearestNeighbor
import random
import numpy as np
import matplotlib.pyplot as plt
# set plt params
plt.rcParams['figure.figsize'] = (10.0, 8.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
x_train,y_train,x_test,y_test = load_CIFAR10(cifar10_dir)
print 'x_train : ', x_train.shape
print 'y_train : ', y_train.shape
print 'x_test : ', x_test.shape, ' y_test : ', y_test.shape

# visualize some training examples
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
    # flatnonzero returns the indices of the non-zero elements;
    # with ten classes, labels in y_train and y_test lie in [0...9]
    idxs = np.flatnonzero(y_train == y)
    idxs = np.random.choice(idxs, samples_per_class, replace=False)
    for i,idx in enumerate(idxs):
        plt_idx = i*num_classes + y + 1
        # subplot(m, n, p)
        # m: number of rows
        # n: number of columns
        # p: position of this subplot (1-indexed)
        plt.subplot(samples_per_class, num_classes, plt_idx)
        plt.imshow(x_train[idx].astype('uint8'))
        # hide the axis labels
        plt.axis('off')
        if i == 0:
            plt.title(cls)
plt.show()

# subsample the data for more efficient code execution
num_training = 5000
#range(5)=[0,1,2,3,4]
mask = range(num_training)
x_train = x_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
x_test = x_test[mask]
y_test = y_test[mask]
# the image data has three channels
# the next two steps reshape each 32*32*3 image into a 3072-d row vector
x_train = np.reshape(x_train, (x_train.shape[0], -1))
x_test = np.reshape(x_test, (x_test.shape[0], -1))
print 'after subsample and reshape:'
print 'x_train : ', x_train.shape, ' x_test : ', x_test.shape
#KNN classifier
classifier = KNearestNeighbor()
classifier.train(x_train,y_train)
# compute the distance between test_data and train_data
dists = classifier.compute_distances_no_loops(x_test)
#each row is a single test example and its distances to training example
print 'dists shape : ', dists.shape
plt.imshow(dists, interpolation='none')
plt.show()
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
acc = float(num_correct) / num_test
print 'k = 5, the accuracy is : ', acc

#Cross-Validation

# 5-fold cross-validation: split the training data into 5 folds
num_folds = 5
# candidate values for the kNN hyperparameter k
k_choice = [1, 5, 8, 11, 15, 18, 20, 50, 100]
x_train_folds = np.array_split(x_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
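# np.array_split returns a list of num_folds sub-arrays
# (5000 / 5 = 1000 training examples per fold here)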

k_to_acc = {}
for k in k_choice:
    k_to_acc[k] = []
for k in k_choice:
    print 'cross validation : k = ', k
    for j in range(num_folds):
        # vstack stacks the remaining folds vertically into one training matrix
        x_train_cv = np.vstack(x_train_folds[0:j] + x_train_folds[j+1:])
        x_test_cv = x_train_folds[j]

        # hstack concatenates the label folds horizontally, e.g.
        # np.hstack((np.array((1,2,3)), np.array((2,3,4)))) -> array([1, 2, 3, 2, 3, 4])
        y_train_cv = np.hstack(y_train_folds[0:j] + y_train_folds[j+1:])
        y_test_cv = y_train_folds[j]

        classifier.train(x_train_cv,y_train_cv)
        dists_cv = classifier.compute_distances_no_loops(x_test_cv)
        y_test_pred = classifier.predict_labels(dists_cv,k)
        num_correct = np.sum(y_test_pred == y_test_cv)
        # divide by the fold size, not num_test: each fold has 1000 examples
        acc = float(num_correct) / y_test_cv.shape[0]
        k_to_acc[k].append(acc)
print k_to_acc
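
To pick a good k, it helps to plot the cross-validation results. A minimal sketch of the usual accuracy-vs-k plot, using only num_folds, k_choice, and the k_to_acc dictionary computed above (matplotlib is already imported as plt):

# plot the per-fold accuracies for each k, plus the mean with std error bars
for k in k_choice:
    plt.scatter([k] * num_folds, k_to_acc[k])
accuracies_mean = np.array([np.mean(k_to_acc[k]) for k in k_choice])
accuracies_std = np.array([np.std(k_to_acc[k]) for k in k_choice])
plt.errorbar(k_choice, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('cross-validation accuracy')
plt.show()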

k_nearest_neighbor.py : implementation of the kNN algorithm:

import numpy as np
from collections import Counter
class KNearestNeighbor(object):
  """ a kNN classifier with L2 distance """

  def __init__(self):
    pass

  def train(self, X, y):
    """
    Train the classifier. For k-nearest neighbors this is just
    memorizing the training data.

    Inputs:
    - X: A numpy array of shape (num_train, D) containing the training data
      consisting of num_train samples each of dimension D; each row is a
      training example.
    - y: A numpy array of shape (num_train,) containing the training labels,
      where y[i] is the label for X[i].
    """
    self.X_train = X
    self.y_train = y

  def predict(self, X, k=1, num_loops=0):
    """
    Predict labels for test data using this classifier.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data consisting
         of num_test samples each of dimension D.
    - k: The number of nearest neighbors that vote for the predicted labels.
    - num_loops: Determines which implementation to use to compute distances
      between training points and testing points.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    if num_loops == 0:
      dists = self.compute_distances_no_loops(X)
    elif num_loops == 1:
      dists = self.compute_distances_one_loop(X)
    elif num_loops == 2:
      dists = self.compute_distances_two_loops(X)
    else:
      raise ValueError('Invalid value %d for num_loops' % num_loops)

    return self.predict_labels(dists, k=k)
  def compute_distances_two_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a nested loop over both the training data and the
    test data.

    Inputs:
    - X: A numpy array of shape (num_test, D) containing test data.

    Returns:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      is the Euclidean distance between the ith test point and the jth training
      point.
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      for j in xrange(num_train):
        #####################################################################
        # TODO:                                                             #
        # Compute the l2 distance between the ith test point and the jth    #
        # training point, and store the result in dists[i, j]. You should   #
        # not use a loop over dimension.                                    #
        #####################################################################
        # Euclidean distance; the explicit form would be:
        # dists[i,j] = np.sqrt(np.sum((X[i,:] - self.X_train[j,:])**2))
        # np.linalg.norm computes the same thing more concisely
        dists[i,j] = np.linalg.norm(self.X_train[j,:] - X[i,:])
        #####################################################################
        #                       END OF YOUR CODE                            #
        #####################################################################
    return dists

  def compute_distances_one_loop(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using a single loop over the test data.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in xrange(num_test):
      #######################################################################
      # TODO:                                                               #
      # Compute the l2 distance between the ith test point and all training #
      # points, and store the result in dists[i, :].                        #
      #######################################################################
      # subtract X[i,:] from every row of X_train, then take each row's norm
      # axis=1 applies the norm row-wise
      dists[i,:] = np.linalg.norm(self.X_train - X[i,:], axis=1)
      #######################################################################
      #                         END OF YOUR CODE                            #
      #######################################################################
    return dists

  def compute_distances_no_loops(self, X):
    """
    Compute the distance between each test point in X and each training point
    in self.X_train using no explicit loops.

    Input / Output: Same as compute_distances_two_loops
    """
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    #########################################################################
    # TODO:                                                                 #
    # Compute the l2 distance between all test points and all training      #
    # points without using any explicit loops, and store the result in      #
    # dists.                                                                #
    #                                                                       #
    # You should implement this function using only basic array operations; #
    # in particular you should not use functions from scipy.                #
    #                                                                       #
    # HINT: Try to formulate the l2 distance using matrix multiplication    #
    #       and two broadcast sums.                                         #
    #########################################################################
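    # Expand the squared L2 distance:
    #   ||x_i - y_j||^2 = ||x_i||^2 - 2 * x_i . y_j + ||y_j||^2
    # so the whole matrix can be built from one matrix product (the cross
    # term) plus two broadcast sums of squared norms.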
    M = np.dot(X, self.X_train.T)
    te = np.square(X).sum(axis=1)
    tr = np.square(self.X_train).sum(axis=1)
    # tr broadcasts across rows, te[:, np.newaxis] across columns;
    # this keeps dists a plain ndarray (np.matrix is unnecessary)
    dists = np.sqrt(-2 * M + tr + te[:, np.newaxis])
    #########################################################################
    #                         END OF YOUR CODE                              #
    #########################################################################
    return dists

  def predict_labels(self, dists, k=1):
    """
    Given a matrix of distances between test points and training points,
    predict a label for each test point.

    Inputs:
    - dists: A numpy array of shape (num_test, num_train) where dists[i, j]
      gives the distance between the ith test point and the jth training point.

    Returns:
    - y: A numpy array of shape (num_test,) containing predicted labels for the
      test data, where y[i] is the predicted label for the test point X[i].
    """
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in xrange(num_test):
      # A list of length k storing the labels of the k nearest neighbors to
      # the ith test point.
      closest_y = []
      #########################################################################
      # TODO:                                                                 #
      # Use the distance matrix to find the k nearest neighbors of the ith    #
      # testing point, and use self.y_train to find the labels of these       #
      # neighbors. Store these labels in closest_y.                           #
      # Hint: Look up the function numpy.argsort.                             #
      #########################################################################
      labels = self.y_train[np.argsort(dists[i,:])].flatten()
      closest_y = labels[0:k]
      #########################################################################
      # TODO:                                                                 #
      # Now that you have found the labels of the k nearest neighbors, you    #
      # need to find the most common label in the list closest_y of labels.   #
      # Store this label in y_pred[i]. Break ties by choosing the smaller     #
      # label.                                                                #
      #########################################################################
      c = Counter(closest_y)
      max_count = c.most_common(1)[0][1]
      # break ties by choosing the smaller label, as required above
      y_pred[i] = min(label for label, count in c.items() if count == max_count)
      #########################################################################
      #                           END OF YOUR CODE                            #
      #########################################################################

    return y_pred

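As a sanity check, the three distance implementations should agree on the same inputs; a minimal sketch (a hypothetical snippet, reusing the classifier and x_test from knn.py above):

# all three implementations should agree up to floating-point error;
# np.linalg.norm on a 2-D array computes the Frobenius norm of the difference
dists_two = classifier.compute_distances_two_loops(x_test)
dists_one = classifier.compute_distances_one_loop(x_test)
dists_none = classifier.compute_distances_no_loops(x_test)
print 'one loop vs two loops difference :', np.linalg.norm(dists_one - dists_two)
print 'no loops vs two loops difference :', np.linalg.norm(dists_none - dists_two)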
data_utils.py : loading the CIFAR-10 data

import cPickle as pickle
import numpy as np
import os
from scipy.misc import imread

def load_CIFAR_batch(filename):
  """ load single batch of cifar """
  with open(filename, 'rb') as f:
    datadict = pickle.load(f)
    X = datadict['data']
    Y = datadict['labels']
    X = X.reshape(10000, 3, 32, 32).transpose(0,2,3,1).astype("float")
    Y = np.array(Y)
    return X, Y

def load_CIFAR10(ROOT):
  """ load all of cifar """
  xs = []
  ys = []
  for b in range(1,6):
    f = os.path.join(ROOT, 'data_batch_%d' % (b, ))
    X, Y = load_CIFAR_batch(f)
    xs.append(X)
    ys.append(Y)
  Xtr = np.concatenate(xs)
  Ytr = np.concatenate(ys)
  del X, Y
  Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch'))
  return Xtr, Ytr, Xte, Yte

def load_tiny_imagenet(path, dtype=np.float32):
  """
  Load TinyImageNet. Each of TinyImageNet-100-A, TinyImageNet-100-B, and
  TinyImageNet-200 have the same directory structure, so this can be used
  to load any of them.

  Inputs:
  - path: String giving path to the directory to load.
  - dtype: numpy datatype used to load the data.

  Returns: A tuple of
  - class_names: A list where class_names[i] is a list of strings giving the
    WordNet names for class i in the loaded dataset.
  - X_train: (N_tr, 3, 64, 64) array of training images
  - y_train: (N_tr,) array of training labels
  - X_val: (N_val, 3, 64, 64) array of validation images
  - y_val: (N_val,) array of validation labels
  - X_test: (N_test, 3, 64, 64) array of testing images.
  - y_test: (N_test,) array of test labels; if test labels are not available
    (such as in student code) then y_test will be None.
  """
  # First load wnids
  with open(os.path.join(path, 'wnids.txt'), 'r') as f:
    wnids = [x.strip() for x in f]

  # Map wnids to integer labels
  wnid_to_label = {wnid: i for i, wnid in enumerate(wnids)}

  # Use words.txt to get names for each class
  with open(os.path.join(path, 'words.txt'), 'r') as f:
    wnid_to_words = dict(line.split('\t') for line in f)
    for wnid, words in wnid_to_words.iteritems():
      wnid_to_words[wnid] = [w.strip() for w in words.split(',')]
  class_names = [wnid_to_words[wnid] for wnid in wnids]

  # Next load training data.
  X_train = []
  y_train = []
  for i, wnid in enumerate(wnids):
    if (i + 1) % 20 == 0:
      print 'loading training data for synset %d / %d' % (i + 1, len(wnids))
    # To figure out the filenames we need to open the boxes file
    boxes_file = os.path.join(path, 'train', wnid, '%s_boxes.txt' % wnid)
    with open(boxes_file, 'r') as f:
      filenames = [x.split('\t')[0] for x in f]
    num_images = len(filenames)

    X_train_block = np.zeros((num_images, 3, 64, 64), dtype=dtype)
    y_train_block = wnid_to_label[wnid] * np.ones(num_images, dtype=np.int64)
    for j, img_file in enumerate(filenames):
      img_file = os.path.join(path, 'train', wnid, 'images', img_file)
      img = imread(img_file)
      if img.ndim == 2:
        ## grayscale file
        img.shape = (64, 64, 1)
      X_train_block[j] = img.transpose(2, 0, 1)
    X_train.append(X_train_block)
    y_train.append(y_train_block)

  # We need to concatenate all training data
  X_train = np.concatenate(X_train, axis=0)
  y_train = np.concatenate(y_train, axis=0)

  # Next load validation data
  with open(os.path.join(path, 'val', 'val_annotations.txt'), 'r') as f:
    img_files = []
    val_wnids = []
    for line in f:
      img_file, wnid = line.split('\t')[:2]
      img_files.append(img_file)
      val_wnids.append(wnid)
    num_val = len(img_files)
    y_val = np.array([wnid_to_label[wnid] for wnid in val_wnids])
    X_val = np.zeros((num_val, 3, 64, 64), dtype=dtype)
    for i, img_file in enumerate(img_files):
      img_file = os.path.join(path, 'val', 'images', img_file)
      img = imread(img_file)
      if img.ndim == 2:
        img.shape = (64, 64, 1)
      X_val[i] = img.transpose(2, 0, 1)

  # Next load test images
  # Students won't have test labels, so we need to iterate over files in the
  # images directory.
  img_files = os.listdir(os.path.join(path, 'test', 'images'))
  X_test = np.zeros((len(img_files), 3, 64, 64), dtype=dtype)
  for i, img_file in enumerate(img_files):
    img_file = os.path.join(path, 'test', 'images', img_file)
    img = imread(img_file)
    if img.ndim == 2:
      img.shape = (64, 64, 1)
    X_test[i] = img.transpose(2, 0, 1)

  y_test = None
  y_test_file = os.path.join(path, 'test', 'test_annotations.txt')
  if os.path.isfile(y_test_file):
    with open(y_test_file, 'r') as f:
      img_file_to_wnid = {}
      for line in f:
        line = line.split('\t')
        img_file_to_wnid[line[0]] = line[1]
    y_test = [wnid_to_label[img_file_to_wnid[img_file]] for img_file in img_files]
    y_test = np.array(y_test)

  return class_names, X_train, y_train, X_val, y_val, X_test, y_test

def load_models(models_dir):
  """
  Load saved models from disk. This will attempt to unpickle all files in a
  directory; any files that give errors on unpickling (such as README.txt) will
  be skipped.

  Inputs:
  - models_dir: String giving the path to a directory containing model files.
    Each model file is a pickled dictionary with a 'model' field.

  Returns:
  A dictionary mapping model file names to models.
  """
  models = {}
  for model_file in os.listdir(models_dir):
    with open(os.path.join(models_dir, model_file), 'rb') as f:
      try:
        models[model_file] = pickle.load(f)['model']
      except pickle.UnpicklingError:
        continue
  return models

Via cross-validation, the best value is k = 7 with accuracy = 0.282. That is far too low; tomorrow I will repeat this experiment with a CNN...
