OpenCV Tutorials —— Introduction to Support Vector Machines

Support Vector Machines

 

How is the optimal hyperplane computed?

Let's introduce the notation used to formally define a hyperplane:

$$f(x) = \beta_0 + \beta^{T} x,$$

where $\beta$ is known as the weight vector and $\beta_0$ as the bias.

 

The optimal hyperplane can be represented in an infinite number of different ways by scaling $\beta$ and $\beta_0$. As a matter of convention, among all the possible representations of the hyperplane, the one chosen is

$$|\beta_0 + \beta^{T} x| = 1,$$

where $x$ symbolizes the training examples closest to the hyperplane. In general, the training examples that are closest to the hyperplane are called support vectors. This representation is known as the canonical hyperplane.

Now, we use the result of geometry that gives the distance between a point $x$ and a hyperplane $(\beta, \beta_0)$:

$$\mathrm{distance} = \frac{|\beta_0 + \beta^{T} x|}{\|\beta\|}.$$

In particular, for the canonical hyperplane, the numerator is equal to one and the distance to the support vectors is

$$\mathrm{distance}_{\text{support vectors}} = \frac{|\beta_0 + \beta^{T} x|}{\|\beta\|} = \frac{1}{\|\beta\|}.$$

Recall that the margin introduced in the previous section, here denoted as $M$, is twice the distance to the closest examples:

$$M = \frac{2}{\|\beta\|}.$$

Finally, the problem of maximizing $M$ is equivalent to the problem of minimizing a function $L(\beta)$ subject to some constraints. The constraints model the requirement for the hyperplane to classify correctly all the training examples $x_i$. Formally,

$$\min_{\beta, \beta_0} L(\beta) = \frac{1}{2}\|\beta\|^{2} \quad \text{subject to} \quad y_i(\beta^{T} x_i + \beta_0) \geq 1 \;\; \forall i,$$

where $y_i$ represents each of the labels of the training examples.

This is a problem of Lagrangian optimization that can be solved using Lagrange multipliers to obtain the weight vector $\beta$ and the bias $\beta_0$ of the optimal hyperplane.
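For completeness, here is a sketch of the standard Lagrangian step behind that statement (textbook SVM material; this derivation is not spelled out in the original tutorial):

$$\mathcal{L}(\beta, \beta_0, \alpha) = \frac{1}{2}\|\beta\|^{2} - \sum_{i} \alpha_i \left[ y_i(\beta^{T} x_i + \beta_0) - 1 \right], \qquad \alpha_i \geq 0.$$

Setting the derivatives with respect to $\beta$ and $\beta_0$ to zero gives

$$\beta = \sum_{i} \alpha_i y_i x_i, \qquad \sum_{i} \alpha_i y_i = 0,$$

so only the examples with $\alpha_i > 0$ (exactly the support vectors) contribute to the weight vector.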

 

float labels[4] = {1.0, -1.0, -1.0, -1.0};
float trainingData[4][2] = {{501, 10}, {255, 10}, {501, 255}, {10, 501}};
Mat trainingDataMat(4, 2, CV_32FC1, trainingData);   // 4 samples, 2 features each
Mat labelsMat      (4, 1, CV_32FC1, labels);         // one label per sample

The function CvSVM::train that will be used afterwards requires the training data to be stored as Mat objects of floats, which is why the plain C arrays above are wrapped into Mat headers (the array-to-Mat conversion).

 

 

CvSVMParams params;
params.svm_type    = CvSVM::C_SVC;                                  // C-Support Vector Classification
params.kernel_type = CvSVM::LINEAR;                                 // no mapping is applied to the training data
params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);   // stop the iterative optimization after at most 100 iterations

 

The method CvSVM::get_support_vector_count outputs the total number of support vectors used in the problem, and with the method CvSVM::get_support_vector we obtain each of the support vectors by index.
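A minimal sketch of how the two calls combine (the full program below does the same thing to draw the support vectors):

int c = SVM.get_support_vector_count();          // how many support vectors the solver kept
for (int i = 0; i < c; ++i)
{
    const float* v = SVM.get_support_vector(i);  // i-th support vector; here v[0], v[1] are its two features
}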

 

 

Code

#include "stdafx.h"

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/ml/ml.hpp>

using namespace cv;

int main()
{
	// Data for visual representation
	int width = 512, height = 512;
	Mat image = Mat::zeros(height, width, CV_8UC3);

	// Set up training data
	float labels[4] = {1.0, -1.0, -1.0, -1.0};
	Mat labelsMat(4, 1, CV_32FC1, labels);

	float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
	Mat trainingDataMat(4, 2, CV_32FC1, trainingData);

	// Set up SVM's parameters
	CvSVMParams params;
	params.svm_type    = CvSVM::C_SVC;
	params.kernel_type = CvSVM::LINEAR;	// no mapping is applied to the training data
	params.term_crit   = cvTermCriteria(CV_TERMCRIT_ITER, 100, 1e-6);	// termination criteria: the quadratic optimization stops after at most 100 iterations

	// Train the SVM
	CvSVM SVM;
	SVM.train(trainingDataMat, labelsMat, Mat(), Mat(), params);

	Vec3b green(0,255,0), blue (255,0,0);
	// Show the decision regions given by the SVM
	for (int i = 0; i < image.rows; ++i)
		for (int j = 0; j < image.cols; ++j)
		{
			Mat sampleMat = (Mat_<float>(1,2) << j,i);	// classify every pixel of the image
			float response = SVM.predict(sampleMat);

			if (response == 1)
				image.at<Vec3b>(i,j)  = green;
			else if (response == -1)
				image.at<Vec3b>(i,j)  = blue;
		}

	// Show the training data
	int thickness = -1;
	int lineType = 8;
	circle( image, Point(501,  10), 5, Scalar(  0,   0,   0), thickness, lineType);
	circle( image, Point(255,  10), 5, Scalar(255, 255, 255), thickness, lineType);
	circle( image, Point(501, 255), 5, Scalar(255, 255, 255), thickness, lineType);
	circle( image, Point( 10, 501), 5, Scalar(255, 255, 255), thickness, lineType);

	// Show support vectors
	thickness = 2;
	lineType  = 8;
	int c     = SVM.get_support_vector_count();

	for (int i = 0; i < c; ++i)
	{
		const float* v = SVM.get_support_vector(i);
		circle( image,  Point( (int) v[0], (int) v[1]),   6,  Scalar(128, 128, 128), thickness, lineType);
	}

	imwrite("result.png", image);        // save the image

	imshow("SVM Simple Example", image); // show it to the user
	waitKey(0);

	return 0;
}
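The listing above uses the old C-style API from OpenCV 2.x. On OpenCV 3 and later, CvSVM and CvSVMParams were replaced by the cv::ml::SVM class; the following is a minimal sketch of the equivalent setup, a port against the 3.x ml module rather than part of the original tutorial. Note that this API expects the class labels as 32-bit signed integers (CV_32SC1).

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

using namespace cv;
using namespace cv::ml;

int main()
{
    // Same four training points; class labels must be CV_32S here
    int labels[4] = {1, -1, -1, -1};
    float trainingData[4][2] = { {501, 10}, {255, 10}, {501, 255}, {10, 501} };
    Mat trainingDataMat(4, 2, CV_32FC1, trainingData);
    Mat labelsMat(4, 1, CV_32SC1, labels);

    // Same parameters as CvSVMParams above, expressed through setters
    Ptr<SVM> svm = SVM::create();
    svm->setType(SVM::C_SVC);
    svm->setKernel(SVM::LINEAR);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));

    // ROW_SAMPLE: each row of trainingDataMat is one training sample
    svm->train(trainingDataMat, ROW_SAMPLE, labelsMat);

    // predict() plays the role of CvSVM::predict; getSupportVectors()
    // replaces get_support_vector_count / get_support_vector
    Mat sampleMat = (Mat_<float>(1, 2) << 300, 300);
    float response = svm->predict(sampleMat);
    Mat sv = svm->getSupportVectors();   // one support vector per row

    std::cout << "response = " << response
              << ", support vectors: " << sv.rows << std::endl;
    return 0;
}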