Haar Feature-based Cascade Classifier for Object Detection
First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and with negative examples: arbitrary images of the same size. (Training uses positive and negative samples of the same size.)
After a classifier is trained, it can be applied to a region of interest (of the same size as used during the training) in an input image. The classifier outputs a “1” if the region is likely to show the object (i.e., face/car), and “0” otherwise.
To search for the object in the whole image, one can move the search window across the image and check every location using the classifier. The classifier is designed so that it can easily be "resized" in order to find objects of interest at different sizes, which is more efficient than resizing the image itself. So, to find an object of unknown size in the image, the scan procedure should be repeated several times at different scales. (Sliding window at different scales: the classifier is resized rather than the image.)
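The sketch below outlines that multi-scale, sliding-window scan. It is conceptual only: classify is a hypothetical per-window predicate standing in for the trained cascade (not an OpenCV API), and the 10% window step is an arbitrary choice; CascadeClassifier::detectMultiScale() performs an equivalent search internally.

#include <opencv2/core/core.hpp>
#include <algorithm>
#include <vector>

// Conceptual multi-scale scan with a hypothetical per-window predicate.
// The window grows from scale to scale instead of shrinking the image,
// mirroring the "resize the detector, not the image" idea above.
std::vector<cv::Rect> scanAllScales(const cv::Mat& gray, cv::Size window,
                                    double scaleFactor,
                                    bool (*classify)(const cv::Mat&))
{
    std::vector<cv::Rect> hits;
    for (double s = 1.0;
         window.width * s <= gray.cols && window.height * s <= gray.rows;
         s *= scaleFactor)
    {
        int w = cvRound(window.width * s), h = cvRound(window.height * s);
        int step = std::max(1, w / 10);          // shift the window by roughly 10% of its width
        for (int y = 0; y + h <= gray.rows; y += step)
            for (int x = 0; x + w <= gray.cols; x += step)
            {
                cv::Rect r(x, y, w, h);
                if (classify(gray(r)))           // run the classifier on this location
                    hits.push_back(r);
            }
    }
    return hits;
}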
The word “cascade” in the classifier name means that the resultant classifier consists of several simpler classifiers (stages) that are applied subsequently to a region of interest until, at some stage, the candidate is rejected or all the stages are passed. (A cascade classifier is a composition of several simple stage classifiers.)
The word “boosted” means that the classifiers at every stage of the cascade are complex themselves and are built out of basic classifiers using one of four different boosting techniques (weighted voting). Currently Discrete Adaboost, Real Adaboost, Gentle Adaboost and Logitboost are supported. (At each stage, the basic classifiers vote with weights.)
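As a rough illustration of both ideas, the sketch below evaluates a candidate window stage by stage, combining the basic classifiers of each stage by weighted voting and rejecting the window at the first stage whose boosted score falls below its threshold. The WeakClassifier and Stage structures are hypothetical, not OpenCV types.

#include <opencv2/core/core.hpp>
#include <vector>

// Hypothetical layout of one boosted stage: each basic (weak) classifier
// returns a response for the current window, and the responses are combined
// by weighted voting and compared against the stage threshold.
struct WeakClassifier
{
    double weight;                              // boosting weight (vote strength)
    double (*response)(const cv::Mat& window);  // basic decision-tree classifier
};

struct Stage
{
    std::vector<WeakClassifier> weak;
    double threshold;
};

// Conceptual cascade evaluation: the window must pass every stage in order;
// the first stage whose score falls below its threshold rejects the candidate.
bool passesCascade(const std::vector<Stage>& stages, const cv::Mat& window)
{
    for (size_t s = 0; s < stages.size(); ++s)
    {
        double score = 0.0;
        for (size_t i = 0; i < stages[s].weak.size(); ++i)
            score += stages[s].weak[i].weight * stages[s].weak[i].response(window);
        if (score < stages[s].threshold)
            return false;   // early rejection at stage s
    }
    return true;            // all stages passed
}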
The basic classifiers are decision-tree classifiers with at least 2 leaves. Haar-like features are the input to the basic classifiers and are calculated as described below. (Shallow decision trees; the Haar-like features fed into them are computed as described next.)
The feature used in a particular classifier is specified by its shape (1a, 2b etc.), position within the region of interest and the scale (this scale is not the same as the scale used at the detection stage, though these two scales are multiplied).
For example, in the case of the third line feature (2c), the response is calculated as the difference between the sum of image pixels under the rectangle covering the whole feature (including the two white stripes and the black stripe in the middle) and the sum of the image pixels under the black stripe multiplied by 3, in order to compensate for the differences in the areas. The sums of pixel values over rectangular regions are calculated rapidly using integral images (see below and the integral() description).
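The sketch below shows how such a line-feature response can be computed with cv::integral(): once the integral image is built, each rectangle sum costs only four array look-ups. The stripe geometry chosen here (the middle vertical third as the black stripe) is illustrative; only the "whole rectangle minus three times the black stripe" formula comes from the description above.

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Sum of pixel values inside rect r, read from an integral image `sum`
// as produced by cv::integral() (size (rows+1) x (cols+1), CV_64F here).
static double rectSum(const cv::Mat& sum, const cv::Rect& r)
{
    return sum.at<double>(r.y,            r.x)
         + sum.at<double>(r.y + r.height, r.x + r.width)
         - sum.at<double>(r.y,            r.x + r.width)
         - sum.at<double>(r.y + r.height, r.x);
}

// Response of a three-stripe "line" feature (the 2c case described above):
// sum over the whole rectangle minus three times the sum under the black
// middle stripe, so the white and black areas contribute with equal weight.
double lineFeatureResponse(const cv::Mat& gray, const cv::Rect& feature)
{
    cv::Mat sum;
    cv::integral(gray, sum, CV_64F);   // each rectangle sum is O(1) afterwards

    int stripeW = feature.width / 3;
    cv::Rect black(feature.x + stripeW, feature.y, stripeW, feature.height);

    return rectSum(sum, feature) - 3.0 * rectSum(sum, black);
}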
CascadeClassifier::load
Loads a classifier from a file.
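A minimal usage sketch; haarcascade_frontalface_alt.xml is the standard face cascade shipped with OpenCV (data/haarcascades/), so adjust the path to your installation.

#include <opencv2/objdetect/objdetect.hpp>
#include <cstdio>

int main()
{
    cv::CascadeClassifier face_cascade;

    // load() returns false if the file cannot be opened or parsed.
    if (!face_cascade.load("haarcascade_frontalface_alt.xml"))
    {
        std::printf("--(!)Error loading cascade file\n");
        return -1;
    }
    return 0;
}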
CascadeClassifier::read
Reads a classifier from a FileStorage node.
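A small sketch of loading the same cascade through an already-open FileStorage; getFirstTopLevelNode() returns the root node of the cascade XML. loadFromStorage is an illustrative helper, not an OpenCV function.

#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <string>

// Illustrative helper: read a cascade from an already-open FileStorage.
bool loadFromStorage(cv::CascadeClassifier& cascade, const std::string& filename)
{
    cv::FileStorage fs(filename, cv::FileStorage::READ);
    return fs.isOpened() && cascade.read(fs.getFirstTopLevelNode());
}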
CascadeClassifier::detectMultiScale
Detects objects of different sizes in the input image. The detected objects are returned as a list of rectangles.
void CascadeClassifier::detectMultiScale(const Mat& image, vector<Rect>& objects, double scaleFactor=1.1, int minNeighbors=3, int flags=0, Size minSize=Size(), Size maxSize=Size())
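A typical call might look like the sketch below, assuming face_cascade has already been loaded and gray is a single-channel input image: scaleFactor=1.1 reduces the search scale by about 10% per pass, minNeighbors=3 discards candidates supported by too few overlapping detections, and minSize/maxSize bound the object size. detectFaces is an illustrative wrapper, not an OpenCV function.

#include <opencv2/core/core.hpp>
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>

std::vector<cv::Rect> detectFaces(cv::CascadeClassifier& face_cascade, const cv::Mat& gray)
{
    std::vector<cv::Rect> faces;
    face_cascade.detectMultiScale(gray, faces,
                                  1.1,               // scaleFactor: scale step between search scales
                                  3,                 // minNeighbors: required overlapping detections
                                  0,                 // flags
                                  cv::Size(30, 30),  // minSize: ignore smaller candidates
                                  cv::Size());       // maxSize: empty means no upper limit
    return faces;
}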
CascadeClassifier::setImage
Sets an image for detection. The function is automatically called by CascadeClassifier::detectMultiScale() at every image scale. But if you want to test various locations manually using CascadeClassifier::runAt(), you need to call this function first to set the image.
CascadeClassifier::runAt
The function returns 1 if the cascade classifier detects an object at the given location. Otherwise, it returns the negated index of the stage at which the candidate has been rejected.
Code
#include "stdafx.h" #include "opencv2/objdetect/objdetect.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include <iostream> #include <stdio.h> using namespace std; using namespace cv; /** Function Headers */ void detectAndDisplay( Mat frame ); /** Global variables */ String face_cascade_name = "haarcascade_frontalface_alt.xml"; String eyes_cascade_name = "haarcascade_eye_tree_eyeglasses.xml"; CascadeClassifier face_cascade; CascadeClassifier eyes_cascade; string window_name = "Capture - Face detection"; RNG rng(12345); /** @function main */ int main( int argc, const char** argv ) { CvCapture* capture; Mat frame; //-- 1. Load the cascades if( !face_cascade.load( face_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; }; if( !eyes_cascade.load( eyes_cascade_name ) ){ printf("--(!)Error loading\n"); return -1; }; //-- 2. Read the video stream capture = cvCaptureFromCAM( -1 ); if( capture ) { while( true ) { frame = cvQueryFrame( capture ); //-- 3. Apply the classifier to the frame if( !frame.empty() ) { detectAndDisplay( frame ); } else { printf(" --(!) No captured frame -- Break!"); break; } int c = waitKey(10); if( (char)c == ‘c‘ ) { break; } } } return 0; } /** @function detectAndDisplay */ void detectAndDisplay( Mat frame ) { std::vector<Rect> faces; Mat frame_gray; cvtColor( frame, frame_gray, CV_BGR2GRAY ); equalizeHist( frame_gray, frame_gray ); // 直方图均衡化,增强对比度 //-- Detect faces face_cascade.detectMultiScale( frame_gray, faces, 1.1, 2, 0|CV_HAAR_SCALE_IMAGE, Size(30, 30) ); for( size_t i = 0; i < faces.size(); i++ ) { Point center( faces[i].x + faces[i].width*0.5, faces[i].y + faces[i].height*0.5 ); ellipse( frame, center, Size( faces[i].width*0.5, faces[i].height*0.5), 0, 0, 360, Scalar( 255, 0, 255 ), 4, 8, 0 ); Mat faceROI = frame_gray( faces[i] ); std::vector<Rect> eyes; //-- In each face, detect eyes eyes_cascade.detectMultiScale( faceROI, eyes, 1.1, 2, 0 |CV_HAAR_SCALE_IMAGE, Size(30, 30) ); for( size_t j = 0; j < eyes.size(); j++ ) { Point center( faces[i].x + eyes[j].x + eyes[j].width*0.5, faces[i].y + eyes[j].y + eyes[j].height*0.5 ); int radius = cvRound( (eyes[j].width + eyes[j].height)*0.25 ); circle( frame, center, radius, Scalar( 255, 0, 0 ), 4, 8, 0 ); } } //-- Show what you got imshow( window_name, frame ); }