OpenCV Tutorials —— Back Projection

Back Projection

A way of recording how well the pixels of a given image fit the pixel distribution of a histogram model.

Back projection first computes the histogram model of some feature, then uses that model to find where the feature occurs in an image.

backproject reads values directly from the histogram. Taking grayscale as an example: the larger the area a gray level occupies in the whole image, the larger its value in the histogram, and the larger (brighter) the new value of the corresponding pixels after back projection; conversely, a gray level that occupies a smaller area gets a smaller new value.

source image -> histogram -> backproject image
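
A minimal grayscale sketch of this pipeline, assuming OpenCV 2.x as in the demos below and a placeholder file name "input.jpg" (the full hue-based demo appears in the Code sections further down):

Mat gray = imread( "input.jpg", 0 );   // source image (single channel)
int histSize = 256;
float range[] = { 0, 256 };
const float* ranges = { range };
Mat hist;
// source image -> histogram
calcHist( &gray, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
// scale bin values to 0..255 so the result is visible
normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );
// histogram -> backproject image
Mat backproj;
calcBackProject( &gray, 1, 0, hist, backproj, &ranges, 1, true );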

 

calcBackProjectPatch works entirely on a block (patch) basis and uses histograms for matching, much like template matching, except that the template is converted into a histogram, and the patch cut out of the source image around each reference point is also converted into a histogram; the two histograms are matched, and the match score becomes the value at that point. The result is a probability map: the larger the value at a location, the more similar that region is to the template.
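
calcBackProjectPatch itself belongs to the legacy C API (cvCalcBackProjectPatch). The following is only a rough C++ sketch of the same idea, assuming a single-channel hue image and using histogram correlation (compareHist) as the patch-vs-template score; patchBackProject and templHue are hypothetical names:

Mat patchBackProject( const Mat& hue, const Mat& templHue, int bins )
{
  int histSize = bins;
  float hue_range[] = { 0, 180 };
  const float* ranges = { hue_range };

  // Histogram of the template patch
  Mat templHist;
  calcHist( &templHue, 1, 0, Mat(), templHist, 1, &histSize, &ranges, true, false );

  // One score per possible patch position (same layout as matchTemplate)
  Mat result = Mat::zeros( hue.rows - templHue.rows + 1, hue.cols - templHue.cols + 1, CV_32FC1 );

  for( int y = 0; y < result.rows; y++ )
  {
    for( int x = 0; x < result.cols; x++ )
    {
      Mat patch = hue( Rect( x, y, templHue.cols, templHue.rows ) );
      Mat patchHist;
      calcHist( &patch, 1, 0, Mat(), patchHist, 1, &histSize, &ranges, true, false );
      // Correlation close to 1 means the patch distribution matches the template
      result.at<float>( y, x ) = (float) compareHist( templHist, patchHist, CV_COMP_CORREL );
    }
  }
  return result;
}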

 

Application example

If you have a histogram of flesh color (say, a Hue-Saturation histogram of skin tones), then you can use it to find flesh-color areas in an image:

1. Let's say you have obtained a skin histogram (Hue-Saturation) based on the image below. The histogram beside it is going to be our model histogram (which we know represents a sample of skin tonality). You applied a mask to capture only the histogram of the skin area (the model skin-tone histogram; a mask is needed to extract only the skin-colored region):

[Figure: model hand image and its masked Hue-Saturation histogram]

2. Let's imagine that you get another hand image (Test Image) like the one below, with its respective histogram:

[Figure: Test Image of another hand and its Hue-Saturation histogram]

3. What we want to do is to use our model histogram (which we know represents a skin tonality) to detect skin areas in our Test Image. Here are the steps:

 

  • Before computing the model histogram, use a mask to set the values outside the region of interest to 0.

    1. In each pixel of our Test Image, collect the data and find the correspondent bin location for that pixel (i.e. the bin its Hue-Saturation pair falls into).
    2. Lookup the model histogram in that bin and read the bin value.
    3. Store this bin value in a new image (BackProjection). Also, you may consider normalizing the model histogram first, so that the output for the Test Image is visible to you. A minimal hue-only sketch of steps 1-3 follows this list.
    4. Applying the steps above, we get the following BackProjection image for our Test Image:

       [Figure: BackProjection image of the Test Image]

    5. In terms of statistics, the values stored in BackProjection represent the probability that a pixel in the Test Image belongs to a skin area, based on the model histogram that we use. For instance, in our Test Image, the brighter areas are more likely to be skin (as they actually are), whereas the darker areas are less likely (notice that these "dark" areas belong to surfaces that have some shadow on them, which in turn affects the detection). Each pixel of the back projection image represents the probability that it belongs to the skin region: the brighter the pixel (the higher its value), the higher the probability.
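
Below is a minimal sketch of steps 1-3 for a single hue channel, assuming the model histogram has already been normalized to the 0..255 range (calcBackProject performs this lookup internally); manualBackProject and modelHist are hypothetical names:

Mat manualBackProject( const Mat& hue, const Mat& modelHist, int bins )
{
  Mat backproj( hue.size(), CV_8UC1 );
  for( int i = 0; i < hue.rows; i++ )
  {
    for( int j = 0; j < hue.cols; j++ )
    {
      // Step 1: find the bin this pixel's hue falls into (hue values lie in 0..179)
      int bin = hue.at<uchar>( i, j ) * bins / 180;
      // Step 2: read the model histogram value at that bin
      float value = modelHist.at<float>( bin );
      // Step 3: store it in the BackProjection image
      backproj.at<uchar>( i, j ) = saturate_cast<uchar>( value );
    }
  }
  return backproj;
}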

 

void mixChannels(const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, const int* fromTo, size_t npairs)

Parameters:

  • src – input array or vector of matrices; all of the matrices must have the same size and the same depth.
  • nsrcs – number of matrices in src.
  • dst – output array or vector of matrices; all the matrices must be allocated; their size and depth must be the same as in src[0].
  • ndsts – number of matrices in dst.
  • fromTo – array of index pairs specifying which channels are copied and where; fromTo[k*2] is a 0-based index of the input channel in src, fromTo[k*2+1] is an index of the output channel in dst; continuous channel numbering is used: the first input image channels are indexed from 0 to src[0].channels()-1, the second input image channels are indexed from src[0].channels() to src[0].channels() + src[1].channels()-1, and so on, and the same scheme is used for the output image channels; as a special case, when fromTo[k*2] is negative, the corresponding output channel is filled with zero.
  • npairs – number of index pairs in fromTo.

 

Mat rgba( 100, 100, CV_8UC4, Scalar(1,2,3,4) );
Mat bgr( rgba.rows, rgba.cols, CV_8UC3 );
Mat alpha( rgba.rows, rgba.cols, CV_8UC1 );

// forming an array of matrices is a quite efficient operation,
// because the matrix data is not copied, only the headers
Mat out[] = { bgr, alpha };
// rgba[0] -> bgr[2], rgba[1] -> bgr[1],
// rgba[2] -> bgr[0], rgba[3] -> alpha[0]
int from_to[] = { 0,2, 1,1, 2,0, 3,3 };
mixChannels( &rgba, 1, out, 2, from_to, 4 );

Channel splitting and swapping done in a single step; a really handy trick.

 

Code (hue back projection demo)

#include "stdafx.h"

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>

using namespace cv;
using namespace std;

/// Global Variables
Mat src; Mat hsv; Mat hue;
int bins = 25;

/// Function Headers
void Hist_and_Backproj(int, void* );

/** @function main */
int main( int argc, char** argv )
{
  /// Read the image
  src = imread( "zanxin.jpg", 1 );
  /// Transform it to HSV
  cvtColor( src, hsv, CV_BGR2HSV );

  /// Use only the Hue value
  hue.create( hsv.size(), hsv.depth() );
  int ch[] = { 0, 0 };	// copy channel 0 of hsv (the Hue channel) to channel 0 of hue
  mixChannels( &hsv, 1, &hue, 1, ch, 1 );

  /// Create Trackbar to enter the number of bins
  const char* window_image = "Source image";
  namedWindow( window_image, CV_WINDOW_AUTOSIZE );
  createTrackbar("* Hue  bins: ", window_image, &bins, 180, Hist_and_Backproj );	// bins —— 0到180
  Hist_and_Backproj(0, 0);

  /// Show the image
  imshow( window_image, src );	// show the initial image in the window

  /// Wait until user exits the program
  waitKey(0);
  return 0;
}

/**
 * @function Hist_and_Backproj
 * @brief Callback to Trackbar
 */
void Hist_and_Backproj(int, void* )
{
  MatND hist;
  int histSize = MAX( bins, 2 );
  float hue_range[] = { 0, 180 };
  const float* ranges = { hue_range };

  /// Get the Histogram and normalize it
  calcHist( &hue, 1, 0, Mat(), hist, 1, &histSize, &ranges, true, false );
  normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );

  /// Get Backprojection
  MatND backproj;
  calcBackProject( &hue, 1, 0, hist, backproj, &ranges, 1, true );

  /// Draw the backproj
  imshow( "BackProj", backproj );

  /// Draw the histogram
  int w = 400; int h = 400;
  int bin_w = cvRound( (double) w / histSize );
  Mat histImg = Mat::zeros( w, h, CV_8UC3 );

  for( int i = 0; i < bins; i ++ )
     { rectangle( histImg, Point( i*bin_w, h ), Point( (i+1)*bin_w, h - cvRound( hist.at<float>(i)*h/255.0 ) ), Scalar( 0, 0, 255 ), -1 ); }

  imshow( "Histogram", histImg );
}

 

Code (Hue-Saturation back projection with a flood-fill mask)

#include "stdafx.h"

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

#include <iostream>

using namespace cv;
using namespace std;

/// Global Variables
Mat src; Mat hsv;
Mat mask;	// seed point and thresholds are picked manually; flood fill produces the mask

int lo = 20; int up = 20;
const char* window_image = "Source image";

/// Function Headers
void Hist_and_Backproj( );
void pickPoint (int event, int x, int y, int, void* );

/**
 * @function main
 */
int main( int, char** argv )
{
  /// Read the image
  src = imread( "zanxin.jpg", 1 );
  /// Transform it to HSV
  cvtColor( src, hsv, COLOR_BGR2HSV );

  /// Show the image
  namedWindow( window_image, WINDOW_AUTOSIZE );
  imshow( window_image, src );

  /// Set Trackbars for floodfill thresholds
  createTrackbar( "Low thresh", window_image, &lo, 255, 0 );
  createTrackbar( "High thresh", window_image, &up, 255, 0 );
  /// Set a Mouse Callback
  setMouseCallback( window_image, pickPoint, 0 );	// pick the seed point with the mouse

  waitKey(0);
  return 0;
}

/**
 * @function pickPoint
 */
void pickPoint (int event, int x, int y, int, void* )
{
  if( event != EVENT_LBUTTONDOWN )
    { return; }

  // Fill and get the mask
  Point seed = Point( x, y );

  int newMaskVal = 255;
  Scalar newVal = Scalar( 120, 120, 120 );

  int connectivity = 8;
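  // Flags layout: low byte = connectivity, next byte = value written into the mask;
  // FLOODFILL_FIXED_RANGE compares pixels against the seed, FLOODFILL_MASK_ONLY fills only mask2 and leaves src untouched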
  int flags = connectivity + (newMaskVal << 8 ) + FLOODFILL_FIXED_RANGE + FLOODFILL_MASK_ONLY;

  Mat mask2 = Mat::zeros( src.rows + 2, src.cols + 2, CV_8UC1 );
  floodFill( src, mask2, seed, newVal, 0, Scalar( lo, lo, lo ), Scalar( up, up, up), flags );
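  // mask2 must be 2 pixels wider and taller than src (floodFill requirement); crop the 1-pixel border so mask matches the image size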
  mask = mask2( Range( 1, mask2.rows - 1 ), Range( 1, mask2.cols - 1 ) );

  imshow( "Mask", mask );

  Hist_and_Backproj( );
}

/**
 * @function Hist_and_Backproj
 */
void Hist_and_Backproj( )
{
  MatND hist;
  int h_bins = 30; int s_bins = 32;
  int histSize[] = { h_bins, s_bins };

  float h_range[] = { 0, 179 };
  float s_range[] = { 0, 255 };
  const float* ranges[] = { h_range, s_range };

  int channels[] = { 0, 1 };

  /// Get the Histogram and normalize it
  calcHist( &hsv, 1, channels, mask, hist, 2, histSize, ranges, true, false );

  normalize( hist, hist, 0, 255, NORM_MINMAX, -1, Mat() );

  /// Get Backprojection
  MatND backproj;
  calcBackProject( &hsv, 1, channels, hist, backproj, ranges, 1, true );

  /// Draw the backproj
  imshow( "BackProj", backproj );

}