Contour-Based Matching Algorithm (robust: can recognize objects among overlapping, stacked parts)

This is a generous contribution from an Indian software engineer!

It is well worth studying as a reference!

If rotation matching were added, it would be much closer to practical use.

Of course, completing full-angle matching raises the difficulty by orders of magnitude.

Introduction

Template matching is the image processing problem of finding the location of an object in a search image using a template image, when the object's pose (X, Y, θ) is unknown. In this article, we implement an algorithm that uses an object's edge information for recognizing the object in the search image.

Background

Template matching is inherently a tough problem due to its speed and reliability issues. The solution should be robust against brightness changes when an object is partially visible or mixed with other objects, and most importantly, the algorithm should be computationally efficient. There are mainly two approaches to solve this problem, gray value based matching (or area based matching) and feature based matching (non area based).

由于模块匹配的速度和可靠性的原因,模块匹配实际是一个棘手的问题。当一个对象和其他对象混合时,此解决方案对于亮度变化匹配时健壮的,更重要的是,该算法应该是高效的。对于模块匹配问题,主要有基于灰度匹配(或地区建立匹配)和基于特征匹配(非面积为基础)。

Gray value based approach: In gray value based matching, the Normalized Cross Correlation (NCC) algorithm has been known for a long time. It is typically computed at every step by subtracting the mean and dividing by the standard deviation. The cross correlation of a template t(x, y) with a sub image f(x, y) is:

$$\frac{1}{n}\sum_{x,y}\frac{\bigl(f(x,y)-\bar{f}\bigr)\bigl(t(x,y)-\bar{t}\bigr)}{\sigma_f\,\sigma_t}$$

where n is the number of pixels in t(x, y) and f(x, y), $\bar{f}$ and $\bar{t}$ are their means, and $\sigma_f$, $\sigma_t$ their standard deviations. [Wiki]

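To make the formula concrete, here is a minimal NCC sketch for a single candidate window, assuming `tmpl` and `sub` are same-sized 8-bit grayscale buffers of n pixels (our own illustration, not part of the article's code):

#include <math.h>

// Normalized cross correlation of one template/window pair; returns a value
// in [-1, 1], where 1 means a perfect (linearly scaled) match.
double ncc(const unsigned char* tmpl, const unsigned char* sub, int n)
{
    double meanT = 0, meanF = 0;
    for( int k = 0; k < n; k++ ) { meanT += tmpl[k]; meanF += sub[k]; }
    meanT /= n; meanF /= n;

    double num = 0, varT = 0, varF = 0;
    for( int k = 0; k < n; k++ )
    {
        double dt = tmpl[k] - meanT, df = sub[k] - meanF;
        num  += dt * df;       // cross term
        varT += dt * dt;       // template variance (unnormalized)
        varF += df * df;       // window variance (unnormalized)
    }
    if( varT == 0 || varF == 0 ) return 0;   // flat patch: correlation undefined
    return num / sqrt(varT * varF);
}

Note that this scores one window only; full matching evaluates it at every position of the search image, which is what makes plain NCC expensive.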

Though this method is robust against linear illumination changes, the algorithm fails when the object is partially visible or mixed with other objects. Moreover, it is computationally expensive, since it must compute the correlation between all the pixels of the template image and the search image.

Feature based approach: Several methods of feature based template matching are used in the image processing domain, such as edge based object recognition, where the object's edges are the features used for matching, or the Generalized Hough Transform, where an object's geometric features are used for matching.

In this article, we implement an algorithm that uses an object’s edge information for recognizing the object in a search image. This implementation uses the Open-Source Computer Vision library as a platform.


Compiling the example code

We are using OpenCV 2.0 and Visual Studio 2008 to develop this code. To compile the example code, we need to install OpenCV.

OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision. It can be downloaded free from here. Download OpenCV and install it on your system; installation information can be read from here.

We also need to configure our Visual Studio environment. This information can be read from here.
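As a quick sketch of what that setup amounts to (the library names below assume a default OpenCV 2.0 install; adjust them to your own paths and version):

// Typical includes for the OpenCV 1.x C API used throughout this article.
#include <cv.h>
#include <highgui.h>

// In Visual Studio, the project is then linked against the OpenCV 2.0 import
// libraries, e.g. (names assume a default install):
//   cxcore200.lib  cv200.lib  highgui200.lib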

The algorithm

Here, we are explaining an edge based template matching technique. An edge can be defined as the set of points in a digital image at which the image brightness changes sharply or has discontinuities. Technically, edge detection is a discrete differentiation operation that computes an approximation of the gradient of the image intensity function.

There are many methods for edge detection, but most of them can be grouped into two categories: search-based and zero-crossing based. The search-based methods detect edges by first computing a measure of edge strength, usually a first-order derivative expression such as the gradient magnitude, and then searching for local directional maxima of the gradient magnitude using a computed estimate of the local orientation of the edge, usually the gradient direction. (Zero-crossing based methods, by contrast, look for zero crossings in a second-order derivative expression of the image, usually the zero crossings of the Laplacian or of a non-linear differential expression.) Here, we use a search-based method known as the Sobel operator. The operator calculates the gradient of the image intensity at each point, giving the direction of the largest possible increase from light to dark and the rate of change in that direction.
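For reference, the standard 3×3 Sobel kernels (as used by cvSobel with an aperture of 3, up to sign convention) are:

$$G_x = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I$$

where $*$ denotes 2D convolution with the image $I$.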

We use these gradients, i.e., the derivatives in the X and Y directions, for matching.

This algorithm involves two steps. First, we create an edge based model from the template image; then we use this model to search for the object in the search image.
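To make the code below easier to follow, here is a hedged summary of what the template model amounts to. The field names mirror the arrays used in the article's code (cordinates, edgeDerivativeX/Y, edgeMagnitude), but the struct itself is our own paraphrase, not the original class layout:

// Illustrative summary of the template model built in the next section.
struct EdgeModel
{
    CvPoint* cordinates;       // edge point coordinates, relative to the center of gravity
    double*  edgeDerivativeX;  // X gradient at each edge point
    double*  edgeDerivativeY;  // Y gradient at each edge point
    double*  edgeMagnitude;    // 1 / gradient magnitude, precomputed for normalization
    int      noOfCordinates;   // number of edge points n
    CvPoint  centerOfGravity;  // centroid of the edge points
};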

Creating an edge based template model

We first create a data set or template model from the edges of the template image that will be used for finding the pose of that object in the search image.


Here we are using a variation of Canny's edge detection method to find the edges. (You can read more on Canny's edge detection here.) For edge extraction, Canny uses the following steps:

Step 1: Find the intensity gradient of the image

Use the Sobel filter on the template image, which returns the gradients in the X (Gx) and Y (Gy) directions. From these gradients, we compute the edge magnitude and direction with:

$$\text{Magnitude} = \sqrt{G_x^2 + G_y^2}, \qquad \text{Direction} = \tan^{-1}\!\left(\frac{G_y}{G_x}\right)$$

We are using an OpenCV function to find these values.

// Step 1: find the intensity gradient of the image
cvSobel( src, gx, 1, 0, 3 );   // gradient in X direction
cvSobel( src, gy, 0, 1, 3 );   // gradient in Y direction

for( i = 1; i < Ssize.height-1; i++ )
{
    for( j = 1; j < Ssize.width-1; j++ )
    {
        _sdx = (short*)(gx->data.ptr + gx->step*i);
        _sdy = (short*)(gy->data.ptr + gy->step*i);
        fdx = _sdx[j]; fdy = _sdy[j];               // read x, y derivatives

        // Magnitude = Sqrt(gx^2 + gy^2)
        MagG = sqrt((float)(fdx*fdx) + (float)(fdy*fdy));
        // Direction = invtan(Gy / Gx)
        direction = cvFastArctan((float)fdy, (float)fdx);
        magMat[i][j] = MagG;

        if( MagG > MaxGradient )
            MaxGradient = MagG;                     // maximum gradient value, for normalizing

        // quantize the direction to the closest of {0, 45, 90, 135} degrees
        if( (direction > 0 && direction < 22.5) || (direction > 157.5 && direction < 202.5) || (direction > 337.5 && direction < 360) )
            direction = 0;
        else if( (direction > 22.5 && direction < 67.5) || (direction > 202.5 && direction < 247.5) )
            direction = 45;
        else if( (direction > 67.5 && direction < 112.5) || (direction > 247.5 && direction < 292.5) )
            direction = 90;
        else if( (direction > 112.5 && direction < 157.5) || (direction > 292.5 && direction < 337.5) )
            direction = 135;
        else
            direction = 0;

        orients[count] = (int)direction;
        count++;
    }
}

Once the edge direction is found, the next step is to relate it to a direction that can be traced in the image. There are four possible directions describing the surrounding pixels: 0 degrees, 45 degrees, 90 degrees, and 135 degrees. Every edge direction is rounded to the nearest of these four angles.

Step 2: Apply non-maximum suppression

After finding the edge direction, we apply non-maximum suppression. Non-maximum suppression traces the left and right pixels along the edge direction and suppresses the current pixel's magnitude if it is less than either of them. This thins the edges.

// Step 2: non-maximum suppression
for( i = 1; i < Ssize.height-1; i++ )
{
    for( j = 1; j < Ssize.width-1; j++ )
    {
        switch( orients[count] )
        {
        case 0:
            leftPixel  = magMat[i][j-1];
            rightPixel = magMat[i][j+1];
            break;
        case 45:
            leftPixel  = magMat[i-1][j+1];
            rightPixel = magMat[i+1][j-1];
            break;
        case 90:
            leftPixel  = magMat[i-1][j];
            rightPixel = magMat[i+1][j];
            break;
        case 135:
            leftPixel  = magMat[i-1][j-1];
            rightPixel = magMat[i+1][j+1];
            break;
        }

        // compare the current pixel's value with its neighbors along the edge direction
        if( (magMat[i][j] < leftPixel) || (magMat[i][j] < rightPixel) )
            (nmsEdges->data.ptr + nmsEdges->step*i)[j] = 0;
        else
            (nmsEdges->data.ptr + nmsEdges->step*i)[j] = (uchar)(magMat[i][j] / MaxGradient * 255);

        count++;
    }
}

Step 3: Apply the hysteresis threshold

Thresholding with hysteresis requires two thresholds: high and low. We apply the high threshold to mark out those edges we can be fairly sure are genuine. Starting from these, and using the directional information derived earlier, other edges can be traced through the image. While tracing an edge, we apply the lower threshold, allowing us to trace faint sections of edges as long as we have found a starting point.

// Step 3: hysteresis threshold
int RSum = 0, CSum = 0;
int curX, curY;
int flag = 1;

for( i = 1; i < Ssize.height-1; i++ )
{
    for( j = 1; j < Ssize.width-1; j++ )   // bound fixed to width-1, since [j+1] is read below
    {
        _sdx = (short*)(gx->data.ptr + gx->step*i);
        _sdy = (short*)(gy->data.ptr + gy->step*i);
        fdx = _sdx[j]; fdy = _sdy[j];

        MagG = sqrt((float)(fdx*fdx) + (float)(fdy*fdy));   // Magnitude = Sqrt(gx^2 + gy^2)
        DirG = cvFastArctan((float)fdy, (float)fdx);        // Direction = invtan(Gy / Gx)

        flag = 1;
        if( ((double)((nmsEdges->data.ptr + nmsEdges->step*i))[j]) < maxContrast )
        {
            if( ((double)((nmsEdges->data.ptr + nmsEdges->step*i))[j]) < minContrast )
            {
                // below the low threshold: remove from edge
                (nmsEdges->data.ptr + nmsEdges->step*i)[j] = 0;
                flag = 0;
            }
            else
            {
                // between the thresholds: keep only if at least one of the 8
                // neighboring pixels exceeds the high threshold (maxContrast)
                if( (((double)((nmsEdges->data.ptr + nmsEdges->step*(i-1)))[j-1]) < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*(i-1)))[j])   < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*(i-1)))[j+1]) < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*i))[j-1])     < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*i))[j+1])     < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*(i+1)))[j-1]) < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*(i+1)))[j])   < maxContrast) &&
                    (((double)((nmsEdges->data.ptr + nmsEdges->step*(i+1)))[j+1]) < maxContrast) )
                {
                    (nmsEdges->data.ptr + nmsEdges->step*i)[j] = 0;
                    flag = 0;
                }
            }
        }
        // ... the i/j loop continues in Step 4 below, where surviving edge points are saved

Step 4: Save the data set

After extracting the edges, we save the X and Y derivatives of the selected edge points along with their coordinates as the template model. The coordinates are then rearranged so that they are expressed relative to the center of gravity of the edge points.

// Step 4: save selected edge information (still inside the i/j loop from Step 3)
        curX = i; curY = j;
        if( flag != 0 )
        {
            if( fdx != 0 || fdy != 0 )
            {
                RSum = RSum + curX; CSum = CSum + curY;   // row and column sums for the center of gravity

                cordinates[noOfCordinates].x = curX;
                cordinates[noOfCordinates].y = curY;
                edgeDerivativeX[noOfCordinates] = fdx;
                edgeDerivativeY[noOfCordinates] = fdy;

                // handle divide by zero
                if( MagG != 0 )
                    edgeMagnitude[noOfCordinates] = 1/MagG;   // store 1 / gradient magnitude
                else
                    edgeMagnitude[noOfCordinates] = 0;

                noOfCordinates++;
            }
        }
    }
}   // end of the i/j loop begun in Step 3

centerOfGravity.x = RSum / noOfCordinates;   // center of gravity
centerOfGravity.y = CSum / noOfCordinates;   // center of gravity

// change the coordinates to be relative to the center of gravity
for( int m = 0; m < noOfCordinates; m++ )
{
    int temp;

    temp = cordinates[m].x;
    cordinates[m].x = temp - centerOfGravity.x;
    temp = cordinates[m].y;
    cordinates[m].y = temp - centerOfGravity.y;
}

Find the edge based template model

The next task in the algorithm is to find the object in the search image using the template model. The model created from the template image contains a set of edge points $p_i = (x_i, y_i)^T$ and their gradients in the X and Y directions $d_i = (t_i, u_i)^T$, where i = 1…n and n is the number of elements in the template (T) data set.

We can also compute the gradients $e_{u,v} = (v_{u,v}, w_{u,v})^T$ in the search image (S), where u = 1…number of rows and v = 1…number of columns in the search image.

In the matching process, the template model is compared to the search image at all locations using a similarity measure. The idea behind the similarity measure is to sum the normalized dot products of the gradient vectors of the template image and the search image over all points of the model data set. This yields a score at each point of the search image, which can be formulated as:

$$s_{u,v} = \frac{1}{n}\sum_{i=1}^{n}\frac{t_i\,v_{u+x_i,\,v+y_i} + u_i\,w_{u+x_i,\,v+y_i}}{\sqrt{t_i^2 + u_i^2}\,\sqrt{v_{u+x_i,\,v+y_i}^2 + w_{u+x_i,\,v+y_i}^2}}$$

In case there is a perfect match between the template model and the search image, this function returns a score of 1. The score corresponds to the portion of the object visible in the search image; if the object is not present in the search image at all, the score is 0.

// find the X and Y derivatives of the search image
cvSobel( src, Sdx, 1, 0, 3 );   // find X derivatives
cvSobel( src, Sdy, 0, 1, 3 );   // find Y derivatives

// stopping criteria for the search
double normMinScore   = minScore / noOfCordinates;   // precomputed minimum score per point
double normGreediness = ((1 - greediness * minScore) / (1 - greediness)) / noOfCordinates;   // precomputed greediness term

// precompute 1/magnitude for every pixel of the search image
for( i = 0; i < Ssize.height; i++ )
{
    _Sdx = (short*)(Sdx->data.ptr + Sdx->step*(i));
    _Sdy = (short*)(Sdy->data.ptr + Sdy->step*(i));
    for( j = 0; j < Ssize.width; j++ )
    {
        iSx = _Sdx[j];   // X derivative of the source image
        iSy = _Sdy[j];   // Y derivative of the source image
        gradMag = sqrt((float)((iSx*iSx) + (iSy*iSy)));   // Magnitude = Sqrt(dx^2 + dy^2)
        if( gradMag != 0 )                 // handle divide by zero
            matGradMag[i][j] = 1/gradMag;  // 1/Sqrt(dx^2 + dy^2)
        else
            matGradMag[i][j] = 0;
    }
}

// slide the model over every position of the search image
for( i = 0; i < Ssize.height; i++ )
{
    for( j = 0; j < Ssize.width; j++ )
    {
        partialSum = 0;   // initialize the partial similarity measure
        for( m = 0; m < noOfCordinates; m++ )
        {
            curX = i + cordinates[m].x;    // template X coordinate
            curY = j + cordinates[m].y;    // template Y coordinate
            iTx  = edgeDerivativeX[m];     // template X derivative
            iTy  = edgeDerivativeY[m];     // template Y derivative

            if( curX < 0 || curY < 0 || curX > Ssize.height-1 || curY > Ssize.width-1 )
                continue;

            _Sdx = (short*)(Sdx->data.ptr + Sdx->step*(curX));
            _Sdy = (short*)(Sdy->data.ptr + Sdy->step*(curX));
            iSx = _Sdx[curY];   // corresponding X derivative of the source image
            iSy = _Sdy[curY];   // corresponding Y derivative of the source image

            if( (iSx != 0 || iSy != 0) && (iTx != 0 || iTy != 0) )
            {
                // partialSum += normalized dot product of the template and search
                // gradients: ((Sx*Tx) + (Sy*Ty)) / (|T| * |S|)
                partialSum = partialSum + ((iSx*iTx) + (iSy*iTy)) * (edgeMagnitude[m] * matGradMag[curX][curY]);
            }

            sumOfCoords  = m + 1;
            partialScore = partialSum / sumOfCoords;

            // check the termination criterion: if the partial score is less than
            // what is needed to still reach the required score at this position,
            // stop searching at this coordinate
            if( partialScore < (MIN((minScore - 1) + normGreediness*sumOfCoords, normMinScore*sumOfCoords)) )
                break;
        }

        if( partialScore > resultScore )
        {
            resultScore = partialScore;   // match score
            resultPoint->x = i;           // result coordinate X
            resultPoint->y = j;           // result coordinate Y
        }
    }
}
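Putting the two stages together, end-to-end usage might look roughly like this. It is a hedged sketch: buildModel and findModel are hypothetical wrappers around the code above (the full source wraps the same steps in a class), and the thresholds are illustrative:

#include <cv.h>
#include <highgui.h>
#include <stdio.h>

// Hypothetical wrappers around the two stages shown above.
EdgeModel buildModel(IplImage* tmpl);                                    // Steps 1-4
double    findModel(const EdgeModel& m, IplImage* search, CvPoint* pos); // similarity scan

int main()
{
    IplImage* templateImg = cvLoadImage("template.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    IplImage* searchImg   = cvLoadImage("search.jpg",   CV_LOAD_IMAGE_GRAYSCALE);

    EdgeModel model = buildModel(templateImg);          // extract and store the edge model

    CvPoint where;
    double score = findModel(model, searchImg, &where); // scan the search image

    if( score >= 0.7 )                                  // minScore = 0.7, illustrative
        printf("object found at (%d, %d) with score %.2f\n", where.x, where.y, score);
    return 0;
}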

In practical situations, we need to speed up the search procedure, which can be achieved with various methods. The first uses the averaging in the similarity measure: when computing the measure, we need not evaluate all points of the template model if we set a minimum score (s_min) that a match must reach. To check the partial score s_{u,v} at a particular point (u, v), we compute the partial sum s_m over the first m model points:

$$s_m = \frac{1}{n}\sum_{i=1}^{m}\frac{t_i\,v_{u+x_i,\,v+y_i} + u_i\,w_{u+x_i,\,v+y_i}}{\sqrt{t_i^2 + u_i^2}\,\sqrt{v_{u+x_i,\,v+y_i}^2 + w_{u+x_i,\,v+y_i}^2}}$$

It is evident that each of the remaining n − m terms of the sum is at most 1, so the final score can be at most $s_m + (n-m)/n$. Hence we can safely stop the evaluation as soon as

$$s_m < s_{min} - 1 + \frac{m}{n}.$$

Another criterion is that the partial score at any point should already exceed the minimum score, i.e., $s_m > s_{min}\cdot m/n$. When this condition is used, matching is extremely fast; the problem is that if the missing part of the object is checked first, the partial sum will be low, and that instance of the object will not be considered a match. We can therefore combine the two criteria: the first part of the template model is checked with the safe stopping criterion and the remainder with the hard one. The user specifies a greediness parameter (g) that sets the fraction of the template model examined with the hard criterion: if g = 1, all points of the template model are checked with the hard criterion, and if g = 0, all points are checked with the safe criterion only.

With this, the evaluation of the partial score can be stopped as soon as

$$s_m < \min\!\left(s_{min} - 1 + \frac{1 - g\,s_{min}}{1-g}\cdot\frac{m}{n},\; s_{min}\cdot\frac{m}{n}\right)$$

which is exactly what the termination check in the code computes:

// stopping criteria to search for the model
double normMinScore   = minScore / noOfCordinates;   // precomputed minimum score per point
double normGreediness = ((1 - greediness * minScore) / (1 - greediness)) / noOfCordinates;   // precomputed greediness term
...
sumOfCoords  = m + 1;
partialScore = partialSum / sumOfCoords;

// check the termination criterion: if the partial score is less than what is
// needed to still reach the required score at this position, stop searching
// at this coordinate
if( partialScore < (MIN((minScore - 1) + normGreediness*sumOfCoords, normMinScore*sumOfCoords)) )
    break;
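As a concrete illustration (the numbers are our own, not from the article): take s_min = 0.8, g = 0.9, and n = 100 model points. Then normMinScore = 0.8/100 = 0.008 and normGreediness = ((1 − 0.72)/0.1)/100 = 0.028. After m = 5 points the abort threshold is min(−0.2 + 0.14, 0.04) = −0.06, so the search is never aborted this early (the safe branch dominates); after m = 50 points it is min(1.2, 0.4) = 0.4, so a candidate must already average a score of 0.4 to continue (the hard branch dominates). The criterion thus tightens as more of the model has been examined.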

This similarity measure has several advantages. It is invariant to non-linear illumination changes, because all gradient vectors are normalized; and since the edge filtering involves no segmentation, it shows true invariance against arbitrary changes in illumination. More importantly, the measure remains robust when the object is partially visible or mixed with other objects.

Enhancements

There are various enhancements possible for this algorithm. For speeding up the search process further, a pyramidal approach can be used. In this case, the search is started at low resolution with a small image size. This corresponds to the top of the pyramid. If the search is successful at this stage, the search is continued at the next level of the pyramid, which represents a higher resolution image. In this manner, the search is continued, and consequently, the result is refined until the original image size, i.e., the bottom of the pyramid is reached.
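A minimal sketch of such a coarse-to-fine loop, assuming hypothetical wrappers findModel/refineAround around the stages shown earlier and one EdgeModel per pyramid level built from a correspondingly downsampled template (our own illustration, not the article's code):

// Hypothetical wrappers around the code shown earlier in the article:
double findModel(const EdgeModel& m, IplImage* search, CvPoint* pos);                 // full scan
double refineAround(const EdgeModel& m, IplImage* search, CvPoint* pos, int radius);  // local scan

void pyramidSearch(EdgeModel model[], IplImage* searchImg, int levels, CvPoint* pos)
{
    IplImage** pyr = new IplImage*[levels];
    pyr[0] = searchImg;
    for( int l = 1; l < levels; l++ )
    {
        pyr[l] = cvCreateImage(cvSize((pyr[l-1]->width+1)/2, (pyr[l-1]->height+1)/2),
                               IPL_DEPTH_8U, 1);
        cvPyrDown(pyr[l-1], pyr[l]);                  // halve the resolution at each level
    }

    findModel(model[levels-1], pyr[levels-1], pos);   // full search at the coarsest level only
    for( int l = levels - 2; l >= 0; l-- )
    {
        pos->x *= 2; pos->y *= 2;                     // project the result down one level
        refineAround(model[l], pyr[l], pos, 4);       // then search only a small neighborhood
    }

    for( int l = 1; l < levels; l++ ) cvReleaseImage(&pyr[l]);
    delete[] pyr;
}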

Another enhancement is possible by extending the algorithm to rotation and scaling. This can be done by creating template models for a range of rotations and scales and performing the search with all of these template models.
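A hedged sketch of the rotation extension: rotate the template over the angle range, build one edge model per step, and keep the best-scoring match (again our own illustration; buildModel/findModel are the hypothetical wrappers from the previous sketches):

// One model per angle step; the scale factor passed to cv2DRotationMatrix
// could be varied in an extra loop to cover scaling as well.
double findBestRotation(IplImage* templateImg, IplImage* searchImg,
                        CvPoint* bestPos, double* bestAngle)
{
    IplImage* rotated   = cvCloneImage(templateImg);
    CvMat*    rot       = cvCreateMat(2, 3, CV_32FC1);
    double    bestScore = 0;

    for( double angle = 0; angle < 360; angle += 5 )  // 5-degree step trades speed for accuracy
    {
        CvPoint2D32f center = cvPoint2D32f(templateImg->width/2.0f, templateImg->height/2.0f);
        cv2DRotationMatrix(center, angle, 1.0, rot);
        cvWarpAffine(templateImg, rotated, rot);

        EdgeModel m = buildModel(rotated);            // model for this orientation
        CvPoint pos;
        double score = findModel(m, searchImg, &pos);
        if( score > bestScore ) { bestScore = score; *bestPos = pos; *bestAngle = angle; }
    }

    cvReleaseMat(&rot);
    cvReleaseImage(&rotated);
    return bestScore;
}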

References

  1. Machine Vision Algorithms and Applications [Carsten Steger, Markus Ulrich, Christian Wiedemann]
  2. Digital Image Processing [Rafael C. Gonzalez, Richard Eugene Woods]
  3. http://dasl.mem.drexel.edu/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html