AnswerOpenCV Weekly Picks (0615-0622)

1. How to make auto-adjustments (brightness and contrast) for an image: Android OpenCV image correction

I'm using OpenCV for Android.
I would like to know how to make image corrections (auto adjustment of brightness/contrast) for an image (bitmap) in Android via OpenCV, or whether that can be done with Android's native ColorMatrixFilter?

I tried to google, but didn't find good tutorials/examples.
So how can I achieve my goal? Any ideas?
Thanks!

This is an algorithm question: we need an algorithm that adjusts brightness and contrast automatically.

Answer:

Brightness and contrast adjustment is a linear operation with parameters alpha and beta:

O(x,y) = alpha * I(x,y) + beta

In OpenCV you can do this with convertTo.

The question here is how to calculate alpha and beta automatically.

Looking at the histogram, alpha acts as a color-range amplifier and beta as a range shift.

Automatic brightness and contrast optimization calculates alpha and beta so that the output range is 0..255:

input range = max(I) - min(I)

wanted output range = 255

alpha = output range / input range = 255 / (max(I) - min(I))

min(O) = alpha * min(I) + beta = 0
beta = -min(I) * alpha
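Put together, the whole computation is just a few lines. Here is a minimal sketch in plain Python (auto_alpha_beta is a hypothetical helper name, operating on a flat list of gray values):

```python
def auto_alpha_beta(pixels):
    """Linear-stretch parameters so that O = alpha * I + beta
    maps [min(I), max(I)] onto [0, 255]."""
    lo, hi = min(pixels), max(pixels)
    alpha = 255.0 / (hi - lo)  # alpha amplifies the input range
    beta = -lo * alpha         # beta shifts it so min(I) lands on 0
    return alpha, beta

alpha, beta = auto_alpha_beta([30, 64, 100, 158, 210])
# 30 * alpha + beta == 0 and 210 * alpha + beta == 255
```

In OpenCV this pair would then be fed to src.convertTo(dst, -1, alpha, beta).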

Histogram Wings Cut (clipping)

To maximize the result it is useful to cut out colors with few pixels.

This is done by cutting the left and right wings of the histogram where the color frequency is less than a threshold (typically 1%). Calculating the cumulative distribution from the histogram, you can easily find where to cut.

Maybe a sample chart helps to understand:

By the way, BrightnessAndContrastAuto could be named normalizeHist because it works on BGR and gray images, stretching the histogram to the full range without touching the bin balance. If the input image already has the range 0..255, BrightnessAndContrastAuto will do nothing.

Histogram equalization and CLAHE work only on gray images, and they change the gray-level balance. Look at the images below:

In other words, this implements a color enhancement algorithm.

void BrightnessAndContrastAuto(const cv::Mat &src, cv::Mat &dst, float clipHistPercent)
{
    CV_Assert(clipHistPercent >= 0);
    CV_Assert((src.type() == CV_8UC1) || (src.type() == CV_8UC3) || (src.type() == CV_8UC4));

    int histSize = 256;
    float alpha, beta;
    double minGray = 0, maxGray = 0;

    // to calculate the grayscale histogram
    cv::Mat gray;
    if (src.type() == CV_8UC1) gray = src;
    else if (src.type() == CV_8UC3) cvtColor(src, gray, CV_BGR2GRAY);
    else if (src.type() == CV_8UC4) cvtColor(src, gray, CV_BGRA2GRAY);

    if (clipHistPercent == 0)
    {
        // keep the full available range
        cv::minMaxLoc(gray, &minGray, &maxGray);
    }
    else
    {
        cv::Mat hist; // the grayscale histogram
        float range[] = { 0, 256 };
        const float* histRange = { range };
        bool uniform = true;
        bool accumulate = false;
        calcHist(&gray, 1, 0, cv::Mat(), hist, 1, &histSize, &histRange, uniform, accumulate);

        // calculate the cumulative distribution from the histogram
        std::vector<float> accumulator(histSize);
        accumulator[0] = hist.at<float>(0);
        for (int i = 1; i < histSize; i++)
        {
            accumulator[i] = accumulator[i - 1] + hist.at<float>(i);
        }

        // locate the points that cut at the required value
        float max = accumulator.back();
        clipHistPercent *= (max / 100.0); // make the percentage an absolute count
        clipHistPercent /= 2.0;           // split between the left and right wings

        // locate the left cut
        minGray = 0;
        while (accumulator[(int)minGray] < clipHistPercent)
            minGray++;

        // locate the right cut
        maxGray = histSize - 1;
        while (accumulator[(int)maxGray] >= (max - clipHistPercent))
            maxGray--;
    }

    // current range
    float inputRange = maxGray - minGray;
    alpha = (histSize - 1) / inputRange; // alpha expands the current range to the histSize range
    beta = -minGray * alpha;             // beta shifts the current range so that minGray goes to 0

    // apply the brightness and contrast normalization
    // convertTo operates with saturate_cast
    src.convertTo(dst, -1, alpha, beta);

    // restore the alpha channel from the source
    if (dst.type() == CV_8UC4)
    {
        int from_to[] = { 3, 3 };
        cv::mixChannels(&src, 1, &dst, 1, from_to, 1);
    }
}



The results are quite good.

2. Template matching behavior - Color

I am evaluating the template matching algorithm to differentiate between similar and dissimilar objects. What I found is confusing: I had the impression that template matching is a method which compares raw pixel intensity values, so when the pixel values vary I expected template matching to give a lower match percentage.

I have a template and a search image having the same shape and size, differing only in color (images attached). When I did template matching, surprisingly I got a match percentage greater than 90%.

import cv2

img = cv2.imread('./images/searchtest.png', cv2.IMREAD_COLOR)
template = cv2.imread('./images/template.png', cv2.IMREAD_COLOR)

res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)

min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
print(max_val)

Template Image :

Search Image :

Can someone give me some insight into why this is happening? I have even tried this in HSV color space, on the full BGR image, the full HSV image, the individual B, G, R channels and the individual H, S, V channels. In all cases I get a high match percentage.

Any help would be really appreciated.

The core of this question is color template matching, which is a very interesting problem. The answer points directly to source code:

https://github.com/LaurentBerger/ColorMatchTemplate

My own view is that color template matching is of limited value; when the goal is localization, grayscale matching usually works better.
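To see why TM_CCORR_NORMED stays high here: the score is essentially a cosine similarity between the raw pixel vectors, and when both images are dominated by the same bright background, the differing object pixels barely move it. A minimal sketch in plain Python (ccorr_normed is a hypothetical helper operating on flattened pixel lists):

```python
def ccorr_normed(a, b):
    """Normalized cross-correlation of two equal-length pixel vectors,
    i.e. the TM_CCORR_NORMED score at a single alignment."""
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return num / den

# two patches that are 95% white background and differ only in the
# intensity of the remaining 5% "object" pixels
patch_a = [255] * 95 + [80] * 5
patch_b = [255] * 95 + [160] * 5
score = ccorr_normed(patch_a, patch_b)  # close to 1.0 despite the difference
```

A difference-based measure such as TM_SQDIFF, or matching restricted to a mask of the object pixels, tends to discriminate such cases better.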

3. Likely position of feature matching

I am using the Brute-Force Matcher with the L2 norm, referring to this link: https://docs.opencv.org/2.4/doc/tutor...

After the process, I get the image below as output.

What is the likely position of the object suggested by the feature matching?

I don't understand how to choose the likely position using this image :(

The asker knows the first half but not the second: he can find the matches, but has no idea what to do next. For this kind of problem, the best approach is to work through the tutorial once.

The moderator's answer is very clear:

To retrieve the position of your matched object, you need some further steps:

  • filter the matches for outliers
  • extract the 2d point locations from the keypoints
  • apply findHomography() on the matched 2d points to get a transformation matrix between your query and the scene image
  • apply perspectiveTransform() on the bounding box of the query object, to see where it is located in the scene image

I also gave a concrete answer:

// uses SURF (OpenCV 2.4, nonfree module)
#include "stdafx.h"
#include <iostream>
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/nonfree/features2d.hpp"
#include "opencv2/calib3d/calib3d.hpp"

using namespace std;
using namespace cv;

int main( int argc, char** argv )
{
    Mat img_1;
    Mat img_2;
    Mat img_raw_1 = imread("c1.bmp");
    Mat img_raw_2 = imread("c3.bmp");
    cvtColor(img_raw_1, img_1, CV_BGR2GRAY);
    cvtColor(img_raw_2, img_2, CV_BGR2GRAY);

    //-- Step 1: detect keypoints with SURF
    int minHessian = 400;
    SurfFeatureDetector detector( minHessian );
    std::vector<KeyPoint> keypoints_1, keypoints_2;
    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: compute the SURF descriptors
    SurfDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: match (BFMatcher would do brute-force matching instead)
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    // find the minimum and maximum distances between matches
    double max_dist = 0; double min_dist = 100;
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    // keep only matches closer than 3 * min_dist
    std::vector< DMatch > good_matches;
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= 3 * min_dist )
        {
            good_matches.push_back( matches[i] );
        }
    }

    //-- Localize the object from img_1 in img_2
    std::vector<Point2f> obj;
    std::vector<Point2f> scene;
    for( int i = 0; i < (int)good_matches.size(); i++ )
    {
        // stitching convention: the left image is the scene, the right one the object
        scene.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
        obj.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
    }

    // estimate the homography with RANSAC
    Mat H = findHomography( obj, scene, CV_RANSAC );

    // warp to align the images
    Mat result;
    warpPerspective(img_raw_2, result, H, Size(2 * img_2.cols, img_2.rows));
    Mat half(result, cv::Rect(0, 0, img_2.cols, img_2.rows));
    img_raw_1.copyTo(half);
    imshow("result", result);
    waitKey(0);

    return 0;
}

4. Run OpenCV in MFC

I have reproduced this sample in an MFC app.

The cv::Mat is a CDocument member variable:

// Attributes
public:
std::vector<CBlob> m_blobs;
cv::Mat m_Mat;

and I draw rectangles over the image with a method (called at certain time intervals, according to the FPS rate):

DrawBlobInfoOnImage(m_blobs, m_Mat);

Here is the code of this method:

void CMyDoc::DrawBlobInfoOnImage(std::vector<CBlob>& blobs, cv::Mat& Mat)
{
for (unsigned int i = 0;i < blobs.size();++i)
{
    if (blobs[i].m_bStillBeingTracked)
    {
        cv::rectangle(Mat, blobs[i].m_rectCurrentBounding, SCALAR_RED, 2);
        double dFontScale = blobs[i].m_dCurrentDiagonalSize / 60.0;
        int nFontThickness = (int)roundf(dFontScale * 1.0);
        cv::putText(Mat, (LPCTSTR)IntToString(i), blobs[i].m_vecPointCenterPositions.back(), CV_FONT_HERSHEY_SIMPLEX, dFontScale, SCALAR_GREEN, nFontThickness);
    }
}
}

but the result of this method is something like this:

My question is: how can I draw only the latest blob results over my image?

I have tried to clear m_Mat, and to draw only blobs.size() - 1 blobs over the image; none of this worked...

A very interesting question: in essence, he can call OpenCV from MFC, but does not know how to display the boxes over the video correctly (that is, without trailing).

This, too, comes from not thinking the problem through. There are essentially two solutions.

First, modify the video stream directly: add the extra drawing (the boxes) inside the original read-then-display loop. With this approach, whether or not you use MFC makes little difference.

For example, <Running a particle filter in MFC, with open source code>

https://www.cnblogs.com/jsxyhelu/p/6336429.html
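The trailing boxes come from drawing into the same m_Mat frame after frame. A minimal sketch of the first fix, in plain Python with nested lists standing in for cv::Mat (draw_overlays is a hypothetical helper):

```python
import copy

def draw_overlays(frame, blobs):
    """Draw blob markers onto a fresh copy of the frame, so previous
    drawings never accumulate in the source image."""
    canvas = copy.deepcopy(frame)  # the key step: never draw on the source
    for i, (row, col) in enumerate(blobs):
        canvas[row][col] = i + 1   # stand-in for cv::rectangle / cv::putText
    return canvas

frame = [[0] * 4 for _ in range(3)]            # pristine camera frame
shown1 = draw_overlays(frame, [(0, 0), (1, 1)])
shown2 = draw_overlays(frame, [(2, 3)])        # the older boxes are gone
```

In the MFC code this means keeping the decoded frame untouched and calling DrawBlobInfoOnImage on a m_Mat.clone() each tick.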

Second, use MFC's own mechanisms. MFC is all dialogs, isn't it? So create a window dedicated to displaying the rectangles.

For example, <Implementing GreenOpenPaint (5): the rectangle box> https://www.cnblogs.com/jsxyhelu/p/6354341.html

5. How to find the thickness of the red sealant in the image?

Hi,

I want to find the thickness of the red colored sealant in the image.

First I'm extracting the sealant portion using findContours with a minimum and maximum contour area, and then checking the area, length and thickness of the sealant. I can find the area as well as the length, but I am not able to find the thickness of the sealant portion.

Please help me, guys. Below is an example image.

A hint:

do a distance transform and a skeleton on the binary image.

This is an algorithm question; a concrete implementation will be given next week. In the meantime, readers can study it themselves.
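The hint can be sketched end to end in plain Python: a 4-connected BFS distance transform on a toy binary mask, where the distance peaks along the skeleton (the ridge), and the ridge value gives the half-thickness. This is only a model of the idea; on the real image you would use cv::distanceTransform plus a thinning pass.

```python
from collections import deque

def distance_transform(mask):
    """BFS distance (in pixels, 4-connected) from each foreground
    pixel to the nearest background pixel."""
    h, w = len(mask), len(mask[0])
    dist = [[None] * w for _ in range(h)]
    q = deque()
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:          # background seeds the search
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist

# a horizontal sealant "bead" three pixels thick, surrounded by background
mask = [[0] * 7,
        [0, 1, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 1, 0],
        [0] * 7]
ridge = max(max(row) for row in distance_transform(mask))
thickness = 2 * ridge - 1  # the peak distance sits on the skeleton
```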


Original article: https://www.cnblogs.com/jsxyhelu/p/9215646.html
