OPENCV(7) —— HighGUI

The HighGUI module provides the functions createTrackbar, getTrackbarPos, setTrackbarPos, imshow, namedWindow, destroyWindow, destroyAllWindows, moveWindow, resizeWindow, setMouseCallback and waitKey. Together they cover basic image display, trackbar control, and mouse/keyboard event handling.

 

Image and video I/O functions: the image-related functions are imdecode, imencode, imread and imwrite. Video reading is handled by the VideoCapture class, which captures video from files and cameras; its member functions include VideoCapture, open, isOpened, release, grab, retrieve, read, get and set. Video writing is handled by the VideoWriter class, whose member functions include VideoWriter, open, isOpened and write.
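A minimal capture-and-display sketch under the 2.x C++ API (the camera index 0 is an assumption; release() is called implicitly by the destructor):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    VideoCapture cap(0);                  // open the default camera
    if (!cap.isOpened()) return -1;

    namedWindow("preview", CV_WINDOW_AUTOSIZE);
    Mat frame;
    for (;;)
    {
        cap >> frame;                     // grab + retrieve one frame
        if (frame.empty()) break;         // end of stream / camera error
        imshow("preview", frame);
        if (waitKey(30) == 27) break;     // stop when Esc is pressed
    }
    return 0;
}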

 

void addWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, OutputArray dst, int dtype=-1)

Calculates the weighted sum of two arrays, i.e. blends them with different weights: dst(x,y) = saturate( src1(x,y)*alpha + src2(x,y)*beta + gamma ).

 

int createTrackbar(const string& trackbarname, const string& winname, int* value, int count, TrackbarCallback onChange=0, void* userdata=0)

Parameters:

  • trackbarname – Name of the created trackbar.
  • winname – Name of the window that will be used as a parent of the created trackbar.
  • value – Optional pointer to an integer variable whose value reflects the position of the slider. Upon creation, the slider position is defined by this variable.
  • count – Maximal position of the slider. The minimal position is always 0.
  • onChange – Pointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int,void*); , where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only value is updated.
  • userdata – User data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.
#include "stdafx.h"

#include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat)
#include <opencv2/highgui/highgui.hpp>  // OpenCV window I/O, trackbars

using namespace cv;

/// Global Variables
const int alpha_slider_max = 100;
int alpha_slider;
double alpha;
double beta;

/// Matrices to store images
Mat src1;
Mat src2;
Mat dst;

void on_trackbar( int, void* )
{
 alpha = (double) alpha_slider/alpha_slider_max ;
 beta = ( 1.0 - alpha );

 addWeighted( src1, alpha, src2, beta, 0.0, dst);

 imshow( "Linear Blend", dst );
}

int main( int argc, char** argv )
{
 /// Read image ( same size, same type )
 src1 = imread("lang.jpg");
 src2 = imread("taitan.jpg");

 if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
 if( !src2.data ) { printf("Error loading src2 \n"); return -1; }

 /// Initialize values
 alpha_slider = 0;

 /// Create Windows
 namedWindow("Linear Blend", 1);

 /// Create Trackbars
 char TrackbarName[50];
 sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );  // write the formatted label into the string buffer

 createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
    // trackbar name, parent window name, pointer to the slider value, maximum slider value, callback

 /// Show some stuff
 on_trackbar( alpha_slider, 0 );

 /// Wait until user press some key
 waitKey(0);
 return 0;
}

 

Video Stream

captRefrnc.set(CV_CAP_PROP_POS_MSEC, 1200); // jump to 1.2 seconds into the video (the property is measured in milliseconds)
captRefrnc.set(CV_CAP_PROP_POS_FRAMES, 10); // go to the 10th frame of the video
// now a read operation would read the frame at the set position
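The same properties can be read back with get(); a short sketch using the same captRefrnc object:

double posMsec  = captRefrnc.get(CV_CAP_PROP_POS_MSEC);    // current position, in milliseconds
double posFrame = captRefrnc.get(CV_CAP_PROP_POS_FRAMES);  // index of the frame to be decoded next
double total    = captRefrnc.get(CV_CAP_PROP_FRAME_COUNT); // total number of frames in the file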

Image similarity - PSNR and SSIM

PSNR (aka Peak Signal-to-Noise Ratio). The simplest definition starts out from the mean squared error (MSE) between the two images. Let there be two images, I1 and I2, with a two-dimensional size of i by j, composed of c channels.

Then the MSE and the PSNR are expressed as:

MSE  = 1/(c*i*j) * Σ (I1 - I2)^2
PSNR = 10 * log10( MAX_I^2 / MSE )

Here MAX_I is the maximum valid value for a pixel; for the usual one-byte-per-channel images this is 255. When the two images are identical the MSE is zero, which would cause a division by zero in the PSNR formula, so this case must be treated separately: the PSNR is undefined there and we handle it on its own.

The source code presented below performs the PSNR measurement for each frame, and computes the SSIM only for frames whose PSNR falls below an input trigger value.

 

Mat::convertTo

Converts the matrix to another data type, with or without scaling.

void Mat::convertTo(OutputArray m, int rtype, double alpha=1, double beta=0) const

Parameters:

m – destination matrix. If it does not have the proper size or type before the operation, it is reallocated.

rtype – desired destination matrix type or, rather, the depth, since the number of channels stays the same as in the source. If rtype is negative, the destination matrix has the same type as the source.

alpha – optional scale factor.

beta – optional delta added to the scaled values.

The method converts the source pixel values to the target data type; saturate_cast<> is applied at the end to avoid a possible overflow:

m(x,y) = saturate_cast<rType>( α*(*this)(x,y) + β )
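A short usage sketch (file and variable names are placeholders), converting an 8-bit image to a float image scaled to the [0,1] range:

Mat img = imread("input.jpg");            // 8-bit, 3-channel BGR image
Mat imgF;
img.convertTo(imgF, CV_32F, 1.0/255.0);   // alpha = 1/255, beta = 0: CV_32F values in [0,1]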

 

#include "stdafx.h"

#include <iostream>    // for standard I/O
#include <string>   // for strings
#include <iomanip>  // for controlling float print precision
#include <sstream>  // string to number conversion

#include <opencv2/imgproc/imgproc.hpp>  // Gaussian Blur
#include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat, Scalar)
#include <opencv2/highgui/highgui.hpp>  // OpenCV window I/O

using namespace std;
using namespace cv;

double getPSNR ( const Mat& I1, const Mat& I2);
Scalar getMSSIM( const Mat& I1, const Mat& I2);

void help()
{
    cout
        << "\n--------------------------------------------------------------------------" << endl
        << "This program shows how to read a video file with OpenCV. In addition, it tests the"
        << " similarity of two input videos first with PSNR, and for the frames below a PSNR "  << endl
        << "trigger value, also with MSSIM."<< endl
        << "Usage:"                                                                       << endl
        << "./video-source referenceVideo useCaseTestVideo PSNR_Trigger_Value Wait_Between_Frames " << endl
        << "--------------------------------------------------------------------------"   << endl
        << endl;
}
int main(int argc, char *argv[])
{
    help();
    if (argc != 5)
    {
        cout << "Not enough parameters" << endl;
        return -1;
    }
    stringstream conv;

    const string sourceReference = argv[1],sourceCompareWith = argv[2];
    int psnrTriggerValue, delay;
    conv << argv[3] << endl << argv[4];          // put in the strings
    conv >> psnrTriggerValue >> delay;    // take out the numbers (string -> int conversion)

    char c;
    int frameNum = -1;            // Frame counter

    VideoCapture captRefrnc(sourceReference),
        captUndTst(sourceCompareWith);

    if ( !captRefrnc.isOpened())
    {
        cout  << "Could not open reference " << sourceReference << endl;
        return -1;
    }

    if( !captUndTst.isOpened())
    {
        cout  << "Could not open case test " << sourceCompareWith << endl;
        return -1;
    }

    Size refS = Size((int) captRefrnc.get(CV_CAP_PROP_FRAME_WIDTH),
        (int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),

        uTSi = Size((int) captUndTst.get(CV_CAP_PROP_FRAME_WIDTH),
        (int) captUndTst.get(CV_CAP_PROP_FRAME_HEIGHT));

    if (refS != uTSi)
    {
        cout << "Inputs have different size!!! Closing." << endl;
        return -1;
    }

    const char* WIN_UT = "Under Test";    // window name
    const char* WIN_RF = "Reference";

    // Windows
    namedWindow(WIN_RF, CV_WINDOW_AUTOSIZE );
    namedWindow(WIN_UT, CV_WINDOW_AUTOSIZE );
    moveWindow(WIN_RF, 400,         0);
    moveWindow(WIN_UT, refS.width,  0);

    cout << "Reference frame resolution: Width=" << refS.width << "  Height=" << refS.height
        << " of nr#: " << captRefrnc.get(CV_CAP_PROP_FRAME_COUNT) << endl;

    cout << "PSNR trigger value " <<
        setiosflags(ios::fixed) << setprecision(3) << psnrTriggerValue << endl;

    Mat frameReference, frameUnderTest;
    double psnrV;
    Scalar mssimV;

    while( true) //Show the image captured in the window and repeat
    {
        captRefrnc >> frameReference;    // grab one frame from each video stream
        captUndTst >> frameUnderTest;

        if( frameReference.empty()  || frameUnderTest.empty())    // end of either video stream
        {
            cout << " < < <  Game over!  > > > ";
            break;
        }

        ++frameNum;
        cout <<"Frame:" << frameNum <<"# ";

        ///////////////////////////////// PSNR ////////////////////////////////////////////////////
        psnrV = getPSNR(frameReference,frameUnderTest);                    //get PSNR
        cout << setiosflags(ios::fixed) << setprecision(3) << psnrV << "dB";

        //////////////////////////////////// MSSIM /////////////////////////////////////////////////
        if (psnrV < psnrTriggerValue && psnrV)    // for efficiency, compute SSIM only when the PSNR drops below the trigger value (and is non-zero)
        {
            mssimV = getMSSIM(frameReference,frameUnderTest);

            cout << " MSSIM: "
                << " R " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[2] * 100 << "%"
                << " G " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[1] * 100 << "%"
                << " B " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[0] * 100 << "%";
        }

        cout << endl;

        ////////////////////////////////// Show Image /////////////////////////////////////////////
        imshow( WIN_RF, frameReference);
        imshow( WIN_UT, frameUnderTest);

        c = (char) waitKey(delay);
        if (c == 27) break;
    }

    return 0;
}

double getPSNR(const Mat& I1, const Mat& I2)
{
    Mat s1;
    absdiff(I1, I2, s1);       // |I1 - I2|
    s1.convertTo(s1, CV_32F);  // cannot make a square on 8 bits; convert to float first
    s1 = s1.mul(s1);           // |I1 - I2|^2   (mul() is element-wise multiplication)

    Scalar s = sum(s1);         // sum elements per channel

    double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels

    if( sse <= 1e-10) // for small values return zero
        return 0;
    else
    {
        double  mse =sse /(double)(I1.channels() * I1.total());
        double psnr = 10.0*log10((255*255)/mse);
        return psnr;
    }
}

Scalar getMSSIM( const Mat& i1, const Mat& i2)
{
    const double C1 = 6.5025, C2 = 58.5225;
    /***************************** INITS **********************************/
    int d     = CV_32F;

    Mat I1, I2;
    i1.convertTo(I1, d);           // cannot calculate on one byte large values
    i2.convertTo(I2, d);

    Mat I2_2   = I2.mul(I2);        // I2^2
    Mat I1_2   = I1.mul(I1);        // I1^2
    Mat I1_I2  = I1.mul(I2);        // I1 * I2

    /*************************** END INITS **********************************/

    Mat mu1, mu2;   // PRELIMINARY COMPUTING
    GaussianBlur(I1, mu1, Size(11, 11), 1.5);
    GaussianBlur(I2, mu2, Size(11, 11), 1.5);

    Mat mu1_2   =   mu1.mul(mu1);
    Mat mu2_2   =   mu2.mul(mu2);
    Mat mu1_mu2 =   mu1.mul(mu2);

    Mat sigma1_2, sigma2_2, sigma12;

    GaussianBlur(I1_2, sigma1_2, Size(11, 11), 1.5);
    sigma1_2 -= mu1_2;

    GaussianBlur(I2_2, sigma2_2, Size(11, 11), 1.5);
    sigma2_2 -= mu2_2;

    GaussianBlur(I1_I2, sigma12, Size(11, 11), 1.5);
    sigma12 -= mu1_mu2;

    ///////////////////////////////// FORMULA ////////////////////////////////
    Mat t1, t2, t3;

    t1 = 2 * mu1_mu2 + C1;
    t2 = 2 * sigma12 + C2;
    t3 = t1.mul(t2);              // t3 = ((2*mu1_mu2 + C1).*(2*sigma12 + C2))

    t1 = mu1_2 + mu2_2 + C1;
    t2 = sigma1_2 + sigma2_2 + C2;
    t1 = t1.mul(t2);               // t1 =((mu1_2 + mu2_2 + C1).*(sigma1_2 + sigma2_2 + C2))

    Mat ssim_map;
    divide(t3, t1, ssim_map);      // ssim_map =  t3./t1;

    Scalar mssim = mean( ssim_map ); // mssim = average of ssim map
    return mssim;
}
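The two helpers are not tied to video; a hedged usage sketch on two still images of the same size and type (file names are placeholders):

Mat a = imread("reference.png"), b = imread("test.png");
if (!a.empty() && !b.empty() && a.size() == b.size() && a.type() == b.type())
{
    cout << "PSNR: " << getPSNR(a, b) << " dB" << endl;
    Scalar s = getMSSIM(a, b);            // per-channel SSIM, stored in B, G, R order
    cout << "MSSIM B/G/R: " << s.val[0] << " " << s.val[1] << " " << s.val[2] << endl;
}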

 

Writing video

For simple video outputs you can use the OpenCV built-in VideoWriter class, designed for this.

The type of the container is expressed in the file's extension (for example avi, mov or mkv). A container holds multiple elements such as video feeds, audio feeds or other tracks (for example subtitles). How these feeds are stored is determined by the codec used for each one of them.

OpenCV's limitations

For video containers OpenCV supports only the avi extension, and only its first version. A direct limitation of this is that you cannot save a video file larger than 2 GB. Furthermore, you can only create and expand a single video track inside the container; there is no support for audio or other tracks.

VideoWriter::open(const string& filename, int fourcc, double fps, Size frameSize, bool isColor=true)
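A minimal sketch of opening a writer with an explicitly chosen codec; the MJPG fourcc, 30 fps and the 640x480 frame size are assumptions for illustration:

VideoWriter writer;
writer.open("output.avi",                  // container: only .avi is supported
            CV_FOURCC('M','J','P','G'),    // four-character code of the codec
            30.0,                          // frames per second
            Size(640, 480),                // size of every frame pushed into the writer
            true);                         // expect color frames
if (!writer.isOpened()) { /* codec not available or the file could not be created */ }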

#include "stdafx.h"
#include <iostream> // for standard I/O
#include <string>   // for strings

#include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat)
#include <opencv2/highgui/highgui.hpp>  // Video write

using namespace std;
using namespace cv;

static void help()
{
    cout
        << "------------------------------------------------------------------------------" << endl
        << "This program shows how to write video files."                                   << endl
        << "You can extract the R or G or B color channel of the input video."              << endl
        << "Usage:"                                                                         << endl
        << "./video-write inputvideoName [ R | G | B] [Y | N]"                              << endl
        << "------------------------------------------------------------------------------" << endl
        << endl;
}

int main(int argc, char *argv[])
{
    help();

    // command-line arguments: input video file name, the channel to extract (R/G/B), and whether to ask for the output codec (otherwise the input's codec is reused)
    if (argc != 4)
    {
        cout << "Not enough parameters" << endl;
        return -1;
    }

    const string source      = argv[1];           // the source file name
    const bool askOutputType = argv[3][0] == 'Y';  // If false it will use the input's codec type

    VideoCapture inputVideo(source);              // Open input
    if (!inputVideo.isOpened())
    {
        cout  << "Could not open the input video: " << source << endl;
        return -1;
    }

    string::size_type pAt = source.find_last_of('.');                  // Find extension point
    const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi";   // Form the new name with container (std::string supports direct concatenation)
    int ex = static_cast<int>(inputVideo.get(CV_CAP_PROP_FOURCC));     // Get Codec Type- Int form

    // Transform from int to char via Bitwise operators
    char EXT[] = {(char)(ex & 0XFF) , (char)((ex & 0XFF00) >> 8),(char)((ex & 0XFF0000) >> 16),(char)((ex & 0XFF000000) >> 24), 0};

    Size S = Size((int) inputVideo.get(CV_CAP_PROP_FRAME_WIDTH),    // Acquire input size
        (int) inputVideo.get(CV_CAP_PROP_FRAME_HEIGHT));

    VideoWriter outputVideo;                                        // Open the output
    if (askOutputType)
        outputVideo.open(NAME, ex=-1, inputVideo.get(CV_CAP_PROP_FPS), S, true);  // fourcc of -1 pops up a codec-selection dialog (Windows only)
    else
        outputVideo.open(NAME, ex, inputVideo.get(CV_CAP_PROP_FPS), S, true);

    if (!outputVideo.isOpened())
    {
        cout  << "Could not open the output video for write: " << source << endl;
        return -1;
    }

    cout << "Input frame resolution: Width=" << S.width << "  Height=" << S.height
        << " of nr#: " << inputVideo.get(CV_CAP_PROP_FRAME_COUNT) << endl;
    cout << "Input codec type: " << EXT << endl;

    int channel = 2; // Select the channel to save
    switch(argv[2][0])
    {
    case 'R' : channel = 2; break;
    case 'G' : channel = 1; break;
    case 'B' : channel = 0; break;
    }
    Mat src, res;
    vector<Mat> spl;

    for(;;) //Show the image captured in the window and repeat
    {
        inputVideo >> src;              // read
        if (src.empty()) break;         // check if at end

        split(src, spl);                // process - split the image into its single-channel planes
        for (int i =0; i < 3; ++i)
            if (i != channel)
                spl[i] = Mat::zeros(S, spl[0].type());    // zero out the other channels
        merge(spl, res);    // merge the channels back into one image

        //outputVideo.write(res); //save or
        outputVideo << res;
    }

    cout << "Finished writing" << endl;
    return 0;
}