Video surveillance and live streaming: an RTSP streaming server based on OpenCV, libx264, and live555 (zc301P camera)

The step-by-step learning process of the past month is documented in my earlier posts, and now we have finally reached the last step.

Without further ado, here is the code. If anything is unclear, first read my earlier posts, which walk through the whole process; if it is still unclear, ask in the comments.

If anything here is wrong or could be done better, I hope readers will offer suggestions.

#ifndef _DYNAMIC_RTSP_SERVER_HH
#define _DYNAMIC_RTSP_SERVER_HH

#ifndef _RTSP_SERVER_SUPPORTING_HTTP_STREAMING_HH
#include <RTSPServerSupportingHTTPStreaming.hh>
#endif

class DynamicRTSPServer: public RTSPServerSupportingHTTPStreaming {
public:
  static DynamicRTSPServer* createNew(UsageEnvironment& env, Port ourPort,
                      UserAuthenticationDatabase* authDatabase,
                      unsigned reclamationTestSeconds = 65);

protected:
  DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
            UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds);
  // called only by createNew();
  virtual ~DynamicRTSPServer();

protected: // redefined virtual functions
  virtual ServerMediaSession*
  lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession);
};

#endif

DynamicRTSPServer.hh

#include "DynamicRTSPServer.hh"
#include "H264LiveVideoServerMediaSubssion.hh"
#include <liveMedia.hh>
#include <string.h>

DynamicRTSPServer* DynamicRTSPServer::createNew(
                UsageEnvironment& env, Port ourPort,
                 UserAuthenticationDatabase* authDatabase,
                 unsigned reclamationTestSeconds)
{
  int ourSocket = setUpOurSocket(env, ourPort);
  if (ourSocket == -1) return NULL;

  return new DynamicRTSPServer(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds);
}

DynamicRTSPServer::DynamicRTSPServer(UsageEnvironment& env, int ourSocket, Port ourPort,
                     UserAuthenticationDatabase* authDatabase, unsigned reclamationTestSeconds)
  : RTSPServerSupportingHTTPStreaming(env, ourSocket, ourPort, authDatabase, reclamationTestSeconds) {}

DynamicRTSPServer::~DynamicRTSPServer() {}

static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/); // forward

ServerMediaSession* DynamicRTSPServer::lookupServerMediaSession(char const* streamName, Boolean isFirstLookupInSession)
{
  // First, check whether we already have a "ServerMediaSession" for this stream name:
  ServerMediaSession* sms = RTSPServer::lookupServerMediaSession(streamName);
  Boolean smsExists = sms != NULL;

  // If a session already exists and this is the first lookup in this session, recreate it:
  if (smsExists && isFirstLookupInSession)
  {
      // Remove the existing "ServerMediaSession" and create a new one, in case the underlying
      // file has changed in some way:
      removeServerMediaSession(sms);
      sms = NULL;
  }
  if (sms == NULL)
  {
      sms = createNewSMS(envir(), streamName/*, fid*/);
      addServerMediaSession(sms);
  }

  return sms;
}

static ServerMediaSession* createNewSMS(UsageEnvironment& env, char const* fileName/*, FILE* fid*/)
{
  // Use the file name extension to determine the type of "ServerMediaSession":
  char const* extension = strrchr(fileName, '.');
  if (extension == NULL) return NULL;

  ServerMediaSession* sms = NULL;
  Boolean const reuseSource = False;

  if (strcmp(extension, ".264") == 0) {
    // Assumed to be a H.264 Video Elementary Stream file:
    char const* descStr = "H.264 Video, streamed by the LIVE555 Media Server";
    sms = ServerMediaSession::createNew(env, fileName, fileName, descStr);
    OutPacketBuffer::maxSize = 100000; // allow for some possibly large H.264 frames
    sms->addSubsession(H264LiveVideoServerMediaSubssion::createNew(env, fileName, reuseSource));
  }

  return sms;
}

DynamicRTSPServer.cpp

#include <BasicUsageEnvironment.hh>
#include "DynamicRTSPServer.hh"
#include "H264FramedLiveSource.hh"
#include <opencv/highgui.h>

//"version"
#ifndef _MEDIA_SERVER_VERSION_HH
#define _MEDIA_SERVER_VERSION_HH
#define MEDIA_SERVER_VERSION_STRING "0.85"
#endif

Cameras Camera; // global camera + encoder instance, shared with H264FramedLiveSource.cpp
int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  UserAuthenticationDatabase* authDB = NULL;
#ifdef ACCESS_CONTROL
  // To implement client access control to the RTSP server, do the following:
  authDB = new UserAuthenticationDatabase;
  authDB->addUserRecord("username1", "password1"); // replace these with real strings
  // Repeat the above with each <username>, <password> that you wish to allow
  // access to the server.
#endif

  // Create the RTSP server.  Try first with the default port number (554),
  // and then with the alternative port number (8554):
  RTSPServer* rtspServer;
  portNumBits rtspServerPortNum = 554;
  Camera.Init();
  rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  if (rtspServer == NULL) {
    rtspServerPortNum = 8554;
    rtspServer = DynamicRTSPServer::createNew(*env, rtspServerPortNum, authDB);
  }
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }

  *env << "LIVE555 Media Server\n";
  *env << "\tversion " << MEDIA_SERVER_VERSION_STRING
       << " (LIVE555 Streaming Media library version "
       << LIVEMEDIA_LIBRARY_VERSION_STRING << ").\n";

  char* urlPrefix = rtspServer->rtspURLPrefix();
  *env << "Play streams from this server using the URL\n\t"
       << urlPrefix << "<filename>\nwhere <filename> is a file present in the current directory.\n";
  *env << "Each file's type is inferred from its name suffix:\n";
  *env << "\t\".264\" => a H.264 Video Elementary Stream file\n";

  // Also, attempt to create a HTTP server for RTSP-over-HTTP tunneling.
  // Try first with the default HTTP port (80), and then with the alternative HTTP
  // port numbers (8000 and 8080).

  if (rtspServer->setUpTunnelingOverHTTP(80) || rtspServer->setUpTunnelingOverHTTP(8000) || rtspServer->setUpTunnelingOverHTTP(8080)) {
    *env << "(We use port " << rtspServer->httpServerPortNum() << " for optional RTSP-over-HTTP tunneling, or for HTTP live streaming (for indexed Transport Stream files only).)\n";
  } else {
    *env << "(RTSP-over-HTTP tunneling is not available.)\n";
  }

  env->taskScheduler().doEventLoop(); // does not return
  Camera.Destory(); // not reached; cleanup would require exiting the event loop
  return 0; // only to prevent compiler warning
}

live555MediaServer.cpp

/*
*  H264LiveVideoServerMediaSubssion.hh
*/
#ifndef _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#define _H264_LIVE_VIDEO_SERVER_MEDIA_SUBSESSION_HH
#include <liveMedia/H264VideoFileServerMediaSubsession.hh>
#include <UsageEnvironment/UsageEnvironment.hh>

class H264LiveVideoServerMediaSubssion : public H264VideoFileServerMediaSubsession {

public:
    static H264LiveVideoServerMediaSubssion*
        createNew(UsageEnvironment& env,
        char const* fileName,
        Boolean reuseFirstSource);

protected:
    H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource);
    ~H264LiveVideoServerMediaSubssion();

protected:
    FramedSource* createNewStreamSource(unsigned clientSessionId,
        unsigned& estBitrate);
public:
    char fFileName[100];

};

#endif

H264LiveVideoServerMediaSubssion.hh

/*
*  H264LiveVideoServerMediaSubssion.cpp
*/

#include "H264LiveVideoServerMediaSubssion.hh"
#include "H264FramedLiveSource.hh"
#include <H264VideoStreamFramer.hh>
#include <string.h>

H264LiveVideoServerMediaSubssion* H264LiveVideoServerMediaSubssion::createNew (UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
{
    return new H264LiveVideoServerMediaSubssion(env, fileName, reuseFirstSource);
}

H264LiveVideoServerMediaSubssion::H264LiveVideoServerMediaSubssion(UsageEnvironment& env, char const* fileName, Boolean reuseFirstSource)
: H264VideoFileServerMediaSubsession(env, fileName, reuseFirstSource)
{
    strcpy(fFileName, fileName);
}

H264LiveVideoServerMediaSubssion::~H264LiveVideoServerMediaSubssion()
{
}

FramedSource* H264LiveVideoServerMediaSubssion::createNewStreamSource(unsigned clientSessionId, unsigned& estBitrate)
{
    estBitrate = 1000; // kbps

    H264FramedLiveSource* liveSource = H264FramedLiveSource::createNew(envir(), fFileName);
    if (liveSource == NULL)
    {
        return NULL;
    }
    return H264VideoStreamFramer::createNew(envir(), liveSource);
}

H264LiveVideoServerMediaSubssion.cpp

/*
* H264FramedLiveSource.hh
*/

#ifndef _H264FRAMEDLIVESOURCE_HH
#define _H264FRAMEDLIVESOURCE_HH

#include <FramedSource.hh>
#include <UsageEnvironment.hh>
#include <opencv/highgui.h>

extern "C"
{
#include "encoder.h"
}

class H264FramedLiveSource : public FramedSource
{
public:
    static H264FramedLiveSource* createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize = 0, unsigned playTimePerFrame = 0);
    x264_nal_t * my_nal;

protected:
    H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame); // called only by createNew()
    ~H264FramedLiveSource();

private:
    // redefined virtual functions:
    virtual void doGetNextFrame();
    int TransportData(unsigned char* to, unsigned maxSize);
    //static int nalIndex;

protected:
    FILE *fp;

};

class Cameras
{
public:
    void Init();
    void GetNextFrame();
    void Destory();
public:
    CvCapture * cap ;
    my_x264_encoder*  encoder;
    int n_nal;
    x264_picture_t pic_out;

    IplImage * img;
    unsigned char *RGB1;
};

#endif

H264FramedLiveSource.hh

/*
*  H264FramedLiveSource.cpp
*/

#include "H264FramedLiveSource.hh"
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
extern Cameras Camera; // defined in live555MediaServer.cpp

#define WIDTH 320
#define HEIGHT 240
#define widthStep 960 // bytes per BGR row: WIDTH * 3
#define ENCODER_TUNE   "zerolatency"
#define ENCODER_PROFILE  "baseline"
#define ENCODER_PRESET "veryfast"
#define ENCODER_COLORSPACE X264_CSP_I420
#define CLEAR(x) (memset((&x),0,sizeof(x)))

void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height);

H264FramedLiveSource::H264FramedLiveSource(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize, unsigned playTimePerFrame) : FramedSource(env)
{
    //fp = fopen(fileName, "rb");
}

H264FramedLiveSource* H264FramedLiveSource::createNew(UsageEnvironment& env, char const* fileName, unsigned preferredFrameSize /*= 0*/, unsigned playTimePerFrame /*= 0*/)
{
    H264FramedLiveSource* newSource = new H264FramedLiveSource(env, fileName, preferredFrameSize, playTimePerFrame);

    return newSource;
}

H264FramedLiveSource::~H264FramedLiveSource()
{
    //fclose(fp);
}

void H264FramedLiveSource::doGetNextFrame()
{

    fFrameSize = 0;
    // Not sure why, but sending a couple of encoded frames together seems to
    // look slightly better; it may just be placebo.
    for (int i = 0; i < 2; i++)
    {
        Camera.GetNextFrame();
        for (my_nal = Camera.encoder->nal; my_nal < Camera.encoder->nal + Camera.n_nal; ++my_nal){
            // don't overflow the sink's buffer: fTo holds at most fMaxSize bytes
            if (fFrameSize + my_nal->i_payload > fMaxSize) {
                fNumTruncatedBytes += my_nal->i_payload;
                continue;
            }
            memmove((unsigned char*)fTo + fFrameSize, my_nal->p_payload, my_nal->i_payload);
            fFrameSize += my_nal->i_payload;
        }
    }

    nextTask() = envir().taskScheduler().scheduleDelayedTask(0,
        (TaskFunc*)FramedSource::afterGetting, this); // run afterGetting after a 0-second delay
    return;
}

void Cameras::Init()
{
    int ret;
    // open the first camera
    cap = cvCreateCameraCapture(0);
    if (!cap)
    {
        fprintf(stderr, "Cannot open the camera.\n");
        exit(-1);
    }
    cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, WIDTH);
    cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);

    encoder = (my_x264_encoder *)malloc(sizeof(my_x264_encoder));
    if (!encoder){
        printf("cannot malloc my_x264_encoder !\n");
        exit(EXIT_FAILURE);
    }
    CLEAR(*encoder);

    strcpy(encoder->parameter_preset, ENCODER_PRESET);
    strcpy(encoder->parameter_tune, ENCODER_TUNE);

    encoder->x264_parameter = (x264_param_t *)malloc(sizeof(x264_param_t));
    if (!encoder->x264_parameter){
        printf("malloc x264_parameter error!\n");
        exit(EXIT_FAILURE);
    }

    /* initialize the encoder parameters */
    CLEAR(*(encoder->x264_parameter));
    x264_param_default(encoder->x264_parameter);

    if ((ret = x264_param_default_preset(encoder->x264_parameter, encoder->parameter_preset, encoder->parameter_tune))<0){
        printf("x264_param_default_preset error!\n");
        exit(EXIT_FAILURE);
    }

    /* threading: let x264 pick a safe thread count (avoids deadlocking on drained buffers) */
    encoder->x264_parameter->i_threads = X264_SYNC_LOOKAHEAD_AUTO;
    /* video options */
    encoder->x264_parameter->i_width = WIDTH;   // width of the frames to encode
    encoder->x264_parameter->i_height = HEIGHT; // height of the frames to encode
    encoder->x264_parameter->i_frame_total = 0; // total frames to encode; 0 = unknown
    encoder->x264_parameter->i_keyint_max = 25;
    /* stream parameters */
    encoder->x264_parameter->i_bframe = 5; // note: B-frames add latency and work against the zerolatency tune
    encoder->x264_parameter->b_open_gop = 0;
    encoder->x264_parameter->i_bframe_pyramid = 0;
    encoder->x264_parameter->i_bframe_adaptive = X264_B_ADAPT_TRELLIS;

    /* log level; uncomment to print encoder debug info */
//    encoder->x264_parameter->i_log_level = X264_LOG_DEBUG;

    encoder->x264_parameter->i_fps_num = 25; // frame-rate numerator
    encoder->x264_parameter->i_fps_den = 1;  // frame-rate denominator

    encoder->x264_parameter->b_intra_refresh = 1;
    encoder->x264_parameter->b_annexb = 1;
    /////////////////////////////////////////////////////////////////////////////////////////////////////

    strcpy(encoder->parameter_profile, ENCODER_PROFILE);
    if ((ret = x264_param_apply_profile(encoder->x264_parameter, encoder->parameter_profile))<0){
        printf("x264_param_apply_profile error!\n");
        exit(EXIT_FAILURE);
    }
    /* open the encoder */
    encoder->x264_encoder = x264_encoder_open(encoder->x264_parameter);
    encoder->colorspace = ENCODER_COLORSPACE;

    /* initialize the picture */
    encoder->yuv420p_picture = (x264_picture_t *)malloc(sizeof(x264_picture_t));
    if (!encoder->yuv420p_picture){
        printf("malloc encoder->yuv420p_picture error!\n");
        exit(EXIT_FAILURE);
    }
    if ((ret = x264_picture_alloc(encoder->yuv420p_picture, encoder->colorspace, WIDTH, HEIGHT))<0){
        printf("ret=%d\n", ret);
        printf("x264_picture_alloc error!\n");
        exit(EXIT_FAILURE);
    }

    encoder->yuv420p_picture->img.i_csp = encoder->colorspace;
    encoder->yuv420p_picture->img.i_plane = 3;
    encoder->yuv420p_picture->i_type = X264_TYPE_AUTO;

    /* allocate the YUV (I420) buffer: full-size Y plane plus quarter-size U and V planes */
    encoder->yuv = (uint8_t *)malloc(WIDTH*HEIGHT * 3 / 2);
    if (!encoder->yuv){
        printf("malloc yuv error!\n");
        exit(EXIT_FAILURE);
    }
    memset(encoder->yuv, 0, WIDTH*HEIGHT * 3 / 2); // CLEAR(*(encoder->yuv)) would zero only one byte
    encoder->yuv420p_picture->img.plane[0] = encoder->yuv;
    encoder->yuv420p_picture->img.plane[1] = encoder->yuv + WIDTH*HEIGHT;
    encoder->yuv420p_picture->img.plane[2] = encoder->yuv + WIDTH*HEIGHT + WIDTH*HEIGHT / 4;

    n_nal = 0;
    encoder->nal = (x264_nal_t *)calloc(2, sizeof(x264_nal_t)); // calloc zero-initializes
    if (!encoder->nal){
        printf("calloc x264_nal_t error!\n");
        exit(EXIT_FAILURE);
    }

    RGB1 = (unsigned char *)malloc(HEIGHT * WIDTH * 3);

}
void Cameras::GetNextFrame()
{
    img = cvQueryFrame(cap);

    // OpenCV delivers BGR rows; swap to RGB for the converter below
    for (int i = 0; i < HEIGHT; i++)
    {
        for (int j = 0; j < WIDTH; j++)
        {
            RGB1[(i*WIDTH + j) * 3]     = img->imageData[i * widthStep + j * 3 + 2];
            RGB1[(i*WIDTH + j) * 3 + 1] = img->imageData[i * widthStep + j * 3 + 1];
            RGB1[(i*WIDTH + j) * 3 + 2] = img->imageData[i * widthStep + j * 3];
        }
    }
    Convert(RGB1, encoder->yuv, WIDTH, HEIGHT);
    encoder->yuv420p_picture->i_pts++;
    if ( x264_encoder_encode(encoder->x264_encoder, &encoder->nal, &n_nal, encoder->yuv420p_picture, &pic_out) < 0){
        printf("x264_encoder_encode error!\n");
        exit(EXIT_FAILURE);
    }
    /*for (my_nal = encoder->nal; my_nal < encoder->nal + n_nal; ++my_nal){
        write(fd_write, my_nal->p_payload, my_nal->i_payload);
    }*/
}
void Cameras::Destory()
{
    free(RGB1);
    cvReleaseCapture(&cap);
    free(encoder->yuv);
    free(encoder->yuv420p_picture);
    free(encoder->x264_parameter);
    x264_encoder_close(encoder->x264_encoder);
    free(encoder);
}

H264FramedLiveSource.cpp

#include <x264.h>

typedef struct my_x264_encoder{
    x264_param_t  * x264_parameter;
    char parameter_preset[20];
    char parameter_tune[20];
    char parameter_profile[20];
    x264_t  * x264_encoder;
    x264_picture_t * yuv420p_picture;
    long colorspace;
    unsigned char *yuv;
    x264_nal_t * nal;
} my_x264_encoder;

encoder.h

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iostream>
// RGB-to-YUV conversion coefficients
#define MY(a,b,c) (( a*  0.2989  + b*  0.5866  + c*  0.1145))
#define MU(a,b,c) (( a*(-0.1688) + b*(-0.3312) + c*  0.5000 + 128))
#define MV(a,b,c) (( a*  0.5000  + b*(-0.4184) + c*(-0.0816) + 128))
//#define MY(a,b,c) (( a*  0.257 + b*  0.504  + c*  0.098+16))
//#define MU(a,b,c) (( a*( -0.148) + b*(- 0.291) + c* 0.439 + 128))
//#define MV(a,b,c) (( a*  0.439  + b*(- 0.368) + c*( - 0.071) + 128))

// clamp results to the [0,255] range
#define DY(a,b,c) (MY(a,b,c) > 255 ? 255 : (MY(a,b,c) < 0 ? 0 : MY(a,b,c)))
#define DU(a,b,c) (MU(a,b,c) > 255 ? 255 : (MU(a,b,c) < 0 ? 0 : MU(a,b,c)))
#define DV(a,b,c) (MV(a,b,c) > 255 ? 255 : (MV(a,b,c) < 0 ? 0 : MV(a,b,c)))
#define CLIP(a) ((a) > 255 ? 255 : ((a) < 0 ? 0 : (a)))
//RGB to YUV
void Convert(unsigned char *RGB, unsigned char *YUV, unsigned int width, unsigned int height)
{
    unsigned int i, x, y, j;
    unsigned char *Y = NULL;
    unsigned char *U = NULL;
    unsigned char *V = NULL;

    Y = YUV;
    U = YUV + width*height;
    V = U + ((width*height) >> 2);
    for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
    {
        j = y*width + x;
        i = j * 3;
        Y[j] = (unsigned char)(DY(RGB[i], RGB[i + 1], RGB[i + 2]));
        if (x % 2 == 1 && y % 2 == 1)
        {
            j = (width >> 1) * (y >> 1) + (x >> 1);
            // i computed above still points at the current RGB pixel
            U[j] = (unsigned char)
                ((DU(RGB[i], RGB[i + 1], RGB[i + 2]) +
                DU(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
                DU(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
                DU(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
            V[j] = (unsigned char)
                ((DV(RGB[i], RGB[i + 1], RGB[i + 2]) +
                DV(RGB[i - 3], RGB[i - 2], RGB[i - 1]) +
                DV(RGB[i - width * 3], RGB[i + 1 - width * 3], RGB[i + 2 - width * 3]) +
                DV(RGB[i - 3 - width * 3], RGB[i - 2 - width * 3], RGB[i - 1 - width * 3])) / 4);
        }
    }
}

RGB2YUV.cpp

That is all the code. Some of it modifies the live555 sources and some adapts other people's code, so there may be a few unused variables or leftover steps — just ignore them; they do not affect compilation.

g++ -c *.cpp -I /usr/local/include/groupsock  -I /usr/local/include/UsageEnvironment -I /usr/local/include/liveMedia -I /usr/local/include/BasicUsageEnvironment -I .

g++  *.o /usr/local/lib/libliveMedia.a /usr/local/lib/libgroupsock.a /usr/local/lib/libBasicUsageEnvironment.a /usr/local/lib/libUsageEnvironment.a /usr/local/lib/libx264.so /usr/local/lib/libopencv_highgui.so /usr/local/lib/libopencv_videoio.so /usr/lib/x86_64-linux-gnu/libx264.so.142 -ldl  -lm -lpthread -ldl -g

These are my compile commands. Because I accidentally installed two versions of libx264, both copies of the library are linked.

Switch to root (binding the default RTSP port 554 requires it) and run ./a.out.

Besides VLC, the MX Player app on a phone works even better. I always watch on my phone: connect to the same Wi-Fi and enter the stream URL.

The code sets the frame rate to 25 fps; I can't tell the exact real rate, but playback is quite smooth. The resolution is 320x240 (adjust it to taste), and latency is around one second or less.

I have read many papers on video surveillance and live streaming. The authors either don't build a server at all, or simply transmit JPEG images, or manage only 6-7 fps — none of which really meets the requirements.

Back then I hoped to find suitable papers or blog posts to learn from, but there were none.

Many people know how to do this but simply won't share it. Spending a little time writing a summary is worthwhile: it helps beginners, and the world moves faster when knowledge is passed down.

I hope my readers will share the knowledge they worked hard to gain, and spread that spirit so that everyone learns to share.

Thanks for reading! Please raise any suggestions or questions you have.

Once testing passes, the remaining step is porting the code to a development board. That depends on your particular board, and the process is not very complicated.

Date: 2024-10-12 21:44:14
