ONVIF Development: Video Streamer - Implemented Features and Practical Notes

Contents

  1. Generating the ONVIF source framework
    1. Generating a C header from the WSDL files
    2. Generating the source framework from the header
  2. Setting up the soap runtime
  3. Hooking up RTSP video
    1. Implementing the GetCapabilities command
    2. Implementing the GetServices command
    3. Implementing the GetVideoSources command
    4. Implementing the GetProfiles command
    5. GetVideoSourceConfiguration and GetVideoEncoderConfiguration
    6. GetVideoEncoderConfigurationOptions
  4. Running the live555MediaServer
  5. Testing with Onvif Device Manager
    1. Live video
    2. Video streaming
    3. Profiles
    4. The running live555 RTSP server

With the groundwork laid in the previous posts, the actual ONVIF implementation can begin. One essential piece is hooking up the video stream: an ONVIF-compliant surveillance client must be able to receive the RTSP video stream sent by the device (NVT). The client software used here is Onvif Device Manager v2.2. [from http://blog.csdn.net/ghostyu]

The ONVIF Profile S Specification describes a profile that a device (also called a DVT) and a client can implement. "Profile" is a common term in computing; think of it as a scheme, configuration, or framework.

The document lists the conditions a device and a client must satisfy to implement video streaming; a device that satisfies all of them can claim conformance to Profile S.

If the goal is only video streaming, just the following commands need to be implemented.

1. GetProfiles
2. GetStreamUri
Fills in the RTSP path, e.g. rtsp://192.168.1.201/petrov.m4e (a minimal sketch of this handler is given at the end of Part 3)
3. Media streaming using RTSP
The open-source live555 provides the RTSP functionality here
4. GetVideoEncoderConfiguration
5. GetVideoEncoderConfigurationOptions
6. GetCapabilities
The command an NVC sends to learn which features the DVT supports

Reference documents:

1. ONVIF Profile S Specification
Describes what Profile S is and how to implement it
2. Reference_of_ONVIF_Development_v1.01.02
A design reference for an ONVIF DVT; it points out the road, but gives little concrete detail
3. ONVIF-Media-Service-Spec-v220
The specification of the ONVIF Media service
4. http://www.onvif.org/onvif/ver20/util/operationIndex.html
Detailed documentation for nearly every ONVIF command; extremely important. It explains what each struct member means and how to fill it in. ONVIF development is essentially an exercise in filling in structs.

Part 1. Generating the ONVIF source framework

1. Generating a C header from the WSDL files

wsdl2h -o onvif.h -c -s -t .\typemap.dat http://www.onvif.org/onvif/ver10/device/wsdl/devicemgmt.wsdl http://www.onvif.org/onvif/ver10/event/wsdl/event.wsdl http://www.onvif.org/onvif/ver10/display.wsdl http://www.onvif.org/onvif/ver10/deviceio.wsdl http://www.onvif.org/onvif/ver20/imaging/wsdl/imaging.wsdl http://www.onvif.org/onvif/ver10/media/wsdl/media.wsdl http://www.onvif.org/onvif/ver20/ptz/wsdl/ptz.wsdl http://www.onvif.org/onvif/ver10/receiver.wsdl http://www.onvif.org/onvif/ver10/recording.wsdl http://www.onvif.org/onvif/ver10/search.wsdl http://www.onvif.org/onvif/ver10/network/wsdl/remotediscovery.wsdl http://www.onvif.org/onvif/ver10/replay.wsdl http://www.onvif.org/onvif/ver20/analytics/wsdl/analytics.wsdl http://www.onvif.org/onvif/ver10/analyticsdevice.wsdl http://www.onvif.org/onvif/ver10/schema/onvif.xsd http://www.onvif.org/ver10/actionengine.wsdl

The only difference from the previous discovery post is the long list of WSDL files: this time we generate the complete ONVIF code framework.
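
The typemap.dat file passed with -t pins each XML namespace to a fixed prefix, so the identifiers in the generated code stay stable. A typical excerpt (these are the conventional ONVIF prefix bindings; check the typemap.dat shipped with your gsoap version):

tt   = "http://www.onvif.org/ver10/schema"
tds  = "http://www.onvif.org/ver10/device/wsdl"
trt  = "http://www.onvif.org/ver10/media/wsdl"
tev  = "http://www.onvif.org/ver10/events/wsdl"
timg = "http://www.onvif.org/ver20/imaging/wsdl"
tptz = "http://www.onvif.org/ver20/ptz/wsdl"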

2. Generating the source framework from the header

soapcpp2 -c onvif.h -x -I /root/onvif/gsoap-2.8/gsoap/import -I /root/onvif/gsoap-2.8/gsoap/

The generated C files are large (the biggest is over ten megabytes), mostly because little of the generated content is shared or reused.
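
For reference, a typical compile line (an assumption; adjust file names and paths to your own tree) links the generated serializers and server skeleton against gsoap's runtime:

gcc -o onvif_server main.c soapC.c soapServer.c /root/onvif/gsoap-2.8/gsoap/stdsoap2.c -I/root/onvif/gsoap-2.8/gsoap -lpthread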

Part 2. Setting up the soap runtime

int main(int argc, char **argv)
{
    int m, s;
    struct soap add_soap;
    int server_udp;

    server_udp = create_server_socket_udp();
    //bind_server_udp1(server_udp);
    pthread_t thrHello;
    pthread_t thrProbe;
    //pthread_create(&thrHello, NULL, main_Hello, server_udp);
    //sleep(2);
    pthread_create(&thrProbe, NULL, main_Probe, server_udp);

    soap_init(&add_soap);
    soap_set_namespaces(&add_soap, namespaces);

    /* ONVIF requests arrive over HTTP, so bind directly to port 80. */
    m = soap_bind(&add_soap, NULL, 80, 100);
    if (m < 0) {
        soap_print_fault(&add_soap, stderr);
        exit(-1);
    }
    fprintf(stderr, "Socket connection successful: master socket = %d\n", m);
    for (;;) {
        s = soap_accept(&add_soap);
        if (s < 0) {
            soap_print_fault(&add_soap, stderr);
            exit(-1);
        }
        fprintf(stderr, "Socket connection successful: slave socket = %d\n", s);
        soap_serve(&add_soap); /* dispatch the request to the generated skeleton */
        soap_end(&add_soap);   /* free data deserialized for this request */
    }
    return 0;
}

Note that we bind to port 80: ONVIF uses HTTP requests carrying XML. Properly, ONVIF would be integrated into a web server, with ordinary HTTP requests handled by the web server and ONVIF HTTP requests handed off to soap. Our approach also works; the only drawback is that the device's web interface then becomes unavailable.
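
To give a sense of what arrives on that port, an ONVIF request is an ordinary HTTP POST carrying a SOAP 1.2 envelope. A schematic, hand-abbreviated example (not captured from a real client):

POST /onvif/device_service HTTP/1.1
Host: 192.168.1.233
Content-Type: application/soap+xml; charset=utf-8

<?xml version="1.0" encoding="UTF-8"?>
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://www.w3.org/2003/05/soap-envelope"
                   xmlns:tds="http://www.onvif.org/ver10/device/wsdl">
  <SOAP-ENV:Body>
    <tds:GetCapabilities>
      <tds:Category>All</tds:Category>
    </tds:GetCapabilities>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>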

Part 3. Hooking up RTSP video

1. Implementing the GetCapabilities command

The client sends GetCapabilities to learn the device's capabilities, and uses the response to decide what to do next.

In __tds__GetCapabilities we only need to fill in the Media section plus a handful of other required fields.

//Media must be set if we are to stream RTSP video
tds__GetCapabilitiesResponse->Capabilities->Media = (struct tt__MediaCapabilities*)soap_malloc(soap, sizeof(struct tt__MediaCapabilities));
tds__GetCapabilitiesResponse->Capabilities->Media->XAddr = (char *) soap_malloc(soap, sizeof(char) * LARGE_INFO_LENGTH);
strcpy(tds__GetCapabilitiesResponse->Capabilities->Media->XAddr, _IPv4Address);
tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities = (struct tt__RealTimeStreamingCapabilities*)soap_malloc(soap, sizeof(struct tt__RealTimeStreamingCapabilities));
tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTPMulticast = (int *)soap_malloc(soap, sizeof(int));
*tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTPMulticast = _false;
tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORETCP = (int *)soap_malloc(soap, sizeof(int));
*tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORETCP = _true;
tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORERTSP_USCORETCP = (int *)soap_malloc(soap, sizeof(int));
*tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->RTP_USCORERTSP_USCORETCP = _true;
tds__GetCapabilitiesResponse->Capabilities->Media->StreamingCapabilities->Extension = NULL;
tds__GetCapabilitiesResponse->Capabilities->Media->Extension = NULL;
tds__GetCapabilitiesResponse->Capabilities->Media->__size = 0;
tds__GetCapabilitiesResponse->Capabilities->Media->__any = 0;

Other fields that must also be filled in:

//Important: only the video stream is implemented here, so VideoSources must be enabled
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->VideoSources = TRUE;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->VideoOutputs = FALSE;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->AudioSources = FALSE;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->AudioOutputs = FALSE;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->RelayOutputs = FALSE;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->__size = 0;
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO->__any = NULL;

tds__GetCapabilitiesResponse->Capabilities->Extension->Display = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->Recording = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->Search = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->Replay = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->Receiver = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->AnalyticsDevice = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->Extensions = NULL;
tds__GetCapabilitiesResponse->Capabilities->Extension->__size = 0;
tds__GetCapabilitiesResponse->Capabilities->Extension->__any = NULL;
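
Both snippets assume the enclosing structs have already been allocated, and soap_malloc does not zero memory. A minimal allocation preamble might look like the sketch below (the type names tt__Capabilities, tt__CapabilitiesExtension and tt__DeviceIOCapabilities follow the gsoap-generated header; verify them against your onvif.h):

/* Sketch: allocate and zero the nested structs before filling them in */
tds__GetCapabilitiesResponse->Capabilities = (struct tt__Capabilities *)soap_malloc(soap, sizeof(struct tt__Capabilities));
memset(tds__GetCapabilitiesResponse->Capabilities, 0, sizeof(struct tt__Capabilities));
tds__GetCapabilitiesResponse->Capabilities->Extension = (struct tt__CapabilitiesExtension *)soap_malloc(soap, sizeof(struct tt__CapabilitiesExtension));
memset(tds__GetCapabilitiesResponse->Capabilities->Extension, 0, sizeof(struct tt__CapabilitiesExtension));
tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO = (struct tt__DeviceIOCapabilities *)soap_malloc(soap, sizeof(struct tt__DeviceIOCapabilities));
memset(tds__GetCapabilitiesResponse->Capabilities->Extension->DeviceIO, 0, sizeof(struct tt__DeviceIOCapabilities));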

2. Implementing the GetServices command

int __tds__GetServices(struct soap* soap, struct _tds__GetServices *tds__GetServices, struct _tds__GetServicesResponse *tds__GetServicesResponse)
{
    DBG("__tds__GetServices\n");
    /* This handler is required. */
    char _IPAddr[INFO_LENGTH];
    /* The IP is hardcoded here; %d rather than %03d, since zero-padded
       octets would produce a malformed URL. */
    sprintf(_IPAddr, "http://%d.%d.%d.%d/onvif/services", 192, 168, 1, 233);
    tds__GetServicesResponse->__sizeService = 1;

    tds__GetServicesResponse->Service = (struct tds__Service *)soap_malloc(soap, sizeof(struct tds__Service));
    tds__GetServicesResponse->Service[0].XAddr = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    tds__GetServicesResponse->Service[0].Namespace = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(tds__GetServicesResponse->Service[0].Namespace, "http://www.onvif.org/ver10/events/wsdl");
    strcpy(tds__GetServicesResponse->Service[0].XAddr, _IPAddr);
    tds__GetServicesResponse->Service[0].Capabilities = NULL;
    tds__GetServicesResponse->Service[0].Version = (struct tt__OnvifVersion *)soap_malloc(soap, sizeof(struct tt__OnvifVersion));
    tds__GetServicesResponse->Service[0].Version->Major = 0;
    tds__GetServicesResponse->Service[0].Version->Minor = 3;
    /* Two extension entries are filled in below, so reserve two pointers. */
    tds__GetServicesResponse->Service[0].__any = (char **)soap_malloc(soap, sizeof(char *) * 2);
    tds__GetServicesResponse->Service[0].__any[0] = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(tds__GetServicesResponse->Service[0].__any[0], "why1");
    tds__GetServicesResponse->Service[0].__any[1] = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(tds__GetServicesResponse->Service[0].__any[1], "why2");
    tds__GetServicesResponse->Service[0].__size = 0;
    tds__GetServicesResponse->Service[0].__anyAttribute = NULL;
    return SOAP_OK;
}

3. Implementing the GetVideoSources command

int __tmd__GetVideoSources(struct soap* soap, struct _trt__GetVideoSources *trt__GetVideoSources, struct _trt__GetVideoSourcesResponse *trt__GetVideoSourcesResponse)
{
    DBG("__tmd__GetVideoSources\n");

    int size1 = 1;
    trt__GetVideoSourcesResponse->__sizeVideoSources = size1;
    trt__GetVideoSourcesResponse->VideoSources = (struct tt__VideoSource *)soap_malloc(soap, sizeof(struct tt__VideoSource) * size1);
    trt__GetVideoSourcesResponse->VideoSources[0].Framerate = 30;
    trt__GetVideoSourcesResponse->VideoSources[0].Resolution = (struct tt__VideoResolution *)soap_malloc(soap, sizeof(struct tt__VideoResolution));
    trt__GetVideoSourcesResponse->VideoSources[0].Resolution->Height = 720;
    trt__GetVideoSourcesResponse->VideoSources[0].Resolution->Width = 1280;
    trt__GetVideoSourcesResponse->VideoSources[0].token = (char *)soap_malloc(soap, sizeof(char) * INFO_LENGTH);
    strcpy(trt__GetVideoSourcesResponse->VideoSources[0].token, "GhostyuSource_token"); /* must match the SourceToken in GetProfiles */

    trt__GetVideoSourcesResponse->VideoSources[0].Imaging = (struct tt__ImagingSettings*)soap_malloc(soap, sizeof(struct tt__ImagingSettings));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Brightness = (float*)soap_malloc(soap, sizeof(float));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Brightness[0] = 128;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->ColorSaturation = (float*)soap_malloc(soap, sizeof(float));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->ColorSaturation[0] = 128;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Contrast = (float*)soap_malloc(soap, sizeof(float));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Contrast[0] = 128;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->IrCutFilter = (int *)soap_malloc(soap, sizeof(int));
    *trt__GetVideoSourcesResponse->VideoSources[0].Imaging->IrCutFilter = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Sharpness = (float*)soap_malloc(soap, sizeof(float));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Sharpness[0] = 128;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation = (struct tt__BacklightCompensation*)soap_malloc(soap, sizeof(struct tt__BacklightCompensation));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation->Mode = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->BacklightCompensation->Level = 20;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Exposure = NULL;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Focus = NULL;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange = (struct tt__WideDynamicRange*)soap_malloc(soap, sizeof(struct tt__WideDynamicRange));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange->Mode = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WideDynamicRange->Level = 20;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance = (struct tt__WhiteBalance*)soap_malloc(soap, sizeof(struct tt__WhiteBalance));
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->Mode = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->CrGain = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->WhiteBalance->CbGain = 0;
    trt__GetVideoSourcesResponse->VideoSources[0].Imaging->Extension = NULL;
    trt__GetVideoSourcesResponse->VideoSources[0].Extension = NULL;
    return SOAP_OK;
}

The most important part of __tmd__GetVideoSources is filling in the token: it must equal the SourceToken in the profile below, so that the profile can be matched to this video source.

4. Implementing the GetProfiles command

size = 1;
trt__GetProfilesResponse->Profiles =(struct tt__Profile *)soap_malloc(soap, sizeof(struct tt__Profile) * size);
trt__GetProfilesResponse->__sizeProfiles = size;

i=0;
trt__GetProfilesResponse->Profiles[i].Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
strcpy(trt__GetProfilesResponse->Profiles[i].Name,"my_profile");
trt__GetProfilesResponse->Profiles[i].token= (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
strcpy(trt__GetProfilesResponse->Profiles[i].token,"token_profile");
trt__GetProfilesResponse->Profiles[i].fixed = _false;
trt__GetProfilesResponse->Profiles[i].__anyAttribute = NULL;

Beyond this basic information, two major sub-structures must be filled in: VideoSourceConfiguration and VideoEncoderConfiguration. One describes the video source itself, the other the video encoding.

First allocate the VideoSourceConfiguration:

trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration = (struct tt__VideoSourceConfiguration *)soap_malloc(soap,sizeof(struct tt__VideoSourceConfiguration ));
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->token = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->SourceToken = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds = (struct tt__IntRectangle *)soap_malloc(soap,sizeof(struct tt__IntRectangle));

Then fill it in:

/* Note the SourceToken */
strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Name,"VS_Name");
strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->token,"VS_Token");
strcpy(trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->SourceToken,"GhostyuSource_token"); /* must equal the token in __tmd__GetVideoSources */
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->UseCount = 1;
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->x = 1;
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->y = 1;
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->height = 720;
trt__GetProfilesResponse->Profiles[i].VideoSourceConfiguration->Bounds->width = 1280;

Any pointer member must first be given memory with soap_malloc before it can be assigned.

Next comes the VideoEncoderConfiguration:

trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration = (struct tt__VideoEncoderConfiguration *)soap_malloc(soap,sizeof(struct tt__VideoEncoderConfiguration));
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Name = (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->token= (char *)soap_malloc(soap,sizeof(char)*MAX_PROF_TOKEN);
strcpy(trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Name,"VE_Name1");
strcpy(trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->token,"VE_token1");
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->UseCount = 1;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Quality = 10;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Encoding = 1;//JPEG = 0, MPEG4 = 1, H264 = 2;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution = (struct tt__VideoResolution *)soap_malloc(soap, sizeof(struct tt__VideoResolution));
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution->Height = 720;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Resolution->Width = 1280;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl = (struct tt__VideoRateControl *)soap_malloc(soap, sizeof(struct tt__VideoRateControl));
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->FrameRateLimit = 30;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->EncodingInterval = 1;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->RateControl->BitrateLimit = 500;
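
The members of VideoEncoderConfiguration not set above (the codec-specific settings, Multicast, SessionTimeout) are left dangling, and soap_malloc does not zero memory. A sketch of how they might be initialized for the MPEG4 case (member and type names follow the gsoap-generated ONVIF code; check them against your onvif.h):

/* MPEG4-specific settings, matching Encoding = 1 above */
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->MPEG4 = (struct tt__Mpeg4Configuration *)soap_malloc(soap, sizeof(struct tt__Mpeg4Configuration));
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->MPEG4->GovLength = 30; /* GOP length */
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->MPEG4->Mpeg4Profile = 0; /* 0 = SP, 1 = ASP */
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->H264 = NULL;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->Multicast = NULL;
trt__GetProfilesResponse->Profiles[i].VideoEncoderConfiguration->SessionTimeout = 0; /* xsd:duration; its C type depends on the typemap */
/* Unused members of the Profile itself should likewise be NULLed:
   AudioSourceConfiguration, AudioEncoderConfiguration,
   VideoAnalyticsConfiguration, PTZConfiguration,
   MetadataConfiguration, Extension */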

5. GetVideoSourceConfiguration and GetVideoEncoderConfiguration

int __trt__GetVideoSourceConfiguration(struct soap* soap, struct _trt__GetVideoSourceConfiguration *trt__GetVideoSourceConfiguration, struct _trt__GetVideoSourceConfigurationResponse *trt__GetVideoSourceConfigurationResponse)
{
    DBG("__trt__GetVideoSourceConfiguration\n");
    //Required: the live video view depends on this handler
    return SOAP_OK;
}

int __trt__GetVideoEncoderConfiguration(struct soap* soap, struct _trt__GetVideoEncoderConfiguration *trt__GetVideoEncoderConfiguration, struct _trt__GetVideoEncoderConfigurationResponse *trt__GetVideoEncoderConfigurationResponse)
{
    DBG("__trt__GetVideoEncoderConfiguration\n");
    return SOAP_OK;
}

6. GetVideoEncoderConfigurationOptions

int __trt__GetVideoEncoderConfigurationOptions(struct soap* soap, struct _trt__GetVideoEncoderConfigurationOptions *trt__GetVideoEncoderConfigurationOptions, struct _trt__GetVideoEncoderConfigurationOptionsResponse *trt__GetVideoEncoderConfigurationOptionsResponse)
{
    DBG("__trt__GetVideoEncoderConfigurationOptions\n");
    //Required: video streaming depends on this handler
    return SOAP_OK;
}

The handlers in sections 5 and 6 can simply return SOAP_OK. Strictly speaking their responses should be filled in, but leaving them empty does not affect the RTSP video stream, so they are left alone for now.
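
GetStreamUri, although on the required list at the top of this post, is not shown above; it is the handler that actually hands the RTSP URL to the client. A minimal sketch, assuming the fixed live555 test URL used throughout this post and reusing the article's DBG, LARGE_INFO_LENGTH and _false conventions:

int __trt__GetStreamUri(struct soap* soap, struct _trt__GetStreamUri *trt__GetStreamUri, struct _trt__GetStreamUriResponse *trt__GetStreamUriResponse)
{
    DBG("__trt__GetStreamUri\n");
    /* A real device would select the URI according to the requested
       ProfileToken; here the live555 test URL is returned unconditionally. */
    trt__GetStreamUriResponse->MediaUri = (struct tt__MediaUri *)soap_malloc(soap, sizeof(struct tt__MediaUri));
    memset(trt__GetStreamUriResponse->MediaUri, 0, sizeof(struct tt__MediaUri));
    trt__GetStreamUriResponse->MediaUri->Uri = (char *)soap_malloc(soap, sizeof(char) * LARGE_INFO_LENGTH);
    strcpy(trt__GetStreamUriResponse->MediaUri->Uri, "rtsp://192.168.1.201/petrov.m4e");
    trt__GetStreamUriResponse->MediaUri->InvalidAfterConnect = _false;
    trt__GetStreamUriResponse->MediaUri->InvalidAfterReboot = _false;
    /* Timeout is an xsd:duration; its C type depends on the typemap
       mapping, so it is left zeroed by the memset above. */
    return SOAP_OK;
}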

Part 4. Running the live555MediaServer

The live555 site provides a number of test files. Here I use an MPEG4 test file, whose path is rtsp://192.168.1.201/petrov.m4e.
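
live555MediaServer streams any supported file it finds in its working directory, so assuming the test file has been downloaded next to the binary, starting the server is simply:

./live555MediaServer

live555 binds RTSP port 554 when it has permission to do so and falls back to 8554 otherwise; the URL above assumes port 554.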

Part 5. Testing with Onvif Device Manager

One catch: Onvif Device Manager cannot discover the device automatically (the ONVIF Test Tool can), but fortunately it supports adding a device manually.

Click Add and enter: http://192.168.1.233/onvif/device_service

Note that two IPs are hardcoded in the program: Linux 192.168.1.233 and Windows 192.168.1.201. Adjust them to your own environment.

Test screenshots:

1. Live video

2. Video streaming

3. Profiles

4. Finally, the running live555 RTSP server

DEBUG output printed in the terminal

Source code download: http://download.csdn.net/detail/ghostyu/4796093

http://blog.csdn.net/ghostyu/article/details/8208428

http://blog.chinaunix.net/uid-23381466-id-3799058.html
