Real-Time Human Detection in Edge Video Streams (Part 4)

V. Experimental Results (★)

We have tested the feasibility of the proposed human object detection and tracking scheme by processing video streams on edge computing devices. The experimental setup and results are discussed in this section.
A. Experimental Setup
A proof-of-concept prototype of the system, consisting of the edge computing and fog computing layers, has been built to validate our proposed scheme.
The edge computing node is a Raspberry Pi 3 Model B with the following configuration: a 1.2 GHz 64-bit quad-core ARMv8 CPU, 1 GB of LPDDR2-900 SDRAM, and the Raspbian operating system based on the Linux kernel. The fog computing layer functions are implemented on a laptop with the following configuration: a 2.3 GHz Intel Core i7 processor, 16 GB of RAM, and the Ubuntu 16.04 operating system.
The software application for human detection and tracking is implemented in C++ and Python using the OpenCV library (version 3.2.0) [23].
B. Experimental Results
This section presents the experimental results on human object detection, object tracking, and multi-tracker lifetime handling, such as objects phasing in and out of the frame and re-tracking after a tracked object is lost.
1) Human detection: Figure 3 is an example of human object detection results. As extracting HOG features and classifying human/non-human objects is computationally expensive, we reduced the workload of the edge device by using a lower detection frequency: the detection frequency parameter makes the human detection algorithm execute only at a fixed frame interval, which improves performance. In our application, the human detection frequency is once every five frames.
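A minimal sketch of this frame-skipping scheme, assuming OpenCV's stock HOG+SVM pedestrian detector and a hypothetical input file name:

```python
import cv2

DETECT_INTERVAL = 5  # run the expensive HOG+SVM detector once every 5 frames

# Pedestrian detector shipped with OpenCV (HOG features + pre-trained linear SVM)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture("pedestrians.mp4")  # hypothetical video stream source
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % DETECT_INTERVAL == 0:
        # Detection runs only on every fifth frame to reduce CPU load;
        # the trackers (see Section B.2) keep following targets in between.
        rects, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    frame_idx += 1
cap.release()
```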

2) Object tracking: Figure 4 shows an example of the multi-object tracking results. A multi-tracker object queue is designed to manage tracker lifetimes. Once the human object detection processing is done, the tracker filter compares the detected humans against the multi-tracker object queue to rule out duplicated trackers. Newly detected human objects are then initialized as KCF trackers and appended to the multi-tracker object queue. During execution, each tracker runs the KCF tracking algorithm independently on its target region, processing the video stream frame by frame, until the object phases out or the tracker loses the object in the scene.
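The paper does not spell out how the tracker filter decides that a detection duplicates an existing tracker; the sketch below assumes a simple intersection-over-union (IoU) test with an illustrative threshold of 0.3, and uses the cv2.TrackerKCF_create API from the opencv-contrib package:

```python
import cv2

def iou(a, b):
    # Intersection-over-union of two (x, y, w, h) boxes.
    ax0, ay0, ax1, ay1 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx0, by0, bx1, by1 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def filter_and_append(detections, trackers, frame, iou_thresh=0.3):
    # Rule out detections that duplicate an existing tracker, then wrap
    # the remaining ones in new KCF trackers and append them to the queue.
    for det in detections:
        box = tuple(int(v) for v in det)
        if any(iou(box, t["bbox"]) > iou_thresh for t in trackers):
            continue  # this person is already being tracked
        kcf = cv2.TrackerKCF_create()  # requires opencv-contrib
        kcf.init(frame, box)
        trackers.append({"tracker": kcf, "bbox": box,
                         "active": True, "history": [box]})
```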

3) Object tracker phase in & out: The boundary region is defined to handle scenarios in which moving objects step into or out of the frame. All detected human objects within the boundary region are considered tracking targets and are added to the multi-tracker queue. In step-out scenarios, tracked objects that move out of the boundary region are deleted and the corresponding trackers are marked inactive. After each frame is processed, the inactive trackers are removed from the multi-tracker object queue so that computing resources are freed for future tasks, and the movement history is exported to a tracking history log for further analysis. Figure 5 presents an example of the object tracker phase in & out results.
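A minimal sketch of this boundary check and pruning step, reusing the tracker-entry layout from the previous sketch; the 20-pixel margin is an illustrative value, not one specified in the paper:

```python
BORDER = 20  # boundary margin in pixels (illustrative; not given in the paper)

def in_boundary(bbox, frame_w, frame_h, border=BORDER):
    # True if the box lies fully inside the inner boundary region of the frame.
    x, y, w, h = bbox
    return (x > border and y > border and
            x + w < frame_w - border and y + h < frame_h - border)

def prune_trackers(trackers, frame_w, frame_h, history_log):
    # Mark trackers whose targets stepped out of the boundary region as
    # inactive, export their movement history, and drop them from the queue.
    for t in trackers:
        if not in_boundary(t["bbox"], frame_w, frame_h):
            t["active"] = False
    for t in trackers:
        if not t["active"]:
            history_log.append(t["history"])  # exported for further analysis
    trackers[:] = [t for t in trackers if t["active"]]
```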

4) Tracking object lost: Because of occlusion between the background environment and the tracked objects, as well as changes in color appearance and illumination, the tracker may lose the objects of interest. It is necessary to handle such scenarios so that failed trackers can be cleared from the multi-tracker queue and the lost objects can be re-detected and re-tracked as new objects of interest. Figure 6 shows a scenario where the tracker loses the object (the man in a black jacket on the left, marked as object #6) when he walks behind the blue signboard. The detection algorithm then identifies this person as a new object and assigns him to a new tracker (marked as object #15).
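A sketch of the per-frame update that feeds this recovery path, under the same assumed data layout as above: a KCF tracker whose update fails is marked inactive, the pruning step clears it from the queue, and the periodic detector later re-acquires the person as a new object.

```python
def update_trackers(trackers, frame):
    # Advance every KCF tracker by one frame. A tracker whose update fails
    # (e.g. its target is occluded) is marked inactive so that
    # prune_trackers() removes it and the detector can re-track the person.
    for t in trackers:
        ok, bbox = t["tracker"].update(frame)
        if ok:
            t["bbox"] = tuple(int(v) for v in bbox)
            t["history"].append(t["bbox"])
        else:
            t["active"] = False  # lost object: cleared, later re-detected as new
```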

C. Discussions
The algorithms are implemented and tested on both the edge and fog devices. The edge computing node is a Raspberry Pi 3 Model B and the fog computing layer functions are implemented on a laptop; their configurations are described above in the experimental setup subsection.
Detection accuracy varies with the video input resolution and the angle at which the camera is placed. The algorithm works best when detecting a human head-on, at an angle of zero degrees; in practice, when the camera is mounted on the ceiling, it is harder to detect human objects. The distance from the camera is another factor that affects the detection rate. We resized the frames so that different cameras yield frames of the same size; the camera angle, however, cannot be fixed.
Figure 7 shows that, taking the Raspberry Pi as the edge computing device, the experiment achieved a processing speed of 12.2 frames per second using 78% to 100% of the CPU and 90 to 120 MB of RAM, corresponding to the number of human objects detected in a frame. On the fog device, the laptop, the same processing performance on the same video streams was achieved using 67% to 81% of the CPU and 130 to 150 MB of RAM.
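For reference, a minimal sketch of how such throughput and resource figures can be collected around the per-frame pipeline; the psutil package is an assumption here, not something the paper mentions:

```python
import time
import psutil  # third-party profiling library; assumed, not part of the paper's code

def profile(cap, process_frame):
    # Measure FPS, average CPU usage, and resident memory of the pipeline.
    proc = psutil.Process()
    proc.cpu_percent()              # prime the CPU counter
    frames, t0 = 0, time.time()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        process_frame(frame)        # detection + tracking for one frame
        frames += 1
    elapsed = time.time() - t0
    print("FPS: %.1f" % (frames / elapsed))
    print("CPU: %.0f%%" % proc.cpu_percent())              # average since priming
    print("RAM: %.0f MB" % (proc.memory_info().rss / 2**20))
```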

VI. Conclusions

In this paper, a smart surveillance architecture is proposed that leverages the advantages of the edge computing and fog computing paradigms to achieve on-site, real-time video processing for online human object detection and tracking. A proof-of-concept testbed has been constructed in which a laptop serves as the fog computing node and a Raspberry Pi serves as the edge computing node. HOG+SVM-based human detection and KCF-based object tracking algorithms are implemented on the edge and fog nodes. Using real-world pedestrian surveillance video streams, the experimental study has verified that the three-layered smart surveillance architecture is a promising solution for delay-sensitive, mission-critical tasks in many real-time surveillance applications, such as safety monitoring in Smart Cities, situational awareness on a battlefield, and smart environmental criminology. Our ongoing efforts include the following tasks:
1) The adopted detection methods are still too computationally expensive to achieve satisfactory performance on edge devices. We are exploring lightweight algorithms that are well balanced among complexity, resource consumption, and detection rate.
2) As the example in Fig. 6 illustrates, once a tracked object is lost and re-detected, the information for the same object or person is saved in a new queue entry, which is not desired. A more efficient and precise approach is needed, one that re-establishes the association and continues the existing tracking task instead of initiating a new tracker.
3) We are also investigating features that enable higher-layer functions such as behavior identification, which is particularly essential for identifying and raising alarms against certain dangerous or malicious activities before damage is caused.

References

