Visionworks OpenVX

[TOC]

OpenVX

A heterogeneous computation framework for computer vision, standardized by Khronos.

Spec

OpenVX 1.2 source-code walkthrough — directory structure

Besides the official reference implementation, the following are implementations from different vendors; some are open source, while others ship only as packaged dynamic libraries.

  1. Intel Computer Vision SDK
  2. AMD OVX: https://github.com/GPUOpen-ProfessionalCompute-Libraries/amdovx-core
  3. TI OVX:
  4. Nvidia Vision Works:

The vendors above have passed the conformance test. In addition, ARM has a similar SDK (the Compute Library), and its early development also referenced the OpenVX architecture.

  1. ARM compute library:

Although OpenVX was originally designed as a software framework for computer-vision workloads, the programming model and popularity of neural networks led the Khronos OpenVX working group to define the Neural Network Extension, which brings OpenVX into the deep-learning arena as well.

VisionWorks

The NVIDIA VisionWorks toolkit is a software development package for computer vision (CV) and image processing. VisionWorks implements and extends the Khronos OpenVX standard, and it is optimized for CUDA-capable GPUs and SoCs, enabling developers to realize CV applications on a scalable and flexible platform.

VisionWorks includes the following primitives:

IMAGE ARITHMETIC

  • Absolute Difference
  • Accumulate Image
  • Accumulate Squared
  • Accumulate Weighted
  • Add / Subtract / Multiply +
  • Channel Combine
  • Channel Extract
  • Color Convert +
  • CopyImage
  • Convert Depth
  • Magnitude
  • MultiplyByScalar
  • Not / Or / And / Xor
  • Phase
  • Table Lookup
  • Threshold

FLOW & DEPTH

  • Median Flow
  • Optical Flow (LK) +
  • Semi-Global Matching
  • Stereo Block Matching
  • IME Create Motion Field
  • IME Refine Motion Field
  • IME Partition Motion Field

GEOMETRIC TRANSFORMS

  • Affine Warp +
  • Warp Perspective +
  • Flip Image
  • Remap
  • Scale Image +

FILTERS

  • BoxFilter
  • Convolution
  • Dilation Filter
  • Erosion Filter
  • Gaussian Filter
  • Gaussian Pyramid
  • Laplacian3x3
  • Median Filter
  • Scharr3x3
  • Sobel 3x3

FEATURES

  • Canny Edge Detector
  • FAST Corners +
  • FAST Track +
  • Harris Corners +
  • Harris Track
  • Hough Circles
  • Hough Lines

ANALYSIS

  • Histogram
  • Histogram Equalization
  • Integral Image
  • Mean Std Deviation
  • Min Max Locations

OpenVX for us

Requirements

  • Support user defined processing
  • Support optimization of duplicate processing
  • Open source framework (if available)

User defined processing

Yes: implement a user node and base it on the Advanced Tiling Extensions (see the Intel Extensions to the OpenVX* API: Advanced Tiling chapter).
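
As a rough sketch (not from this document): before the tiling extension comes into play, a user node is registered through the standard OpenVX user-kernel API. Everything below — the kernel name "com.example.my_kernel", the enum offset, and the my_kernel_func/my_validator callbacks — is a hypothetical example for illustration only.

#include <VX/vx.h>

/* Hypothetical kernel body: called once per graph execution. */
static vx_status VX_CALLBACK my_kernel_func(vx_node node, const vx_reference *parameters, vx_uint32 num)
{
    vx_image in  = (vx_image)parameters[0];
    vx_image out = (vx_image)parameters[1];
    /* ... process 'in' into 'out' ... */
    return VX_SUCCESS;
}

/* Hypothetical validator: would propagate output meta-data via vxSetMetaFormatAttribute on metas[1]. */
static vx_status VX_CALLBACK my_validator(vx_node node, const vx_reference parameters[], vx_uint32 num, vx_meta_format metas[])
{
    return VX_SUCCESS;
}

vx_kernel register_my_kernel(vx_context context)
{
    vx_kernel k = vxAddUserKernel(context, "com.example.my_kernel",
                                  VX_KERNEL_BASE(VX_ID_DEFAULT, 0) + 0x0,
                                  my_kernel_func, 2, my_validator, NULL, NULL);
    vxAddParameterToKernel(k, 0, VX_INPUT,  VX_TYPE_IMAGE, VX_PARAMETER_STATE_REQUIRED);
    vxAddParameterToKernel(k, 1, VX_OUTPUT, VX_TYPE_IMAGE, VX_PARAMETER_STATE_REQUIRED);
    vxFinalizeKernel(k);
    return k;
}

The Advanced Tiling extension essentially layers tile-level callbacks and attributes on top of this kind of registration, so the runtime can feed the kernel image tiles instead of whole frames.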

Support optimization of duplicate processing

ref:

optimization tips

  • Use virtual images whenever possible, as this unlocks many graph compiler optimizations.
  • Whenever possible, prefer standard nodes and/or extensions over user kernel nodes (which serve as memory and execution barriers, hindering performance). This gives the Pipeline Manager much more flexibility to optimize the graph execution.
  • If you still need to implement a user node, base it on the Advanced Tiling Extensions (see the Intel Extensions to the OpenVX* API: Advanced Tiling chapter)
  • If the application has independent graphs, run these graphs in parallel using the vxScheduleGraph API call (see the sketch after this list).
  • Provide enough parallel slack to the scheduler: do not break work (for example, images) into too many tiny pieces. Consider kernel fusion.
  • For images, use smallest data type that fits the application accuracy needs (for example, 32->16->8 bits).
  • Consider heterogeneous execution (see the Heterogeneous Computing with OpenVINO™ toolkit chapter).
  • You can create an OpenVX image object that references externally allocated memory (vxCreateImageFromHandle). To enable zero-copy with the GPU, the externally allocated memory should be aligned. For more details, refer to https://software.intel.com/en-us/node/540453.
  • Beware of the (often prohibitive) vxVerifyGraph latency costs. For example, construct the graph in a way that does not require re-verification when parameters are updated. Notice that unlike Map/Unmap for the input images (see the Map/Unmap for OpenVX* Images section), setting new images with different meta-data (size, type, etc.) almost certainly triggers the verification, potentially adding significant overhead.
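
A minimal sketch of the vxScheduleGraph tip above (an assumption of typical usage, not code from any of the cited documents): launch two already-verified, independent graphs asynchronously and wait for both.

/* Sketch: run two independent, verified graphs concurrently. */
vx_status run_graphs_in_parallel(vx_graph graph_a, vx_graph graph_b)
{
    vx_status sa = vxScheduleGraph(graph_a);           /* launch asynchronously */
    vx_status sb = vxScheduleGraph(graph_b);
    if (sa == VX_SUCCESS) sa = vxWaitGraph(graph_a);   /* block until each graph finishes */
    if (sb == VX_SUCCESS) sb = vxWaitGraph(graph_b);
    return (sa == VX_SUCCESS && sb == VX_SUCCESS) ? VX_SUCCESS : VX_FAILURE;
}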

Open source framework (if available)

OpenVINO

Requirements

Software Requirements

A Windows build environment needs these components:

Get the Software

Your license includes the full version of the product. To access the toolkit:

  1. Make sure your system meets the minimum requirements listed on this page.
  2. Complete the registration form.
  3. Download the product.

Register & Download

AMD OpenVX

Features

  • The code is highly optimized for both x86 CPU and OpenCL for GPU
  • Supported hardware spans the range from low power embedded APUs (like the new G series) to laptop, desktop and workstation graphics
  • Supports Windows, Linux, and OS X
  • Includes a “graph optimizer” that looks at the entire processing pipeline and removes/replaces/merges functions to improve performance and minimize bandwidth at runtime
  • Scripting support allows for rapid prototyping, without re-compiling at production performance levels

Pre-requisites

  • CPU: 64-bit CPU with SSE4.1 or above.
  • GPU: Radeon Professional Graphics Cards or Vega Family of Products (16GB required for vx_loomsl and vx_nn libraries)
    • Windows: install the latest drivers and OpenCL SDK download
    • Linux: install ROCm
  • OpenCV 3 (optional): download; used by RunVX
    • Set the OpenCV_DIR environment variable to the OpenCV/build folder

Build Instructions

Build this project to generate AMD OpenVX library and RunVX executable.

Build using Visual Studio Professional 2013 on 64-bit Windows 10/8.1/7

  • Install OpenCV 3 with contrib (download) for the RunVX tool to support camera capture and image display (optional)
  • The OpenCV_DIR environment variable should point to the OpenCV/build folder
  • Use amdovx-core/amdovx.sln to build for x64 platform
  • If AMD GPU (or OpenCL) is not available, set build flag ENABLE_OPENCL=0 in openvx/openvx.vcxproj and runvx/runvx.vcxproj.

Test

Download to C:\Users\aeejshe\Downloads

  • C:\Users\aeejshe\Downloads\amdovx-core-0.9-beta2
  • C:\Users\aeejshe\Downloads\opencv

Build the software according to the build instructions above; in particular:

  • set ENABLE_OPENCL=0
  • point the OpenCV library to C:\Users\aeejshe\Downloads\opencv\build\x64\vc12\lib\opencv_world310d.lib

Demo

C:\Users\aeejshe\Downloads\amdovx-core-0.9-beta2\amdovx-core-0.9-beta2>runvx examples\gdf\canny.gdf

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

runvx.exe 0.9.7
OK: using AMD OpenVX 0.9.7
OK: enabled graph scheduling in separate threads
csv,HEADER ,STATUS, COUNT,cur-ms,avg-ms,min-ms,clenqueue-ms,clwait-ms,clwrite-ms,clread-ms
OK: capturing 480x360 image(s) into 480x360 RGB image buffer
csv,OVERALL,  PASS,     1,      ,  8.60,  8.60,  0.00,  0.00,  0.00,  0.00 (median 8.598)
> total elapsed time:   0.11 sec
Abort: Press any key to exit...

canny.gdf

# create input and output images
data input  = image:480,360,RGB2
data output = image:480,360,U008

# specify input source for input image and request for displaying input and output images
read input  examples/images/face1.jpg
view input  inputWindow
view output edgesWindow

# compute luma image channel from input RGB image
data yuv  = image-virtual:0,0,IYUV
data luma = image-virtual:0,0,U008
node org.khronos.openvx.color_convert input yuv
node org.khronos.openvx.channel_extract yuv !CHANNEL_Y luma

# compute edges in luma image using Canny edge detector
data hyst = threshold:RANGE,UINT8:INIT,80,100
data gradient_size = scalar:INT32,3
node org.khronos.openvx.canny_edge_detector luma hyst gradient_size !NORM_L1 output

graph TB
input --> |color_convert| yuv
yuv --> |channel_extract| luma
luma --> |merge| merged
hyst --> merged
gradient_size --> merged
merged --> |canny_edge_detector| output

runvx

usage

C:\Users\aeejshe\Downloads\amdovx-core-0.9-beta2\amdovx-core-0.9-beta2>runvx

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

runvx.exe 0.9.7

Usage:
  runvx.exe [options] [file] <file.gdf> [argument(s)]
  runvx.exe [options] node <kernelName> [argument(s)]
  runvx.exe [options] shell [argument(s)]

The argument(s) are data objects created using <data-description> syntax.
These arguments can be accessed from inside GDF as $1, $2, etc.

The available command-line options are:
  -h
      Show full help.
  -v
      Turn on verbose logs.
  -root:<directory>
      Replace ~ in filenames with <directory> in the command-line and
      GDF file. The default value of '~' is current working directory.
  -frames:[<start>:]<end>|eof|live
      Run the graph/node for specified frames or until eof or just as live.
      Use live to indicate that input is live until aborted by user.
  -affinity:CPU|GPU[<device-index>]
      Set context affinity to CPU or GPU.
  -dump-profile
      Print performance profiling information after graph launch.
  -enable-profile
      use directive VX_DIRECTIVE_AMD_ENABLE_PROFILE_CAPTURE when graph is created
  -discard-compare-errors
      Continue graph processing even if compare mismatches occur.
  -disable-virtual
      Replace all virtual data types in GDF with non-virtual data types.
      Use of this flag (i.e. for debugging) can make a graph run slower.

dump profile

C:\Users\aeejshe\Downloads\amdovx-core-0.9-beta2\amdovx-core-0.9-beta2>runvx -dump-profile examples\gdf\canny.gdf

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

runvx.exe 0.9.7
OK: using AMD OpenVX 0.9.7
OK: enabled graph scheduling in separate threads
csv,HEADER ,STATUS, COUNT,cur-ms,avg-ms,min-ms,clenqueue-ms,clwait-ms,clwrite-ms,clread-ms
OK: capturing 480x360 image(s) into 480x360 RGB image buffer
csv,OVERALL,  PASS,     1,      ,  8.62,  8.62,  0.00,  0.00,  0.00,  0.00 (median 8.621)
> total elapsed time:   0.07 sec
> graph profile:
 COUNT,tmp(ms),avg(ms),min(ms),max(ms),DEV,KERNEL
     1,  8.621,  8.621,  8.621,  8.621,CPU,GRAPH
     1,  1.196,  1.196,  1.196,  1.196,CPU,com.amd.openvx.ColorConvert_Y_RGB
     1,  4.905,  4.905,  4.905,  4.905,CPU,com.amd.openvx.CannySobel_U16_U8_3x3_L1NORM
     1,  2.305,  2.305,  2.305,  2.305,CPU,com.amd.openvx.CannySuppThreshold_U8XY_U16_3x3
     1,  0.208,  0.208,  0.208,  0.208,CPU,com.amd.openvx.CannyEdgeTrace_U8_U8XY
Abort: Press any key to exit...

Test whether CSE (common-subexpression elimination) works

input

# create input and output images
data input  = image:480,360,RGB2
data output = image:480,360,U008
data output2 = image:480,360,U008

# specify input source for input image and request for displaying input and output images
read input  examples/images/face1.jpg
view input  inputWindow
view output edgesWindow

# compute luma image channel from input RGB image
data yuv  = image-virtual:0,0,IYUV
data yuv2  = image-virtual:0,0,IYUV
data luma = image-virtual:0,0,U008
data luma2 = image-virtual:0,0,U008
node org.khronos.openvx.color_convert input yuv
node org.khronos.openvx.color_convert input yuv2
node org.khronos.openvx.channel_extract yuv !CHANNEL_Y luma
node org.khronos.openvx.channel_extract yuv2 !CHANNEL_Y luma2

# compute edges in luma image using Canny edge detector
data hyst = threshold:RANGE,UINT8:INIT,80,100
data gradient_size = scalar:INT32,3
node org.khronos.openvx.canny_edge_detector luma hyst gradient_size !NORM_L1 output
node org.khronos.openvx.canny_edge_detector luma2 hyst gradient_size !NORM_L1 output2

Output

C:\Users\aeejshe\Downloads\amdovx-core-0.9-beta2\amdovx-core-0.9-beta2>runvx -dump-profile examples\gdf\canny.gdf

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

runvx.exe 0.9.7
OK: using AMD OpenVX 0.9.7
OK: enabled graph scheduling in separate threads
csv,HEADER ,STATUS, COUNT,cur-ms,avg-ms,min-ms,clenqueue-ms,clwait-ms,clwrite-ms,clread-ms
OK: capturing 480x360 image(s) into 480x360 RGB image buffer
csv,OVERALL,  PASS,     1,      , 17.13, 17.13,  0.00,  0.00,  0.00,  0.00 (median 17.127)
> total elapsed time:   0.07 sec
> graph profile:
 COUNT,tmp(ms),avg(ms),min(ms),max(ms),DEV,KERNEL
     1, 17.127, 17.127, 17.127, 17.127,CPU,GRAPH
     1,  1.202,  1.202,  1.202,  1.202,CPU,com.amd.openvx.ColorConvert_Y_RGB
     1,  1.192,  1.192,  1.192,  1.192,CPU,com.amd.openvx.ColorConvert_Y_RGB
     1,  4.857,  4.857,  4.857,  4.857,CPU,com.amd.openvx.CannySobel_U16_U8_3x3_L1NORM
     1,  4.838,  4.838,  4.838,  4.838,CPU,com.amd.openvx.CannySobel_U16_U8_3x3_L1NORM
     1,  2.312,  2.312,  2.312,  2.312,CPU,com.amd.openvx.CannySuppThreshold_U8XY_U16_3x3
     1,  2.302,  2.302,  2.302,  2.302,CPU,com.amd.openvx.CannySuppThreshold_U8XY_U16_3x3
     1,  0.209,  0.209,  0.209,  0.209,CPU,com.amd.openvx.CannyEdgeTrace_U8_U8XY
     1,  0.207,  0.207,  0.207,  0.207,CPU,com.amd.openvx.CannyEdgeTrace_U8_U8XY
Abort: Press any key to exit...

Q: Why does CSE not work?

TODO:

API

//vx_api.h
VX_API_ENTRY vx_graph VX_API_CALL vxCreateGraph(vx_context context);
VX_API_ENTRY vx_status VX_API_CALL vxVerifyGraph(vx_graph graph);
VX_API_ENTRY vx_status VX_API_CALL vxProcessGraph(vx_graph graph);
VX_API_ENTRY vx_image VX_API_CALL vxCreateVirtualImage(vx_graph graph, vx_uint32 width, vx_uint32 height, vx_df_image color);

//vx_node.h
VX_API_ENTRY vx_node VX_API_CALL vxColorConvertNode(vx_graph graph, vx_image input, vx_image output);
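
For orientation, a hedged sketch of how these calls fit together in a minimal graph (the 480x360 RGB-to-luma pipeline mirrors the canny.gdf example above; error checking is omitted and nothing here is specific to the AMD implementation):

vx_context context = vxCreateContext();
vx_graph   graph   = vxCreateGraph(context);

vx_image input = vxCreateImage(context, 480, 360, VX_DF_IMAGE_RGB);
vx_image yuv   = vxCreateVirtualImage(graph, 0, 0, VX_DF_IMAGE_IYUV);
vx_image luma  = vxCreateImage(context, 480, 360, VX_DF_IMAGE_U8);

vxColorConvertNode(graph, input, yuv);
vxChannelExtractNode(graph, yuv, VX_CHANNEL_Y, luma);

if (vxVerifyGraph(graph) == VX_SUCCESS)   /* verify once ...                         */
    vxProcessGraph(graph);                /* ... then execute (possibly many times)  */

vxReleaseContext(&context);               /* also releases objects created from it   */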

OpenCV G-API

Intro

G-API Intro

Features

API

//core.hpp
GAPI_EXPORTS GMat resize(const GMat& src, const Size& dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR);

//GComputation.hpp
class GComputation {
    ...
    GComputation(GProtoInputArgs &&ins,
                 GProtoOutputArgs &&outs);             // Arg-to-arg overload
    void apply(GRunArgs &&ins, GRunArgsP &&outs, GCompileArgs &&args = {});
    ...
};
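
A hedged usage sketch in the usual G-API style (the BGR2Gray/resize pipeline and the file name are placeholders, not taken from this document): describe the expression graph once with GComputation, then apply it to concrete Mats.

#include <opencv2/gapi.hpp>
#include <opencv2/gapi/core.hpp>     // cv::gapi::resize
#include <opencv2/gapi/imgproc.hpp>  // cv::gapi::BGR2Gray
#include <opencv2/imgcodecs.hpp>

int main() {
    // Describe the graph once with empty GMat placeholders.
    cv::GMat in;
    cv::GMat gray    = cv::gapi::BGR2Gray(in);
    cv::GMat resized = cv::gapi::resize(gray, cv::Size(480, 360));
    cv::GComputation comp(cv::GIn(in), cv::GOut(resized));

    // Run it on concrete data; the first apply() compiles and caches the graph.
    cv::Mat input = cv::imread("face1.jpg"), output;
    comp.apply(cv::gin(input), cv::gout(output));
    return 0;
}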

Implementation of the G-API apply function

GComputation -> GComputation2: apply
GComputation2 -> GCompiler: compile
GCompiler -> Graph: build graph
Graph --> GComputation2: return ade::Graph
GComputation2 -> Graph: exec the graph
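
The same flow can be made explicit: compile first (this is where GCompiler builds the ade::Graph), then execute the returned object repeatedly. A hedged sketch, reusing comp/input/output from the G-API sketch above:

cv::GCompiled compiled = comp.compile(cv::descr_of(input));  // build the graph once
compiled(cv::gin(input), cv::gout(output));                  // run the prebuilt graph (can be called again)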

ref:

Vision grab post processing

Study whether OpenVINO or OpenCV supports:

  • CSE (common-subexpression elimination)
  • feeding partial inputs

| Lib          | CSE | partial inputs |
| ------------ | --- | -------------- |
| OpenVINO     | x   | x              |
| AMDOVX       | x   | x              |
| OpenCV G-API | x   | x              |
| Intel TBB    | x   | v              |
| Tensorflow   |     | v              |

Intel TBB behavior with partial inputs: the ready nodes are called, then the wait returns (see the sketch after this table).
Code: C:\jshe\codes\lua\src\tbbtest\test_tbb_behavior.cpp
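
A minimal sketch of the Intel TBB behavior noted above (an assumption modeled on that note, not the actual test_tbb_behavior.cpp): feed only one branch of a flow graph and wait; only the nodes that became ready run, then wait_for_all() returns.

#include <tbb/flow_graph.h>
#include <iostream>

int main() {
    tbb::flow::graph g;
    tbb::flow::function_node<int, int> a(g, tbb::flow::unlimited,
        [](int v) { std::cout << "a ran\n"; return v + 1; });
    tbb::flow::function_node<int, int> b(g, tbb::flow::unlimited,
        [](int v) { std::cout << "b ran\n"; return v * 2; });
    tbb::flow::function_node<int, int> c(g, tbb::flow::unlimited,   // independent node, never fed
        [](int v) { std::cout << "c ran\n"; return v; });
    tbb::flow::make_edge(a, b);

    a.try_put(1);       // feed only the a -> b branch
    g.wait_for_all();   // a and b run, c stays idle, then the call returns
    return 0;
}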

TODO

Test whether the graph can be called multiple times, like the following:

while true
    modify input
    vxProcessGraph()
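
A hedged sketch of that loop using the standard OpenVX 1.1+ map/unmap API (graph and input are assumed to be created and verified as in the API section above; keep_running and fill_frame are hypothetical placeholders):

vx_rectangle_t rect = { 0, 0, 480, 360 };
while (keep_running) {
    vx_map_id map_id;
    vx_imagepatch_addressing_t addr;
    void *ptr = NULL;

    /* update the input image in place */
    vxMapImagePatch(input, &rect, 0, &map_id, &addr, &ptr, VX_WRITE_ONLY, VX_MEMORY_TYPE_HOST, 0);
    fill_frame(ptr, &addr);            /* hypothetical: copy the next frame into the mapped patch */
    vxUnmapImagePatch(input, map_id);

    vxProcessGraph(graph);             /* re-run the already-verified graph */
}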

ref: http://projects.eees.dei.unibo.it/adrenaline/tutorial-02-execute-openvx-examples/

OpenVX reading notes

summary

| Lib   | high level     | low level    |
| ----- | -------------- | ------------ |
| ovx   | strongly typed | weakly typed |
| tbb   | strongly typed | not sure     |
| G-API | strongly typed | TODO         |

OpenVX high level (strongly typed), e.g.:

VX_API_ENTRY vx_node VX_API_CALL vxColorConvertNode(vx_graph graph, vx_image input, vx_image output);

OpenVX low level (weakly typed), e.g.:

OpenVX.dll!agoCreateNode(_vx_graph * graph, int kernel_id)

TBB flow graph (strongly typed), e.g.:

make_edge(tbb::flow::output_port<1>(gpu_slm_split_n), tbb::flow::input_port<1>(gpu_slm_mat_mult_n));

tbb::flow::function_node<validation_args_type> mat_validation_n(g, tbb::flow::unlimited, [](const validation_args_type& result) {
    // Get references to the matrices
    const tbb::flow::gfx_buffer& GPU_SLM_MAT   = std::get<0>(result);
    const tbb::flow::gfx_buffer& CPU_SLM_MAT   = std::get<1>(result);
    const tbb::flow::gfx_buffer& CPU_NAIVE_MAT = std::get<2>(result);

    // Verify results
    // Check that the SLM algorithm produces correct results on CPU:
    validate_mat("matrix multiply: 'SLM' CPU vs. CPU", SIZE_Y, SIZE_X, CPU_SLM_MAT.data(), CPU_NAIVE_MAT.data());
    // Verify Gen results:
    validate_mat("matrix multiply: SLM Gen vs. CPU", SIZE_Y, SIZE_X, GPU_SLM_MAT.data(), CPU_NAIVE_MAT.data());
});

// ovx: \vis_bep_12\C\Users\aeejshe\Downloads\amdovx-core-0.9-beta2\amdovx-core-0.9-beta2
// tbb: C:\Users\aeejshe\Downloads\tbb2017_20170604oss_win\tbb2017_20170604oss

How to register a kernel

Define an enum

VX_KERNEL_COLOR_CONVERT = VX_KERNEL_BASE(VX_ID_KHRONOS, VX_LIBRARY_KHR_BASE) + 0x1,
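
For context, VX_KERNEL_BASE comes from the standard vx_types.h. Roughly (the exact definition may differ slightly between spec versions), it packs the vendor ID and library ID into the upper bits of the enum, leaving the low 12 bits for the per-library kernel index:

/* approximate standard definition, shown for orientation only */
#define VX_KERNEL_BASE(vendor, lib)  (((vendor) << 20) | ((lib) << 12))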

Registration

OVX_KERNEL_ENTRY( VX_KERNEL_COLOR_CONVERT         , ColorConvert, "color_convert",             AIN_AOUT,             ATYPE_II           , false ),

The meaning of the parameters:

#define OVX_KERNEL_ENTRY(kernel_id,name,kname,argCfg,argType,validRectReset) 
#define ATYPE_II                               { VX_TYPE_IMAGE, VX_TYPE_IMAGE }

  • AIN_AOUT: 1 in, 1 out
  • ATYPE_II: 2 image types

Implement "DramaDivideNode" operation, it is used to select the best suited for this PC architecture

int agoDramaDivideNode(AgoNodeList * nodeList, AgoNode * anode)
{
	// save parameter list
	vx_uint32 paramCount = anode->paramCount;
	AgoData * paramList[AGO_MAX_PARAMS]; memcpy(paramList, anode->paramList, sizeof(paramList));
	// divide the node depending on the type
	int status = -1;
	switch (anode->akernel->id)
	{
		case VX_KERNEL_COLOR_CONVERT:
			status = agoDramaDivideColorConvertNode(nodeList, anode);
			break;

This function is called from the optimization pass; the call stack looks like this:

>	OpenVX.dll!agoCreateNode(_vx_graph * graph, int kernel_id) Line 2699	C++
 	OpenVX.dll!agoDramaDivideAppend(AgoNodeList * nodeList, _vx_node * anode, int new_kernel_id, _vx_reference * * paramList, unsigned int paramCount) Line 37	C++
 	OpenVX.dll!agoDramaDivideAppend(AgoNodeList * nodeList, _vx_node * anode, int new_kernel_id) Line 56	C++
 	OpenVX.dll!agoDramaDivideColorConvertNode(AgoNodeList * nodeList, _vx_node * anode) Line 244	C++
 	OpenVX.dll!agoDramaDivideNode(AgoNodeList * nodeList, _vx_node * anode) Line 1818	C++
 	OpenVX.dll!agoOptimizeDramaDivide(_vx_graph * agraph) Line 1962	C++
 	OpenVX.dll!agoOptimizeDrama(_vx_graph * agraph) Line 522	C++
 	OpenVX.dll!agoOptimizeGraph(_vx_graph * agraph) Line 209	C++
 	OpenVX.dll!vxVerifyGraph(_vx_graph * graph) Line 2450	C++
 	runvx.exe!CVxEngine::ProcessGraph(std::vector<char const *,std::allocator<char const *> > * graphNameList, unsigned __int64 beginIndex) Line 285	C++

How to schedule the graph?

What optimization is done in optimize()?

Choose the best-suited implementation (for example, agoOptimizeDramaDivide splits each node into target-specific kernels, as in the call stack above).
