Kinect Development: Kinect Studio

This tool can record all the data coming into an application from a Kinect unit. You can then view, review and store the data. Kinect Studio lets you inject the captured data streams back into a Kinect-enabled application, allowing you to test your code without getting out of your chair (although it might be healthier to make developers get up and move). If you’re working on a project in a distributed team, you can easily share data files with team members, enabling consistent testing across the team.

To begin, open the solution for the Kinect application and start it in debug mode.

While the application is running, start Kinect Studio. As shown in Figure 1, Kinect Studio has four windows: a main window plus Color Viewer, Depth Viewer and 3D Viewer windows. The main window shows a timeline as well as the controls used in Kinect Studio. In the Color Viewer window, you can see the couch in my living room. The Depth Viewer window uses color to represent the distance of an object or a person from the Kinect unit: red indicates a distance closer to the Kinect unit, and blue a distance farther from it, giving an at-a-glance sense of depth. In the 3D Viewer window in Figure 1, notice that the view is slightly rotated with respect to the images in the Color Viewer and the Depth Viewer.
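This red-to-blue mapping is easy to picture in code. Here’s a minimal Python sketch of the idea (an illustration, not Kinect Studio’s actual implementation); the near and far limits are assumed values that roughly match the Kinect’s default sensing range:

```python
def depth_to_rgb(depth_mm, near_mm=800, far_mm=4000):
    """Map a depth reading (mm) to an RGB tuple: red = near, blue = far.

    The near/far limits are assumptions for illustration; they are not
    Kinect Studio's internal palette.
    """
    # Clamp to the working range, then normalize to [0, 1].
    t = (min(max(depth_mm, near_mm), far_mm) - near_mm) / (far_mm - near_mm)
    # Linear blend: t = 0 -> pure red, t = 1 -> pure blue.
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_to_rgb(900))   # close to the sensor -> mostly red
print(depth_to_rgb(3800))  # far from the sensor -> mostly blue
```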

Capturing Kinect Data

When Kinect Studio is first started, the Connect to a Kinect App & Sensor dialog box shown in Figure 2 opens.

In this dialog box, you specify which Kinect-enabled application you want to connect to. Connecting to an application enables Kinect Studio to capture the data coming into that application from the Kinect unit. Based on the windows available, Kinect Studio captures the data feeds from the color stream and the depth stream. With the application running and Kinect Studio connected, you can now capture the data. In the Kinect Studio main window, click the Record button (or press Ctrl+R) to begin collecting data. Next, have the test subject move through the scenario you want to test. When the scenario is complete, click the Stop button (or press Shift+F5). Kinect Studio then stores the data in memory. Once Kinect Studio is done processing the data, the timeline on the main window populates, as do the Color Viewer, Depth Viewer and 3D Viewer windows, as shown in Figure 3.
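Conceptually, recording is just buffering timestamped frames between Record and Stop. The following Python sketch shows that pattern with a stand-in frame payload; it isn’t the Kinect Studio API or the .xed format, just the shape of the idea:

```python
import time

class FrameRecorder:
    """Buffer (timestamp, frame) pairs between start() and stop().

    `frame` can be any payload (e.g., a color or depth image);
    this is a conceptual stand-in, not Kinect Studio's format.
    """
    def __init__(self):
        self.frames = []
        self._recording = False
        self._t0 = 0.0

    def start(self):
        self.frames.clear()
        self._t0 = time.monotonic()
        self._recording = True

    def on_frame(self, frame):
        if self._recording:
            self.frames.append((time.monotonic() - self._t0, frame))

    def stop(self):
        self._recording = False

recorder = FrameRecorder()
recorder.start()
for i in range(3):              # stand-in for live sensor callbacks
    recorder.on_frame({"depth": i})
    time.sleep(0.03)
recorder.stop()
print(len(recorder.frames), "frames captured")
```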

By moving the cursor along the timeline, you can see the content at the selected point in time. As in most video-editing software, you can select sections of the timeline and save them as a separate file so you can use just the parts you want. The saved file is an .xed file. All the collected data is contained in that file, enabling you to replay it whenever you want or distribute it to the rest of your team.

Notice that there isn’t a Skeleton Viewer. Kinect Studio doesn’t capture skeleton data (a collection of joints) because that data is evaluated at run time based on the depth and color views. Capturing a skeleton view would defeat the purpose of recording the data for testing. In other words, the intrinsic data comes from the depth and color sensors; the skeleton data is the product of the Kinect for Windows software analyzing that data. Therefore, the runtime re-evaluates the skeleton data from the depth and color data being injected back into the Kinect-enabled application under development, just as though a user were in front of the Kinect unit providing live interaction.
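A small sketch makes the point concrete. Because the application computes skeletons from whatever depth frames it receives, the same code path serves live and injected data; the skeleton solver below is a hypothetical placeholder, not the SDK’s API:

```python
def estimate_skeleton(depth_frame):
    """Placeholder for the runtime's skeleton solver: joints are
    derived from depth data, never read from the recording."""
    return {"head": "...", "hands": "..."}  # hypothetical output

def process(depth_frame):
    # The application logic sees only depth frames; it cannot tell
    # whether they arrived from the sensor or from an .xed replay.
    return estimate_skeleton(depth_frame)

live_frame = object()      # frame delivered by the sensor
replayed_frame = object()  # same-shaped frame injected by Kinect Studio
assert process(live_frame).keys() == process(replayed_frame).keys()
```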

Now let’s see just how useful this tool is. While the Kinect-enabled application is running, open Kinect Studio, connect to the application and open the .xed file you previously saved. When you click the Play button (or press F5), Kinect Studio injects the data saved in the .xed file into the Kinect-enabled application, simulating the user (or users). The application in development reacts to the .xed feed as though the user were actually performing the actions in front of the Kinect in real time. In addition to relieving you from having to hop in front of the Kinect unit every time you want to test a change to your application, this capability lets you test the application with different-sized users.
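For playback to be convincing, the injected frames have to arrive at their original pace. Here’s a sketch of that replay loop, again as a conceptual illustration rather than Kinect Studio’s actual injection mechanism:

```python
import time

def replay(frames, handler):
    """Feed recorded (timestamp, frame) pairs to a callback at their
    original pace: a conceptual sketch of what the Play button does."""
    start = time.monotonic()
    for stamp, frame in frames:
        # Sleep until this frame's original offset has elapsed.
        delay = stamp - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        handler(frame)

# Two fake frames recorded 30 ms apart; the handler just prints them.
replay([(0.00, "frame 0"), (0.03, "frame 1")], print)
```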


Capabilities of Kinect Studio

Let’s look at some of the other features Kinect Studio offers. Use the timeline to move to the desired frame; the Color Viewer window then shows a color image at the resolution you enabled for the color stream from the Kinect. You can right-click the image and select “Save image as” to save the still image as a bitmap.
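If you want to do the same thing programmatically with a captured frame, a couple of lines suffice. This sketch assumes the Pillow imaging package and a frame already available as a NumPy array:

```python
import numpy as np
from PIL import Image  # assumes the Pillow package is installed

# Stand-in for a captured 640x480 RGB color frame.
color_frame = np.zeros((480, 640, 3), dtype=np.uint8)

# Save the still image as a bitmap, as the "Save image as" command does.
Image.fromarray(color_frame).save("frame.bmp")
```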

The Depth Viewer also has a useful utility. To use it, pause the video at a specific point and then move the mouse pointer across the image. At the bottom of the Depth Viewer window, notice the data points that are displayed. First is the number of the frame being displayed. Next are the x,y coordinates of the mouse pointer relative to the image. Finally, and most interesting, is the distance in millimeters from the Kinect to the object or person the mouse pointer is “touching.” Let’s look at an example to understand this information a little better. I asked my daughters (Kenzy and Sherrie) to help me with this demonstration. I used Kinect Studio to capture them doing a routine that involved moving their arms and legs. Figure 4 shows how they were positioned: Kenzy is slightly behind and to the side of Sherrie.

In the image on the left, the mouse pointer is at x=178, y=304. The depth is 2187 millimeters (mm). So this point (which corresponds to the “girl in red” [Sherrie]) is 2187 mm from the Kinect unit. The image on the right shows the mouse pointer at x=326, y=304, with a depth of 2639 mm. So this point (which corresponds to the “girl in orange” [Kenzy]) is 2639 mm from the Kinect unit. Another way to interpret this data is to say that Kenzy is 452 mm behind Sherrie. The depth sensor provides this data, and it’s one of the features that makes Kinect extremely powerful.
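Under the hood, that readout is just an index into the depth map. The sketch below assumes the depth frame is a 2D array of millimeter values (rows are y, columns are x; the frame contents here are fabricated to match the figure) and reproduces the two readings and their 452 mm difference:

```python
import numpy as np

# Hypothetical 640x480 depth frame, values in millimeters.
depth_mm = np.full((480, 640), 3000, dtype=np.uint16)
depth_mm[280:340, 150:210] = 2187   # region around the "girl in red"
depth_mm[280:340, 300:360] = 2639   # region around the "girl in orange"

def depth_at(frame, x, y):
    return int(frame[y, x])  # rows are y, columns are x

d1 = depth_at(depth_mm, 178, 304)  # 2187 mm (Sherrie)
d2 = depth_at(depth_mm, 326, 304)  # 2639 mm (Kenzy)
print(d1, d2, "difference:", d2 - d1, "mm")  # difference: 452 mm
```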

We have had the ability to capture data from webcams and similar hardware for many years. Kinect gives us the ability to capture the distance of objects in a relatively inexpensive package. By combining these data streams, we can create applications that “understand” not only the imagery being presented but also the three-dimensional aspects of the scene. Think about when relational databases first became commonplace. The ability to relate two seemingly disparate pieces of data via a defined relationship revolutionized the way applications work and the capabilities they can offer. Similarly, using Kinect, we can now draw far more value from the scene in front of the Kinect and can relate objects through analysis of imagery as well as actual distances.

Let’s take another look at Figure 4 from the perspective of developing an application that wants to know which user is closest to the screen in order to select the main player, for example. If all I had to work with was a color image of two children, it would be difficult to ascertain which child was standing closer to the screen. In fact, I would probably have to use some other means of determining which child was the main player, such as having the player highlight and select the shape of her body outline. Using the depth data, however, I no longer need to have these artificial mechanisms for determining what is obvious based on the physical layout. With Kinect, we’re a step closer to ubiquitous computing because the application is able to infer information by evaluating the user’s situation without requiring direct, artificial interaction (such as clicking the mouse or typing on a keyboard). The user simply behaves naturally. That’s the real power of programming Kinect-enabled applications. Developers can build applications for average businesses and households that allow users to be themselves. Or as Microsoft puts it, “You are the controller.” Without the depth data, this would not be feasible.
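With depth data in hand, the “main player” decision collapses to a one-liner. A tiny sketch, assuming you’ve already extracted a representative distance per tracked player:

```python
# Hypothetical per-player distances (mm), e.g., taken from a tracked
# torso joint or from the player-labeled region of the depth map.
player_depths = {"Sherrie": 2187, "Kenzy": 2639}

# The closest player becomes the main player: no clicks, no menus.
main_player = min(player_depths, key=player_depths.get)
print(main_player)  # Sherrie
```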

For me, the 3D Viewer window is the coolest feature of Kinect Studio. Again, use the timeline to set the point in time you want to view; the image in the 3D Viewer then represents a 3D model of the scene, rendered by combining the depth and color data, as shown in Figure 5.
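That 3D model comes from back-projecting each depth pixel into a 3D point through a camera model and coloring it from the registered color image. Here’s a minimal sketch of the back-projection step; the pinhole intrinsics are placeholder values, not calibrated Kinect parameters:

```python
import numpy as np

def depth_to_points(depth_mm, fx=580.0, fy=580.0, cx=320.0, cy=240.0):
    """Back-project a depth image (mm) into an N x 3 point cloud in
    meters using a pinhole model. The intrinsics are assumed
    placeholder values, not calibrated Kinect parameters."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0        # millimeters -> meters
    x = (u - cx) * z / fx        # standard pinhole back-projection
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]    # drop pixels with no depth reading

cloud = depth_to_points(np.full((480, 640), 2000, dtype=np.float64))
print(cloud.shape)  # (307200, 3)
```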
