Frames of Reference

When describing the position and orientation of something (for example, your Tango device), it is important to indicate the frame of reference your description is based on.

To help understand frames of reference, consider the following: Saying "Mary is standing three feet away" does not really tell you much. If you want to know Mary's position, you must also address the question "three feet from what?" If you say "Mary is standing three feet in front of the entrance to the Statue of Liberty," you can now establish Mary's position because you are using the Statue of Liberty as your frame of reference and you can measure the distance and direction of Mary relative to the Statue.

But Mary isn't simply a point with a position in 3D space—she also has an orientation, which is described in terms of some type of rotation relative to the frame of reference. In other words, Mary, like all 3D objects, faces a certain direction. A full description of Mary's position and orientation (we call this combination a pose) in 3D space would be something like this: "Mary is standing three feet in front of the entrance to the Statue of Liberty, and she is directly facing it." Now you have provided information about her orientation. If Mary turned to her right, you could say "She is now rotated 90 degrees away from the Statue." This would be another description of orientation.

So how does all of this relate to a Tango device? In order to perform motion tracking, a device reports its pose (position and orientation) relative to its chosen frame of reference, which is fixed in 3D space. For example, the device might say "from the place that I first started motion tracking, I am now three feet forward and one foot up, and I have rotated 30 degrees to the right." By doing this, the device has told you its position using meaningful directions: three feet forward and one foot up from its original starting position. It has also told you about a change in its orientation: rotated 30 degrees to the right relative to its starting position.

To set things up for motion tracking, you must do the following:

  1. Choose a base frame. This is the thing you will be measuring from. As mentioned above, it is fixed in 3D space, like the Statue of Liberty in our example. Example: the COORDINATE_FRAME_START_OF_SERVICE frame.
  2. Choose a target frame. This is the thing you will be measuring to. For motion tracking this is usually COORDINATE_FRAME_DEVICE and represents your device's pose at any given instant as it moves through 3D space. The pose of the target frame changes as your device moves, and is measured against the base frame (which never changes), up to 100 times per second. This constant stream of measurements creates your motion track.

The numerical measurements of the pose of the target frame relative to the base frame at any given instant answer the question: "What is the device's position and orientation relative to its base frame of reference?"
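
To make this concrete, here is a minimal sketch (not taken verbatim from the Tango samples) of a pose callback that prints those numbers. It assumes the Tango C client API header tango_client_api.h and the TangoPoseData fields translation, orientation, and status_code, with the orientation stored as an (x, y, z, w) quaternion.

#include <cstdio>

#include <tango_client_api.h>

void onPoseAvailable(void* /*context*/, const TangoPoseData* pose) {
  if (pose->status_code != TANGO_POSE_VALID) {
    return;  // Skip poses reported while tracking is initializing or lost.
  }
  // translation holds the position of the target frame in the base frame,
  // in meters; orientation is a unit quaternion rotating the target frame
  // into the base frame.
  std::printf("position (m): %.3f %.3f %.3f  orientation (xyzw): %.3f %.3f %.3f %.3f\n",
              pose->translation[0], pose->translation[1], pose->translation[2],
              pose->orientation[0], pose->orientation[1],
              pose->orientation[2], pose->orientation[3]);
}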

In the next section, we discuss the use of start-of-service frame, area description frame, and device pose frame pairs for motion tracking. For certain applications, you may need to choose a frame pair that will enable you to make precise alignments of data sources from device components. We discuss these types of frame pairs later in this topic.

To learn more about the coordinate systems used for frames of reference, see Coordinate System Conventions.

Coordinate frames for motion tracking

The Tango APIs give you various frame pair options for motion tracking:

Target Frame                        | Base Frame
COORDINATE_FRAME_DEVICE             | COORDINATE_FRAME_START_OF_SERVICE
COORDINATE_FRAME_DEVICE             | COORDINATE_FRAME_AREA_DESCRIPTION
COORDINATE_FRAME_START_OF_SERVICE   | COORDINATE_FRAME_AREA_DESCRIPTION

Let's consider a common use case:

Goal: Your app controls a camera in a fully virtual environment. You want the device to always calculate its pose relative to where it was when the Tango service started.

Solution: For the target frame, choose COORDINATE_FRAME_DEVICE. For the base frame, choose COORDINATE_FRAME_START_OF_SERVICE.

Here is the frame pair used in our example project titled cpp_hello_motion_tracking_example:

TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_DEVICE;

// Register onPoseAvailable() to receive pose updates for this frame pair.
if (TangoService_connectOnPoseAvailable(1, &pair, onPoseAvailable) !=
    TANGO_SUCCESS) {
  LOGE("TangoHandler::OnResume, connectOnPoseAvailable error.");
  std::exit(EXIT_SUCCESS);
}

Let's look at the details of individual frame pairs.


Target Frame                        | Base Frame
COORDINATE_FRAME_DEVICE             | COORDINATE_FRAME_START_OF_SERVICE

This frame pair provides the pose of the device relative to the point where the Tango service first initialized successfully, accumulating the device's movement over time since the service started. The service can also detect a motion tracking failure; while tracking is lost, the system reports an invalid pose. If TangoService_resetMotionTracking() is called, or if auto-reset is enabled in the service configuration, the system attempts to re-initialize tracking. After successful re-initialization, it makes a best-effort attempt to recover the last known good pose of the device relative to the start-of-service frame and pick up where it left off. For more information, see Lifecycle of pose status.

This frame pair does not include drift correction or localization. If your application does not use either, you can lower processing requirements by disabling area learning mode and not loading an ADF.
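
As a hedged illustration of how an application might react to those status changes with this frame pair, assuming a configuration with auto-reset disabled and the status constants and TangoService_resetMotionTracking() call from the Tango C API:

#include <tango_client_api.h>

// Called for each DEVICE-to-START_OF_SERVICE pose update.
void onPoseAvailable(void* /*context*/, const TangoPoseData* pose) {
  switch (pose->status_code) {
    case TANGO_POSE_VALID:
      // Normal case: consume pose->translation and pose->orientation.
      break;
    case TANGO_POSE_INVALID:
      // Motion tracking failed. With auto-reset disabled, ask the service to
      // re-initialize; it will report INITIALIZING and then VALID again once
      // it has recovered a pose.
      TangoService_resetMotionTracking();
      break;
    case TANGO_POSE_INITIALIZING:
    default:
      // Not usable yet; wait for the next valid pose.
      break;
  }
}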


Target Frame                        | Base Frame
COORDINATE_FRAME_DEVICE             | COORDINATE_FRAME_AREA_DESCRIPTION

This frame pair provides the pose of the device, including corrections, relative to the loaded area description's origin. It requires that area learning mode is turned on or a previously created ADF is loaded. If you turn on learning mode without loading an ADF, the origin of the area description base frame is initially the same as start of service. If you load an ADF, with or without learning mode, the origin of the area description base frame is the origin stored in the ADF, and you will receive data only after the device has localized. Depending on your configuration settings, this mode is not always available. For more information, see Using Learning Mode and loaded Area Description Files. If you need to use motion tracking before the COORDINATE_FRAME_DEVICE to COORDINATE_FRAME_AREA_DESCRIPTION frame pair becomes valid, you can use the COORDINATE_FRAME_START_OF_SERVICE base frame in the interim.

Note: Drift corrections and localization events cause jumps in the pose. To avoid these jumps, use the COORDINATE_FRAME_START_OF_SERVICE base frame to drive the user-facing elements in your application, and incorporate the ADF-driven corrections using COORDINATE_FRAME_START_OF_SERVICE to COORDINATE_FRAME_AREA_DESCRIPTION update callbacks.
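
A hedged sketch of that pattern follows, assuming the Tango C API, that the frame pair a pose belongs to is reported in the pose's frame member, and that the helper name ConnectPoseCallbacks is illustrative:

#include <tango_client_api.h>

// Handle two pose streams in one callback, distinguished by pose->frame.
void onPoseAvailable(void* /*context*/, const TangoPoseData* pose) {
  if (pose->status_code != TANGO_POSE_VALID) {
    return;
  }
  if (pose->frame.base == TANGO_COORDINATE_FRAME_START_OF_SERVICE &&
      pose->frame.target == TANGO_COORDINATE_FRAME_DEVICE) {
    // Smooth, jump-free stream: drive the camera and other user-facing
    // elements from this pose.
  } else if (pose->frame.base == TANGO_COORDINATE_FRAME_AREA_DESCRIPTION &&
             pose->frame.target == TANGO_COORDINATE_FRAME_START_OF_SERVICE) {
    // Correction stream: fold this offset into your world transform at a
    // moment that will not disturb the user.
  }
}

// Register both frame pairs with a single call.
bool ConnectPoseCallbacks() {
  TangoCoordinateFramePair pairs[2];
  pairs[0].base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  pairs[0].target = TANGO_COORDINATE_FRAME_DEVICE;
  pairs[1].base = TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
  pairs[1].target = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  return TangoService_connectOnPoseAvailable(2, pairs, onPoseAvailable) ==
         TANGO_SUCCESS;
}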

For pairs using the COORDINATE_FRAME_DEVICE target frame, updates are available at the pose estimation rate supported by the device.


Target Frame                        | Base Frame
COORDINATE_FRAME_START_OF_SERVICE   | COORDINATE_FRAME_AREA_DESCRIPTION

This frame pair provides updates only when a localization event or a drift correction occurs. This requires that area learning mode is turned on or a previously created ADF is loaded. If an ADF is loaded, the origin of the area description base frame is the origin stored in the ADF. This isolates the adjustments to the pose of the device from the incremental frame-to-frame motion, allowing you to decide when and how to incorporate the pose adjustments in your application to minimize disruption to the user experience.
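
If you would rather poll for the most recent correction than react to callbacks, a sketch along these lines is possible; it assumes TangoService_getPoseAtTime() from the Tango C API, where a timestamp of 0.0 requests the latest available estimate, and the helper name GetLatestCorrection is illustrative:

#include <tango_client_api.h>

// Poll for the most recent START_OF_SERVICE-in-AREA_DESCRIPTION pose.
// Conceptually, composing it with a DEVICE-in-START_OF_SERVICE pose yields
// the corrected DEVICE-in-AREA_DESCRIPTION pose:
//   adf_T_device = adf_T_start_of_service * start_of_service_T_device
bool GetLatestCorrection(TangoPoseData* correction) {
  TangoCoordinateFramePair pair;
  pair.base = TANGO_COORDINATE_FRAME_AREA_DESCRIPTION;
  pair.target = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
  // A timestamp of 0.0 requests the most recent available estimate.
  if (TangoService_getPoseAtTime(0.0, pair, correction) != TANGO_SUCCESS) {
    return false;
  }
  return correction->status_code == TANGO_POSE_VALID;
}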

Coordinate frames for component alignment

Target Frame                        | Base Frame
COORDINATE_FRAME_DEVICE             | COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_COLOR       | COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_DEPTH       | COORDINATE_FRAME_IMU
COORDINATE_FRAME_CAMERA_FISHEYE     | COORDINATE_FRAME_IMU

Some applications need to align multiple data sources, such as the data from the color and depth cameras. You can pair the COORDINATE_FRAME_IMU base frame with one of the component target frames for these scenarios:

  1. You want to query the relative offsets of the individual components to the IMU frame of reference without knowing the layout of the specific device.
  2. You want the virtual image from the rendering camera to align with the center of the display.

Combined with the motion tracking coordinate frames and timestamps on the data, these offsets give you a more complete understanding of the various sensor inputs in both space and time. This is necessary for aligning and compositing multiple data sources together.

Note: The relative offsets between two components are sometimes referred to as the extrinsic parameters.

Because devices are designed to be mechanically rigid, these offsets are not expected to change, and the Tango APIs do not currently support updating the extrinsic parameters over time. However, devices vary in how their components are spaced. The values are generated either from a one-time factory calibration or from the manufacturer's mechanical design files. Applications with extremely tight requirements for the extrinsic parameters should consider implementing their own calibration procedure that can be performed by the end user.

The COORDINATE_FRAME_IMU base frame provides a common reference point for all of the internal components in the device. The origin of this base frame does not necessarily correspond to any one particular component and may differ between devices. Like other Android sensors, the axes of the device coordinate frame are aligned with the natural (default) orientation of the device as defined by the manufacturer, which may not match the orientation your app wants to use. For maximum future compatibility, do not assume a Tango-compliant device has a natural orientation that is either landscape or portrait. Instead, use the Android getRotation() method to determine screen rotation, and then use the Android remapCoordinateSystem() method to map sensor coordinates to screen coordinates. For more general information about sensors, see the Android documentation on the sensor coordinate system. For a more detailed discussion of issues surrounding device orientation, see this Android Developers Blog post.

The component offsets are static and should only need to be queried once.
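
A hedged sketch of that one-time query, again assuming TangoService_getPoseAtTime() with a 0.0 timestamp and using the illustrative helper name QueryExtrinsics; the returned poses are the fixed transforms of the device and color camera expressed in the IMU frame:

#include <tango_client_api.h>

// Query the static device-to-IMU and color-camera-to-IMU extrinsics once,
// after the Tango service has connected.
bool QueryExtrinsics(TangoPoseData* imu_T_device, TangoPoseData* imu_T_color) {
  TangoCoordinateFramePair pair;
  pair.base = TANGO_COORDINATE_FRAME_IMU;

  pair.target = TANGO_COORDINATE_FRAME_DEVICE;
  if (TangoService_getPoseAtTime(0.0, pair, imu_T_device) != TANGO_SUCCESS) {
    return false;
  }

  pair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
  if (TangoService_getPoseAtTime(0.0, pair, imu_T_color) != TANGO_SUCCESS) {
    return false;
  }
  return true;
}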

Note: The unit of measurement for coordinate frame pairs is meters.

Last updated: June 10, 2016
