Camera’s Depth Texture

  In Unity a Camera can generate a depth or depth+normals texture. This is a minimalistic G-buffer texture that can be used for post-processing effects or to implement custom lighting models (e.g. light pre-pass). The Camera actually builds the depth texture using the Shader Replacement feature, so it is entirely possible to do this yourself in case you need a different G-buffer setup.

  A Camera’s depth texture can be turned on from a script through the Camera.depthTextureMode variable.
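
  A minimal C# sketch of what this might look like, assuming the built-in render pipeline; the component name EnableCameraDepthTexture is illustrative, not part of the Unity API. Attach it to the camera that should generate the texture:

    using UnityEngine;

    // Illustrative helper component: asks the attached camera to generate a depth texture.
    [RequireComponent(typeof(Camera))]
    public class EnableCameraDepthTexture : MonoBehaviour
    {
        void Start()
        {
            Camera cam = GetComponent<Camera>();
            // depthTextureMode is a flags enum, so modes can be combined,
            // e.g. DepthTextureMode.Depth | DepthTextureMode.DepthNormals.
            cam.depthTextureMode |= DepthTextureMode.Depth;
        }
    }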

  There are two possible depth texture modes:

  • DepthTextureMode.Depth: a depth texture (see the sampling sketch after this list).
  • DepthTextureMode.DepthNormals: depth and view space normals packed into one texture.
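
  A sketch of how the generated texture could be sampled in a fragment shader once DepthTextureMode.Depth is enabled, assuming the built-in render pipeline; _CameraDepthTexture is the global texture Unity binds for this mode, and the shader name is illustrative:

    Shader "Hidden/ShowLinearDepth"
    {
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                #include "UnityCG.cginc"

                // Bound globally by Unity when the camera renders a depth texture.
                sampler2D _CameraDepthTexture;

                struct v2f
                {
                    float4 pos    : SV_POSITION;
                    float4 scrPos : TEXCOORD0;
                };

                v2f vert (appdata_base v)
                {
                    v2f o;
                    o.pos = UnityObjectToClipPos(v.vertex);
                    o.scrPos = ComputeScreenPos(o.pos);   // screen position for projective sampling
                    return o;
                }

                fixed4 frag (v2f i) : SV_Target
                {
                    // Raw hardware depth, then remapped to a linear 0..1 range.
                    float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(i.scrPos));
                    float linear01 = Linear01Depth(rawDepth);
                    return fixed4(linear01, linear01, linear01, 1);
                }
                ENDCG
            }
        }
    }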

  

[DepthTextureMode.DepthNormals texture] 

  This builds a screen-sized 32 bit (8 bit/channel) texture, where view space normals are encoded into the R&G channels, and depth is encoded in the B&A channels. Normals are encoded using stereographic projection, and depth is a 16 bit value packed into two 8 bit channels. The UnityCG.cginc include file has a helper function, DecodeDepthNormal, to decode the depth and normal from the encoded pixel value. The returned depth is in the 0..1 range.
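
  A sketch of decoding it in an image-effect style fragment shader, again assuming the built-in render pipeline; _CameraDepthNormalsTexture is the global texture Unity binds when DepthTextureMode.DepthNormals is enabled, and the shader name is illustrative:

    Shader "Hidden/ShowDepthNormals"
    {
        SubShader
        {
            Pass
            {
                CGPROGRAM
                #pragma vertex vert_img     // helper vertex program from UnityCG.cginc
                #pragma fragment frag
                #include "UnityCG.cginc"

                // Bound globally by Unity when DepthTextureMode.DepthNormals is enabled.
                sampler2D _CameraDepthNormalsTexture;

                fixed4 frag (v2f_img i) : SV_Target
                {
                    float depth;      // linear depth in the 0..1 range
                    float3 normal;    // view space normal
                    DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, i.uv), depth, normal);
                    // Visualize: normal remapped to 0..1 in RGB, depth in alpha.
                    return fixed4(normal * 0.5 + 0.5, depth);
                }
                ENDCG
            }
        }
    }

  In the built-in pipeline, such a full-screen pass would typically be driven from a C# image effect via OnRenderImage and Graphics.Blit.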

Reference: file:///C:/Program%20Files%20(x86)/Unity/Editor/Data/Documentation/html/en/Manual/SL-CameraDepthTexture.html
