[ZZ] RGBM and RGBE encoding for HDR

Deferred lighting separates lighting from geometry rendering and makes lighting a completely image-space technique. This is very different from forward rendering. Early on, because of hardware limitations, each object could be lit by at most 8 lights at a time, and everything (light settings, materials, textures) had to be set up through render states before rendering. As hardware became more powerful, we could do additive lighting: one pass per light, with all the lighting blended together. There seems to be no limit on the number of lights this way, but if we put too much lighting calculation in the pixel shader, the situation gets worse: because of the z-test, a lot of pixels that have already done the lighting calculation end up discarded. This wasted work is exactly where deferred lighting gains its advantage.

In deferred lighting, a light's cost mainly depends on the screen area it covers; all lighting is per-pixel and all surfaces are lit uniformly (every object uses the same lighting equation); lights also benefit from fast hardware Z-reject. On the other hand, some disadvantages come along: a large frame buffer is required; fill rate can become very high in some situations (many lights each shading the full screen); it is difficult to support multiple lighting equations; it is very hard to handle transparent objects; and the hardware requirements are high.

Deferred lighting is well suited to photo-realistic rendering: it can achieve very complex visual effects and much higher visual quality than traditional forward rendering while still keeping the frame rate high. This is the main reason so many recent TV and video games use it heavily.

 

G-Buffers

The key to deferred lighting is the G-buffer and its contents. Which geometry parameters we store in it depends on our lighting equation. For example, if we do direct lighting with the Phong / Blinn model, we usually need the diffuse color, normal, position, diffuse coefficient, specular color, specular coefficient, and so on. The choice of lighting parameters is up to you, and you can even drop some of them: you could discard the specular part, or skip a dedicated buffer for the surface position and store only depth, since the position can be reconstructed from the screen-space depth value (there is such an HDR deferred rendering sample on the NVIDIA site). Here is a comparison between two implementations I downloaded from Beyond3D and NVIDIA [HDR Deferred Shading].
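To make the depth-to-position reconstruction concrete, here is a minimal CPU-side sketch in Python/NumPy of the unprojection a shader would do per pixel. It is an illustration, not code from either sample, and it assumes an OpenGL-style depth convention (NDC z in [-1, 1]); a D3D-style depth buffer in [0, 1] would change the z remapping.

```python
import numpy as np

def view_position_from_depth(u, v, depth, inv_proj):
    # u, v: texture coordinates in [0, 1]; depth: depth-buffer value in [0, 1]
    # inv_proj: inverse of the 4x4 camera projection matrix
    ndc = np.array([u * 2.0 - 1.0,
                    v * 2.0 - 1.0,
                    depth * 2.0 - 1.0,   # OpenGL maps depth to NDC z in [-1, 1]
                    1.0])
    p = inv_proj @ ndc                   # unproject back into view space
    return p[:3] / p[3]                  # undo the perspective divide

# Round-trip check with a standard OpenGL perspective matrix.
def perspective(fovy_deg, aspect, near, far):
    f = 1.0 / np.tan(np.radians(fovy_deg) / 2.0)
    return np.array([[f / aspect, 0.0, 0.0, 0.0],
                     [0.0, f, 0.0, 0.0],
                     [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
                     [0.0, 0.0, -1.0, 0.0]])

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
view_pos = np.array([1.0, -2.0, -10.0, 1.0])    # a point in front of the camera
clip = proj @ view_pos
u, v, depth = (clip[:3] / clip[3]) * 0.5 + 0.5  # what the G-buffer pass would write
print(view_position_from_depth(u, v, depth, np.linalg.inv(proj)))  # ~ [1, -2, -10]
```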
 
After the G-buffers are created, each light's contribution is calculated and accumulated into the light buffer. Sometimes the final value goes beyond 1; you can either ignore that and let the hardware saturate the result, or use a high-dynamic-range buffer to hold it. If you decide to use an HDR buffer, you need some way to make those HDR values display correctly on an LDR monitor. This is where tone mapping comes in.
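As an illustration of that last step, here is a minimal tone-mapping sketch in Python/NumPy using the well-known Reinhard operator; the exposure parameter and the 2.2 display gamma are assumptions for the example, not values from the original post.

```python
import numpy as np

def reinhard_tonemap(hdr, exposure=1.0):
    # Map linear HDR radiance into [0, 1): x / (1 + x) compresses highlights
    c = hdr * exposure
    return c / (1.0 + c)

def to_srgb8(ldr):
    # Gamma-encode and quantize for an 8-bit display (approximate sRGB)
    return np.clip(255.0 * np.power(ldr, 1.0 / 2.2), 0.0, 255.0).astype(np.uint8)

hdr_pixel = np.array([4.0, 1.2, 0.3])   # a light-buffer value well beyond 1.0
print(to_srgb8(reinhard_tonemap(hdr_pixel)))
```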

 

High Dynamic Range (HDR) 
Sometimes using HDR buffers wastes too much memory. Instead, we can encode the HDR value (RGB) and store it in an LDR buffer (RGBA). Plenty of encoding methods can be found on the Internet; usually we use RGBM or RGBE. Weaker hardware like the Wii tends to prefer RGBM encoding, while more powerful hardware like the PS3 and Xbox 360 tends to prefer RGBE encoding.
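Here is a minimal sketch of both encodings in Python/NumPy, kept in float for clarity. The RGBM range constant of 6.0 is an assumed value that varies per title, real implementations quantize each channel to a byte (and often encode RGBM in gamma space), and the RGBE variant follows the classic Radiance shared-exponent scheme.

```python
import numpy as np

RGBM_RANGE = 6.0  # assumed maximum HDR value representable; tune per title

def rgbm_encode(rgb):
    # Store rgb / (M * range) plus the multiplier M in the alpha channel.
    rgb = np.asarray(rgb, dtype=np.float64)
    m = np.clip(rgb.max() / RGBM_RANGE, 0.0, 1.0)
    m = np.ceil(m * 255.0) / 255.0               # round M up so rgb stays <= 1
    rgb_enc = rgb / (m * RGBM_RANGE) if m > 0.0 else rgb * 0.0
    return np.append(np.clip(rgb_enc, 0.0, 1.0), m)

def rgbm_decode(rgbm):
    rgbm = np.asarray(rgbm, dtype=np.float64)
    return rgbm[:3] * rgbm[3] * RGBM_RANGE

def rgbe_encode(rgb):
    # Radiance-style RGBE: three mantissas sharing one biased exponent.
    rgb = np.asarray(rgb, dtype=np.float64)
    v = rgb.max()
    if v < 1e-32:
        return np.zeros(4)
    e = int(np.ceil(np.log2(v)))                 # smallest e with max(rgb) <= 2**e
    return np.append(rgb / 2.0 ** e, e + 128.0)  # mantissas in (0, 1], biased exponent

def rgbe_decode(rgbe):
    rgbe = np.asarray(rgbe, dtype=np.float64)
    if rgbe[3] == 0.0:
        return np.zeros(3)
    return rgbe[:3] * 2.0 ** (rgbe[3] - 128.0)

hdr = [3.2, 0.5, 0.01]
print(rgbm_decode(rgbm_encode(hdr)))  # ~ [3.2, 0.5, 0.01], clamped to RGBM_RANGE
print(rgbe_decode(rgbe_encode(hdr)))  # ~ [3.2, 0.5, 0.01]
```

The practical trade-off: RGBM decodes with a single multiply, which is cheap on weak hardware, while RGBE's shared exponent preserves a much larger dynamic range.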

 

Transparent Objects
There is no cheap solution for transparent objects in deferred lighting. One way is to fall back to forward rendering for them after the deferred pass; alternatively, we can do the color blend with a pixel shader in a post-process phase.
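For reference, the blend itself is the classic "over" operator; this tiny Python/NumPy sketch (an illustration, not part of the original post) shows a forward-shaded transparent pixel being composited onto the deferred-lit opaque result.

```python
import numpy as np

def composite_over(dest, src_rgb, src_alpha):
    # Classic 'over' blend, applied back to front per transparent layer
    return src_rgb * src_alpha + dest * (1.0 - src_alpha)

lit = np.array([0.2, 0.4, 0.1])     # deferred-lit opaque pixel
glass = np.array([0.0, 0.2, 0.8])   # forward-shaded transparent surface
print(composite_over(lit, glass, 0.35))
```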

 

Light Optimization 
Since the cost of deferred lighting depends on the screen area covered, the best way to optimize it is to minimize that coverage. For a directional light, a full-screen light shader is used. For volume lights such as point lights and spot lights, a convex light volume is used to enclose the light as tightly as possible and keep the clipped screen area minimal (a sphere for a point light, a cone-shaped frustum for a spot light). We must make sure the light volume does not cross the near and the far clip plane at the same time; otherwise there will be lighting holes.
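To size a point light's bounding sphere, one common approach (an assumption here, not spelled out in the original post) is to solve the attenuation equation for the distance at which the light's contribution drops below a visible threshold, such as half an 8-bit step.

```python
import math

def point_light_radius(constant, linear, quadratic, intensity, cutoff=5.0 / 255.0):
    # Attenuation model assumed: intensity / (constant + linear*d + quadratic*d*d).
    # Solve quadratic*d^2 + linear*d + (constant - intensity/cutoff) = 0 for d.
    c = constant - intensity / cutoff
    disc = linear * linear - 4.0 * quadratic * c
    return (-linear + math.sqrt(disc)) / (2.0 * quadratic)

# e.g. a light with peak channel intensity 2.0 and typical attenuation terms
print(point_light_radius(1.0, 0.7, 1.8, 2.0))   # ~7.3 world units
```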

Another method is to use the stencil buffer to figure out the lit screen area precisely.
