Hierarchical Z-Buffer Occlusion Culling

While I was at GDC I had the pleasure of attending the Rendering with Conviction talk by Stephen Hill; one of the topics was so cool that I thought it would be fun to try it out.  The hierarchical z-buffer solution presented at GDC borrows heavily from this paper, Siggraph 2008 Advances in Real-Time Rendering (Section 3.3.3).  I ran into a fair number of issues trying to get the AMD implementation working, though: a lot of the math is too simplistic, and because it does not take into account perspective distortion or the proper width of the sphere in screen space, you end up with false negatives.

You should read the papers to get a firm grasp of the algorithm, but here is my take on the process and some implementation notes of my own.

Hierarchical Z-Buffer Culling Steps

  1. Bake step – Have your artists prepare occlusion geometry for things in the world that make sense as occluders: buildings, walls, etc.  They should all be super cheap to render (boxes, planes).  I also ran across a paper, Geometric Simplification For Efficient Occlusion Culling In Urban Scenes, that sounds like a neat way of automating the process.
  2. CPU – Take all the occlusion meshes and frustum cull them.
  3. GPU – Render the remaining occluders to a ‘depth buffer’.  The depth buffer should not be full sized; in my code I’m using 512×256.  A Frostbite paper mentions using a roughly 256×114 sized buffer for a similar occlusion culling solution.  The ‘depth buffer’ should just be mip 0 in a full mip chain of render targets (not the actual depth buffer).
  4. GPU – Now downsample the RT containing depth information, filling out the entire mip chain.  You do this by rendering a fullscreen effect whose pixel shader takes the previous level of the mip chain and downsamples it into the next, preserving the highest depth value in each group of 4 pixels.  In DX11, you can just constrain the shader resource view so that you can both read from and render to the same mip chain.  In DX9 you can’t sample and render to the same mip chain, so you’d have to use StretchRect to copy from a second mip chain.  In my code I actually found a more optimized solution: by ping-ponging between 2 mip chains, one containing the even levels and the other the odd levels, plus a single branch in your shader code, you avoid the overhead of the StretchRect and just sample from a different mip chain based on whether the mip level you need is even or odd.
  5. CPU – Gather all the bounding spheres for everything in your level that could possibly be visible.
  6. GPU – In DX11, send the list of bounds to a compute shader, which computes the screen-space width of the sphere and then uses that width to pick the mip level to sample from the HiZ map generated in step 4, such that the sphere covers no more than 2 pixels of width.  Large objects in screen space will sample from very high levels of the mip chain, since they require a coarse view of the world, whereas small objects in screen space will sample from very low levels.  In DX9 the process is basically the same; the difference is that you render a point list of vertices that, instead of a Float3 position, carry Float4 bounds (xyz = position, w = radius).  You also send down a stream of texcoords representing the x/y pixel location where the result of the occlusion test for that bound should be encoded.  Instead of a compute shader, you process the vertices in a vertex shader, use the pixel location from the texcoord stream to write the result of the test to that point in a render target, and in the pixel shader do the sampling that determines visibility, outputting a color such as white for culled and black for visible.
  7. CPU – Try to do some work on the CPU after the occluder rendering and culling process is kicked off.  For me the entire process took about 0.74 ms of GPU time on a Radeon 5450, with 900 bounds.  The overhead of generating the HiZ mip chain and dispatching the culling process is the real bottleneck, though; there’s little difference between 900 bounds and 10,000 bounds.
  8. CPU – Read back the results.  In DX11 you’re just reading back a buffer output by the compute shader.  In DX9 you have to copy back the render target from step 6 containing the pattern of black and white pixels and then iterate over the pixels on the CPU to determine what is visible and what is hidden.

Hierarchical Z-Buffer Downsampling Code

The downsampling is pretty much what you would expect: for the current pixel, you sample one pixel to the right, one below, and one to the bottom right.  You take the furthest depth value of the four and use it as the new depth in the downsampled pixel.  Here’s an example of a before and after version; black is a closer depth, and the whiter a pixel is, the further away / higher its depth value.
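The even/odd ping-pong trick from step 4 is just index bookkeeping on top of this pass; a sketch (the chain numbering is my own):

```cpp
// With two mip chains, chain 0 holds the even levels of the conceptual HiZ
// chain and chain 1 holds the odd levels. Writing level n therefore reads
// level n - 1 from the other chain, with no StretchRect copy needed.
int ChainForLevel(int mipLevel)           { return mipLevel % 2; }
int SourceChainForLevel(int destMipLevel) { return (destMipLevel - 1) % 2; }
```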

Before Downsample 

After Downsample 

The downsampling HLSL code looks like this:

  // LastMip: SRV over the previous (larger) mip level of the depth chain.
  // nCoords: int3(x, y, mip) address of the top-left texel of the 2x2 group.
  float4 vTexels;
  vTexels.x = LastMip.Load( nCoords );
  vTexels.y = LastMip.Load( nCoords, uint2(1,0) );
  vTexels.z = LastMip.Load( nCoords, uint2(0,1) );
  vTexels.w = LastMip.Load( nCoords, uint2(1,1) );
   
  // Keep the furthest (largest) depth of the four texels
  float fMaxDepth = max( max( vTexels.x, vTexels.y ), max( vTexels.z, vTexels.w ) );

Downsampling.hlsl
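For reference, here is the same max-downsample pass as CPU code (a sketch of mine, not from the sample):

```cpp
#include <algorithm>
#include <vector>

// One downsample pass: each destination texel keeps the furthest (largest)
// depth of the corresponding 2x2 block in the source level.
std::vector<float> DownsampleMax(const std::vector<float>& src, int w, int h)
{
    int dw = std::max(w / 2, 1), dh = std::max(h / 2, 1);
    std::vector<float> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
        {
            // Clamp at the edge for odd-sized levels
            int sx = std::min(x * 2 + 1, w - 1), sy = std::min(y * 2 + 1, h - 1);
            float d0 = src[(y * 2) * w + (x * 2)];
            float d1 = src[(y * 2) * w + sx];
            float d2 = src[sy * w + (x * 2)];
            float d3 = src[sy * w + sx];
            dst[y * dw + x] = std::max(std::max(d0, d1), std::max(d2, d3));
        }
    return dst;
}
```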

Hierarchical Z-Buffer Culling Code

Here’s the heart of the algorithm: the culling.  One note: [numthreads(1,1,1)] is terrible for performance with compute shaders, so anyone planning to use this should do a better job of thread group and thread management than I did. This is the DX11 compute shader version; I use it here since it makes the intent clearer. You’ll find the DX9 code in the full sample at the bottom of the post.

  cbuffer CB
  {
      matrix View;
      matrix Projection;
      matrix ViewProjection;
   
      float4 FrustumPlanes[6]; // view-frustum planes in world space (normals face out)
   
      float2 ViewportSize;     // viewport width and height in pixels
   
      float2 PADDING;
  };
   
  // Bounding sphere center (XYZ) and radius (W), world space
  StructuredBuffer<float4> Buffer0 : register(t0);
  // Is Visible 1 (Visible) 0 (Culled)
  RWStructuredBuffer<float> BufferOut : register(u0);
   
  Texture2D HizMap : register(t1);
  SamplerState HizMapSampler : register(s0);
   
  // Computes the signed distance between a point and a plane
  // vPlane: plane coefficients (a,b,c,d) where: ax + by + cz + d = 0
  // vPoint: point to be tested against the plane
  float DistanceToPlane( float4 vPlane, float3 vPoint )
  {
      return dot( float4( vPoint, 1 ), vPlane );
  }
   
  // Frustum culling on a sphere. Returns > 0 if visible, <= 0 otherwise
  float CullSphere( float4 vPlanes[6], float3 vCenter, float fRadius )
  {
      float dist01 = min( DistanceToPlane( vPlanes[0], vCenter ), DistanceToPlane( vPlanes[1], vCenter ) );
      float dist23 = min( DistanceToPlane( vPlanes[2], vCenter ), DistanceToPlane( vPlanes[3], vCenter ) );
      float dist45 = min( DistanceToPlane( vPlanes[4], vCenter ), DistanceToPlane( vPlanes[5], vCenter ) );
   
      return min( min( dist01, dist23 ), dist45 ) + fRadius;
  }
   
  [numthreads(1, 1, 1)]
  void CSMain( uint3 GroupId : SV_GroupID,
               uint3 DispatchThreadId : SV_DispatchThreadID,
               uint GroupIndex : SV_GroupIndex )
  {
      // Calculate the actual index this thread in this group will be reading from.
      int index = DispatchThreadId.x;
   
      // Bounding sphere center (XYZ) and radius (W), world space
      float4 Bounds = Buffer0[index];
   
      // Perform the view-frustum test
      float fVisible = CullSphere( FrustumPlanes, Bounds.xyz, Bounds.w );
   
      if (fVisible > 0)
      {
          float3 viewEye = -View._m03_m13_m23;
          float CameraSphereDistance = distance( viewEye, Bounds.xyz );
   
          float3 viewEyeSphereDirection = viewEye - Bounds.xyz;
   
          float3 viewUp = View._m01_m11_m21;
          float3 viewDirection = View._m02_m12_m22;
          float3 viewRight = normalize( cross( viewEyeSphereDirection, viewUp ) );
   
          // Help handle perspective distortion.
          // http://article.gmane.org/gmane.games.devel.algorithms/21697/
          float fRadius = CameraSphereDistance * tan( asin( Bounds.w / CameraSphereDistance ) );
   
          // Compute the offsets for the points around the sphere
          float3 vUpRadius = viewUp * fRadius;
          float3 vRightRadius = viewRight * fRadius;
   
          // Generate the 4 corners of the sphere in world space.
          float4 vCorner0WS = float4( Bounds.xyz + vUpRadius - vRightRadius, 1 ); // Top-Left
          float4 vCorner1WS = float4( Bounds.xyz + vUpRadius + vRightRadius, 1 ); // Top-Right
          float4 vCorner2WS = float4( Bounds.xyz - vUpRadius - vRightRadius, 1 ); // Bottom-Left
          float4 vCorner3WS = float4( Bounds.xyz - vUpRadius + vRightRadius, 1 ); // Bottom-Right
   
          // Project the 4 corners of the sphere into clip space
          float4 vCorner0CS = mul( ViewProjection, vCorner0WS );
          float4 vCorner1CS = mul( ViewProjection, vCorner1WS );
          float4 vCorner2CS = mul( ViewProjection, vCorner2WS );
          float4 vCorner3CS = mul( ViewProjection, vCorner3WS );
   
          // Convert the corner points from clip space to normalized device
          // coordinates, then to [0,1] texture space
          float2 vCorner0NDC = vCorner0CS.xy / vCorner0CS.w;
          float2 vCorner1NDC = vCorner1CS.xy / vCorner1CS.w;
          float2 vCorner2NDC = vCorner2CS.xy / vCorner2CS.w;
          float2 vCorner3NDC = vCorner3CS.xy / vCorner3CS.w;
          vCorner0NDC = float2( 0.5, -0.5 ) * vCorner0NDC + float2( 0.5, 0.5 );
          vCorner1NDC = float2( 0.5, -0.5 ) * vCorner1NDC + float2( 0.5, 0.5 );
          vCorner2NDC = float2( 0.5, -0.5 ) * vCorner2NDC + float2( 0.5, 0.5 );
          vCorner3NDC = float2( 0.5, -0.5 ) * vCorner3NDC + float2( 0.5, 0.5 );
   
          // In order to have the sphere cover at most 4 texels, we need to use
          // the entire width of the rectangle instead of only its radius, which
          // was the original implementation in the ATI paper; being overly
          // conservative caused some edge-case failures I observed.
          float fSphereWidthNDC = distance( vCorner0NDC, vCorner1NDC );
   
          // Compute the center of the bounding sphere in view space
          float3 Cv = mul( View, float4( Bounds.xyz, 1 ) ).xyz;
   
          // Compute the nearest point to the camera on the sphere, and project it
          float3 Pv = Cv - normalize( Cv ) * Bounds.w;
          float4 ClosestSpherePoint = mul( Projection, float4( Pv, 1 ) );
   
          // Choose a MIP level in the HiZ map.
          // The original assumed viewport width > height; I've changed it
          // to take the greater of the two.
          //
          // This will result in a mip level where the object takes up at most
          // 2x2 texels, such that the 4 sampled points have depths to compare
          // against.
          float W = fSphereWidthNDC * max( ViewportSize.x, ViewportSize.y );
          float fLOD = ceil( log2( W ) );
   
          // Fetch depth samples at the corners of the square to compare against
          float4 vSamples;
          vSamples.x = HizMap.SampleLevel( HizMapSampler, vCorner0NDC, fLOD );
          vSamples.y = HizMap.SampleLevel( HizMapSampler, vCorner1NDC, fLOD );
          vSamples.z = HizMap.SampleLevel( HizMapSampler, vCorner2NDC, fLOD );
          vSamples.w = HizMap.SampleLevel( HizMapSampler, vCorner3NDC, fLOD );
   
          float fMaxSampledDepth = max( max( vSamples.x, vSamples.y ), max( vSamples.z, vSamples.w ) );
          float fSphereDepth = ClosestSpherePoint.z / ClosestSpherePoint.w;
   
          // Cull the sphere if its depth is greater than the largest of our HiZ map values
          BufferOut[index] = (fSphereDepth > fMaxSampledDepth) ? 0 : 1;
      }
      else
      {
          // The sphere is outside of the view frustum
          BufferOut[index] = 0;
      }
  }

Dx11_HiZ_Shader.hlsl
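The two key pieces of math in the shader, the perspective-corrected radius and the mip level selection, can be written as scalar C++ for reference (function names are mine, not from the sample):

```cpp
#include <algorithm>
#include <cmath>

// Screen-space silhouette radius of a sphere, accounting for perspective
// distortion (matches the tan(asin(...)) term in the shader).
float ProjectedRadius(float distanceToSphere, float sphereRadius)
{
    return distanceToSphere * std::tan(std::asin(sphereRadius / distanceToSphere));
}

// Mip level selection: pick the first level where the sphere's projected
// width (given here in [0,1] texture space) spans at most ~2 texels.
float ComputeHiZLod(float sphereWidthUV, float viewportW, float viewportH)
{
    float widthInPixels = sphereWidthUV * std::max(viewportW, viewportH);
    return std::ceil(std::log2(widthInPixels));
}
```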

Sample

Here’s my sample implementation of the Hierarchical Z-Buffer Culling solution in DX11 and DX9.  A few notes: during one of my iterations I disabled the code for rendering a visible representation of the occluders, which are just two triangles hardcoded in a vertex buffer and rendered every frame.  Also, DX9 doesn’t actually render anything based on the results; I was just using PIX to inspect the output of the cull render target and was more focused on getting it working in DX11.  The controls are the arrow keys to move the camera around.  Red boxes represent culled boxes, white boxes are the visible ones.

[Source Code] [Binary Sample]

Notes

I haven’t quite figured out how to deal with shadows.  I’ve sort of figured out how to cull the objects whose shadows you can’t possibly see, but not completely.  Stephen mentions using a tactic similar to the one presented in the CC Shadow Volumes paper.  I wasn’t able to figure it out in the hour I spent going over the paper and haven’t found the time to revisit it.

Update 7/5/2010

I’ve added a new post on how to solve the problem of culling objects that cast shadows.

Update 6/26/2011

I’ve been doing some additional research into generating occluders. It doesn’t completely solve it, but it’s a start. Further work is needed.

Update 4/13/2012

I’ve started a project to automatically generate the occluders to be used with Hi-Z occlusion culling, Oxel!
