OpenGL Tutorial (23): Shadow Mapping (2)

Original post: http://ogldev.atspace.co.uk/www/tutorial24/tutorial24.html
Background

In the previous tutorial we learned the basic principle behind the shadow mapping technique and saw how to render the depth into a texture and later display it on the screen by sampling from the depth buffer. In this tutorial we will see how to use this capability and display the shadow itself.

We know that shadow mapping is a two-pass technique and that in the first pass the scene is rendered from the point of view of the light. Let's review what happens to the Z component of the position vector during that first pass:

  1. The positions of the vertices that are fed into the vertex shader are generally specified in local space.
  2. The vertex shader transforms the position from local space to clip space and forwards it down the pipeline (see tutorial 12 if you need a refresher about clip space).
  3. The rasterizer performs the perspective divide (a division of the position vector by its W component). This takes the position vector from clip space to NDC space. In NDC space everything which ends up on the screen has X, Y and Z components in the range [-1,1]. Things outside this range are clipped away.
  4. The rasterizer maps the X and Y of the position vector to the dimensions of the framebuffer (e.g. 800x600, 1024x768, etc). The results are the screen space coordinates of the position vector.
  5. The rasterizer takes the screen space coordinates of the three triangle vertices and interpolates them to create the unique coordinates for each pixel that the triangle covers. The Z value (still in the [-1,1] range) is also interpolated so every pixel has its own depth.
  6. Since we disabled color writes in the first pass, the fragment shader is disabled. The depth test, however, still executes. To compare the Z value of the current pixel with the one already in the buffer, the screen space coordinates of the pixel are used to fetch the stored depth. If the depth of the new pixel is smaller than the stored one, the buffer is updated (and if color writes were enabled, the color buffer would have been updated as well).

In the process above we saw how the depth value from the light point of view is calculated and stored. In the second pass we render from the camera point of view so naturally we get a different depth. But we need both depth values - one to get the triangles ordered correctly on the screen and the other to check what is inside the shadow and what is not. The trick in shadow mapping is to maintain two position vectors and two WVP matrices while traveling through the 3D pipeline. One WVP matrix is calculated from the light point of view and the other from the camera point of view. The vertex shader gets one position vector in local space as usual, but it outputs two vectors:

  1. The built-in gl_Position, which is the result of transforming the position by the camera's WVP matrix.
  2. A "plain" vector which is the result of transforming the position by the light WVP matrix.

The first vector will go through the process above (clip space --> NDC space, etc.) and will be used for the regular rasterization. The second vector will simply be interpolated by the rasterizer across the triangle face, and each fragment shader invocation will be provided with its own value. So for each physical pixel we also have a clip space coordinate of the same point in the original triangle as seen from the light point of view. It is very likely that the physical pixels from the two points of view are different, but the general location in the triangle is the same. All that remains is to somehow use that clip space coordinate to fetch the depth value from the shadow map. After that we can compare the stored depth to the depth in the clip space coordinate, and if the stored depth is smaller it means the pixel is in shadow (because another pixel projected to the same shadow map location with a smaller depth).

So how can we fetch the depth in the fragment shader using the clip space coordinate that was calculated by transforming the position by the light WVP matrix? When we start out we are basically in step 2 above.

  1. Since the fragment shader receives the clip space coordinate as a standard vertex attribute, the rasterizer does not perform the perspective divide on it (the divide is applied only to what goes through gl_Position). But this is something that is very easy to do manually in the shader: we divide the coordinate by its W component and get a coordinate in NDC space.
  2. We know that in NDC the X and Y range from -1 to 1. In step 4 above the rasterizer maps the NDC coordinates to screen space and uses them to store the depth. We are going to sample the depth, and for that we need a texture coordinate in the range [0,1]. If we linearly map the range [-1,1] to [0,1] we will get a texture coordinate that maps to the same location in the shadow map. For example, suppose X in NDC is zero and the width of the texture is 800. Zero in NDC needs to be mapped to 0.5 in texture coordinate space (because it is halfway between -1 and 1). The texture coordinate 0.5 maps to texel 400, which is the same location the rasterizer calculates when it performs the screen space transform.
  3. Transforming X and Y from NDC space to texture space is therefore done as follows (a small worked example in code follows this list):
    • u = 0.5 * X + 0.5
    • v = 0.5 * Y + 0.5
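
To make the mapping concrete, here is a minimal standalone sketch in C++ (not part of the tutorial's sources; the input values are made up for illustration). It performs the manual perspective divide and the [-1,1] to [0,1] remap, then checks the worked example above, where X = 0 in NDC on an 800-texel-wide shadow map lands on texel 400:

#include <cassert>
#include <cstdio>

// NDC -> texture space remap from the formulas above: [-1,1] -> [0,1].
static float NdcToUV(float ndc)
{
    return 0.5f * ndc + 0.5f;
}

int main()
{
    // A hypothetical clip space position (x, y, z, w), as produced by
    // multiplying a local space position by the light WVP matrix.
    float clip[4] = { 0.0f, 1.2f, 2.4f, 3.0f };

    // Manual perspective divide: clip space -> NDC space.
    float ndcX = clip[0] / clip[3]; // 0.0
    float ndcY = clip[1] / clip[3]; // 0.4
    float ndcZ = clip[2] / clip[3]; // 0.8

    // Remap X/Y to texture coordinates and Z to the [0,1] depth range.
    float u = NdcToUV(ndcX); // 0.5
    float v = NdcToUV(ndcY); // 0.7
    float z = NdcToUV(ndcZ); // 0.9

    // The worked example from the text: u = 0.5 on an 800-texel-wide
    // shadow map corresponds to texel 400, the same location that the
    // rasterizer's screen space transform produces.
    const int TextureWidth = 800;
    assert((int)(u * TextureWidth) == 400);

    printf("u=%.2f v=%.2f z=%.2f\n", u, v, z);
    return 0;
}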
Source walkthru

(lighting_technique.h:80)

class LightingTechnique : public Technique {
public:
    ...
    void SetLightWVP(const Matrix4f& LightWVP);
    void SetShadowMapTextureUnit(unsigned int TextureUnit);
    ...
private:
    GLuint m_LightWVPLocation;
    GLuint m_shadowMapLocation;
    ...
};

The lighting technique needs a couple of new attributes: a WVP matrix calculated from the light point of view and a texture unit for the shadow map. We will continue using texture unit 0 for the regular texture that is mapped on the object and will dedicate texture unit 1 to the shadow map.
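
The tutorial does not list the setter implementations at this point. The following is a plausible sketch that follows the pattern used throughout the series; the GetUniformLocation() helper and the row-major Matrix4f (hence GL_TRUE for the transpose flag) are assumptions carried over from the earlier tutorials, not code quoted here:

// A sketch, not verbatim tutorial source. Assumes the locations were
// fetched once during initialization, e.g.:
//   m_LightWVPLocation = GetUniformLocation("gLightWVP");
//   m_shadowMapLocation = GetUniformLocation("gShadowMap");

void LightingTechnique::SetLightWVP(const Matrix4f& LightWVP)
{
    // GL_TRUE transposes on upload because Matrix4f is stored row-major.
    glUniformMatrix4fv(m_LightWVPLocation, 1, GL_TRUE, (const GLfloat*)LightWVP.m);
}

void LightingTechnique::SetShadowMapTextureUnit(unsigned int TextureUnit)
{
    // The sampler uniform takes the index of the texture unit (1 here),
    // not the GL_TEXTURE1 enum value.
    glUniform1i(m_shadowMapLocation, TextureUnit);
}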

(lighting.vs)

#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
uniform mat4 gWVP;
uniform mat4 gLightWVP;
uniform mat4 gWorld;
out vec4 LightSpacePos;
out vec2 TexCoord0;
out vec3 Normal0;
out vec3 WorldPos0;
void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    LightSpacePos = gLightWVP * vec4(Position, 1.0);
    TexCoord0 = TexCoord;
    Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
    WorldPos0 = (gWorld * vec4(Position, 1.0)).xyz;
}

This is the updated vertex shader of the LightingTechnique class. The additions are the gLightWVP uniform and the LightSpacePos output: an extra WVP matrix uniform variable and a 4-vector output containing the clip space coordinates obtained by transforming the position by the light's WVP matrix. As you can see, in the vertex shader of the first pass the variable gWVP contained the same matrix as gLightWVP here, so gl_Position there received the same value that LightSpacePos gets here. But since LightSpacePos is just a standard output vector it does not undergo an automatic perspective division the way gl_Position does. We will do this manually in the fragment shader below.
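
To see where these uniforms get their values, here is a hedged sketch of the application side of the second pass. The helper names (ShadowMapFBO::BindForReading, Pipeline, SetWVP, and the spot light members) follow the conventions of the previous tutorials in this series and are assumptions, not code quoted from this tutorial:

// A sketch of the render (second) pass, assuming ogldev-style helpers.
void RenderPass()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    m_pLightingEffect->Enable();

    // The shadow map written in the first pass is bound for reading on
    // texture unit 1; unit 0 keeps the object's regular texture.
    m_shadowMapFBO.BindForReading(GL_TEXTURE1);
    m_pLightingEffect->SetShadowMapTextureUnit(1);

    Pipeline p;
    p.SetPerspectiveProj(m_persProjInfo);
    p.WorldPos(0.0f, 0.0f, 3.0f);

    // One WVP per point of view: the camera drives gWVP...
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    m_pLightingEffect->SetWVP(p.GetWVPTrans());

    // ...and the light drives gLightWVP, reusing the same world transform.
    p.SetCamera(m_spotLight.Position, m_spotLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    m_pLightingEffect->SetLightWVP(p.GetWVPTrans());

    m_pMesh->Render();
}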

(lighting.fs:58)

float CalcShadowFactor(vec4 LightSpacePos)
{
    // Manual perspective divide: clip space -> NDC space.
    vec3 ProjCoords = LightSpacePos.xyz / LightSpacePos.w;

    // Remap X/Y from [-1,1] to the [0,1] texture coordinate range.
    vec2 UVCoords;
    UVCoords.x = 0.5 * ProjCoords.x + 0.5;
    UVCoords.y = 0.5 * ProjCoords.y + 0.5;

    // Remap Z to [0,1] as well so it lives in the same range as the
    // depth stored in the shadow map.
    float z = 0.5 * ProjCoords.z + 0.5;
    float Depth = texture(gShadowMap, UVCoords).x;

    // Bias the comparison by a small epsilon so floating point precision
    // errors do not make a lit pixel shadow itself ("shadow acne").
    if (Depth < z - 0.00001)
        return 0.5;
    else
        return 1.0;
}

This function is used in the fragment shader to calculate the shadow factor of a pixel. The shadow factor is a new factor in the light equation: we simply multiply the result of our current light equation by it, which attenuates the light for pixels that are determined to be shadowed. The function takes the interpolated LightSpacePos vector that was passed from the vertex shader. The first step is to perform the perspective divide: we divide the XYZ components by the W component, which takes the vector into NDC space. Next we prepare a 2D coordinate vector to serve as the texture coordinate, initializing it by transforming LightSpacePos from NDC to texture space according to the equations in the background section. The texture coordinates are used to fetch the depth from the shadow map. This is the depth of the closest point among all the points in the scene that project onto this texel. We compare that depth to the depth of the current pixel, and if it is smaller we return a shadow factor of 0.5; otherwise the shadow factor is 1.0 (no shadow). The Z from NDC space also goes through a transformation from the [-1,1] range to the [0,1] range, because the two depths must be in the same space when we compare them. Notice that we bias the comparison by a small epsilon, requiring the stored depth to be smaller than the current pixel's depth by a margin; this avoids the self-shadowing artifacts that floating point precision errors would otherwise cause.
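
Finally, to show where the factor plugs into the light equation, here is a hedged GLSL sketch. The PointLight structure, CalcLightInternal() and the attenuation terms stand in for the existing lighting code from earlier tutorials and are assumptions, not code quoted from this tutorial:

// A sketch of how the shadow factor feeds into the point light equation.
vec4 CalcPointLight(PointLight l, vec3 Normal, vec4 LightSpacePos)
{
    vec3 LightDirection = WorldPos0 - l.Position;
    float Distance = length(LightDirection);
    LightDirection = normalize(LightDirection);

    // 1.0 for lit pixels, 0.5 for shadowed ones.
    float ShadowFactor = CalcShadowFactor(LightSpacePos);

    // CalcLightInternal() stands in for the usual diffuse/specular math.
    vec4 Color = CalcLightInternal(l.Base, LightDirection, Normal);
    float Attenuation = l.Atten.Constant +
                        l.Atten.Linear * Distance +
                        l.Atten.Exp * Distance * Distance;

    // The shadow factor simply scales the lit color.
    return ShadowFactor * Color / Attenuation;
}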
