Bump Mapping (凹凸映射)

    Bump mapping is very much like Texture Mapping. However, where Texture Mapping added colour to a polygon, Bump Mapping adds what appears to be surface roughness. This can have a dramatic effect on the look of a polygonal object. Bump Mapping can add minute detail to an object which would otherwise require a large number of polygons. Note that the polygon is still physically flat, but appears to be bumpy.

  Take a look at the cube on the left. If you look closely, you can see lots of detail on it. It looks as if it must have been made from millions of tiny polygons, but it is made from just 6. You might ask how this differs from Texture Mapping. The difference is that a Bump Map is a Texture Map that responds to the direction of the light.

The theory behind Bump Mapping

Take a close look at a rough surface. From a distance, the only way you know it is rough is by the fact that its brightness changes up and down across its surface. Your brain can pick out these bright and dark patterns and interpret them as bumps on the surface.

The little picture on the left illustrates this. You see what looks like an embossed surface. Some rectangles and letters have been pressed into it, but if you touch it, it just feels like the glass of your monitor. Nothing more has been done than change the brightness of the image in just the right places; your brain does the rest. This technique can be used to add real feeling to a polygon.

So how did I know which bits to make bright, and which to make dark? It's easy. Most people spend their lives in an environment where the main light source is above them (except us spods of course, whose main light source comes from the monitor). So surfaces angled upwards tend to be brightly lit, and downward-inclined surfaces tend to be darker. Therefore it follows that if your eyes see light and dark areas on an object, they will interpret them as bumps; lighter bits are taken as up-facing, and darker bits as down-facing. So, I just coloured the lines on the image accordingly.

As if you needed any more evidence, here is exactly the same image, but rotated 180 degrees. It appears to be the inverse of the previous one. Those areas that appeared to be pushed in, now seem to have popped out, and vice-versa.

Now, your brain is not entirely stupid. If you had visual evidence that the inverted image was lit from underneath, your brain would again interpret it as the first image. In fact, if you stare, and think hard enough about a light source coming from the bottom right, you can make that happen.


What is a Bump Map

A bump map is very much like a texture map. However, rather than containing colours, it contains bumps. The most common way to represent bumps is the height field method. A greyscale texture map is used, where the brightness of each pixel represents how much it sticks out from the surface (see image on right). This is a very convenient way to store a bump map, and it's simple to make. How this information is used by the renderer will become apparent later. Of course, you needn't limit yourself to such simple patterns. You can have wood, stone, peeling paint, anything you want.
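To make the height-field idea concrete, here is a minimal C++ sketch of how such a map might be stored and read back. The names (BumpMap, heightAt) and the 8-bit greyscale layout are assumptions for illustration, not code from any particular renderer:

#include <vector>
#include <cstdint>
#include <algorithm>

struct BumpMap {
    int width  = 0;
    int height = 0;
    std::vector<std::uint8_t> texels;   // one byte per pixel: 0 = lowest, 255 = highest

    // Height at (x, y) as a 0..1 value, clamping coordinates to the map edges.
    float heightAt(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return texels[y * width + x] / 255.0f;
    }
};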

So how's it done

  Bump mapping is an extension of the Phong Shading technique. In Phong Shading, the surface normal was interpolated over the polygon, and that vector was used to calculate the brightness of that pixel. When you add bump mapping, you are altering the normal vector slightly, based on information in the bump map. Adjusting the normal vector causes changes in the brightness of the pixels in the polygon. Simple.
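As a rough sketch of the shading part on its own, assuming a single directional light and a simple diffuse-only model (the names Vec3, phongDiffuse and so on are made up for illustration, not the author's code), the per-pixel brightness before any bumps looks something like this; bump mapping will simply nudge the normal just before that dot product:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Diffuse brightness at one pixel: 0 when the surface faces away from the
// light, 1 when it faces the light head-on.
float phongDiffuse(Vec3 interpolatedNormal, Vec3 lightDir) {
    return std::max(0.0f, dot(normalize(interpolatedNormal), normalize(lightDir)));
}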

  Now, there are several ways of achieving this. I have never actually programmed real Phong shading or bump mapping, only the fast versions (which work very nicely, thank you), so I am kind of making this next bit up as I go along. Bear with me.

  OK, so we need a method for converting the height information on the bump map into vector adjustment information for the Phong shader. This is not so hard to do, but it might be tricky to explain.

  OK, so first you'll need a way to convert the bumps on the bumpmap into little vectors, one vector for each pixel. Take a look at the zoomed-in view of a bumpmap on the left. The lighter pixels stick out more than the darker ones. Get the picture? Now, for each pixel, a vector must be computed. These vectors represent the incline of the surface at that pixel. The picture on the right represents this. The little red vectors point in the 'downhill' direction.

  There are many ways to calculate these vectors. Some are more accurate than others, but it depends on exactly what you mean by accurate. One of the most common methods is to calculate the X and Y gradient at that pixel:

x_gradient = pixel(x-1, y) - pixel(x+1, y)

y_gradient = pixel(x, y-1) - pixel(x, y+1)
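Here is a small sketch of that gradient step in C++, reusing the BumpMap::heightAt helper assumed in the earlier sketch (the clamped lookups keep the map edges well behaved):

struct Gradient { float x, y; };

// Central differences, exactly as in the two formulas above: compare the
// neighbour on one side of the pixel with the neighbour on the other side.
Gradient gradientAt(const BumpMap& bump, int x, int y) {
    Gradient g;
    g.x = bump.heightAt(x - 1, y) - bump.heightAt(x + 1, y);
    g.y = bump.heightAt(x, y - 1) - bump.heightAt(x, y + 1);
    return g;
}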

With these two gradients, you will now need to adjust the normal vector of the polygon at that point. Here is the polygon, with its original normal vector, n. Also shown are the two vectors which are going to be used to adjust the normal vector for this pixel. The two vectors must be aligned with the bumpmap for the polygon to be rendered correctly, i.e. the vectors are parallel to the axes of the bumpmap.

On the right are the bump map and the polygon. Both pictures show the U and V vectors.

  Now you can see the new Normal vector after adjustment. The adjustment is simply:

  New_Normal = Normal + (U * x_gradient) + (V * y_gradient)

  With this New_Normal vector, you can proceed to calculate the brightness of the polygon at that point, using the usual Phong shading technique.
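As a hedged sketch of the whole adjustment, here it is in code, reusing Vec3, normalize and phongDiffuse from the earlier sketches and gradientAt from above. U and V are the polygon's tangent vectors aligned with the bump map axes, and the bumpStrength factor is an extra assumption (not in the formula above) that controls how pronounced the bumps look:

// New_Normal = Normal + (U * x_gradient) + (V * y_gradient), then re-normalised
// so the lighting step sees a unit vector. bumpStrength is an assumed scale
// factor added here for illustration.
Vec3 bumpMappedNormal(Vec3 normal, Vec3 U, Vec3 V, Gradient g, float bumpStrength) {
    Vec3 n = {
        normal.x + bumpStrength * (U.x * g.x + V.x * g.y),
        normal.y + bumpStrength * (U.y * g.x + V.y * g.y),
        normal.z + bumpStrength * (U.z * g.x + V.z * g.y),
    };
    return normalize(n);
}

// Usage at one pixel, combining the earlier sketches:
//   Gradient g = gradientAt(bump, x, y);
//   Vec3 n = bumpMappedNormal(interpolatedNormal, U, V, g, 1.0f);
//   float brightness = phongDiffuse(n, lightDir);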

