About Video YUV

Here is an article excerpted from MSDN that introduces the YUV video data format.

About YUV Video

Digital video is often encoded in a YUV format. This article explains the general concepts of YUV video, along with some terminology, without going deeply into the mathematics of YUV video processing.

If you have worked with computer graphics, you are probably familiar with RGB color. An RGB color is encoded using three values: red, green, and blue. These values correspond directly to portions of the visible spectrum. The three RGB values form a mathematical
coordinate system, called a color space. The red component defines one axis of this coordinate system, blue defines the second, and green defines the third. Any valid RGB color falls somewhere within this color
space. For example, pure magenta is 100% blue, 100% red, and 0% green.

Although RGB is a common way to represent colors, other coordinate systems are possible. The term YUV refers to a family of color spaces, all of which encode brightness information separately from color information. Like RGB, YUV uses three values
to represent any color. These values are termed Y', U, and V. (In fact, this use of the term "YUV" is technically inaccurate. In computer video, the term YUV almost always refers to one particular color space named Y'CbCr, discussed later. However, YUV is
often used as a general term for any color space that works along the same principles as Y'CbCr.)

The Y' component, also called luma, represents the brightness value of the color. The prime symbol (') is used to differentiate luma from a closely related value, luminance, which is designated Y. Luminance is derived from linear RGB
values, whereas luma is derived from non-linear (gamma-corrected) RGB values. Luminance is a closer measure of true brightness, but luma is more practical to use for technical reasons. The prime symbol is frequently omitted, but YUV color spaces always
use luma, not luminance.
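
To make the distinction concrete, here is a small illustrative C sketch (not part of the MSDN article). It applies the BT.709 weights to linear RGB to get luminance and to gamma-corrected R'G'B' to get luma; the simple 1/2.2 power law is only a stand-in for the real transfer function.

/* Illustration only (not from the MSDN article): contrast luminance,
 * a weighted sum of linear RGB, with luma, the same weights applied to
 * gamma-corrected R'G'B'. A simple 1/2.2 power law stands in for the
 * real transfer function. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double r = 0.5, g = 0.5, b = 0.5;   /* a linear-light mid-gray */
    double luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b;

    /* Gamma-correct first, then take the same weighted sum: that is luma. */
    double rp = pow(r, 1.0 / 2.2), gp = pow(g, 1.0 / 2.2), bp = pow(b, 1.0 / 2.2);
    double luma = 0.2126 * rp + 0.7152 * gp + 0.0722 * bp;

    printf("luminance Y = %.3f, luma Y' = %.3f\n", luminance, luma);
    return 0;
}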

Luma is derived from an RGB color by taking a weighted average of the red, green, and blue components. For standard-definition television, the following formula is used:

Y' = 0.299R + 0.587G + 0.114B

This formula reflects the fact that the human eye is more sensitive to certain wavelengths of light than others, which affects the perceived brightness of a color. Blue light appears dimmest, green appears brightest, and red is somewhere in between. This formula
also reflects the physical characteristics of the phosphors used in early televisions. A newer formula, taking into account modern television technology, is used for high-definition television:

Y' = 0.2126R + 0.7152G + 0.0722B

The luma equation for standard-definition television is defined in a specification named ITU-R BT.601. For high-definition television, the relevant specification is ITU-R BT.709.
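
As a quick check of these formulas, the following sketch (not from the article; the function names are invented for this example) evaluates both luma equations on normalized, gamma-corrected RGB values and shows that green contributes far more to perceived brightness than blue:

/* A quick check of the two luma formulas (function names are invented for
 * this sketch). Inputs are gamma-corrected RGB values in [0,1]. */
#include <stdio.h>

static double luma_bt601(double r, double g, double b)
{
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

static double luma_bt709(double r, double g, double b)
{
    return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

int main(void)
{
    /* Pure green reads as much brighter than pure blue under both formulas. */
    printf("BT.601: green %.3f, blue %.3f\n",
           luma_bt601(0.0, 1.0, 0.0), luma_bt601(0.0, 0.0, 1.0));
    printf("BT.709: green %.3f, blue %.3f\n",
           luma_bt709(0.0, 1.0, 0.0), luma_bt709(0.0, 0.0, 1.0));
    return 0;
}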

The U and V components, also called chroma values or color difference values, are derived by subtracting the luma value Y' from the blue and red components of the original RGB color:

U = B - Y'

V = R - Y'

Together, these values contain enough information to recover the original RGB value.
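
For example, a minimal C sketch of this basic conversion and its inverse might look like the following (the type and function names are invented for illustration); it recovers the original RGB values exactly:

/* Minimal sketch of the basic RGB <-> YUV relationship above; the type and
 * function names are invented for this example. All values are in [0,1]. */
#include <stdio.h>

typedef struct { double y, u, v; } Yuv;

static Yuv rgb_to_yuv(double r, double g, double b)
{
    Yuv out;
    out.y = 0.299 * r + 0.587 * g + 0.114 * b;  /* BT.601 luma */
    out.u = b - out.y;                          /* U = B - Y' */
    out.v = r - out.y;                          /* V = R - Y' */
    return out;
}

static void yuv_to_rgb(Yuv in, double *r, double *g, double *b)
{
    *b = in.u + in.y;                           /* B = U + Y' */
    *r = in.v + in.y;                           /* R = V + Y' */
    /* Solve the luma equation for G. */
    *g = (in.y - 0.299 * (*r) - 0.114 * (*b)) / 0.587;
}

int main(void)
{
    double r, g, b;
    Yuv c = rgb_to_yuv(1.0, 0.0, 1.0);          /* pure magenta */
    yuv_to_rgb(c, &r, &g, &b);
    printf("Y'=%.3f U=%.3f V=%.3f -> R=%.3f G=%.3f B=%.3f\n",
           c.y, c.u, c.v, r, g, b);
    return 0;
}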

Benefits of YUV

Analog television uses YUV partly for historical reasons. Analog color television signals were designed to be backward compatible with black-and-white televisions. The color television signal carries the chroma information (U and V) superimposed onto the luma
signal. Black-and-white televisions ignore the chroma and display the combined signal as a grayscale image. (The signal is designed so that the chroma does not significantly interfere with the luma signal.) Color televisions can extract the chroma and convert
the signal back to RGB.

YUV has another advantage that is more relevant today. The human eye is less sensitive to changes in hue than to changes in brightness. As a result, an image can have less chroma information than luma information without sacrificing the perceived quality of the
image. For example, it is common to sample the chroma values at half the horizontal resolution of the luma samples. In other words, for every two luma samples in a row of pixels, there is one U sample and one V sample. Assuming that 8 bits are used to encode
each value, a total of 4 bytes are needed for every two pixels (two Y', one U, and one V), for an average of 16 bits per pixel, or 33% less than the equivalent 24-bit RGB encoding.
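
As a rough worked example (the 720x480 frame size is only an illustrative assumption), the following sketch compares the storage required for one frame in 24-bit RGB and in 4:2:2 YUV:

/* Back-of-the-envelope comparison; the 720x480 frame size is only an
 * illustrative assumption. */
#include <stdio.h>

int main(void)
{
    const long width = 720, height = 480;
    long rgb24  = width * height * 3;   /* 3 bytes per pixel                    */
    long yuv422 = width * height * 2;   /* 4 bytes per 2 pixels: 2 Y', 1 U, 1 V */

    printf("RGB24    : %ld bytes per frame\n", rgb24);
    printf("4:2:2 YUV: %ld bytes per frame (%.0f%% of RGB24)\n",
           yuv422, 100.0 * yuv422 / rgb24);
    return 0;
}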

YUV is not inherently any more compact than RGB. Unless the chroma is downsampled, a YUV pixel is the same size as an RGB pixel. Also, the conversion from RGB to YUV is not lossy. If there is no downsampling, a YUV pixel can be converted back to RGB with no
loss of information. Downsampling makes a YUV image smaller and also loses some of the color information. If performed correctly, however, the loss is not perceptually significant.

YUV in Computer Video

The formulas listed previously for YUV are not the exact conversions used in digital video. Digital video generally uses a form of YUV called Y'CbCr. Essentially, Y'CbCr works by scaling the YUV components to the following ranges:

Component   Range
Y'          16–235
Cb/Cr       16–240, with 128 representing zero

These ranges assume 8 bits of precision for the Y'CbCr components. Here is the exact derivation of Y'CbCr, using the BT.601 definition of luma:

  1. Start with RGB values in the range [0...1]. In other words, pure black is 0 and pure white is 1. Importantly, these are non-linear (gamma corrected) RGB values.
  2. Calculate the luma. For BT.601, Y' = 0.299R + 0.587G + 0.114B, as described earlier.
  3. Calculate the intermediate chroma difference values (B - Y') and (R - Y'). These values have a range of +/- 0.886 for (B - Y'), and +/- 0.701 for (R - Y').
  4. Scale the chroma difference values as follows:

    Pb = (0.5 / (1 - 0.114)) × (B - Y')

    Pr = (0.5 / (1 - 0.299)) × (R - Y')

    These scaling factors are designed to give both values the same numerical range, +/- 0.5. Together, they define a YUV color space named Y'PbPr. This color space is used in analog component video.

  5. Scale the Y'PbPr values to get the final Y'CbCr values:

    Y' = 16 + 219 × Y'

    Cb = 128 + 224 × Pb

    Cr = 128 + 224 × Pr

These last scaling factors produce the range of values listed in the previous table. Of course, you can convert RGB directly to Y'CbCr without storing the intermediate results. The steps are listed separately here to show how Y'CbCr derives from the original
YUV equations given at the beginning of this article.
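
The five steps can be collapsed into a single routine. The following C sketch is one possible implementation, assuming 8-bit gamma-corrected RGB input and simple rounding; the helper names are chosen for this example and do not come from any particular API:

/* One possible implementation of the five steps for 8-bit BT.601 Y'CbCr.
 * The helper names and the rounding choice are assumptions made for this
 * sketch; they do not come from any particular API. */
#include <math.h>
#include <stdio.h>

static unsigned char clamp8(double x)
{
    if (x < 0.0)   return 0;
    if (x > 255.0) return 255;
    return (unsigned char)lround(x);
}

static void rgb_to_ycbcr601(unsigned char r8, unsigned char g8, unsigned char b8,
                            unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    /* Step 1: normalize gamma-corrected RGB to [0,1]. */
    double r = r8 / 255.0, g = g8 / 255.0, b = b8 / 255.0;

    /* Step 2: BT.601 luma. */
    double yp = 0.299 * r + 0.587 * g + 0.114 * b;

    /* Steps 3-4: chroma differences, scaled to +/-0.5 (Y'PbPr). */
    double pb = (0.5 / (1.0 - 0.114)) * (b - yp);
    double pr = (0.5 / (1.0 - 0.299)) * (r - yp);

    /* Step 5: scale and offset to the 8-bit Y'CbCr ranges. */
    *y  = clamp8(16.0  + 219.0 * yp);
    *cb = clamp8(128.0 + 224.0 * pb);
    *cr = clamp8(128.0 + 224.0 * pr);
}

int main(void)
{
    unsigned char y, cb, cr;
    rgb_to_ycbcr601(255, 0, 0, &y, &cb, &cr);
    printf("Red   : Y'=%d Cb=%d Cr=%d\n", y, cb, cr);   /* expect 81 90 240   */
    rgb_to_ycbcr601(0, 255, 255, &y, &cb, &cr);
    printf("Cyan  : Y'=%d Cb=%d Cr=%d\n", y, cb, cr);   /* expect 170 166 16  */
    rgb_to_ycbcr601(255, 255, 255, &y, &cb, &cr);
    printf("White : Y'=%d Cb=%d Cr=%d\n", y, cb, cr);   /* expect 235 128 128 */
    return 0;
}

Its output matches the Red, Cyan, and White rows of the table below.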

The following table shows RGB and Y'CbCr values for various colors, again using the BT.601 definition of luma.

Color     R    G    B    Y'   Cb   Cr
Black     0    0    0    16   128  128
Red       255  0    0    81   90   240
Green     0    255  0    145  54   34
Blue      0    0    255  41   240  110
Cyan      0    255  255  170  166  16
Magenta   255  0    255  106  202  222
Yellow    255  255  0    210  16   146
White     255  255  255  235  128  128

As this table shows, Cb and Cr do not correspond to intuitive ideas about color. For example, pure white and pure black both contain neutral levels of Cb and Cr (128). The highest and lowest values for Cb are blue and yellow, respectively. For Cr, the highest
and lowest values are red and cyan.

For More Information

Related topics:
Video Media Types
Media Types

http://msdn.microsoft.com/en-us/library/windows/desktop/bb530104(v=vs.85).aspx
