【Notes】Rendering YUV Data (NV12/I420) on Android

【Scenario】

The goal is to add H.264 decoding and YUV display to an Android app. Decoding uses AMediaCodec and is not covered here.

Rendering is done with OpenGL ES 2.0. Plenty of solutions can be found online, but most of them, including WebRTC's renderer, target I420 (three-plane YUV420); very few cover NV12 (perhaps my search skills are just poor). It took two days to find a correct approach, so it is recorded here.

【Approach】

How OpenGL ES renders NV12:

create two textures, one for the Y plane and one for the interleaved UV plane;

convert NV12 to RGB in the fragment shader; the GPU renders the result, which is finally shown on an Android GLSurfaceView.

How OpenGL ES renders I420:

create three textures, one each for the Y, U and V planes;

convert I420 to RGB in the fragment shader; the GPU renders the result, which is finally shown on an Android GLSurfaceView.

* How to use GLSurfaceView itself is not covered here; only the JNI-layer renderer is described. A sketch of the two memory layouts follows.
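
To make the difference concrete, here is a minimal sketch of where each plane starts inside one tightly packed frame buffer (the struct and function names are mine, not part of the renderer); the offsets match what UpdateTextures in the code below uses:

#include <cstdint>

// A tightly packed YUV420 frame: width * height luma bytes followed by the chroma data,
// for a total of width * height * 3 / 2 bytes.

// NV12 (COLOR_FormatYUV420SemiPlanar): one interleaved chroma plane U0 V0 U1 V1 ...
struct Nv12Planes { const uint8_t* y; const uint8_t* uv; };
static Nv12Planes SplitNv12(const uint8_t* frame, int width, int height) {
    return { frame, frame + width * height };            // uv plane: width/2 x height/2 pairs
}

// I420 (COLOR_FormatYUV420Planar): two separate chroma planes, U first, then V.
struct I420Planes { const uint8_t* y; const uint8_t* u; const uint8_t* v; };
static I420Planes SplitI420(const uint8_t* frame, int width, int height) {
    const uint8_t* u = frame + width * height;           // width/2 x height/2
    const uint8_t* v = u + (width / 2) * (height / 2);   // equals frame + width * height * 5 / 4
    return { frame, u, v };
}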

【Code】

The code is based on the YUV rendering code in WebRTC from the Android source tree, with a check on the YUV color format and the NV12-specific logic added (those additions were highlighted in red in the original post).

/*
 *  Copyright (c) 2012 The WebRTC project authors. All Rights Reserved.
 *
 *  Use of this source code is governed by a BSD-style license
 *  that can be found in the LICENSE file in the root of the source
 *  tree. An additional intellectual property rights grant can be found
 *  in the file PATENTS.  All contributing project authors may
 *  be found in the AUTHORS file in the root of the source tree.
 */
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>  // for memcpy

extern "C" {
#include "w_log.h"
}
#include "render_init.h"
#include "render_opengles20.h"

int32_t localColorFMT;
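// localColorFMT is assumed to be set by the caller before Render() is invoked, using the
// Android MediaCodec color-format constants: 19 = COLOR_FormatYUV420Planar (I420) and
// 21 = COLOR_FormatYUV420SemiPlanar (the NV12 case handled in this post).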
const char RenderOpenGles20::g_indices[] = { 0, 3, 2, 0, 2, 1 };

const char RenderOpenGles20::g_vertextShader[] = {
    "attribute vec4 aPosition;\n"
    "attribute vec2 aTextureCoord;\n"
    "varying vec2 vTextureCoord;\n"
    "void main() {\n"
    "  gl_Position = aPosition;\n"
    "  vTextureCoord = aTextureCoord;\n"
    "}\n" };

// The fragment shader.
// Does the YUV to RGB conversion; colorFMT selects NV12 (two-plane) or I420 (three-plane) sampling.
const char RenderOpenGles20::g_fragmentShader[] = {
    "precision mediump float;\n"
    "uniform sampler2D Ytex;\n"
    "uniform sampler2D Utex,Vtex;\n"
    "varying vec2 vTextureCoord;\n"
    "uniform int colorFMT;"
    "void main(void) {\n"
    "  float nx,ny,r,g,b,y,u,v;\n"
    "  mediump vec4 txl,ux,vx;"
    "  nx=vTextureCoord[0];\n"
    "  ny=vTextureCoord[1];\n"

    "  if (colorFMT == 21){"
    "  y=texture2D(Ytex,vec2(nx,ny)).r;\n"
    "  u=texture2D(Utex,vec2(nx,ny)).r;\n"
    "  v=texture2D(Utex,vec2(nx,ny)).a;\n"
    "  }"
    "  else {"
    "  y=texture2D(Ytex,vec2(nx,ny)).r;\n"
    "  u=texture2D(Utex,vec2(nx,ny)).r;\n"
    "  v=texture2D(Vtex,vec2(nx,ny)).r;\n"
    "  }"

    //"  y = v;\n"+
    "  y=1.1643*(y-0.0625);\n"
    "  u=u-0.5;\n"
    "  v=v-0.5;\n"

    "  r=y+1.5958*v;\n"
    "  g=y-0.39173*u-0.81290*v;\n"
    "  b=y+2.017*u;\n"
    "  gl_FragColor=vec4(r,g,b,1.0);\n"
    "}\n" };

RenderOpenGles20::RenderOpenGles20() :
_id(0),
_textureWidth(-1),
_textureHeight(-1),
_colorFMT(-1)
{
    LOGI("%s: id %d", __FUNCTION__, (int) _id);

    const GLfloat vertices[20] = {
        // X, Y, Z, U, V
        1, -1, 0, 1, 0, // Bottom Left
        -1, -1, 0, 0, 0, //Bottom Right
        -1, 1, 0, 0, 1, //Top Right
        1, 1, 0, 1, 1 }; //Top Left

    memcpy(_vertices, vertices, sizeof(_vertices));
}

RenderOpenGles20::~RenderOpenGles20() {
    glDeleteTextures(3, _textureIds);
}

int32_t RenderOpenGles20::SetRotateMode(){
    int32_t zOrder = 0;
    GLfloat left = 1;
    GLfloat right = 0;
    GLfloat top = 0;
    GLfloat bottom = 1; // rotate

    LOGI("Should rotate");
    SetCoordinates(zOrder, left, top, right, bottom);

    // set the vertices array in the shader
    // _vertices contains 4 vertices with 5 coordinates.
    // 3 for (xyz) for the vertices and 2 for the texture
    glVertexAttribPointer(_positionHandle, 3, GL_FLOAT, false,
                          5 * sizeof(GLfloat), _vertices);
    checkGlError("glVertexAttribPointer aPosition");

    glEnableVertexAttribArray(_positionHandle);
    checkGlError("glEnableVertexAttribArray positionHandle");

    // set the texture coordinate array in the shader
    // _vertices contains 4 vertices with 5 coordinates.
    // 3 for (xyz) for the vertices and 2 for the texture
    glVertexAttribPointer(_textureHandle, 2, GL_FLOAT, false, 5
                          * sizeof(GLfloat), &_vertices[3]);
    checkGlError("glVertexAttribPointer maTextureHandle");
    glEnableVertexAttribArray(_textureHandle);
    checkGlError("glEnableVertexAttribArray textureHandle");

    LOGI("Rotate Done");
}

int32_t RenderOpenGles20::SetFlags(uint32_t flags){
    LOGI("Flags: %d", flags);
    if (0 == (flags & FLAG_ROTATE)){
        SetRotateMode();
    }
    return 0;
}

int32_t RenderOpenGles20::Setup(int32_t width, int32_t height) {
    LOGE("%s: width %d, height %d", __FUNCTION__, (int) width,
                 (int) height);

    printGLString("Version", GL_VERSION);
    printGLString("Vendor", GL_VENDOR);
    printGLString("Renderer", GL_RENDERER);
    printGLString("Extensions", GL_EXTENSIONS);

    int maxTextureImageUnits[2];
    int maxTextureSize[2];
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, maxTextureImageUnits);
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, maxTextureSize);

    LOGE("%s: number of textures %d, size %d", __FUNCTION__,
                 (int) maxTextureImageUnits[0], (int) maxTextureSize[0]);

    _program = createProgram(g_vertextShader, g_fragmentShader);
    if (!_program) {
        LOGE("%s: Could not create program", __FUNCTION__);
        return -1;
    }

    int positionHandle = glGetAttribLocation(_program, "aPosition");
    checkGlError("glGetAttribLocation aPosition");
    if (positionHandle == -1) {
        LOGE("%s: Could not get aPosition handle", __FUNCTION__);
        return -1;
    }
    _positionHandle = positionHandle;

    int textureHandle = glGetAttribLocation(_program, "aTextureCoord");
    checkGlError("glGetAttribLocation aTextureCoord");
    if (textureHandle == -1) {
        LOGE("%s: Could not get aTextureCoord handle", __FUNCTION__);
        return -1;
    }
    _textureHandle = textureHandle;

    // set the vertices array in the shader
    // _vertices contains 4 vertices with 5 coordinates.
    // 3 for (xyz) for the vertices and 2 for the texture
    glVertexAttribPointer(positionHandle, 3, GL_FLOAT, false,
                          5 * sizeof(GLfloat), _vertices);
    checkGlError("glVertexAttribPointer aPosition");

    glEnableVertexAttribArray(positionHandle);
    checkGlError("glEnableVertexAttribArray positionHandle");

    // set the texture coordinate array in the shader
    // _vertices contains 4 vertices with 5 coordinates.
    // 3 for (xyz) for the vertices and 2 for the texture
    glVertexAttribPointer(textureHandle, 2, GL_FLOAT, false, 5
                          * sizeof(GLfloat), &_vertices[3]);
    checkGlError("glVertexAttribPointer maTextureHandle");
    glEnableVertexAttribArray(textureHandle);
    checkGlError("glEnableVertexAttribArray textureHandle");

    glUseProgram(_program);
    int i = glGetUniformLocation(_program, "Ytex");
    checkGlError("glGetUniformLocation");
    glUniform1i(i, 0); /* Bind Ytex to texture unit 0, i.e. assign the shader's Ytex sampler */
    checkGlError("glUniform1i Ytex");

    i = glGetUniformLocation(_program, "Utex");
    checkGlError("glGetUniformLocation Utex");
    glUniform1i(i, 1); /* Bind Utex to texture unit 1, i.e. assign the shader's Utex sampler */
    checkGlError("glUniform1i Utex");

    i = glGetUniformLocation(_program, "Vtex");
    checkGlError("glGetUniformLocation");
    glUniform1i(i, 2); /* Bind Vtex to texture unit 2, i.e. assign the shader's Vtex sampler */
    checkGlError("glUniform1i Vtex");

    glViewport(0, 0, width, height); // viewport size; set only once, here in Setup()
    LOGE("ViewPort:%d %d", width , height);
    checkGlError("glViewport");
    return 0;
}

// SetCoordinates
// Sets the coordinates where the stream shall be rendered.
// Values must be between 0 and 1.
int32_t RenderOpenGles20::SetCoordinates(int32_t zOrder,
                                         const float left,
                                         const float top,
                                         const float right,
                                         const float bottom) {
    if ((top > 1 || top < 0) || (right > 1 || right < 0) ||
        (bottom > 1 || bottom < 0) || (left > 1 || left < 0)) {
        LOGE("%s: Wrong coordinates", __FUNCTION__);
        return -1;
    }

    //  X, Y, Z, U, V
    // -1, -1, 0, 0, 1, // Bottom Left
    //  1, -1, 0, 1, 1, //Bottom Right
    //  1,  1, 0, 1, 0, //Top Right
    // -1,  1, 0, 0, 0  //Top Left

    // Bottom Left
    _vertices[0] = (left * 2) - 1;
    _vertices[1] = -1 * (2 * bottom) + 1;
    _vertices[2] = zOrder;

    //Bottom Right
    _vertices[5] = (right * 2) - 1;
    _vertices[6] = -1 * (2 * bottom) + 1;
    _vertices[7] = zOrder;

    //Top Right
    _vertices[10] = (right * 2) - 1;
    _vertices[11] = -1 * (2 * top) + 1;
    _vertices[12] = zOrder;

    //Top Left
    _vertices[15] = (left * 2) - 1;
    _vertices[16] = -1 * (2 * top) + 1;
    _vertices[17] = zOrder;

    return 0;
}
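
// Usage note (hypothetical call, not made anywhere in this file):
//   SetCoordinates(0, 0.0f /*left*/, 0.0f /*top*/, 1.0f /*right*/, 1.0f /*bottom*/);
// maps the stream onto the full view. SetRotateMode() above passes left=1, right=0,
// top=0, bottom=1, which swaps the quad's left and right edges and therefore draws
// the picture horizontally mirrored.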

GLuint RenderOpenGles20::loadShader(GLenum shaderType, const char* pSource)
{
    GLuint shader = glCreateShader(shaderType);
    if (shader) {
        glShaderSource(shader, 1, &pSource, NULL);
        glCompileShader(shader);
        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
            if (infoLen) {
                char* buf = (char*) malloc(infoLen);
                if (buf) {
                    glGetShaderInfoLog(shader, infoLen, NULL, buf);
                    LOGE("%s: Could not compile shader %d: %s",
                                 __FUNCTION__, shaderType, buf);
                    free(buf);
                }
            }
            // Delete the failed shader even when no info log is available.
            glDeleteShader(shader);
            shader = 0;
        }
    }
    return shader;
}

GLuint RenderOpenGles20::createProgram(const char* pVertexSource,
                                       const char* pFragmentSource) {
    GLuint vertexShader = loadShader(GL_VERTEX_SHADER, pVertexSource);
    if (!vertexShader) {
        return 0;
    }

    GLuint pixelShader = loadShader(GL_FRAGMENT_SHADER, pFragmentSource);
    if (!pixelShader) {
        glDeleteShader(vertexShader);
        return 0;
    }

    GLuint program = glCreateProgram();
    if (program) {
        glAttachShader(program, vertexShader);
        checkGlError("glAttachShader");
        glAttachShader(program, pixelShader);
        checkGlError("glAttachShader");
        glLinkProgram(program);
        GLint linkStatus = GL_FALSE;
        glGetProgramiv(program, GL_LINK_STATUS, &linkStatus);
        if (linkStatus != GL_TRUE) {
            GLint bufLength = 0;
            glGetProgramiv(program, GL_INFO_LOG_LENGTH, &bufLength);
            if (bufLength) {
                char* buf = (char*) malloc(bufLength);
                if (buf) {
                    glGetProgramInfoLog(program, bufLength, NULL, buf);
                    LOGE("%s: Could not link program: %s",
                                 __FUNCTION__, buf);
                    free(buf);
                }
            }
            glDeleteProgram(program);
            program = 0;
        }
    }
    return program;
}

void RenderOpenGles20::printGLString(const char *name, GLenum s) {
    const char *v = (const char *) glGetString(s);
    LOGI("GL %s = %s\n", name, v);
}

void RenderOpenGles20::checkGlError(const char* op) {
#ifdef ANDROID_LOG
    for (GLint error = glGetError(); error; error = glGetError()) {
        LOGE("after %s() glError (0x%x)\n", op, error);
    }
#else
    return;
#endif
}

static void InitializeTexture(int name, int id, int width, int height, uint32_t format) {
    glActiveTexture(name);
    glBindTexture(GL_TEXTURE_2D, id);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, format, width, height, 0,
                 format, GL_UNSIGNED_BYTE, NULL);
}

// Uploads a plane of pixel data, accounting for stride != width*bpp.
// (Not used by the render path below, which assumes tightly packed planes.)
static void GlTexSubImage2D(GLsizei width, GLsizei height, int stride,
                            const uint8_t* plane) {
    if (stride == width) {
        // Yay!  We can upload the entire plane in a single GL call.
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_LUMINANCE,
                        GL_UNSIGNED_BYTE,
                        static_cast<const GLvoid*>(plane));
    } else {
        // Boo!  Since GLES2 doesn't have GL_UNPACK_ROW_LENGTH and Android doesn't
        // have GL_EXT_unpack_subimage we have to upload a row at a time.  Ick.
        for (int row = 0; row < height; ++row) {
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, row, width, 1, GL_LUMINANCE,
                            GL_UNSIGNED_BYTE,
                            static_cast<const GLvoid*>(plane + (row * stride)));
        }
    }
}

int32_t RenderOpenGles20::Render(void * data, int32_t width, int32_t height)
{
    LOGI("%s: id %d", __FUNCTION__, (int) _id);

    glUseProgram(_program);
    checkGlError("glUseProgram");

    if (_colorFMT != localColorFMT || _textureWidth != (GLsizei) width || _textureHeight != (GLsizei) height) {
        SetupTextures(width, height);
    }
    UpdateTextures(data, width, height);

    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, g_indices);
    checkGlError("glDrawElements");

    return 0;
}

void RenderOpenGles20::SetupTextures(int32_t width, int32_t height)
{
    glDeleteTextures(3, _textureIds);
    glGenTextures(3, _textureIds); //Generate  the Y, U and V texture
    InitializeTexture(GL_TEXTURE0, _textureIds[0], width, height, GL_LUMINANCE);

    GLint i = glGetUniformLocation(_program, "colorFMT");
    checkGlError("glGetUniformLocation colorFMT");
    glUniform1i(i, localColorFMT); // pass the current color format to the shader's colorFMT uniform
    checkGlError("glUniform1i colorFMT");
    _colorFMT = localColorFMT;

    LOGI("localColorFMT:%d", localColorFMT);
    if (localColorFMT == COLOR_FormatYUV420Planar){
        InitializeTexture(GL_TEXTURE1, _textureIds[1], width / 2, height / 2, GL_LUMINANCE);       // I420: U and V are separate planes
        InitializeTexture(GL_TEXTURE2, _textureIds[2], width / 2, height / 2, GL_LUMINANCE);
    }
    else if (localColorFMT == COLOR_FormatYUV420SemiPlanar) {
        InitializeTexture(GL_TEXTURE1, _textureIds[1], width / 2, height / 2, GL_LUMINANCE_ALPHA); // NV12: one interleaved UV plane
    }
    /* Unlike uploading RGBA data directly, the textures here use GL_LUMINANCE and
       GL_LUMINANCE_ALPHA. GL_RGBA keeps R, G, B and A as four separate components,
       while GL_LUMINANCE replicates a single value across them, so one Y sample maps
       onto one texel. GL_LUMINANCE_ALPHA stores luminance first and alpha second,
       which lets the shader read U from .r and V from .a. See [2]. */

    checkGlError("SetupTextures");

    _textureWidth = width;
    _textureHeight = height;
}

void RenderOpenGles20::UpdateTextures(void* data, int32_t width, int32_t height)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, _textureIds[0]);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                    data);

    LOGI("localColorFMT:%d", localColorFMT);
    if (localColorFMT == COLOR_FormatYUV420Planar){
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, _textureIds[1]);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_LUMINANCE,
                        GL_UNSIGNED_BYTE, (char *)data + width * height);
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, _textureIds[2]);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_LUMINANCE,
                        GL_UNSIGNED_BYTE, (char *)data + width * height * 5 / 4);
    }
    else if (localColorFMT == COLOR_FormatYUV420SemiPlanar){
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, _textureIds[1]);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width / 2, height / 2, GL_LUMINANCE_ALPHA,
                        GL_UNSIGNED_BYTE, (char *)data + width * height);
    }
    checkGlError("UpdateTextures");
}
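
For completeness, here is a minimal sketch of how this renderer could be driven from the JNI side of a GLSurfaceView.Renderer. Everything in it is an assumption for illustration: the Java package/class, the native method names and the separate-file layout are invented, and the calls must run on the GL thread (i.e. from onSurfaceChanged()/onDrawFrame()).

// render_jni.cpp (hypothetical)
#include <jni.h>
#include <stdint.h>
#include "render_opengles20.h"

extern int32_t localColorFMT;                    // defined in the renderer source above
static RenderOpenGles20 g_render;

extern "C" JNIEXPORT void JNICALL
Java_com_example_yuv_NativeRender_nativeSurfaceChanged(JNIEnv*, jclass, jint w, jint h) {
    g_render.Setup(w, h);                        // compile shaders, set uniforms and viewport
}

extern "C" JNIEXPORT void JNICALL
Java_com_example_yuv_NativeRender_nativeDrawFrame(JNIEnv* env, jclass, jbyteArray frame,
                                                  jint width, jint height, jint colorFormat) {
    localColorFMT = colorFormat;                 // 19 = I420, 21 = NV12
    jbyte* data = env->GetByteArrayElements(frame, nullptr);
    g_render.Render(data, width, height);        // recreate textures if size/format changed, upload planes, draw
    env->ReleaseByteArrayElements(frame, data, JNI_ABORT);
}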

  

【References】

[1] For how to write the shader, see https://www.jianshu.com/p/39cde80d60e2 (its glTexImage2D usage looks incorrect and was not followed here).

[2] The glTexImage2D setup differs between NV12 and I420; this article on rendering NV12 explains it clearly: https://www.cnblogs.com/jukan/p/6994048.html (its shader code was not used as a reference).

Original post: https://www.cnblogs.com/i-am-normal/p/11973512.html
