[Android] The AudioMixer

AudioMixer is Android's mixer. It blends the audio data of the individual tracks together and then outputs the result to the audio device.

Creating the AudioMixer

The AudioMixer is created in the MixerThread constructor:

 AudioFlinger::MixerThread::MixerThread(...)
{
    ...
    mAudioMixer = new AudioMixer(mNormalFrameCount, mSampleRate);
    ...
}

This shows that each MixerThread owns exactly one AudioMixer.

MixerThread passes two parameters to the AudioMixer:

  1. mNormalFrameCount: the AudioMixer uses this as the number of frames delivered per pass when writing audio data from the source buffers into the destination buffer
  2. mSampleRate: the AudioMixer uses this as the sample rate of the mixed audio output

Configuring the AudioMixer parameters

As described in the previous article on MixerThread, prepareTracks_l() configures the AudioMixer's parameters; let's now analyze in detail what each parameter does.

mAudioMixer->setBufferProvider(name, track);

This sets the source buffer for mixing. name is the index passed in, and track is the Track taken from mActiveTracks.

Let's dig into the index name here. It is obtained as follows:

int name = track->name();
+
+--> int name() const { return mName; }
    +
    +-->  mName = thread->getTrackName_l(channelMask, sessionId);
        +
        +--> return mAudioMixer->getTrackName(channelMask, sessionId);
            +
            +--> uint32_t names = (~mTrackNames) & mConfiguredNames;
            |
            +--> int n = __builtin_ctz(names);

names is the set of available indexes; each bit of names represents a different index, and a bit set to 1 means that bit can be taken as an index. __builtin_ctz counts the number of trailing (low-order) zeros in names, which gives the position of the lowest set bit to use as the index. For example:

11111111111111111111000000000000
                   ^

With 12 low-order zeros, bit 12 is taken as the slot, and the returned name is TRACK0 + 12 (note that enable() below subtracts TRACK0 again to recover the array index).

Two fields determine names:

  1. mTrackNames: records the Tracks currently in use; its initial value is 0. When a Track is added, the bit corresponding to that Track is set to 1.
  2. mConfiguredNames: indicates the maximum number of Tracks this AudioMixer supports. If at most N Tracks are supported, then mConfiguredNames = (1<<N) - 1, i.e. the low N bits are 1 and the high 32-N bits are 0. Its default value is -1, meaning N = 32.
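The slot-allocation logic described above can be sketched as a small standalone function (the function name here is illustrative; the real getTrackName also offsets the result by TRACK0):

```cpp
#include <cstdint>

// Illustrative sketch of getTrackName's bit allocation: the free slots are
// the bits that are configured but not yet taken by a Track.
int allocateTrackName(uint32_t& trackNames, uint32_t configuredNames) {
    uint32_t names = (~trackNames) & configuredNames;
    if (names == 0) return -1;     // all slots taken
    int n = __builtin_ctz(names);  // position of the lowest free bit
    trackNames |= 1u << n;         // mark the slot as taken
    return n;
}
```

When the low 12 bits are already taken, as in the diagram above, the next allocation returns slot 12.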

mAudioMixer->enable(name);

The enable method simply sets the track's enabled flag to true, then calls invalidateState(1 << name) to indicate that the refresh function needs to run.

 void AudioMixer::enable(int name)
 {
     name -= TRACK0;
     track_t& track = mState.tracks[name];

     if (!track.enabled) {
         track.enabled = true;
         invalidateState(1 << name);
     }
 }

mAudioMixer->setParameter(name, param, AudioMixer::VOLUME0, (void *)vl);

mAudioMixer->setParameter(name, param, AudioMixer::VOLUME1, (void *)vr);

These set the left and right channel volumes respectively, then call invalidateState(1 << name) to indicate that the refresh function needs to run.

         case VOLUME0:
         case VOLUME1:
             if (track.volume[param-VOLUME0] != valueInt) {
                 ALOGV("setParameter(VOLUME, VOLUME0/1: %04x)", valueInt);
                 track.prevVolume[param-VOLUME0] = track.volume[param-VOLUME0] << 16;
                 track.volume[param-VOLUME0] = valueInt;
                 if (target == VOLUME) {
                     track.prevVolume[param-VOLUME0] = valueInt << 16;
                     track.volumeInc[param-VOLUME0] = 0;
                 }
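The prevVolume bookkeeping above uses 16.16 fixed point (hence the << 16). The idea can be illustrated with a hypothetical helper, not the real AudioMixer code: prevVolume carries the current volume in 16.16 fixed point, and a per-frame increment ramps it toward the new target.

```cpp
#include <cstdint>

// Hypothetical sketch of a volume ramp in 16.16 fixed point.
struct VolumeRamp {
    int32_t prevVolume;  // current volume, 16.16 fixed point
    int32_t volumeInc;   // per-frame step, 16.16 fixed point
};

VolumeRamp makeRamp(int16_t from, int16_t to, int32_t rampFrames) {
    VolumeRamp r;
    r.prevVolume = int32_t(from) << 16;
    r.volumeInc = ((int32_t(to) << 16) - r.prevVolume) / rampFrames;
    return r;
}
```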

mAudioMixer->setParameter(
    name,
    AudioMixer::TRACK,
    AudioMixer::FORMAT, (void *)track->format());

This asserts that the incoming PCM data is 16-bit:

         case FORMAT:
             ALOG_ASSERT(valueInt == AUDIO_FORMAT_PCM_16_BIT);
             break;

mAudioMixer->setParameter(
    name,
    AudioMixer::TRACK,
    AudioMixer::CHANNEL_MASK, (void *)track->channelMask());

This sets the channel mask: mono, stereo, and so on.

         case CHANNEL_MASK: {
             audio_channel_mask_t mask = (audio_channel_mask_t) value;
             if (track.channelMask != mask) {
                 uint32_t channelCount = popcount(mask);
                 ALOG_ASSERT((channelCount <= MAX_NUM_CHANNELS_TO_DOWNMIX) && channelCount);
             track.channelMask = mask;       // set the mask
             track.channelCount = channelCount;   // update the channel count
                 // the mask has changed, does this track need a downmixer?
                 initTrackDownmix(&mState.tracks[name], name, mask);
                 ALOGV("setParameter(TRACK, CHANNEL_MASK, %x)", mask);
                 invalidateState(1 << name);
             }
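The channel count derived above is simply the number of set bits in the mask. Using AOSP's mask values (AUDIO_CHANNEL_OUT_MONO = 0x1, AUDIO_CHANNEL_OUT_STEREO = 0x3):

```cpp
#include <cstdint>

// popcount(mask) == number of channels encoded in the channel mask.
uint32_t channelCountFromMask(uint32_t mask) {
    return uint32_t(__builtin_popcount(mask));
}
```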

mAudioMixer->setParameter(
    name,
    AudioMixer::RESAMPLE,
    AudioMixer::SAMPLE_RATE,
    (void *)reqSampleRate);

This sets the current track's sample rate to reqSampleRate and asks the AudioMixer to resample this track, with the output rate being the AudioMixer's own output rate mSampleRate. It then calls invalidateState(1 << name) to indicate that the refresh function needs to run. The call sequence is as follows:

mAudioMixer->setParameter(
+   name,
|   AudioMixer::RESAMPLE,
|   AudioMixer::SAMPLE_RATE,
|   (void *)reqSampleRate);
|
+--> track.setResampler(uint32_t(valueInt), mSampleRate)
    +
    +--> if (sampleRate != value) {  // resample only when the input rate differs from the output rate
        +    if (resampler == NULL) {
        |        quality = AudioResampler::VERY_HIGH_QUALITY;  // highest-quality resampling
        |        resampler = AudioResampler::create(...);  // create the resampler
        |    }
        |}
        +-->      switch (quality) {
        |         default:
        |         case DEFAULT_QUALITY:
        |         case LOW_QUALITY:
        |             ALOGV("Create linear Resampler");
        |             resampler = new AudioResamplerOrder1(bitDepth, inChannelCount, sampleRate);
        |             break;
        |         case MED_QUALITY:
        |             ALOGV("Create cubic Resampler");
        |             resampler = new AudioResamplerCubic(bitDepth, inChannelCount, sampleRate);
        |             break;
        |         case HIGH_QUALITY:
        |             ALOGV("Create HIGH_QUALITY sinc Resampler");
        |             resampler = new AudioResamplerSinc(bitDepth, inChannelCount, sampleRate);
        |             break;
        |         case VERY_HIGH_QUALITY:   // VERY_HIGH_QUALITY was selected above, so the resampler created is an AudioResamplerSinc
        |             ALOGV("Create VERY_HIGH_QUALITY sinc Resampler = %d", quality);
        |             resampler = new AudioResamplerSinc(bitDepth, inChannelCount, sampleRate, quality);
        |             break;
        |         }
        |
        +-->       // initialize resampler
                   resampler->init();

mAudioMixer->setParameter(
    name,
    AudioMixer::TRACK,
    AudioMixer::MAIN_BUFFER, (void *)track->mainBuffer());

This sets the destination buffer, then calls invalidateState(1 << name) to indicate that the refresh function needs to run.

Let's trace where the destination buffer is created:

track->mainBuffer()
+
+--> int16_t *mainBuffer() const { return mMainBuffer; }

mMainBuffer is assigned when the Track is created:

sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(...)
+
+--> track = new Track(...)
    +
    +--> AudioFlinger::PlaybackThread::Track::Track(...)
        +:mMainBuffer(thread->mixBuffer())
        |
        +--> int16_t *mixBuffer() const { return mMixBuffer; };

Here thread is the MixerThread; the PlaybackThread part of the object is constructed along with the MixerThread. During construction, readOutputParameters() allocates a buffer and assigns it to mMixBuffer:

AudioFlinger::MixerThread::MixerThread
+
+--> AudioFlinger::PlaybackThread::PlaybackThread
    +
    +--> void AudioFlinger::PlaybackThread::readOutputParameters()
        +
        +--> mAllocMixBuffer = new int8_t[mNormalFrameCount * mFrameSize + align - 1];
        |
        +--> mMixBuffer = (int16_t *) ((((size_t)mAllocMixBuffer + align - 1) / align) * align);

This shows that each AudioMixer corresponds to one mMixBuffer: the audio data passing through a given AudioMixer is ultimately accumulated into a single buffer for output.
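The pointer-rounding trick in readOutputParameters() — over-allocate by align - 1 bytes, then round the address up to the next multiple of align — can be isolated as:

```cpp
#include <cstdint>

// Round addr up to the next multiple of align (the divide/multiply
// formulation works even when align is not a power of two).
uintptr_t alignUp(uintptr_t addr, uintptr_t align) {
    return ((addr + align - 1) / align) * align;
}
```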

invalidateState

We have repeatedly said above that invalidateState indicates the refresh function needs to run; let's analyze it now.

 void AudioMixer::invalidateState(uint32_t mask)
 {
     if (mask) {
         mState.needsChanged |= mask; // mask is (1 << name); this track needs refreshing
         mState.hook = process__validate;
     }
 }

AudioMixer performs its mixing through the process method, and process calls mState.hook. So after invalidateState, the next process call invokes process__validate to refresh the parameters. process__validate is analyzed below:

void AudioMixer::process__validate(state_t* state, int64_t pts)
{
    ALOGW_IF(!state->needsChanged,
        "in process__validate() but nothing‘s invalid");

    uint32_t changed = state->needsChanged;  // every track that needs invalidating is flagged here
    state->needsChanged = 0; // clear the validation flag

    // recompute which tracks are enabled / disabled
    uint32_t enabled = 0;
    uint32_t disabled = 0;
    while (changed) {          // pull out each track that needs invalidating
        const int i = 31 - __builtin_clz(changed);
        const uint32_t mask = 1<<i;
        changed &= ~mask;
        track_t& t = state->tracks[i];
        (t.enabled ? enabled : disabled) |= mask; // track.enabled decides whether this track takes part in mixing
    }
    state->enabledTracks &= ~disabled;    //disabled mask
    state->enabledTracks |=  enabled;    //enabled mask

    // compute everything we need...
    int countActiveTracks = 0;
    bool all16BitsStereoNoResample = true;
    bool resampling = false;
    bool volumeRamp = false;
    uint32_t en = state->enabledTracks;
    while (en) {     // for every track that will be mixed
        const int i = 31 - __builtin_clz(en);  // index of the highest set bit
        en &= ~(1<<i);        // clear that bit

        countActiveTracks++;
        track_t& t = state->tracks[i];  // fetch the track
        uint32_t n = 0;
        n |= NEEDS_CHANNEL_1 + t.channelCount - 1;    // encode the channel count
        n |= NEEDS_FORMAT_16;          // must be 16-bit PCM
        n |= t.doesResample() ? NEEDS_RESAMPLE_ENABLED : NEEDS_RESAMPLE_DISABLED; // does this track need resampling?
        if (t.auxLevel != 0 && t.auxBuffer != NULL) {
            n |= NEEDS_AUX_ENABLED;
        }

        if (t.volumeInc[0]|t.volumeInc[1]) {
            volumeRamp = true;
        } else if (!t.doesResample() && t.volumeRL == 0) {
            n |= NEEDS_MUTE_ENABLED;
        }
        t.needs = n;    // update the track's needs flags
        // below: select the per-track mix routine
        if ((n & NEEDS_MUTE__MASK) == NEEDS_MUTE_ENABLED) {    // muted
            t.hook = track__nop;
        } else {
            if ((n & NEEDS_AUX__MASK) == NEEDS_AUX_ENABLED) {
                all16BitsStereoNoResample = false;
            }
            if ((n & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {  // resampling
                all16BitsStereoNoResample = false;
                resampling = true;
                t.hook = track__genericResample;
                ALOGV_IF((n & NEEDS_CHANNEL_COUNT__MASK) > NEEDS_CHANNEL_2,
                        "Track %d needs downmix + resample", i);
            } else {
                if ((n & NEEDS_CHANNEL_COUNT__MASK) == NEEDS_CHANNEL_1){ // mono
                    t.hook = track__16BitsMono;
                    all16BitsStereoNoResample = false;
                }
                if ((n & NEEDS_CHANNEL_COUNT__MASK) >= NEEDS_CHANNEL_2){  // stereo (or more, downmixed)
                    t.hook = track__16BitsStereo;
                    ALOGV_IF((n & NEEDS_CHANNEL_COUNT__MASK) > NEEDS_CHANNEL_2,
                            "Track %d needs downmix", i);
                }
            }
        }
    }

    // select the processing hooks: a process__xxx routine loops over the tracks, calling each track__xxx hook
    state->hook = process__nop;
    if (countActiveTracks) {
        if (resampling) {   // resampling needs extra temporary buffers
            if (!state->outputTemp) {
                state->outputTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            if (!state->resampleTemp) {
                state->resampleTemp = new int32_t[MAX_NUM_CHANNELS * state->frameCount];
            }
            state->hook = process__genericResampling;
        } else {
            if (state->outputTemp) {
                delete [] state->outputTemp;
                state->outputTemp = NULL;
            }
            if (state->resampleTemp) {
                delete [] state->resampleTemp;
                state->resampleTemp = NULL;
            }
            state->hook = process__genericNoResampling;  // generic no-resampling path
            if (all16BitsStereoNoResample && !volumeRamp) {
                if (countActiveTracks == 1) {
                    state->hook = process__OneTrack16BitsStereoNoResampling;  // fast path: a single 16-bit stereo track
                }
            }
        }
    }

    ALOGV("mixer configuration change: %d activeTracks (%08x) "
        "all16BitsStereoNoResample=%d, resampling=%d, volumeRamp=%d",
        countActiveTracks, state->enabledTracks,
        all16BitsStereoNoResample, resampling, volumeRamp);

    state->hook(state, pts);  // mix once here; subsequent calls happen in MixerThread's threadLoop_mix

    // Now that the volume ramp has been done, set optimal state and
    // track hooks for subsequent mixer process
    if (countActiveTracks) {
        bool allMuted = true;
        uint32_t en = state->enabledTracks;
        while (en) {
            const int i = 31 - __builtin_clz(en);
            en &= ~(1<<i);
            track_t& t = state->tracks[i];
            if (!t.doesResample() && t.volumeRL == 0)
            {
                t.needs |= NEEDS_MUTE_ENABLED;
                t.hook = track__nop;
            } else {
                allMuted = false;
            }
        }
        if (allMuted) {
            state->hook = process__nop;
        } else if (all16BitsStereoNoResample) {
            if (countActiveTracks == 1) {
                state->hook = process__OneTrack16BitsStereoNoResampling;
            }
        }
    }
}
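The hook mechanism above boils down to a self-replacing function pointer. A minimal standalone sketch (the names are illustrative, not the real AudioMixer types):

```cpp
// A process() call always goes through a function pointer; invalidateState()
// swaps that pointer to a validate routine, which re-selects the optimal
// mix routine and then runs it once itself.
struct State;
typedef void (*process_hook_t)(State*);

struct State {
    process_hook_t hook;
    int validateCalls = 0;
    int mixCalls = 0;
};

void processMix(State* s) { s->mixCalls++; }

void processValidate(State* s) {
    s->validateCalls++;
    s->hook = processMix;  // choose the hook for subsequent process() calls
    s->hook(s);            // and mix once right away
}

void invalidateState(State* s) { s->hook = processValidate; }
```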

Mixing in AudioMixer

As mentioned in the MixerThread analysis, mixing is driven by calling AudioMixer's process method, which in turn calls one of the process__xxx methods inside AudioMixer; the various process methods are broadly similar. Let's analyze process__genericResampling.

// generic code with resampling
void AudioMixer::process__genericResampling(state_t* state, int64_t pts)
{
    // this const just means that local variable outTemp doesn't change
    int32_t* const outTemp = state->outputTemp;        // 32-bit accumulation buffer
    const size_t size = sizeof(int32_t) * MAX_NUM_CHANNELS * state->frameCount;

    size_t numFrames = state->frameCount;

    uint32_t e0 = state->enabledTracks;
    while (e0) {
        // process by group of tracks with same output buffer
        // to optimize cache use
        uint32_t e1 = e0, e2 = e0;
        int j = 31 - __builtin_clz(e1);
        track_t& t1 = state->tracks[j];  // take the first track, t1
        e2 &= ~(1<<j);      // e2 now holds the indexes of every track except t1

        // Loop over the remaining tracks as t2; whenever t2's destination buffer differs from
        // t1's, drop t2 from the e1 set. This groups tracks that share the same destination
        // buffer so they can be mixed together, since tracks with different destinations must
        // be mixed into different buffers. In practice the destination is usually shared, as
        // MixerThread sets mMixBuffer as every track's destination. Perhaps with an
        // AudioEffect (e.g. an EQ) attached, different destination buffers could occur?
        while (e2) {
            j = 31 - __builtin_clz(e2);
            e2 &= ~(1<<j);
            track_t& t2 = state->tracks[j];
            if (CC_UNLIKELY(t2.mainBuffer != t1.mainBuffer)) {
                e1 &= ~(1<<j);
            }
        }
        e0 &= ~(e1);
        int32_t *out = t1.mainBuffer;
        memset(outTemp, 0, size);
        while (e1) {   // for every track in e1, call t.hook to mix
            const int i = 31 - __builtin_clz(e1);
            e1 &= ~(1<<i);
            track_t& t = state->tracks[i];
            int32_t *aux = NULL;
            if (CC_UNLIKELY((t.needs & NEEDS_AUX__MASK) == NEEDS_AUX_ENABLED)) {
                aux = t.auxBuffer;
            }

            // this is a little goofy, on the resampling case we don't
            // acquire/release the buffers because it's done by
            // the resampler.
            if ((t.needs & NEEDS_RESAMPLE__MASK) == NEEDS_RESAMPLE_ENABLED) {
                ALOGE("[%s:%d]", __FUNCTION__, __LINE__);
                t.resampler->setPTS(pts);
                t.hook(&t, outTemp, numFrames, state->resampleTemp, aux); // the resampling case comes here, mixing into the accumulation buffer outTemp
            } else {

                size_t outFrames = 0;

                ALOGE("[%s:%d]", __FUNCTION__, __LINE__);
                while (outFrames < numFrames) {
                    t.buffer.frameCount = numFrames - outFrames;
                    int64_t outputPTS = calculateOutputPTS(t, pts, outFrames);
                    t.bufferProvider->getNextBuffer(&t.buffer, outputPTS);
                    t.in = t.buffer.raw;
                    // t.in == NULL can happen if the track was flushed just after having
                    // been enabled for mixing.
                    if (t.in == NULL) break;

                    if (CC_UNLIKELY(aux != NULL)) {
                        aux += outFrames;
                    }
                    t.hook(&t, outTemp + outFrames*MAX_NUM_CHANNELS, t.buffer.frameCount,
                            state->resampleTemp, aux);
                    outFrames += t.buffer.frameCount;
                    t.bufferProvider->releaseBuffer(&t.buffer);
                }
            }
        }
        ditherAndClamp(out, outTemp, numFrames); // narrow the accumulated data and write it to out, the destination buffer
    }
}
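The final ditherAndClamp step narrows the 32-bit accumulator samples back into 16-bit range. The real function also interleaves the stereo pair and shifts out the fractional headroom bits; here only the saturation idea is shown:

```cpp
#include <cstdint>

// Saturate a 32-bit mixed sample to the 16-bit output range.
int16_t clamp16(int32_t sample) {
    if (sample > 32767) return 32767;
    if (sample < -32768) return -32768;
    return int16_t(sample);
}
```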

When process__validate ran, it set track.hook to track__genericResample for the resampling case; let's look at what that function does.

void AudioMixer::track__genericResample(track_t* t, int32_t* out, size_t outFrameCount,
        int32_t* temp, int32_t* aux)
{
    // set the input sample rate
    t->resampler->setSampleRate(t->sampleRate);

    // ramp gain - resample to temp buffer and scale/mix in 2nd step
    if (aux != NULL) {
        // always resample with unity gain when sending to auxiliary buffer to be able
        // to apply send level after resampling
        // TODO: modify each resampler to support aux channel?
        t->resampler->setVolume(UNITY_GAIN, UNITY_GAIN);
        memset(temp, 0, outFrameCount * MAX_NUM_CHANNELS * sizeof(int32_t));
        t->resampler->resample(temp, outFrameCount, t->bufferProvider);
        if (CC_UNLIKELY(t->volumeInc[0]|t->volumeInc[1]|t->auxInc)) {
            volumeRampStereo(t, out, outFrameCount, temp, aux);
        } else {
            volumeStereo(t, out, outFrameCount, temp, aux);
        }
    } else {
        if (CC_UNLIKELY(t->volumeInc[0]|t->volumeInc[1])) {
            t->resampler->setVolume(UNITY_GAIN, UNITY_GAIN);
            memset(temp, 0, outFrameCount * MAX_NUM_CHANNELS * sizeof(int32_t));
            t->resampler->resample(temp, outFrameCount, t->bufferProvider);
            volumeRampStereo(t, out, outFrameCount, temp, aux);
        }

        // constant gain
        else {
            // set the volume
            t->resampler->setVolume(t->volume[0], t->volume[1]);
            // perform the resampling
            t->resampler->resample(out, outFrameCount, t->bufferProvider);
        }
    }
}

Ultimately the resampler's resample method is invoked to perform the actual resampling.

In the next article, we will analyze resampling.

Date: 2024-12-28 01:04:57
