Android's architecture is divided into three layers:
- Bottom layer: the Linux kernel
- Middle layer: implemented mainly in C++ (roughly 60% of the Android source is C++)
- Application layer: applications written mainly in Java
An application's execution path is roughly as follows: a Java application issues an operation (say, play or stop music), which passes through JNI into the middle layer, where C++ code runs. If the operation requires the hardware to act, the middle layer forwards it to the Linux kernel, which invokes the relevant driver to perform the action or processing; operations that do not touch hardware may be handled entirely in the middle layer and returned directly.
One point worth understanding here: Android uses only the Linux *kernel*. Even common libraries such as pthread are Android's own reimplementations in C/C++/assembly.
Because establishing the audio path involves Android IPC and the system service manager, here is a brief overview of these two points first:

① Android IPC uses a Client/Server structure. The client (AudioRecord) calls methods on server-side objects (AudioFlinger, AudioFlinger::RecordThread, etc.) through an interface (IAudioRecord) and receives the results. AudioRecord.cpp is mainly the implementation of the AudioRecord class, and AudioFlinger.cpp is mainly the implementation of the AudioFlinger class. In low-level audio communication, AudioRecord can be viewed as the IPC client and AudioFlinger as the server. Once AudioRecord has obtained the server-side interface (mAudioRecord), it can invoke server-side (AudioFlinger) methods as if they were its own.

② At startup, Android creates a service manager process. Every service in the system must be registered with this process. A handle to the service manager is obtained via sp<IServiceManager> sm = defaultServiceManager(), and a service is then registered through its addService method: sm->addService(String16("media.audio_flinger"), new AudioFlinger()); Only services added to the service manager can be used by other processes:
```cpp
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder = sm->getService(String16("media.audio_flinger"));
```
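The register/look-up pattern above can be sketched generically in plain C++. This is a minimal, hypothetical stand-in for illustration only; it is not Android's Binder implementation, and the `Service`/`ServiceManager` names are invented:

```cpp
#include <map>
#include <memory>
#include <string>

// Toy stand-in for an IBinder-like service object: it just carries a name.
struct Service {
    explicit Service(std::string n) : name(std::move(n)) {}
    std::string name;
};

// Hypothetical service manager: a name -> service map, analogous to the
// addService()/getService() pair exposed by defaultServiceManager().
class ServiceManager {
public:
    void addService(const std::string& name, std::shared_ptr<Service> s) {
        services_[name] = std::move(s);  // register under the given name
    }
    std::shared_ptr<Service> getService(const std::string& name) const {
        auto it = services_.find(name);  // look up a previously registered service
        return it == services_.end() ? nullptr : it->second;
    }
private:
    std::map<std::string, std::shared_ptr<Service>> services_;
};
```

A client that knows only the string "media.audio_flinger" can then retrieve the service object, which is the essence of what getService does across process boundaries.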
When the Android audio system starts up, it creates two services: the AudioFlinger service shown above and the AudioPolicyService, and registers both with the service manager process; other processes can then use the methods they provide.

Below, AudioFlingerService is abbreviated as AudioFlinger and AudioPolicyService as AudioPolicy.

Core flow:

AudioSystem::getInput(…) -> aps->getInput(…) -> AudioPolicyService::getInput(…) -> mpPolicyManager->getInput(…) -> <AudioPolicyService> mpClientInterface->openInput(…) -> AudioFlinger::openInput(…)
Recording flow analysis

Application-level recording

The main purpose of the AudioRecord class is to let Java applications manage audio resources so that they can record the sound collected by the platform's audio input hardware. This is done by "pulling" (reading) sound data from the AudioRecord object synchronously. During recording, all the application needs to do is fetch the AudioRecord object's data in time via a read method. AudioRecord provides three methods for retrieving sound data: read(byte[], int, int), read(short[], int, int), and read(ByteBuffer, int). Whichever one is chosen, the storage format of the sound data must be set up in advance to suit the user.

When recording starts, an AudioRecord object needs an associated sound buffer to hold the newly captured data. The size of this buffer can be specified at construction time; it determines how long an AudioRecord object can record before its data is read out (i.e. the sound capacity of one pass). Sound data is read from the audio hardware in chunks no larger than the total recording buffer, so it can be read out over multiple passes, each read fetching at most the initialized buffer's worth of data. A simple recording flow generally looks like this:
- Create a data stream.
- Construct an AudioRecord object; the minimum recording buffer size it needs can be obtained via getMinBufferSize. If the buffer is too small, construction fails.
- Initialize a buffer at least as large as the buffer the AudioRecord object uses for writing sound data.
- Start recording.
- Read sound data from the AudioRecord into the initialized buffer, then write the buffer's contents to the data stream.
- Stop recording.
- Close the data stream.
Example:

```java
// Create a DataOutputStream to write the audio data into the saved file.
OutputStream os = new FileOutputStream(file);
BufferedOutputStream bos = new BufferedOutputStream(os);
DataOutputStream dos = new DataOutputStream(bos);

// Create a new AudioRecord object to record the audio.
int bufferSize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        11025, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);

short[] buffer = new short[bufferSize];
audioRecord.startRecording();
isRecording = true;
while (isRecording) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
    for (int i = 0; i < bufferReadResult; i++)
        dos.writeShort(buffer[i]);
}
audioRecord.stop();
dos.close();
```
1. getMinBufferSize

The getMinBufferSize function was introduced earlier, so it is not repeated here. Looking at the source, its implementation calls the JNI function native_get_min_buff_size, which leads into android_media_AudioRecord_get_min_buff_size in frameworks/base/core/jni/android_media_AudioRecord.cpp.

The mapping from native_get_min_buff_size to android_media_AudioRecord_get_min_buff_size can be seen in the method table in android_media_AudioRecord.cpp:
```cpp
static JNINativeMethod gMethods[] = {
    // name,                         signature,                      funcPtr
    {"native_start",                 "(II)I",   (void *)android_media_AudioRecord_start},
    {"native_stop",                  "()V",     (void *)android_media_AudioRecord_stop},
    {"native_setup",                 "(Ljava/lang/Object;IIIII[I)I",
                                                (void *)android_media_AudioRecord_setup},
    {"native_finalize",              "()V",     (void *)android_media_AudioRecord_finalize},
    {"native_release",               "()V",     (void *)android_media_AudioRecord_release},
    {"native_read_in_byte_array",    "([BII)I", (void *)android_media_AudioRecord_readInByteArray},
    {"native_read_in_short_array",   "([SII)I", (void *)android_media_AudioRecord_readInShortArray},
    {"native_read_in_direct_buffer", "(Ljava/lang/Object;I)I",
                                                (void *)android_media_AudioRecord_readInDirectBuffer},
    {"native_set_marker_pos",        "(I)I",    (void *)android_media_AudioRecord_set_marker_pos},
    {"native_get_marker_pos",        "()I",     (void *)android_media_AudioRecord_get_marker_pos},
    {"native_set_pos_update_period", "(I)I",    (void *)android_media_AudioRecord_set_pos_update_period},
    {"native_get_pos_update_period", "()I",     (void *)android_media_AudioRecord_get_pos_update_period},
    {"native_get_min_buff_size",     "(III)I",  (void *)android_media_AudioRecord_get_min_buff_size},
};
```
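A table like this lets the runtime map a Java native method name to a C++ function pointer. The lookup itself can be sketched in plain C++ (a simplified, hypothetical stand-in with stub functions; the real resolution is performed by the JVM when the table is registered via RegisterNatives):

```cpp
#include <cstring>

// Simplified analogue of JNINativeMethod: a name, a type signature, and a
// function pointer (here every entry takes no arguments and returns int).
struct NativeMethod {
    const char* name;
    const char* signature;
    int (*funcPtr)();
};

static int fake_get_min_buff_size() { return 4096; }  // hypothetical stub
static int fake_start()             { return 0; }     // hypothetical stub

static NativeMethod gMethods[] = {
    {"native_start",             "(II)I",  fake_start},
    {"native_get_min_buff_size", "(III)I", fake_get_min_buff_size},
};

// Resolve a method by name, as the runtime conceptually does for a
// registered native method table.
int (*lookup(const char* name))() {
    for (const NativeMethod& m : gMethods)
        if (std::strcmp(m.name, name) == 0)
            return m.funcPtr;
    return nullptr;  // name not registered
}
```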
The code of android_media_AudioRecord_get_min_buff_size is as follows:
```cpp
// ----------------------------------------------------------------------------
// returns the minimum required size for the successful creation of an AudioRecord instance.
// returns 0 if the parameter combination is not supported.
// returns -1 if there was an error querying the buffer size.
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env, jobject thiz,
        jint sampleRateInHertz, jint nbChannels, jint audioFormat) {

    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
            sampleRateInHertz, nbChannels, audioFormat);

    size_t frameCount = 0;
    // getMinFrameCount returns the frame count through the frameCount pointer.
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            (audioFormat == ENCODING_PCM_16BIT ?
                AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT),
            audio_channel_in_mask_from_count(nbChannels));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * nbChannels * (audioFormat == ENCODING_PCM_16BIT ? 2 : 1);
}
```
The minimum buffer size is computed from the minimum frame count. The most common unit in audio is the frame: one frame is the byte size of one sample point multiplied by the number of channels. Why introduce frames at all? Because for multi-channel audio, the byte size of a single sample point does not tell the whole story; during playback, the data for every channel must be output together. So for convenience one speaks of how many frames there are per second, which abstracts away the channel count while still conveying the complete picture. Once getMinBufferSize returns, we have a buffer size that satisfies the minimum requirement, which gives the user a basis for allocating a buffer.
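The return expression above (frameCount × channels × bytesPerSample) can be checked with a small sketch. The frame count used in the usage note is an arbitrary illustrative value, not what getMinFrameCount would actually return on a device:

```cpp
#include <cstddef>

// Minimum buffer size in bytes, mirroring the return expression of
// android_media_AudioRecord_get_min_buff_size:
//   frameCount * nbChannels * bytesPerSample
size_t minBufferBytes(size_t frameCount, int nbChannels, bool pcm16) {
    const int bytesPerSample = pcm16 ? 2 : 1;  // 16-bit PCM -> 2 bytes/sample
    return frameCount * nbChannels * bytesPerSample;
}
```

For example, 1024 frames of 16-bit stereo come out to 1024 × 2 × 2 = 4096 bytes, while the same frame count in 8-bit mono needs only 1024 bytes.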
2. new AudioRecord
```java
public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig,
        int audioFormat, int bufferSizeInBytes) throws IllegalArgumentException {
    mRecordingState = RECORDSTATE_STOPPED;

    // remember which looper is associated with the AudioRecord instantiation
    // Get the current thread's Looper, falling back to the main thread's.
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }

    audioParamCheck(audioSource, sampleRateInHz, channelConfig, audioFormat);
    audioBuffSizeCheck(bufferSizeInBytes);

    // native initialization
    int[] session = new int[1];
    session[0] = 0;
    //TODO: update native initialization when information about hardware init failure
    //      due to capture device already open is available.
    // Call native_setup, passing in a WeakReference to this object.
    int initResult = native_setup(new WeakReference<AudioRecord>(this),
            mRecordSource, mSampleRate, mChannelMask, mAudioFormat,
            mNativeBufferSizeInBytes, session);
    if (initResult != SUCCESS) {
        loge("Error code " + initResult + " when initializing native AudioRecord object.");
        return; // with mState == STATE_UNINITIALIZED
    }

    mSessionId = session[0];
    mState = STATE_INITIALIZED;
}
```
The implementation calls native_setup, which enters android_media_AudioRecord_setup in frameworks/base/core/jni/android_media_AudioRecord.cpp:
```cpp
static int android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jint source, jint sampleRateInHertz,
        jint channelMask, // Java channel masks map directly to the native definition
        jint audioFormat, jint buffSizeInBytes, jintArray jSession)
{
    //ALOGV(">> Entering android_media_AudioRecord_setup");
    //ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d",
    //      sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes);

    if (!audio_is_input_channel(channelMask)) {
        ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", channelMask);
        return AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
    }
    uint32_t nbChannels = popcount(channelMask);

    // compare the format against the Java constants
    if ((audioFormat != ENCODING_PCM_16BIT) && (audioFormat != ENCODING_PCM_8BIT)) {
        ALOGE("Error creating AudioRecord: unsupported audio format.");
        return AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
    }

    int bytesPerSample = audioFormat == ENCODING_PCM_16BIT ? 2 : 1;
    audio_format_t format = audioFormat == ENCODING_PCM_16BIT ?
            AUDIO_FORMAT_PCM_16_BIT : AUDIO_FORMAT_PCM_8_BIT;

    if (buffSizeInBytes == 0) {
        ALOGE("Error creating AudioRecord: frameCount is 0.");
        return AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
    }
    int frameSize = nbChannels * bytesPerSample;
    size_t frameCount = buffSizeInBytes / frameSize;

    if ((uint32_t(source) >= AUDIO_SOURCE_CNT) && (uint32_t(source) != AUDIO_SOURCE_HOTWORD)) {
        ALOGE("Error creating AudioRecord: unknown source.");
        return AUDIORECORD_ERROR_SETUP_INVALIDSOURCE;
    }

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return AUDIORECORD_ERROR;
    }
    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return AUDIORECORD_ERROR;
    }
    int sessionId = nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    // create an uninitialized AudioRecord object
    sp<AudioRecord> lpRecorder = new AudioRecord();

    // create the callback information:
    // this data will be passed with every AudioRecord callback
    audiorecord_callback_cookie *lpCallbackData = new audiorecord_callback_cookie;
    lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
    // we use a weak reference so the AudioRecord object can be garbage collected.
    lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
    lpCallbackData->busy = false;

    lpRecorder->set((audio_source_t) source,
            sampleRateInHertz,
            format,            // word length, PCM
            channelMask,
            frameCount,
            recorderCallback,  // callback_t
            lpCallbackData,    // void* user
            0,                 // notificationFrames
            true,              // threadCanCallJava
            sessionId);

    if (lpRecorder->initCheck() != NO_ERROR) {
        ALOGE("Error creating AudioRecord instance: initialization check failed.");
        goto native_init_failure;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session
    // was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }

    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
    // of the Java object
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, (int)lpCallbackData);

    return AUDIORECORD_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetIntField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    return AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}
```
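The parameter checks at the top of the function reduce to simple arithmetic: count the channels set in the mask, then derive the frame count from the byte size. A standalone sketch (the mask values in the usage note are hypothetical; real Android channel masks are defined in audio.h):

```cpp
#include <cstddef>
#include <cstdint>

// Number of channels = number of bits set in the channel mask, which is
// what popcount(channelMask) computes in the JNI code above.
int channelCount(uint32_t channelMask) {
    int n = 0;
    while (channelMask) {
        n += channelMask & 1u;  // count the lowest bit
        channelMask >>= 1;
    }
    return n;
}

// frameCount = buffSizeInBytes / (channels * bytesPerSample), mirroring the
// frameSize / frameCount computation in android_media_AudioRecord_setup.
size_t frameCountFor(size_t buffSizeInBytes, uint32_t channelMask, bool pcm16) {
    const int frameSize = channelCount(channelMask) * (pcm16 ? 2 : 1);
    return buffSizeInBytes / frameSize;
}
```

With a two-bit mask (stereo) and 16-bit samples, a 4096-byte buffer holds 1024 frames, matching the minimum-buffer arithmetic seen in getMinBufferSize.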