Specialized terminology for Android audio development

Audio Terminology

IN THIS DOCUMENT

  1. Generic Terms
    1. Digital Audio
    2. Hardware and Accessories
    3. Audio Signal Path
  2. Android-Specific Terms
  3. Sample Rate Conversion

This document provides a glossary of audio-related terminology, including a list of widely used, generic terms and a list of terms that are specific to Android.

Generic Terms



These are audio terms that are widely used, with their conventional meanings.

Digital Audio

  • acoustics
  • The study of the mechanical properties of sound, for example how the physical placement of transducers such as speakers and microphones on a device affects perceived audio quality.
  • attenuation
  • A multiplicative factor less than or equal to 1.0, applied to an audio signal to decrease the signal level. Compare to "gain".
  • bits per sample or bit depth
  • Number of bits of information per sample.
  • channel
  • A single stream of audio information, usually corresponding to one location of recording or playback.
  • downmixing
  • To decrease the number of channels, e.g. from stereo to mono, or from 5.1 to stereo. This can be accomplished by dropping some channels, mixing channels, or more advanced signal processing. Simple mixing without attenuation or limiting has the potential for overflow and clipping. Compare to "upmixing".
  • duck
  • To temporarily reduce the volume of one stream, when another stream becomes active. For example, if music is playing and a notification arrives, then the music stream could be ducked while the notification plays. Compare to "mute".
  • frame
  • A set of samples, one per channel, at a point in time.
  • frames per buffer
  • The number of frames handed from one module to the next at once; for example the audio HAL interface uses this concept.
  • gain
  • A multiplicative factor greater than or equal to 1.0, applied to an audio signal to increase the signal level. Compare to "attenuation".
  • Hz
  • The units for sample rate or frame rate.
  • latency
  • Time delay as a signal passes through a system.
  • mono
  • One channel.
  • multichannel
  • See "surround sound". Strictly, since stereo is more than one channel, it is also "multi" channel. But that usage would be confusing.
  • mute
  • To (temporarily) force volume to be zero, independently from the usual volume controls.
  • PCM
  • Pulse Code Modulation, the most common low-level encoding of digital audio. The audio signal is sampled at a regular interval, called the sample rate, and then quantized to discrete values within a particular range depending on the bit depth. For example, for 16-bit PCM, the sample values are integers between -32768 and +32767.
  • ramp
  • To gradually increase or decrease the level of a particular audio parameter, for example volume or the strength of an effect. A volume ramp is commonly applied when pausing and resuming music, to avoid a hard audible transition.
  • sample
  • A number representing the audio value for a single channel at a point in time.
  • sample rate or frame rate
  • Number of frames per second; note that "frame rate" is thus more accurate, but "sample rate" is conventionally used to mean "frame rate."
  • sonification
  • The use of sound to express feedback or information, for example touch sounds and keyboard sounds.
  • stereo
  • Two channels.
  • stereo widening
  • An effect applied to a stereo signal, to make another stereo signal which sounds fuller and richer. The effect can also be applied to a mono signal, in which case it is a type of upmixing.
  • surround sound
  • Various techniques for increasing the ability of a listener to perceive sound position beyond stereo left and right.
  • upmixing
  • To increase the number of channels, e.g. from mono to stereo, or from stereo to surround sound. This can be accomplished by duplication, panning, or more advanced signal processing. Compare to "downmixing".
  • virtualizer
  • An effect that attempts to spatialize audio channels, such as trying to simulate more speakers, or give the illusion that various sound sources have position.
  • volume
  • Loudness, the subjective strength of an audio signal.
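Several of the terms above (sample, frame, channel, attenuation, downmixing, clipping) can be illustrated together in a short sketch. This is not Android code, just a minimal Python illustration of downmixing 16-bit stereo frames to mono, where the 0.5 attenuation factor guarantees the mix cannot overflow:

```python
# Each frame holds one sample per channel; here, (left, right) pairs
# of 16-bit PCM values in the range [-32768, 32767].
stereo_frames = [(1000, 3000), (32000, 32000), (-20000, -30000)]

def downmix_to_mono(frames):
    """Downmix stereo to mono by averaging the two channels.
    The 0.5 factor is an attenuation: without it, simple summing
    of two full-scale samples would overflow the 16-bit range (clipping)."""
    return [int((left + right) * 0.5) for (left, right) in frames]

mono_samples = downmix_to_mono(stereo_frames)
```

Dropping the attenuation (i.e., `left + right`) on the second frame would produce 64000, outside the 16-bit range, which is exactly the overflow/clipping hazard that the downmixing entry warns about.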

Hardware and Accessories

These terms are related to audio hardware and accessories.

Inter-device interconnect

These technologies connect audio and video components between devices, and are readily visible at the external connectors. The HAL implementor may need to be aware of these, as well as the end user.

  • Bluetooth
  • A short-range wireless technology. The major audio-related Bluetooth profiles and Bluetooth protocols are described in these Wikipedia articles:
    • A2DP for music
    • SCO for telephony
  • DisplayPort
  • Digital display interface by VESA.
  • HDMI
  • High-Definition Multimedia Interface, an interface for transferring audio and video data. For mobile devices, either a micro-HDMI (type D) or MHL connector is used.
  • MHL
  • Mobile High-Definition Link is a mobile audio/video interface, often over micro-USB connector.
  • phone connector
  • A mini or sub-mini phone connector connects a device to wired headphones, a headset, or a line-level amplifier.
  • SlimPort
  • An adapter from micro-USB to HDMI.
  • S/PDIF
  • Sony/Philips Digital Interface Format is an interconnect for uncompressed PCM. See Wikipedia article S/PDIF.
  • USB
  • Universal Serial Bus. See Wikipedia article USB.

Intra-device interconnect

These technologies connect internal audio components within a given device, and are not visible without disassembling the device. The HAL implementor may need to be aware of these, but not the end user.

Common examples include I²S and SLIMbus; see the corresponding Wikipedia articles.

Audio Signal Path

These terms are related to the signal path that audio data follows from an application to the transducer, or vice versa.

  • ADC
  • Analog to digital converter, a module that converts an analog signal (continuous in both time and amplitude) to a digital signal (discrete in both time and amplitude). Conceptually, an ADC consists of a periodic sample-and-hold followed by a quantizer, although it does not have to be implemented that way. An ADC is usually preceded by a low-pass filter to remove any high frequency components that are not representable using the desired sample rate. See Wikipedia article Analog-to-digital_converter.
  • AP
  • Application processor, the main general-purpose computer on a mobile device.
  • codec
  • Coder-decoder, a module that encodes and/or decodes an audio signal from one representation to another. Typically this is analog to PCM, or PCM to analog. Strictly, the term "codec" is reserved for modules that both encode and decode, however it can also more loosely refer to only one of these. See Wikipedia article Audio codec.
  • DAC
  • Digital to analog converter, a module that converts a digital signal (discrete in both time and amplitude) to an analog signal (continuous in both time and amplitude). A DAC is usually followed by a low-pass filter to remove any high frequency components introduced by digital quantization. See Wikipedia article Digital-to-analog converter.
  • DSP
  • Digital Signal Processor, an optional component which is typically located after the application processor (for output), or before the application processor (for input). The primary purpose of a DSP is to off-load the application processor, and provide signal processing features at a lower power cost.
  • PDM
  • Pulse-density modulation is a form of modulation used to represent an analog signal by a digital signal, where the relative density of 1s versus 0s indicates the signal level. It is commonly used by digital to analog converters. See Wikipedia article Pulse-density modulation.
  • PWM
  • Pulse-width modulation is a form of modulation used to represent an analog signal by a digital signal, where the relative width of a digital pulse indicates the signal level. It is commonly used by analog to digital converters. See Wikipedia article Pulse-width modulation.
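The sample-and-hold plus quantizer model of an ADC described above can be sketched numerically. This is an illustration only (a real ADC is analog hardware); the helper below models just the quantizer stage, mapping an amplitude in [-1.0, 1.0] to a 16-bit PCM value:

```python
import math

def quantize_16bit(x):
    """Quantizer stage: map an amplitude in [-1.0, 1.0] to a 16-bit
    PCM integer, clamping out-of-range input rather than overflowing."""
    return max(-32768, min(32767, int(round(x * 32767))))

# "Sample-and-hold": evaluate a 1 kHz sine at discrete instants
# spaced by the 48 kHz sample rate, then quantize each value.
SAMPLE_RATE = 48000
one_millisecond = [quantize_16bit(math.sin(2 * math.pi * 1000 * n / SAMPLE_RATE))
                   for n in range(48)]
```

The low-pass filter that precedes a real ADC is not modeled here; it would remove any input components above the Nyquist frequency (24 kHz at this sample rate) before sampling.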

Android-Specific Terms



These are terms specific to the Android audio framework, or that may have a special meaning within Android beyond their general meaning.

  • ALSA
  • Advanced Linux Sound Architecture. As the name suggests, it is an audio framework primarily for Linux, but it has influenced other systems. See Wikipedia article ALSA for the general definition. As used within Android, it refers primarily to the kernel audio framework and drivers, not to the user-mode API. See tinyalsa.
  • AudioEffect
  • An API and implementation framework for output (post-processing) effects and input (pre-processing) effects. The API is defined at android.media.audiofx.AudioEffect.
  • AudioFlinger
  • The sound server implementation for Android. AudioFlinger runs within the mediaserver process. See Wikipedia article Sound server for the generic definition.
  • audio focus
  • A set of APIs for managing audio interactions across multiple independent apps. See Managing Audio Focus and the focus-related methods and constants of android.media.AudioManager.
  • AudioMixer
  • The module within AudioFlinger responsible for combining multiple tracks and applying attenuation (volume) and certain effects. The Wikipedia article Audio mixing (recorded music) may be useful for understanding the generic concept, though that article describes a mixer more as a hardware device or a software application than as a software module within a system.
  • audio policy
  • Service responsible for all actions that require a policy decision to be made first, such as opening a new I/O stream, re-routing after a change and stream volume management.
  • AudioRecord
  • The primary low-level client API for receiving data from an audio input device such as a microphone. The data is usually in pulse-code modulation (PCM) format. The API is defined at android.media.AudioRecord.
  • AudioResampler
  • The module within AudioFlinger responsible for sample rate conversion.
  • AudioTrack
  • The primary low-level client API for sending data to an audio output device such as a speaker. The data is usually in PCM format. The API is defined at android.media.AudioTrack.
  • client
  • Usually the same as application or app, but sometimes the "client" of AudioFlinger is actually a thread running within the mediaserver system process. An example is playing media that is decoded by a MediaPlayer object.
  • HAL
  • Hardware Abstraction Layer. HAL is a generic term in Android. With respect to audio, it is a layer between AudioFlinger and the kernel device driver with a C API, which replaces the earlier C++ libaudio.
  • FastMixer
  • A thread within AudioFlinger that services lower latency "fast tracks" and drives the primary output device.
  • fast track
  • An AudioTrack client with lower latency but fewer features, on some devices.
  • MediaPlayer
  • A higher-level client API than AudioTrack, for playing either encoded content, or content which includes multimedia audio and video tracks.
  • media.log
  • An AudioFlinger debugging feature, available in custom builds only, for logging audio events to a circular buffer where they can then be dumped retroactively when needed.
  • mediaserver
  • An Android system process that contains a number of media-related services, including AudioFlinger.
  • NBAIO
  • An abstraction for "non-blocking" audio input/output ports used within AudioFlinger. The name can be misleading, as some implementations of the NBAIO API actually do support blocking. The key implementations of NBAIO are for pipes of various kinds.
  • normal mixer
  • A thread within AudioFlinger that services most full-featured AudioTrack clients, and either directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
  • OpenSL ES
  • An audio API standard by The Khronos Group. Android versions since API level 9 support a native audio API which is based on a subset of OpenSL ES 1.0.1.
  • silent mode
  • A user-settable feature to mute the phone ringer and notifications, without affecting media playback (music, videos, games) or alarms.
  • SoundPool
  • A higher-level client API than AudioTrack, used for playing sampled audio clips. It is useful for triggering UI feedback, game sounds, etc. The API is defined at android.media.SoundPool.
  • Stagefright
  • See Media.
  • StateQueue
  • A module within AudioFlinger responsible for synchronizing state among threads. Whereas NBAIO is used to pass data, StateQueue is used to pass control information.
  • strategy
  • A grouping of stream types with similar behavior, used by the audio policy service.
  • stream type
  • An enumeration that expresses a use case for audio output. The audio policy implementation uses the stream type, along with other parameters, to determine volume and routing decisions. Specific stream types are listed at android.media.AudioManager.
  • tee sink
  • See the separate article on tee sink in Audio Debugging.
  • tinyalsa
  • A small user-mode API above ALSA kernel with BSD license, recommended for use in HAL implementations.
  • ToneGenerator
  • A higher-level client API than AudioTrack, used for playing DTMF signals. See the Wikipedia article Dual-tone multi-frequency signaling, and the API definition at android.media.ToneGenerator.
  • track
  • An audio stream, controlled by the AudioTrack API.
  • volume attenuation curve
  • A device-specific mapping from a generic volume index to a particular attenuation factor for a given output.
  • volume index
  • A unitless integer that expresses the desired relative volume of a stream. The volume-related APIs of android.media.AudioManager operate in volume indices rather than absolute attenuation factors.
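The relationship between a volume index and a volume attenuation curve can be sketched as follows. The curve here (linear in decibels, unity gain at the top index, a hypothetical 60 dB span, and mute at index 0) is an illustrative assumption, not Android's actual device-specific mapping tables:

```python
def index_to_gain(index, max_index=15, range_db=60.0):
    """Map a unitless volume index to a linear attenuation factor,
    using a hypothetical curve that is linear in decibels.
    index 0 mutes; max_index is unity gain (0 dB)."""
    if index <= 0:
        return 0.0
    db = -range_db * (1.0 - index / max_index)  # 0 dB at max_index
    return 10.0 ** (db / 20.0)                  # dB -> linear factor
```

A real device would supply a per-output curve, but the shape shown here, equal dB steps per index, is a common design choice because loudness perception is roughly logarithmic.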

Sample Rate Conversion


  • downsample
  • To resample, where sink sample rate < source sample rate.
  • Nyquist frequency
  • The Nyquist frequency, equal to 1/2 of a given sample rate, is the maximum frequency component that can be represented by a discretized signal at that sample rate. For example, the human hearing range is typically assumed to extend up to approximately 20 kHz, and so a digital audio signal must have a sample rate of at least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz and 48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz respectively. See Nyquist frequency and Hearing range for more information.
  • resampler
  • Synonym for sample rate converter.
  • resampling
  • The process of converting sample rate.
  • sample rate converter
  • A module that resamples.
  • sink
  • The output of a resampler.
  • source
  • The input to a resampler.
  • upsample
  • To resample, where sink sample rate > source sample rate.
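The source/sink terminology above can be made concrete with a toy resampler. This is a naive linear-interpolation sketch for a single channel; a production sample rate converter, such as AudioResampler within AudioFlinger, uses proper filtering to suppress the aliasing and imaging artifacts a scheme like this would introduce:

```python
def resample_linear(source, source_rate, sink_rate):
    """Naive single-channel sample rate converter using linear
    interpolation. sink_rate > source_rate upsamples; sink_rate <
    source_rate downsamples (without the required anti-alias filter)."""
    n_out = int(len(source) * sink_rate / source_rate)
    out = []
    for i in range(n_out):
        pos = i * source_rate / sink_rate   # position in source samples
        j = int(pos)
        frac = pos - j
        a = source[j]
        b = source[j + 1] if j + 1 < len(source) else source[j]
        out.append(a + (b - a) * frac)      # interpolate between neighbors
    return out

# Upsampling by 2: every other output sample falls between two inputs.
upsampled = resample_linear([0.0, 1.0, 2.0, 3.0], 4, 8)
```

Here the list passed in is the source and the returned list is the sink; doubling the rate yields interpolated values such as 0.5 and 1.5 between the original samples.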