Windows Core Audio APIs

At first I did not fully understand this playback flow, so I am recording it here. The annotated sections in the code below are explained in the original documentation: "To move a stream of rendering data through the endpoint buffer, the client alternately calls the IAudioRenderClient::GetBuffer method and the IAudioRenderClient::ReleaseBuffer method. The client accesses the data in the endpoint buffer as a series of data packets. The GetBuffer call retrieves the next packet so that the client can fill it with rendering data. After writing the data to the packet, the client calls ReleaseBuffer to add the completed packet to the rendering queue." Note the last sentence: it is the ReleaseBuffer call that appends the completed packet to the rendering queue; beyond that, I expect it also performs some release bookkeeping, such as updating internal state.
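
That packet cycle is easiest to see in isolation. Here is a minimal sketch of the fixed three-step fill pattern (GetBuffer, write the data, ReleaseBuffer) pulled out into a stand-alone helper; FillOnePacket is my own name, not part of the API, and the interface pointers are assumed to have been obtained exactly as in the full listing below.

// Minimal sketch: fill one packet of the endpoint buffer.
// Assumes pRenderClient was obtained via IAudioClient::GetService and
// pMySource is the client-defined source used in the full listing.
HRESULT FillOnePacket(IAudioRenderClient *pRenderClient,
                      MyAudioSource *pMySource,
                      UINT32 numFrames,
                      DWORD *pFlags)
{
    BYTE *pData = NULL;

    // Step 1: retrieve the next packet of numFrames frames.
    HRESULT hr = pRenderClient->GetBuffer(numFrames, &pData);
    if (FAILED(hr)) return hr;

    // Step 2: write the rendering data into the packet.
    hr = pMySource->LoadData(numFrames, pData, pFlags);
    if (FAILED(hr)) return hr;

    // Step 3: hand the completed packet back, which queues it for rendering.
    return pRenderClient->ReleaseBuffer(numFrames, *pFlags);
}

The full example from the documentation follows.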
//-----------------------------------------------------------
// Play an audio stream on the default audio rendering
// device. The PlayAudioStream function allocates a shared
// buffer big enough to hold one second of PCM audio data.
// The function uses this buffer to stream data to the
// rendering device. The inner loop runs every 1/2 second.
//-----------------------------------------------------------

#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// MyAudioSource is a client-defined class; a sketch is given after the listing.

// REFERENCE_TIME time units per second and per millisecond
#define REFTIMES_PER_SEC  10000000
#define REFTIMES_PER_MILLISEC  10000

#define EXIT_ON_ERROR(hres)  \
              if (FAILED(hres)) { goto Exit; }
#define SAFE_RELEASE(punk)  \
              if ((punk) != NULL)  \
                { (punk)->Release(); (punk) = NULL; }

const CLSID CLSID_MMDeviceEnumerator = __uuidof(MMDeviceEnumerator);
const IID IID_IMMDeviceEnumerator = __uuidof(IMMDeviceEnumerator);
const IID IID_IAudioClient = __uuidof(IAudioClient);
const IID IID_IAudioRenderClient = __uuidof(IAudioRenderClient);

HRESULT PlayAudioStream(MyAudioSource *pMySource)
{
    HRESULT hr;
    REFERENCE_TIME hnsRequestedDuration = REFTIMES_PER_SEC;
    REFERENCE_TIME hnsActualDuration;
    IMMDeviceEnumerator *pEnumerator = NULL;
    IMMDevice *pDevice = NULL;
    IAudioClient *pAudioClient = NULL;
    IAudioRenderClient *pRenderClient = NULL;
    WAVEFORMATEX *pwfx = NULL;
    UINT32 bufferFrameCount;
    UINT32 numFramesAvailable;
    UINT32 numFramesPadding;
    BYTE *pData;
    DWORD flags = 0;

    hr = CoCreateInstance(
           CLSID_MMDeviceEnumerator, NULL,
           CLSCTX_ALL, IID_IMMDeviceEnumerator,
           (void**)&pEnumerator);
    EXIT_ON_ERROR(hr)

    hr = pEnumerator->GetDefaultAudioEndpoint(
                        eRender, eConsole, &pDevice);
    EXIT_ON_ERROR(hr)

    hr = pDevice->Activate(
                    IID_IAudioClient, CLSCTX_ALL,
                    NULL, (void**)&pAudioClient);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->GetMixFormat(&pwfx);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->Initialize(
                         AUDCLNT_SHAREMODE_SHARED,
                         0,
                         hnsRequestedDuration,
                         0,
                         pwfx,
                         NULL);
    EXIT_ON_ERROR(hr)

    // Tell the audio source which format to use.
    hr = pMySource->SetFormat(pwfx);
    EXIT_ON_ERROR(hr)

    // Get the actual size of the allocated buffer.
    hr = pAudioClient->GetBufferSize(&bufferFrameCount);
    EXIT_ON_ERROR(hr)

    hr = pAudioClient->GetService(
                         IID_IAudioRenderClient,
                         (void**)&pRenderClient);
    EXIT_ON_ERROR(hr)
////// From here down to the "end" marker: whenever you want to write into the playback buffer, these same three fixed steps are required (GetBuffer, LoadData, ReleaseBuffer). Recording that here saves analyzing later why it is done this way; it is simply the fixed pattern.
    // Grab the entire buffer for the initial fill operation.
    hr = pRenderClient->GetBuffer(bufferFrameCount, &pData);
    EXIT_ON_ERROR(hr)

    // Load the initial data into the shared buffer.
    hr = pMySource->LoadData(bufferFrameCount, pData, &flags);
    EXIT_ON_ERROR(hr)

    hr = pRenderClient->ReleaseBuffer(bufferFrameCount, flags);
    EXIT_ON_ERROR(hr)
////// end
    // Calculate the actual duration of the allocated buffer.
    hnsActualDuration = (double)REFTIMES_PER_SEC *
                        bufferFrameCount / pwfx->nSamplesPerSec;

    hr = pAudioClient->Start();  // Start playing.
    EXIT_ON_ERROR(hr)

    // Each loop fills about half of the shared buffer.
    while (flags != AUDCLNT_BUFFERFLAGS_SILENT)
    {
        // Sleep for half the buffer duration.
        Sleep((DWORD)(hnsActualDuration/REFTIMES_PER_MILLISEC/2));

        // See how much buffer space is available.
        hr = pAudioClient->GetCurrentPadding(&numFramesPadding);
        EXIT_ON_ERROR(hr)

        numFramesAvailable = bufferFrameCount - numFramesPadding;

        // Grab all the available space in the shared buffer.
        hr = pRenderClient->GetBuffer(numFramesAvailable, &pData);
        EXIT_ON_ERROR(hr)

        // Get next 1/2-second of data from the audio source.
        hr = pMySource->LoadData(numFramesAvailable, pData, &flags);
        EXIT_ON_ERROR(hr)

        hr = pRenderClient->ReleaseBuffer(numFramesAvailable, flags);
        EXIT_ON_ERROR(hr)
    }

    // Wait for last data in buffer to play before stopping.
    Sleep((DWORD)(hnsActualDuration/REFTIMES_PER_MILLISEC/2));

    hr = pAudioClient->Stop();  // Stop playing.
    EXIT_ON_ERROR(hr)

Exit:
    CoTaskMemFree(pwfx);
    SAFE_RELEASE(pEnumerator)
    SAFE_RELEASE(pDevice)
    SAFE_RELEASE(pAudioClient)
    SAFE_RELEASE(pRenderClient)

    return hr;
}
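
The listing never defines MyAudioSource; it is a client-supplied class of which PlayAudioStream only uses SetFormat and LoadData. Below is a minimal sketch under two assumptions of mine: that the shared-mode mix format is 32-bit IEEE-float PCM (the usual case), and that a 440 Hz tone for about five seconds is an acceptable test signal. The class shape follows only what the listing calls; everything else is illustrative. If you compile it as one file, place the class above PlayAudioStream.

#include <math.h>

// Minimal sketch of the client-defined source used above. Only SetFormat and
// LoadData are required by PlayAudioStream; the sine-wave body is illustrative.
class MyAudioSource
{
    WAVEFORMATEX format;
    double phase = 0.0;
    double seconds = 0.0;
public:
    HRESULT SetFormat(WAVEFORMATEX *pwfx)
    {
        // Remember the mix format chosen by the audio engine.
        format = *pwfx;
        return S_OK;
    }

    HRESULT LoadData(UINT32 frameCount, BYTE *pData, DWORD *pFlags)
    {
        // Assumes the mix format is 32-bit IEEE float (typical in shared mode).
        const double PI = 3.14159265358979323846;
        const double step = 2.0 * PI * 440.0 / format.nSamplesPerSec;
        float *out = (float*)pData;
        for (UINT32 i = 0; i < frameCount; i++)
        {
            float sample = (float)(0.1 * sin(phase));
            phase += step;
            for (WORD ch = 0; ch < format.nChannels; ch++)
                *out++ = sample;            // same sample on every channel
        }
        seconds += (double)frameCount / format.nSamplesPerSec;
        // After ~5 seconds report silence, which makes the render loop exit.
        *pFlags = (seconds >= 5.0) ? AUDCLNT_BUFFERFLAGS_SILENT : 0;
        return S_OK;
    }
};

PlayAudioStream also assumes COM is already initialized on the calling thread, so a caller would look roughly like this (again a sketch, not part of the original sample):

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    MyAudioSource source;
    HRESULT hr = PlayAudioStream(&source);

    CoUninitialize();
    return SUCCEEDED(hr) ? 0 : 1;
}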