How to use VideoToolbox to decompress H.264 video stream

Source: http://stackoverflow.com/questions/29525000/how-to-use-videotoolbox-to-decompress-h-264-video-stream/

I had a lot of trouble figuring out how to use Apple's hardware-accelerated video framework to decompress an H.264 video stream. After a few weeks I figured it out and wanted to share an extensive example, since I couldn't find one.

My goal is to give a thorough, instructive example of Video Toolbox, introduced in WWDC 2014 Session 513. My code will not compile or run as-is, since it needs to be integrated with an elementary H.264 stream (like a video read from a file or streamed from online, etc.) and needs to be tweaked depending on the specific case.

I should mention that I have very little experience with video encoding/decoding beyond what I learned while googling the subject. I don't know all the details about video formats, parameter structure, etc., so I've only included what I think you need to know.

I am using Xcode 6.2 and have deployed to iOS devices running iOS 8.1 and 8.2.

objective-c ios8 h.264 video-toolbox


asked Apr 8 at 20:44 by Livy Stork

3 Answers

Accepted answer:

Concepts:

NALUs: NALUs are simply chunks of data of varying length that begin with a NALU start code header 0x00 00 00 01 YY, where the first 5 bits of YY tell you what type of NALU this is and therefore what type of data follows the header. (Since you only need those 5 bits, I use YY & 0x1F to get just the relevant bits.) I list what all these types are in the array NSString * const naluTypesStrings[], but you don't need to know what they all are.

Parameters: Your decoder needs parameters so it knows how the H.264 video data is stored. The 2 you need to set are the Sequence Parameter Set (SPS) and Picture Parameter Set (PPS), and they each have their own NALU type number. You don't need to know what the parameters mean; the decoder knows what to do with them.

H.264 Stream Format: In most H.264 streams, you will receive an initial set of PPS and SPS parameters followed by an I frame (aka IDR frame or flush frame) NALU. Then you will receive several P frame NALUs (maybe a few dozen or so), then another set of parameters (which may be the same as the initial parameters) and an I frame, more P frames, etc. I frames are much bigger than P frames. Conceptually you can think of the I frame as an entire image of the video, and the P frames as just the changes that have been made to that I frame, until you receive the next I frame.
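
As a concrete illustration, here is a hedged sketch of what the start of such a stream can look like in memory. The payload bytes are elided and the exact header bytes vary between encoders; only the start codes and the low 5 type bits matter here (0x67 & 0x1F == 7, 0x68 & 0x1F == 8, 0x65 & 0x1F == 5, 0x41 & 0x1F == 1).

uint8_t annexBStream[] =
{
    0x00, 0x00, 0x00, 0x01, 0x67, /* ...SPS payload... */      // type 7: SPS
    0x00, 0x00, 0x00, 0x01, 0x68, /* ...PPS payload... */      // type 8: PPS
    0x00, 0x00, 0x00, 0x01, 0x65, /* ...IDR slice... */        // type 5: I frame
    0x00, 0x00, 0x00, 0x01, 0x41, /* ...non-IDR slice... */    // type 1: P frame
};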

Procedure:

  1. Generate individual NALUs from your H.264 stream. I cannot show code for this step since it depends a lot on what video source you're using. I made this graphic to show what I was working with ("data" in the graphic is "frame" in my following code), but your case may, and probably will, differ. My method receivedRawVideoFrame: is called every time I receive a frame (uint8_t *frame), which was one of 2 types. In the diagram, those 2 frame types are the 2 big purple boxes.
  2. Create a CMVideoFormatDescriptionRef from your SPS and PPS NALUs with CMVideoFormatDescriptionCreateFromH264ParameterSets( ). You cannot display any frames without doing this first. The SPS and PPS may look like a jumble of numbers, but VTD knows what to do with them. All you need to know is that CMVideoFormatDescriptionRef is a description of video data, like width/height, format type (kCMPixelFormat_32BGRA, kCMVideoCodecType_H264, etc.), aspect ratio, color space, etc. Your decoder will hold onto the parameters until a new set arrives (sometimes parameters are resent regularly even when they haven't changed).
  3. Re-package your IDR and non-IDR frame NALUs according to the "AVCC" format. This means removing the NALU start codes and replacing them with a 4-byte header that states the length of the NALU. You don't need to do this for the SPS and PPS NALUs. (Note that the 4-byte NALU length header is in big-endian, so if you have a UInt32 value it must be byte-swapped with CFSwapInt32 before being copied into the CMBlockBuffer. I do this in my code with the htonl function call.)
  4. Package the IDR and non-IDR NALU frames into a CMBlockBuffer. Do not do this with the SPS and PPS parameter NALUs. All you need to know about CMBlockBuffers is that they are a method to wrap arbitrary blocks of data in Core Media. (Any compressed video data in a video pipeline is wrapped in this.)
  5. Package the CMBlockBuffer into a CMSampleBuffer. All you need to know about CMSampleBuffers is that they wrap up our CMBlockBuffers with other information (here that would be the CMVideoFormatDescription and CMTime, if CMTime is used).
  6. Create a VTDecompressionSessionRef and feed the sample buffers into VTDecompressionSessionDecodeFrame( ). Alternatively, you can use AVSampleBufferDisplayLayer and its enqueueSampleBuffer: method, and then you won't need to use VTDecompSession. It's simpler to set up, but will not throw errors if something goes wrong the way VTD will.
  7. In the VTDecompSession callback, use the resultant CVImageBufferRef to display the video frame. If you need to convert your CVImageBuffer to a UIImage, see my StackOverflow answer here (a sketch of one approach also follows the callback code below).

Other notes:

  • H.264 streams can vary a lot. From what I learned, NALU start code headers are sometimes 3 bytes (0x00 00 01) and sometimes 4 (0x00 00 00 01). My code works for 4 bytes; you will need to change a few things around if you're working with 3 (see the sketch after this list).
  • If you want to know more about NALUs, I found this answer to be very helpful. In my case, I found that I didn't need to ignore the "emulation prevention" bytes as described, so I personally skipped that step, but you may need to know about that.
  • If your VTDecompressionSession outputs an error number (like -12909), look up the error code in your Xcode project. Find the VideoToolbox framework in the project navigator, open it, and find the header VTErrors.h. If you can't find it, I've also included all the error codes below in another answer.
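
As referenced in the first note above, here is a minimal sketch of telling the two start code lengths apart. The helper name is mine (not from the original code), and it assumes i + 3 stays within the frame's bounds:

int startCodeLengthAt(uint8_t *frame, int i)
{
    // 4-byte start code: 0x00 00 00 01
    if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
        return 4;

    // 3-byte start code: 0x00 00 01
    if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x01)
        return 3;

    return 0;   // not a start code
}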

Code Example:

So let's start by declaring some global variables and including the VT framework (VT = Video Toolbox).

#import <VideoToolbox/VideoToolbox.h>

@property (nonatomic, assign) CMVideoFormatDescriptionRef formatDesc;
@property (nonatomic, assign) VTDecompressionSessionRef decompressionSession;
@property (nonatomic, retain) AVSampleBufferDisplayLayer *videoLayer;
@property (nonatomic, assign) int spsSize;
@property (nonatomic, assign) int ppsSize;

The following array is only used so that you can print out what type of NALU frame you are receiving. If you know what all these types mean, good for you, you know more about H.264 than me :) My code only handles types 1, 5, 7 and 8.

NSString * const naluTypesStrings[] =
{
    @"0: Unspecified (non-VCL)",
    @"1: Coded slice of a non-IDR picture (VCL)",    // P frame
    @"2: Coded slice data partition A (VCL)",
    @"3: Coded slice data partition B (VCL)",
    @"4: Coded slice data partition C (VCL)",
    @"5: Coded slice of an IDR picture (VCL)",      // I frame
    @"6: Supplemental enhancement information (SEI) (non-VCL)",
    @"7: Sequence parameter set (non-VCL)",         // SPS parameter
    @"8: Picture parameter set (non-VCL)",          // PPS parameter
    @"9: Access unit delimiter (non-VCL)",
    @"10: End of sequence (non-VCL)",
    @"11: End of stream (non-VCL)",
    @"12: Filler data (non-VCL)",
    @"13: Sequence parameter set extension (non-VCL)",
    @"14: Prefix NAL unit (non-VCL)",
    @"15: Subset sequence parameter set (non-VCL)",
    @"16: Reserved (non-VCL)",
    @"17: Reserved (non-VCL)",
    @"18: Reserved (non-VCL)",
    @"19: Coded slice of an auxiliary coded picture without partitioning (non-VCL)",
    @"20: Coded slice extension (non-VCL)",
    @"21: Coded slice extension for depth view components (non-VCL)",
    @"22: Reserved (non-VCL)",
    @"23: Reserved (non-VCL)",
    @"24: STAP-A Single-time aggregation packet (non-VCL)",
    @"25: STAP-B Single-time aggregation packet (non-VCL)",
    @"26: MTAP16 Multi-time aggregation packet (non-VCL)",
    @"27: MTAP24 Multi-time aggregation packet (non-VCL)",
    @"28: FU-A Fragmentation unit (non-VCL)",
    @"29: FU-B Fragmentation unit (non-VCL)",
    @"30: Unspecified (non-VCL)",
    @"31: Unspecified (non-VCL)",
};

Now this is where all the magic happens.

-(void) receivedRawVideoFrame:(uint8_t *)frame withSize:(uint32_t)frameSize isIFrame:(int)isIFrame
{
    OSStatus status = noErr;   // initialize so later checks don't read an undefined value

    uint8_t *data = NULL;
    uint8_t *pps = NULL;
    uint8_t *sps = NULL;

    // I know what my H.264 data source's NALUs look like so I know the start code index is always 0.
    // if you don't know where it starts, you can use a for loop similar to how I find the 2nd and 3rd start codes
    int startCodeIndex = 0;
    int secondStartCodeIndex = 0;
    int thirdStartCodeIndex = 0;

    long blockLength = 0;

    CMSampleBufferRef sampleBuffer = NULL;
    CMBlockBufferRef blockBuffer = NULL;

    int nalu_type = (frame[startCodeIndex + 4] & 0x1F);
    NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);

    // if we haven't already set up our format description with our SPS PPS parameters, we
    // can't process any frames except type 7, which has our parameters
    if (nalu_type != 7 && _formatDesc == NULL)
    {
        NSLog(@"Video error: Frame is not an I Frame and format description is null");
        return;
    }

    // NALU type 7 is the SPS parameter NALU
    if (nalu_type == 7)
    {
        // find where the second PPS start code begins, (the 0x00 00 00 01 code)
        // from which we also get the length of the first SPS code
        for (int i = startCodeIndex + 4; i < startCodeIndex + 40; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                secondStartCodeIndex = i;
                _spsSize = secondStartCodeIndex;   // includes the header in the size
                break;
            }
        }

        // find what the second NALU type is
        nalu_type = (frame[secondStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // type 8 is the PPS parameter NALU
    if(nalu_type == 8)
    {
        // find where the NALU after this one starts so we know how long the PPS parameter is
        for (int i = _spsSize + 4; i < _spsSize + 30; i++)
        {
            if (frame[i] == 0x00 && frame[i+1] == 0x00 && frame[i+2] == 0x00 && frame[i+3] == 0x01)
            {
                thirdStartCodeIndex = i;
                _ppsSize = thirdStartCodeIndex - _spsSize;
                break;
            }
        }

        // allocate enough data to fit the SPS and PPS parameters into our data objects.
        // VTD doesn't want you to include the start code header (4 bytes long) so we subtract 4 here
        sps = malloc(_spsSize - 4);
        pps = malloc(_ppsSize - 4);

        // copy in the actual sps and pps values, again ignoring the 4 byte header
        memcpy (sps, &frame[4], _spsSize-4);
        memcpy (pps, &frame[_spsSize+4], _ppsSize-4);

        // now we set our H264 parameters
        uint8_t*  parameterSetPointers[2] = {sps, pps};
        size_t parameterSetSizes[2] = {_spsSize-4, _ppsSize-4};

        status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault, 2,
                                                (const uint8_t *const*)parameterSetPointers,
                                                parameterSetSizes, 4,
                                                &_formatDesc);

        NSLog(@"\t\t Creation of CMVideoFormatDescription: %@", (status == noErr) ? @"successful!" : @"failed...");
        if(status != noErr) NSLog(@"\t\t Format Description ERROR type: %d", (int)status);

        // See if decomp session can convert from previous format description
        // to the new one, if not we need to remake the decomp session.
        // This snippet was not necessary for my applications but it could be for yours
        /*BOOL needNewDecompSession = (VTDecompressionSessionCanAcceptFormatDescription(_decompressionSession, _formatDesc) == NO);
         if(needNewDecompSession)
         {
             [self createDecompSession];
         }*/

        // now let's handle the IDR frame that (should) come after the parameter sets
        // I say "should" because that's how I expect my H264 stream to work, YMMV
        nalu_type = (frame[thirdStartCodeIndex + 4] & 0x1F);
        NSLog(@"~~~~~~~ Received NALU Type \"%@\" ~~~~~~~~", naluTypesStrings[nalu_type]);
    }

    // create our VTDecompressionSession.  This isn't necessary if you choose to use AVSampleBufferDisplayLayer
    if((status == noErr) && (_decompressionSession == NULL))
    {
        [self createDecompSession];
    }

    // type 5 is an IDR frame NALU.  The SPS and PPS NALUs should always be followed by an IDR (or IFrame) NALU, as far as I know
    if(nalu_type == 5)
    {
        // find the offset, or where the SPS and PPS NALUs end and the IDR frame NALU begins
        int offset = _spsSize + _ppsSize;
        blockLength = frameSize - offset;
        data = malloc(blockLength);
        data = memcpy(data, &frame[offset], blockLength);

        // replace the start code header on this NALU with its size.
        // AVCC format requires that you do this.
        // htonl converts the unsigned int from host to network byte order
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));

        // create a block buffer from the IDR NALU
        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold buffered data
                                                    blockLength,  // block length of the mem block in bytes.
                                                    kCFAllocatorNull, NULL,
                                                    0, // offsetToData
                                                    blockLength,   // dataLength of relevant bytes, starting at offsetToData
                                                    0, &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // NALU type 1 is non-IDR (or PFrame) picture
    if (nalu_type == 1)
    {
        // non-IDR frames do not have an offset due to SPS and PPS, so the approach
        // is similar to the IDR frames, just without the offset
        blockLength = frameSize;
        data = malloc(blockLength);
        data = memcpy(data, &frame[0], blockLength);

        // again, replace the start header with the size of the NALU
        uint32_t dataLength32 = htonl (blockLength - 4);
        memcpy (data, &dataLength32, sizeof (uint32_t));

        status = CMBlockBufferCreateWithMemoryBlock(NULL, data,  // memoryBlock to hold data. If NULL, block will be alloc when needed
                                                    blockLength,  // overall length of the mem block in bytes
                                                    kCFAllocatorNull, NULL,
                                                    0,     // offsetToData
                                                    blockLength,  // dataLength of relevant data bytes, starting at offsetToData
                                                    0, &blockBuffer);

        NSLog(@"\t\t BlockBufferCreation: \t %@", (status == kCMBlockBufferNoErr) ? @"successful!" : @"failed...");
    }

    // now create our sample buffer from the block buffer,
    if(status == noErr)
    {
        // here I'm not bothering with any timing specifics since in my case we displayed all frames immediately
        const size_t sampleSize = blockLength;
        status = CMSampleBufferCreate(kCFAllocatorDefault,
                                      blockBuffer, true, NULL, NULL,
                                      _formatDesc, 1, 0, NULL, 1,
                                      &sampleSize, &sampleBuffer);

        NSLog(@"\t\t SampleBufferCreate: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    }

    if(status == noErr)
    {
        // set some values of the sample buffer's attachments
        CFArrayRef attachments = CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, YES);
        CFMutableDictionaryRef dict = (CFMutableDictionaryRef)CFArrayGetValueAtIndex(attachments, 0);
        CFDictionarySetValue(dict, kCMSampleAttachmentKey_DisplayImmediately, kCFBooleanTrue);

        // either send the samplebuffer to a VTDecompressionSession or to an AVSampleBufferDisplayLayer
        [self render:sampleBuffer];
    }

    // free memory to avoid a memory leak; do the same for sps, pps and the block buffer
    if (NULL != data)
    {
        free (data);
        data = NULL;
    }
    if (NULL != sps)  { free (sps);  sps = NULL; }
    if (NULL != pps)  { free (pps);  pps = NULL; }
    if (NULL != blockBuffer) { CFRelease(blockBuffer); blockBuffer = NULL; }
}

The following method creates your VTD session. Recreate it whenever you receive new parameters. (You don't have to recreate it every single time you receive parameters, pretty sure.)

If you want to set attributes for the destination CVPixelBuffer, read up on CoreVideo PixelBufferAttributes values and put them in NSDictionary *destinationImageBufferAttributes.

-(void) createDecompSession
{
    // make sure to destroy the old VTD session before replacing it
    if (_decompressionSession != NULL)
    {
        VTDecompressionSessionInvalidate(_decompressionSession);
        CFRelease(_decompressionSession);
    }
    _decompressionSession = NULL;
    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;

    // this is necessary if you need to make calls to Objective-C "self" from within the callback method.
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;

    // you can set some desired attributes for the destination pixel buffer.  I didn't use this but you may
    // if you need to set some attributes, be sure to uncomment the dictionary in VTDecompressionSessionCreate
    NSDictionary *destinationImageBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                                      [NSNumber numberWithBool:YES],
                                                      (id)kCVPixelBufferOpenGLESCompatibilityKey,
                                                      nil];

    OSStatus status =  VTDecompressionSessionCreate(NULL, _formatDesc, NULL,
                                                    NULL, // (__bridge CFDictionaryRef)(destinationImageBufferAttributes)
                                                    &callBackRecord, &_decompressionSession);
    NSLog(@"Video Decompression Session Create: \t %@", (status == noErr) ? @"successful!" : @"failed...");
    if(status != noErr) NSLog(@"\t\t VTD ERROR type: %d", (int)status);
}

Now this method gets called every time the VTD is done decompressing any frame you sent to it. This method gets called even if there's an error or if the frame is dropped.

void decompressionSessionDecodeFrameCallback(void *decompressionOutputRefCon,
                                             void *sourceFrameRefCon,
                                             OSStatus status,
                                             VTDecodeInfoFlags infoFlags,
                                             CVImageBufferRef imageBuffer,
                                             CMTime presentationTimeStamp,
                                             CMTime presentationDuration)
{
    THISCLASSNAME *streamManager = (__bridge THISCLASSNAME *)decompressionOutputRefCon;

    if (status != noErr)
    {
        NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:status userInfo:nil];
        NSLog(@"Decompressed error: %@", error);
    }
    else
    {
        NSLog(@"Decompressed sucessfully");

        // do something with your resulting CVImageBufferRef that is your decompressed frame
        [streamManager displayDecodedFrame:imageBuffer];
    }
}
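
Step 7 above mentioned converting the resulting CVImageBufferRef to a UIImage. Here is a minimal sketch of one way to do it via Core Image (my own variant, not necessarily the approach in the linked answer; displayDecodedFrame: and the imageView property are hypothetical names):

-(void) displayDecodedFrame:(CVImageBufferRef)imageBuffer
{
    // wrap the pixel buffer in a CIImage, then render it out to a CGImage
    // (in real code, create the CIContext once and reuse it across frames)
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect rect = CGRectMake(0, 0,
                             CVPixelBufferGetWidth(imageBuffer),
                             CVPixelBufferGetHeight(imageBuffer));
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:rect];
    UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    // UIKit must be touched on the main thread; the VTD callback may arrive on another one
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = uiImage;   // hypothetical UIImageView property
    });
}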

This is where we actually send the sampleBuffer off to the VTD to be decoded.

- (void) render:(CMSampleBufferRef)sampleBuffer
{
    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
    VTDecodeInfoFlags flagOut;
    NSDate* currentTime = [NSDate date];
    VTDecompressionSessionDecodeFrame(_decompressionSession, sampleBuffer, flags,
                                      (void*)CFBridgingRetain(currentTime), &flagOut);

    CFRelease(sampleBuffer);

    // if you're using AVSampleBufferDisplayLayer, you only need to use this line of code
    // [videoLayer enqueueSampleBuffer:sampleBuffer];
}
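
One related note (an addition of mine, not from the original answer): since we pass the asynchronous decompression flag, frames may still be in flight after this method returns. If you ever need to block until the decoder has emitted everything you submitted, for example right before tearing the session down, Video Toolbox provides a call for that:

// block until all pending asynchronous frames have been output
VTDecompressionSessionWaitForAsynchronousFrames(_decompressionSession);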

If you're using AVSampleBufferDisplayLayer, be sure to init the layer like this, in viewDidLoad or inside some other init method.

-(void) viewDidLoad
{
    // create our AVSampleBufferDisplayLayer and add it to the view
    videoLayer = [[AVSampleBufferDisplayLayer alloc] init];
    videoLayer.frame = self.view.frame;
    videoLayer.bounds = self.view.bounds;
    videoLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    // set up the Timebase; you may need this if you need to display frames at specific times
    // I didn't need it so I haven't verified that the timebase is working
    CMTimebaseRef controlTimebase;
    CMTimebaseCreateWithMasterClock(CFAllocatorGetDefault(), CMClockGetHostTimeClock(), &controlTimebase);

    //videoLayer.controlTimebase = controlTimebase;
    CMTimebaseSetTime(self.videoLayer.controlTimebase, kCMTimeZero);
    CMTimebaseSetRate(self.videoLayer.controlTimebase, 1.0);

    [[self.view layer] addSublayer:videoLayer];
}
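
One caveat from step 6 was that AVSampleBufferDisplayLayer will not report errors the way the VTD callback does. As a partial workaround (my own addition, not from the original answer), you can at least poll the layer's status after enqueueing:

// check whether the layer has entered the failed state, and if so log why
if (self.videoLayer.status == AVQueuedSampleBufferRenderingStatusFailed)
{
    NSLog(@"AVSampleBufferDisplayLayer failed: %@", self.videoLayer.error);
    [self.videoLayer flush];   // hypothetical recovery: flush, then wait for the next IDR
}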

answered Apr 8 at 20:44 by Livy Stork (edited Nov 10 at 13:19 by jpihl)

Comments:

  • This is great! I actually got this working just before finding this awesome example. I was getting the error VTDecompressionSessionDecodeFrame: -12911. Make sure the correct blockLength is sent to CMBlockBufferCreateWithMemoryBlock. – 3rdLion, Apr 9 at 4:53

  • @Livy, firstly, thanks for the great example. However, I'm running into a few issues when streaming from the network. It looks like my picture NALUs are grouped in packs of 2 - when I enqueue every single NALU the image looks terrible, but when I copy the bytes from 2 NALUs into a single block buffer it's "almost" fine. "Almost" because sometimes the image gets broken. Have you encountered such issues? – Tomasz Wójcik, May 12 at 12:25

  • @LivyStork I'm just receiving packets from the network, containing NALUs of type 1 or 5. Then I just replace the first 3 or 4 bytes with the length (AVCC), build the block buffer, then build the sample buffer with attachments and enqueue it to the AVSampleBufferDisplayLayer. When I enqueue a single block buffer I get like half of the image, or simply a broken image. With 2 block buffers in the sample it's almost fine (sometimes image tearing, sometimes a fine image). I'm pretty sure that emulation prevention bytes are not the issue, because when I save the image to disk I can play it with mplayer without problems. – Tomasz Wójcik, May 13 at 7:18

  • @LivyStork From my investigation, paired NALUs correspond to the same frame number (I've parsed the slice header - I hope I did that correctly and this is not just a coincidence). Maybe the AVSampleBufferDisplayLayer gets hogged by lots of sample buffers and cannot handle it? I have no idea how it works "under the hood". – Tomasz Wójcik, May 13 at 7:22

  • One of the best SO answers I've seen. Thanks a bunch. I wish I had this resource when I was trying to get HW decoding working for my app; it would have made things much easier. – braden, May 21 at 19:42

Answer:

If you can't find the VTD error codes in the framework, I decided to just include them here. (Again, all these errors and more can be found inside VideoToolbox.framework itself in the project navigator.)

You will get one of these error codes either in the VTD decode frame callback or when you create your VTD session if you did something incorrectly.

kVTPropertyNotSupportedErr              = -12900,
kVTPropertyReadOnlyErr                  = -12901,
kVTParameterErr                         = -12902,
kVTInvalidSessionErr                    = -12903,
kVTAllocationFailedErr                  = -12904,
kVTPixelTransferNotSupportedErr         = -12905, // c.f. -8961
kVTCouldNotFindVideoDecoderErr          = -12906,
kVTCouldNotCreateInstanceErr            = -12907,
kVTCouldNotFindVideoEncoderErr          = -12908,
kVTVideoDecoderBadDataErr               = -12909, // c.f. -8969
kVTVideoDecoderUnsupportedDataFormatErr = -12910, // c.f. -8970
kVTVideoDecoderMalfunctionErr           = -12911, // c.f. -8960
kVTVideoEncoderMalfunctionErr           = -12912,
kVTVideoDecoderNotAvailableNowErr       = -12913,
kVTImageRotationNotSupportedErr         = -12914,
kVTVideoEncoderNotAvailableNowErr       = -12915,
kVTFormatDescriptionChangeNotSupportedErr   = -12916,
kVTInsufficientSourceColorDataErr       = -12917,
kVTCouldNotCreateColorCorrectionDataErr = -12918,
kVTColorSyncTransformConvertFailedErr   = -12919,
kVTVideoDecoderAuthorizationErr         = -12210,
kVTVideoEncoderAuthorizationErr         = -12211,
kVTColorCorrectionPixelTransferFailedErr    = -12212,
kVTMultiPassStorageIdentifierMismatchErr    = -12213,
kVTMultiPassStorageInvalidErr           = -12214,
kVTFrameSiloInvalidTimeStampErr         = -12215,
kVTFrameSiloInvalidTimeRangeErr         = -12216,
kVTCouldNotFindTemporalFilterErr        = -12217,
kVTPixelTransferNotPermittedErr         = -12218,

answered Apr 9 at 15:42 by Livy Stork (edited Jun 8 at 18:03)

Answer:

In addition to the VTErrors above, I thought it worth adding the CMFormatDescription, CMBlockBuffer, and CMSampleBuffer errors that you may encounter while trying Livy's example.

kCMFormatDescriptionError_InvalidParameter  = -12710,
kCMFormatDescriptionError_AllocationFailed  = -12711,
kCMFormatDescriptionError_ValueNotAvailable = -12718,

kCMBlockBufferNoErr                             = 0,
kCMBlockBufferStructureAllocationFailedErr      = -12700,
kCMBlockBufferBlockAllocationFailedErr          = -12701,
kCMBlockBufferBadCustomBlockSourceErr           = -12702,
kCMBlockBufferBadOffsetParameterErr             = -12703,
kCMBlockBufferBadLengthParameterErr             = -12704,
kCMBlockBufferBadPointerParameterErr            = -12705,
kCMBlockBufferEmptyBBufErr                      = -12706,
kCMBlockBufferUnallocatedBlockErr               = -12707,
kCMBlockBufferInsufficientSpaceErr              = -12708,

kCMSampleBufferError_AllocationFailed             = -12730,
kCMSampleBufferError_RequiredParameterMissing     = -12731,
kCMSampleBufferError_AlreadyHasDataBuffer         = -12732,
kCMSampleBufferError_BufferNotReady               = -12733,
kCMSampleBufferError_SampleIndexOutOfRange        = -12734,
kCMSampleBufferError_BufferHasNoSampleSizes       = -12735,
kCMSampleBufferError_BufferHasNoSampleTimingInfo  = -12736,
kCMSampleBufferError_ArrayTooSmall                = -12737,
kCMSampleBufferError_InvalidEntryCount            = -12738,
kCMSampleBufferError_CannotSubdivide              = -12739,
kCMSampleBufferError_SampleTimingInfoInvalid      = -12740,
kCMSampleBufferError_InvalidMediaTypeForOperation = -12741,
kCMSampleBufferError_InvalidSampleData            = -12742,
kCMSampleBufferError_InvalidMediaFormat           = -12743,
kCMSampleBufferError_Invalidated                  = -12744,
kCMSampleBufferError_DataFailed                   = -16750,
kCMSampleBufferError_DataCanceled                 = -16751,

answered Jun 8 at 17:49 by Jetdog

  • Agreed. Again, all these errors and more can be found inside VideoToolbox.framework itself in the project navigator. Like so: gyazo.com/d1f954bcd353a418367f0db012046736 – Livy Stork, Jun 8 at 18:02
