This page documents a work-in-progress feature; do not assume it is available on the mainline branch.


The current AVCodec API assumes that for each input packet (or frame) an output frame (or packet) is produced, with minimal caching between the two.

In a number of situations this model does not work at all (e.g. hevc pseudo-interlace would produce two packets for each input frame, and multi-unit hardware decoders would buffer and output similarly).

API Design


int avcodec_decode_push(AVCodecContext *avctx, AVPacket *packet);
int avcodec_decode_pull(AVCodecContext *avctx, AVFrame *frame);
int avcodec_decode_need_data(AVCodecContext *avctx);
int avcodec_decode_have_data(AVCodecContext *avctx);

Packets can be pushed to the decoder as long as need_data returns 1; once have_data returns 1, the decoded frames can be pulled.


int avcodec_encode_push(AVCodecContext *avctx, AVFrame *frame);
int avcodec_encode_pull(AVCodecContext *avctx, AVPacket *packet);
int avcodec_encode_need_data(AVCodecContext *avctx);
int avcodec_encode_have_data(AVCodecContext *avctx);

Encoding works similarly, with the input and output types swapped: frames are pushed and encoded packets are pulled.


The compatibility API would require the monolithic decode and encode functions to return an appropriate AVERROR to signal that the function should be called again with no input data, so the additional buffered output can be retrieved before the end of the stream.

CategoryBlueprint CategoryBlueprintActive