An encoder and a decoder can be implemented within the same file if they are small enough; if large, they should be split into at least three groups of files: code shared between the encoder and the decoder, encoder-specific code and decoder-specific code.


If a new codec is being introduced, a new AV_CODEC_ID_* value must be added to enum AVCodecID in libavcodec/avcodec.h, and a new AVCodecDescriptor must be added to codec_descriptors in libavcodec/codec_desc.c.
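For illustration, a descriptor entry in codec_descriptors for a hypothetical video codec (all names here are placeholders, not real identifiers) would look roughly like:

```c
{
    .id        = AV_CODEC_ID_MYCODEC,   /* hypothetical new entry in enum AVCodecID */
    .type      = AVMEDIA_TYPE_VIDEO,
    .name      = "mycodec",
    .long_name = NULL_IF_CONFIG_SMALL("My Example Codec"),
    .props     = AV_CODEC_PROP_LOSSY,
},
```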


The AVCodec structure is used by both encoders and decoders, but different fields must be filled.

typedef struct AVCodec {
    /**
     * Name of the codec implementation.
     * The name is globally unique among encoders and among decoders (but an
     * encoder and a decoder can share the same name).
     * This is the primary way to find a codec from the user perspective.
     */
    const char *name;
    /**
     * Descriptive name for the codec, meant to be more human readable than name.
     * You should use the NULL_IF_CONFIG_SMALL() macro to define it.
     */
    const char *long_name;
    enum AVMediaType type;
    enum AVCodecID id;
    /**
     * Codec capabilities.
     * see CODEC_CAP_*
     */
    int capabilities;
    const AVRational *supported_framerates; ///< array of supported framerates, or NULL if any, array is terminated by {0,0}
    const enum AVPixelFormat *pix_fmts;     ///< array of supported pixel formats, or NULL if unknown, array is terminated by -1
    const int *supported_samplerates;       ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0
    const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1
    const uint64_t *channel_layouts;        ///< array of supported channel layouts, or NULL if unknown, array is terminated by 0
    attribute_deprecated uint8_t max_lowres; ///< maximum value for lowres supported by the decoder
    const AVClass *priv_class;              ///< AVClass for the private context
    const AVProfile *profiles;              ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}

    /**
     * No fields below this line are part of the public API. They
     * may not be used outside of libavcodec and can be changed and
     * removed at will.
     * New public fields should be added right above.
     */
    int priv_data_size;
    struct AVCodec *next;
    /**
     * @name Frame-level threading support functions
     * @{
     */
    /**
     * If defined, called on thread contexts when they are created.
     * If the codec allocates writable tables in init(), re-allocate them here.
     * priv_data will be set to a copy of the original.
     */
    int (*init_thread_copy)(AVCodecContext *);
    /**
     * Copy necessary context variables from a previous thread context to the current one.
     * If not defined, the next thread will start automatically; otherwise, the codec
     * must call ff_thread_finish_setup().
     *
     * dst and src will (rarely) point to the same context, in which case memcpy should be skipped.
     */
    int (*update_thread_context)(AVCodecContext *dst, const AVCodecContext *src);
    /** @} */

    /**
     * Private codec-specific defaults.
     */
    const AVCodecDefault *defaults;

    /**
     * Initialize codec static data, called from avcodec_register().
     */
    void (*init_static_data)(struct AVCodec *codec);

    int (*init)(AVCodecContext *);
    int (*encode_sub)(AVCodecContext *, uint8_t *buf, int buf_size,
                      const struct AVSubtitle *sub);
    /**
     * Encode data to an AVPacket.
     *
     * @param      avctx          codec context
     * @param      avpkt          output AVPacket (may contain a user-provided buffer)
     * @param[in]  frame          AVFrame containing the raw data to be encoded
     * @param[out] got_packet_ptr encoder sets to 0 or 1 to indicate that a
     *                            non-empty packet was returned in avpkt.
     * @return 0 on success, negative error code on failure
     */
    int (*encode2)(AVCodecContext *avctx, AVPacket *avpkt, const AVFrame *frame,
                   int *got_packet_ptr);
    int (*decode)(AVCodecContext *, void *outdata, int *outdata_size, AVPacket *avpkt);
    int (*close)(AVCodecContext *);
    /**
     * Flush buffers.
     * Will be called when seeking
     */
    void (*flush)(AVCodecContext *);
} AVCodec;
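Putting it together, a minimal decoder definition could look like the sketch below. All names are hypothetical placeholders; only the AVCodec field names are real:

```c
AVCodec ff_mycodec_decoder = {
    .name           = "mycodec",                          /* hypothetical */
    .long_name      = NULL_IF_CONFIG_SMALL("My Example Codec"),
    .type           = AVMEDIA_TYPE_VIDEO,
    .id             = AV_CODEC_ID_MYCODEC,                /* hypothetical */
    .priv_data_size = sizeof(MyCodecContext),             /* private context struct */
    .init           = mycodec_init,
    .close          = mycodec_close,
    .decode         = mycodec_decode_frame,
    .capabilities   = CODEC_CAP_DR1,
};
```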


Libav uses succinct Makefile rules: add the object files your codec needs to OBJS-$(CONFIG_NAME_DECODER) (or OBJS-$(CONFIG_NAME_ENCODER)) in libavcodec/Makefile.

Any common dependencies can be expressed in configure.

OBJS-$(CONFIG_VP8_DECODER)             += vp8.o vp8dsp.o vp56rac.o
OBJS-$(CONFIG_VP9_DECODER)             += vp9.o vp9data.o vp9dsp.o \
                                          vp9block.o vp9prob.o vp9mvs.o vp56rac.o

Configure dependencies

Dependencies across different components can be described using the _select and _deps suffixes.

vp8_decoder_select="h264pred videodsp"
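Dependencies on external libraries use the _deps suffix; for example, a wrapper for an external encoder library declares it like this (taken as a representative pattern, the exact line may differ across versions):

```sh
libtwolame_encoder_deps="libtwolame"
```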





Multithreading support

Libav provides infrastructure to implement either slice threading or frame threading. The former shortens latency, while the latter increases it but delivers better overall throughput.

Slice threading

Slice threading is signalled by adding CODEC_CAP_SLICE_THREADS to AVCodec.capabilities. In this mode several independent slice decoding tasks can run in parallel while decoding a single frame.

In order to use this feature one must use AVCodecContext.execute() or AVCodecContext.execute2(). The latter provides the called function with additional information about the execution environment (i.e. the thread number and the actual job number).

Both AVCodecContext.execute() and AVCodecContext.execute2() call the provided worker function once per job, passing the AVCodecContext and the next item of the caller-provided job data array.
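The two callbacks are declared in AVCodecContext (see libavcodec/avcodec.h); execute2()'s worker additionally receives the job and thread numbers:

```c
int (*execute)(struct AVCodecContext *c,
               int (*func)(struct AVCodecContext *c2, void *arg),
               void *arg2, int *ret, int count, int size);

int (*execute2)(struct AVCodecContext *c,
                int (*func)(struct AVCodecContext *c2, void *arg,
                            int jobnr, int threadnr),
                void *arg2, int *ret, int count);
```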

Here is a small example of how it should be used (note that execute() expects the worker to return an int):

struct SliceData {
    int y;
    int colour;
};

/* simply draw something in the slice */
static int worker(AVCodecContext *avctx, void *arg2)
{
    LocalCodecContext *c = avctx->priv_data;
    struct SliceData *sd = arg2;
    int i;

    for (i = 0; i < 20; i++)
        c->frame->data[0][i + sd->y * c->frame->linesize[0]] = sd->colour;

    return 0;
}

/* let's assume we have N slices */
static void decode_slices(AVCodecContext *avctx)
{
    int i;
    LocalCodecContext *c = avctx->priv_data;
    struct SliceData *data = c->slice_data;

    for (i = 0; i < N; i++) {
        data[i].y      = avctx->height * i / N;
        data[i].colour = i * 3 + N;
    }

    avctx->execute(avctx, worker, data, NULL, N, sizeof(data[0]));
}

BEWARE: the workers are intended to be called from multiple threads, so do not change any variables in the codec context, and make sure each worker writes into a non-overlapping frame area; otherwise hard-to-debug bugs may happen.

Frame threading

Frame threading is signalled by adding CODEC_CAP_FRAME_THREADS to AVCodec.capabilities. In this mode frame decoding may be called in parallel for different frames. This is an ideal decoding mode for intraframe-only codecs (unless one can decode them by slices).

#include "thread.h"

static int decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
                        AVPacket *avpkt)
{
    ThreadFrame frame = { .f = data };
    int ret;

    if ((ret = ff_thread_get_buffer(avctx, &frame, 0)) < 0) {
        av_log(avctx, AV_LOG_ERROR, "get_buffer() failed\n");
        return ret;
    }

    // continue working with frame.f as usual
}

BEWARE: frame decoding and codec cleanup are called in each thread, with copies of the codec context that are initialised only once. Do not allocate data in the codec init() or it will be scheduled to be freed multiple times at the end; allocate the necessary tables in decode_frame() instead.
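Codecs with inter-frame dependencies must additionally synchronise on reference frames using the progress helpers declared in thread.h. The call sites below are an illustrative sketch, not code from a specific decoder:

```c
/* In the decoding thread, after finishing each row (or other unit): */
ff_thread_report_progress(&frame, row, 0);

/* In a thread that needs data from a reference frame, before reading it: */
ff_thread_await_progress(&ref_frame, row, 0);
```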






Often encoders should have specific defaults that differ from the global ones in order to provide decent output out of the box.

The most common parameter to override is the bitrate, followed by the quantization settings.
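Such defaults are expressed through the AVCodec.defaults field shown earlier, as an array of option-name/value string pairs. A sketch with hypothetical names and values:

```c
static const AVCodecDefault myenc_defaults[] = {
    { "b", "128k" },   /* hypothetical default bitrate */
    { NULL },
};

AVCodec ff_myenc_encoder = {
    /* ... */
    .defaults = myenc_defaults,
};
```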

Parameter negotiation

Audio Specific

Audio codecs should define arrays of the supported sample formats, channel layouts and sample rates, and set them in the AVCodec declaration.

static const int twolame_samplerates[] = {
    16000, 22050, 24000, 32000, 44100, 48000, 0
};

AVCodec ff_libtwolame_encoder = {
    /* ... */
    .sample_fmts    = (const enum AVSampleFormat[]) {
        AV_SAMPLE_FMT_FLT, AV_SAMPLE_FMT_FLTP,
        AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_S16P,
        AV_SAMPLE_FMT_NONE
    },
    .channel_layouts = (const uint64_t[]) {
        AV_CH_LAYOUT_MONO, AV_CH_LAYOUT_STEREO,
        0 },
    .supported_samplerates = twolame_samplerates,
};

Video Specific
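Video codecs analogously advertise their supported pixel formats (and encoders optionally their framerates) through the corresponding AVCodec arrays. An illustrative fragment with a hypothetical encoder name:

```c
AVCodec ff_myenc_encoder = {
    /* ... */
    .pix_fmts = (const enum AVPixelFormat[]) {
        AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE
    },
};
```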

CategoryHowTo CategoryWIP