FFmpeg 4.3 Audio/Video, Multi-Channel H.265 Surveillance Recording/Playback C++ Development, Part 8: Displaying YUV Files with SDL/VS/QT, Using FFmpeg's AVFrame

2024/11/4 18:20:41

I. AVFrame core recap: uint8_t *data[AV_NUM_DATA_POINTERS] and int linesize[AV_NUM_DATA_POINTERS]

AVFrame stores decoded (raw) data, both audio and video: for example YUV pixel data or PCM samples. See the first sentence of the AVFrame struct documentation below.

Its core fields are:

AV_NUM_DATA_POINTERS = 8;

uint8_t *data[AV_NUM_DATA_POINTERS];

 int linesize[AV_NUM_DATA_POINTERS];

uint8_t *data[AV_NUM_DATA_POINTERS];

data --> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
         ^              ^              ^
         |              |              |
      data[0]        data[1]        data[2]

For example, when pix_fmt = AV_PIX_FMT_YUV420P, the data is stored planar, in Y, U, V order:

data --> YYYYYYYYYYYYYYYYYYYYYYYYUUUUUUUUUUUUVVVVVVVVVVVV
         ^                       ^           ^
         |                       |           |
      data[0]                 data[1]     data[2]

 int linesize[AV_NUM_DATA_POINTERS];

linesize[i] is the size in bytes of one row of plane i. This field is needed because, for both YUV and RGB formats, the size of a row in bytes is not necessarily equal to the image width.

linesize = width + padding size (e.g. 16 + 16) for YUV
linesize = width * pixel_size for RGB
The padding is needed during motion estimation and motion compensation, to optimize the MV search and P/B-frame reconstruction.

For RGB only one plane is used.
So for RGB24: data[0] = packed rgbrgbrgbrgb......
              linesize[0] = width * 3
data[1], data[2], data[3] and linesize[1], linesize[2], linesize[3] have no meaning for RGB.

Verification code for the byte counts in linesize[x] can be found in Section II (core functions) below.

/**
 * This structure describes decoded (raw) audio or video data.
 *
 * AVFrame must be allocated using av_frame_alloc(). Note that this only
 * allocates the AVFrame itself, the buffers for the data must be managed
 * through other means (see below).
 * AVFrame must be freed with av_frame_free().
 *
 * AVFrame is typically allocated once and then reused multiple times to hold
 * different data (e.g. a single AVFrame to hold frames received from a
 * decoder). In such a case, av_frame_unref() will free any references held by
 * the frame and reset it to its original clean state before it
 * is reused again.
 *
 * The data described by an AVFrame is usually reference counted through the
 * AVBuffer API. The underlying buffer references are stored in AVFrame.buf /
 * AVFrame.extended_buf. An AVFrame is considered to be reference counted if at
 * least one reference is set, i.e. if AVFrame.buf[0] != NULL. In such a case,
 * every single data plane must be contained in one of the buffers in
 * AVFrame.buf or AVFrame.extended_buf.
 * There may be a single buffer for all the data, or one separate buffer for
 * each plane, or anything in between.
 *
 * sizeof(AVFrame) is not a part of the public ABI, so new fields may be added
 * to the end with a minor bump.
 *
 * Fields can be accessed through AVOptions, the name string used, matches the
 * C structure field name for fields accessible through AVOptions. The AVClass
 * for AVFrame can be obtained from avcodec_get_frame_class()
 */

typedef struct AVFrame {
#define AV_NUM_DATA_POINTERS 8
    /**
     * pointer to the picture/channel planes.
     * This might be different from the first allocated byte. For video,
     * it could even point to the end of the image data.
     *
     * All pointers in data and extended_data must point into one of the
     * AVBufferRef in buf or extended_buf.
     *
     * Some decoders access areas outside 0,0 - width,height, please
     * see avcodec_align_dimensions2(). Some filters and swscale can read
     * up to 16 bytes beyond the planes, if these filters are to be used,
     * then 16 extra bytes must be allocated.
     *
     * NOTE: Pointers not needed by the format MUST be set to NULL.
     *
     * @attention In case of video, the data[] pointers can point to the
     * end of image data in order to reverse line order, when used in
     * combination with negative values in the linesize[] array.
     */
    uint8_t *data[AV_NUM_DATA_POINTERS];

    /**
     * For video, a positive or negative value, which is typically indicating
     * the size in bytes of each picture line, but it can also be:
     * - the negative byte size of lines for vertical flipping
     *   (with data[n] pointing to the end of the data
     * - a positive or negative multiple of the byte size as for accessing
     *   even and odd fields of a frame (possibly flipped)
     *
     * For audio, only linesize[0] may be set. For planar audio, each channel
     * plane must be the same size.
     *
     * For video the linesizes should be multiples of the CPUs alignment
     * preference, this is 16 or 32 for modern desktop CPUs.
     * Some code requires such alignment other code can be slower without
     * correct alignment, for yet other it makes no difference.
     *
     * @note The linesize may be larger than the size of usable data -- there
     * may be extra padding present for performance reasons.
     *
     * @attention In case of video, line size values can be negative to achieve
     * a vertically inverted iteration over image lines.
     */
    int linesize[AV_NUM_DATA_POINTERS];

    /**
     * pointers to the data planes/channels.
     *
     * For video, this should simply point to data[].
     *
     * For planar audio, each channel has a separate data pointer, and
     * linesize[0] contains the size of each channel buffer.
     * For packed audio, there is just one data pointer, and linesize[0]
     * contains the total size of the buffer for all channels.
     *
     * Note: Both data and extended_data should always be set in a valid frame,
     * but for planar audio with more channels that can fit in data,
     * extended_data must be used in order to access all channels.
     */
    uint8_t **extended_data;

    /**
     * @name Video dimensions
     * Video frames only. The coded dimensions (in pixels) of the video frame,
     * i.e. the size of the rectangle that contains some well-defined values.
     *
     * @note The part of the frame intended for display/presentation is further
     * restricted by the @ref cropping "Cropping rectangle".
     * @{
     */
    int width, height;
    /**
     * @}
     */

    /**
     * number of audio samples (per channel) described by this frame
     */
    int nb_samples;

    /**
     * format of the frame, -1 if unknown or unset
     * Values correspond to enum AVPixelFormat for video frames,
     * enum AVSampleFormat for audio)
     */
    int format;

    /**
     * 1 -> keyframe, 0-> not
     */
    int key_frame;

    /**
     * Picture type of the frame.
     */
    enum AVPictureType pict_type;

    /**
     * Sample aspect ratio for the video frame, 0/1 if unknown/unspecified.
     */
    AVRational sample_aspect_ratio;

    /**
     * Presentation timestamp in time_base units (time when frame should be shown to user).
     */
    int64_t pts;

    /**
     * DTS copied from the AVPacket that triggered returning this frame. (if frame threading isn't used)
     * This is also the Presentation time of this AVFrame calculated from
     * only AVPacket.dts values without pts values.
     */
    int64_t pkt_dts;

    /**
     * Time base for the timestamps in this frame.
     * In the future, this field may be set on frames output by decoders or
     * filters, but its value will be by default ignored on input to encoders
     * or filters.
     */
    AVRational time_base;

#if FF_API_FRAME_PICTURE_NUMBER
    /**
     * picture number in bitstream order
     */
    attribute_deprecated
    int coded_picture_number;
    /**
     * picture number in display order
     */
    attribute_deprecated
    int display_picture_number;
#endif

    /**
     * quality (between 1 (good) and FF_LAMBDA_MAX (bad))
     */
    int quality;

    /**
     * for some private data of the user
     */
    void *opaque;

    /**
     * When decoding, this signals how much the picture must be delayed.
     * extra_delay = repeat_pict / (2*fps)
     */
    int repeat_pict;

    /**
     * The content of the picture is interlaced.
     */
    int interlaced_frame;

    /**
     * If the content is interlaced, is top field displayed first.
     */
    int top_field_first;

    /**
     * Tell user application that palette has changed from previous frame.
     */
    int palette_has_changed;

#if FF_API_REORDERED_OPAQUE
    /**
     * reordered opaque 64 bits (generally an integer or a double precision float
     * PTS but can be anything).
     * The user sets AVCodecContext.reordered_opaque to represent the input at
     * that time,
     * the decoder reorders values as needed and sets AVFrame.reordered_opaque
     * to exactly one of the values provided by the user through AVCodecContext.reordered_opaque
     *
     * @deprecated Use AV_CODEC_FLAG_COPY_OPAQUE instead
     */
    attribute_deprecated
    int64_t reordered_opaque;
#endif

    /**
     * Sample rate of the audio data.
     */
    int sample_rate;

#if FF_API_OLD_CHANNEL_LAYOUT
    /**
     * Channel layout of the audio data.
     * @deprecated use ch_layout instead
     */
    attribute_deprecated
    uint64_t channel_layout;
#endif

    /**
     * AVBuffer references backing the data for this frame. All the pointers in
     * data and extended_data must point inside one of the buffers in buf or
     * extended_buf. This array must be filled contiguously -- if buf[i] is
     * non-NULL then buf[j] must also be non-NULL for all j < i.
     *
     * There may be at most one AVBuffer per data plane, so for video this array
     * always contains all the references. For planar audio with more than
     * AV_NUM_DATA_POINTERS channels, there may be more buffers than can fit in
     * this array. Then the extra AVBufferRef pointers are stored in the
     * extended_buf array.
     */
    AVBufferRef *buf[AV_NUM_DATA_POINTERS];

    /**
     * For planar audio which requires more than AV_NUM_DATA_POINTERS
     * AVBufferRef pointers, this array will hold all the references which
     * cannot fit into AVFrame.buf.
     *
     * Note that this is different from AVFrame.extended_data, which always
     * contains all the pointers. This array only contains the extra pointers,
     * which cannot fit into AVFrame.buf.
     *
     * This array is always allocated using av_malloc() by whoever constructs
     * the frame. It is freed in av_frame_unref().
     */
    AVBufferRef **extended_buf;
    /**
     * Number of elements in extended_buf.
     */
    int        nb_extended_buf;

    AVFrameSideData **side_data;
    int            nb_side_data;

/**
 * @defgroup lavu_frame_flags AV_FRAME_FLAGS
 * @ingroup lavu_frame
 * Flags describing additional frame properties.
 *
 * @{
 */

/**
 * The frame data may be corrupted, e.g. due to decoding errors.
 */
#define AV_FRAME_FLAG_CORRUPT       (1 << 0)
/**
 * A flag to mark the frames which need to be decoded, but shouldn't be output.
 */
#define AV_FRAME_FLAG_DISCARD   (1 << 2)
/**
 * @}
 */

    /**
     * Frame flags, a combination of @ref lavu_frame_flags
     */
    int flags;

    /**
     * MPEG vs JPEG YUV range.
     * - encoding: Set by user
     * - decoding: Set by libavcodec
     */
    enum AVColorRange color_range;

    enum AVColorPrimaries color_primaries;

    enum AVColorTransferCharacteristic color_trc;

    /**
     * YUV colorspace type.
     * - encoding: Set by user
     * - decoding: Set by libavcodec
     */
    enum AVColorSpace colorspace;

    enum AVChromaLocation chroma_location;

    /**
     * frame timestamp estimated using various heuristics, in stream time base
     * - encoding: unused
     * - decoding: set by libavcodec, read by user.
     */
    int64_t best_effort_timestamp;

    /**
     * reordered pos from the last AVPacket that has been input into the decoder
     * - encoding: unused
     * - decoding: Read by user.
     */
    int64_t pkt_pos;

#if FF_API_PKT_DURATION
    /**
     * duration of the corresponding packet, expressed in
     * AVStream->time_base units, 0 if unknown.
     * - encoding: unused
     * - decoding: Read by user.
     *
     * @deprecated use duration instead
     */
    attribute_deprecated
    int64_t pkt_duration;
#endif

    /**
     * metadata.
     * - encoding: Set by user.
     * - decoding: Set by libavcodec.
     */
    AVDictionary *metadata;

    /**
     * decode error flags of the frame, set to a combination of
     * FF_DECODE_ERROR_xxx flags if the decoder produced a frame, but there
     * were errors during the decoding.
     * - encoding: unused
     * - decoding: set by libavcodec, read by user.
     */
    int decode_error_flags;
#define FF_DECODE_ERROR_INVALID_BITSTREAM   1
#define FF_DECODE_ERROR_MISSING_REFERENCE   2
#define FF_DECODE_ERROR_CONCEALMENT_ACTIVE  4
#define FF_DECODE_ERROR_DECODE_SLICES       8

#if FF_API_OLD_CHANNEL_LAYOUT
    /**
     * number of audio channels, only used for audio.
     * - encoding: unused
     * - decoding: Read by user.
     * @deprecated use ch_layout instead
     */
    attribute_deprecated
    int channels;
#endif

    /**
     * size of the corresponding packet containing the compressed
     * frame.
     * It is set to a negative value if unknown.
     * - encoding: unused
     * - decoding: set by libavcodec, read by user.
     */
    int pkt_size;

    /**
     * For hwaccel-format frames, this should be a reference to the
     * AVHWFramesContext describing the frame.
     */
    AVBufferRef *hw_frames_ctx;

    /**
     * AVBufferRef for free use by the API user. FFmpeg will never check the
     * contents of the buffer ref. FFmpeg calls av_buffer_unref() on it when
     * the frame is unreferenced. av_frame_copy_props() calls create a new
     * reference with av_buffer_ref() for the target frame's opaque_ref field.
     *
     * This is unrelated to the opaque field, although it serves a similar
     * purpose.
     */
    AVBufferRef *opaque_ref;

    /**
     * @anchor cropping
     * @name Cropping
     * Video frames only. The number of pixels to discard from the
     * top/bottom/left/right border of the frame to obtain the sub-rectangle of
     * the frame intended for presentation.
     * @{
     */
    size_t crop_top;
    size_t crop_bottom;
    size_t crop_left;
    size_t crop_right;
    /**
     * @}
     */

    /**
     * AVBufferRef for internal use by a single libav* library.
     * Must not be used to transfer data between libraries.
     * Has to be NULL when ownership of the frame leaves the respective library.
     *
     * Code outside the FFmpeg libs should never check or change the contents of the buffer ref.
     *
     * FFmpeg calls av_buffer_unref() on it when the frame is unreferenced.
     * av_frame_copy_props() calls create a new reference with av_buffer_ref()
     * for the target frame's private_ref field.
     */
    AVBufferRef *private_ref;

    /**
     * Channel layout of the audio data.
     */
    AVChannelLayout ch_layout;

    /**
     * Duration of the frame, in the same units as pts. 0 if unknown.
     */
    int64_t duration;
} AVFrame;

II. Core functions: av_frame_alloc() and av_frame_get_buffer()

AVFrame*  avframe1 = av_frame_alloc();

Looking at its implementation, av_frame_alloc() only allocates the AVFrame structure itself; none of the internal buffers the frame needs are allocated yet.

int av_frame_get_buffer(AVFrame *frame, int align);

Allocates buffers for the internal members of the frame passed in.

First parameter: the frame to allocate buffers for.

Second parameter: the alignment used for the allocation. If 0 is passed, a default is chosen based on the current CPU; in a test on 32-bit Windows this default was 32. Normally you pass 0 and use the default.

/**
 * Allocate new buffer(s) for audio or video data.
 *
 * The following fields must be set on frame before calling this function:
 * - format (pixel format for video, sample format for audio)
 * - width and height for video
 * - nb_samples and ch_layout for audio
 *
 * This function will fill AVFrame.data and AVFrame.buf arrays and, if
 * necessary, allocate and fill AVFrame.extended_data and AVFrame.extended_buf.
 * For planar formats, one buffer will be allocated for each plane.
 *
 * @warning: if frame already has been allocated, calling this function will
 *           leak memory. In addition, undefined behavior can occur in certain
 *           cases.
 *
 * @param frame frame in which to store the new buffers.
 * @param align Required buffer size alignment. If equal to 0, alignment will be
 *              chosen automatically for the current CPU. It is highly
 *              recommended to pass 0 here unless you know what you are doing.
 *
 * @return 0 on success, a negative AVERROR on error.
 */
int av_frame_get_buffer(AVFrame *frame, int align);

Internal implementation:

As you can see, for video it first checks whether width and height are > 0.

That is, before calling this function for video, the frame's width and height must already have been set.

int av_frame_get_buffer(AVFrame *frame, int align)
{
    if (frame->format < 0)
        return AVERROR(EINVAL);

FF_DISABLE_DEPRECATION_WARNINGS
    if (frame->width > 0 && frame->height > 0)
        return get_video_buffer(frame, align);
    else if (frame->nb_samples > 0 &&
             (av_channel_layout_check(&frame->ch_layout)
#if FF_API_OLD_CHANNEL_LAYOUT
              || frame->channel_layout || frame->channels > 0
#endif
             ))
        return get_audio_buffer(frame, align);
FF_ENABLE_DEPRECATION_WARNINGS

    return AVERROR(EINVAL);
}

So what goes wrong if we do not set everything? Let's try it.

Set only width and height and run.

It still fails: Invalid argument

void testAVframe() {
    cout << avcodec_configuration() << endl;
    AVFrame*  avframe1 = av_frame_alloc();
    cout << "debug1...." << endl;

    avframe1->width = 300;
    avframe1->height = 600;
    int ret = 0;
    ret = av_frame_get_buffer(avframe1, 0);
    if (ret < 0 ) {
        //On failure a negative value is returned; av_strerror turns it into a readable message
        char buf[1024] = { 0 };
        av_strerror(ret, buf, sizeof(buf));
        cout << buf << endl;
    }

    cout << "debug2......" << endl;
}

Next, look at the concrete function in the source: get_video_buffer(frame, align).

The source is in frame.c. We can see that av_pix_fmt_desc_get(frame->format) returns a desc; if that desc is NULL, the function also returns an error. In other words, frame->format must be set as well:

static int get_video_buffer(AVFrame *frame, int align)
{
    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
    int ret, i, padded_height, total_size;
    int plane_padding = FFMAX(16 + 16/*STRIDE_ALIGN*/, align);
    ptrdiff_t linesizes[4];
    size_t sizes[4];

    if (!desc)
        return AVERROR(EINVAL);

To verify this, set the frame's format and test again: it now succeeds. Then print the key fields of avframe1.

void testAVframe() {
    cout << avcodec_configuration() << endl;
    AVFrame*  avframe1 = av_frame_alloc();
    cout << "debug1...." << endl;

    avframe1->width = 300;
    avframe1->height = 600;

    //Set format to AV_PIX_FMT_YUV420P, then test again
    avframe1->format = AV_PIX_FMT_YUV420P;
    int ret = 0;
    ret = av_frame_get_buffer(avframe1, 0);
    if (ret < 0 ) {
        //On failure a negative value is returned; av_strerror turns it into a readable message
        char buf[1024] = { 0 };
        av_strerror(ret, buf, sizeof(buf));
        cout << buf << endl;
    }

    cout << "debug2......" << endl;
}

Verifying the byte counts in linesize[x]

void testAVframe() {
    cout << avcodec_configuration() << endl;
    AVFrame*  avframe1 = av_frame_alloc();
    cout << "debug1...." << endl;

    //Setting only width and height makes av_frame_get_buffer fail;
    //format must be set as well. Vary width (640/641) and format
    //(AV_PIX_FMT_YUV420P / AV_PIX_FMT_RGB24) to reproduce the four cases printed below.
    avframe1->width = 641 ;
    avframe1->height = 111;
    
    avframe1->format = AV_PIX_FMT_RGB24;
    int ret = 0;
    ret = av_frame_get_buffer(avframe1, 0);
    if (ret < 0 ) {
        //On failure a negative value is returned; av_strerror turns it into a readable message
        char buf[1024] = { 0 };
        av_strerror(ret, buf, sizeof(buf));
        cout << buf << endl;
    }

    cout << "debug2......" << endl;

    // After av_frame_get_buffer succeeds, print the resulting line sizes.
    // The values below were collected from four separate runs, with the
    // width/format combination noted in each label.
    cout << " 640*111 yuv420p case, avframe1->linesize[0] = " << avframe1->linesize[0] << endl; // 640
    cout << " 640*111 yuv420p case, avframe1->linesize[1] = " << avframe1->linesize[1] << endl; // 320
    cout << " 640*111 yuv420p case, avframe1->linesize[2] = " << avframe1->linesize[2] << endl; // 320

    cout << " 641*111 yuv420p case, avframe1->linesize[0] = " << avframe1->linesize[0] << endl; // 672: alignment rounds 641 up to the next multiple of 32
    cout << " 641*111 yuv420p case, avframe1->linesize[1] = " << avframe1->linesize[1] << endl; // 352: the 336-byte chroma row is rounded up to a multiple of 32
    cout << " 641*111 yuv420p case, avframe1->linesize[2] = " << avframe1->linesize[2] << endl; // 352

    cout << " 640*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[0] = " << avframe1->linesize[0] << endl; // 1920: 640 is already a multiple of 32, and 640 * 3 bytes per RGB24 pixel = 1920
    cout << " 640*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[1] = " << avframe1->linesize[1] << endl; // 0
    cout << " 640*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[2] = " << avframe1->linesize[2] << endl; // 0

    cout << " 641*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[0] = " << avframe1->linesize[0] << endl; // 2016: 641 is not a multiple of 32, so it is rounded up to 672, and 672 * 3 = 2016
    cout << " 641*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[1] = " << avframe1->linesize[1] << endl; // 0
    cout << " 641*111 AV_PIX_FMT_RGB24 case, avframe1->linesize[2] = " << avframe1->linesize[2] << endl; // 0

}

III. Core functions: av_frame_ref(), av_frame_unref(AVFrame *frame), av_frame_free(AVFrame **frame), and av_buffer_get_ref_count(const AVBufferRef *buf)

int av_frame_ref(AVFrame *dst, const AVFrame *src);

Reference count +1 and reference count -1:

void testAVframe1() {
    int ret = 0;
    AVFrame* avframe1 = av_frame_alloc();
    avframe1->width = 641;
    avframe1->height = 111;
    avframe1->format = AV_PIX_FMT_YUV420P;
    ret = av_frame_get_buffer(avframe1, 0);
    if (ret < 0) {
        //On failure a negative value is returned; av_strerror turns it into a readable message
        char buf[1024] = { 0 };
        av_strerror(ret, buf, sizeof(buf));
        cout << buf << endl;
    }
    //Question: no actual picture content has been written into the frame yet,
    //so why does buf[0] already have a value? Because av_frame_get_buffer
    //allocated the buffers, buf[0] already references that (still blank) memory.
    if (avframe1->buf[0])
    {
        //av_buffer_get_ref_count prints a reference count of 1
        cout << "frame1 ref count = " <<
            av_buffer_get_ref_count(avframe1->buf[0]); // thread-safe
        cout << endl;
    }

    AVFrame* avframe2 = av_frame_alloc();


    ret = av_frame_ref(avframe2, avframe1);
    if (ret < 0) {
        //On failure a negative value is returned; av_strerror turns it into a readable message
        char buf[1024] = { 0 };
        av_strerror(ret, buf, sizeof(buf));
        cout << buf << endl;
    }

    if (avframe1->buf[0])
    {
        //av_buffer_get_ref_count now prints a reference count of 2
        cout << "frame1 ref count = " <<
            av_buffer_get_ref_count(avframe1->buf[0]); // thread-safe
        cout << endl;
    }
    if (avframe2->buf[0])
    {
        //av_buffer_get_ref_count now prints a reference count of 2
        cout << "frame2 ref count = " <<
            av_buffer_get_ref_count(avframe2->buf[0]); // thread-safe
        cout << endl;
    }



    cout << "debug2...." << endl;

    av_frame_unref(avframe2);
    if (avframe1->buf[0])
    {
        //av_buffer_get_ref_count prints a reference count of 1 again
        cout << "frame111111 ref count = " <<
            av_buffer_get_ref_count(avframe1->buf[0]); // thread-safe
        cout << endl;
    }

    //At this point av_frame_unref(avframe2) has only released avframe2's
    //internal data; the avframe2 struct itself still exists
    if (avframe2->buf[0])
    {
        //Not reached: avframe2->buf[0] is NULL after av_frame_unref
        cout << "frame222222 ref count = " <<
            av_buffer_get_ref_count(avframe2->buf[0]); // thread-safe
        cout << endl;
    }
    av_frame_free(&avframe2);


    cout << "debug3...." << endl;


    av_frame_free(&avframe1);
}
