【FFmpeg】The avformat_find_stream_info function


【FFmpeg】avformat_find_stream_info

  • 1.avformat_find_stream_info
    • 1.1 Initialize the parser (av_parser_init)
    • 1.2 Find a probing decoder (find_probe_decoder)
    • 1.3 Try to open the decoder (avcodec_open2)
    • 1.4 Read a frame (read_frame_internal)
      • 1.4.1 Read a packet (ff_read_packet)
      • 1.4.2 Parse a packet (parse_packet)
    • 1.5 Try to decode a frame (try_decode_frame)
    • 1.6 Check codec parameters (has_codec_parameters)
    • 1.7 Estimate timings (estimate_timings)
  • 2. Summary

Reference:
A brief analysis of FFmpeg source code: avformat_find_stream_info()

Example projects:
【FFmpeg】Using the ffmpeg libraries for H.264 software encoding
【FFmpeg】Using the ffmpeg libraries for H.264 software decoding
【FFmpeg】Using the ffmpeg libraries for RTMP streaming (push and pull)
【FFmpeg】Using the ffmpeg libraries for decoding and rendering with SDL2

Workflow analysis:
【FFmpeg】A brief analysis of the main functions in the encoding path
【FFmpeg】A brief analysis of the main functions in the decoding path

Function analysis:
【FFmpeg】The avformat_open_input function

The FFmpeg version is 7.0. The call relationships among the internal functions of avformat_find_stream_info are shown below.
(Figure: call graph of the functions invoked inside avformat_find_stream_info)

1.avformat_find_stream_info

The main job of this function is to take the AVFormatContext obtained earlier and, together with the input options, analyze the information of every stream (AVStream). The main steps are:
(1) Determine the maximum analysis duration
(2) Obtain the codec information of each stream
 (2.1) Initialize the parser (av_parser_init)
 (2.2) Find a probing decoder (find_probe_decoder)
 (2.3) Try to open the decoder (avcodec_open2)
(3) Obtain the frame information
 (3.1) Check the codec and some extra information
 (3.2) Read a frame (read_frame_internal)
 (3.3) Try to decode a frame (try_decode_frame)
(4) Check whether the end of the file has been reached
(5) Compute the time-related information of the streams
(6) Final cleanup

This function touches many fields of AVStream and is fairly complex, so only the main flow is recorded here; the exact meaning of quite a few variables is still unclear to me. My understanding is that, for a stream, the main concerns are: (1) the stream format, which has already been obtained in avformat_open_input; (2) timing: the stream duration (duration), the start time (start_time), FPS, DTS and so on; (3) the data content: pix_fmt, width and so on. Among all these steps, the most important one is the decoding step: a frame is actually decoded here, because that is the only way to obtain frame-level information.
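
Before diving into the source, here is a minimal sketch of how this function is typically driven from user code (the file name and option values are placeholders of my own, not taken from this article): it opens an input, lowers probesize/analyzeduration before probing, and then calls avformat_find_stream_info.

#include <libavformat/avformat.h>
#include <libavutil/dict.h>

int main(void)
{
    AVFormatContext *ic = NULL;
    AVDictionary *opts = NULL;
    int ret;

    // Example values only: limit probing to roughly 1 MB and 2 seconds
    av_dict_set(&opts, "probesize", "1000000", 0);
    av_dict_set(&opts, "analyzeduration", "2000000", 0); // in AV_TIME_BASE units (us)

    ret = avformat_open_input(&ic, "input.flv", NULL, &opts);
    av_dict_free(&opts);
    if (ret < 0)
        return 1;

    // Fill in the per-stream information analyzed in this article
    ret = avformat_find_stream_info(ic, NULL);
    if (ret < 0) {
        avformat_close_input(&ic);
        return 1;
    }

    av_dump_format(ic, 0, "input.flv", 0); // prints streams, duration, bit rate, ...
    avformat_close_input(&ic);
    return 0;
}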

int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options)
{
    FFFormatContext *const si = ffformatcontext(ic);
    int count = 0, ret = 0;
    int64_t read_size;
    AVPacket *pkt1 = si->pkt;
    int64_t old_offset  = avio_tell(ic->pb);
    // new streams might appear, no options for those
    int orig_nb_streams = ic->nb_streams;
    int flush_codecs;
    int64_t max_analyze_duration = ic->max_analyze_duration;
    int64_t max_stream_analyze_duration;
    int64_t max_subtitle_analyze_duration;
    int64_t probesize = ic->probesize;
    int eof_reached = 0;
    int *missing_streams = av_opt_ptr(ic->iformat->priv_class, ic->priv_data, "missing_streams");

    flush_codecs = probesize > 0;

    av_opt_set_int(ic, "skip_clear", 1, AV_OPT_SEARCH_CHILDREN);
    // ----- 1. Determine the maximum analysis duration ----- //
    max_stream_analyze_duration = max_analyze_duration;
    max_subtitle_analyze_duration = max_analyze_duration;
    // Compute the maximum analysis duration
    // Subtitles: 30 * AV_TIME_BASE
    // FLV: 90 * AV_TIME_BASE
    // MPEG: 7 * AV_TIME_BASE
    if (!max_analyze_duration) {
        max_stream_analyze_duration =
        max_analyze_duration        = 5*AV_TIME_BASE;
        max_subtitle_analyze_duration = 30*AV_TIME_BASE;
        if (!strcmp(ic->iformat->name, "flv"))
            max_stream_analyze_duration = 90*AV_TIME_BASE;
        if (!strcmp(ic->iformat->name, "mpeg") || !strcmp(ic->iformat->name, "mpegts"))
            max_stream_analyze_duration = 7*AV_TIME_BASE;
    }
    // If probing was performed, log the current offset
    if (ic->pb) {
        FFIOContext *const ctx = ffiocontext(ic->pb);
        av_log(ic, AV_LOG_DEBUG, "Before avformat_find_stream_info() pos: %"PRId64" bytes read:%"PRId64" seeks:%d nb_streams:%d\n",
               avio_tell(ic->pb), ctx->bytes_read, ctx->seek_count, ic->nb_streams);
    }
    // Usually nb_streams is 0 before avformat_find_stream_info is called (e.g. for
    // header-less formats such as FLV); it is filled in inside avformat_find_stream_info
    // ----- 2. Obtain the codec information of each stream ----- //
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        const AVCodec *codec;
        AVDictionary *thread_opt = NULL;
        AVStream *const st  = ic->streams[i];
        FFStream *const sti = ffstream(st);
        AVCodecContext *const avctx = sti->avctx;

        /* check if the caller has overridden the codec id */
        // only for the split stuff
        if (!sti->parser && !(ic->flags & AVFMT_FLAG_NOPARSE) && sti->request_probe <= 0) {
            // Initialize the parser
            sti->parser = av_parser_init(st->codecpar->codec_id);
            if (sti->parser) {
                if (sti->need_parsing == AVSTREAM_PARSE_HEADERS) {
                    sti->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
                } else if (sti->need_parsing == AVSTREAM_PARSE_FULL_RAW) {
                    sti->parser->flags |= PARSER_FLAG_USE_CODEC_TS;
                }
            } else if (sti->need_parsing) {
                av_log(ic, AV_LOG_VERBOSE, "parser not found for codec "
                       "%s, packets or times may be invalid.\n",
                       avcodec_get_name(st->codecpar->codec_id));
            }
        }
        // Copy codecpar into avctx
        ret = avcodec_parameters_to_context(avctx, st->codecpar);
        if (ret < 0)
            goto find_stream_info_err;
        if (sti->request_probe <= 0)
            sti->avctx_inited = 1;
        // Find a probing decoder
        codec = find_probe_decoder(ic, st, st->codecpar->codec_id);

        /* Force thread count to 1 since the H.264 decoder will not extract
         * SPS and PPS to extradata during multi-threaded decoding. */
        av_dict_set(options ? &options[i] : &thread_opt, "threads", "1", 0);
        /* Force lowres to 0. The decoder might reduce the video size by the
         * lowres factor, and we don't want that propagated to the stream's
         * codecpar */
        av_dict_set(options ? &options[i] : &thread_opt, "lowres", "0", 0);
        // Apply the codec whitelist
        if (ic->codec_whitelist)
            av_dict_set(options ? &options[i] : &thread_opt, "codec_whitelist", ic->codec_whitelist, 0);

        // Try to just open decoders, in case this is enough to get parameters.
        // Also ensure that subtitle_header is properly set.
        if (!has_codec_parameters(st, NULL) && sti->request_probe <= 0 ||
            st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE) {
            if (codec && !avctx->codec)
                // Try to open the codec
                if (avcodec_open2(avctx, codec, options ? &options[i] : &thread_opt) < 0)
                    av_log(ic, AV_LOG_WARNING,
                           "Failed to open codec in %s\n", __func__);
        }
        if (!options)
            av_dict_free(&thread_opt);
    }

    read_size = 0;
    // ----- 3. Obtain the frame information ----- //
    for (;;) {
        const AVPacket *pkt;
        AVStream *st;
        FFStream *sti;
        AVCodecContext *avctx;
        int analyzed_all_streams;
        unsigned i;
        // Check whether the user requested to abort the blocking functions via the interrupt callback
        if (ff_check_interrupt(&ic->interrupt_callback)) {
            ret = AVERROR_EXIT;
            av_log(ic, AV_LOG_DEBUG, "interrupted\n");
            break;
        }

        /* check if one codec still needs to be handled */
        for (i = 0; i < ic->nb_streams; i++) {
            AVStream *const st  = ic->streams[i];
            FFStream *const sti = ffstream(st);
            int fps_analyze_framecount = 20;
            int count;
            // If the codec parameters are still missing, leave the loop
            if (!has_codec_parameters(st, NULL))
                break;
            /* If the timebase is coarse (like the usual millisecond precision
             * of mkv), we need to analyze more frames to reliably arrive at
             * the correct fps. */
            if (av_q2d(st->time_base) > 0.0005) // av_q2d: convert an AVRational to double
                fps_analyze_framecount *= 2;
            // Is the time base unreliable?
            // This is a heuristic that balances quickly accepting the values from the header
            // against doing some extra checks.
            // Old DivX and Xvid often use nonsense time bases such as 1 fps or 2 fps;
            // MPEG-2 commonly misuses the field-repeat flags to store different frame rates;
            // some "variable" fps files also need to be detected.
            if (!tb_unreliable(ic, st))
                fps_analyze_framecount = 0;
            if (ic->fps_probe_size >= 0)
                fps_analyze_framecount = ic->fps_probe_size;
            // #define AV_DISPOSITION_ATTACHED_PIC         (1 << 10)
            // The stream is stored in the file as an attached picture / "cover art"
            // (e.g. an APIC frame in ID3v2). The first (and usually only) packet associated
            // with it is returned among the first few packets read from the file, unless
            // seeking takes place. It can also be accessed at any time in AVStream.attached_pic.
            if (st->disposition & AV_DISPOSITION_ATTACHED_PIC)
                fps_analyze_framecount = 0;
            /* variable fps and no guess at the real fps */
            count = (ic->iformat->flags & AVFMT_NOTIMESTAMPS) ?
                       sti->info->codec_info_duration_fields/2 :
                       sti->info->duration_count;
            if (!(st->r_frame_rate.num && st->avg_frame_rate.num) &&
                st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
                if (count < fps_analyze_framecount)
                    break;
            }
            // Look at the first 3 frames if there is evidence of frame delay
            // but the decoder delay is not set.
            if (sti->info->frame_delay_evidence && count < 2 && sti->avctx->has_b_frames == 0)
                break;
            if (!sti->avctx->extradata &&
                (!sti->extract_extradata.inited || sti->extract_extradata.bsf) &&
                extract_extradata_check(st)) // check the extradata
                break;
            if (sti->first_dts == AV_NOPTS_VALUE &&
                (!(ic->iformat->flags & AVFMT_NOTIMESTAMPS) || sti->need_parsing == AVSTREAM_PARSE_FULL_RAW) &&
                sti->codec_info_nb_frames < ((st->disposition & AV_DISPOSITION_ATTACHED_PIC) ? 1 : ic->max_ts_probe) &&
                (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO ||
                 st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO))
                break;
        }
        analyzed_all_streams = 0;
        // missing_streams is assigned elsewhere; for the FLV format, for example, it is set as
        // flv->missing_streams = flags & (FLV_HEADER_FLAG_HASVIDEO | FLV_HEADER_FLAG_HASAUDIO);
        // If no streams are missing:
        if (!missing_streams || !*missing_streams)
            if (i == ic->nb_streams) {
                analyzed_all_streams = 1;
                /* NOTE: If the format has no header, then we need to read some
                 * packets to get most of the streams, so we cannot stop here. */
                if (!(ic->ctx_flags & AVFMTCTX_NOHEADER)) {
                    /* If we found the info for all the codecs, we can stop. */
                    ret = count;
                    av_log(ic, AV_LOG_DEBUG, "All info found\n");
                    flush_codecs = 0;
                    break;
                }
            }
        /* We did not get all the codec info, but we read too much data. */
        if (read_size >= probesize) {
            ret = count;
            av_log(ic, AV_LOG_DEBUG,
                   "Probe buffer size limit of %"PRId64" bytes reached\n", probesize);
            for (unsigned i = 0; i < ic->nb_streams; i++) {
                AVStream *const st  = ic->streams[i];
                FFStream *const sti = ffstream(st);
                if (!st->r_frame_rate.num &&
                    sti->info->duration_count <= 1 &&
                    st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO &&
                    strcmp(ic->iformat->name, "image2"))
                    av_log(ic, AV_LOG_WARNING,
                           "Stream #%d: not enough frames to estimate rate; "
                           "consider increasing probesize\n", i);
            }
            break;
        }

        /* NOTE: A new stream can be added there if no header in file
         * (AVFMTCTX_NOHEADER). */
        // Read one frame's worth of data
        ret = read_frame_internal(ic, pkt1);
        if (ret == AVERROR(EAGAIN))
            continue;

        if (ret < 0) {
            /* EOF or error*/
            eof_reached = 1;
            break;
        }

        if (!(ic->flags & AVFMT_FLAG_NOBUFFER)) {
            // Append the AVPacket to the packet buffer list
            ret = avpriv_packet_list_put(&si->packet_buffer,
                                         pkt1, NULL, 0);
            if (ret < 0)
                goto unref_then_goto_end;

            pkt = &si->packet_buffer.tail->pkt;
        } else {
            pkt = pkt1;
        }

        st  = ic->streams[pkt->stream_index];
        sti = ffstream(st);
        if (!(st->disposition & AV_DISPOSITION_ATTACHED_PIC))
            read_size += pkt->size;

        avctx = sti->avctx;
        if (!sti->avctx_inited) {
            ret = avcodec_parameters_to_context(avctx, st->codecpar);
            if (ret < 0)
                goto unref_then_goto_end;
            sti->avctx_inited = 1;
        }

        if (pkt->dts != AV_NOPTS_VALUE && sti->codec_info_nb_frames > 1) {
            /* check for non-increasing dts */
            if (sti->info->fps_last_dts != AV_NOPTS_VALUE &&
                sti->info->fps_last_dts >= pkt->dts) {
                av_log(ic, AV_LOG_DEBUG,
                       "Non-increasing DTS in stream %d: packet %d with DTS "
                       "%"PRId64", packet %d with DTS %"PRId64"\n",
                       st->index, sti->info->fps_last_dts_idx,
                       sti->info->fps_last_dts, sti->codec_info_nb_frames,
                       pkt->dts);
                sti->info->fps_first_dts =
                sti->info->fps_last_dts  = AV_NOPTS_VALUE;
            }
            /* Check for a discontinuity in dts. If the difference in dts
             * is more than 1000 times the average packet duration in the
             * sequence, we treat it as a discontinuity. */
            if (sti->info->fps_last_dts != AV_NOPTS_VALUE &&
                sti->info->fps_last_dts_idx > sti->info->fps_first_dts_idx &&
                (pkt->dts - (uint64_t)sti->info->fps_last_dts) / 1000 >
                (sti->info->fps_last_dts     - (uint64_t)sti->info->fps_first_dts) /
                (sti->info->fps_last_dts_idx - sti->info->fps_first_dts_idx)) {
                av_log(ic, AV_LOG_WARNING,
                       "DTS discontinuity in stream %d: packet %d with DTS "
                       "%"PRId64", packet %d with DTS %"PRId64"\n",
                       st->index, sti->info->fps_last_dts_idx,
                       sti->info->fps_last_dts, sti->codec_info_nb_frames,
                       pkt->dts);
                sti->info->fps_first_dts =
                sti->info->fps_last_dts  = AV_NOPTS_VALUE;
            }

            /* update stored dts values */
            if (sti->info->fps_first_dts == AV_NOPTS_VALUE) {
                sti->info->fps_first_dts     = pkt->dts;
                sti->info->fps_first_dts_idx = sti->codec_info_nb_frames;
            }
            sti->info->fps_last_dts     = pkt->dts;
            sti->info->fps_last_dts_idx = sti->codec_info_nb_frames;
        }
        if (sti->codec_info_nb_frames > 1) {
            int64_t t = 0;
            int64_t limit;

            if (st->time_base.den > 0)
                t = av_rescale_q(sti->info->codec_info_duration, st->time_base, AV_TIME_BASE_Q);
            if (st->avg_frame_rate.num > 0)
                t = FFMAX(t, av_rescale_q(sti->codec_info_nb_frames, av_inv_q(st->avg_frame_rate), AV_TIME_BASE_Q));

            if (   t == 0
                && sti->codec_info_nb_frames > 30
                && sti->info->fps_first_dts != AV_NOPTS_VALUE
                && sti->info->fps_last_dts  != AV_NOPTS_VALUE) {
                int64_t dur = av_sat_sub64(sti->info->fps_last_dts, sti->info->fps_first_dts);
                t = FFMAX(t, av_rescale_q(dur, st->time_base, AV_TIME_BASE_Q));
            }

            if (analyzed_all_streams)                                limit = max_analyze_duration;
            else if (avctx->codec_type == AVMEDIA_TYPE_SUBTITLE) limit = max_subtitle_analyze_duration;
            else                                                     limit = max_stream_analyze_duration;

            if (t >= limit) {
                av_log(ic, AV_LOG_VERBOSE, "max_analyze_duration %"PRId64" reached at %"PRId64" microseconds st:%d\n",
                       limit,
                       t, pkt->stream_index);
                if (ic->flags & AVFMT_FLAG_NOBUFFER)
                    av_packet_unref(pkt1);
                break;
            }
            if (pkt->duration > 0) {
                const int fields = sti->codec_desc && (sti->codec_desc->props & AV_CODEC_PROP_FIELDS);
                if (avctx->codec_type == AVMEDIA_TYPE_SUBTITLE && pkt->pts != AV_NOPTS_VALUE && st->start_time != AV_NOPTS_VALUE && pkt->pts >= st->start_time
                    && (uint64_t)pkt->pts - st->start_time < INT64_MAX
                ) {
                    sti->info->codec_info_duration = FFMIN(pkt->pts - st->start_time, sti->info->codec_info_duration + pkt->duration);
                } else
                    sti->info->codec_info_duration += pkt->duration;
                sti->info->codec_info_duration_fields += sti->parser && sti->need_parsing && fields
                                                         ? sti->parser->repeat_pict + 1 : 2;
            }
        }
        if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) {
#if FF_API_R_FRAME_RATE
            ff_rfps_add_frame(ic, st, pkt->dts);
#endif
            if (pkt->dts != pkt->pts && pkt->dts != AV_NOPTS_VALUE && pkt->pts != AV_NOPTS_VALUE)
                sti->info->frame_delay_evidence = 1;
        }
        if (!sti->avctx->extradata) {
            ret = extract_extradata(si, st, pkt);
            if (ret < 0)
                goto unref_then_goto_end;
        }

        /* If still no information, we try to open the codec and to
         * decompress the frame. We try to avoid that in most cases as
         * it takes longer and uses more memory. For MPEG-4, we need to
         * decompress for QuickTime.
         *
         * If AV_CODEC_CAP_CHANNEL_CONF is set this will force decoding of at
         * least one frame of codec data, this makes sure the codec initializes
         * the channel configuration and does not only trust the values from
         * the container. */
        try_decode_frame(ic, st, pkt,
                         (options && i < orig_nb_streams) ? &options[i] : NULL);

        if (ic->flags & AVFMT_FLAG_NOBUFFER)
            av_packet_unref(pkt1);

        sti->codec_info_nb_frames++;
        count++;
    }
    // ----- 4. Check whether the end of the file was reached ----- //
    if (eof_reached) {
        for (unsigned stream_index = 0; stream_index < ic->nb_streams; stream_index++) {
            AVStream *const st = ic->streams[stream_index];
            AVCodecContext *const avctx = ffstream(st)->avctx;
            if (!has_codec_parameters(st, NULL)) {
                const AVCodec *codec = find_probe_decoder(ic, st, st->codecpar->codec_id);
                if (codec && !avctx->codec) {
                    AVDictionary *opts = NULL;
                    if (ic->codec_whitelist)
                        av_dict_set(&opts, "codec_whitelist", ic->codec_whitelist, 0);
                    if (avcodec_open2(avctx, codec, (options && stream_index < orig_nb_streams) ? &options[stream_index] : &opts) < 0)
                        av_log(ic, AV_LOG_WARNING,
                               "Failed to open codec in %s\n", __func__);
                    av_dict_free(&opts);
                }
            }

            // EOF already reached while reading the stream above.
            // So continue with reoordering DTS with whatever delay we have.
            if (si->packet_buffer.head && !has_decode_delay_been_guessed(st)) {
                update_dts_from_pts(ic, stream_index, si->packet_buffer.head);
            }
        }
    }
    // Flush the decoders
    if (flush_codecs) {
        AVPacket *empty_pkt = si->pkt;
        int err = 0;
        av_packet_unref(empty_pkt);

        for (unsigned i = 0; i < ic->nb_streams; i++) {
            AVStream *const st  = ic->streams[i];
            FFStream *const sti = ffstream(st);

            /* flush the decoders */
            if (sti->info->found_decoder == 1) {
                err = try_decode_frame(ic, st, empty_pkt,
                                        (options && i < orig_nb_streams)
                                        ? &options[i] : NULL);

                if (err < 0) {
                    av_log(ic, AV_LOG_INFO,
                        "decoding for stream %d failed\n", st->index);
                }
            }
        }
    }
    // ----- 5. Compute the time-related information of the streams ----- //
    // Compute the frame rates
    ff_rfps_calculate(ic);

    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st  = ic->streams[i];
        FFStream *const sti = ffstream(st);
        AVCodecContext *const avctx = sti->avctx;

        if (avctx->codec_type == AVMEDIA_TYPE_VIDEO) {
            if (avctx->codec_id == AV_CODEC_ID_RAWVIDEO && !avctx->codec_tag && !avctx->bits_per_coded_sample) {
                // Returns the fourCC code associated with the pixel format pix_fmt, or 0 if no matching fourCC is found
                uint32_t tag= avcodec_pix_fmt_to_codec_tag(avctx->pix_fmt);
                // Look up the pixel format
                if (avpriv_pix_fmt_find(PIX_FMT_LIST_RAW, tag) == avctx->pix_fmt)
                    avctx->codec_tag= tag;
            }

            /* estimate average framerate if not set by demuxer */
            if (sti->info->codec_info_duration_fields &&
                !st->avg_frame_rate.num &&
                sti->info->codec_info_duration) {
                int best_fps      = 0;
                double best_error = 0.01;
                AVRational codec_frame_rate = avctx->framerate;

                if (sti->info->codec_info_duration        >= INT64_MAX / st->time_base.num / 2||
                    sti->info->codec_info_duration_fields >= INT64_MAX / st->time_base.den ||
                    sti->info->codec_info_duration        < 0)
                    continue;
                av_reduce(&st->avg_frame_rate.num, &st->avg_frame_rate.den,
                          sti->info->codec_info_duration_fields * (int64_t) st->time_base.den,
                          sti->info->codec_info_duration * 2 * (int64_t) st->time_base.num, 60000);

                /* Round guessed framerate to a "standard" framerate if it's
                 * within 1% of the original estimate. */
                for (int j = 0; j < MAX_STD_TIMEBASES; j++) {
                    AVRational std_fps = { get_std_framerate(j), 12 * 1001 };
                    double error       = fabs(av_q2d(st->avg_frame_rate) /
                                              av_q2d(std_fps) - 1);

                    if (error < best_error) {
                        best_error = error;
                        best_fps   = std_fps.num;
                    }

                    if (si->prefer_codec_framerate && codec_frame_rate.num > 0 && codec_frame_rate.den > 0) {
                        error       = fabs(av_q2d(codec_frame_rate) /
                                           av_q2d(std_fps) - 1);
                        if (error < best_error) {
                            best_error = error;
                            best_fps   = std_fps.num;
                        }
                    }
                }
                if (best_fps)
                    av_reduce(&st->avg_frame_rate.num, &st->avg_frame_rate.den,
                              best_fps, 12 * 1001, INT_MAX);
            }
            if (!st->r_frame_rate.num) {
                const AVCodecDescriptor *desc = sti->codec_desc;
                AVRational mul = (AVRational){ desc && (desc->props & AV_CODEC_PROP_FIELDS) ? 2 : 1, 1 };
                AVRational  fr = av_mul_q(avctx->framerate, mul);

                if (fr.num && fr.den && av_cmp_q(st->time_base, av_inv_q(fr)) <= 0) {
                    st->r_frame_rate = fr;
                } else {
                    st->r_frame_rate.num = st->time_base.den;
                    st->r_frame_rate.den = st->time_base.num;
                }
            }
            st->codecpar->framerate = avctx->framerate;
            if (sti->display_aspect_ratio.num && sti->display_aspect_ratio.den) {
                AVRational hw_ratio = { avctx->height, avctx->width };
                st->sample_aspect_ratio = av_mul_q(sti->display_aspect_ratio,
                                                   hw_ratio);
            }
        } else if (avctx->codec_type == AVMEDIA_TYPE_AUDIO) { // audio checks
            if (!avctx->bits_per_coded_sample)
                avctx->bits_per_coded_sample =
                    av_get_bits_per_sample(avctx->codec_id);
            // set stream disposition based on audio service type
            switch (avctx->audio_service_type) {
            case AV_AUDIO_SERVICE_TYPE_EFFECTS:
                st->disposition = AV_DISPOSITION_CLEAN_EFFECTS;
                break;
            case AV_AUDIO_SERVICE_TYPE_VISUALLY_IMPAIRED:
                st->disposition = AV_DISPOSITION_VISUAL_IMPAIRED;
                break;
            case AV_AUDIO_SERVICE_TYPE_HEARING_IMPAIRED:
                st->disposition = AV_DISPOSITION_HEARING_IMPAIRED;
                break;
            case AV_AUDIO_SERVICE_TYPE_COMMENTARY:
                st->disposition = AV_DISPOSITION_COMMENT;
                break;
            case AV_AUDIO_SERVICE_TYPE_KARAOKE:
                st->disposition = AV_DISPOSITION_KARAOKE;
                break;
            }
        }
    }
    // Estimate the start time and duration of the streams
    if (probesize)
        estimate_timings(ic, old_offset);

    av_opt_set_int(ic, "skip_clear", 0, AV_OPT_SEARCH_CHILDREN);
    // ----- 6. Final cleanup ----- //
    if (ret >= 0 && ic->nb_streams)
        /* We could not have all the codec parameters before EOF. */
        ret = -1;
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st  = ic->streams[i];
        FFStream *const sti = ffstream(st);
        const char *errmsg;

        /* if no packet was ever seen, update context now for has_codec_parameters */
        if (!sti->avctx_inited) {
            if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&
                st->codecpar->format == AV_SAMPLE_FMT_NONE)
                st->codecpar->format = sti->avctx->sample_fmt;
            ret = avcodec_parameters_to_context(sti->avctx, st->codecpar);
            if (ret < 0)
                goto find_stream_info_err;
        }
        if (!has_codec_parameters(st, &errmsg)) {
            char buf[256];
            avcodec_string(buf, sizeof(buf), sti->avctx, 0);
            av_log(ic, AV_LOG_WARNING,
                   "Could not find codec parameters for stream %d (%s): %s\n"
                   "Consider increasing the value for the 'analyzeduration' (%"PRId64") and 'probesize' (%"PRId64") options\n",
                   i, buf, errmsg, ic->max_analyze_duration, ic->probesize);
        } else {
            ret = 0;
        }
    }
    // Compute the chapter end times
    ret = compute_chapters_end(ic);
    if (ret < 0)
        goto find_stream_info_err;

    /* update the stream parameters from the internal codec contexts */
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st  = ic->streams[i];
        FFStream *const sti = ffstream(st);

        if (sti->avctx_inited) {
            ret = avcodec_parameters_from_context(st->codecpar, sti->avctx);
            if (ret < 0)
                goto find_stream_info_err;

            if (sti->avctx->rc_buffer_size > 0 || sti->avctx->rc_max_rate > 0 ||
                sti->avctx->rc_min_rate) {
                size_t cpb_size;
                AVCPBProperties *props = av_cpb_properties_alloc(&cpb_size);
                if (props) {
                    if (sti->avctx->rc_buffer_size > 0)
                        props->buffer_size = sti->avctx->rc_buffer_size;
                    if (sti->avctx->rc_min_rate > 0)
                        props->min_bitrate = sti->avctx->rc_min_rate;
                    if (sti->avctx->rc_max_rate > 0)
                        props->max_bitrate = sti->avctx->rc_max_rate;
                    if (!av_packet_side_data_add(&st->codecpar->coded_side_data,
                                                 &st->codecpar->nb_coded_side_data,
                                                 AV_PKT_DATA_CPB_PROPERTIES,
                                                 (uint8_t *)props, cpb_size, 0))
                        av_free(props);
                }
            }
        }

        sti->avctx_inited = 0;
#if FF_API_AVSTREAM_SIDE_DATA
FF_DISABLE_DEPRECATION_WARNINGS
        if (st->codecpar->nb_coded_side_data > 0) {
            av_assert0(!st->side_data && !st->nb_side_data);
            st->side_data = av_calloc(st->codecpar->nb_coded_side_data, sizeof(*st->side_data));
            if (!st->side_data) {
                ret = AVERROR(ENOMEM);
                goto find_stream_info_err;
            }

            for (int j = 0; j < st->codecpar->nb_coded_side_data; j++) {
                uint8_t *data = av_memdup(st->codecpar->coded_side_data[j].data,
                                          st->codecpar->coded_side_data[j].size);
                if (!data) {
                    ret = AVERROR(ENOMEM);
                    goto find_stream_info_err;
                }
                st->side_data[j].type = st->codecpar->coded_side_data[j].type;
                st->side_data[j].size = st->codecpar->coded_side_data[j].size;
                st->side_data[j].data = data;
                st->nb_side_data++;
            }
        }
FF_ENABLE_DEPRECATION_WARNINGS
#endif
    }

find_stream_info_err:
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st  = ic->streams[i];
        FFStream *const sti = ffstream(st);
        int err;

        if (sti->info) {
            av_freep(&sti->info->duration_error);
            av_freep(&sti->info);
        }

        err = codec_close(sti);
        if (err < 0 && ret >= 0)
            ret = err;
        av_bsf_free(&sti->extract_extradata.bsf);
    }
    if (ic->pb) {
        FFIOContext *const ctx = ffiocontext(ic->pb);
        av_log(ic, AV_LOG_DEBUG, "After avformat_find_stream_info() pos: %"PRId64" bytes read:%"PRId64" seeks:%d frames:%d\n",
               avio_tell(ic->pb), ctx->bytes_read, ctx->seek_count, count);
    }
    return ret;

unref_then_goto_end:
    av_packet_unref(pkt1);
    goto find_stream_info_err;
}

1.1 Initialize the parser (av_parser_init)

See 【FFmpeg】A brief analysis of the main functions in the decoding path
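
A minimal sketch of the parser initialization performed here (H.264 is only an example codec id, not taken from the referenced article):

#include <libavcodec/avcodec.h>

/* Initialize (and later release) a parser the way avformat_find_stream_info does per stream. */
static void parser_init_demo(void)
{
    AVCodecParserContext *parser = av_parser_init(AV_CODEC_ID_H264);
    if (!parser)
        return; /* no parser registered for this codec id */
    /* For AVSTREAM_PARSE_HEADERS the COMPLETE_FRAMES flag is set, as in the code above */
    parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
    /* ... feed data through av_parser_parse2() here ... */
    av_parser_close(parser);
}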

1.2 Find a probing decoder (find_probe_decoder)

find_probe_decoder first checks whether the codec is H.264, then calls ff_find_decoder to look up a decoder

static const AVCodec *find_probe_decoder(AVFormatContext *s, const AVStream *st, enum AVCodecID codec_id)
{
    const AVCodec *codec;

#if CONFIG_H264_DECODER
    /* Other parts of the code assume this decoder to be used for h264,
     * so force it if possible. */
    // Return the H.264 decoder directly
    if (codec_id == AV_CODEC_ID_H264)
        return avcodec_find_decoder_by_name("h264");
#endif
    // Look up the decoder
    codec = ff_find_decoder(s, st, codec_id);
    if (!codec)
        return NULL;
    // If the decoder's capabilities say it should be avoided for probing, look for another decoder with the same id that can be used for probing
    if (codec->capabilities & AV_CODEC_CAP_AVOID_PROBING) {
        const AVCodec *probe_codec = NULL;
        void *iter = NULL;
        while ((probe_codec = av_codec_iterate(&iter))) {
            if (probe_codec->id == codec->id &&
                    av_codec_is_decoder(probe_codec) &&
                    !(probe_codec->capabilities & (AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_EXPERIMENTAL))) {
                return probe_codec;
            }
        }
    }
    // If no probing-friendly decoder can be found, return the decoder found above
    return codec;
}

ff_find_decoder is implemented as follows

const AVCodec *ff_find_decoder(AVFormatContext *s, const AVStream *st,
                               enum AVCodecID codec_id)
{
    switch (st->codecpar->codec_type) {
    case AVMEDIA_TYPE_VIDEO:
        if (s->video_codec)    return s->video_codec;
        break;
    case AVMEDIA_TYPE_AUDIO:
        if (s->audio_codec)    return s->audio_codec;
        break;
    case AVMEDIA_TYPE_SUBTITLE:
        if (s->subtitle_codec) return s->subtitle_codec;
        break;
    }

    return avcodec_find_decoder(codec_id);
}

As can be seen, ff_find_decoder uses codec_type to decide the decoder category (video, audio or subtitle) and returns the decoder forced on the AVFormatContext if one has been set; otherwise it calls avcodec_find_decoder to find the codec. avcodec_find_decoder has been described in another article; roughly speaking, it iterates over the available decoders and looks for one whose id matches codec_id.
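
The fallback path can be exercised directly from user code; a minimal sketch (AV_CODEC_ID_H264 is only an example id):

#include <stdio.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264); // example codec id
    if (dec)
        printf("found decoder: %s (%s)\n", dec->name, dec->long_name);
    else
        printf("no H.264 decoder available in this build\n");
    return 0;
}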

1.3 Try to open the decoder (avcodec_open2)

See 【FFmpeg】A brief analysis of the main functions in the encoding path
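
A minimal sketch of the open step performed in the probe path (the helper name and the trimmed error handling are my own, not from the referenced article): allocate a context from codecpar and open it with a single thread, mirroring how avformat_find_stream_info forces "threads" to 1.

#include <libavcodec/avcodec.h>
#include <libavutil/dict.h>

static AVCodecContext *open_probe_decoder(const AVCodecParameters *par)
{
    const AVCodec *codec = avcodec_find_decoder(par->codec_id);
    AVCodecContext *avctx = codec ? avcodec_alloc_context3(codec) : NULL;
    AVDictionary *opts = NULL;

    if (!avctx)
        return NULL;
    avcodec_parameters_to_context(avctx, par);   /* copy codecpar into the context */
    av_dict_set(&opts, "threads", "1", 0);       /* single-threaded, as in the probe path */
    if (avcodec_open2(avctx, codec, &opts) < 0)
        avcodec_free_context(&avctx);            /* avctx becomes NULL on failure */
    av_dict_free(&opts);
    return avctx;                                /* free later with avcodec_free_context() */
}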

1.4 Read a frame (read_frame_internal)

This function reads one frame; it calls ff_read_packet to obtain the packets
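
read_frame_internal is the internal workhorse behind the public av_read_frame; for comparison, a minimal public-API read loop (sketch only, error handling trimmed) looks like this:

#include <libavformat/avformat.h>

/* Read all packets from an already opened AVFormatContext *ic (sketch). */
static void drain_packets(AVFormatContext *ic)
{
    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return;
    while (av_read_frame(ic, pkt) >= 0) {
        // pkt->stream_index, pkt->pts, pkt->dts, pkt->size are now valid
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}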

static int read_frame_internal(AVFormatContext *s, AVPacket *pkt)
{
    FFFormatContext *const si = ffformatcontext(s);
    int ret, got_packet = 0;
    AVDictionary *metadata = NULL;

    while (!got_packet && !si->parse_queue.head) {
        AVStream *st;
        FFStream *sti;

        /* read next packet */
        // Read one packet; this goes through the demuxer's .read_packet callback, which
        // differs per format. For FLV, for example, ff_read_packet ends up calling
        // flv_read_packet() to read a packet
        ret = ff_read_packet(s, pkt);
        if (ret < 0) {
            if (ret == AVERROR(EAGAIN))
                return ret;
            /* flush the parsers */
            for (unsigned i = 0; i < s->nb_streams; i++) {
                AVStream *const st  = s->streams[i];
                FFStream *const sti = ffstream(st);
                if (sti->parser && sti->need_parsing)
                    parse_packet(s, pkt, st->index, 1);  // parse the packet
            }
            /* all remaining packets are now in parse_queue =>
             * really terminate parsing */
            break;
        }
        ret = 0;
        st  = s->streams[pkt->stream_index];
        sti = ffstream(st);

        st->event_flags |= AVSTREAM_EVENT_FLAG_NEW_PACKETS;

        /* update context if required */
        // Update the context; the decoder has to be closed first
        if (sti->need_context_update) {
            if (avcodec_is_open(sti->avctx)) {
                av_log(s, AV_LOG_DEBUG, "Demuxer context update while decoder is open, closing and trying to re-open\n");
                ret = codec_close(sti);
                sti->info->found_decoder = 0;
                if (ret < 0)
                    return ret;
            }

            /* close parser, because it depends on the codec */
            if (sti->parser && sti->avctx->codec_id != st->codecpar->codec_id) {
                av_parser_close(sti->parser);
                sti->parser = NULL;
            }
            // Copy the codecpar information into avctx
            ret = avcodec_parameters_to_context(sti->avctx, st->codecpar);
            if (ret < 0) {
                av_packet_unref(pkt);
                return ret;
            }

            sti->codec_desc = avcodec_descriptor_get(sti->avctx->codec_id);

            sti->need_context_update = 0;
        }
        // Check the timestamps
        if (pkt->pts != AV_NOPTS_VALUE &&
            pkt->dts != AV_NOPTS_VALUE &&
            pkt->pts < pkt->dts) {
            av_log(s, AV_LOG_WARNING,
                   "Invalid timestamps stream=%d, pts=%s, dts=%s, size=%d\n",
                   pkt->stream_index,
                   av_ts2str(pkt->pts),
                   av_ts2str(pkt->dts),
                   pkt->size);
        }
        if (s->debug & FF_FDEBUG_TS)
            av_log(s, AV_LOG_DEBUG,
                   "ff_read_packet stream=%d, pts=%s, dts=%s, size=%d, duration=%"PRId64", flags=%d\n",
                   pkt->stream_index,
                   av_ts2str(pkt->pts),
                   av_ts2str(pkt->dts),
                   pkt->size, pkt->duration, pkt->flags);
        // (Re)initialize the parser
        if (sti->need_parsing && !sti->parser && !(s->flags & AVFMT_FLAG_NOPARSE)) {
            sti->parser = av_parser_init(st->codecpar->codec_id);
            if (!sti->parser) {
                av_log(s, AV_LOG_VERBOSE, "parser not found for codec "
                       "%s, packets or times may be invalid.\n",
                       avcodec_get_name(st->codecpar->codec_id));
                /* no parser available: just output the raw packets */
                sti->need_parsing = AVSTREAM_PARSE_NONE;
            } else if (sti->need_parsing == AVSTREAM_PARSE_HEADERS)
                sti->parser->flags |= PARSER_FLAG_COMPLETE_FRAMES;
            else if (sti->need_parsing == AVSTREAM_PARSE_FULL_ONCE)
                sti->parser->flags |= PARSER_FLAG_ONCE;
            else if (sti->need_parsing == AVSTREAM_PARSE_FULL_RAW)
                sti->parser->flags |= PARSER_FLAG_USE_CODEC_TS;
        }

        if (!sti->need_parsing || !sti->parser) {
            /* no parsing needed: we just output the packet as is */
            compute_pkt_fields(s, st, NULL, pkt, AV_NOPTS_VALUE, AV_NOPTS_VALUE);
            if ((s->iformat->flags & AVFMT_GENERIC_INDEX) &&
                (pkt->flags & AV_PKT_FLAG_KEY) && pkt->dts != AV_NOPTS_VALUE) {
                ff_reduce_index(s, st->index);
                av_add_index_entry(st, pkt->pos, pkt->dts,
                                   0, 0, AVINDEX_KEYFRAME);
            }
            got_packet = 1;
        } else if (st->discard < AVDISCARD_ALL) {
            if ((ret = parse_packet(s, pkt, pkt->stream_index, 0)) < 0)
                return ret;
            st->codecpar->sample_rate = sti->avctx->sample_rate;
            st->codecpar->bit_rate = sti->avctx->bit_rate;
            ret = av_channel_layout_copy(&st->codecpar->ch_layout, &sti->avctx->ch_layout);
            if (ret < 0)
                return ret;
            st->codecpar->codec_id = sti->avctx->codec_id;
        } else {
            /* free packet */
            av_packet_unref(pkt);
        }
        if (pkt->flags & AV_PKT_FLAG_KEY)
            sti->skip_to_keyframe = 0;
        if (sti->skip_to_keyframe) {
            av_packet_unref(pkt);
            got_packet = 0;
        }
    }
    // Remove the oldest AVPacket from the parse queue and return it
    if (!got_packet && si->parse_queue.head)
        ret = avpriv_packet_list_get(&si->parse_queue, pkt);

    if (ret >= 0) {
        AVStream *const st  = s->streams[pkt->stream_index];
        FFStream *const sti = ffstream(st);
        int discard_padding = 0;
        if (sti->first_discard_sample && pkt->pts != AV_NOPTS_VALUE) {
            int64_t pts = pkt->pts - (is_relative(pkt->pts) ? RELATIVE_TS_BASE : 0);
            int64_t sample = ts_to_samples(st, pts);
            int64_t duration = ts_to_samples(st, pkt->duration);
            int64_t end_sample = sample + duration;
            if (duration > 0 && end_sample >= sti->first_discard_sample &&
                sample < sti->last_discard_sample)
                discard_padding = FFMIN(end_sample - sti->first_discard_sample, duration);
        }
        if (sti->start_skip_samples && (pkt->pts == 0 || pkt->pts == RELATIVE_TS_BASE))
            sti->skip_samples = sti->start_skip_samples;
        sti->skip_samples = FFMAX(0, sti->skip_samples);
        if (sti->skip_samples || discard_padding) {
            uint8_t *p = av_packet_new_side_data(pkt, AV_PKT_DATA_SKIP_SAMPLES, 10);
            if (p) {
                AV_WL32(p, sti->skip_samples);
                AV_WL32(p + 4, discard_padding);
                av_log(s, AV_LOG_DEBUG, "demuxer injecting skip %u / discard %u\n",
                       (unsigned)sti->skip_samples, (unsigned)discard_padding);
            }
            sti->skip_samples = 0;
        }
        /*
            side data vs. extradata:
            (1) Both are related to encoders and decoders, but their purposes differ.
            (2) Side data is auxiliary information generated during encoding that can matter
                for the codec; for some formats it may contain tables or other data needed to
                reconstruct the picture.
            (3) Extradata is extra information that does not take part in the picture coding
                itself but helps the decoding process, e.g. decoder initialization data or
                encoding parameters.
        */
#if FF_API_AVSTREAM_SIDE_DATA
        if (sti->inject_global_side_data) {
            for (int i = 0; i < st->codecpar->nb_coded_side_data; i++) {
                const AVPacketSideData *const src_sd = &st->codecpar->coded_side_data[i];
                uint8_t *dst_data;

                if (av_packet_get_side_data(pkt, src_sd->type, NULL))
                    continue;

                dst_data = av_packet_new_side_data(pkt, src_sd->type, src_sd->size);
                if (!dst_data) {
                    av_log(s, AV_LOG_WARNING, "Could not inject global side data\n");
                    continue;
                }

                memcpy(dst_data, src_sd->data, src_sd->size);
            }
            sti->inject_global_side_data = 0;
        }
#endif
    }
    // Metadata is descriptive information attached to the media file, such as title, author and copyright. It usually does not affect the codec process directly, but helps describe and organize the media content
    if (!si->metafree) {
        int metaret = av_opt_get_dict_val(s, "metadata", AV_OPT_SEARCH_CHILDREN, &metadata);
        if (metadata) {
            s->event_flags |= AVFMT_EVENT_FLAG_METADATA_UPDATED;
            av_dict_copy(&s->metadata, metadata, 0);
            av_dict_free(&metadata);
            av_opt_set_dict_val(s, "metadata", NULL, AV_OPT_SEARCH_CHILDREN);
        }
        si->metafree = metaret == AVERROR_OPTION_NOT_FOUND;
    }

    if (s->debug & FF_FDEBUG_TS)
        av_log(s, AV_LOG_DEBUG,
               "read_frame_internal stream=%d, pts=%s, dts=%s, "
               "size=%d, duration=%"PRId64", flags=%d\n",
               pkt->stream_index,
               av_ts2str(pkt->pts),
               av_ts2str(pkt->dts),
               pkt->size, pkt->duration, pkt->flags);

    /* A demuxer might have returned EOF because of an IO error, let's
     * propagate this back to the user. */
    if (ret == AVERROR_EOF && s->pb && s->pb->error < 0 && s->pb->error != AVERROR(EAGAIN))
        ret = s->pb->error;

    return ret;
}

The code above performs many checks and assignments; the two core operations are reading a packet (ff_read_packet) and parsing it (parse_packet). ff_read_packet is implemented as follows

1.4.1 Read a packet (ff_read_packet)

The main job of ff_read_packet is to read a packet and store it into an AVPacket. The most important part of the code is the read_packet callback, which actually performs the read; for the FLV format it is flv_read_packet. Suppose avformat_open_input has just been executed: at that point nb_streams is 0, so many loops controlled by nb_streams are not entered. The streams (and nb_streams) are created inside flv_read_packet by calling create_stream. Part of the code is omitted here; because the read_packet callback is so important, it will be covered in detail in a later article.

int ff_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    FFFormatContext *const si = ffformatcontext(s);
    int err;

#if FF_API_INIT_PACKET
FF_DISABLE_DEPRECATION_WARNINGS
    pkt->data = NULL;
    pkt->size = 0;
    av_init_packet(pkt);
FF_ENABLE_DEPRECATION_WARNINGS
#else
    av_packet_unref(pkt);
#endif

    for (;;) {
        PacketListEntry *pktl = si->raw_packet_buffer.head;

        if (pktl) {
            AVStream *const st = s->streams[pktl->pkt.stream_index];
            if (si->raw_packet_buffer_size >= s->probesize)
                if ((err = probe_codec(s, st, NULL)) < 0)
                    return err;
            if (ffstream(st)->request_probe <= 0) {
                avpriv_packet_list_get(&si->raw_packet_buffer, pkt);
                si->raw_packet_buffer_size -= pkt->size;
                return 0;
            }
        }
        // Read a packet via the demuxer's read_packet callback
        err = ffifmt(s->iformat)->read_packet(s, pkt);
        if (err < 0) {
            av_packet_unref(pkt);

            /* Some demuxers return FFERROR_REDO when they consume
               data and discard it (ignored streams, junk, extradata).
               We must re-call the demuxer to get the real packet. */
            if (err == FFERROR_REDO)
                continue;
            if (!pktl || err == AVERROR(EAGAIN))
                return err;
            for (unsigned i = 0; i < s->nb_streams; i++) {
                AVStream *const st  = s->streams[i];
                FFStream *const sti = ffstream(st);
                if (sti->probe_packets || sti->request_probe > 0)
                    if ((err = probe_codec(s, st, NULL)) < 0)
                        return err;
                av_assert0(sti->request_probe <= 0);
            }
            continue;
        }

        err = av_packet_make_refcounted(pkt);
        if (err < 0) {
            av_packet_unref(pkt);
            return err;
        }

        err = handle_new_packet(s, pkt, 1);
        if (err <= 0) /* Error or passthrough */
            return err;
    }
}

Taking the FLV format as an example, flv_read_packet is used to read the packet

static int flv_read_packet(AVFormatContext *s, AVPacket *pkt)
{
    // ...
    /* now find stream */
    for (i = 0; i < s->nb_streams; i++) {
        st = s->streams[i];
        if (stream_type == FLV_STREAM_TYPE_AUDIO) {
            if (st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO &&
                (s->audio_codec_id || flv_same_audio_codec(st->codecpar, flags)))
                break;
        } else if (stream_type == FLV_STREAM_TYPE_VIDEO) {
            if (st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO &&
                (s->video_codec_id || flv_same_video_codec(st->codecpar, video_codec_id)))
                break;
        } else if (stream_type == FLV_STREAM_TYPE_SUBTITLE) {
            if (st->codecpar->codec_type == AVMEDIA_TYPE_SUBTITLE)
                break;
        } else if (stream_type == FLV_STREAM_TYPE_DATA) {
            if (st->codecpar->codec_type == AVMEDIA_TYPE_DATA)
                break;
        }
    }
    /* Create a new stream; this also increments nb_streams */
    // This is also why nb_streams is 0 right after avformat_open_input
    // but becomes non-zero after avformat_find_stream_info
    if (i == s->nb_streams) {
        static const enum AVMediaType stream_types[] = {AVMEDIA_TYPE_VIDEO, AVMEDIA_TYPE_AUDIO, AVMEDIA_TYPE_SUBTITLE, AVMEDIA_TYPE_DATA};
        st = create_stream(s, stream_types[stream_type]);
        if (!st)
            return AVERROR(ENOMEM);
    }
    // ...
}

1.4.2 Parse a packet (parse_packet)

The core of parse_packet is the call to av_parser_parse2, which parses the packet read earlier. Some code is omitted here and will be covered in another article.

/**
 * Parse a packet, add all split parts to parse_queue.
 *
 * @param pkt   Packet to parse; must not be NULL.
 * @param flush Indicates whether to flush. If set, pkt must be blank.
 */
static int parse_packet(AVFormatContext *s, AVPacket *pkt,
                        int stream_index, int flush)
{
    // ...
    while (size > 0 || (flush && got_output)) {
        int64_t next_pts = pkt->pts;
        int64_t next_dts = pkt->dts;
        int len;
        len = av_parser_parse2(sti->parser, sti->avctx,
                               &out_pkt->data, &out_pkt->size, data, size,
                               pkt->pts, pkt->dts, pkt->pos);

        // ...
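
For reference, the canonical public-API usage of av_parser_parse2 looks roughly like this (a sketch of my own, assuming the parser and codec context were already created and that data/data_size point at a raw elementary stream in memory; it is not part of parse_packet itself):

#include <libavcodec/avcodec.h>

/* Feed a raw elementary stream (data/data_size) through an already initialized parser. */
static void parse_all(AVCodecParserContext *parser, AVCodecContext *avctx,
                      const uint8_t *data, int data_size)
{
    AVPacket *out = av_packet_alloc();
    if (!out)
        return;
    while (data_size > 0) {
        int len = av_parser_parse2(parser, avctx, &out->data, &out->size,
                                   data, data_size,
                                   AV_NOPTS_VALUE, AV_NOPTS_VALUE, 0);
        if (len < 0)
            break;                    /* parse error */
        data      += len;             /* bytes consumed from the input */
        data_size -= len;
        if (out->size) {
            /* one complete packet has been assembled; hand it to a decoder here */
        }
    }
    av_packet_free(&out);
}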

1.5 Try to decode a frame (try_decode_frame)

If no information has been obtained in the previous steps, the decoder has to be opened and a frame decoded. This should be avoided whenever possible because it costs extra time and memory. Concretely, avcodec_send_packet is called to feed the packet into the decoder, and avcodec_receive_frame is then called to obtain the decoded frame.
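
The send/receive pattern used inside try_decode_frame is the same one used in ordinary decoding code; a minimal sketch (assuming an already opened decoder context; this is not the article's code):

#include <libavcodec/avcodec.h>

/* Decode one packet and count the frames it produces (sketch, minimal error handling). */
static int decode_one(AVCodecContext *avctx, const AVPacket *pkt, AVFrame *frame)
{
    int got = 0;
    int ret = avcodec_send_packet(avctx, pkt);      /* feed the packet to the decoder */
    if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
        return ret;
    while ((ret = avcodec_receive_frame(avctx, frame)) >= 0) {
        got++;                                      /* one decoded frame is available */
        av_frame_unref(frame);
    }
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        ret = 0;
    return ret < 0 ? ret : got;
}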

/* returns 1 or 0 if or if not decoded data was returned, or a negative error */
static int try_decode_frame(AVFormatContext *s, AVStream *st,
                            const AVPacket *pkt, AVDictionary **options)
{
    FFStream *const sti = ffstream(st);
    AVCodecContext *const avctx = sti->avctx;
    const AVCodec *codec;
    int got_picture = 1, ret = 0;
    AVFrame *frame = av_frame_alloc();
    AVSubtitle subtitle;
    int do_skip_frame = 0;
    enum AVDiscard skip_frame;
    int pkt_to_send = pkt->size > 0;

    if (!frame)
        return AVERROR(ENOMEM);

    if (!avcodec_is_open(avctx) &&
        sti->info->found_decoder <= 0 &&
        (st->codecpar->codec_id != -sti->info->found_decoder || !st->codecpar->codec_id)) {
        AVDictionary *thread_opt = NULL;

        codec = find_probe_decoder(s, st, st->codecpar->codec_id);

        if (!codec) {
            sti->info->found_decoder = -st->codecpar->codec_id;
            ret                     = -1;
            goto fail;
        }

        /* Force thread count to 1 since the H.264 decoder will not extract
         * SPS and PPS to extradata during multi-threaded decoding. */
        av_dict_set(options ? options : &thread_opt, "threads", "1", 0);
        /* Force lowres to 0. The decoder might reduce the video size by the
         * lowres factor, and we don't want that propagated to the stream's
         * codecpar */
        av_dict_set(options ? options : &thread_opt, "lowres", "0", 0);
        if (s->codec_whitelist)
            av_dict_set(options ? options : &thread_opt, "codec_whitelist", s->codec_whitelist, 0);
        ret = avcodec_open2(avctx, codec, options ? options : &thread_opt);
        if (!options)
            av_dict_free(&thread_opt);
        if (ret < 0) {
            sti->info->found_decoder = -avctx->codec_id;
            goto fail;
        }
        sti->info->found_decoder = 1;
    } else if (!sti->info->found_decoder)
        sti->info->found_decoder = 1;

    if (sti->info->found_decoder < 0) {
        ret = -1;
        goto fail;
    }

    if (avpriv_codec_get_cap_skip_frame_fill_param(avctx->codec)) {
        do_skip_frame = 1;
        skip_frame = avctx->skip_frame;
        avctx->skip_frame = AVDISCARD_ALL;
    }

    while ((pkt_to_send || (!pkt->data && got_picture)) &&
           ret >= 0 &&
           (!has_codec_parameters(st, NULL) || !has_decode_delay_been_guessed(st) ||
            (!sti->codec_info_nb_frames &&
             (avctx->codec->capabilities & AV_CODEC_CAP_CHANNEL_CONF)))) {
        got_picture = 0;
        // Video or audio streams
        if (avctx->codec_type == AVMEDIA_TYPE_VIDEO ||
            avctx->codec_type == AVMEDIA_TYPE_AUDIO) {
            // Send the packet to the decoder
            ret = avcodec_send_packet(avctx, pkt);
            if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
                break;
            if (ret >= 0)
                pkt_to_send = 0;
            // Receive the decoded frame
            ret = avcodec_receive_frame(avctx, frame);
            if (ret >= 0)
                got_picture = 1;
            if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
                ret = 0;
        } else if (avctx->codec_type == AVMEDIA_TYPE_SUBTITLE) { // subtitle streams
            ret = avcodec_decode_subtitle2(avctx, &subtitle,
                                           &got_picture, pkt);
            if (got_picture)
                avsubtitle_free(&subtitle);
            if (ret >= 0)
                pkt_to_send = 0;
        }
        if (ret >= 0) {
            if (got_picture)
                sti->nb_decoded_frames++; // number of frames decoded so far
            ret       = got_picture;
        }
    }

fail:
    if (do_skip_frame) {
        avctx->skip_frame = skip_frame;
    }

    av_frame_free(&frame);
    return ret;
}

1.6 Check codec parameters (has_codec_parameters)

This function checks whether the parameters of a video or audio stream (codec id, frame size, sample format, etc.) have all been filled in; if one is missing it returns 0 and reports which parameter is unspecified.
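
After avformat_find_stream_info returns, the fields checked here end up in st->codecpar, so user code usually inspects them there. A minimal sketch (assuming ic has already been opened and probed):

#include <stdio.h>
#include <libavformat/avformat.h>

/* Print the basic parameters that has_codec_parameters verifies (sketch). */
static void dump_stream_params(const AVFormatContext *ic)
{
    for (unsigned i = 0; i < ic->nb_streams; i++) {
        const AVCodecParameters *par = ic->streams[i]->codecpar;
        if (par->codec_type == AVMEDIA_TYPE_VIDEO)
            printf("stream %u: video %s %dx%d\n", i,
                   avcodec_get_name(par->codec_id), par->width, par->height);
        else if (par->codec_type == AVMEDIA_TYPE_AUDIO)
            printf("stream %u: audio %s %d Hz, %d ch\n", i,
                   avcodec_get_name(par->codec_id), par->sample_rate,
                   par->ch_layout.nb_channels);
    }
}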

static int has_codec_parameters(const AVStream *st, const char **errmsg_ptr)
{
    const FFStream *const sti = cffstream(st);
    const AVCodecContext *const avctx = sti->avctx;

#define FAIL(errmsg) do {                                         \
        if (errmsg_ptr)                                           \
            *errmsg_ptr = errmsg;                                 \
        return 0;                                                 \
    } while (0)

    if (   avctx->codec_id == AV_CODEC_ID_NONE
        && avctx->codec_type != AVMEDIA_TYPE_DATA)
        FAIL("unknown codec");
    switch (avctx->codec_type) {
    case AVMEDIA_TYPE_AUDIO:
        if (!avctx->frame_size && determinable_frame_size(avctx))
            FAIL("unspecified frame size");
        if (sti->info->found_decoder >= 0 &&
            avctx->sample_fmt == AV_SAMPLE_FMT_NONE)
            FAIL("unspecified sample format");
        if (!avctx->sample_rate)
            FAIL("unspecified sample rate");
        if (!avctx->ch_layout.nb_channels)
            FAIL("unspecified number of channels");
        if (sti->info->found_decoder >= 0 && !sti->nb_decoded_frames && avctx->codec_id == AV_CODEC_ID_DTS)
            FAIL("no decodable DTS frames");
        break;
    case AVMEDIA_TYPE_VIDEO:
        if (!avctx->width)
            FAIL("unspecified size");
        if (sti->info->found_decoder >= 0 && avctx->pix_fmt == AV_PIX_FMT_NONE)
            FAIL("unspecified pixel format");
        if (st->codecpar->codec_id == AV_CODEC_ID_RV30 || st->codecpar->codec_id == AV_CODEC_ID_RV40)
            if (!st->sample_aspect_ratio.num && !st->codecpar->sample_aspect_ratio.num && !sti->codec_info_nb_frames)
                FAIL("no frame in rv30/40 and no sar");
        break;
    case AVMEDIA_TYPE_SUBTITLE:
        if (avctx->codec_id == AV_CODEC_ID_HDMV_PGS_SUBTITLE && !avctx->width)
            FAIL("unspecified size");
        break;
    case AVMEDIA_TYPE_DATA:
        if (avctx->codec_id == AV_CODEC_ID_NONE) return 1;
    }

    return 1;
}

1.7 Estimate timings (estimate_timings)

This function estimates the start time and duration of the streams

static void estimate_timings(AVFormatContext *ic, int64_t old_offset)
{
    int64_t file_size;

    /* get the file size, if possible */
    if (ic->iformat->flags & AVFMT_NOFILE) {
        file_size = 0;
    } else {
        file_size = avio_size(ic->pb);
        file_size = FFMAX(0, file_size);
    }
    // Decide which estimation method to use
    if ((!strcmp(ic->iformat->name, "mpeg") ||
         !strcmp(ic->iformat->name, "mpegts")) &&
        file_size && (ic->pb->seekable & AVIO_SEEKABLE_NORMAL)) {
        /* get accurate estimate from the PTSes */
        estimate_timings_from_pts(ic, old_offset);
        ic->duration_estimation_method = AVFMT_DURATION_FROM_PTS;
    } else if (has_duration(ic)) {
        /* at least one component has timings - we use them for all
         * the components */
        fill_all_stream_timings(ic);
        /* nut demuxer estimate the duration from PTS */
        if (!strcmp(ic->iformat->name, "nut"))
            ic->duration_estimation_method = AVFMT_DURATION_FROM_PTS;
        else
            ic->duration_estimation_method = AVFMT_DURATION_FROM_STREAM;
    } else {
        /* less precise: use bitrate info */
        estimate_timings_from_bit_rate(ic);
        ic->duration_estimation_method = AVFMT_DURATION_FROM_BITRATE;
    }
    // Update the timing information of the streams
    update_stream_timings(ic);

    for (unsigned i = 0; i < ic->nb_streams; i++) {
        AVStream *const st = ic->streams[i];
        if (st->time_base.den)
            av_log(ic, AV_LOG_TRACE, "stream %u: start_time: %s duration: %s\n", i,
                   av_ts2timestr(st->start_time, &st->time_base),
                   av_ts2timestr(st->duration, &st->time_base));
    }
    av_log(ic, AV_LOG_TRACE,
           "format: start_time: %s duration: %s (estimate from %s) bitrate=%"PRId64" kb/s\n",
           av_ts2timestr(ic->start_time, &AV_TIME_BASE_Q),
           av_ts2timestr(ic->duration, &AV_TIME_BASE_Q),
           duration_estimate_name(ic->duration_estimation_method),
           (int64_t)ic->bit_rate / 1000);
}

The estimation methods are defined as

/**
 * The duration of a video can be estimated through various ways, and this enum can be used
 * to know how the duration was estimated.
 */
enum AVDurationEstimationMethod {
    AVFMT_DURATION_FROM_PTS,    ///< Duration accurately estimated from PTSes
    AVFMT_DURATION_FROM_STREAM, ///< Duration estimated from a stream with a known duration
    AVFMT_DURATION_FROM_BITRATE ///< Duration estimated from bitrate (less accurate)
};

Three methods are used here to estimate the duration:
(1) AVFMT_DURATION_FROM_PTS
The accurate method. It is only used for MPEG-PS/MPEG-TS files and is implemented by estimate_timings_from_pts(): the duration is obtained by subtracting the pts of the first AVPacket from the pts of the last AVPacket.
(2) AVFMT_DURATION_FROM_STREAM
A rougher method: the duration is taken from a stream whose duration is already known, implemented by fill_all_stream_timings(). Concretely, the stream timings are updated first, then every stream is walked; if a stream has no start timestamp, the start_time and duration of another stream are copied into it.
(3) AVFMT_DURATION_FROM_BITRATE
The least accurate method: the duration is estimated from the bit rate, implemented by estimate_timings_from_bit_rate(). Concretely, if the AVFormatContext has no bit_rate, the bit_rate values of all its AVStreams are added up and used as the bit_rate of the AVFormatContext; if it does have one, that bit_rate is trusted. The duration is then derived from the file size and the bit rate (8 * file_size / bit_rate).

2. Summary

The main job of avformat_find_stream_info is to parse the stream information of the input, building on the format obtained in avformat_open_input. The key stream information involved includes the audio, video and subtitle information; the video information in turn includes the start timestamp, the duration, the AVCodec information (width, height, etc.), the number of streams, and so on. It is an extremely important function and deserves careful attention.

CSDN : https://blog.csdn.net/weixin_42877471
Github : https://github.com/DoFulangChen
