Object Detection mAP


Overview

AP (Average Precision) is a popular metric for measuring the accuracy of object detectors like Faster R-CNN, SSD, etc. Average precision computes the average precision value for recall values over 0 to 1. It sounds complicated but is actually pretty simple, as we will illustrate with an example. Before that, we do a quick recap on precision, recall, and IoU.

Precision & Recall

Precision measures how accurate your predictions are, i.e. the percentage of your predictions that are correct.

    Precision = TP / (TP + FP)

Recall measures what percentage of all the actual positives you have found. For example, we may find 80% of the possible positive cases among our top K predictions.

    Recall = TP / (TP + FN)

F1-score

    F1 = 2PR / (P + R)
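As a quick illustration of these three formulas, here is a minimal Python sketch that computes them from raw counts (the TP/FP/FN numbers below are made up for the example):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw detection counts."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, f1

# Hypothetical counts: 8 correct detections, 2 false alarms, 4 missed objects.
p, r, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
print(p, r, f1)  # 0.8, 0.666..., 0.727...
```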

IoU (Intersection over Union)

IoU measures the overlap between two boundaries. We use it to measure how much our predicted boundary overlaps with the ground truth (the real object boundary). In some datasets, we predefine an IoU threshold (say 0.5) for classifying whether a prediction is a true positive or a false positive.
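A minimal sketch of this computation for axis-aligned boxes in (x1, y1, x2, y2) format (the example coordinates are made up):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction that overlaps half of the ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333..., below a 0.5 threshold
```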

AP (Average Precision)

Let’s create an over-simplified example to demonstrate the calculation of the average precision. In this example, the whole dataset contains 5 apples only. We collect all the predictions made for apples in all the images and rank them in descending order according to the predicted confidence level. The second column indicates whether the prediction is correct or not. In this example, a prediction is correct if IoU ≥ 0.5.

Let’s take the row with rank #3 and demonstrate how precision and recall are calculated first.

Precision is the proportion of TP among the top 3 predictions = 2/3 = 0.67.

Recall is the proportion of TP out of the possible positives = 2/5 = 0.4.

Recall values increase as we go down the prediction ranking. However, precision has a zigzag pattern — it goes down with false positives and goes up again with true positives.
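The sketch below walks down such a ranked list and records precision and recall at every rank. The correctness flags are hypothetical, chosen only so that the first rows reproduce the numbers above (two true positives, then a false positive, with 5 ground-truth apples in total):

```python
def running_precision_recall(correct_flags, num_ground_truth):
    """Precision/recall at each rank of a confidence-sorted prediction list."""
    points = []
    tp = 0
    for rank, is_correct in enumerate(correct_flags, start=1):
        if is_correct:
            tp += 1
        points.append((rank, tp / rank, tp / num_ground_truth))
    return points

# Hypothetical ranking: True = TP (IoU >= 0.5), False = FP.
flags = [True, True, False, False, False, True, True, False, False, True]
for rank, p, r in running_precision_recall(flags, num_ground_truth=5):
    print(f"rank {rank}: precision={p:.2f} recall={r:.2f}")
# rank 3 prints precision=0.67 recall=0.40, matching the walkthrough above.
```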

Let’s plot the precision against the recall value to see this zig-zag pattern.

The general definition of Average Precision (AP) is the area under the precision-recall curve above.

Precision and recall are always between 0 and 1. Therefore, AP also falls between 0 and 1. Before calculating AP for object detection, we often smooth out the zigzag pattern first.

Graphically, at each recall level, we replace each precision value with the maximum precision value to the right of that recall level.

So the orange line is transformed into the green lines and the curve decreases monotonically instead of zigzagging. The calculated AP value is less susceptible to small variations in the ranking. Mathematically, we replace the precision value at recall r̃ with the maximum precision for any recall ≥ r̃.
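A minimal sketch of this interpolation rule, using a hypothetical set of precision/recall points consistent with the apple example (rounded values):

```python
def interpolated_precision(recalls, precisions, r):
    """p_interp(r): the maximum precision over all points whose recall >= r."""
    candidates = [p for rec, p in zip(recalls, precisions) if rec >= r]
    return max(candidates) if candidates else 0.0

# Hypothetical (recall, precision) points from a ranked prediction list.
recalls    = [0.2, 0.4, 0.4, 0.4, 0.4, 0.6, 0.8, 0.8, 0.8, 1.0]
precisions = [1.0, 1.0, 0.67, 0.5, 0.4, 0.5, 0.57, 0.5, 0.44, 0.5]
print(interpolated_precision(recalls, precisions, 0.4))  # 1.0
print(interpolated_precision(recalls, precisions, 0.6))  # 0.57
```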

Interpolated AP

PASCAL VOC is a popular dataset for object detection. For the PASCAL VOC challenge, a prediction is a true positive if IoU ≥ 0.5. Also, if multiple detections of the same object occur, the first one counts as a true positive while the rest count as false positives.

In PASCAL VOC2008, an 11-point interpolated AP is calculated.

First, we divide the recall range from 0 to 1.0 into 11 points: 0, 0.1, 0.2, …, 0.9, and 1.0. Next, we compute the average of the maximum precision values at these 11 recall levels.

In our example, AP = (5 × 1.0 + 4 × 0.57 + 2 × 0.5) / 11 ≈ 0.75.

Here are the more precise mathematical definitions.
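In the notation above, with p_interp(r) denoting the smoothed precision at recall level r:

    AP = (1/11) × Σ APᵣ,   summed over r ∈ {0, 0.1, 0.2, …, 1.0}

    APᵣ = p_interp(r) = max p(r̃) over all r̃ ≥ r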

When APᵣ turns extremely small, we can assume the remaining terms to be zero, i.e. we don’t necessarily need to make predictions until the recall reaches 100%. If the possible maximum precision level drops to a negligible value, we can stop.
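Putting these pieces together, here is a sketch of the 11-point calculation, reusing the same hypothetical precision/recall points as above:

```python
def eleven_point_ap(recalls, precisions):
    """PASCAL VOC 2008-style AP: average the interpolated precision
    at the 11 recall levels 0.0, 0.1, ..., 1.0."""
    total = 0.0
    for i in range(11):
        r = i / 10.0
        # Maximum precision over all points with recall >= r (0 if none).
        candidates = [p for rec, p in zip(recalls, precisions) if rec >= r]
        total += max(candidates) if candidates else 0.0
    return total / 11.0

recalls    = [0.2, 0.4, 0.4, 0.4, 0.4, 0.6, 0.8, 0.8, 0.8, 1.0]
precisions = [1.0, 1.0, 0.67, 0.5, 0.4, 0.5, 0.57, 0.5, 0.44, 0.5]
print(eleven_point_ap(recalls, precisions))  # (5*1.0 + 4*0.57 + 2*0.5) / 11 ≈ 0.75
```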

For the 20 object classes in PASCAL VOC, we compute an AP for every class and also report the average over those 20 AP results.

According to the original researchers, the intention of using 11 interpolated points in calculating AP is:

The intention in interpolating the precision/recall curve in this way is to reduce the impact of the “wiggles” in the precision/recall curve, caused by small variations in the ranking of examples.

However, this interpolated method is an approximation that suffers from two issues. First, it is less precise. Second, it loses the ability to measure the difference between methods with low AP. Therefore, a different AP calculation was adopted for PASCAL VOC after 2008.

AP (Area Under Curve, AUC)

For the later PASCAL VOC competitions (VOC2010–2012), the curve is sampled at all unique recall values (r₁, r₂, …), i.e. wherever the maximum precision value drops. With this change, we measure the exact area under the precision-recall curve after the zigzags are removed.

No approximation or interpolation is needed. Instead of sampling 11 points, we sample p(rᵢ) whenever it drops and compute AP as the sum of the rectangular blocks.
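The sketch below mirrors the common devkit implementation of this idea (smooth the precision, then add up one rectangle per recall change); the precision/recall points are the same hypothetical ones as before:

```python
def auc_ap(recalls, precisions):
    """VOC 2010-2012-style AP: exact area under the smoothed PR curve."""
    # Sentinels: the curve starts at recall 0 and the envelope ends at precision 0.
    mrec = [0.0] + list(recalls) + [1.0]
    mpre = [0.0] + list(precisions) + [0.0]
    # Smoothing: each precision becomes the maximum precision to its right.
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum rectangles only where the recall value actually changes.
    ap = 0.0
    for i in range(1, len(mrec)):
        if mrec[i] != mrec[i - 1]:
            ap += (mrec[i] - mrec[i - 1]) * mpre[i]
    return ap

recalls    = [0.2, 0.4, 0.4, 0.4, 0.4, 0.6, 0.8, 0.8, 0.8, 1.0]
precisions = [1.0, 1.0, 0.67, 0.5, 0.4, 0.5, 0.57, 0.5, 0.44, 0.5]
print(auc_ap(recalls, precisions))  # ~0.73 with these rounded numbers
```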

This definition is called the Area Under Curve (AUC). As shown below, because the 11 interpolated points do not necessarily fall where the precision drops, the two methods can diverge.

COCO mAP

Recent research papers tend to report results on the COCO dataset only. In COCO mAP, a 101-point interpolated AP definition is used in the calculation. For COCO, AP is the average over multiple IoU thresholds (the minimum IoU required to count a detection as a positive match). AP@[.5:.95] corresponds to the average AP for IoU from 0.5 to 0.95 with a step size of 0.05. For the COCO competition, AP is the average over 10 IoU levels and over 80 categories (AP@[.50:.05:.95]: from 0.5 to 0.95 with a step size of 0.05). Other metrics, such as AP at a fixed IoU (AP50, AP75) and AP across object scales (small, medium, large), are also collected for the COCO dataset.
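The official numbers come from the COCO development kit (pycocotools); the fragment below is only a sketch of the averaging step, assuming a hypothetical ap_at_iou(detections, ground_truths, threshold) helper that returns the 101-point interpolated AP at a single IoU threshold:

```python
def coco_style_ap(detections, ground_truths, ap_at_iou):
    """AP@[.50:.05:.95]: mean of the per-threshold APs over 10 IoU thresholds.
    `ap_at_iou` is an assumed helper, not part of any particular library."""
    thresholds = [0.5 + 0.05 * i for i in range(10)]  # 0.50, 0.55, ..., 0.95
    aps = [ap_at_iou(detections, ground_truths, t) for t in thresholds]
    return sum(aps) / len(aps)
```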

mAP (mean average precision) is the average of AP. In some contexts, we compute the AP for each class and average them; in other contexts, AP and mAP mean the same thing. For example, in the COCO context, there is no difference between AP and mAP. Here is the direct quote from COCO:

AP is averaged over all categories. Traditionally, this is called “mean average precision” (mAP). We make no distinction between AP and mAP (and likewise AR and mAR) and assume the difference is clear from context.

In ImageNet, the AUC method is used. So even though they all follow the same principle for measuring AP, the exact calculation can vary across datasets. Fortunately, development kits are available for calculating these metrics.
