Adding MobileOne to YOLO


Code: https://github.com/apple/ml-mobileone (the official implementation of the paper "An Improved One millisecond Mobile Backbone").

Paper: https://arxiv.org/abs/2206.04040

MobileOne comes from Apple. Its authors report that MobileOne runs inference in just 1 millisecond on an iPhone 12, which is where the "One" in the name comes from. MobileOne's rapid path from paper to deployment shows the potential of re-parameterization on mobile devices: simple, efficient, and plug-and-play.

The left side of Figure 3 shows one complete MobileOne building block. It consists of two parts: the upper part is based on depthwise convolution and the lower part on pointwise convolution; both terms come from MobileNet. A depthwise convolution is essentially a grouped convolution whose group count g equals the number of input channels, while a pointwise convolution is a 1×1 convolution.

The depthwise module in Figure 3 consists of three branches. The leftmost branch is a 1×1 convolution; the middle branch is an over-parameterized 3×3 convolution, i.e. k parallel 3×3 convolutions; the right branch is a shortcut connection containing only a BN layer. Both the 1×1 and 3×3 convolutions here are depthwise (grouped convolutions with group count g equal to the number of input channels).

The pointwise module in Figure 3 consists of two branches. The left branch is an over-parameterized 1×1 convolution made up of k parallel 1×1 convolutions; the right branch is a skip connection containing only a BN layer. During training, MobileOne is built by stacking these building blocks. Once training is complete, re-parameterization folds the multi-branch block shown on the left of Figure 3 into the plain structure on the right.
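
This equivalence is easy to check numerically. The following is a minimal sanity-check sketch, assuming the MobileOneBlock class listed later in this post is in scope:

# After reparameterize(), the fused single-conv block should reproduce the
# multi-branch output. eval() matters: BN must use its running statistics.
import torch

block = MobileOneBlock(in_channels=64, out_channels=64, kernel_size=3,
                       stride=1, padding=1, groups=64,  # depthwise 3x3
                       num_conv_branches=4)
block.eval()
x = torch.randn(1, 64, 32, 32)
y_multi = block(x)
block.reparameterize()  # fold all branches into a single reparam_conv
y_plain = block(x)
print((y_multi - y_plain).abs().max())  # expect float rounding error, ~1e-6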

1、YOLOv5

Create yolov5s-mobileone.yaml:

# YOLOv5 🚀 by Ultralytics, GPL-3.0 license

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 v6.0 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Conv, [64, 6, 2, 2]],  # 0-P1/2
   [-1, 1, MobileOne, [128, True, 2]],  # 1-P2/4
   [-1, 1, MobileOne, [256, True, 8]],  # 2-P3/8
   [-1, 1, MobileOne, [512, True, 10]],  # 3-P4/16
   [-1, 1, MobileOne, [1024, True, 1]],  # 4-P5/32
   [-1, 1, SPPF, [1024, 5]],  # 5
  ]

# YOLOv5 v6.0 head
head:
  [[-1, 1, Conv, [512, 1, 1]], # 6
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 7
   [[-1, 3], 1, Concat, [1]],  # cat backbone P4
   [-1, 1, MobileOne, [512, False, 3]],  # 9

   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 2], 1, Concat, [1]],  # cat backbone P3
   [-1, 1, MobileOne, [256, False, 3]],  # 13 (P3/8-small)

   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P4
   [-1, 1, MobileOne, [512, False, 3]],  # 16 (P4/16-medium)

   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 6], 1, Concat, [1]],  # cat head P5
   [-1, 1, MobileOne, [1024, False, 3]],  # 19 (P5/32-large)

   [[13, 16, 19], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
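
In each MobileOne entry the args are interpreted here as [out_channels, downsample_flag, num_blocks]; parse_model (see below) prepends the input channels. The original post does not show the MobileOne wrapper module itself; a plausible sketch of it is given after the common.py listing.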

Add the following to common.py:

from typing import Tuple  # Optional and List from the upstream file are unused here
import torch.nn.functional as F  # torch and nn are already imported in common.py


class SEBlock(nn.Module):
    """ Squeeze and Excite module.

        PyTorch implementation of `Squeeze-and-Excitation Networks` -
        https://arxiv.org/pdf/1709.01507.pdf
    """

    def __init__(self,
                 in_channels: int,
                 rd_ratio: float = 0.0625) -> None:
        """ Construct a Squeeze and Excite Module.

        :param in_channels: Number of input channels.
        :param rd_ratio: Input channel reduction ratio.
        """
        super(SEBlock, self).__init__()
        self.reduce = nn.Conv2d(in_channels=in_channels,
                                out_channels=int(in_channels * rd_ratio),
                                kernel_size=1,
                                stride=1,
                                bias=True)
        self.expand = nn.Conv2d(in_channels=int(in_channels * rd_ratio),
                                out_channels=in_channels,
                                kernel_size=1,
                                stride=1,
                                bias=True)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        b, c, h, w = inputs.size()
        x = F.avg_pool2d(inputs, kernel_size=[h, w])
        x = self.reduce(x)
        x = F.relu(x)
        x = self.expand(x)
        x = torch.sigmoid(x)
        x = x.view(-1, c, 1, 1)
        return inputs * x
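

# Note: use_se is False by default; in the MobileOne paper, SE-ReLU is used
# only in the largest variant (MobileOne-S4). This integration leaves it off.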


class MobileOneBlock(nn.Module):
    """ MobileOne building block.

        This block has a multi-branched architecture at train-time
        and a plain-CNN style architecture at inference time.
        For more details, please refer to our paper:
        `An Improved One millisecond Mobile Backbone` -
        https://arxiv.org/pdf/2206.04040.pdf
    """

    def __init__(self,
                 in_channels: int,
                 out_channels: int,
                 kernel_size: int,
                 stride: int = 1,
                 padding: int = 0,
                 dilation: int = 1,
                 groups: int = 1,
                 inference_mode: bool = False,
                 use_se: bool = False,
                 num_conv_branches: int = 1) -> None:
        """ Construct a MobileOneBlock module.

        :param in_channels: Number of channels in the input.
        :param out_channels: Number of channels produced by the block.
        :param kernel_size: Size of the convolution kernel.
        :param stride: Stride size.
        :param padding: Zero-padding size.
        :param dilation: Kernel dilation factor.
        :param groups: Group number.
        :param inference_mode: If True, instantiates model in inference mode.
        :param use_se: Whether to use SE-ReLU activations.
        :param num_conv_branches: Number of linear conv branches.
        """
        super(MobileOneBlock, self).__init__()
        self.inference_mode = inference_mode
        self.groups = groups
        self.stride = stride
        self.kernel_size = kernel_size
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.num_conv_branches = num_conv_branches

        # Check if SE-ReLU is requested
        if use_se:
            self.se = SEBlock(out_channels)
        else:
            self.se = nn.Identity()
        self.activation = nn.ReLU()

        if inference_mode:
            self.reparam_conv = nn.Conv2d(in_channels=in_channels,
                                          out_channels=out_channels,
                                          kernel_size=kernel_size,
                                          stride=stride,
                                          padding=padding,
                                          dilation=dilation,
                                          groups=groups,
                                          bias=True)
        else:
            # Re-parameterizable skip connection
            self.rbr_skip = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None

            # Re-parameterizable conv branches
            rbr_conv = list()
            for _ in range(self.num_conv_branches):
                rbr_conv.append(self._conv_bn(kernel_size=kernel_size,
                                              padding=padding))
            self.rbr_conv = nn.ModuleList(rbr_conv)

            # Re-parameterizable scale branch
            self.rbr_scale = None
            if kernel_size > 1:
                self.rbr_scale = self._conv_bn(kernel_size=1,
                                               padding=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """ Apply forward pass. """
        # Inference mode forward pass.
        if self.inference_mode:
            return self.activation(self.se(self.reparam_conv(x)))

        # Multi-branched train-time forward pass.
        # Skip branch output
        identity_out = 0
        if self.rbr_skip is not None:
            identity_out = self.rbr_skip(x)

        # Scale branch output
        scale_out = 0
        if self.rbr_scale is not None:
            scale_out = self.rbr_scale(x)

        # Other branches
        out = scale_out + identity_out
        for ix in range(self.num_conv_branches):
            out += self.rbr_conv[ix](x)

        return self.activation(self.se(out))

    def reparameterize(self):
        """ Following works like `RepVGG: Making VGG-style ConvNets Great Again` -
        https://arxiv.org/pdf/2101.03697.pdf. We re-parameterize multi-branched
        architecture used at training time to obtain a plain CNN-like structure
        for inference.
        """
        if self.inference_mode:
            return
        kernel, bias = self._get_kernel_bias()
        self.reparam_conv = nn.Conv2d(in_channels=self.rbr_conv[0].conv.in_channels,
                                      out_channels=self.rbr_conv[0].conv.out_channels,
                                      kernel_size=self.rbr_conv[0].conv.kernel_size,
                                      stride=self.rbr_conv[0].conv.stride,
                                      padding=self.rbr_conv[0].conv.padding,
                                      dilation=self.rbr_conv[0].conv.dilation,
                                      groups=self.rbr_conv[0].conv.groups,
                                      bias=True)
        self.reparam_conv.weight.data = kernel
        self.reparam_conv.bias.data = bias

        # Delete un-used branches
        for para in self.parameters():
            para.detach_()
        self.__delattr__('rbr_conv')
        self.__delattr__('rbr_scale')
        if hasattr(self, 'rbr_skip'):
            self.__delattr__('rbr_skip')

        self.inference_mode = True

    def _get_kernel_bias(self) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to obtain re-parameterized kernel and bias.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L83

        :return: Tuple of (kernel, bias) after fusing branches.
        """
        # get weights and bias of scale branch
        kernel_scale = 0
        bias_scale = 0
        if self.rbr_scale is not None:
            kernel_scale, bias_scale = self._fuse_bn_tensor(self.rbr_scale)
            # Pad scale branch kernel to match conv branch kernel size.
            pad = self.kernel_size // 2
            kernel_scale = torch.nn.functional.pad(kernel_scale,
                                                   [pad, pad, pad, pad])

        # get weights and bias of skip branch
        kernel_identity = 0
        bias_identity = 0
        if self.rbr_skip is not None:
            kernel_identity, bias_identity = self._fuse_bn_tensor(self.rbr_skip)

        # get weights and bias of conv branches
        kernel_conv = 0
        bias_conv = 0
        for ix in range(self.num_conv_branches):
            _kernel, _bias = self._fuse_bn_tensor(self.rbr_conv[ix])
            kernel_conv += _kernel
            bias_conv += _bias

        kernel_final = kernel_conv + kernel_scale + kernel_identity
        bias_final = bias_conv + bias_scale + bias_identity
        return kernel_final, bias_final

    def _fuse_bn_tensor(self, branch) -> Tuple[torch.Tensor, torch.Tensor]:
        """ Method to fuse batchnorm layer with preceeding conv layer.
        Reference: https://github.com/DingXiaoH/RepVGG/blob/main/repvgg.py#L95

        :param branch:
        :return: Tuple of (kernel, bias) after fusing batchnorm.
        """
        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
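            # A BN-only skip branch is equivalent to a conv with an identity
            # kernel; build that kernel once so BN can be folded into it below.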
            if not hasattr(self, 'id_tensor'):
                input_dim = self.in_channels // self.groups
                kernel_value = torch.zeros((self.in_channels,
                                            input_dim,
                                            self.kernel_size,
                                            self.kernel_size),
                                           dtype=branch.weight.dtype,
                                           device=branch.weight.device)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim,
                                 self.kernel_size // 2,
                                 self.kernel_size // 2] = 1
                self.id_tensor = kernel_value
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps
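        # Fold BN into conv: w' = w * (gamma / std), b' = beta - mean * (gamma / std)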
        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def _conv_bn(self,
                 kernel_size: int,
                 padding: int) -> nn.Sequential:
        """ Helper method to construct conv-batchnorm layers.

        :param kernel_size: Size of the convolution kernel.
        :param padding: Zero-padding size.
        :return: Conv-BN module.
        """
        mod_list = nn.Sequential()
        mod_list.add_module('conv', nn.Conv2d(in_channels=self.in_channels,
                                              out_channels=self.out_channels,
                                              kernel_size=kernel_size,
                                              stride=self.stride,
                                              padding=padding,
                                              groups=self.groups,
                                              bias=False))
        mod_list.add_module('bn', nn.BatchNorm2d(num_features=self.out_channels))
        return mod_list
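
The YAML and the parse_model change below reference a MobileOne module, but the listing above only defines MobileOneBlock. The original post omits this wrapper, so the following is a plausible sketch (an assumption, not the author's code): it treats the parsed args as (c1, c2, downsample, num_blocks) and stacks depthwise + pointwise MobileOneBlock pairs, matching the block structure described earlier.

class MobileOne(nn.Module):
    """ Hypothetical wrapper so the YAML's MobileOne entries resolve.

    Assumed semantics: `down` makes the first pair downsample (stride 2),
    `n` is the number of depthwise + pointwise block pairs.
    """

    def __init__(self, c1, c2, down=False, n=1, num_conv_branches=4):
        super().__init__()
        layers, in_c = [], c1
        for i in range(n):
            s = 2 if (down and i == 0) else 1
            # Depthwise 3x3 (groups == channels), then pointwise 1x1.
            layers.append(MobileOneBlock(in_c, in_c, kernel_size=3, stride=s,
                                         padding=1, groups=in_c,
                                         num_conv_branches=num_conv_branches))
            layers.append(MobileOneBlock(in_c, c2, kernel_size=1, stride=1,
                                         padding=0, groups=1,
                                         num_conv_branches=num_conv_branches))
            in_c = c2
        self.blocks = nn.Sequential(*layers)

    def forward(self, x):
        return self.blocks(x)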

Add the following to yolo.py, inside parse_model (the tuple here also contains C2f_Add from this particular fork; the relevant addition is MobileOne):

        if m in (Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                 BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x, C2f_Add,
                 MobileOne):
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in [BottleneckCSP, C3, C3TR, C3Ghost, C3x, C2f_Add]:
                args.insert(2, n)  # number of repeats
                n = 1
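
With this change, a backbone entry such as [-1, 1, MobileOne, [128, True, 2]] is expanded to MobileOne(c1, 64, True, 2): args[0] = 128 is scaled by width_multiple = 0.50 through make_divisible to give c2 = 64, and the remaining args pass through unchanged.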

Also, in the BaseModel class of yolo.py, extend fuse() as follows; the hasattr(m, 'reparameterize') check is the new part, which folds every MobileOneBlock into its single-conv inference form:

    def fuse(self):  # fuse model Conv2d() + BatchNorm2d() layers
        LOGGER.info('Fusing layers... ')
        for m in self.model.modules():
            if isinstance(m, (Conv, DWConv)) and hasattr(m, 'bn'):
                m.conv = fuse_conv_and_bn(m.conv, m.bn)  # update conv
                delattr(m, 'bn')  # remove batchnorm
                m.forward = m.forward_fuse  # update forward
            if hasattr(m, 'reparameterize'):
                m.reparameterize()
        self.info()
        return self
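
As a quick end-to-end check, a sketch assuming the YAML is saved as models/yolov5s-mobileone.yaml and this is run from the yolov5 repo root with the changes above applied:

import torch
from models.yolo import Model

model = Model('models/yolov5s-mobileone.yaml').eval()
x = torch.zeros(1, 3, 640, 640)
y_ref = model(x)[0]
model.fuse()  # triggers reparameterize() on every MobileOneBlock
y_fused = model(x)[0]
print((y_ref - y_fused).abs().max())  # should be small (float tolerance)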

Run yolo.py to check that the model parses and builds, for example:
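
python models/yolo.py --cfg yolov5s-mobileone.yaml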
