YOLOv5 (v7.0) Network Modification in Practice, Part 1: Integrating YOLOX's backbone (CSPDarknet and PAFPN) into the YOLOv5 (v7.0) framework


YOLOv5 is a pleasure to use: whether for practical engineering or for study and research, it is easy to get started with, and it is now widely deployed in industry. For research purposes, though, quite a few algorithms proposed after YOLOv5 bring genuine gains and are worth studying, and doing so also helps keep up with the current state of the art. So I plan to integrate some strong algorithms into the YOLOv5 framework for study and research.

This time I chose YOLOX, which was proposed after YOLOv5; I won't analyze the algorithm itself in detail here. YOLOX is a YOLO-family object detector from Megvii that applies quite a few tricks: a CSPDarknet backbone, PAFPN feature fusion, a two-branch decoupled head, an anchor-free design, and data augmentations such as Mosaic. In this post I first integrate YOLOX's feature-extraction stage, that is, the CSPDarknet backbone and the PAFPN neck, into YOLOv5.

1. YOLOX's network configuration

In the official YOLOX code downloaded from GitHub, you can see that YOLOX's backbone and head default to YOLOPAFPN and YOLOXHead respectively; let's look at the backbone part first.
Below is the code of YOLOPAFPN. From it you can see how this module is defined: which structures it contains, its inputs and outputs, and so on.
The contained structures are listed in the __init__() function; how they are assembled into YOLOPAFPN is determined by the forward function.

#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Copyright (c) Megvii Inc. All rights reserved.

import torch
import torch.nn as nn

from .darknet import CSPDarknet
from .network_blocks import BaseConv, CSPLayer, DWConv


class YOLOPAFPN(nn.Module):
    """
    YOLOv3 model. Darknet 53 is the default backbone of this model.
    """

    def __init__(
        self,
        depth=1.0,
        width=1.0,
        in_features=("dark3", "dark4", "dark5"),
        in_channels=[256, 512, 1024],
        depthwise=False,
        act="silu",
    ):
        super().__init__()
        self.backbone = CSPDarknet(depth, width, depthwise=depthwise, act=act)
        self.in_features = in_features
        self.in_channels = in_channels
        Conv = DWConv if depthwise else BaseConv

        self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
        self.lateral_conv0 = BaseConv(
            int(in_channels[2] * width), int(in_channels[1] * width), 1, 1, act=act
        )
        self.C3_p4 = CSPLayer(
            int(2 * in_channels[1] * width),
            int(in_channels[1] * width),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )  # cat

        self.reduce_conv1 = BaseConv(
            int(in_channels[1] * width), int(in_channels[0] * width), 1, 1, act=act
        )
        self.C3_p3 = CSPLayer(
            int(2 * in_channels[0] * width),
            int(in_channels[0] * width),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )

        # bottom-up conv
        self.bu_conv2 = Conv(
            int(in_channels[0] * width), int(in_channels[0] * width), 3, 2, act=act
        )
        self.C3_n3 = CSPLayer(
            int(2 * in_channels[0] * width),
            int(in_channels[1] * width),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )

        # bottom-up conv
        self.bu_conv1 = Conv(
            int(in_channels[1] * width), int(in_channels[1] * width), 3, 2, act=act
        )
        self.C3_n4 = CSPLayer(
            int(2 * in_channels[1] * width),
            int(in_channels[2] * width),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )

    def forward(self, input):
        #  backbone
        out_features = self.backbone(input)
        features = [out_features[f] for f in self.in_features]
        [x2, x1, x0] = features

        fpn_out0 = self.lateral_conv0(x0)  # 1024->512/32
        f_out0 = self.upsample(fpn_out0)  # 512/16
        f_out0 = torch.cat([f_out0, x1], 1)  # 512->1024/16
        f_out0 = self.C3_p4(f_out0)  # 1024->512/16

        fpn_out1 = self.reduce_conv1(f_out0)  # 512->256/16
        f_out1 = self.upsample(fpn_out1)  # 256/8
        f_out1 = torch.cat([f_out1, x2], 1)  # 256->512/8
        pan_out2 = self.C3_p3(f_out1)  # 512->256/8

        p_out1 = self.bu_conv2(pan_out2)  # 256->256/16
        p_out1 = torch.cat([p_out1, fpn_out1], 1)  # 256->512/16
        pan_out1 = self.C3_n3(p_out1)  # 512->512/16

        p_out0 = self.bu_conv1(pan_out1)  # 512->512/32
        p_out0 = torch.cat([p_out0, fpn_out0], 1)  # 512->1024/32
        pan_out0 = self.C3_n4(p_out0)  # 1024->1024/32

        outputs = (pan_out2, pan_out1, pan_out0)
        return outputs

The forward function determines how the defined structures are wired together. From the forward function above you can see that YOLOPAFPN first runs a backbone, CSPDarknet, and only then enters the feature-fusion stage, so the forward function gives you the overall structure of YOLOPAFPN (in the usual YOLOX architecture diagram, YOLOPAFPN is everything before the detection head).
From the imports you can find the definition code of CSPDarknet and the related basic blocks, in darknet.py and network_blocks.py under the models folder.
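For reference, CSPDarknet's forward returns a dict of named feature maps, which YOLOPAFPN then indexes with in_features. A minimal sketch, assuming the official YOLOX repository is on the import path (class and argument names as in yolox/models/darknet.py):

import torch
from yolox.models.darknet import CSPDarknet

# yolox-s scale: depth multiplier 0.33, width multiplier 0.50
backbone = CSPDarknet(dep_mul=0.33, wid_mul=0.50)
feats = backbone(torch.randn(1, 3, 640, 640))
for name, f in feats.items():
    print(name, f.shape)  # dark3/dark4/dark5 at strides 8/16/32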

2. Migrating the modules to fit the YOLOv5 framework

Now that the structure of YOLOX's YOLOPAFPN is clear, we can migrate the modules. Since we already have the YOLOPAFPN code, couldn't we just call it directly? Some readers may wonder, and honestly I would like to do it that way too: it would save a lot of work if any newly open-sourced algorithm could simply be dropped in and called as-is.

Given how good the YOLOv5 framework is, and to compare the gains of these so-called advanced algorithms in a controlled way, I decided to migrate the network into YOLOv5. If YOLOv5 natively supported building networks by direct invocation, that would indeed be effortless; in reality, YOLOv5 has its own network-construction mechanism, so we have to port the network according to the YOLOv5 framework's conventions.

I have analyzed YOLOv5's network construction and training process before, so I won't go into detail here. In short, the structural parameters are passed in through the cfg argument (e.g., yolov5s.yaml), and the parse_model() function in models/yolo.py chains the modules together into the network.
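As a quick illustration of what parse_model consumes, each row of backbone/head in the yaml is a [from, number, module, args] list, where from says which earlier layer(s) feed this one (-1 meaning the previous layer). A minimal sketch reading the stock yolov5s.yaml from the repo root:

import yaml

with open('models/yolov5s.yaml', encoding='utf-8') as fp:
    cfg = yaml.safe_load(fp)

# first three backbone rows: [from, number, module, args]
for frm, n, module, args in cfg['backbone'][:3]:
    print(frm, n, module, args)
# -1 1 Conv [64, 6, 2, 2]
# -1 1 Conv [128, 3, 2]
# -1 3 C3 [128]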

**The repeat counts of the first three C3 blocks in the backbone come from the yolov5s.yaml values 3, 6, 9, each divided by 3 to give 1, 2, 3; this is governed by the model depth parameter depth_multiple: 0.33 (model depth multiple).** In the printout below, the columns are: layer index; from (which layer(s) feed this one, -1 meaning the previous layer, ch[-1]); n (repeat count); params (parameter count); module (module class m); arguments (constructor args: input channels, output channels, kernel size, stride).
                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Conv                      [3, 32, 6, 2, 2]              
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]                
  2                -1  1     18816  models.common.C3                        [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]               
  4                -1  2    115712  models.common.C3                        [128, 128, 2]                 
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]              
  6                -1  3    625152  models.common.C3                        [256, 256, 3] 
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]              
  8                -1  1   1182720  models.common.C3                        [512, 512, 1]                 
  9                -1  1    656896  models.common.SPPF                      [512, 512, 5]                 
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]              
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 12           [-1, 6]  1         0  models.common.Concat                    [1]                           
 13                -1  1    361984  models.common.C3                        [512, 256, 1, False]          
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]              
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']          
 16           [-1, 4]  1         0  models.common.Concat                    [1]                           
 17                -1  1     90880  models.common.C3                        [256, 128, 1, False]          
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]              
 19          [-1, 14]  1         0  models.common.Concat                    [1]                           
 20                -1  1    296448  models.common.C3                        [256, 256, 1, False]          
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]              
 22          [-1, 10]  1         0  models.common.Concat                    [1]                           
 23                -1  1   1182720  models.common.C3                        [512, 512, 1, False]          
 24      [17, 20, 23]  1     16182  models.yolo.Detect                      [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]


As the table shows, YOLOv5 builds its network by instantiating basic modules (Conv, C3, and so on) one after another and chaining them together; each module records its layer index so that the output dimensions of one module match the input dimensions of the next. I adapted a version of this for yolov5s; the sketch below reproduces the depth/width scaling rules that parse_model applies.
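The following standalone sketch mirrors parse_model's depth gain and yolov5's make_divisible width rounding (an illustration of the rules, not yolov5 code itself):

import math

gd, gw = 0.33, 0.50  # depth_multiple / width_multiple of the "s" scale

def depth_gain(n):
    # parse_model: scale the repeat count but keep at least one block
    return max(round(n * gd), 1) if n > 1 else n

def width_gain(c, divisor=8):
    # make_divisible: scale the channel count, round up to a multiple of 8
    return math.ceil(c * gw / divisor) * divisor

print([depth_gain(n) for n in (3, 6, 9)])            # [1, 2, 3]
print([width_gain(c) for c in (64, 128, 256, 512)])  # [32, 64, 128, 256]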
Like yolov5s, yolox-s also has parameters that control model size (depth 0.33 and width 0.50 for the "s" scale), and they can be taken into account during network construction. The module information printed when training my reconstructed YOLOPAFPN confirmed the network was built as intended, and training ran successfully.
Below is my yaml configuration file:

# Parameters
nc: 80  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple
anchors:
  - [10,13, 16,30, 33,23]  # P3/8: three (width,height) anchor pairs per scale; divided by the downsampling stride when used
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOX backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, XFocus, [64, 3]],                   # 3->32, layer 0 (comments: in_ch -> out_ch after width scaling, layer index)
   [-1, 1, Dark2, [128, 3, 2]],                # 32->64, 1
   [-1, 1, Dark3, [256, 3, 2]],                # 64->128, 2
   [-1, 1, Dark4, [512, 3, 2]],                # 128->256, 3
   [-1, 1, Dark5, [1024, 3, 2]],               # 256->512, 4
   [-1, 1, LarConv0, [512]],                   # 512->256, 5
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 256->256, 6
   [[-1, 3], 1, Concat, [1]],                  # 256->512, 7 (cat with Dark4 output)
   [-1, 1, C3_p4, [512]],                      # 512->256, 8
   [-1, 1, RedConv1, [256]],                   # 256->128, 9
   [-1, 1, nn.Upsample, [None, 2, 'nearest']], # 128->128, 10
   [[-1, 2], 1, Concat, [1]],                  # 128->256, 11 (cat with Dark3 output)
   [-1, 1, C3_p3, [256]],                      # 256->128, 12
   [-1, 1, ButConv2, [256]],                   # 128->128, 13
   [[-1, 9], 1, Concat, [1]],                  # 128->256, 14 (cat with RedConv1 output)
   [-1, 1, C3_n3, [512]],                      # 256->256, 15
   [-1, 1, ButConv1, [512]],                   # 256->256, 16
   [[-1, 5], 1, Concat, [1]],                  # 256->512, 17 (cat with LarConv0 output)
   [-1, 1, C3_n4, [1024]],                     # 512->512, 18
  ]

head:
  # yolov5 head
  [
   [[12, 15, 18], 1, Detect, [nc, anchors]], 
  ]

The Python code for the modules above (XFocus, Dark2 through Dark5, LarConv0, C3_p4, RedConv1, ButConv1, and so on) follows; I suggest adding it to models/common.py.

######################### YOLOX ###########################
def get_activation(name="silu", inplace=True):
    if name == "silu":
        module = nn.SiLU(inplace=inplace)
    elif name == "relu":
        module = nn.ReLU(inplace=inplace)
    elif name == "lrelu":
        module = nn.LeakyReLU(0.1, inplace=inplace)
    else:
        raise AttributeError("Unsupported act type: {}".format(name))
    return module

class BottleneckX(nn.Module):
    # YOLOX's standard bottleneck, renamed here to avoid clashing with yolov5's own Bottleneck
    def __init__(
        self,
        in_channels,
        out_channels,
        shortcut=True,
        expansion=0.5,
        depthwise=False,
        act="silu",
    ):
        super().__init__()
        hidden_channels = int(out_channels * expansion)
        Conv = DWConv if depthwise else BaseConv
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        self.conv2 = Conv(hidden_channels, out_channels, 3, stride=1, act=act)
        self.use_add = shortcut and in_channels == out_channels

    def forward(self, x):
        y = self.conv2(self.conv1(x))
        if self.use_add:
            y = y + x
        return y


class BaseConv(nn.Module):
    """A Conv2d -> Batchnorm -> silu/leaky relu block"""

    def __init__(
        self, in_channels, out_channels, ksize, stride, groups=1, bias=False, act="silu"
    ):
        super().__init__()
        # same padding
        pad = (ksize - 1) // 2
        self.conv = nn.Conv2d(
            in_channels,
            out_channels,
            kernel_size=ksize,
            stride=stride,
            padding=pad,
            groups=groups,
            bias=bias,
        )
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = get_activation(act, inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

    def fuseforward(self, x):
        return self.act(self.conv(x))
    
class DWConv(nn.Module):
    """Depthwise Conv + Conv"""

    def __init__(self, in_channels, out_channels, ksize, stride=1, act="silu"):
        super().__init__()
        self.dconv = BaseConv(
            in_channels,
            in_channels,
            ksize=ksize,
            stride=stride,
            groups=in_channels,
            act=act,
        )
        self.pconv = BaseConv(
            in_channels, out_channels, ksize=1, stride=1, groups=1, act=act
        )

    def forward(self, x):
        x = self.dconv(x)
        return self.pconv(x)

class CSPLayer(nn.Module):
    """C3 in yolov5, CSP Bottleneck with 3 convolutions"""

    def __init__(
        self,
        in_channels,
        out_channels,
        n=1,
        shortcut=True,
        expansion=0.5,
        depthwise=False,
        act="silu",
    ):
        """
        Args:
            in_channels (int): input channels.
            out_channels (int): output channels.
            n (int): number of Bottlenecks. Default value: 1.
        """
        # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        hidden_channels = int(out_channels * expansion)  # hidden channels
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        self.conv2 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=act)
        self.conv3 = BaseConv(2 * hidden_channels, out_channels, 1, stride=1, act=act)
        module_list = [
            BottleneckX(
                hidden_channels, hidden_channels, shortcut, 1.0, depthwise, act=act
            )
            for _ in range(n)
        ]
        self.m = nn.Sequential(*module_list)

    def forward(self, x):
        x_1 = self.conv1(x)
        x_2 = self.conv2(x)
        x_1 = self.m(x_1)
        x = torch.cat((x_1, x_2), dim=1)
        return self.conv3(x)

class SPPBottleneck(nn.Module):
    """Spatial pyramid pooling layer used in YOLOv3-SPP"""

    def __init__(
        self, in_channels, out_channels, kernel_sizes=(5, 9, 13), activation="silu"
    ):
        super().__init__()
        hidden_channels = in_channels // 2
        self.conv1 = BaseConv(in_channels, hidden_channels, 1, stride=1, act=activation)
        self.m = nn.ModuleList(
            [
                nn.MaxPool2d(kernel_size=ks, stride=1, padding=ks // 2)
                for ks in kernel_sizes
            ]
        )
        conv2_channels = hidden_channels * (len(kernel_sizes) + 1)
        self.conv2 = BaseConv(conv2_channels, out_channels, 1, stride=1, act=activation)

    def forward(self, x):
        x = self.conv1(x)
        x = torch.cat([x] + [m(x) for m in self.m], dim=1)
        x = self.conv2(x)
        return x

class XFocus(nn.Module):
    """Focus width and height information into channel space."""

    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu"):
        super().__init__()
        self.conv = BaseConv(in_channels * 4, out_channels, ksize, stride, act=act)

    def forward(self, x):
        # shape of x (b,c,w,h) -> y(b,4c,w/2,h/2)
        patch_top_left = x[..., ::2, ::2]
        patch_top_right = x[..., ::2, 1::2]
        patch_bot_left = x[..., 1::2, ::2]
        patch_bot_right = x[..., 1::2, 1::2]
        x = torch.cat(
            (
                patch_top_left,
                patch_bot_left,
                patch_top_right,
                patch_bot_right,
            ),
            dim=1,
        )
        return self.conv(x)

class Dark2(nn.Module):
    # NOTE: out_channels/ksize/stride only satisfy parse_model's calling convention;
    # the actual output width is in_channels * 2, and the depth is hard-coded to the
    # "s" scale (dep_mul = 0.33)
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        Conv = DWConv if depthwise else BaseConv
        dep_mul = 0.33
        base_depth = max(round(dep_mul * 3), 1)
        self.dark2 = nn.Sequential(
            Conv(in_channels, in_channels * 2, 3, 2, act=act),
            CSPLayer(
                in_channels * 2,
                in_channels * 2,
                n=base_depth,
                depthwise=depthwise,
                act=act,
            ),
        )

    def forward(self, x):
        return self.dark2(x)

class Dark3(nn.Module):
    # same convention as Dark2; dark3/dark4 use 3x the base depth, matching CSPDarknet
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        Conv = DWConv if depthwise else BaseConv
        dep_mul = 0.33
        base_depth = max(round(dep_mul * 3), 1)
        self.dark3 = nn.Sequential(
            Conv(in_channels, in_channels * 2, 3, 2, act=act),
            CSPLayer(
                in_channels * 2,
                in_channels * 2,
                n=base_depth * 3,
                depthwise=depthwise,
                act=act,
            ),
        )

    def forward(self, x):
        return self.dark3(x)
    
class Dark4(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        Conv = DWConv if depthwise else BaseConv
        dep_mul = 0.33
        base_depth = max(round(dep_mul * 3), 1)
        self.dark4 = nn.Sequential(
            Conv(in_channels, in_channels * 2, 3, 2, act=act),
            CSPLayer(
                in_channels * 2,
                in_channels * 2,
                n=base_depth * 3,
                depthwise=depthwise,
                act=act,
            ),
        )

    def forward(self, x):
        return self.dark4(x)

class Dark5(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        Conv = DWConv if depthwise else BaseConv
        dep_mul = 0.33
        base_depth = max(round(dep_mul * 3), 1)
        self.dark5 = nn.Sequential(
            Conv(in_channels, in_channels * 2, 3, 2, act=act),
            SPPBottleneck(in_channels * 2, in_channels * 2, activation=act),
            CSPLayer(
                in_channels * 2,
                in_channels * 2,
                n=base_depth,
                shortcut=False,
                depthwise=depthwise,
                act=act,
            ),
        )

    def forward(self, x):
        return self.dark5(x)

class LarConv0(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        
        self.lateral_conv0 = BaseConv(
            int(in_channels), int(out_channels), 1, 1, act=act
        )
    
    def forward(self, x):
        return self.lateral_conv0(x)
    
class C3_p4(nn.Module):
    # the CSPLayer wrappers below also hard-code depth to the "s" scale (0.33)
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        depth = 0.33
        self.c3_p4 = CSPLayer(
            int(in_channels),
            int(out_channels),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )  # cat
    
    def forward(self, x):
        return self.c3_p4(x)

class RedConv1(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        self.reduce_conv1 = BaseConv(
            int(in_channels), out_channels, 1, 1, act=act
        )
    
    def forward(self, x):
        return self.reduce_conv1(x)

class C3_p3(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        depth = 0.33
        self.c3_p3 = CSPLayer(
            int(in_channels),
            int(out_channels),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )  
    
    def forward(self, x):
        return self.c3_p3(x)

class ButConv2(nn.Module):
    # bottom-up downsampling conv (bu_conv2 in YOLOX); output width equals input width
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        self.bu_conv2 = BaseConv(
            int(in_channels), int(in_channels), 3, 2, act=act
        )
    def forward(self, x):
        return self.bu_conv2(x)

class C3_n3(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        depth = 0.33
        self.c3_n3 = CSPLayer(
            int(in_channels),
            int(in_channels),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )
    
    def forward(self, x):
        return self.c3_n3(x)

class ButConv1(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        self.bu_conv1 = BaseConv(
            int(in_channels), int(in_channels), 3, 2, act=act
        )
    def forward(self, x):
        return self.bu_conv1(x)

class C3_n4(nn.Module):
    def __init__(self, in_channels, out_channels, ksize=1, stride=1, act="silu", depthwise=False):
        super().__init__()
        depth = 0.33
        self.c3_n4 = CSPLayer(
            int(in_channels),
            int(in_channels),
            round(3 * depth),
            False,
            depthwise=depthwise,
            act=act,
        )
    
    def forward(self, x):
        return self.c3_n4(x)
    

With this, the modules making up YOLOX's feature extractor are implemented for yolov5s in a form that suits YOLOv5's network-construction mechanism. The main thing to get right is that the input/output channels of adjacent modules match.

If you want to verify that the reproduced YOLOPAFPN matches YOLOX's, you can instantiate YOLOPAFPN() on its own and inspect its structure and input/output shapes; I did this and confirmed they are consistent.
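A minimal verification sketch, assuming the official YOLOX repository is importable (YOLOPAFPN is exported from yolox/models; depth/width are the yolox-s factors used throughout this post):

import torch
from yolox.models import YOLOPAFPN

model = YOLOPAFPN(depth=0.33, width=0.50)
outs = model(torch.randn(1, 3, 640, 640))
for o in outs:
    print(o.shape)  # strides 8/16/32 -> (1,128,80,80), (1,256,40,40), (1,512,20,20)
print(model)  # prints the structure shown below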

"""
YOLOPAFPN(
  (backbone): CSPDarknet(
    (stem): Focus(
      (conv): BaseConv(
        (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
    )
    (dark2): Sequential(
      (0): BaseConv(
        (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): CSPLayer(
        (conv1): BaseConv(
          (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv3): BaseConv(
          (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
        )
      )
    )
    (dark3): Sequential(
      (0): BaseConv(
        (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): CSPLayer(
        (conv1): BaseConv(
          (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv3): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
          (1): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
          (2): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
        )
      )
    )
    (dark4): Sequential(
      (0): BaseConv(
        (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): CSPLayer(
        (conv1): BaseConv(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv3): BaseConv(
          (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
          (1): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
          (2): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
        )
      )
    )
    (dark5): Sequential(
      (0): BaseConv(
        (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): SiLU(inplace=True)
      )
      (1): SPPBottleneck(
        (conv1): BaseConv(
          (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (m): ModuleList(
          (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False)
          (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False)
          (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False)
        )
        (conv2): BaseConv(
          (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
      (2): CSPLayer(
        (conv1): BaseConv(
          (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv3): BaseConv(
          (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (m): Sequential(
          (0): Bottleneck(
            (conv1): BaseConv(
              (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
              (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
            (conv2): BaseConv(
              (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
              (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
              (act): SiLU(inplace=True)
            )
          )
        )
      )
    )
  )
  (upsample): Upsample(scale_factor=2.0, mode=nearest)
  (lateral_conv0): BaseConv(
    (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): SiLU(inplace=True)
  )
  (C3_p4): CSPLayer(
    (conv1): BaseConv(
      (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv2): BaseConv(
      (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv3): BaseConv(
      (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (m): Sequential(
      (0): Bottleneck(
        (conv1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
  )
  (reduce_conv1): BaseConv(
    (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): SiLU(inplace=True)
  )
  (C3_p3): CSPLayer(
    (conv1): BaseConv(
      (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv2): BaseConv(
      (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv3): BaseConv(
      (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (m): Sequential(
      (0): Bottleneck(
        (conv1): BaseConv(
          (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
  )
  (bu_conv2): BaseConv(
    (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): SiLU(inplace=True)
  )
  (C3_n3): CSPLayer(
    (conv1): BaseConv(
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv2): BaseConv(
      (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv3): BaseConv(
      (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (m): Sequential(
      (0): Bottleneck(
        (conv1): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
  )
  (bu_conv1): BaseConv(
    (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): SiLU(inplace=True)
  )
  (C3_n4): CSPLayer(
    (conv1): BaseConv(
      (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv2): BaseConv(
      (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (conv3): BaseConv(
      (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (act): SiLU(inplace=True)
    )
    (m): Sequential(
      (0): Bottleneck(
        (conv1): BaseConv(
          (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
        (conv2): BaseConv(
          (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (act): SiLU(inplace=True)
        )
      )
    )
  )
)



"""

3. Successful training: the YOLOPAFPN reproduction is complete

Before training can succeed there is one more step: register the newly added modules in the network-construction function, otherwise they will not be recognized and the network cannot be built or trained. That function is parse_model() in yolo.py; modify it as follows:

def parse_model(d, ch):  # model_dict, input_channels(3)
    # Parse a YOLOv5 model.yaml dictionary
    LOGGER.info(f"\n{'':>3}{'from':>18}{'n':>3}{'params':>10}  {'module':<40}{'arguments':<30}")
    anchors, nc, gd, gw, act = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple'], d.get('activation')
    if act:
        Conv.default_act = eval(act)  # redefine default activation, i.e. Conv.default_act = nn.SiLU()
        LOGGER.info(f"{colorstr('activation:')} {act}")  # print
    na = (len(anchors[0]) // 2) if isinstance(anchors, list) else anchors  # number of anchors
    no = na * (nc + 5)  # number of outputs = anchors * (classes + 5)

    layers, save, c2 = [], [], ch[-1]  # layers, savelist, ch out
    for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']):  # from, number, module, args
        m = eval(m) if isinstance(m, str) else m  # eval strings
        for j, a in enumerate(args):
            with contextlib.suppress(NameError):
                args[j] = eval(a) if isinstance(a, str) else a  # eval strings

        n = n_ = max(round(n * gd), 1) if n > 1 else n  # depth gain
        if m in {
                Conv, GhostConv, Bottleneck, GhostBottleneck, SPP, SPPF, DWConv, MixConv2d, Focus, CrossConv,
                BottleneckCSP, C3, C3TR, C3SPP, C3Ghost, nn.ConvTranspose2d, DWConvTranspose2d, C3x,
                XFocus, Dark2, Dark3, Dark4, Dark5, LarConv0, C3_p4, RedConv1, C3_p3, ButConv2, C3_n3, C3_n4, ButConv1}:
            c1, c2 = ch[f], args[0]
            if c2 != no:  # if not output
                c2 = make_divisible(c2 * gw, 8)

            args = [c1, c2, *args[1:]]
            if m in {BottleneckCSP, C3, C3TR, C3Ghost, C3x}:
                args.insert(2, n)  # number of repeats
                n = 1
        elif m is nn.BatchNorm2d:
            args = [ch[f]]
        elif m is Concat:
            c2 = sum(ch[x] for x in f)
        # TODO: channel, gw, gd
        elif m in {Detect, Segment}:
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
            if m is Segment:
                args[3] = make_divisible(args[3] * gw, 8)
        elif m in {DetectDcoupleHead}:  # decoupled-head variant (not used by the yaml in this post)
            # args holds the parameters after the module name in the yaml row;
            # append puts the input-channel list in the last position
            args.append([ch[x] for x in f])
            if isinstance(args[1], int):  # number of anchors
                args[1] = [list(range(args[1] * 2))] * len(f)
            args[2] = gw
        elif m in {DetectXHead}:  # anchor-free YOLOX head variant (not used by the yaml in this post)
            args.append([ch[x] for x in f])
            args[1] = gw
        elif m is Contract:
            c2 = ch[f] * args[0] ** 2
        elif m is Expand:
            c2 = ch[f] // args[0] ** 2
        else:
            c2 = ch[f]

        m_ = nn.Sequential(*(m(*args) for _ in range(n))) if n > 1 else m(*args)  # module
        t = str(m)[8:-2].replace('__main__.', '')  # module type
        np = sum(x.numel() for x in m_.parameters())  # number params
        m_.i, m_.f, m_.type, m_.np = i, f, t, np  # attach index, 'from' index, type, number params
        LOGGER.info(f'{i:>3}{str(f):>18}{n_:>3}{np:10.0f}  {t:<40}{str(args):<30}')  # print
        save.extend(x % i for x in ([f] if isinstance(f, int) else f) if x != -1)  # append to savelist
        layers.append(m_)
        if i == 0:
            ch = []
        ch.append(c2)
    return nn.Sequential(*layers), sorted(save)

Building on the work above, and after tidying up the imports of the related modules, the reproduction of YOLOX's YOLOPAFPN (backbone plus neck) inside YOLOv5 is complete. A quick smoke test of the whole chain is sketched below.
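A hedged smoke test, run from the yolov5 repo root. models/yolo.py already does `from models.common import *`, so classes added to models/common.py are visible to parse_model's eval(). The yaml filename below is hypothetical; use whatever you named your config:

import torch
from models.yolo import Model  # in YOLOv5 v7.0, Model is an alias of DetectionModel

# Building the model already runs a dummy forward pass internally to compute the
# detection strides, so this alone exercises every module declared in the yaml.
model = Model('models/yolox_pafpn.yaml', ch=3)
pred = model(torch.zeros(1, 3, 640, 640))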
