Printing the network structure in mmdetection


From the console, run:

python tools/train.py /home/yuan3080/桌面/detection_paper_6/mmdetection-master1/mmdetection-master_yanhuo/work_dirs/lad_r50_paa_r101_fpn_coco_1x/lad_r50_a_r101_fpn_coco_1x.py

Note that the structure shown below is that of the modified model used here, not the original method.

All you need to do is add a print(model) statement in tools/train.py, as sketched below, then run the command above from the console; the full network structure is printed to the console.
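For reference, this is roughly where the statement goes in tools/train.py for mmdetection 2.x; the exact lines differ between versions, so treat this as a sketch rather than the verbatim file. The added line is marked:

    # in tools/train.py (mmdetection 2.x), after the detector is built:
    model = build_detector(
        cfg.model,
        train_cfg=cfg.get('train_cfg'),
        test_cfg=cfg.get('test_cfg'))
    model.init_weights()
    print(model)  # added line: dumps the full module tree to the console

The printed structure then looks like this: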


LAD(
  (backbone): Res2Net(
    (stem): Sequential(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (2): ReLU(inplace=True)
      (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (4): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (5): ReLU(inplace=True)
      (6): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (8): ReLU(inplace=True)
    )
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Res2Layer(
      (0): Bottle2neck(
        (conv1): Conv2d(64, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=1, stride=1, padding=0)
          (1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (convs): ModuleList(
          (0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottle2neck(
        (conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): Bottle2neck(
        (conv1): Conv2d(256, 104, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(104, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(26, 26, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(26, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (layer2): Res2Layer(
      (0): Bottle2neck(
        (conv1): Conv2d(256, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (pool): AvgPool2d(kernel_size=3, stride=2, padding=1)
        (convs): ModuleList(
          (0): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (2): Conv2d(52, 52, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottle2neck(
        (conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): Bottle2neck(
        (conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (3): Bottle2neck(
        (conv1): Conv2d(512, 208, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(208, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(52, 52, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(52, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (layer3): Res2Layer(
      (0): Bottle2neck(
        (conv1): Conv2d(512, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (pool): AvgPool2d(kernel_size=3, stride=2, padding=1)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottle2neck(
        (conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): Bottle2neck(
        (conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (3): Bottle2neck(
        (conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (4): Bottle2neck(
        (conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (5): Bottle2neck(
        (conv1): Conv2d(1024, 416, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(416, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(416, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(104, 104, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(104, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
    (layer4): Res2Layer(
      (0): Bottle2neck(
        (conv1): Conv2d(1024, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
        (pool): AvgPool2d(kernel_size=3, stride=2, padding=1)
        (convs): ModuleList(
          (0): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (1): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
          (2): Conv2d(208, 208, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): Bottle2neck(
        (conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (2): Bottle2neck(
        (conv1): Conv2d(2048, 832, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): BatchNorm2d(832, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (conv3): Conv2d(832, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace=True)
        (convs): ModuleList(
          (0): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (1): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (2): Conv2d(208, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        )
        (bns): ModuleList(
          (0): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (1): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
          (2): BatchNorm2d(208, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
  )
  init_cfg={'type': 'Pretrained', 'checkpoint': 'torchvision://resnet50'}
  (neck): FPN(
    (lateral_convs): ModuleList(
      (0): ConvModule(
        (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (1): ConvModule(
        (conv): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
      )
      (2): ConvModule(
        (conv): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
      )
    )
    (fpn_convs): ModuleList(
      (0): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (1): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (2): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      )
      (3): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      )
      (4): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
      )
    )
  )
  init_cfg={'type': 'Xavier', 'layer': 'Conv2d', 'distribution': 'uniform'}
  (bbox_head): LADHead(
    (loss_cls): FocalLoss()
    (loss_bbox): GIoULoss()
    (relu): ReLU(inplace=True)
    (cls_convs): ModuleList(
      (0): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (1): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (2): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (3): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
    )
    (reg_convs): ModuleList(
      (0): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (1): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (2): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
      (3): ConvModule(
        (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (gn): GroupNorm(32, 256, eps=1e-05, affine=True)
        (activate): ReLU(inplace=True)
      )
    )
    (atss_cls): Conv2d(256, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (atss_reg): Conv2d(256, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (atss_centerness): Conv2d(256, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (scales): ModuleList(
      (0): Scale()
      (1): Scale()
      (2): Scale()
      (3): Scale()
      (4): Scale()
    )
    (loss_centerness): CrossEntropyLoss(avg_non_ignore=False)
  )
)
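If you only want to inspect the structure without actually starting a training run, you can also build the detector directly from the config and print it. This is a minimal sketch assuming mmdetection 2.x with mmcv's Config API; adjust the config path to your own file:

    # sketch: build the detector from a config and print its structure (mmdetection 2.x assumed)
    from mmcv import Config
    from mmdet.models import build_detector

    cfg = Config.fromfile(
        'work_dirs/lad_r50_paa_r101_fpn_coco_1x/lad_r50_a_r101_fpn_coco_1x.py')  # your config path
    model = build_detector(
        cfg.model,
        train_cfg=cfg.get('train_cfg'),
        test_cfg=cfg.get('test_cfg'))
    print(model)  # prints the same module tree as shown above

This avoids loading the dataset and starting the training loop, which is convenient when you only need to check the architecture.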
