An Analysis of the MobileNet v1 Neural Network


This article references:

MobileNet网络_-断言-的博客-CSDN博客_mobile-ne

Conv2d中的groups参数(分组卷积)怎么理解? 【分组卷积可以减少参数量、且不容易过拟合(类似正则化)】_马鹏森的博客-CSDN博客_conv groups

Pytorch MobileNetV1 学习_DevinDong123的博客-CSDN博客

1 Why use the MobileNet network

Traditional convolutional neural networks have large memory footprints and heavy computational costs, which makes them impractical to run on mobile and embedded devices.

MobileNet is a lightweight CNN. Compared with traditional convolutional networks, it greatly reduces the number of parameters and the amount of computation at the cost of only a small drop in accuracy.

2 Standard convolution vs. DW + PW convolution

(1) Standard convolution

Standard convolution performs feature extraction and feature combination in a single step. Its characteristics:

  • Kernel channels = input feature map channels
  • Output feature map channels = number of kernels

(2) DW + PW convolution

DW (depthwise) convolution has the following characteristics:

  • Kernel channels = 1
  • Input feature map channels = number of kernels = output feature map channels

PW (pointwise) convolution has the following characteristics (see the code sketch after this list):

  • Kernel channels = input feature map channels
  • Kernel size = 1
  • Number of kernels = output feature map channels
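
As a quick illustration of these properties, here is a minimal PyTorch sketch (the channel counts and tensor size are made up for this example) that prints the weight and output shapes of a DW and a PW convolution:

import torch
import torch.nn as nn

x = torch.rand(1, 16, 32, 32)  # batch=1, 16 channels, 32x32 feature map

# Depthwise (DW): groups equals the input channels, so each kernel has 1 channel
dw = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16, bias=False)
print(dw.weight.shape)   # torch.Size([16, 1, 3, 3])  -> kernel channels = 1
print(dw(x).shape)       # torch.Size([1, 16, 32, 32]) -> channel count unchanged

# Pointwise (PW): 1x1 kernels whose channel count equals the input channels
pw = nn.Conv2d(16, 32, kernel_size=1, bias=False)
print(pw.weight.shape)   # torch.Size([32, 16, 1, 1]) -> kernel channels = input channels
print(pw(x).shape)       # torch.Size([1, 32, 32, 32]) -> output channels = number of kernels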

(3) Parameter count comparison

Suppose a convolutional layer has 3*3 kernels, 16 input channels, and 32 output channels.

A standard convolution needs (3*3*16+1)*32 = 4640 parameters (biases included).

Parameter count for the depthwise separable version:

First, 16 kernels of size 3*3 (each of shape 3*3*1) are applied to the 16-channel input, producing 16 feature maps. Then, before any channel fusion, 32 kernels of size 1*1 (each of shape 1*1*16) sweep over those 16 feature maps. The total is (3*3*1+1)*16 + (1*1*16+1)*32 = 704 parameters.

The depthwise convolution uses only one set of in_channels single-channel kernels for pure feature extraction (no feature combination); the pointwise convolution then uses out_channels kernels of shape in_channels*1*1 purely for feature combination.
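
These two counts can be verified directly in PyTorch. The snippet below builds the three layers as standalone modules (an assumption for illustration only, with biases enabled to match the formulas above) and counts their parameters:

import torch.nn as nn

std = nn.Conv2d(16, 32, kernel_size=3, padding=1)             # standard 3x3 conv
dw  = nn.Conv2d(16, 16, kernel_size=3, padding=1, groups=16)  # depthwise 3x3 conv
pw  = nn.Conv2d(16, 32, kernel_size=1)                        # pointwise 1x1 conv

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std))              # 4640
print(count(dw), count(pw))    # 160 544
print(count(dw) + count(pw))   # 704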

3 MobileNet model architecture

MobileNet v1 stacks one standard 3*3 convolution followed by 13 depthwise separable convolution blocks, and ends with global average pooling and a fully connected classifier; this is the structure implemented in the code in Section 5.

4 Constructing the DW convolution

DW convolution changes the number of input channels seen by each kernel: in a standard convolution, a kernel's channel count equals the number of channels of the input feature map, whereas in DW convolution each kernel has only 1 channel.

This is achieved with the groups parameter of nn.Conv2d, which splits the input feature map into groups and convolves each group separately; the number of input channels per kernel = input feature map channels / groups.

If groups equals the number of input feature map channels, each kernel convolves exactly one channel, which is precisely the depthwise convolution described above:
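
A small sketch (with made-up channel counts) showing how groups changes the per-kernel input depth; the weight shape of nn.Conv2d is (out_channels, in_channels / groups, kH, kW):

import torch.nn as nn

# groups=1 (default): each kernel sees all 8 input channels
print(nn.Conv2d(8, 8, 3, groups=1, bias=False).weight.shape)  # torch.Size([8, 8, 3, 3])

# groups=4: channels split into 4 groups of 2, each kernel sees 2 channels
print(nn.Conv2d(8, 8, 3, groups=4, bias=False).weight.shape)  # torch.Size([8, 2, 3, 3])

# groups=8 (= in_channels): each kernel sees a single channel -- depthwise convolution
print(nn.Conv2d(8, 8, 3, groups=8, bias=False).weight.shape)  # torch.Size([8, 1, 3, 3])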

5 MobileNet PyTorch implementation and pruning

import torch
import torch.nn as nn
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup


class BasicConv2dBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, downsample=True, **kwargs):
        super(BasicConv2dBlock, self).__init__()
        stride = 2 if downsample else 1
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride=stride, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

class DepthSeperabelConv2dBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size, **kwargs):
        super(DepthSeperabelConv2dBlock, self).__init__()

        # Depthwise convolution: one 3x3 kernel per input channel (groups=in_channels)
        self.depth_wise = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size, groups=in_channels, **kwargs),
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True)
        )

        # Pointwise convolution: 1x1 kernels that recombine channels
        self.point_wise = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.depth_wise(x)
        x = self.point_wise(x)
        return x

class MobileNet(nn.Module):
    def __init__(self, class_num=100):
        super(MobileNet, self).__init__()

        self.stem = nn.Sequential(
            BasicConv2dBlock(3, 32, kernel_size=3, padding=1, bias=False),
            DepthSeperabelConv2dBlock(32, 64, kernel_size=3, padding=1, bias=False)
        )

        self.conv1 = nn.Sequential(
            DepthSeperabelConv2dBlock(64, 128, kernel_size=3, stride=2, padding=1, bias=False),
            DepthSeperabelConv2dBlock(128, 128, kernel_size=3, padding=1, bias=False)
        )

        self.conv2 = nn.Sequential(
            DepthSeperabelConv2dBlock(128, 256, kernel_size=3, stride=2, padding=1, bias=False),
            DepthSeperabelConv2dBlock(256, 256, kernel_size=3, padding=1, bias=False)
        )

        self.conv3 = nn.Sequential(
            DepthSeperabelConv2dBlock(256, 512, kernel_size=3, stride=2, padding=1, bias=False),
            DepthSeperabelConv2dBlock(512, 512, kernel_size=3, padding=1, bias=False),
            DepthSeperabelConv2dBlock(512, 512, kernel_size=3, padding=1, bias=False),
            DepthSeperabelConv2dBlock(512, 512, kernel_size=3, padding=1, bias=False),
            DepthSeperabelConv2dBlock(512, 512, kernel_size=3, padding=1, bias=False),
            DepthSeperabelConv2dBlock(512, 512, kernel_size=3, padding=1, bias=False)
        )

        self.conv4 = nn.Sequential(
            DepthSeperabelConv2dBlock(512, 1024, kernel_size=3, stride=2, padding=1, bias=False),
            DepthSeperabelConv2dBlock(1024, 1024, kernel_size=3, padding=1, bias=False)
        )

        self.fc = nn.Linear(1024, class_num)
        self.avg = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        x = self.stem(x)

        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)

        x = self.avg(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)
        return x

def main():
    config_list = [{
        'sparsity_per_layer': 0.5,
        'op_types': ['Conv2d']
    }]


    model = MobileNet(10)
    print('-----------raw model------------')
    print(model)

    pruner = L1NormPruner(model, config_list)
    _, masks = pruner.compress()
    for name, mask in masks.items():
        print(name, ' sparsity: ', '{:.2f}'.format(mask['weight'].sum() / mask['weight'].numel()))
    pruner._unwrap_model()
    ModelSpeedup(model, torch.rand(1, 3, 512, 512), masks).speedup_model()

    print('------------after speedup------------')
    print(model)


if __name__ == '__main__':
    main()

The code above not only implements MobileNet but also prunes the model with NNI.

Note that the NNI pruning pass needs more than 15 GB of RAM, so it will not run on a machine with less than roughly 20 GB of memory.
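
As an optional sanity check (not part of the original script), a dummy forward pass could be appended at the end of main(), after speedup_model(), to confirm the pruned network still produces the expected output shape:

    # Hypothetical addition after ModelSpeedup(...).speedup_model():
    dummy = torch.rand(1, 3, 512, 512)
    with torch.no_grad():
        out = model(dummy)
    print('output shape after pruning:', out.shape)  # expected: torch.Size([1, 10])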

The full log is as follows:

-----------raw model------------

MobileNet(

  (stem): Sequential(

    (0): BasicConv2dBlock(

      (conv): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)

      (bn): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

      (relu): ReLU(inplace=True)

    )

    (1): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)

        (1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(32, 64, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

  )

  (conv1): Sequential(

    (0): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(64, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=64, bias=False)

        (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (1): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)

        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

  )

  (conv2): Sequential(

    (0): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=128, bias=False)

        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (1): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256, bias=False)

        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

  )

  (conv3): Sequential(

    (0): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256, bias=False)

        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (1): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (2): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (3): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (4): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (5): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

  )

  (conv4): Sequential(

    (0): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=512, bias=False)

        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

    (1): DepthSeperabelConv2dBlock(

      (depth_wise): Sequential(

        (0): Conv2d(1024, 1024, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=1024, bias=False)

        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

      (point_wise): Sequential(

        (0): Conv2d(1024, 1024, kernel_size=(1, 1), stride=(1, 1))

        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)

        (2): ReLU(inplace=True)

      )

    )

  )

  (fc): Linear(in_features=1024, out_features=10, bias=True)

  (avg): AdaptiveAvgPool2d(output_size=1)

)

stem.0.conv  sparsity:  0.50

stem.1.depth_wise.0  sparsity:  0.50

stem.1.point_wise.0  sparsity:  0.50

conv1.0.depth_wise.0  sparsity:  0.50

conv1.0.point_wise.0  sparsity:  0.50

conv1.1.depth_wise.0  sparsity:  0.50

conv1.1.point_wise.0  sparsity:  0.50

conv2.0.depth_wise.0  sparsity:  0.50

conv2.0.point_wise.0  sparsity:  0.50

conv2.1.depth_wise.0  sparsity:  0.50

conv2.1.point_wise.0  sparsity:  0.50

conv3.0.depth_wise.0  sparsity:  0.50

conv3.0.point_wise.0  sparsity:  0.50

conv3.1.depth_wise.0  sparsity:  0.50

conv3.1.point_wise.0  sparsity:  0.50

conv3.2.depth_wise.0  sparsity:  0.50

conv3.2.point_wise.0  sparsity:  0.50

conv3.3.depth_wise.0  sparsity:  0.50

conv3.3.point_wise.0  sparsity:  0.50

conv3.4.depth_wise.0  sparsity:  0.50

conv3.4.point_wise.0  sparsity:  0.50

conv3.5.depth_wise.0  sparsity:  0.50

conv3.5.point_wise.0  sparsity:  0.50

conv4.0.depth_wise.0  sparsity:  0.50

conv4.0.point_wise.0  sparsity:  0.50

conv4.1.depth_wise.0  sparsity:  0.50

conv4.1.point_wise.0  sparsity:  0.50

[2022-12-06 18:37:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) start to speed up the model

[2022-12-06 18:37:53] INFO (FixMaskConflict/MainThread) {'stem.0.conv': 1, 'stem.1.depth_wise.0': 1, 'stem.1.point_wise.0': 1, 'conv1.0.depth_wise.0': 1, 'conv1.0.point_wise.0': 1, 'conv1.1.depth_wise.0': 1, 'conv1.1.point_wise.0': 1, 'conv2.0.depth_wise.0': 1, 'conv2.0.point_wise.0': 1, 'conv2.1.depth_wise.0': 1, 'conv2.1.point_wise.0': 1, 'conv3.0.depth_wise.0': 1, 'conv3.0.point_wise.0': 1, 'conv3.1.depth_wise.0': 1, 'conv3.1.point_wise.0': 1, 'conv3.2.depth_wise.0': 1, 'conv3.2.point_wise.0': 1, 'conv3.3.depth_wise.0': 1, 'conv3.3.point_wise.0': 1, 'conv3.4.depth_wise.0': 1, 'conv3.4.point_wise.0': 1, 'conv3.5.depth_wise.0': 1, 'conv3.5.point_wise.0': 1, 'conv4.0.depth_wise.0': 1, 'conv4.0.point_wise.0': 1, 'conv4.1.depth_wise.0': 1, 'conv4.1.point_wise.0': 1}

[2022-12-06 18:37:53] INFO (FixMaskConflict/MainThread) dim0 sparsity: 0.500000

[2022-12-06 18:37:53] INFO (FixMaskConflict/MainThread) dim1 sparsity: 0.000000

[2022-12-06 18:37:53] INFO (FixMaskConflict/MainThread) Dectected conv prune dim" 0

[2022-12-06 18:37:53] INFO (nni.compression.pytorch.speedup.compressor/MainThread) infer module masks...

[2022-12-06 18:37:53] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.0.conv

[2022-12-06 18:37:54] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.0.bn

[2022-12-06 18:37:55] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.0.relu

[2022-12-06 18:37:55] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.depth_wise.0

[2022-12-06 18:37:57] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.depth_wise.1

[2022-12-06 18:37:59] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.depth_wise.2

[2022-12-06 18:38:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.point_wise.0

[2022-12-06 18:38:02] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.point_wise.1

[2022-12-06 18:38:05] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for stem.1.point_wise.2

[2022-12-06 18:38:07] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.depth_wise.0

[2022-12-06 18:38:08] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.depth_wise.1

[2022-12-06 18:38:09] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.depth_wise.2

[2022-12-06 18:38:09] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.point_wise.0

[2022-12-06 18:38:11] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.point_wise.1

[2022-12-06 18:38:12] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.0.point_wise.2

[2022-12-06 18:38:13] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.depth_wise.0

[2022-12-06 18:38:14] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.depth_wise.1

[2022-12-06 18:38:16] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.depth_wise.2

[2022-12-06 18:38:17] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.point_wise.0

[2022-12-06 18:38:18] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.point_wise.1

[2022-12-06 18:38:20] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv1.1.point_wise.2

[2022-12-06 18:38:20] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.depth_wise.0

[2022-12-06 18:38:21] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.depth_wise.1

[2022-12-06 18:38:21] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.depth_wise.2

[2022-12-06 18:38:21] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.point_wise.0

[2022-12-06 18:38:22] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.point_wise.1

[2022-12-06 18:38:23] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.0.point_wise.2

[2022-12-06 18:38:23] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.depth_wise.0

[2022-12-06 18:38:24] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.depth_wise.1

[2022-12-06 18:38:25] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.depth_wise.2

[2022-12-06 18:38:25] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.point_wise.0

[2022-12-06 18:38:26] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.point_wise.1

[2022-12-06 18:38:26] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv2.1.point_wise.2

[2022-12-06 18:38:27] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.depth_wise.0

[2022-12-06 18:38:27] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.depth_wise.1

[2022-12-06 18:38:27] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.depth_wise.2

[2022-12-06 18:38:27] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.point_wise.0

[2022-12-06 18:38:27] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.point_wise.1

[2022-12-06 18:38:28] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.0.point_wise.2

[2022-12-06 18:38:28] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.depth_wise.0

[2022-12-06 18:38:28] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.depth_wise.1

[2022-12-06 18:38:29] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.depth_wise.2

[2022-12-06 18:38:29] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.point_wise.0

[2022-12-06 18:38:29] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.point_wise.1

[2022-12-06 18:38:30] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.1.point_wise.2

[2022-12-06 18:38:30] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.depth_wise.0

[2022-12-06 18:38:30] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.depth_wise.1

[2022-12-06 18:38:30] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.depth_wise.2

[2022-12-06 18:38:31] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.point_wise.0

[2022-12-06 18:38:31] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.point_wise.1

[2022-12-06 18:38:31] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.2.point_wise.2

[2022-12-06 18:38:31] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.depth_wise.0

[2022-12-06 18:38:32] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.depth_wise.1

[2022-12-06 18:38:32] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.depth_wise.2

[2022-12-06 18:38:32] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.point_wise.0

[2022-12-06 18:38:33] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.point_wise.1

[2022-12-06 18:38:33] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.3.point_wise.2

[2022-12-06 18:38:33] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.depth_wise.0

[2022-12-06 18:38:34] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.depth_wise.1

[2022-12-06 18:38:34] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.depth_wise.2

[2022-12-06 18:38:34] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.point_wise.0

[2022-12-06 18:38:34] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.point_wise.1

[2022-12-06 18:38:35] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.4.point_wise.2

[2022-12-06 18:38:35] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.depth_wise.0

[2022-12-06 18:38:35] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.depth_wise.1

[2022-12-06 18:38:36] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.depth_wise.2

[2022-12-06 18:38:36] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.point_wise.0

[2022-12-06 18:38:36] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.point_wise.1

[2022-12-06 18:38:36] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv3.5.point_wise.2

[2022-12-06 18:38:36] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.depth_wise.0

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.depth_wise.1

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.depth_wise.2

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.point_wise.0

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.point_wise.1

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.0.point_wise.2

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.depth_wise.0

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.depth_wise.1

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.depth_wise.2

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.point_wise.0

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.point_wise.1

[2022-12-06 18:38:37] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for conv4.1.point_wise.2

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for avg

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for .aten::size.83

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for .aten::Int.84

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for .aten::view.85

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.jit_translate/MainThread) View Module output size: [8, -1]

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update mask for fc

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the fc

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the .aten::view.85

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the .aten::Int.84

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the .aten::size.83

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the avg

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.point_wise.2

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.point_wise.1

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.point_wise.0

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.depth_wise.2

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.depth_wise.1

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.1.depth_wise.0

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.point_wise.2

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.point_wise.1

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.point_wise.0

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.depth_wise.2

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.depth_wise.1

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv4.0.depth_wise.0

[2022-12-06 18:38:38] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.point_wise.2

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.point_wise.1

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.point_wise.0

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.depth_wise.2

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.depth_wise.1

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.5.depth_wise.0

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.point_wise.2

[2022-12-06 18:38:39] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.point_wise.1

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.point_wise.0

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.depth_wise.2

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.depth_wise.1

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.4.depth_wise.0

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.point_wise.2

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.point_wise.1

[2022-12-06 18:38:40] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.point_wise.0

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.depth_wise.2

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.depth_wise.1

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.3.depth_wise.0

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.point_wise.2

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.point_wise.1

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.point_wise.0

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.depth_wise.2

[2022-12-06 18:38:41] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.depth_wise.1

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.2.depth_wise.0

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.point_wise.2

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.point_wise.1

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.point_wise.0

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.depth_wise.2

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.depth_wise.1

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.1.depth_wise.0

[2022-12-06 18:38:42] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.point_wise.2

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.point_wise.1

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.point_wise.0

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.depth_wise.2

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.depth_wise.1

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv3.0.depth_wise.0

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.point_wise.2

[2022-12-06 18:38:43] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.point_wise.1

[2022-12-06 18:38:44] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.point_wise.0

[2022-12-06 18:38:44] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.depth_wise.2

[2022-12-06 18:38:44] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.depth_wise.1

[2022-12-06 18:38:45] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.1.depth_wise.0

[2022-12-06 18:38:45] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.point_wise.2

[2022-12-06 18:38:45] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.point_wise.1

[2022-12-06 18:38:46] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.point_wise.0

[2022-12-06 18:38:46] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.depth_wise.2

[2022-12-06 18:38:46] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.depth_wise.1

[2022-12-06 18:38:46] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv2.0.depth_wise.0

[2022-12-06 18:38:47] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.point_wise.2

[2022-12-06 18:38:47] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.point_wise.1

[2022-12-06 18:38:48] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.point_wise.0

[2022-12-06 18:38:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.depth_wise.2

[2022-12-06 18:38:49] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.depth_wise.1

[2022-12-06 18:38:50] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.1.depth_wise.0

[2022-12-06 18:38:50] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.point_wise.2

[2022-12-06 18:38:51] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.point_wise.1

[2022-12-06 18:38:52] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.point_wise.0

[2022-12-06 18:38:52] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.depth_wise.2

[2022-12-06 18:38:53] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.depth_wise.1

[2022-12-06 18:38:53] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the conv1.0.depth_wise.0

[2022-12-06 18:38:54] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.point_wise.2

[2022-12-06 18:38:55] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.point_wise.1

[2022-12-06 18:38:56] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.point_wise.0

[2022-12-06 18:38:57] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.depth_wise.2

[2022-12-06 18:38:58] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.depth_wise.1

[2022-12-06 18:38:58] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.1.depth_wise.0

[2022-12-06 18:38:59] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.0.relu

[2022-12-06 18:38:59] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.0.bn

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Update the indirect sparsity for the stem.0.conv

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) resolve the mask conflict

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace compressed modules...

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.0.conv, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.0.bn, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 6

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.0.relu, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 6

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 14

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: stem.1.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 14

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 36

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.0.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 36

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 30

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv1.1.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 30

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 62

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.0.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 62

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 63

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv2.1.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 63

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 121

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.0.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 121

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 128

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.1.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:00] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 128

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 125

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.2.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 125

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 135

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.3.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 135

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 117

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.4.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 117

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 125

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv3.5.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 125

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 265

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.0.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.depth_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.depth_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 265

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.depth_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.point_wise.0, op_type: Conv2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.point_wise.1, op_type: BatchNorm2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace batchnorm2d with num_features: 512

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: conv4.1.point_wise.2, op_type: ReLU)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: avg, op_type: AdaptiveAvgPool2d)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Warning: cannot replace (name: .aten::size.83, op_type: aten::size) which is func type

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Warning: cannot replace (name: .aten::Int.84, op_type: aten::Int) which is func type

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) Warning: cannot replace (name: .aten::view.85, op_type: aten::view) which is func type

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) replace module (name: fc, op_type: Linear)

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compress_modules/MainThread) replace linear with new in_features: 512, out_features: 10

[2022-12-06 18:39:01] INFO (nni.compression.pytorch.speedup.compressor/MainThread) speedup done

------------after speedup------------
MobileNet(
  (stem): Sequential(
    (0): BasicConv2dBlock(
      (conv): Conv2d(3, 6, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (1): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(6, 6, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=6, bias=False)
        (1): BatchNorm2d(6, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(6, 14, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(14, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
  )
  (conv1): Sequential(
    (0): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(14, 14, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=14, bias=False)
        (1): BatchNorm2d(14, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(14, 36, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (1): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(36, 36, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=36, bias=False)
        (1): BatchNorm2d(36, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(36, 30, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
  )
  (conv2): Sequential(
    (0): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(30, 30, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=30, bias=False)
        (1): BatchNorm2d(30, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(30, 62, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(62, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (1): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(62, 62, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=62, bias=False)
        (1): BatchNorm2d(62, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(62, 63, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(63, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
  )
  (conv3): Sequential(
    (0): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(63, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=63, bias=False)
        (1): BatchNorm2d(63, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(63, 121, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(121, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (1): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(121, 121, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=121, bias=False)
        (1): BatchNorm2d(121, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(121, 128, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (2): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128, bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(128, 125, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (3): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(125, 125, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=125, bias=False)
        (1): BatchNorm2d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(125, 135, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(135, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (4): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(135, 135, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=135, bias=False)
        (1): BatchNorm2d(135, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(135, 117, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(117, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (5): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(117, 117, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=117, bias=False)
        (1): BatchNorm2d(117, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(117, 125, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
  )
  (conv4): Sequential(
    (0): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(125, 125, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=125, bias=False)
        (1): BatchNorm2d(125, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(125, 265, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(265, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
    (1): DepthSeperabelConv2dBlock(
      (depth_wise): Sequential(
        (0): Conv2d(265, 265, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=265, bias=False)
        (1): BatchNorm2d(265, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
      (point_wise): Sequential(
        (0): Conv2d(265, 512, kernel_size=(1, 1), stride=(1, 1))
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (2): ReLU(inplace=True)
      )
    )
  )
  (fc): Linear(in_features=512, out_features=10, bias=True)
  (avg): AdaptiveAvgPool2d(output_size=1)
)
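
After speedup the pruned channels are physically removed, so every Conv2d and BatchNorm2d above now carries the reduced channel counts, and the rebuilt Linear layer takes 512 input features. A quick way to confirm that the compressed model is smaller and still runs end to end is to compare parameter counts and do a dummy forward pass. The snippet below is a minimal sketch and not part of the original post: the class_num=10 value matches out_features=10 in the printout, while the 32x32 input resolution is an assumption (CIFAR-style data) and should be changed to whatever resolution the model is actually trained on.

```python
import torch
import torch.nn as nn


def count_parameters(m: nn.Module) -> int:
    """Total number of learnable parameters in a module."""
    return sum(p.numel() for p in m.parameters())


def sanity_check(model: nn.Module, input_size=(1, 3, 32, 32)) -> None:
    """Run a dummy forward pass and report the parameter count.

    The 32x32 input size is an assumption for illustration only.
    """
    model.eval()
    dummy_input = torch.rand(*input_size)
    with torch.no_grad():
        out = model(dummy_input)
    print(f"params: {count_parameters(model):,}, output shape: {tuple(out.shape)}")


# Usage sketch: call it once on the unpruned network and once on the model
# after L1NormPruner + ModelSpeedup have been applied (the `model` printed above):
#   sanity_check(MobileNet(class_num=10))   # original network
#   sanity_check(model)                     # pruned + sped-up network
```

In practice the compressed model is then fine-tuned for a few epochs to recover the accuracy lost to pruning.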
