STViT-R Code Reading Notes


Contents

I. SwinTransformer

1. Principle

2. Code

II. STViT-R

1. Central idea

2. Code and the paper


This note does not involve any actual training; it is only a code walkthrough, so it is enough to build the network and run a single forward pass.

I. SwinTransformer

1. Principle

Main idea: the tokens are partitioned into windows by region, and self-attention is computed only among the tokens inside each window.

However, different windows do not interact with each other. To solve this problem, shifted windows (SW-MSA) are introduced, so that tokens in neighboring windows can exchange information in alternating blocks.

2. Code

1) Uniformly partition the tokens into windows

x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C; window_size=7; partition into windows -> (64,7,7,96)
x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C  (64,49,96)
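
For reference, the standard Swin window_partition helper (and its inverse window_reverse) is roughly the following; this is a sketch based on the public Swin Transformer implementation, not copied from this repository:

import torch

def window_partition(x, window_size):
    """Split (B, H, W, C) feature maps into non-overlapping windows of shape
    (num_windows*B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)

def window_reverse(windows, window_size, H, W):
    """Inverse of window_partition: merge windows back into a (B, H, W, C) map."""
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    return x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)

With the shapes in the comment above (a 56×56 map, window_size = 7, C = 96, batch size 1), this yields 8×8 = 64 windows, i.e. the (64, 7, 7, 96) tensor.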

II. STViT-R

1. Central idea

The shallow transformer layers are kept unchanged to extract low-level features, so that the image tokens retain rich spatial information. In the deeper layers, the paper proposes the STGM (Semantic Token Generation Module) to generate semantic tokens: through clustering, the whole image is represented by a small number of tokens carrying high-level semantic information. In the first STGM, the semantic tokens are initialized by intra- and inter-window spatial pooling. Thanks to this spatial initialization, the semantic tokens mainly contain local semantic information and are distributed discretely and uniformly over space. In the following attention layers, besides further clustering, the semantic tokens are also equipped with global cluster centers, and the network can adaptively select some of the semantic tokens to focus on global semantic information.

2. Code and the paper

The pooling-based initialization of the semantic tokens corresponds to the following code:

xx = x.reshape(B, H // self.window_size, self.window_size, W // self.window_size, self.window_size, C)  # (1,2,7,2,7,384)
windows = xx.permute(0, 1, 3, 2, 4, 5).contiguous().reshape(-1, self.window_size, self.window_size, C).permute(0, 3, 1, 2)  # (4,384,7,7)
shortcut = self.multi_scale(windows)  # B*nW, W*W, C  multi_scale.py --13  (4,9,384)
if self.use_conv_pos:  # False
    shortcut = self.conv_pos(shortcut)
pool_x = self.norm1(shortcut.reshape(B, -1, C)).reshape(-1, self.multi_scale.num_samples, C)  # (4,9,384)

# multi_scale.py
class multi_scale_semantic_token1(nn.Module):
    def __init__(self, sample_window_size):
        super().__init__()
        self.sample_window_size = sample_window_size  # 3
        self.num_samples = sample_window_size * sample_window_size  # 3*3 = 9 semantic tokens per window

    def forward(self, x):  # x: (B*nW, C, window_size, window_size) = (4, 384, 7, 7)
        B, C, _, _ = x.size()
        # adaptive max pooling inside each window: 7x7 -> 3x3, then flatten the 9 positions into tokens
        pool_x = F.adaptive_max_pool2d(x, (self.sample_window_size, self.sample_window_size)).view(B, C, self.num_samples).transpose(2, 1)  # (4, 9, 384)
        return pool_x

Note that the pooling is done per window. In this code the window size is 7 and the feature map is split into 4 windows, so x before pooling is (4, 384, 7, 7); each window is pooled down to 3×3, so the output is (4, 9, 384). A quick shape check is given below.
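
A standalone shape check of this window-wise pooling (random data, mirroring the shapes above):

import torch
import torch.nn.functional as F

# 4 windows from a 14x14 map, window_size=7, C=384 -> (B*nW, C, 7, 7)
x = torch.randn(4, 384, 7, 7)

# pool each 7x7 window down to a 3x3 grid and flatten it into 9 tokens
pooled = F.adaptive_max_pool2d(x, (3, 3))           # (4, 384, 3, 3)
tokens = pooled.view(4, 384, 9).transpose(2, 1)     # (4, 9, 384)
print(tokens.shape)                                 # torch.Size([4, 9, 384])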

As for the parameter settings: since local (window-based) attention is used, each semantic token should also see some context beyond its own 7×7 window, as discussed in the paper. Hence the following operation, which enlarges the key windows into overlapping 10×10 regions:

k_windows = F.unfold(x.permute(0, 3, 1, 2), kernel_size=10, stride=4).view(B, C, 10, 10, -1).permute(0, 4, 2, 3, 1)  # (1,4,10,10,384)
k_windows = k_windows.reshape(-1, 100, C)  # (4,100,384)
k_windows = torch.cat([shortcut, k_windows], dim=1)  # (4,109,384)
k_windows = self.norm1(k_windows.reshape(B, -1, C)).reshape(-1, 100+self.multi_scale.num_samples, C)  # (4,109,384)
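
The unfold extracts overlapping 10×10 patches with stride 4 from the 14×14 map, giving (14 - 10)/4 + 1 = 2 positions per axis, i.e. 4 enlarged key windows. A standalone shape check (random data, same shapes as above):

import torch
import torch.nn.functional as F

B, H, W, C = 1, 14, 14, 384
x = torch.randn(B, H, W, C)

# overlapping 10x10 key windows, stride 4 -> 2*2 = 4 windows on a 14x14 map
k = F.unfold(x.permute(0, 3, 1, 2), kernel_size=10, stride=4)    # (1, 384*100, 4)
k = k.view(B, C, 10, 10, -1).permute(0, 4, 2, 3, 1)              # (1, 4, 10, 10, 384)
k = k.reshape(-1, 100, C)                                        # (4, 100, 384)
print(k.shape)                                                   # torch.Size([4, 100, 384])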


Equation 1 of the paper (reconstructed from the code below) is roughly S' = P + MHA(Norm(P), Norm(X), Norm(X)), followed by S = S' + MLP(Norm(S')), where P is the window-pooled initialization and X denotes the image tokens of the enlarged windows (with P concatenated in).

The first part corresponds to:

# P
shortcut = self.multi_scale(windows)  

# MHA(P, X, X)

pool_x = self.norm1(shortcut.reshape(B, -1, C)).reshape(-1, self.multi_scale.num_samples, C)

if self.shortcut:
    x = shortcut + self.drop_path(self.layer_scale_1 * self.attn(pool_x, k_windows))

The Norm layers are omitted in this shorthand: the P inside the parentheses is the normalized one (pool_x), while the P outside is shortcut.

The second part corresponds to:

x = x + self.drop_path(self.layer_scale_2 * self.mlp(self.norm2(x)))  # (1,36,384)

The subsequent attention layer, where the semantic tokens are further refined by attending to themselves concatenated with the image tokens, corresponds to:

elif i == 2:
    if self.use_global:
        semantic_token = blk(semantic_token+self.semantic_token2, torch.cat([semantic_token, x], dim=1))
    else:  # True
        semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))

The global cluster centers mentioned in the paper, self.semantic_token2, are defined as follows (and only used when use_global is set):

        if self.use_global:
            self.semantic_token2 = nn.Parameter(torch.zeros(1, self.num_samples, embed_dim))
            trunc_normal_(self.semantic_token2, std=.02)

Finally, this corresponds to:

x = shortcut + self.drop_path(self.layer_scale_1 * attn)
x = x + self.drop_path(self.layer_scale_2 * self.mlp(self.norm2(x)))

Note that the layers between i = 1 and i = 5 are the STGM; at i = 5 the other side of the dumbbell begins.

The corresponding code:

elif i == 5:
    x = blk(x, semantic_token)  # to layers.py--132

As indicated by the blue line in the figure, the original image tokens serve as the query Q, and the semantic tokens produced by the STGM serve as K and V; a sketch of this kind of cross-attention block follows.
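
This is only an illustrative sketch of an attention block with separate q and kv projections (queries from one token set, keys/values from another), consistent in shape with the Attention modules printed in the structure dumps below; it is not the repository's exact implementation (dropout, layer scale, etc. are omitted):

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Sketch: Q comes from x, K/V come from context (e.g. the semantic tokens)."""
    def __init__(self, dim=384, num_heads=12):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)        # matches (q): Linear(384, 384)
        self.kv = nn.Linear(dim, dim * 2)   # matches (kv): Linear(384, 768)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, context):
        B, N, C = x.shape
        M = context.shape[1]
        q = self.q(x).reshape(B, N, self.num_heads, C // self.num_heads).transpose(1, 2)
        kv = self.kv(context).reshape(B, M, 2, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]
        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, M)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

# e.g. 196 image tokens attend to the 36 semantic tokens (4 windows x 9 tokens)
x = torch.randn(1, 196, 384)
semantic = torch.randn(1, 36, 384)
print(CrossAttention()(x, semantic).shape)  # torch.Size([1, 196, 384])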


This process repeats, forming multiple dumbbell structures:

            if i == 0:
                x = blk(x)  # (1,196,384)  to swin_transformer -- 242
            elif i == 1:
                semantic_token = blk(x)  # to layers.py --179
            elif i == 2:
                if self.use_global:
                    semantic_token = blk(semantic_token+self.semantic_token2, torch.cat([semantic_token, x], dim=1))  # to layers.py--132
                else:  # True
                    semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))  # to layers.py--132
            elif i > 2 and i < 5:
                semantic_token = blk(semantic_token)  # to layers.py--132
            elif i == 5:
                x = blk(x, semantic_token)  # to layers.py--132
            elif i == 6:
                x = blk(x)
            elif i == 7:
                semantic_token = blk(x)
            elif i == 8:
                semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))
            elif i > 8 and i < 11:
                semantic_token = blk(semantic_token)
            elif i == 11:
                x = blk(x, semantic_token)
            elif i == 12:
                x = blk(x)
            elif i == 13:
                semantic_token = blk(x)
            elif i == 14:
                semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))
            elif i > 14 and i < 17:
                semantic_token = blk(semantic_token)
            else:
                x = blk(x, semantic_token)

Network structure of the tiny model:

SwinTransformer(
  (patch_embed): PatchEmbed(
    (proj): Sequential(
      (0): Conv2d_BN(
        (c): Conv2d(3, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): Hardswish()
      (2): Conv2d_BN(
        (c): Conv2d(48, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (3): Hardswish()
    )
  )
  (pos_drop): Dropout(p=0.0, inplace=False)
  (layers): ModuleList(
    (0): BasicLayer(
      dim=96, input_resolution=(56, 56), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=96, window_size=(7, 7), num_heads=3
            (qkv): Linear(in_features=96, out_features=288, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=96, out_features=96, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=96, out_features=384, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=384, out_features=96, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=3, mlp_ratio=4.0
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=96, window_size=(7, 7), num_heads=3
            (qkv): Linear(in_features=96, out_features=288, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=96, out_features=96, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.018)
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=96, out_features=384, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=384, out_features=96, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(56, 56), dim=96
        (reduction): Linear(in_features=384, out_features=192, bias=False)
        (norm): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
      )
    )
    (1): BasicLayer(
      dim=192, input_resolution=(28, 28), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=192, window_size=(7, 7), num_heads=6
            (qkv): Linear(in_features=192, out_features=576, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=192, out_features=192, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.036)
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=192, out_features=768, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=768, out_features=192, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=3, mlp_ratio=4.0
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=192, window_size=(7, 7), num_heads=6
            (qkv): Linear(in_features=192, out_features=576, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=192, out_features=192, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.055)
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=192, out_features=768, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=768, out_features=192, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(28, 28), dim=192
        (reduction): Linear(in_features=768, out_features=384, bias=False)
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      )
    )
    (2): Deit(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=384, window_size=(7, 7), num_heads=12
            (qkv): Linear(in_features=384, out_features=1152, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.073)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SemanticAttentionBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (multi_scale): multi_scale_semantic_token1()
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.091)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (2): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.109)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (3): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.127)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (4): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.145)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (5): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.164)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(14, 14), dim=384
        (reduction): Linear(in_features=1536, out_features=768, bias=False)
        (norm): LayerNorm((1536,), eps=1e-05, elementwise_affine=True)
      )
    )
    (3): BasicLayer(
      dim=768, input_resolution=(7, 7), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=768, window_size=(7, 7), num_heads=24
            (qkv): Linear(in_features=768, out_features=2304, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=768, out_features=768, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.182)
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=768, out_features=3072, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=3072, out_features=768, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=768, window_size=(7, 7), num_heads=24
            (qkv): Linear(in_features=768, out_features=2304, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=768, out_features=768, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.200)
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=768, out_features=3072, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=3072, out_features=768, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
    )
  )
  (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  (avgpool): AdaptiveAvgPool1d(output_size=1)
  (head): Linear(in_features=768, out_features=100, bias=True)
)

Network structure of the small variant (18 blocks in the third stage):

SwinTransformer(
  (patch_embed): PatchEmbed(
    (proj): Sequential(
      (0): Conv2d_BN(
        (c): Conv2d(3, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): Hardswish()
      (2): Conv2d_BN(
        (c): Conv2d(48, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (3): Hardswish()
    )
  )
  (pos_drop): Dropout(p=0.0, inplace=False)
  (layers): ModuleList(
    (0): BasicLayer(
      dim=96, input_resolution=(56, 56), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=96, window_size=(7, 7), num_heads=3
            (qkv): Linear(in_features=96, out_features=288, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=96, out_features=96, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): Identity()
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=96, out_features=384, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=384, out_features=96, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=3, mlp_ratio=4.0
          (norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=96, window_size=(7, 7), num_heads=3
            (qkv): Linear(in_features=96, out_features=288, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=96, out_features=96, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.013)
          (norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=96, out_features=384, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=384, out_features=96, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(56, 56), dim=96
        (reduction): Linear(in_features=384, out_features=192, bias=False)
        (norm): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
      )
    )
    (1): BasicLayer(
      dim=192, input_resolution=(28, 28), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=192, window_size=(7, 7), num_heads=6
            (qkv): Linear(in_features=192, out_features=576, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=192, out_features=192, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.026)
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=192, out_features=768, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=768, out_features=192, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=3, mlp_ratio=4.0
          (norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=192, window_size=(7, 7), num_heads=6
            (qkv): Linear(in_features=192, out_features=576, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=192, out_features=192, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.039)
          (norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=192, out_features=768, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=768, out_features=192, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(28, 28), dim=192
        (reduction): Linear(in_features=768, out_features=384, bias=False)
        (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
      )
    )
    (2): Deit(
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=384, window_size=(7, 7), num_heads=12
            (qkv): Linear(in_features=384, out_features=1152, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.052)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SemanticAttentionBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (multi_scale): multi_scale_semantic_token1()
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.065)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (2): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.078)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (3): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.091)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (4): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.104)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (5): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.117)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (6): SwinTransformerBlock(
          dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=384, window_size=(7, 7), num_heads=12
            (qkv): Linear(in_features=384, out_features=1152, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.130)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (7): SemanticAttentionBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (multi_scale): multi_scale_semantic_token1()
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.143)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (8): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.157)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (9): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.170)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (10): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.183)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (11): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.196)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (12): SwinTransformerBlock(
          dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=384, window_size=(7, 7), num_heads=12
            (qkv): Linear(in_features=384, out_features=1152, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.209)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (13): SemanticAttentionBlock(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (multi_scale): multi_scale_semantic_token1()
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.222)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (14): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.235)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (15): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.248)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (16): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.261)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
        (17): Block(
          (norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (attn): Attention(
            (q): Linear(in_features=384, out_features=384, bias=True)
            (kv): Linear(in_features=384, out_features=768, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=384, out_features=384, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
          )
          (drop_path): DropPath(drop_prob=0.274)
          (norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=384, out_features=1536, bias=True)
            (act): GELU()
            (drop1): Dropout(p=0.0, inplace=False)
            (fc2): Linear(in_features=1536, out_features=384, bias=True)
            (drop2): Dropout(p=0.0, inplace=False)
          )
        )
      )
      (downsample): PatchMerging(
        input_resolution=(14, 14), dim=384
        (reduction): Linear(in_features=1536, out_features=768, bias=False)
        (norm): LayerNorm((1536,), eps=1e-05, elementwise_affine=True)
      )
    )
    (3): BasicLayer(
      dim=768, input_resolution=(7, 7), depth=2
      (blocks): ModuleList(
        (0): SwinTransformerBlock(
          dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=768, window_size=(7, 7), num_heads=24
            (qkv): Linear(in_features=768, out_features=2304, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=768, out_features=768, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.287)
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=768, out_features=3072, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=3072, out_features=768, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
        (1): SwinTransformerBlock(
          dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0
          (norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (attn): WindowAttention(
            dim=768, window_size=(7, 7), num_heads=24
            (qkv): Linear(in_features=768, out_features=2304, bias=True)
            (attn_drop): Dropout(p=0.0, inplace=False)
            (proj): Linear(in_features=768, out_features=768, bias=True)
            (proj_drop): Dropout(p=0.0, inplace=False)
            (softmax): Softmax(dim=-1)
          )
          (drop_path): DropPath(drop_prob=0.300)
          (norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
          (mlp): Mlp(
            (fc1): Linear(in_features=768, out_features=3072, bias=True)
            (act): GELU()
            (fc2): Linear(in_features=3072, out_features=768, bias=True)
            (drop): Dropout(p=0.0, inplace=False)
          )
        )
      )
    )
  )
  (norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)
  (avgpool): AdaptiveAvgPool1d(output_size=1)
  (head): Linear(in_features=768, out_features=100, bias=True)
)
