Swin Transformer Study Notes (with Code)


Paper: https://arxiv.org/pdf/2103.14030.pdf

Code: https://github.com/microsoft/Swin-Transformer (the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows")

1. What is it?

Swin Transformer is a deep learning model for image classification proposed in 2021 that achieved excellent results. Its network is built on the Transformer architecture, but unlike the standard Transformer it introduces a hierarchical design for handling large images.

The core idea of Swin Transformer is to split the image into small patches and apply Transformer blocks on them. This partitioning allows Swin Transformer to process large images without being limited by the quadratic cost of attention in the standard Transformer.

The network consists of multiple stages, each containing several blocks. Within each block, patches exchange features through self-attention inside local windows, and global information is propagated across windows through the shifted-window mechanism. This combination of windowed and cross-window attention lets Swin Transformer capture features at different scales.

Experiments on large-scale image classification show that Swin Transformer performs well in both accuracy and efficiency. Its largest variant reaches about 87.3% top-1 accuracy on ImageNet, and it has lower computational complexity than comparable image classification models.

In summary, Swin Transformer is a Transformer-based image classification model that, through windowed and shifted-window attention, can process large images with good accuracy and computational efficiency.

2. Why?

The original ViT model has several obvious problems:

  • Modeling ability: forcibly splitting the image into patches destroys the original neighborhood structure, and the model no longer has the translation invariance of convolutions.
  • Complexity: ViT applies global self-attention at every layer. If the patch size is kept fixed, the number of patches grows with the image size; the number of patches equals the number of tokens fed into the Transformer, and self-attention has O(n^2) time complexity in the number of tokens.
  • Usability: because the embedding (a fully connected layer) is tied to the image size, pre-training, fine-tuning, and inference must all use images of exactly the same size.

Compared with Vision Transformer, Swin Transformer makes several improvements:

  • Hierarchical structure: like convolutional networks, it builds hierarchical feature maps, e.g. feature maps downsampled 4×, 8×, and 16× relative to the input image. Such a backbone makes it easy to build downstream tasks such as object detection and instance segmentation on top of it, whereas Vision Transformer downsamples by 16× right away and keeps that rate for all subsequent feature maps.
  • Windowed attention: Swin Transformer introduces Windows Multi-Head Self-Attention (W-MSA). At the 4× and 8× downsampling stages, for example, the feature map is divided into non-overlapping windows and Multi-Head Self-Attention is computed only within each window. Compared with Vision Transformer, which applies Multi-Head Self-Attention to the whole feature map, this greatly reduces computation, especially for the large feature maps in the early layers.
  • Shifted windows: partitioning the feature map reduces computation but blocks information exchange between windows, so the paper further proposes Shifted Windows Multi-Head Self-Attention (SW-MSA), which lets information flow between adjacent windows.

3. How does it work?

3.1 Network Architecture

The network consists of several stages:

  1. Embedding stage (stage 1): the image is split into 4×4 patches, and a linear projection turns each patch into an embedding vector, exactly as in ViT. Note, however, that these 4×4 patches are much smaller than ViT's 16×16 patches.
  2. Several stages built from Swin Transformer blocks (stages 2–4): mimicking classic convolutional backbones, each stage reduces the number of tokens on the feature map (the counterpart of the number of patches/tokens in ViT) to one quarter, simply by merging each 2×2 group of patches into one. At the same time, the standard Transformer block is replaced by the Swin Transformer block, whose main changes are listed below (a shape-flow sketch for the default configuration follows this list):
    1. Global self-attention is replaced by window self-attention over M×M windows. Because self-attention costs O(n^2), restricting which elements attend to each other turns the computation from quadratic in the number of patches into linear.
    2. A diagonal shift makes the window partition differ from one layer to the next, so every patch gets a chance to interact with patches from other windows. A mask trick keeps attention over the shifted windows efficient.
    3. A relative position bias is added to the attention; experiments show this works much better than an absolute position embedding.
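As a concrete illustration of the stage structure, the sketch below traces the token count and channel width through the default Swin-T configuration (patch size 4, embed_dim 96, four stages) for a 224×224 input. This is only an arithmetic sketch added here for illustration, not code from the original post.

# Shape flow for a 224x224 input through Swin-T (embed_dim C=96, patch size 4):
# stage 1: 224/4 = 56 -> 56*56 = 3136 tokens, dim  96
# stage 2: 56/2  = 28 -> 28*28 =  784 tokens, dim 192   (after Patch Merging)
# stage 3: 28/2  = 14 -> 14*14 =  196 tokens, dim 384
# stage 4: 14/2  =  7 ->  7*7  =   49 tokens, dim 768
def swin_stage_shapes(img_size=224, patch_size=4, embed_dim=96, num_stages=4):
    """Return (tokens, channels) per stage; a pure-arithmetic sketch."""
    res = img_size // patch_size
    shapes = []
    for i in range(num_stages):
        shapes.append((res * res, embed_dim * 2 ** i))
        res //= 2  # Patch Merging halves the resolution before the next stage
    return shapes

print(swin_stage_shapes())  # [(3136, 96), (784, 192), (196, 384), (49, 768)]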

3.2 Patch Merging Module

The Patch Merging module takes an H×W feature map, groups each 2×2 neighborhood of patches and concatenates them along the channel dimension to form an H/2×W/2×4C feature map, applies Layer Normalization, and then maps it through a Linear layer to H/2×W/2×2C. This completes the downsampling: the spatial size is halved and the channel count is doubled. A minimal shape sketch follows.
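The snippet below walks through this shape flow with the same 2×2 strided-slicing idea used by the PatchMerging class in the code section later on (the real module flattens the spatial dimensions before the norm; the dummy sizes H=W=8, C=96 here are assumptions for illustration only).

import torch
import torch.nn as nn

B, H, W, C = 1, 8, 8, 96                      # dummy sizes for illustration
x = torch.randn(B, H, W, C)                   # input feature map, B H W C

# take the four 2x2-neighbour sub-grids and concatenate them along channels
x0 = x[:, 0::2, 0::2, :]                      # B H/2 W/2 C
x1 = x[:, 1::2, 0::2, :]
x2 = x[:, 0::2, 1::2, :]
x3 = x[:, 1::2, 1::2, :]
merged = torch.cat([x0, x1, x2, x3], dim=-1)  # B H/2 W/2 4C
merged = nn.LayerNorm(4 * C)(merged)          # LayerNorm over the concatenated channels
out = nn.Linear(4 * C, 2 * C, bias=False)(merged)  # reduce channels to 2C
print(out.shape)                              # torch.Size([1, 4, 4, 192])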


3.3 W-MSA Module

In ViT, MSA lets every token compute an inner product with every other token through self-attention, gathering rich global information. But this all-pairs interaction is extremely expensive and makes the network inefficient. Swin Transformer therefore splits MSA into fixed windows, forming W-MSA: tokens inside a window only attend to other tokens in the same window, which greatly reduces computation and improves efficiency. The cost of MSA and W-MSA is:

Ω(MSA) = 4hwC^2 + 2(hw)^2C
Ω(W-MSA) = 4hwC^2 + 2M^2hwC

where h, w, and C are the height, width, and depth of the feature map and M is the window size. Taking h = w = 112, M = 7, C = 128, W-MSA saves 40,124,743,680 FLOPs.
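This saving can be reproduced directly from the two formulas; the short check below is added here for convenience and is not part of the original post.

h = w = 112; M = 7; C = 128

msa   = 4 * h * w * C**2 + 2 * (h * w)**2 * C    # global MSA
w_msa = 4 * h * w * C**2 + 2 * M**2 * h * w * C  # window MSA
print(msa - w_msa)  # 40124743680 FLOPs saved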

3.4 SW-MSA Module

Although W-MSA reduces computation by partitioning the feature map into windows, the windows cannot exchange information with each other. This effectively shrinks the "receptive field": the network misses more global context, which hurts accuracy.

To let different windows exchange information, the windows can be shifted so that each shifted window covers a different set of patches, W-MSA is computed again, and the results of the two W-MSA passes are combined. The combined result mixes information from patches that originally belonged to different windows, so information flows across windows.

W-MSA with shifted windows forms the SW-MSA module. Its windows are offset toward the lower-right by ⌊M/2⌋ patches relative to W-MSA (2 patches in the commonly used illustration with 4×4 windows), which produces 9 blocks of unequal size. A cyclic shift then translates and stitches these 9 blocks into 4 blocks of the same size as the W-MSA windows, a masked MSA computes attention on the 4 stitched blocks with the appropriate masks, and finally a reverse cyclic shift moves the patches back to their original positions. SW-MSA thus computes attention over shifted windows and lets patches in different windows exchange information, indirectly enlarging the network's "receptive field" and making better use of the available information. A small torch.roll sketch of the cyclic shift follows.
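The cyclic shift is implemented with torch.roll, exactly as in the forward pass of SwinTransformerBlock later in this post. The tiny sketch below labels every position of an 8×8 grid and shifts it; the grid size, window size 4, and shift 2 are assumptions chosen only to make the output easy to read.

import torch

H = W = 8
window_size, shift_size = 4, 2
grid = torch.arange(H * W).view(1, H, W, 1)              # label every patch position

# cyclic shift toward the upper-left (equivalently, the windows move to the lower-right)
shifted = torch.roll(grid, shifts=(-shift_size, -shift_size), dims=(1, 2))
print(shifted[0, :, :, 0])
# attention is then computed window-by-window on `shifted`, with a mask that stops
# patches which were not originally neighbours from attending to each other, and
# torch.roll(..., shifts=(shift_size, shift_size)) undoes the shift afterwards.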

3.5 Relative Position Bias

Swin Transformer also adds a relative position bias to the attention computation to improve overall accuracy; the paper reports gains of roughly 1.2%–2.3% depending on the task. Take a 2×2 feature map as an example. First, number the positions of the feature map to obtain each patch's absolute position index. Then, for each patch, compute its relative position to every other patch by subtracting the other patch's absolute index from its own, giving a relative position index matrix per patch. Flattening and concatenating these matrices yields the relative position index matrix of the whole feature map.

Swin Transformer does not use this 2-D relative position index directly. Instead, the 2-D index pairs are mapped to a 1-D relative position bias, as follows: (1) add M−1 to both the relative row index and the relative column index; (2) multiply the row index by 2M−1; (3) add the row and column indices together, and use the result to look up the relative position bias table, which gives the final relative position bias B. The short script below verifies this mapping for M = 2.
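This mirrors the index construction in WindowAttention.__init__ further down; only the window size M = 2 is an assumption chosen to keep the output small.

import torch

M = 2  # window size, so the window holds M*M = 4 tokens
coords = torch.stack(torch.meshgrid(torch.arange(M), torch.arange(M), indexing='ij'))  # 2, M, M
coords = coords.flatten(1)                                   # 2, M*M
rel = coords[:, :, None] - coords[:, None, :]                # 2, M*M, M*M (row/col differences)
rel = rel.permute(1, 2, 0).contiguous()                      # M*M, M*M, 2
rel[:, :, 0] += M - 1                                        # step 1: shift to start from 0
rel[:, :, 1] += M - 1
rel[:, :, 0] *= 2 * M - 1                                    # step 2: only the row index is scaled
index = rel.sum(-1)                                          # step 3: row + column
print(index)  # values lie in [0, (2M-1)^2 - 1] = [0, 8], one bias-table entry per value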

With the relative position bias, attention is computed as

Attention(Q, K, V) = SoftMax(QK^T / √d + B) V

where d is the query/key dimension and B is the relative position bias computed above.
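A compact numeric sketch of this formula for a single head (the token count, head dimension, and random tensors are dummy assumptions; the real module is WindowAttention below):

import torch

N, d = 4, 8                                  # tokens per window, head dimension (dummy sizes)
q, k, v = torch.randn(N, d), torch.randn(N, d), torch.randn(N, d)
B_bias = torch.randn(N, N)                   # relative position bias, one value per token pair

attn = (q @ k.T) / d ** 0.5 + B_bias         # QK^T / sqrt(d) + B
attn = attn.softmax(dim=-1)
out = attn @ v                               # (N, d) attended features
print(out.shape)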

3.6 Detailed Swin Transformer Configurations

The paper defines four variants of Swin Transformer, each with a different number of stacked blocks and channel count to match tasks of different scale and complexity. In general, a larger variant has more parameters, captures more information, and predicts more accurately, but its computational cost and training time are also higher, so the variant should be chosen according to the task and the available hardware. The detailed configuration of each variant is listed below.
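The configurations below are reconstructed from the paper (the original post showed them in a figure that is missing here); all four variants use window size 7 and mlp_ratio 4.

# embed_dim C, block depths per stage, and attention heads per stage for each variant
swin_configs = {
    'Swin-T': dict(embed_dim=96,  depths=[2, 2, 6, 2],  num_heads=[3, 6, 12, 24]),
    'Swin-S': dict(embed_dim=96,  depths=[2, 2, 18, 2], num_heads=[3, 6, 12, 24]),
    'Swin-B': dict(embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32]),
    'Swin-L': dict(embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48]),
}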

3.7 Code Implementation

The following is the model definition from the official Microsoft repository.

# --------------------------------------------------------
# Swin Transformer
# Copyright (c) 2021 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ze Liu
# --------------------------------------------------------

import torch
import torch.nn as nn
import torch.utils.checkpoint as checkpoint
from timm.models.layers import DropPath, to_2tuple, trunc_normal_

try:
    import os, sys

    kernel_path = os.path.abspath(os.path.join('..'))
    sys.path.append(kernel_path)
    from kernels.window_process.window_process import WindowProcess, WindowProcessReverse

except:
    WindowProcess = None
    WindowProcessReverse = None
    print("[Warning] Fused window process have not been installed. Please refer to get_started.md for installation.")


class Mlp(nn.Module):
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size

    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image

    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x


class WindowAttention(nn.Module):
    r""" Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both shifted and non-shifted windows.

    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional):  If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads))  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        """
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = (q @ k.transpose(-2, -1))

        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1)  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x

    def extra_repr(self) -> str:
        return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'

    def flops(self, N):
        # calculate flops for 1 window with token length of N
        flops = 0
        # qkv = self.qkv(x)
        flops += N * self.dim * 3 * self.dim
        # attn = (q @ k.transpose(-2, -1))
        flops += self.num_heads * N * (self.dim // self.num_heads) * N
        #  x = (attn @ v)
        flops += self.num_heads * N * N * (self.dim // self.num_heads)
        # x = self.proj(x)
        flops += N * self.dim * self.dim
        return flops


class SwinTransformerBlock(nn.Module):
    r""" Swin Transformer Block.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
                 act_layer=nn.GELU, norm_layer=nn.LayerNorm,
                 fused_window_process=False):
        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        if min(self.input_resolution) <= self.window_size:
            # if window size is larger than input resolution, we don't partition windows
            self.shift_size = 0
            self.window_size = min(self.input_resolution)
        assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
            qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)

        self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)

        if self.shift_size > 0:
            # calculate attention mask for SW-MSA
            H, W = self.input_resolution
            img_mask = torch.zeros((1, H, W, 1))  # 1 H W 1
            h_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            w_slices = (slice(0, -self.window_size),
                        slice(-self.window_size, -self.shift_size),
                        slice(-self.shift_size, None))
            cnt = 0
            for h in h_slices:
                for w in w_slices:
                    img_mask[:, h, w, :] = cnt
                    cnt += 1

            mask_windows = window_partition(img_mask, self.window_size)  # nW, window_size, window_size, 1
            mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
            attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
            attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
        else:
            attn_mask = None

        self.register_buffer("attn_mask", attn_mask)
        self.fused_window_process = fused_window_process

    def forward(self, x):
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
                # partition windows
                x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C
            else:
                x_windows = WindowProcess.apply(x, B, H, W, C, -self.shift_size, self.window_size)
        else:
            shifted_x = x
            # partition windows
            x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C

        x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=self.attn_mask)  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)

        # reverse cyclic shift
        if self.shift_size > 0:
            if not self.fused_window_process:
                shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
                x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
            else:
                x = WindowProcessReverse.apply(attn_windows, B, H, W, C, self.shift_size, self.window_size)
        else:
            shifted_x = window_reverse(attn_windows, self.window_size, H, W)  # B H' W' C
            x = shifted_x
        x = x.view(B, H * W, C)
        x = shortcut + self.drop_path(x)

        # FFN
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
               f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"

    def flops(self):
        flops = 0
        H, W = self.input_resolution
        # norm1
        flops += self.dim * H * W
        # W-MSA/SW-MSA
        nW = H * W / self.window_size / self.window_size
        flops += nW * self.attn.flops(self.window_size * self.window_size)
        # mlp
        flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio
        # norm2
        flops += self.dim * H * W
        return flops


class PatchMerging(nn.Module):
    r""" Patch Merging Layer.

    Args:
        input_resolution (tuple[int]): Resolution of input feature.
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer.  Default: nn.LayerNorm
    """

    def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.input_resolution = input_resolution
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x):
        """
        x: B, H*W, C
        """
        H, W = self.input_resolution
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"
        assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."

        x = x.view(B, H, W, C)

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C

        x = self.norm(x)
        x = self.reduction(x)

        return x

    def extra_repr(self) -> str:
        return f"input_resolution={self.input_resolution}, dim={self.dim}"

    def flops(self):
        H, W = self.input_resolution
        flops = H * W * self.dim
        flops += (H // 2) * (W // 2) * 4 * self.dim * 2 * self.dim
        return flops


class BasicLayer(nn.Module):
    """ A basic Swin Transformer layer for one stage.

    Args:
        dim (int): Number of input channels.
        input_resolution (tuple[int]): Input resolution.
        depth (int): Number of blocks.
        num_heads (int): Number of attention heads.
        window_size (int): Local window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, dim, input_resolution, depth, num_heads, window_size,
                 mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
                 drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
                 fused_window_process=False):

        super().__init__()
        self.dim = dim
        self.input_resolution = input_resolution
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList([
            SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
                                 num_heads=num_heads, window_size=window_size,
                                 shift_size=0 if (i % 2 == 0) else window_size // 2,
                                 mlp_ratio=mlp_ratio,
                                 qkv_bias=qkv_bias, qk_scale=qk_scale,
                                 drop=drop, attn_drop=attn_drop,
                                 drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                                 norm_layer=norm_layer,
                                 fused_window_process=fused_window_process)
            for i in range(depth)])

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
        else:
            self.downsample = None

    def forward(self, x):
        for blk in self.blocks:
            if self.use_checkpoint:
                x = checkpoint.checkpoint(blk, x)
            else:
                x = blk(x)
        if self.downsample is not None:
            x = self.downsample(x)
        return x

    def extra_repr(self) -> str:
        return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"

    def flops(self):
        flops = 0
        for blk in self.blocks:
            flops += blk.flops()
        if self.downsample is not None:
            flops += self.downsample.flops()
        return flops


class PatchEmbed(nn.Module):
    r""" Image to Patch Embedding

    Args:
        img_size (int): Image size.  Default: 224.
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        img_size = to_2tuple(img_size)
        patch_size = to_2tuple(patch_size)
        patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
        self.img_size = img_size
        self.patch_size = patch_size
        self.patches_resolution = patches_resolution
        self.num_patches = patches_resolution[0] * patches_resolution[1]

        self.in_chans = in_chans
        self.embed_dim = embed_dim

        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        if norm_layer is not None:
            self.norm = norm_layer(embed_dim)
        else:
            self.norm = None

    def forward(self, x):
        B, C, H, W = x.shape
        # FIXME look at relaxing size constraints
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."
        x = self.proj(x).flatten(2).transpose(1, 2)  # B Ph*Pw C
        if self.norm is not None:
            x = self.norm(x)
        return x

    def flops(self):
        Ho, Wo = self.patches_resolution
        flops = Ho * Wo * self.embed_dim * self.in_chans * (self.patch_size[0] * self.patch_size[1])
        if self.norm is not None:
            flops += Ho * Wo * self.embed_dim
        return flops


class SwinTransformer(nn.Module):
    r""" Swin Transformer
        A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows`  -
          https://arxiv.org/pdf/2103.14030

    Args:
        img_size (int | tuple(int)): Input image size. Default 224
        patch_size (int | tuple(int)): Patch size. Default: 4
        in_chans (int): Number of input image channels. Default: 3
        num_classes (int): Number of classes for classification head. Default: 1000
        embed_dim (int): Patch embedding dimension. Default: 96
        depths (tuple(int)): Depth of each Swin Transformer layer.
        num_heads (tuple(int)): Number of attention heads in different layers.
        window_size (int): Window size. Default: 7
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
        drop_rate (float): Dropout rate. Default: 0
        attn_drop_rate (float): Attention dropout rate. Default: 0
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
        patch_norm (bool): If True, add normalization after patch embedding. Default: True
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
        fused_window_process (bool, optional): If True, use one kernel to fused window shift & window partition for acceleration, similar for the reversed part. Default: False
    """

    def __init__(self, img_size=224, patch_size=4, in_chans=3, num_classes=1000,
                 embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
                 window_size=7, mlp_ratio=4., qkv_bias=True, qk_scale=None,
                 drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
                 norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
                 use_checkpoint=False, fused_window_process=False, **kwargs):
        super().__init__()

        self.num_classes = num_classes
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.num_features = int(embed_dim * 2 ** (self.num_layers - 1))
        self.mlp_ratio = mlp_ratio

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            img_size=img_size, patch_size=patch_size, in_chans=in_chans, embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None)
        num_patches = self.patch_embed.num_patches
        patches_resolution = self.patch_embed.patches_resolution
        self.patches_resolution = patches_resolution

        # absolute position embedding
        if self.ape:
            self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
            trunc_normal_(self.absolute_pos_embed, std=.02)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))]  # stochastic depth decay rule

        # build layers
        self.layers = nn.ModuleList()
        for i_layer in range(self.num_layers):
            layer = BasicLayer(dim=int(embed_dim * 2 ** i_layer),
                               input_resolution=(patches_resolution[0] // (2 ** i_layer),
                                                 patches_resolution[1] // (2 ** i_layer)),
                               depth=depths[i_layer],
                               num_heads=num_heads[i_layer],
                               window_size=window_size,
                               mlp_ratio=self.mlp_ratio,
                               qkv_bias=qkv_bias, qk_scale=qk_scale,
                               drop=drop_rate, attn_drop=attn_drop_rate,
                               drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])],
                               norm_layer=norm_layer,
                               downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
                               use_checkpoint=use_checkpoint,
                               fused_window_process=fused_window_process)
            self.layers.append(layer)

        self.norm = norm_layer(self.num_features)
        self.avgpool = nn.AdaptiveAvgPool1d(1)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()

        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)

    @torch.jit.ignore
    def no_weight_decay(self):
        return {'absolute_pos_embed'}

    @torch.jit.ignore
    def no_weight_decay_keywords(self):
        return {'relative_position_bias_table'}

    def forward_features(self, x):
        x = self.patch_embed(x)
        if self.ape:
            x = x + self.absolute_pos_embed
        x = self.pos_drop(x)

        for layer in self.layers:
            x = layer(x)

        x = self.norm(x)  # B L C
        x = self.avgpool(x.transpose(1, 2))  # B C 1
        x = torch.flatten(x, 1)
        return x

    def forward(self, x):
        x = self.forward_features(x)
        x = self.head(x)
        return x

    def flops(self):
        flops = 0
        flops += self.patch_embed.flops()
        for i, layer in enumerate(self.layers):
            flops += layer.flops()
        flops += self.num_features * self.patches_resolution[0] * self.patches_resolution[1] // (2 ** self.num_layers)
        flops += self.num_features * self.num_classes
        return flops
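
A minimal usage sketch, assuming the definitions above have been loaded (e.g., saved as swin_transformer.py): it builds a Swin-T-sized model and runs a dummy forward pass. The sizes match the Swin-T defaults; the parameter count comment is approximate.

model = SwinTransformer(img_size=224, patch_size=4, in_chans=3, num_classes=1000,
                        embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24],
                        window_size=7)
model.eval()
x = torch.randn(1, 3, 224, 224)            # dummy image batch
with torch.no_grad():
    logits = model(x)                      # (1, 1000) class logits
print(logits.shape)
print(f"params: {sum(p.numel() for p in model.parameters()) / 1e6:.1f}M")  # roughly 28M for Swin-T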


