Self-Attention, Multi-Head Attention, and VIT: Study Notes and Source Code


Contents

    • 1 References
    • 2 Key Notes
      • 2.1 Self-Attention
      • 2.2 Multi-Head Attention
    • 3. Vision Transformer (VIT)
      • 3.1 Plain VIT
      • 3.2 Hybrid VIT
    • 4 Using the Code

Preface: To follow the contents of VIT, the main prerequisite is having run a few CV demos yourself and knowing the common operations in the CV field; beyond that, just follow the tutorial author's (霹导) videos. The explanations are very detailed and the code walkthrough is excellent.

1 References

Attention paper: https://arxiv.org/abs/1706.03762
VIT paper: https://arxiv.org/abs/2010.11929
Attention blog post: https://blog.csdn.net/qq_37541097/article/details/117691873
VIT blog post: https://blog.csdn.net/qq_37541097/article/details/118242600
Attention video walkthrough: https://www.bilibili.com/video/BV15v411W78M/
VIT video walkthrough: https://www.bilibili.com/video/BV1Jh411Y7WQ/?spm_id_from=333.788

2 Key Notes


What follows is a record for my own later review. It only covers the broad strokes to jog my memory; for the details, please see the references above.


2.1 Self-Attention

[Figures: Self-Attention computation diagrams from the reference video]

These diagrams all come from the tutorial author's video and were drawn by him; they are incredibly detailed and very well made.

Self-Attention boils down to four steps (sketched in code right after this list):

  1. Whatever the data is, map it to another dimension through Input Embedding (for 2D images this is implemented with a Conv2d convolution);
  2. Use learnable weight matrices W to compute q, k, v for each token;
  3. Use q and k to compute $\hat{\alpha}$ for every pair of tokens (I think of $\hat{\alpha}$ as a measure of how strongly each pair of elements is related);
  4. Use $\hat{\alpha}$ and v to compute the final b, which is the attention value.
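
A minimal sketch of these four steps, assuming toy sizes and no batch dimension; the variable names (embed, Wq, Wk, Wv) are made up for illustration and are not taken from the reference code:

import torch
import torch.nn as nn

# Step 1: Input Embedding -- map the raw data (5 tokens with 16 features each) to dim d
d = 8
x = torch.randn(5, 16)
embed = nn.Linear(16, d)          # for 2D images the reference code uses Conv2d instead
a = embed(x)                      # [5, d]

# Step 2: learnable W matrices produce q, k, v for every token
Wq, Wk, Wv = nn.Linear(d, d), nn.Linear(d, d), nn.Linear(d, d)
q, k, v = Wq(a), Wk(a), Wv(a)     # each [5, d]

# Step 3: alpha_hat = softmax(q @ k^T / sqrt(d)) -- pairwise relevance between tokens
alpha_hat = torch.softmax(q @ k.transpose(0, 1) / d ** 0.5, dim=-1)   # [5, 5]

# Step 4: b = alpha_hat @ v -- the attention value for every token
b = alpha_hat @ v                 # [5, d]
print(b.shape)                    # torch.Size([5, 8])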

2.2 Multi-Head Attention

[Figures: Multi-Head Attention computation diagrams from the reference video]
Multi-Head Attention has five steps in total:

  1. Map the data to another dimension through Input Embedding (in CV this is again implemented with Conv2d);
  2. Use learnable parameters W so that each token gets Head sets of (q, k, v);
  3. Within each head, combine the (q, k, v) of all tokens to compute that head's $\hat{\alpha}$, giving Head attention maps;
  4. Use each head's $\hat{\alpha}$ and v to compute Head attention values for every token;
  5. Concatenate the Head attention values of each token and fuse them with a learnable projection matrix (a matrix multiplication).

The point of using multiple heads is that a single head on its own may learn too little; with several heads, what each head learns is combined, which raises the model's capacity and also makes it more robust. A minimal sketch of the head splitting and merging follows.
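
This sketch uses made-up toy sizes; it mirrors what the Attention class in the source code below does with a single fused qkv Linear layer:

import torch
import torch.nn as nn

dim, num_heads = 8, 2                            # toy sizes for illustration only
head_dim = dim // num_heads
N = 5                                            # number of tokens
x = torch.randn(1, N, dim)                       # [batch, tokens, dim]

qkv = nn.Linear(dim, dim * 3)                    # one layer produces q, k, v for all heads
proj = nn.Linear(dim, dim)                       # learnable matrix that fuses the heads

# split into q, k, v and into heads: [3, batch, heads, tokens, head_dim]
qkv_out = qkv(x).reshape(1, N, 3, num_heads, head_dim).permute(2, 0, 3, 1, 4)
q, k, v = qkv_out[0], qkv_out[1], qkv_out[2]

# per-head alpha_hat and per-head attention values
alpha_hat = torch.softmax(q @ k.transpose(-2, -1) / head_dim ** 0.5, dim=-1)  # [1, heads, N, N]
heads_out = alpha_hat @ v                                                     # [1, heads, N, head_dim]

# concatenate the heads back to [1, N, dim] and fuse them with the projection matrix
out = proj(heads_out.transpose(1, 2).reshape(1, N, dim))
print(out.shape)                                 # torch.Size([1, 5, 8])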

3. Vision Transformer (VIT)

3.1 Plain VIT

Here is the tutorial author's diagram of VIT-Base (the variant with the fewest parameters; there are also Large and Huge) with an image patch size of 16*16. The structure is very clear and easy to understand.
[Figure: VIT-B/16 architecture diagram]

"Plain" VIT means that the Input Embedding part is implemented simply with a Conv2d. The whole VIT consists of three parts (a rough shape walkthrough follows this list):

  1. the Embedding part, implemented with Conv2d;
  2. the Transformer Encoder part, a stack of Encoder Blocks, where each Encoder Block is made of a Multi-Head Attention module and an MLP module;
  3. the MLP head, used for the final classification (built from Linear layers and an activation function).
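
A rough shape walkthrough of these three parts for VIT-B/16 with a 224x224 input (the numbers are simply the standard ViT-Base configuration used in the code below):

# Input image:                        [B, 3, 224, 224]
# 1. Embedding, Conv2d(k=16, s=16):   [B, 768, 14, 14] -> flatten/transpose -> [B, 196, 768]
#    prepend the class token:         [B, 197, 768]
#    add the position embedding:      [B, 197, 768]
# 2. 12 x Encoder Block (Multi-Head Attention + MLP): [B, 197, 768]  (shape unchanged)
# 3. take the class token x[:, 0]  -> [B, 768] -> MLP head (Linear) -> [B, num_classes]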

3.2 Hybrid VIT

"Hybrid" here means that the Embedding part uses a traditional CNN, such as a ResNet, to extract the features. Again, the tutorial author's diagram:
[Figure: Hybrid VIT architecture; the ResNet-based embedding is highlighted in red]
The only thing that differs in Hybrid VIT is the part in the red box: after the Embedding layer extracts features with a ResNet, the channel dimension and spatial size are simply adjusted with a Conv2d. A hypothetical sketch is given below.
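
This sketch is my own assumption of what such a hybrid embedding could look like, not the reference implementation: a torchvision ResNet-50 truncated after layer3 turns a 224x224 input into a [B, 1024, 14, 14] feature map, and a 1x1 Conv2d projects it to the 768-dim patch tokens that the rest of the VIT expects.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class HybridEmbed(nn.Module):
    """Hypothetical hybrid embedding: CNN backbone + 1x1 Conv2d projection (illustration only)."""
    def __init__(self, embed_dim=768):
        super().__init__()
        backbone = resnet50()
        # keep everything up to layer3: [B, 3, 224, 224] -> [B, 1024, 14, 14]
        self.backbone = nn.Sequential(*list(backbone.children())[:-3])
        # the 1x1 conv only adjusts the channel dimension, as described above
        self.proj = nn.Conv2d(1024, embed_dim, kernel_size=1)

    def forward(self, x):
        x = self.backbone(x)                      # [B, 1024, 14, 14]
        x = self.proj(x)                          # [B, 768, 14, 14]
        return x.flatten(2).transpose(1, 2)       # [B, 196, 768] patch tokens

tokens = HybridEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape)                               # torch.Size([1, 196, 768])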

4 Using the Code

Source code: (link in the original post)

"""
original code from rwightman:
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
"""
from functools import partial
from collections import OrderedDict

import torch
import torch.nn as nn


def drop_path(x, drop_prob: float = 0., training: bool = False):
    """
    Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).
    This is the same as the DropConnect impl I created for EfficientNet, etc networks, however,
    the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper...
    See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for
    changing the layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use
    'survival rate' as the argument.
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # work with diff dim tensors, not just 2D ConvNets
    random_tensor = keep_prob + torch.rand(shape, dtype=x.dtype, device=x.device)
    random_tensor.floor_()  # binarize
    output = x.div(keep_prob) * random_tensor
    return output


class DropPath(nn.Module):
    """
    Drop paths (Stochastic Depth) per sample  (when applied in main path of residual blocks).
    """
    def __init__(self, drop_prob=None):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


class PatchEmbed(nn.Module):
    """
    2D Image to Patch Embedding
    """
    def __init__(self, img_size=224, patch_size=16, in_c=3, embed_dim=768, norm_layer=None):
        super().__init__()
        img_size = (img_size, img_size)
        patch_size = (patch_size, patch_size)
        self.img_size = img_size
        self.patch_size = patch_size
        self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
        self.num_patches = self.grid_size[0] * self.grid_size[1]

        self.proj = nn.Conv2d(in_c, embed_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()

    def forward(self, x):
        B, C, H, W = x.shape
        assert H == self.img_size[0] and W == self.img_size[1], \
            f"Input image size ({H}*{W}) doesn't match model ({self.img_size[0]}*{self.img_size[1]})."

        # flatten: [B, C, H, W] -> [B, C, HW]
        # transpose: [B, C, HW] -> [B, HW, C]
        x = self.proj(x).flatten(2).transpose(1, 2)
        x = self.norm(x)
        return x


class Attention(nn.Module):
    def __init__(self,
                 dim,   # dim of the input tokens
                 num_heads=8,
                 qkv_bias=False,
                 qk_scale=None,
                 attn_drop_ratio=0.,
                 proj_drop_ratio=0.):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop_ratio)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop_ratio)

    def forward(self, x):
        # [batch_size, num_patches + 1, total_embed_dim]
        B, N, C = x.shape

        # qkv(): -> [batch_size, num_patches + 1, 3 * total_embed_dim]
        # reshape: -> [batch_size, num_patches + 1, 3, num_heads, embed_dim_per_head]
        # permute: -> [3, batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
        # [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        # transpose: -> [batch_size, num_heads, embed_dim_per_head, num_patches + 1]
        # @: multiply -> [batch_size, num_heads, num_patches + 1, num_patches + 1]
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.attn_drop(attn)

        # @: multiply -> [batch_size, num_heads, num_patches + 1, embed_dim_per_head]
        # transpose: -> [batch_size, num_patches + 1, num_heads, embed_dim_per_head]
        # reshape: -> [batch_size, num_patches + 1, total_embed_dim]
        x = (attn @ v).transpose(1, 2).reshape(B, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class Mlp(nn.Module):
    """
    MLP as used in Vision Transformer, MLP-Mixer and related networks
    """
    def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


class Block(nn.Module):
    def __init__(self,
                 dim,
                 num_heads,
                 mlp_ratio=4.,
                 qkv_bias=False,
                 qk_scale=None,
                 drop_ratio=0.,
                 attn_drop_ratio=0.,
                 drop_path_ratio=0.,
                 act_layer=nn.GELU,
                 norm_layer=nn.LayerNorm):
        super(Block, self).__init__()
        self.norm1 = norm_layer(dim)
        self.attn = Attention(dim, num_heads=num_heads, qkv_bias=qkv_bias, qk_scale=qk_scale,
                              attn_drop_ratio=attn_drop_ratio, proj_drop_ratio=drop_ratio)
        # NOTE: drop path for stochastic depth, we shall see if this is better than dropout here
        self.drop_path = DropPath(drop_path_ratio) if drop_path_ratio > 0. else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop_ratio)

    def forward(self, x):
        x = x + self.drop_path(self.attn(self.norm1(x)))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
        return x


class VisionTransformer(nn.Module):
    def __init__(self, img_size=224, patch_size=16, in_c=3, num_classes=1000,
                 embed_dim=768, depth=12, num_heads=12, mlp_ratio=4.0, qkv_bias=True,
                 qk_scale=None, representation_size=None, distilled=False, drop_ratio=0.,
                 attn_drop_ratio=0., drop_path_ratio=0., embed_layer=PatchEmbed, norm_layer=None,
                 act_layer=None):
        """
        Args:
            img_size (int, tuple): input image size
            patch_size (int, tuple): patch size
            in_c (int): number of input channels
            num_classes (int): number of classes for classification head
            embed_dim (int): embedding dimension
            depth (int): depth of transformer
            num_heads (int): number of attention heads
            mlp_ratio (int): ratio of mlp hidden dim to embedding dim
            qkv_bias (bool): enable bias for qkv if True
            qk_scale (float): override default qk scale of head_dim ** -0.5 if set
            representation_size (Optional[int]): enable and set representation layer (pre-logits) to this value if set
            distilled (bool): model includes a distillation token and head as in DeiT models
            drop_ratio (float): dropout rate
            attn_drop_ratio (float): attention dropout rate
            drop_path_ratio (float): stochastic depth rate
            embed_layer (nn.Module): patch embedding layer
            norm_layer: (nn.Module): normalization layer
        """
        super(VisionTransformer, self).__init__()
        self.num_classes = num_classes
        self.num_features = self.embed_dim = embed_dim  # num_features for consistency with other models
        self.num_tokens = 2 if distilled else 1
        norm_layer = norm_layer or partial(nn.LayerNorm, eps=1e-6)
        act_layer = act_layer or nn.GELU

        self.patch_embed = embed_layer(img_size=img_size, patch_size=patch_size, in_c=in_c, embed_dim=embed_dim)
        num_patches = self.patch_embed.num_patches

        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.dist_token = nn.Parameter(torch.zeros(1, 1, embed_dim)) if distilled else None
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + self.num_tokens, embed_dim))
        self.pos_drop = nn.Dropout(p=drop_ratio)

        dpr = [x.item() for x in torch.linspace(0, drop_path_ratio, depth)]  # stochastic depth decay rule
        self.blocks = nn.Sequential(*[
            Block(dim=embed_dim, num_heads=num_heads, mlp_ratio=mlp_ratio, qkv_bias=qkv_bias, qk_scale=qk_scale,
                  drop_ratio=drop_ratio, attn_drop_ratio=attn_drop_ratio, drop_path_ratio=dpr[i],
                  norm_layer=norm_layer, act_layer=act_layer)
            for i in range(depth)
        ])
        self.norm = norm_layer(embed_dim)

        # Representation layer
        if representation_size and not distilled:
            self.has_logits = True
            self.num_features = representation_size
            self.pre_logits = nn.Sequential(OrderedDict([
                ("fc", nn.Linear(embed_dim, representation_size)),
                ("act", nn.Tanh())
            ]))
        else:
            self.has_logits = False
            self.pre_logits = nn.Identity()

        # Classifier head(s)
        self.head = nn.Linear(self.num_features, num_classes) if num_classes > 0 else nn.Identity()
        self.head_dist = None
        if distilled:
            self.head_dist = nn.Linear(self.embed_dim, self.num_classes) if num_classes > 0 else nn.Identity()

        # Weight init
        nn.init.trunc_normal_(self.pos_embed, std=0.02)
        if self.dist_token is not None:
            nn.init.trunc_normal_(self.dist_token, std=0.02)

        nn.init.trunc_normal_(self.cls_token, std=0.02)
        self.apply(_init_vit_weights)

    def forward_features(self, x):
        # [B, C, H, W] -> [B, num_patches, embed_dim]
        x = self.patch_embed(x)  # [B, 196, 768]
        # [1, 1, 768] -> [B, 1, 768]
        cls_token = self.cls_token.expand(x.shape[0], -1, -1)
        if self.dist_token is None:
            x = torch.cat((cls_token, x), dim=1)  # [B, 197, 768]
        else:
            x = torch.cat((cls_token, self.dist_token.expand(x.shape[0], -1, -1), x), dim=1)

        x = self.pos_drop(x + self.pos_embed)
        x = self.blocks(x)
        x = self.norm(x)
        if self.dist_token is None:
            return self.pre_logits(x[:, 0])
        else:
            return x[:, 0], x[:, 1]

    def forward(self, x):
        x = self.forward_features(x)
        if self.head_dist is not None:
            x, x_dist = self.head(x[0]), self.head_dist(x[1])
            if self.training and not torch.jit.is_scripting():
                # during training, return both classifier predictions
                return x, x_dist
            else:
                # during inference, return the average of both classifier predictions
                return (x + x_dist) / 2
        else:
            x = self.head(x)
        return x


def _init_vit_weights(m):
    """
    ViT weight initialization
    :param m: module
    """
    if isinstance(m, nn.Linear):
        nn.init.trunc_normal_(m.weight, std=.01)
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode="fan_out")
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    elif isinstance(m, nn.LayerNorm):
        nn.init.zeros_(m.bias)
        nn.init.ones_(m.weight)


def vit_base_patch16_224(num_classes: int = 1000):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1zqb08naP0RPqqfSXfkB2EA  password: eu9f
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=None,
                              num_classes=num_classes)
    return model


def vit_base_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch16_224_in21k-e5005f0a.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model


def vit_base_patch32_224(num_classes: int = 1000):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1hCv0U8pQomwAtHBYc4hmZg  password: s5hl
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=None,
                              num_classes=num_classes)
    return model


def vit_base_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Base model (ViT-B/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_base_patch32_224_in21k-8db57226.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=768,
                              depth=12,
                              num_heads=12,
                              representation_size=768 if has_logits else None,
                              num_classes=num_classes)
    return model


def vit_large_patch16_224(num_classes: int = 1000):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-1k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    link: https://pan.baidu.com/s/1cxBgZJJ6qUWPSBNcE4TdRQ  password: qqt8
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=None,
                              num_classes=num_classes)
    return model


def vit_large_patch16_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/16) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch16_224_in21k-606da67d.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=16,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model


def vit_large_patch32_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Large model (ViT-L/32) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    weights ported from official Google JAX impl:
    https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-vitjx/jx_vit_large_patch32_224_in21k-9046d2e7.pth
    """
    model = VisionTransformer(img_size=224,
                              patch_size=32,
                              embed_dim=1024,
                              depth=24,
                              num_heads=16,
                              representation_size=1024 if has_logits else None,
                              num_classes=num_classes)
    return model


def vit_huge_patch14_224_in21k(num_classes: int = 21843, has_logits: bool = True):
    """
    ViT-Huge model (ViT-H/14) from original paper (https://arxiv.org/abs/2010.11929).
    ImageNet-21k weights @ 224x224, source https://github.com/google-research/vision_transformer.
    NOTE: converted weights not currently available, too large for github release hosting.
    """
    model = VisionTransformer(img_size=224,
                              patch_size=14,
                              embed_dim=1280,
                              depth=32,
                              num_heads=16,
                              representation_size=1280 if has_logits else None,
                              num_classes=num_classes)
    return model

Test results:
[Figure: output of a quick forward-pass test]

Note: the input image size should ideally be 224. If your images are a different size, you can resize them with the resize utilities in torch/torchvision; changing the model's input size is also possible, as long as it is divisible by patch_size. I recommend typing the code out yourself while following the video to deepen your understanding. A quick usage example is sketched below.
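
For example, a quick forward-pass smoke test that resizes an image tensor to 224 first (vit_base_patch16_224 is the function defined above; the rest, including the 5-class setting, is made up for the test):

import torch
from torchvision import transforms

model = vit_base_patch16_224(num_classes=5)   # e.g. a hypothetical 5-class task

img = torch.rand(3, 300, 500)                 # stand-in for a real image tensor
img = transforms.Resize((224, 224))(img)      # resize so it matches img_size=224
img = img.unsqueeze(0)                        # add the batch dimension -> [1, 3, 224, 224]

model.eval()
with torch.no_grad():
    out = model(img)
print(out.shape)                              # torch.Size([1, 5])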
