Hacking YOLOv8 | Boosting Low-Light Object Detection with CPA-Enhancer (Also Works in Rain, Snow, and Fog)


1. Introduction

CPA-Enhancer uses a chain-of-thought prompting mechanism to adaptively enhance images captured under unknown degradation conditions, significantly improving object detection performance. Its plug-in design makes it easy to integrate into existing detection frameworks, and it sets new performance benchmarks for object detection and other vision tasks, showing broad application potential.

For a detailed description of CPA-Enhancer, see the paper: https://arxiv.org/abs/2403.11220v3

This post walks through how to integrate CPA-Enhancer into YOLOv8.

Without further ado, here's the code!

2. Integrating CPA-Enhancer into YOLOv8


2.1 Step 1

First, locate the directory 'ultralytics/nn' and create a folder named 'Addmodules' inside it. In that folder, create a file called Enhancer.py (the file name is up to you), then copy the CPA-Enhancer core code below into it.

import torch
import torch.nn as nn
import torch.nn.functional as F
import numbers
from einops import rearrange
from einops.layers.torch import Rearrange
 
__all__ = ['CPA_arch']
 
class RFAConv(nn.Module):  # RFAConv built on grouped convolutions
    def __init__(self, in_channel, out_channel, kernel_size=3, stride=1):
        super().__init__()
        self.kernel_size = kernel_size
        self.get_weight = nn.Sequential(nn.AvgPool2d(kernel_size=kernel_size, padding=kernel_size // 2, stride=stride),
                                        nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=1,
                                                  groups=in_channel, bias=False))
        self.generate_feature = nn.Sequential(
            nn.Conv2d(in_channel, in_channel * (kernel_size ** 2), kernel_size=kernel_size, padding=kernel_size // 2,
                      stride=stride, groups=in_channel, bias=False),
            nn.BatchNorm2d(in_channel * (kernel_size ** 2)),
            nn.ReLU())
 
        self.conv = nn.Sequential(nn.Conv2d(in_channel, out_channel, kernel_size=kernel_size, stride=kernel_size),
                                  nn.BatchNorm2d(out_channel),
                                  nn.ReLU())
 
    def forward(self, x):
        b, c = x.shape[0:2]
        weight = self.get_weight(x)
        h, w = weight.shape[2:]
        weighted = weight.view(b, c, self.kernel_size ** 2, h, w).softmax(2)  # b c*kernel**2,h,w ->  b c k**2 h w
        feature = self.generate_feature(x).view(b, c, self.kernel_size ** 2, h,
                                                w)  # b c*kernel**2,h,w ->  b c k**2 h w   (receptive-field spatial features)
        weighted_data = feature * weighted
        conv_data = rearrange(weighted_data, 'b c (n1 n2) h w -> b c (h n1) (w n2)', n1=self.kernel_size,
                              # b c k**2 h w ->  b c h*k w*k
                              n2=self.kernel_size)
        return self.conv(conv_data)
 
class Downsample(nn.Module):
    def __init__(self, n_feat):
        super(Downsample, self).__init__()
 
        self.body = nn.Sequential(nn.Conv2d(n_feat, n_feat // 2, kernel_size=3, stride=1, padding=1, bias=False),
                                  nn.PixelUnshuffle(2))
 
    def forward(self, x):
        return self.body(x)
 
class Upsample(nn.Module):
    def __init__(self, n_feat):
        super(Upsample, self).__init__()
 
        self.body = nn.Sequential(nn.Conv2d(n_feat, n_feat * 2, kernel_size=3, stride=1, padding=1, bias=False),
                                  nn.PixelShuffle(2))
 
    def forward(self, x):  # (b,c,h,w)
        return self.body(x)  # (b,c/2,h*2,w*2)
 
class SpatialAttention(nn.Module):
    def __init__(self):
        super(SpatialAttention, self).__init__()
        self.sa = nn.Conv2d(2, 1, 7, padding=3, padding_mode='reflect', bias=True)
 
    def forward(self, x):  # x:[b,c,h,w]
        x_avg = torch.mean(x, dim=1, keepdim=True)  # (b,1,h,w)
        x_max, _ = torch.max(x, dim=1, keepdim=True)  # (b,1,h,w)
        x2 = torch.concat([x_avg, x_max], dim=1)  # (b,2,h,w)
        sattn = self.sa(x2)  # 7x7conv (b,1,h,w)
        return sattn * x
 
class ChannelAttention(nn.Module):
    def __init__(self, dim, reduction=8):
        super(ChannelAttention, self).__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.ca = nn.Sequential(
            nn.Conv2d(dim, dim // reduction, 1, padding=0, bias=True),
            nn.ReLU(inplace=True),  # ReLU
            nn.Conv2d(dim // reduction, dim, 1, padding=0, bias=True),
        )
 
    def forward(self, x):  # x:[b,c,h,w]
        x_gap = self.gap(x)  #  [b,c,1,1]
        cattn = self.ca(x_gap)  # [b,c,1,1]
        return cattn * x
 
class Channel_Shuffle(nn.Module):
    def __init__(self, num_groups):
        super(Channel_Shuffle, self).__init__()
        self.num_groups = num_groups
 
    def forward(self, x):
        batch_size, chs, h, w = x.shape
        chs_per_group = chs // self.num_groups
        x = torch.reshape(x, (batch_size, self.num_groups, chs_per_group, h, w))
        # (batch_size, num_groups, chs_per_group, h, w)
        x = x.transpose(1, 2)  # dim_1 and dim_2
        out = torch.reshape(x, (batch_size, -1, h, w))
        return out
 
class TransformerBlock(nn.Module):
    def __init__(self, dim, num_heads, ffn_expansion_factor, bias, LayerNorm_type):
        super(TransformerBlock, self).__init__()
 
        self.norm1 = LayerNorm(dim, LayerNorm_type)
        self.attn = Attention(dim, num_heads, bias)
        self.norm2 = LayerNorm(dim, LayerNorm_type)
        self.ffn = FeedForward(dim, ffn_expansion_factor, bias)
 
    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        x = x + self.ffn(self.norm2(x))
        return x
 
def to_3d(x):
    return rearrange(x, 'b c h w -> b (h w) c')
 
def to_4d(x, h, w):
    return rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)
 
class BiasFree_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(BiasFree_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)
 
        assert len(normalized_shape) == 1
 
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.normalized_shape = normalized_shape
 
    def forward(self, x):
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return x / torch.sqrt(sigma + 1e-5) * self.weight
 
class WithBias_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(WithBias_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)
 
        assert len(normalized_shape) == 1
 
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.bias = nn.Parameter(torch.zeros(normalized_shape))
        self.normalized_shape = normalized_shape
 
    def forward(self, x):
        mu = x.mean(-1, keepdim=True)
        sigma = x.var(-1, keepdim=True, unbiased=False)
        return (x - mu) / torch.sqrt(sigma + 1e-5) * self.weight + self.bias
 
class LayerNorm(nn.Module):
    def __init__(self, dim, LayerNorm_type):
        super(LayerNorm, self).__init__()
        if LayerNorm_type == 'BiasFree':
            self.body = BiasFree_LayerNorm(dim)
        else:
            self.body = WithBias_LayerNorm(dim)
 
    def forward(self, x):
        h, w = x.shape[-2:]
        return to_4d(self.body(to_3d(x)), h, w)
 
class FeedForward(nn.Module):
    def __init__(self, dim, ffn_expansion_factor, bias):
        super(FeedForward, self).__init__()
 
        hidden_features = int(dim * ffn_expansion_factor)
 
        self.project_in = nn.Conv2d(dim, hidden_features * 2, kernel_size=1, bias=bias)
 
        self.dwconv = nn.Conv2d(hidden_features * 2, hidden_features * 2, kernel_size=3, stride=1, padding=1,
                                groups=hidden_features * 2, bias=bias)
 
        self.project_out = nn.Conv2d(hidden_features, dim, kernel_size=1, bias=bias)
 
    def forward(self, x):
        x = self.project_in(x)
        x1, x2 = self.dwconv(x).chunk(2, dim=1)
        x = F.gelu(x1) * x2
        x = self.project_out(x)
        return x
 
class Attention(nn.Module):
    def __init__(self, dim, num_heads, bias):
        super(Attention, self).__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1, dtype=torch.float32), requires_grad=True)
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=bias)
        self.qkv_dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, stride=1, padding=1, groups=dim * 3,
                                    bias=bias)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1, bias=bias)
 
    def forward(self, x):
        b, c, h, w = x.shape
        qkv = self.qkv(x)
        qkv = self.qkv_dwconv(qkv)
        q, k, v = qkv.chunk(3, dim=1)
 
        q = rearrange(q, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        k = rearrange(k, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
        v = rearrange(v, 'b (head c) h w -> b head c (h w)', head=self.num_heads)
 
        q = torch.nn.functional.normalize(q, dim=-1)
        k = torch.nn.functional.normalize(k, dim=-1)
 
        attn = (q @ k.transpose(-2, -1)) * self.temperature
        attn = attn.softmax(dim=-1)
 
        out = (attn @ v)
 
        out = rearrange(out, 'b head c (h w) -> b (head c) h w', head=self.num_heads, h=h, w=w)
 
        out = self.project_out(out)
        return out
 
class resblock(nn.Module):
    def __init__(self, dim):
        super(resblock, self).__init__()
        # self.norm = LayerNorm(dim, LayerNorm_type='BiasFree')
 
        self.body = nn.Sequential(nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=1, bias=False),
                                  nn.PReLU(),
                                  nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=1, bias=False))
 
    def forward(self, x):
        res = self.body((x))
        res += x
        return res
 
#########################################################################
# Chain-of-Thought Prompt Generation Module (CGM)
class CotPromptParaGen(nn.Module):
    def __init__(self,prompt_inch,prompt_size, num_path=3):
        super(CotPromptParaGen, self).__init__()
 
        # Each transposed conv halves the channels and roughly doubles the
        # spatial size: (128,32,32) -> (64,63,63) -> (32,125,125)
        self.chain_prompts=nn.ModuleList([
            nn.ConvTranspose2d(
                in_channels=prompt_inch if idx==0 else prompt_inch//(2**idx),
                out_channels=prompt_inch//(2**(idx+1)),
                kernel_size=3, stride=2, padding=1
            ) for idx in range(num_path)
        ])
    def forward(self,x):
        prompt_params = []
        prompt_params.append(x)
        for pe in self.chain_prompts:
            x=pe(x)
            prompt_params.append(x)
        return prompt_params
 
#########################################################################
# Content-driven Prompt Block (CPB)
class ContentDrivenPromptBlock(nn.Module):
    def __init__(self, dim, prompt_dim, reduction=8, num_splits=4):
        super(ContentDrivenPromptBlock, self).__init__()
        self.dim = dim
        self.num_splits = num_splits
        self.pa2 = nn.Conv2d(2 * dim, dim, 7, padding=3, padding_mode='reflect', groups=dim, bias=True)
        self.sigmoid = nn.Sigmoid()
        self.conv3x3 = nn.Conv2d(prompt_dim, prompt_dim, kernel_size=3, stride=1, padding=1, bias=False)
        self.conv1x1 = nn.Conv2d(dim, prompt_dim, kernel_size=1, stride=1, bias=False)
        self.sa = SpatialAttention()
        self.ca = ChannelAttention(dim, reduction)
        self.myshuffle = Channel_Shuffle(2)
        self.out_conv1 = nn.Conv2d(prompt_dim + dim, dim, kernel_size=1, stride=1, bias=False)
        # Use nn.ModuleList (not a plain Python list) so the split-wise
        # transformer blocks are registered as submodules and follow the
        # parent module across devices and into the state_dict.
        self.transformer_block = nn.ModuleList([
            TransformerBlock(dim=dim // num_splits, num_heads=1, ffn_expansion_factor=2.66, bias=False,
                             LayerNorm_type='WithBias') for _ in range(num_splits)])
 
    def forward(self, x, prompt_param):
        # latent: (b,dim*8,h/8,w/8)  prompt_param3: (1, 256, 16, 16)
        x_ = x
        B, C, H, W = x.shape
        cattn = self.ca(x)  # channel-wise attn
        sattn = self.sa(x)  # spatial-wise attn
        pattn1 = sattn + cattn
        pattn1 = pattn1.unsqueeze(dim=2)  # [b,c,1,h,w]
        x = x.unsqueeze(dim=2)  # [b,c,1,h,w]
        x2 = torch.cat([x, pattn1], dim=2)  #  [b,c,2,h,w]
        x2 = Rearrange('b c t h w -> b (c t) h w')(x2)  # [b,c*2,h,w]
        x2 = self.myshuffle(x2)  # [c1,c1_att,c2,c2_att,...]
        pattn2 = self.pa2(x2)
        pattn2 = self.conv1x1(pattn2)  # [b,prompt_dim,h,w]
        prompt_weight = self.sigmoid(pattn2)  # sigmoid gate in [0, 1]
 
        prompt_param = F.interpolate(prompt_param, (H, W), mode="bilinear")
        # (b,prompt_dim,prompt_size,prompt_size) -> (b,prompt_dim,h,w)
        prompt = prompt_weight * prompt_param
        prompt = self.conv3x3(prompt)  # (b,prompt_dim,h,w)
 
        inter_x = torch.cat([x_, prompt], dim=1)  # (b,prompt_dim+dim,h,w)
        inter_x = self.out_conv1(inter_x)  # (b,dim,h,w) dim=64
        splits = torch.split(inter_x, self.dim // self.num_splits, dim=1)
 
        transformered_splits = []
        for i, split in enumerate(splits):
            transformered_split = self.transformer_block[i](split)
            transformered_splits.append(transformered_split)
        result = torch.cat(transformered_splits, dim=1)
        return result
 
#########################################################################
# CPA_Enhancer
class CPA_arch(nn.Module):
    def __init__(self, c_in=3, c_out=3, dim=4, prompt_inch=128, prompt_size=32):
        super(CPA_arch, self).__init__()
        self.conv0 = RFAConv(c_in, dim)
        self.conv1 = RFAConv(dim, dim)
        self.conv2 = RFAConv(dim * 2, dim * 2)
        self.conv3 = RFAConv(dim * 4, dim * 4)
        self.conv4 = RFAConv(dim * 8, dim * 8)
        self.conv5 = RFAConv(dim * 8, dim * 4)
        self.conv6 = RFAConv(dim * 4, dim * 2)
        self.conv7 = RFAConv(dim * 2, c_out)
 
        self.down1 = Downsample(dim)
        self.down2 = Downsample(dim * 2)
        self.down3 = Downsample(dim * 4)
 
        self.prompt_param_ini = nn.Parameter(torch.rand(1, prompt_inch, prompt_size, prompt_size)) # (b,c,h,w)
        self.myPromptParamGen = CotPromptParaGen(prompt_inch=prompt_inch,prompt_size=prompt_size)
        self.prompt1 = ContentDrivenPromptBlock(dim=dim * 2 ** 1, prompt_dim=prompt_inch // 4, reduction=8)
        self.prompt2 = ContentDrivenPromptBlock(dim=dim * 2 ** 2, prompt_dim=prompt_inch // 2, reduction=8)
        self.prompt3 = ContentDrivenPromptBlock(dim=dim * 2 ** 3, prompt_dim=prompt_inch , reduction=8)
 
        self.up3 = Upsample(dim * 8)
        self.up2 = Upsample(dim * 4)
        self.up1 = Upsample(dim * 2)
 
    def forward(self, x):  # (b,c_in,h,w)
 
        prompt_params = self.myPromptParamGen(self.prompt_param_ini)
        # With the defaults (prompt_inch=128, prompt_size=32) the chain yields
        # prompt maps of shape (1,128,32,32) -> (1,64,63,63) -> (1,32,125,125);
        # each one is bilinearly resized to its feature map size before use.
        prompt_param1 = prompt_params[2]
        prompt_param2 = prompt_params[1]
        prompt_param3 = prompt_params[0]
        x0 = self.conv0(x)  # (b,dim,h,w)
        x1 = self.conv1(x0)  # (b,dim,h,w)
        x1_down = self.down1(x1)  # (b,dim*2,h/2,w/2)
        x2 = self.conv2(x1_down)  # (b,dim*2,h/2,w/2)
        x2_down = self.down2(x2)
        x3 = self.conv3(x2_down)
        x3_down = self.down3(x3)
        x4 = self.conv4(x3_down)
        x4_prompt = self.prompt3(x4, prompt_param3)
        x3_up = self.up3(x4_prompt)
        x5 = self.conv5(torch.cat([x3_up, x3], 1))
        x5_prompt = self.prompt2(x5, prompt_param2)
        x2_up = self.up2(x5_prompt)
        x2_cat = torch.cat([x2_up, x2], 1)
        x6 = self.conv6(x2_cat)
        x6_prompt = self.prompt1(x6, prompt_param1)
        x1_up = self.up1(x6_prompt)
        x7 = self.conv7(torch.cat([x1_up, x1], 1))
        return x7
 
 
 
if __name__ == "__main__":
    # Generating Sample image
    image_size = (1, 3, 640, 640)
    image = torch.rand(*image_size)
    out = CPA_arch(3, 3, 4)
    out = out(image)
    print(out.size())
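
If everything is wired correctly, the smoke test prints torch.Size([1, 3, 640, 640]): the enhancer downsamples three times and upsamples three times, so the output has the same shape as the input image.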

2.2 Step 2

Create a new file named '__init__.py' under the Addmodules folder, then add the following code to it.
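
The original post leaves the snippet out, so here is a minimal sketch, assuming you kept the file name Enhancer.py from Step 1:

# ultralytics/nn/Addmodules/__init__.py
# Re-export the enhancer so it can be imported via
#   from ultralytics.nn.Addmodules import CPA_arch
from .Enhancer import CPA_arch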

2.3 Step 3

Import the new module in 'ultralytics/nn/tasks.py' so that the YAML parser can resolve the name CPA_arch.
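
The exact code isn't shown in the post; below is a sketch of the usual touch points, assuming a recent ultralytics release (line positions vary between versions):

# ultralytics/nn/tasks.py  (put this near the other module imports)
from ultralytics.nn.Addmodules import CPA_arch

# In recent ultralytics versions, parse_model() resolves module names from the
# YAML through globals(), so the import above is usually all that is needed for
# the 'CPA_arch' entry to be found. If your version instead dispatches on an
# explicit set of classes, add a branch for it; since the enhancer takes no
# YAML args and always outputs a 3-channel image, a hypothetical branch is:
#
#     elif m is CPA_arch:
#         c2 = 3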

That completes the registration; copy the YAML file below and it can be run directly.

The YAML file:

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLOv8 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect
 
# Parameters
nc: 80  # number of classes
scales: # model compound scaling constants, i.e. 'model=yolov8n.yaml' will call yolov8.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.33, 0.25, 1024]  # YOLOv8n summary: 225 layers,  3157200 parameters,  3157184 gradients,   8.9 GFLOPs
  s: [0.33, 0.50, 1024]  # YOLOv8s summary: 225 layers, 11166560 parameters, 11166544 gradients,  28.8 GFLOPs
  m: [0.67, 0.75, 768]   # YOLOv8m summary: 295 layers, 25902640 parameters, 25902624 gradients,  79.3 GFLOPs
  l: [1.00, 1.00, 512]   # YOLOv8l summary: 365 layers, 43691520 parameters, 43691504 gradients, 165.7 GFLOPs
  x: [1.00, 1.25, 512]   # YOLOv8x summary: 365 layers, 68229648 parameters, 68229632 gradients, 258.5 GFLOPs
 
# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, CPA_arch, []]  # 0 (enhancer; output keeps the input resolution)
  - [-1, 1, Conv, [64, 3, 2]]  # 1-P1/2
  - [-1, 1, Conv, [128, 3, 2]]  # 2-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]]  # 4-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]]  # 6-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]]  # 8-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]]  # 10
 
# YOLOv8.0n head
head:
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 7], 1, Concat, [1]]  # cat backbone P4
  - [-1, 3, C2f, [512]]  # 13
 
  - [-1, 1, nn.Upsample, [None, 2, 'nearest']]
  - [[-1, 5], 1, Concat, [1]]  # cat backbone P3
  - [-1, 3, C2f, [256]]  # 16 (P3/8-small)
 
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 13], 1, Concat, [1]]  # cat head P4
  - [-1, 3, C2f, [512]]  # 19 (P4/16-medium)
 
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 10], 1, Concat, [1]]  # cat head P5
  - [-1, 3, C2f, [1024]]  # 22 (P5/32-large)
 
  - [[16, 19, 22], 1, Detect, [nc]]  # Detect(P3, P4, P5)
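
To sanity-check the assembled network, the config can be loaded through the standard ultralytics API. This is a minimal sketch; the file name yolov8-CPA.yaml is just a placeholder for wherever you saved the config above:

from ultralytics import YOLO

# Build the model from the modified config and print a layer summary
model = YOLO('yolov8-CPA.yaml')
model.info()

# Training then works exactly as with a stock YOLOv8 config, e.g.:
# model.train(data='coco128.yaml', epochs=100, imgsz=640)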

And just like that, you've reached the end. If this post helped, give it a like! --_--

作者&#xff1a;码客&#xff08;ygluu 卢益贵&#xff09; 关键词&#xff1a;游戏服务器架构、匿名函数、高性能、异步定时器。 一、前言 本文主要介绍适用于MMO/RPG游戏服务端的、基于匿名函数做定时器回调函数的、高性能异步触发的定时器系统的设计方案&#xff0c;以解决…