Object Detection: Reproducing the SSD Object Detection Project in PyTorch


  • 0. Introduction
  • 1. Overall Model Framework (with VGG16 as the Feature Extraction Backbone)
  • 2. Default Box Generation (the counterpart of anchors in Faster R-CNN)
  • 3. How the Prediction Layers Work
  • 4. Selecting Positive and Negative Samples
  • 5. How the Loss Is Computed
  • 6. ResNet50 as the Feature Extraction Backbone
  • 7. Building the ResNet50+SSD Network
  • 8. Default Box Generation Principle
  • 9. Training Your Own SSD Detection Model

0. Introduction

SSD (Single Shot MultiBox Detector) is an object detection algorithm published by Wei Liu at ECCV 2016. With a 300x300 input, it reaches 72.1% mAP on VOC2007 while detecting at an impressive 58 FPS (Faster R-CNN: 73.2% mAP, 7 FPS; YOLOv1: 63.4% mAP, 45 FPS), and the 500x500 version reaches 75.1% mAP. YOLOv2 has since caught up with SSD and YOLOv3 has surpassed it, but the SSD algorithm is still well worth studying.
(figure)

1. Overall Model Framework (with VGG16 as the Feature Extraction Backbone)

(figure)
According to the paper, the feature-extraction backbone corresponds to the yellow-boxed region of VGG16 in the figure above; that is, the output of Conv5_3 is the extracted feature.

Objects of different scales are matched on different feature layers:
(figure)
The 8x8 feature map in the figure above preserves more feature information than the 4x4 one, so smaller objects are predicted on the relatively low-level feature maps and larger objects on the relatively high-level ones.

2. Default Box Generation (the counterpart of anchors in Faster R-CNN)

(figures)
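
The figures above illustrate how the default boxes are parameterized. In the paper, the scale of the k-th of m prediction feature layers follows

s_k = s_{\min} + \frac{s_{\max} - s_{\min}}{m - 1}(k - 1), \quad k \in [1, m], \quad s_{\min} = 0.2, \ s_{\max} = 0.9

and each layer additionally gets one 1:1 box of scale \sqrt{s_k s_{k+1}}. The code below instead follows the original Caffe implementation (see the comment it carries) and hard-codes the pixel scales [21, 45, 99, 153, 207, 261, 315]. The DefaultBoxes class implements this: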

import itertools
from math import sqrt

import numpy as np
import torch


class DefaultBoxes(object):
    def __init__(self, fig_size, feat_size, steps, scales, aspect_ratios, scale_xy=0.1, scale_wh=0.2):
        self.fig_size = fig_size   # input image size fed to the network: 300
        # [38, 19, 10, 5, 3, 1]
        self.feat_size = feat_size  # feature map size of each prediction layer

        self.scale_xy_ = scale_xy
        self.scale_wh_ = scale_wh

        # According to https://github.com/weiliu89/caffe
        # Calculation method slightly different from paper
        # [8, 16, 32, 64, 100, 300]
        self.steps = steps    # stride of one cell of each feature layer on the original image

        # [21, 45, 99, 153, 207, 261, 315]
        self.scales = scales  # scale of the default boxes predicted on each feature layer

        fk = fig_size / np.array(steps)     # compute fk for each feature layer
        # [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
        self.aspect_ratios = aspect_ratios  # aspect ratios of the default boxes on each prediction layer

        self.default_boxes = []
        # size of feature and number of feature
        # iterate over the feature layers and compute their default boxes
        for idx, sfeat in enumerate(self.feat_size):
            sk1 = scales[idx] / fig_size      # convert the scale to a relative value in [0, 1]
            sk2 = scales[idx + 1] / fig_size  # convert the scale to a relative value in [0, 1]
            sk3 = sqrt(sk1 * sk2)
            # first add the widths/heights of the two 1:1 default boxes
            all_sizes = [(sk1, sk1), (sk3, sk3)]

            # then append the widths/heights of the remaining aspect ratios
            for alpha in aspect_ratios[idx]:
                w, h = sk1 * sqrt(alpha), sk1 / sqrt(alpha)
                all_sizes.append((w, h))
                all_sizes.append((h, w))

            # compute all default boxes of this feature layer on the original image
            for w, h in all_sizes:
                for i, j in itertools.product(range(sfeat), repeat=2):  # i -> row (y), j -> column (x)
                    # center coordinates of each default box (in the range [0, 1])
                    cx, cy = (j + 0.5) / fk[idx], (i + 0.5) / fk[idx]
                    self.default_boxes.append((cx, cy, w, h))

        # convert default_boxes to a tensor
        self.dboxes = torch.as_tensor(self.default_boxes, dtype=torch.float32)  # omitting this dtype conversion raises an error later
        self.dboxes.clamp_(min=0, max=1)  # clamp the (x, y, w, h) coordinates to [0, 1]

        # For IoU calculation
        # ltrb is left top coordinate and right bottom coordinate
        # convert (x, y, w, h) to (xmin, ymin, xmax, ymax) for the IoU computation used when matching samples
        self.dboxes_ltrb = self.dboxes.clone()
        self.dboxes_ltrb[:, 0] = self.dboxes[:, 0] - 0.5 * self.dboxes[:, 2]   # xmin
        self.dboxes_ltrb[:, 1] = self.dboxes[:, 1] - 0.5 * self.dboxes[:, 3]   # ymin
        self.dboxes_ltrb[:, 2] = self.dboxes[:, 0] + 0.5 * self.dboxes[:, 2]   # xmax
        self.dboxes_ltrb[:, 3] = self.dboxes[:, 1] + 0.5 * self.dboxes[:, 3]   # ymax

    @property
    def scale_xy(self):
        return self.scale_xy_

    @property
    def scale_wh(self):
        return self.scale_wh_

    def __call__(self, order='ltrb'):
        # return the default boxes in the requested format
        if order == 'ltrb':
            return self.dboxes_ltrb

        if order == 'xywh':
            return self.dboxes
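
As a sanity check, the class can be instantiated with the SSD300 parameters listed in the comments above. This minimal sketch mirrors the dboxes300_coco helper that ssd_model.py imports later; the helper body shown here is an assumption reconstructed from those comments:

def dboxes300_coco():
    # assumption: reconstructed from the parameter lists commented in DefaultBoxes above
    figsize = 300
    feat_size = [38, 19, 10, 5, 3, 1]
    steps = [8, 16, 32, 64, 100, 300]
    scales = [21, 45, 99, 153, 207, 261, 315]
    aspect_ratios = [[2], [2, 3], [2, 3], [2, 3], [2], [2]]
    return DefaultBoxes(figsize, feat_size, steps, scales, aspect_ratios)


dboxes = dboxes300_coco()
print(dboxes(order='ltrb').shape)  # torch.Size([8732, 4])  (4*38*38 + 6*19*19 + 6*10*10 + 6*5*5 + 4*3*3 + 4*1*1 = 8732)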


3. How the Prediction Layers Work

Suppose k default boxes are generated per location; then:
(figure)
At every cell of the feature map, k default boxes are generated, and for each default box the network predicts C class scores (C includes the background class) as well as 4 location offsets, so each prediction layer uses (C + 4) x k convolution kernels.
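
In the ResNet50+SSD code below this is implemented with two parallel 3x3 convolutions per prediction layer. A minimal sketch for the 38x38 layer (k = 4 default boxes, C = 21 classes, 1024 input channels, matching the numbers used later):

import torch
import torch.nn as nn

k, C, in_ch = 4, 21, 1024  # the 38x38 prediction layer of the ResNet50 variant
loc_head = nn.Conv2d(in_ch, k * 4, kernel_size=3, padding=1)   # 4 box offsets per default box
conf_head = nn.Conv2d(in_ch, k * C, kernel_size=3, padding=1)  # C class scores per default box

feat = torch.randn(1, in_ch, 38, 38)
print(loc_head(feat).shape)   # torch.Size([1, 16, 38, 38])
print(conf_head(feat).shape)  # torch.Size([1, 84, 38, 38])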

4. Selecting Positive and Negative Samples

Selecting positive samples:
(figure)
Matching rule 1:
each GT box is matched to the default box that has the highest IoU with it;
Matching rule 2:
any default box whose IoU with any GT box exceeds 0.5 is also treated as a positive sample.

Selecting negative samples:
(figure)
Among the remaining (non-matched) boxes, those with the highest confidence loss (a large value means the network most confidently predicts that box as an object) are selected as negatives,
keeping the negative-to-positive ratio at 3:1.
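
For concreteness, here is a simplified sketch of the two matching rules (an illustrative assumption; the repository's actual matching lives in the Encoder class in utils.py, which is not shown here):

import torch

def match_default_boxes(iou, iou_threshold=0.5):
    # iou: [num_gt, num_dboxes] IoU matrix between GT boxes and default boxes.
    # Returns, for every default box, the index of its matched GT box (-1 = negative).
    best_gt_iou, best_gt_idx = iou.max(dim=0)            # rule 2: best GT for every default box
    matched = torch.full((iou.size(1),), -1, dtype=torch.long)
    keep = best_gt_iou > iou_threshold
    matched[keep] = best_gt_idx[keep]
    best_dbox_idx = iou.max(dim=1).indices               # rule 1: best default box for every GT
    matched[best_dbox_idx] = torch.arange(iou.size(0))   # force-match it even if its IoU <= 0.5
    return matched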

5. How the Loss Is Computed

Classification loss:
(figure)
Localization loss:
The localization loss is computed only over the positive samples.
(figure)
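
Written out, the figures correspond to the loss defined in the SSD paper:

L(x, c, l, g) = \frac{1}{N} \left( L_{conf}(x, c) + \alpha L_{loc}(x, l, g) \right), \quad \alpha = 1

where N is the number of matched (positive) default boxes (images with N = 0 contribute no loss), L_{conf} is a softmax cross-entropy over the selected positives and hard-mined negatives, and L_{loc} is a Smooth L1 loss between the predicted offsets and the encoded GT offsets

\hat{g}^{cx} = \frac{g^{cx} - d^{cx}}{d^{w}}, \quad \hat{g}^{cy} = \frac{g^{cy} - d^{cy}}{d^{h}}, \quad \hat{g}^{w} = \log\frac{g^{w}}{d^{w}}, \quad \hat{g}^{h} = \log\frac{g^{h}}{d^{h}}

computed only over the positives. In the implementation below, the xy terms are additionally multiplied by 10 (1/scale_xy) and the wh terms by 5 (1/scale_wh):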

import torch
import torch.nn as nn


class Loss(nn.Module):
    """
        Implements the loss as the sum of the followings:
        1. Confidence Loss: All labels, with hard negative mining
        2. Localization Loss: Only on positive labels
        Suppose input dboxes has the shape 8732x4
    """
    def __init__(self, dboxes):
        super(Loss, self).__init__()
        # The two scale factors are from the following link
        # http://jany.st/post/2017-11-05-single-shot-detector-ssd-from-scratch-in-tensorflow.html
        self.scale_xy = 1.0 / dboxes.scale_xy  # 10
        self.scale_wh = 1.0 / dboxes.scale_wh  # 5

        self.location_loss = nn.SmoothL1Loss(reduction='none')
        # [num_anchors, 4] -> [4, num_anchors] -> [1, 4, num_anchors]
        self.dboxes = nn.Parameter(dboxes(order="xywh").transpose(0, 1).unsqueeze(dim=0),
                                   requires_grad=False)

        self.confidence_loss = nn.CrossEntropyLoss(reduction='none')

    def _location_vec(self, loc):
        # type: (Tensor) -> Tensor
        """
        Generate Location Vectors
        Compute the regression targets of the ground truth relative to the anchors
        :param loc: GT boxes matched to each anchor, Nx4x8732
        :return:
        """
        gxy = self.scale_xy * (loc[:, :2, :] - self.dboxes[:, :2, :]) / self.dboxes[:, 2:, :]  # Nx2x8732
        gwh = self.scale_wh * (loc[:, 2:, :] / self.dboxes[:, 2:, :]).log()  # Nx2x8732
        return torch.cat((gxy, gwh), dim=1).contiguous()

    def forward(self, ploc, plabel, gloc, glabel):
        # type: (Tensor, Tensor, Tensor, Tensor) -> Tensor
        """
            ploc, plabel: Nx4x8732, Nxlabel_numx8732
                predicted location and labels

            gloc, glabel: Nx4x8732, Nx8732
                ground truth location and labels
        """
        # mask of the positive samples  Tensor: [N, 8732]
        mask = torch.gt(glabel, 0)  # (gt: >)
        # mask1 = torch.nonzero(glabel)
        # number of positives in each image of the batch  Tensor: [N]
        pos_num = mask.sum(dim=1)

        # regression targets of the GT boxes  Tensor: [N, 4, 8732]
        vec_gd = self._location_vec(gloc)

        # sum on four coordinates, and mask
        # localization loss (positives only)
        loc_loss = self.location_loss(ploc, vec_gd).sum(dim=1)  # Tensor: [N, 8732]
        loc_loss = (mask.float() * loc_loss).sum(dim=1)  # Tensor: [N]

        # hard negative mining  Tensor: [N, 8732]
        con = self.confidence_loss(plabel, glabel)

        # positive mask will never be selected
        # collect the negatives
        con_neg = con.clone()
        con_neg[mask] = 0.0
        # sort by confidence loss in descending order  con_idx (Tensor: [N, 8732])
        _, con_idx = con_neg.sort(dim=1, descending=True)
        _, con_rank = con_idx.sort(dim=1)  # a neat trick: this yields each box's rank in the sorted order

        # number of negatives is three times the positives
        # (the hard negative mining part of the original paper),
        # but cannot exceed the total number of boxes, 8732
        neg_num = torch.clamp(3 * pos_num, max=mask.size(1)).unsqueeze(-1)
        neg_mask = torch.lt(con_rank, neg_num)  # (lt: <) Tensor [N, 8732]

        # the final confidence loss sums the selected positive and negative losses
        con_loss = (con * (mask.float() + neg_mask.float())).sum(dim=1)  # Tensor [N]

        # avoid no object detected
        # guard against images that contain no GT box
        total_loss = loc_loss + con_loss
        # e.g. [15, 3, 5, 0] -> [1.0, 1.0, 1.0, 0.0]
        num_mask = torch.gt(pos_num, 0).float()  # whether each image in the batch has positives
        pos_num = pos_num.float().clamp(min=1e-6)  # avoid division by zero
        ret = (total_loss * num_mask / pos_num).mean(dim=0)  # average only over images that have positives
        return ret
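
A quick smoke test with random tensors (shapes taken from the docstring; dboxes300_coco is the helper sketched in Section 2):

dboxes = dboxes300_coco()
criterion = Loss(dboxes)
ploc = torch.randn(2, 4, 8732)            # predicted box offsets
plabel = torch.randn(2, 21, 8732)         # predicted class logits (21 = 20 VOC classes + background)
gloc = torch.rand(2, 4, 8732) + 0.1       # GT boxes in xywh form; w and h must stay positive for the log()
glabel = torch.randint(0, 21, (2, 8732))  # GT labels, 0 = background
print(criterion(ploc, plabel, gloc, glabel))  # a scalar total loss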

6. ResNet50 as the Feature Extraction Backbone

(figure)
Overall ResNet50+SSD architecture:
(figure)

7. Building the ResNet50+SSD Network

res50_backbone.py: building the ResNet50 feature extraction network

import torch.nn as nn
import torch


class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, in_channel, out_channel, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=out_channel,
                               kernel_size=1, stride=1, bias=False)  # squeeze channels
        self.bn1 = nn.BatchNorm2d(out_channel)
        # -----------------------------------------
        self.conv2 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel,
                               kernel_size=3, stride=stride, bias=False, padding=1)
        self.bn2 = nn.BatchNorm2d(out_channel)
        # -----------------------------------------
        self.conv3 = nn.Conv2d(in_channels=out_channel, out_channels=out_channel*self.expansion,
                               kernel_size=1, stride=1, bias=False)  # unsqueeze channels
        self.bn3 = nn.BatchNorm2d(out_channel*self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample

    def forward(self, x):
        identity = x
        if self.downsample is not None:
            identity = self.downsample(x)

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        out += identity
        out = self.relu(out)

        return out


class ResNet(nn.Module):

    def __init__(self, block, blocks_num, num_classes=1000, include_top=True):
        super(ResNet, self).__init__()
        self.include_top = include_top
        self.in_channel = 64

        self.conv1 = nn.Conv2d(3, self.in_channel, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.bn1 = nn.BatchNorm2d(self.in_channel)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, blocks_num[0])
        self.layer2 = self._make_layer(block, 128, blocks_num[1], stride=2)
        self.layer3 = self._make_layer(block, 256, blocks_num[2], stride=2)
        self.layer4 = self._make_layer(block, 512, blocks_num[3], stride=2)
        if self.include_top:
            self.avgpool = nn.AdaptiveAvgPool2d((1, 1))  # output size = (1, 1)
            self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')

    def _make_layer(self, block, channel, block_num, stride=1):
        downsample = None
        if stride != 1 or self.in_channel != channel * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.in_channel, channel * block.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(channel * block.expansion))

        layers = []
        layers.append(block(self.in_channel, channel, downsample=downsample, stride=stride))
        self.in_channel = channel * block.expansion

        for _ in range(1, block_num):
            layers.append(block(self.in_channel, channel))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        if self.include_top:
            x = self.avgpool(x)
            x = torch.flatten(x, 1)
            x = self.fc(x)

        return x


def resnet50(num_classes=1000, include_top=True):
    return ResNet(Bottleneck, [3, 4, 6, 3], num_classes=num_classes, include_top=include_top)
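
A quick sanity check of the plain classifier variant (standard ResNet50 shapes):

model = resnet50()
out = model(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])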

ssd_model.py: building the ResNet50+SSD architecture

import torch
from torch import nn, Tensor
from torch.jit.annotations import List

from .res50_backbone import resnet50
from .utils import dboxes300_coco, Encoder, PostProcess


class Backbone(nn.Module):
    def __init__(self, pretrain_path=None):
        super(Backbone, self).__init__()
        net = resnet50()
        self.out_channels = [1024, 512, 512, 256, 256, 256]   # channels of each prediction feature layer

        if pretrain_path is not None:
            net.load_state_dict(torch.load(pretrain_path))

        self.feature_extractor = nn.Sequential(*list(net.children())[:7])   # build the feature-extraction part

        conv4_block1 = self.feature_extractor[-1][0]

        # change the stride of conv4_block1 from 2 to 1
        conv4_block1.conv1.stride = (1, 1)
        conv4_block1.conv2.stride = (1, 1)
        conv4_block1.downsample[0].stride = (1, 1)

    def forward(self, x):
        x = self.feature_extractor(x)
        return x


class SSD300(nn.Module):
    def __init__(self, backbone=None, num_classes=21):
        super(SSD300, self).__init__()
        if backbone is None:
            raise Exception("backbone is None")
        if not hasattr(backbone, "out_channels"):
            raise Exception("the backbone not has attribute: out_channel")
        self.feature_extractor = backbone

        self.num_classes = num_classes
        # out_channels = [1024, 512, 512, 256, 256, 256] for resnet50
        self._build_additional_features(self.feature_extractor.out_channels)
        self.num_defaults = [4, 6, 6, 6, 4, 4]
        location_extractors = []
        confidence_extractors = []

        # out_channels = [1024, 512, 512, 256, 256, 256] for resnet50
        for nd, oc in zip(self.num_defaults, self.feature_extractor.out_channels):
            # nd is number_default_boxes, oc is output_channel
            location_extractors.append(nn.Conv2d(oc, nd * 4, kernel_size=3, padding=1))
            confidence_extractors.append(nn.Conv2d(oc, nd * self.num_classes, kernel_size=3, padding=1))

        self.loc = nn.ModuleList(location_extractors)
        self.conf = nn.ModuleList(confidence_extractors)
        self._init_weights()

        default_box = dboxes300_coco()
        self.compute_loss = Loss(default_box)
        self.encoder = Encoder(default_box)
        self.postprocess = PostProcess(default_box)

    def _build_additional_features(self, input_size):
        """
        Add a series of extra convolution layers after the backbone (resnet50) to obtain the remaining prediction feature layers
        :param input_size:
        :return:
        """
        additional_blocks = []
        # input_size = [1024, 512, 512, 256, 256, 256] for resnet50
        middle_channels = [256, 256, 128, 128, 128]
        for i, (input_ch, output_ch, middle_ch) in enumerate(zip(input_size[:-1], input_size[1:], middle_channels)):
            padding, stride = (1, 2) if i < 3 else (0, 1)
            layer = nn.Sequential(
                nn.Conv2d(input_ch, middle_ch, kernel_size=1, bias=False),
                nn.BatchNorm2d(middle_ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(middle_ch, output_ch, kernel_size=3, padding=padding, stride=stride, bias=False),
                nn.BatchNorm2d(output_ch),
                nn.ReLU(inplace=True),
            )
            additional_blocks.append(layer)
        self.additional_blocks = nn.ModuleList(additional_blocks)

    def _init_weights(self):
        layers = [*self.additional_blocks, *self.loc, *self.conf]
        for layer in layers:
            for param in layer.parameters():
                if param.dim() > 1:
                    nn.init.xavier_uniform_(param)

    # Shape the classifier to the view of bboxes
    def bbox_view(self, features, loc_extractor, conf_extractor):
        locs = []
        confs = []
        for f, l, c in zip(features, loc_extractor, conf_extractor):
            # [batch, n*4, feat_size, feat_size] -> [batch, 4, -1]
            locs.append(l(f).view(f.size(0), 4, -1))
            # [batch, n*classes, feat_size, feat_size] -> [batch, classes, -1]
            confs.append(c(f).view(f.size(0), self.num_classes, -1))

        locs, confs = torch.cat(locs, 2).contiguous(), torch.cat(confs, 2).contiguous()
        return locs, confs

    def forward(self, image, targets=None):
        x = self.feature_extractor(image)

        # Feature Map 38x38x1024, 19x19x512, 10x10x512, 5x5x256, 3x3x256, 1x1x256
        detection_features = torch.jit.annotate(List[Tensor], [])  # [x]
        detection_features.append(x)
        for layer in self.additional_blocks:
            x = layer(x)
            detection_features.append(x)

        # Feature Map 38x38x4, 19x19x6, 10x10x6, 5x5x6, 3x3x4, 1x1x4
        locs, confs = self.bbox_view(detection_features, self.loc, self.conf)

        # For SSD 300, shall return nbatch x 8732 x {nlabels, nlocs} results
        # 38x38x4 + 19x19x6 + 10x10x6 + 5x5x6 + 3x3x4 + 1x1x4 = 8732

        if self.training:
            if targets is None:
                raise ValueError("In training mode, targets should be passed")
            # bboxes_out (Tensor 8732 x 4), labels_out (Tensor 8732)
            bboxes_out = targets['boxes']
            bboxes_out = bboxes_out.transpose(1, 2).contiguous()
            # print(bboxes_out.is_contiguous())
            labels_out = targets['labels']
            # print(labels_out.is_contiguous())

            # ploc, plabel, gloc, glabel
            loss = self.compute_loss(locs, confs, bboxes_out, labels_out)
            return {"total_losses": loss}

        # add the predicted regression offsets onto the default boxes to get the final boxes,
        # then run non-maximum suppression to remove overlapping boxes
        # results = self.encoder.decode_batch(locs, confs)
        results = self.postprocess(locs, confs)
        return results


(The Loss class defined at the end of ssd_model.py is identical, line for line, to the Loss class already listed in Section 5 above, so it is not repeated here.)
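
To see the effect of the stride change in Backbone, a small check (Backbone itself only needs resnet50, so this runs as long as the module imports resolve):

backbone = Backbone()
feat = backbone(torch.randn(1, 3, 300, 300))
print(feat.shape)  # torch.Size([1, 1024, 38, 38]): still 38x38 because conv4_block1's stride was changed to 1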

8. Default Box Generation Principle

The ResNet50 variant generates its default boxes exactly as described in Section 2 (the DefaultBoxes class): six prediction layers with feature sizes [38, 19, 10, 5, 3, 1], pixel scales [21, 45, 99, 153, 207, 261, 315], and [4, 6, 6, 6, 4, 4] boxes per cell, for 8732 boxes in total.

9. Training Your Own SSD Detection Model

Official ResNet50 pretrained weights
Download address for the pretrained weights (place them in the src folder after downloading):
ResNet50+SSD: https://ngc.nvidia.com/catalog/models
Search for "ssd" -> find "SSD for PyTorch (FP32)" -> download FP32 -> extract the archive

