MindSpore (昇思) 25-Day Learning Camp, Day 15 | Garbage Classification Based on MobileNetV2


一、About MobileNetV2

MobileNet is a lightweight CNN family designed for mobile, embedded, and IoT devices. It uses depthwise separable convolutions to drastically reduce the number of parameters and the amount of computation at the cost of only a small drop in accuracy, and it introduces a width multiplier α and a resolution multiplier β so that the model can be tailored to different application scenarios.
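
The reduction from depthwise separable convolution can be made precise (a standard result from the MobileNet paper, added here for reference): for a $D_K \times D_K$ kernel with $M$ input channels, $N$ output channels, and a $D_F \times D_F$ feature map, the cost relative to a standard convolution is

$$\frac{D_K \cdot D_K \cdot M \cdot D_F \cdot D_F + M \cdot N \cdot D_F \cdot D_F}{D_K \cdot D_K \cdot M \cdot N \cdot D_F \cdot D_F} = \frac{1}{N} + \frac{1}{D_K^2},$$

so with 3×3 kernels the computation and parameter count drop by roughly a factor of 8 to 9.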

Because the ReLU activation in MobileNet loses a large amount of information when it operates on low-dimensional features, MobileNetV2 introduces the inverted residual block and linear bottlenecks in its design. This improves accuracy while making the optimized model even smaller.

二、Building the MobileNetV2 Model

When defining the modules of the MobileNetV2 network in MindSpore, each module inherits from mindspore.nn.Cell. Cell is the base class of all neural network layers (Conv2d and the like).
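
As a minimal illustration (not part of the original model code), a custom Cell only needs an __init__ that builds its sub-layers and a construct method that defines the forward computation; the SimpleBlock name and layer choices below are hypothetical:

import mindspore.nn as nn

class SimpleBlock(nn.Cell):
    """A hypothetical 3x3 convolution + ReLU block, shown only to illustrate the Cell pattern."""
    def __init__(self, in_channels, out_channels):
        super(SimpleBlock, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3)
        self.relu = nn.ReLU()

    def construct(self, x):
        # construct() defines the forward pass and is invoked when the cell is called.
        return self.relu(self.conv(x))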

# Imports assumed by the code below (MindSpore 2.x-style module paths).
import os

import numpy as np
from easydict import EasyDict
from PIL import Image

import mindspore as ms
import mindspore.nn as nn
import mindspore.ops as P
import mindspore.dataset as de
import mindspore.dataset.vision as C
import mindspore.dataset.transforms as C2
from mindspore import Tensor, export, load_checkpoint
from mindspore import dtype as mstype

__all__ = ['MobileNetV2', 'MobileNetV2Backbone', 'MobileNetV2Head', 'mobilenet_v2']

def _make_divisible(v, divisor, min_value=None):
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v
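
_make_divisible rounds a (possibly width-scaled) channel count to the nearest multiple of divisor, and bumps the result up by one divisor if rounding would drop below 90% of the original value. A few illustrative calls (a usage sketch, assuming the function above):

print(_make_divisible(32 * 0.75, 8))  # 24: already a multiple of 8
print(_make_divisible(67, 8))         # 64: rounded to the nearest multiple of 8
print(_make_divisible(10, 8))         # 16: 8 would lose more than 10% of 10, so one divisor is added back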

class GlobalAvgPooling(nn.Cell):
    """
    Global avg pooling definition.

    Args:

    Returns:
        Tensor, output tensor.

    Examples:
        >>> GlobalAvgPooling()
    """

    def __init__(self):
        super(GlobalAvgPooling, self).__init__()

    def construct(self, x):
        x = P.mean(x, (2, 3))
        return x

class ConvBNReLU(nn.Cell):
    """
    Convolution/Depthwise fused with Batchnorm and ReLU block definition.

    Args:
        in_planes (int): Input channel.
        out_planes (int): Output channel.
        kernel_size (int): Input kernel size.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        groups (int): Channel groups. 1 for standard convolution, equal to the input channel count for depthwise convolution. Default: 1.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> ConvBNReLU(16, 256, kernel_size=1, stride=1, groups=1)
    """

    def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1):
        super(ConvBNReLU, self).__init__()
        padding = (kernel_size - 1) // 2
        in_channels = in_planes
        out_channels = out_planes
        if groups == 1:
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad', padding=padding)
        else:
            out_channels = in_planes
            conv = nn.Conv2d(in_channels, out_channels, kernel_size, stride, pad_mode='pad',
                             padding=padding, group=in_channels)

        layers = [conv, nn.BatchNorm2d(out_planes), nn.ReLU6()]
        self.features = nn.SequentialCell(layers)

    def construct(self, x):
        output = self.features(x)
        return output

class InvertedResidual(nn.Cell):
    """
    Mobilenetv2 residual block definition.

    Args:
        inp (int): Input channel.
        oup (int): Output channel.
        stride (int): Stride size for the first convolutional layer. Default: 1.
        expand_ratio (int): Expand ratio of the input channel.

    Returns:
        Tensor, output tensor.

    Examples:
        >>> InvertedResidual(3, 256, 1, 1)
    """

    def __init__(self, inp, oup, stride, expand_ratio):
        super(InvertedResidual, self).__init__()
        assert stride in [1, 2]

        hidden_dim = int(round(inp * expand_ratio))
        self.use_res_connect = stride == 1 and inp == oup

        layers = []
        if expand_ratio != 1:
            layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1))
        layers.extend([
            ConvBNReLU(hidden_dim, hidden_dim,
                       stride=stride, groups=hidden_dim),
            nn.Conv2d(hidden_dim, oup, kernel_size=1,
                      stride=1, has_bias=False),
            nn.BatchNorm2d(oup),
        ])
        self.conv = nn.SequentialCell(layers)
        self.cast = P.Cast()

    def construct(self, x):
        identity = x
        x = self.conv(x)
        if self.use_res_connect:
            return P.add(identity, x)
        return x
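
A quick shape check of the block (a usage sketch, assuming the classes and imports above): with stride 1 and matching input/output channels the residual connection is taken, otherwise the block simply returns the transformed tensor.

import numpy as np
from mindspore import Tensor

x = Tensor(np.ones((1, 16, 56, 56), np.float32))
block = InvertedResidual(16, 16, stride=1, expand_ratio=6)       # residual connection used
print(block(x).shape)                                            # (1, 16, 56, 56)
block_down = InvertedResidual(16, 24, stride=2, expand_ratio=6)  # no residual, spatial downsampling
print(block_down(x).shape)                                       # (1, 24, 28, 28)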

class MobileNetV2Backbone(nn.Cell):
    """
    MobileNetV2 backbone (feature extractor).

    Args:
        width_mult (float): Channel width multiplier. Default is 1.
        inverted_residual_setting (list): Inverted residual settings. Default is None.
        round_nearest (int): Round channel numbers to the nearest multiple of this value. Default is 8.
        input_channel (int): Number of channels of the first convolution layer. Default is 32.
        last_channel (int): Number of channels of the last 1x1 convolution layer. Default is 1280.
    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Backbone()
    """

    def __init__(self, width_mult=1., inverted_residual_setting=None, round_nearest=8,
                 input_channel=32, last_channel=1280):
        super(MobileNetV2Backbone, self).__init__()
        block = InvertedResidual
        # setting of inverted residual blocks
        self.cfgs = inverted_residual_setting
        if inverted_residual_setting is None:
            self.cfgs = [
                # t, c, n, s
                [1, 16, 1, 1],
                [6, 24, 2, 2],
                [6, 32, 3, 2],
                [6, 64, 4, 2],
                [6, 96, 3, 1],
                [6, 160, 3, 2],
                [6, 320, 1, 1],
            ]

        # building first layer
        input_channel = _make_divisible(input_channel * width_mult, round_nearest)
        self.out_channels = _make_divisible(last_channel * max(1.0, width_mult), round_nearest)
        features = [ConvBNReLU(3, input_channel, stride=2)]
        # building inverted residual blocks
        for t, c, n, s in self.cfgs:
            output_channel = _make_divisible(c * width_mult, round_nearest)
            for i in range(n):
                stride = s if i == 0 else 1
                features.append(block(input_channel, output_channel, stride, expand_ratio=t))
                input_channel = output_channel
        features.append(ConvBNReLU(input_channel, self.out_channels, kernel_size=1))
        self.features = nn.SequentialCell(features)
        self._initialize_weights()

    def construct(self, x):
        x = self.features(x)
        return x

    def _initialize_weights(self):
        """
        Initialize weights.

        Args:

        Returns:
            None.

        Examples:
            >>> _initialize_weights()
        """
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.set_data(Tensor(np.random.normal(0, np.sqrt(2. / n),
                                                          m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(
                        Tensor(np.zeros(m.bias.data.shape, dtype="float32")))
            elif isinstance(m, nn.BatchNorm2d):
                m.gamma.set_data(
                    Tensor(np.ones(m.gamma.data.shape, dtype="float32")))
                m.beta.set_data(
                    Tensor(np.zeros(m.beta.data.shape, dtype="float32")))

    @property
    def get_features(self):
        return self.features

class MobileNetV2Head(nn.Cell):
    """
    MobileNetV2 classification head.

    Args:
        input_channel (int): Number of input channels from the backbone. Default is 1280.
        num_classes (int): Number of classes. Default is 1000.
        has_dropout (bool): Whether dropout is used. Default is False.
        activation (str): Optional output activation, "Sigmoid" or "Softmax". Default is "None".
    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Head()
    """

    def __init__(self, input_channel=1280, num_classes=1000, has_dropout=False, activation="None"):
        super(MobileNetV2Head, self).__init__()
        # mobilenet head
        head = ([GlobalAvgPooling(), nn.Dense(input_channel, num_classes, has_bias=True)] if not has_dropout else
                [GlobalAvgPooling(), nn.Dropout(0.2), nn.Dense(input_channel, num_classes, has_bias=True)])
        self.head = nn.SequentialCell(head)
        self.need_activation = True
        if activation == "Sigmoid":
            self.activation = nn.Sigmoid()
        elif activation == "Softmax":
            self.activation = nn.Softmax()
        else:
            self.need_activation = False
        self._initialize_weights()

    def construct(self, x):
        x = self.head(x)
        if self.need_activation:
            x = self.activation(x)
        return x

    def _initialize_weights(self):
        """
        Initialize weights.

        Args:

        Returns:
            None.

        Examples:
            >>> _initialize_weights()
        """
        self.init_parameters_data()
        for _, m in self.cells_and_names():
            if isinstance(m, nn.Dense):
                m.weight.set_data(Tensor(np.random.normal(
                    0, 0.01, m.weight.data.shape).astype("float32")))
                if m.bias is not None:
                    m.bias.set_data(
                        Tensor(np.zeros(m.bias.data.shape, dtype="float32")))

    @property
    def get_head(self):
        return self.head

class MobileNetV2(nn.Cell):
    """
    MobileNetV2 architecture (backbone + head).

    Args:
        num_classes (int): Number of classes. Default is 1000.
        width_mult (float): Channel width multiplier. Default is 1.
        has_dropout (bool): Whether dropout is used. Default is False.
        inverted_residual_setting (list): Inverted residual settings. Default is None.
        round_nearest (int): Round channel numbers to the nearest multiple of this value. Default is 8.
    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2(num_classes=1000)
    """

    def __init__(self, num_classes=1000, width_mult=1., has_dropout=False, inverted_residual_setting=None, \
        round_nearest=8, input_channel=32, last_channel=1280):
        super(MobileNetV2, self).__init__()
        backbone = MobileNetV2Backbone(width_mult=width_mult,
                                       inverted_residual_setting=inverted_residual_setting,
                                       round_nearest=round_nearest, input_channel=input_channel,
                                       last_channel=last_channel)
        self.backbone = backbone.get_features
        self.head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=num_classes,
                                    has_dropout=has_dropout).get_head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

class MobileNetV2Combine(nn.Cell):
    """
    MobileNetV2Combine architecture.

    Args:
        backbone (Cell): the features extract layers.
        head (Cell):  the fully connected layers.
    Returns:
        Tensor, output tensor.

    Examples:
        >>> MobileNetV2Combine(backbone, head)
    """

    def __init__(self, backbone, head):
        super(MobileNetV2Combine, self).__init__(auto_prefix=False)
        self.backbone = backbone
        self.head = head

    def construct(self, x):
        x = self.backbone(x)
        x = self.head(x)
        return x

def mobilenet_v2(backbone, head):
    return MobileNetV2Combine(backbone, head)
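
As a quick smoke test (not from the original article), the full classifier can be assembled from a backbone and a head and run on a random input:

import numpy as np
from mindspore import Tensor

backbone = MobileNetV2Backbone()                 # out_channels = 1280 with the default width_mult
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=26)
net = mobilenet_v2(backbone, head)

dummy = Tensor(np.random.randn(1, 3, 224, 224).astype(np.float32))
print(net(dummy).shape)                          # (1, 26)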

Load a MobileNetV2 model pre-trained on ImageNet and fine-tune it: only the newly replaced FC layer is trained, and checkpoints are saved during training. The pre-trained model can be downloaded from the resources section below.

def switch_precision(net, data_type):
    if ms.get_context('device_target') == "Ascend":
        net.to_float(data_type)
        for _, cell in net.cells_and_names():
            if isinstance(cell, nn.Dense):
                cell.to_float(ms.float32)
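
switch_precision only takes effect on Ascend devices: it casts the whole network to the given data type (typically float16) while keeping the Dense layers in float32 for numerical stability. A usage sketch, assuming a network built as in the smoke test above:

import mindspore as ms

switch_precision(net, ms.float16)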

三、Garbage Classification Dataset

To download the dataset, first install the download package: pip install download

from download import download

# Download the data_en dataset
url = "https://ascend-professional-construction-dataset.obs.cn-north-4.myhuaweicloud.com:443/MindStudio-pc/data_en.zip" 
path = download(url, "./", kind="zip", replace=True)


Dataset labels:

# Garbage classification dataset labels and the dictionaries used for label mapping.
garbage_classes = {
    '干垃圾': ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服'],
    '可回收物': ['报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张'],
    '湿垃圾': ['菜叶', '橙皮', '蛋壳', '香蕉皮'],
    '有害垃圾': ['电池', '药片胶囊', '荧光灯', '油漆桶']
}

class_cn = ['贝壳', '打火机', '旧镜子', '扫把', '陶瓷碗', '牙刷', '一次性筷子', '脏污衣服',
            '报纸', '玻璃制品', '篮球', '塑料瓶', '硬纸板', '玻璃瓶', '金属制品', '帽子', '易拉罐', '纸张',
            '菜叶', '橙皮', '蛋壳', '香蕉皮',
            '电池', '药片胶囊', '荧光灯', '油漆桶']
class_en = ['Seashell', 'Lighter','Old Mirror', 'Broom','Ceramic Bowl', 'Toothbrush','Disposable Chopsticks','Dirty Cloth',
            'Newspaper', 'Glassware', 'Basketball', 'Plastic Bottle', 'Cardboard','Glass Bottle', 'Metalware', 'Hats', 'Cans', 'Paper',
            'Vegetable Leaf','Orange Peel', 'Eggshell','Banana Peel',
            'Battery', 'Tablet capsules','Fluorescent lamp', 'Paint bucket']

index_en = {'Seashell': 0, 'Lighter': 1, 'Old Mirror': 2, 'Broom': 3, 'Ceramic Bowl': 4, 'Toothbrush': 5, 'Disposable Chopsticks': 6, 'Dirty Cloth': 7,
            'Newspaper': 8, 'Glassware': 9, 'Basketball': 10, 'Plastic Bottle': 11, 'Cardboard': 12, 'Glass Bottle': 13, 'Metalware': 14, 'Hats': 15, 'Cans': 16, 'Paper': 17,
            'Vegetable Leaf': 18, 'Orange Peel': 19, 'Eggshell': 20, 'Banana Peel': 21,
            'Battery': 22, 'Tablet capsules': 23, 'Fluorescent lamp': 24, 'Paint bucket': 25}

Create the dataset:

def create_dataset(dataset_path, config, training=True, buffer_size=1000):
    """
    create a train or eval dataset

    Args:
        dataset_path (string): the path of the dataset.
        config (EasyDict): the config for train and eval on different platforms.

    Returns:
        the train or eval dataset
    """
    data_path = os.path.join(dataset_path, 'train' if training else 'test')
    ds = de.ImageFolderDataset(data_path, num_parallel_workers=4, class_indexing=config.class_index)
    resize_height = config.image_height
    resize_width = config.image_width
    
    normalize_op = C.Normalize(mean=[0.485*255, 0.456*255, 0.406*255], std=[0.229*255, 0.224*255, 0.225*255])
    change_swap_op = C.HWC2CHW()
    type_cast_op = C2.TypeCast(mstype.int32)

    if training:
        crop_decode_resize = C.RandomCropDecodeResize(resize_height, scale=(0.08, 1.0), ratio=(0.75, 1.333))
        horizontal_flip_op = C.RandomHorizontalFlip(prob=0.5)
        color_adjust = C.RandomColorAdjust(brightness=0.4, contrast=0.4, saturation=0.4)
    
        train_trans = [crop_decode_resize, horizontal_flip_op, color_adjust, normalize_op, change_swap_op]
        train_ds = ds.map(input_columns="image", operations=train_trans, num_parallel_workers=4)
        train_ds = train_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        
        train_ds = train_ds.shuffle(buffer_size=buffer_size)
        ds = train_ds.batch(config.batch_size, drop_remainder=True)
    else:
        decode_op = C.Decode()
        resize_op = C.Resize((int(resize_width/0.875), int(resize_width/0.875)))
        center_crop = C.CenterCrop(resize_width)
        
        eval_trans = [decode_op, resize_op, center_crop, normalize_op, change_swap_op]
        eval_ds = ds.map(input_columns="image", operations=eval_trans, num_parallel_workers=4)
        eval_ds = eval_ds.map(input_columns="label", operations=type_cast_op, num_parallel_workers=4)
        ds = eval_ds.batch(config.eval_batch_size, drop_remainder=True)

    return ds

The config parameters used here:

# Training hyperparameters
config = EasyDict({
    "num_classes": 26,
    "image_height": 224,
    "image_width": 224,
    #"data_split": [0.9, 0.1],
    "backbone_out_channels":1280,
    "batch_size": 32,
    "eval_batch_size": 32,
    "epochs": 100,
    "lr_max": 0.05,
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "save_ckpt_epochs": 1,
    "dataset_path": "./data_en",
    "class_index": index_en,
    "pretrained_ckpt": "./mobilenetV2-200_1067.ckpt" # mobilenetV2-200_1067.ckpt 
})
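
The training loop for the fine-tuning step described earlier is not shown in this article. The sketch below is one possible way to wire it up with the helpers defined above: build the training dataset with create_dataset, load the pre-trained weights into the backbone, freeze the backbone so only the head's FC layer is updated, and save checkpoints during training. The loss, optimizer, and callback choices are assumptions based on the hyperparameters in config, not the article's original script.

from mindspore import load_checkpoint, load_param_into_net
from mindspore.train import Model
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

# Training dataset built from the config above.
train_dataset = create_dataset(config.dataset_path, config, training=True)

# Backbone + 26-class head.
backbone = MobileNetV2Backbone()
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)

# Load the ImageNet pre-trained weights; only parameters whose names match the backbone are loaded.
param_dict = load_checkpoint(config.pretrained_ckpt)
load_param_into_net(backbone, param_dict)

# Freeze the backbone so only the head's FC layer is trained.
for param in backbone.get_parameters():
    param.requires_grad = False

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
opt = nn.Momentum(head.trainable_params(), learning_rate=config.lr_max,
                  momentum=config.momentum, weight_decay=config.weight_decay)
model = Model(network, loss_fn=loss, optimizer=opt, metrics={'acc'})

# Save a checkpoint every `save_ckpt_epochs` epochs.
steps_per_epoch = train_dataset.get_dataset_size()
ckpt_cfg = CheckpointConfig(save_checkpoint_steps=config.save_ckpt_epochs * steps_per_epoch)
ckpt_cb = ModelCheckpoint(prefix="mobilenetV2_garbage", directory="./", config=ckpt_cfg)

model.train(config.epochs, train_dataset, callbacks=[ckpt_cb, LossMonitor()])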

四、Model Inference

Download the trained MobileNetV2 garbage classification model, then run predictions on your own images.

CKPT="save_mobilenetV2_model.ckpt"
def image_process(image):
    """Process one image at a time.
    
    Args:
        image: shape (H, W, C)
    """
    mean=[0.485*255, 0.456*255, 0.406*255]
    std=[0.229*255, 0.224*255, 0.225*255]
    image = (np.array(image) - mean) / std
    image = image.transpose((2,0,1))
    img_tensor = Tensor(np.array([image], np.float32))
    return img_tensor

def infer_one(network, image_path):
    image = Image.open(image_path).resize((config.image_height, config.image_width))
    logits = network(image_process(image))
    pred = np.argmax(logits.asnumpy(), axis=1)[0]
    print(image_path, class_en[pred])

def infer():
    backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
    head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
    network = mobilenet_v2(backbone, head)
    load_checkpoint(CKPT, network)
    for i in range(91, 100):
        infer_one(network, f'data_en/test/Cardboard/000{i}.jpg')
infer()

五、Exporting AIR/GEIR/ONNX Model Files

backbone = MobileNetV2Backbone(last_channel=config.backbone_out_channels)
head = MobileNetV2Head(input_channel=backbone.out_channels, num_classes=config.num_classes)
network = mobilenet_v2(backbone, head)
load_checkpoint(CKPT, network)

input = np.random.uniform(0.0, 1.0, size=[1, 3, 224, 224]).astype(np.float32)
# export(network, Tensor(input), file_name='mobilenetv2.air', file_format='AIR')
# export(network, Tensor(input), file_name='mobilenetv2.pb', file_format='GEIR')
export(network, Tensor(input), file_name='mobilenetv2.onnx', file_format='ONNX')

六、Resource Downloads

1. Pre-trained MobileNetV2 model.
2. Trained MobileNetV2 garbage classification model.
