How to convert a YOLOv9 model to blob format for OAK cameras?


Editor: OAK China
First published at: oakchina.cn
If you like this post, please 👍⭐️✍
The content may be updated from time to time; the official site always has the latest version, so please check the original link.

Hello everyone, this is OAK China, and I'm Ashely.

Focused on technology, focused on sharing.

I've been really busy lately and haven't blogged in a while. This month a friend asked how to deploy YOLOv9 on an OAK camera, so here is a tutorial for everyone.

1. For conversion and usage tutorials for other YOLO versions, please refer to the earlier tutorials.
2. For detection-type YOLO models, online conversion is recommended (link); only fall back to the local conversion described in this tutorial if the online conversion fails.

▌Converting .pt to .onnx

Use the following script (place it in the YOLOv9 root directory) to convert the PyTorch model to an ONNX model; if openvino_dev is installed, it can further convert the result to an OpenVINO model:

Example usage:

python export_onnx.py -w <path_to_model>.pt -imgsz 640 
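
To export the ONNX model and convert it to a blob in one step, the script's own arguments (see parse_args below) also allow, for example (yolov9-c.pt stands in for your own weights file):

python export_onnx.py -w yolov9-c.pt -imgsz 640 -b -t blobconverter -sh 6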

export_onnx.py:

#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import argparse
import json
import logging
import math
import os
import platform
import sys
import time
import warnings
from io import BytesIO
from pathlib import Path

import torch
from torch import nn

warnings.filterwarnings("ignore")

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLO root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
if platform.system() != "Windows":
    ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from models.experimental import attempt_load
from models.yolo import DDetect, Detect, DualDDetect, DualDetect, TripleDDetect, TripleDetect
from utils.torch_utils import select_device

try:
    from rich import print
    from rich.logging import RichHandler

    logging.basicConfig(
        level="INFO",
        format="%(message)s",
        datefmt="[%X]",
        handlers=[
            RichHandler(
                rich_tracebacks=False,
                show_path=False,
            )
        ],
    )
except ImportError:
    logging.basicConfig(
        level="INFO",
        format="%(asctime)s\t%(levelname)s\t%(message)s",
        datefmt="[%X]",
    )


class DetectV9(nn.Module):
    """YOLOv9 Detect head for detection models"""

    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, old_detect):
        super().__init__()
        self.nc = old_detect.nc  # number of classes
        self.nl = old_detect.nl  # number of detection layers
        self.reg_max = old_detect.reg_max  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = old_detect.no  # number of outputs per anchor
        self.stride = old_detect.stride  # strides computed during build

        self.cv2 = old_detect.cv2
        self.cv3 = old_detect.cv3
        self.dfl = old_detect.dfl
        self.f = old_detect.f
        self.i = old_detect.i

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d1 = [torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1) for i in range(self.nl)]

        box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in d1], 2).split((self.reg_max * 4, self.nc), 1)
        box = self.dfl(box)
        cls_output = cls.sigmoid()
        # Get the max
        conf, _ = cls_output.max(1, keepdim=True)
        # Concat
        y = torch.cat([box, conf, cls_output], dim=1)
        # Split back into one output per detection branch
        outputs = []
        start, end = 0, 0
        for xi in x:
            end += xi.shape[-2] * xi.shape[-1]
            outputs.append(y[:, :, start:end].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start += xi.shape[-2] * xi.shape[-1]

        return outputs

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)


class DualDetectV9(DetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)

        self.cv4 = old_detect.cv4
        self.cv5 = old_detect.cv5
        self.dfl2 = old_detect.dfl2

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d2 = [torch.cat((self.cv4[i](x[self.nl + i]), self.cv5[i](x[self.nl + i])), 1) for i in range(self.nl)]

        box2, cls2 = torch.cat([di.view(shape[0], self.no, -1) for di in d2], 2).split((self.reg_max * 4, self.nc), 1)
        box2 = self.dfl2(box2)
        cls_output2 = cls2.sigmoid()
        # Get the max
        conf2, _ = cls_output2.max(1, keepdim=True)
        # Concat
        y2 = torch.cat([box2, conf2, cls_output2], dim=1)

        # Split back into one output per detection branch
        outputs2 = []
        start2, end2 = 0, 0
        for xi in x[self.nl:]:
            end2 += xi.shape[-2] * xi.shape[-1]
            outputs2.append(y2[:, :, start2:end2].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start2 += xi.shape[-2] * xi.shape[-1]

        return outputs2

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)


class TripleDetectV9(DualDetectV9):
    def __init__(self, old_detect):
        super().__init__(old_detect)

        self.cv6 = old_detect.cv6
        self.cv7 = old_detect.cv7
        self.dfl3 = old_detect.dfl3

    def forward(self, x):
        shape = x[0].shape  # BCHW

        d3 = [
            torch.cat(
                (self.cv6[i](x[self.nl * 2 + i]), self.cv7[i](x[self.nl * 2 + i])),
                1,
            )
            for i in range(self.nl)
        ]

        box3, cls3 = torch.cat([di.view(shape[0], self.no, -1) for di in d3], 2).split((self.reg_max * 4, self.nc), 1)
        box3 = self.dfl3(box3)
        cls_output3 = cls3.sigmoid()
        # Get the max
        conf3, _ = cls_output3.max(1, keepdim=True)
        # Concat
        y3 = torch.cat([box3, conf3, cls_output3], dim=1)

        # Split back into one output per detection branch
        outputs3 = []
        start3, end3 = 0, 0
        for xi in x[self.nl * 2:]:
            end3 += xi.shape[-2] * xi.shape[-1]
            outputs3.append(y3[:, :, start3:end3].view(xi.shape[0], -1, xi.shape[-2], xi.shape[-1]))
            start3 += xi.shape[-2] * xi.shape[-1]

        return outputs3

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module

        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv4, m.cv5, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)
        for a, b, s in zip(m.cv6, m.cv7, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[: m.nc] = math.log(
                5 / m.nc / (640 / s) ** 2
            )  # cls (5 objects and 80 classes per 640 image)


def parse_args():
    parser = argparse.ArgumentParser(
        description="Tool for converting Yolov9 models to the blob format used by OAK",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "-m",
        "-i",
        "-w",
        "--input_model",
        type=Path,
        required=True,
        help="weights path",
    )
    parser.add_argument(
        "-imgsz",
        "--img-size",
        nargs="+",
        type=int,
        default=[640, 640],
        help="image size",
    )  # height, width
    parser.add_argument("-op", "--opset", type=int, default=12, help="opset version")

    parser.add_argument(
        "-n",
        "--name",
        type=str,
        help="The name of the model to be saved, none means using the same name as the input model",
    )
    parser.add_argument(
        "-o",
        "--output_dir",
        type=Path,
        help="Directory for saving files, none means using the same path as the input model",
    )
    parser.add_argument(
        "-b",
        "--blob",
        action="store_true",
        help="OAK Blob export",
    )
    parser.add_argument(
        "-s",
        "--spatial_detection",
        action="store_true",
        help="Inference with depth information",
    )
    parser.add_argument(
        "-sh",
        "--shaves",
        type=int,
        help="Number of shave cores to compile the blob with",
    )
    parser.add_argument(
        "-t",
        "--convert_tool",
        type=str,
        help="Which tool is used to convert, docker: should already have docker (https://docs.docker.com/get-docker/) and docker-py (pip install docker) installed; blobconverter: uses an online server to convert the model and should already have blobconverter (pip install blobconverter); local: use openvino-dev (pip install openvino-dev) and openvino 2022.1 ( https://docs.oakchina.cn/en/latest /pages/Advanced/Neural_networks/local_convert_openvino.html#id2) to convert",
        default="blobconverter",
        choices=["docker", "blobconverter", "local"],
    )

    args = parser.parse_args()
    args.input_model = args.input_model.resolve().absolute()
    if args.name is None:
        args.name = args.input_model.stem

    if args.output_dir is None:
        args.output_dir = args.input_model.parent

    args.img_size *= 2 if len(args.img_size) == 1 else 1  # expand

    if args.shaves is None:
        args.shaves = 5 if args.spatial_detection else 6

    return args


def export(input_model, img_size, output_model, opset, **kwargs):
    t = time.time()

    # Load PyTorch model
    device = select_device("cpu")
    # load FP32 model
    model = attempt_load(input_model, device=device, inplace=True, fuse=True)
    labels = model.module.names if hasattr(model, "module") else model.names  # get class names
    labels = labels if isinstance(labels, list) else list(labels.values())

    # check num classes and labels
    assert model.nc == len(labels), f"Model class count {model.nc} != len(names) {len(labels)}"

    # Replace with the custom Detection Head
    if isinstance(model.model[-1], (Detect, DDetect)):
        logging.info("Replacing model.model[-1] with DetectV9")
        model.model[-1] = DetectV9(model.model[-1])
    elif isinstance(model.model[-1], (DualDetect, DualDDetect)):
        logging.info("Replacing model.model[-1] with DualDetectV9")
        model.model[-1] = DualDetectV9(model.model[-1])
    elif isinstance(model.model[-1], (TripleDetect, TripleDDetect)):
        logging.info("Replacing model.model[-1] with TripleDetectV9")
        model.model[-1] = TripleDetectV9(model.model[-1])

    num_branches = model.model[-1].nl

    # Input
    img = torch.zeros(1, 3, *img_size).to(device)  # dummy detection input, e.g. (1, 3, 640, 640)

    model.eval()

    model(img)  # dry runs

    # ONNX export
    try:
        import onnx

        print()
        logging.info(f"Starting ONNX export with onnx {onnx.__version__}...")
        output_list = ["output%s_yolov6r2" % (i + 1) for i in range(num_branches)]  # yolov6r2-style names, matched by the DepthAI demo below
        with BytesIO() as f:
            torch.onnx.export(
                model,
                img,
                f,
                verbose=False,
                opset_version=opset,
                input_names=["images"],
                output_names=output_list,
            )

            # Checks
            onnx_model = onnx.load_from_string(f.getvalue())  # load onnx model
            onnx.checker.check_model(onnx_model)  # check onnx model

        try:
            import onnxsim

            logging.info("Starting to simplify ONNX...")
            onnx_model, check = onnxsim.simplify(onnx_model)
            assert check, "assert check failed"

        except ImportError:
            logging.warning(
                "onnxsim is not found, if you want to simplify the onnx, "
                + "you should install it:\n\t"
                + "pip install -U onnxsim onnxruntime\n"
                + "then use:\n\t"
                + f'python -m onnxsim "{output_model}" "{output_model}"'
            )
        except Exception:
            logging.exception("Simplifier failure")

        onnx.save(onnx_model, output_model)
        logging.info(f"ONNX export success, saved as:\n\t{output_model}")

    except Exception:
        logging.exception("ONNX export failure")

    # generate anchors and sides
    anchors = []

    # generate masks
    masks = {}

    logging.info(f"anchors:\n\t{anchors}")
    logging.info(f"anchor_masks:\n\t{masks}")
    export_json = output_model.with_suffix(".json")
    export_json.write_text(
        json.dumps(
            {
                "nn_config": {
                    "output_format": "detection",
                    "NN_family": "YOLO",
                    "input_size": f"{img_size[0]}x{img_size[1]}",
                    "NN_specific_metadata": {
                        "classes": model.nc,
                        "coordinates": 4,
                        "anchors": anchors,
                        "anchor_masks": masks,
                        "iou_threshold": 0.3,
                        "confidence_threshold": 0.5,
                    },
                },
                "mappings": {"labels": labels},
            },
            indent=4,
        )
    )
    logging.info(f"Anchors data export success, saved as:\n\t{export_json}")

    # Finish
    logging.info("Export complete (%.2fs).\n" % (time.time() - t))


def convert(convert_tool, output_model, shaves, output_dir, name, **kwargs):
    t = time.time()

    export_dir: Path = output_dir.joinpath(name + "_openvino")
    export_dir.mkdir(parents=True, exist_ok=True)

    export_xml = export_dir.joinpath(name + ".xml")
    export_blob = export_dir.joinpath(name + ".blob")

    if convert_tool == "blobconverter":
        import blobconverter

        blobconverter.from_onnx(
            model=str(output_model),
            data_type="FP16",
            shaves=shaves,
            use_cache=False,
            version="2021.4",
            output_dir=export_dir,
            optimizer_params=[
                "--scale=255",
                "--reverse_input_channel",
                # "--use_new_frontend",
            ],
            # download_ir=True,
        )
        """
        with ZipFile(blob_path, "r", ZIP_LZMA) as zip_obj:
            for name in zip_obj.namelist():
                zip_obj.extract(
                    name,
                    export_dir,
                )
        blob_path.unlink()
        """
    elif convert_tool == "docker":
        import docker

        export_dir = Path("/io").joinpath(export_dir.name)
        export_xml = export_dir.joinpath(name + ".xml")
        export_blob = export_dir.joinpath(name + ".blob")

        client = docker.from_env()
        image = client.images.pull("openvino/ubuntu20_dev", tag="2022.3.1")
        docker_output = client.containers.run(
            image=image.tags[0],
            command=f"bash -c \"mo -m {name}.onnx -n {name} -o {export_dir} --static_shape --reverse_input_channels --scale=255 --use_new_frontend && echo 'MYRIAD_ENABLE_MX_BOOT NO' | tee /tmp/myriad.conf >> /dev/null && /opt/intel/openvino/tools/compile_tool/compile_tool -m {export_xml} -o {export_blob} -ip U8 -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -d MYRIAD -c /tmp/myriad.conf\"",
            remove=True,
            volumes=[
                f"{output_dir}:/io",
            ],
            working_dir="/io",
        )
        logging.info(docker_output.decode("utf8"))
    else:
        import subprocess as sp

        # OpenVINO export
        logging.info("Starting to export OpenVINO...")
        OpenVINO_cmd = f"mo --input_model {output_model} --output_dir {export_dir} --data_type FP16 --scale 255 --reverse_input_channel"
        try:
            sp.check_output(OpenVINO_cmd, shell=True)
            logging.info(f"OpenVINO export success, saved as {export_dir}")
        except sp.CalledProcessError:
            logging.exception("")
            logging.warning("OpenVINO export failure!")
            logging.warning(f"By the way, you can try to export OpenVINO use:\n\t{OpenVINO_cmd}")

        # OAK Blob export
        logging.info("Then you can try to export blob use:")
        blob_cmd = (
            "echo 'MYRIAD_ENABLE_MX_BOOT ON' | tee /tmp/myriad.conf"
            + f"compile_tool -m {export_xml} -o {export_blob} -ip U8 -d MYRIAD -VPU_NUMBER_OF_SHAVES {shaves} -VPU_NUMBER_OF_CMX_SLICES {shaves} -c /tmp/myriad.conf"
        )
        logging.info(f"{blob_cmd}")

        logging.info(
            "compile_tool maybe in the path: /opt/intel/openvino/tools/compile_tool/compile_tool, if you install openvino 2022.1 with apt"
        )

    logging.info("Convert complete (%.2fs).\n" % (time.time() - t))


if __name__ == "__main__":
    args = parse_args()
    logging.info(args)
    print()
    output_model = args.output_dir / (args.name + ".onnx")

    export(output_model=output_model, **vars(args))
    if args.blob:
        convert(output_model=output_model, **vars(args))

You can inspect the exported model structure with Netron:
[Image: exported ONNX model viewed in Netron]
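
A minimal way to open the model from Python, assuming Netron is installed (pip install netron):

import netron

netron.start("yolov9-c.onnx")  # serves the model viewer in your browser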

▌Conversion

Local conversion with OpenVINO

onnx -> openvino

mo is a script that ships with openvino_dev 2022.1; install it with pip install openvino-dev:

mo --input_model yolov9-c.onnx --scale=255 --reverse_input_channel

openvino -> blob

compile_tool is a tool that ships with the OpenVINO Runtime:

<path>/compile_tool -m yolov9-c.xml \
    -ip U8 -d MYRIAD \
    -VPU_NUMBER_OF_SHAVES 6 \
    -VPU_NUMBER_OF_CMX_SLICES 6
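
As a quick sanity check (a sketch, assuming depthai is installed and yolov9-c.blob is in the working directory), you can load the compiled blob and print its input/output layout:

import depthai as dai

blob = dai.OpenVINO.Blob("yolov9-c.blob")
for name, tensor in blob.networkInputs.items():
    print("input:", name, tensor.dims)
for name, tensor in blob.networkOutputs.items():
    print("output:", name, tensor.dims)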

Online conversion

blobconverter web page: http://blobconverter.luxonis.com/

  • Open the page and follow the steps shown in the screenshot:
    [Image: blobconverter web page]

  • Adjust the parameters and convert the model:
    [Image: blobconverter conversion parameters]

  1. Select the ONNX model
  2. Set optimizer_params to --data_type=FP16 --scale=255 --reverse_input_channel
  3. Set shaves to 6
  4. Convert

blobconverter Python code:

import blobconverter

blob_path = blobconverter.from_onnx(
    "yolov9-c.onnx",
    optimizer_params=[
        "--scale=255",
        "--reverse_input_channel",
    ],
    shaves=6,
)

blobconverter CLI:

blobconverter --onnx yolov9-c.onnx -sh 6 -o . --optimizer-params "scale=255 --reverse_input_channel"

▌DepthAI example

Correct decoding requires a few configurable network-specific parameters:

  • setNumClasses – number of YOLO detection classes
  • setIouThreshold – IoU threshold
  • setConfidenceThreshold – confidence threshold; detections below it are filtered out

# coding=utf-8
import cv2
import depthai as dai
import numpy as np

numClasses = 80
model = dai.OpenVINO.Blob("yolov9-c.blob")
dim = next(iter(model.networkInputs.values())).dims  # dims are minor-to-major ordered, i.e. [W, H, C, N]
W, H = dim[:2]

# Infer the class count from the output shape; the exporter above names its
# outputs "outputN_yolov6r2", so the first branch applies to models it produced.
output_name, output_tensor = next(iter(model.networkOutputs.items()))
if "yolov6" in output_name:
    numClasses = output_tensor.dims[2] - 5
else:
    numClasses = output_tensor.dims[2] // 3 - 5

labelMap = [
    # "class_1","class_2","..."
    "class_%s" % i
    for i in range(numClasses)
]

# Create pipeline
pipeline = dai.Pipeline()

# Define sources and outputs
camRgb = pipeline.create(dai.node.ColorCamera)
detectionNetwork = pipeline.create(dai.node.YoloDetectionNetwork)
xoutRgb = pipeline.create(dai.node.XLinkOut)
xoutNN = pipeline.create(dai.node.XLinkOut)

xoutRgb.setStreamName("image")
xoutNN.setStreamName("nn")

# Properties
camRgb.setPreviewSize(W, H)
camRgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
camRgb.setInterleaved(False)
camRgb.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Network specific settings
detectionNetwork.setBlob(model)
detectionNetwork.setConfidenceThreshold(0.5)

# Yolo specific parameters
detectionNetwork.setNumClasses(numClasses)
detectionNetwork.setCoordinateSize(4)
detectionNetwork.setAnchors([])
detectionNetwork.setAnchorMasks({})
detectionNetwork.setIouThreshold(0.5)

# Linking
camRgb.preview.link(detectionNetwork.input)
camRgb.preview.link(xoutRgb.input)
detectionNetwork.out.link(xoutNN.input)

# Connect to device and start pipeline
with dai.Device(pipeline) as device:
    # Output queues will be used to get the rgb frames and nn data from the outputs defined above
    imageQueue = device.getOutputQueue(name="image", maxSize=4, blocking=False)
    detectQueue = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

    frame = None
    detections = []

    # nn data, being the bounding box locations, are in <0..1> range - they need to be normalized with frame width/height
    def frameNorm(frame, bbox):
        normVals = np.full(len(bbox), frame.shape[0])
        normVals[::2] = frame.shape[1]
        return (np.clip(np.array(bbox), 0, 1) * normVals).astype(int)

    def drawText(frame, text, org, color=(255, 255, 255), thickness=1):
        cv2.putText(
            frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0), thickness + 3, cv2.LINE_AA
        )
        cv2.putText(
            frame, text, org, cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, thickness, cv2.LINE_AA
        )

    def drawRect(frame, topLeft, bottomRight, color=(255, 255, 255), thickness=1):
        cv2.rectangle(frame, topLeft, bottomRight, (0, 0, 0), thickness + 3)
        cv2.rectangle(frame, topLeft, bottomRight, color, thickness)

    def displayFrame(name, frame):
        color = (128, 128, 128)
        for detection in detections:
            bbox = frameNorm(
                frame, (detection.xmin, detection.ymin, detection.xmax, detection.ymax)
            )
            drawText(
                frame=frame,
                text=labelMap[detection.label],
                org=(bbox[0] + 10, bbox[1] + 20),
            )
            drawText(
                frame=frame,
                text=f"{detection.confidence:.2%}",
                org=(bbox[0] + 10, bbox[1] + 35),
            )
            drawRect(
                frame=frame,
                topLeft=(bbox[0], bbox[1]),
                bottomRight=(bbox[2], bbox[3]),
                color=color,
            )
        # Show the frame
        cv2.imshow(name, frame)

    while True:
        imageQueueData = imageQueue.tryGet()
        detectQueueData = detectQueue.tryGet()

        if imageQueueData is not None:
            frame = imageQueueData.getCvFrame()

        if detectQueueData is not None:
            detections = detectQueueData.detections

        if frame is not None:
            displayFrame("rgb", frame)

        if cv2.waitKey(1) == ord("q"):
            break
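
Instead of the placeholder class names above, you can load the real labels from the JSON sidecar that export_onnx.py writes next to the ONNX file (a sketch, assuming the file is named yolov9-c.json):

import json
from pathlib import Path

cfg = json.loads(Path("yolov9-c.json").read_text())
labelMap = cfg["mappings"]["labels"]
numClasses = cfg["nn_config"]["NN_specific_metadata"]["classes"]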

▌References

https://docs.oakchina.cn/en/latest/
https://www.oakchina.cn/selection-guide/


OAK China
| Official distributor and technical service provider of OpenCV AI Kit in China
| Tracking the latest AI technology and products

Hit "+Follow" to get the latest updates ↗↗
