AI Project 6: Deploying the CPU Version of YOLOv5 with OpenVINO


This is an original article; please credit the original source when reposting.

I. CPU Demo Test

1. Create a new virtual environment

conda create -n course_torch_openvino python=3.8

2. Activate the environment

conda activate course_torch_openvino

3. Install the CPU version of PyTorch

pip install torch torchvision torchaudio  -i https://pypi.tuna.tsinghua.edu.cn/simple
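
To confirm that the CPU build installed correctly, here is a quick, purely illustrative sanity check:

import torch
print(torch.__version__)          # installed PyTorch version
print(torch.cuda.is_available())  # expected: False for a CPU-only build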

4. Install YOLOv5 requirements

This project uses YOLOv5 v5.0, downloaded from GitHub. Install its dependencies:

pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

5. Run the demo

python demo.py

Full code:

import cv2
import numpy as np
import torch
import time

# model = torch.hub.load('./yolov5', 'custom', path='./weights/ppe_yolo_n.pt', source='local')  # local repo
model = torch.hub.load('./yolov5', 'custom', 'weights/poker_n.pt', source='local')  # load custom weights from the local yolov5 repo
model.conf = 0.4  # confidence threshold

cap = cv2.VideoCapture(0)

fps_time = time.time()

while True:

    ret, frame = cap.read()
    if not ret:
        break

    frame = cv2.flip(frame, 1)  # mirror the webcam image

    img_cvt = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # the model expects RGB input

    # Inference
    results = model(img_cvt)
    result_np = results.pandas().xyxy[0].to_numpy()

    for box in result_np:
        l, t, r, b = box[:4].astype('int')  # left, top, right, bottom

        cv2.rectangle(frame, (l, t), (r, b), (0, 255, 0), 5)
        cv2.putText(frame, str(box[-1]), (l, t - 20), cv2.FONT_ITALIC, 1, (0, 255, 0), 2)  # class name label

    now = time.time()
    fps_text = 1/(now - fps_time)
    fps_time =  now

    cv2.putText(frame,str(round(fps_text,2)),(50,50),cv2.FONT_ITALIC,1,(0,255,0),2)


    cv2.imshow('demo',frame)

    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

The demo runs correctly.

II. Converting YOLOv5 to OpenVINO

1. Install ONNX

pip install onnx==1.11.0

2. Modify export.py

On line 121 of export.py, set the ONNX opset version to

opset_version=10
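
For reference, the relevant torch.onnx.export call in export.py should then look roughly like the sketch below. The surrounding arguments vary between YOLOv5 releases, so only the opset_version change matters here:

# Sketch only: keep whatever other arguments your copy of export.py already passes
torch.onnx.export(model, img, f, verbose=False, opset_version=10,  # changed from the default
                  input_names=['images'])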

3. Export to ONNX

Take the trained best.pt weights file and convert it to an ONNX file.

The export command is:

python export.py --weights ../weights/best.pt --img 640 --batch 1
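
Before converting further, it is worth checking that the exported graph is well formed (an optional sanity check):

import onnx

onnx_model = onnx.load('../weights/best.onnx')  # path matches the export command above
onnx.checker.check_model(onnx_model)            # raises an exception if the graph is invalid
print('best.onnx passed the ONNX checker')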

4. Convert to OpenVINO IR

Install the OpenVINO environment before converting:

pip install openvino-dev[onnx]==2021.4.0 
pip install openvino==2021.4.0

To verify the installation, run mo -h.
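
You can also verify the Python API from inside the environment; available_devices should list CPU:

from openvino.inference_engine import IECore
print(IECore().available_devices)  # e.g. ['CPU']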

Next, convert the model with the following command:

mo --input_model weights/best.onnx  --model_name weights/ir_model   -s 255 --reverse_input_channels --output Conv_294,Conv_245,Conv_196

This generates three files; ir_model.xml is the one we will use.
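
Note that the output node names (Conv_294, Conv_245, Conv_196) are specific to this particular export; your own best.onnx will likely use different names. One way to find the three detection-head convolutions is to list the Conv nodes in the graph (or open the model in Netron); the heads are normally among the last ones printed:

import onnx

onnx_model = onnx.load('weights/best.onnx')
for node in onnx_model.graph.node:
    if node.op_type == 'Conv':
        print(node.name, '->', node.output)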

5. Run the detection demo

python yolov5_demo.py -i cam -m weights/ir_model.xml   -d CPU
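
The -i flag also accepts a video or image path instead of the webcam, for example (test.mp4 is just a placeholder):

python yolov5_demo.py -i test.mp4 -m weights/ir_model.xml -d CPU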

Code:


import logging
import sys
from argparse import ArgumentParser, SUPPRESS
from time import time
import numpy as np
import cv2
from openvino.inference_engine import IECore

logging.basicConfig(format="[ %(levelname)s ] %(message)s", level=logging.INFO, stream=sys.stdout)
log = logging.getLogger()


def build_argparser():
    parser = ArgumentParser(add_help=False)
    args = parser.add_argument_group('Options')
    args.add_argument('-h', '--help', action='help', default=SUPPRESS, help='Show this help message and exit.')
    args.add_argument("-m", "--model", help="Required. Path to an .xml file with a trained model.",
                      required=True, type=str)
    args.add_argument("-i", "--input", help="Required. Path to an image/video file. (Specify 'cam' to work with "
                                            "camera)", required=True, type=str)
    args.add_argument("-l", "--cpu_extension",
                      help="Optional. Required for CPU custom layers. Absolute path to a shared library with "
                           "the kernels implementations.", type=str, default=None)
    args.add_argument("-d", "--device",
                      help="Optional. Specify the target device to infer on; CPU, GPU, FPGA, HDDL or MYRIAD is"
                           " acceptable. The sample will look for a suitable plugin for device specified. "
                           "Default value is CPU", default="CPU", type=str)
    
    args.add_argument("-t", "--prob_threshold", help="Optional. Probability threshold for detections filtering",
                      default=0.5, type=float)
    args.add_argument("-iout", "--iou_threshold", help="Optional. Intersection over union threshold for overlapping "
                                                       "detections filtering", default=0.4, type=float)
 
    return parser


class YoloParams:
    # ------------------------------------------- Extracting layer parameters ------------------------------------------
    # Magic numbers are copied from yolo samples
    def __init__(self,  side):
        self.num = 3 #if 'num' not in param else int(param['num'])
        self.coords = 4 #if 'coords' not in param else int(param['coords'])
        self.classes = 80 #if 'classes' not in param else int(param['classes'])
        self.side = side
        self.anchors = [10.0, 13.0, 16.0, 30.0, 33.0, 23.0, 30.0, 61.0, 62.0, 45.0, 59.0, 119.0, 116.0, 90.0, 156.0,198.0,373.0, 326.0] #if 'anchors' not in param else [float(a) for a in param['anchors'].split(',')]

 


def letterbox(img, size=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    w, h = size

    # Scale ratio (new / old)
    r = min(h / shape[0], w / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = w - new_unpad[0], h - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (w, h)
        ratio = w / shape[1], h / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border

    top2, bottom2, left2, right2 = 0, 0, 0, 0
    if img.shape[0] != h:
        top2 = (h - img.shape[0])//2
        bottom2 = top2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    elif img.shape[1] != w:
        left2 = (w - img.shape[1])//2
        right2 = left2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    return img


def scale_bbox(x, y, height, width, class_id, confidence, im_h, im_w, resized_im_h=640, resized_im_w=640):
    gain = min(resized_im_w / im_w, resized_im_h / im_h)  # gain  = old / new
    pad = (resized_im_w - im_w * gain) / 2, (resized_im_h - im_h * gain) / 2  # wh padding
    x = int((x - pad[0])/gain)
    y = int((y - pad[1])/gain)

    w = int(width/gain)
    h = int(height/gain)
 
    xmin = max(0, int(x - w / 2))
    ymin = max(0, int(y - h / 2))
    xmax = min(im_w, int(xmin + w))
    ymax = min(im_h, int(ymin + h))
    # Method item() used here to convert NumPy types to native types for compatibility with functions, which don't
    # support Numpy types (e.g., cv2.rectangle doesn't support int64 in color parameter)
    return dict(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, class_id=class_id.item(), confidence=confidence.item())


def entry_index(side, coord, classes, location, entry):
    side_power_2 = side ** 2
    n = location // side_power_2
    loc = location % side_power_2
    return int(side_power_2 * (n * (coord + classes + 1) + entry) + loc)


def parse_yolo_region(blob, resized_image_shape, original_im_shape, params, threshold):
    # ------------------------------------------ Validating output parameters ------------------------------------------    
 
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
    predictions = 1.0 / (1.0 + np.exp(-blob))  # sigmoid over the raw outputs

    # ------------------------------------------ Extracting layer parameters -------------------------------------------
    orig_im_h, orig_im_w = original_im_shape
    resized_image_h, resized_image_w = resized_image_shape
    objects = list()
 
    side_square = params.side * params.side

    # ------------------------------------------- Parsing YOLO Region output -------------------------------------------
    bbox_size = int(out_blob_c/params.num) #4+1+num_classes
    index=0
    for row, col, n in np.ndindex(params.side, params.side, params.num):
        bbox = predictions[0, n*bbox_size:(n+1)*bbox_size, row, col]
        x, y, width, height, object_probability = bbox[:5]
        class_probabilities = bbox[5:]
        if object_probability < threshold:
            continue
        x = (2*x - 0.5 + col)*(resized_image_w/out_blob_w)
        y = (2*y - 0.5 + row)*(resized_image_h/out_blob_h)
        # Pick the anchor set that matches this output layer's stride (use 'and', not bitwise '&')
        stride_w = int(resized_image_w / out_blob_w)
        stride_h = int(resized_image_h / out_blob_h)
        if stride_w == 8 and stride_h == 8:      # 80x80 feature map
            idx = 0
        elif stride_w == 16 and stride_h == 16:  # 40x40 feature map
            idx = 1
        elif stride_w == 32 and stride_h == 32:  # 20x20 feature map
            idx = 2
        else:
            continue  # unexpected stride; skip this cell

        width = (2*width)**2* params.anchors[idx * 6 + 2 * n]
        height = (2*height)**2 * params.anchors[idx * 6 + 2 * n + 1]
        class_id = np.argmax(class_probabilities)
        confidence = object_probability
        objects.append(scale_bbox(x=x, y=y, height=height, width=width, class_id=class_id, confidence=confidence,im_h=orig_im_h, im_w=orig_im_w, resized_im_h=resized_image_h, resized_im_w=resized_image_w))
        if index >30:
            break
        index+=1
    return objects


def intersection_over_union(box_1, box_2):
    width_of_overlap_area = min(box_1['xmax'], box_2['xmax']) - max(box_1['xmin'], box_2['xmin'])
    height_of_overlap_area = min(box_1['ymax'], box_2['ymax']) - max(box_1['ymin'], box_2['ymin'])
    if width_of_overlap_area < 0 or height_of_overlap_area < 0:
        area_of_overlap = 0
    else:
        area_of_overlap = width_of_overlap_area * height_of_overlap_area
    box_1_area = (box_1['ymax'] - box_1['ymin']) * (box_1['xmax'] - box_1['xmin'])
    box_2_area = (box_2['ymax'] - box_2['ymin']) * (box_2['xmax'] - box_2['xmin'])
    area_of_union = box_1_area + box_2_area - area_of_overlap
    if area_of_union == 0:
        return 0
    return area_of_overlap / area_of_union


def main():
    args = build_argparser().parse_args()

    # ------------- 1. Plugin initialization for specified device and load extensions library if specified -------------
    ie = IECore()
    if args.cpu_extension and 'CPU' in args.device:
        ie.add_extension(args.cpu_extension, "CPU")
    # -------------------- 2. Reading the IR generated by the Model Optimizer (.xml and .bin files) --------------------
    model = args.model
    net = ie.read_network(model=model)

    # ---------------------------------------------- 4. Preparing inputs -----------------------------------------------
    input_blob = next(iter(net.input_info))

    # Default batch_size is 1
    net.batch_size = 1

    # Read and pre-process input images
    n, c, h, w = net.input_info[input_blob].input_data.shape
    
    # labels_map = [x.strip() for x in f]
    labels_map = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
        'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
        'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
        'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
        'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
        'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
        'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
        'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
        'hair drier', 'toothbrush']

    input_stream = 0 if args.input == "cam" else args.input

    is_async_mode = True
    cap = cv2.VideoCapture(input_stream)
    number_input_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    number_input_frames = 1 if number_input_frames != -1 and number_input_frames < 0 else number_input_frames

    wait_key_code = 1

    # A single-frame input (still image) is read repeatedly; sync mode is the default in that case
    if number_input_frames != 1:
        ret, frame = cap.read()
    else:
        is_async_mode = False
        wait_key_code = 0

    # ----------------------------------------- 5. Loading model to the plugin -----------------------------------------
    exec_net = ie.load_network(network=net, num_requests=2, device_name=args.device)

    cur_request_id = 0
    next_request_id = 1
    render_time = 0
    parsing_time = 0

    # ----------------------------------------------- 6. Doing inference -----------------------------------------------
    initial_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    initial_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    origin_im_size = (initial_h,initial_w)
    while cap.isOpened():
        # Here is the first asynchronous point: in the Async mode, we capture frame to populate the NEXT infer request
        # in the regular mode, we capture frame to the CURRENT infer request
        
        if is_async_mode:
            ret, next_frame = cap.read()
        else:
            ret, frame = cap.read()

        if not ret:
            break

        if is_async_mode:
            request_id = next_request_id
            in_frame = letterbox(next_frame, (w, h))  # preprocess the NEXT frame for the async request
        else:
            request_id = cur_request_id
            in_frame = letterbox(frame, (w, h))

        in_frame = in_frame.transpose((2, 0, 1))  # change data layout from HWC to CHW
        in_frame = in_frame.reshape((n, c, h, w))

        # Start inference
        start_time = time()
        exec_net.start_async(request_id=request_id, inputs={input_blob: in_frame})
        

        # Collecting object detection results
        objects = list()
        if exec_net.requests[cur_request_id].wait(-1) == 0:
            output = exec_net.requests[cur_request_id].output_blobs
            start_time = time()
            
            for layer_name, out_blob in output.items():
                layer_params = YoloParams(side=out_blob.buffer.shape[2])
                objects += parse_yolo_region(out_blob.buffer, in_frame.shape[2:],
                                             frame.shape[:-1], layer_params,
                                             args.prob_threshold)
                
                
            parsing_time = time() - start_time
         
        
        # Filtering overlapping boxes with respect to the --iou_threshold CLI parameter
        objects = sorted(objects, key=lambda obj : obj['confidence'], reverse=True)
        for i in range(len(objects)):
            if objects[i]['confidence'] == 0:
                continue
            for j in range(i + 1, len(objects)):
                if intersection_over_union(objects[i], objects[j]) > args.iou_threshold:
                    objects[j]['confidence'] = 0

        # Drawing objects with respect to the --prob_threshold CLI parameter
        objects = [obj for obj in objects if obj['confidence'] >= args.prob_threshold]

        
      

        for obj in objects:
            # Validation bbox of detected object
            if obj['xmax'] > origin_im_size[1] or obj['ymax'] > origin_im_size[0] or obj['xmin'] < 0 or obj['ymin'] < 0:
                continue
            color = (0,255,0)
            det_label = labels_map[obj['class_id']] if labels_map and len(labels_map) > obj['class_id'] else \
                str(obj['class_id'])


            cv2.rectangle(frame, (obj['xmin'], obj['ymin']), (obj['xmax'], obj['ymax']), color, 2)
            cv2.putText(frame,
                        "#" + det_label + ' ' + str(round(obj['confidence'] * 100, 1)) + ' %',
                        (obj['xmin'], obj['ymin'] - 7), cv2.FONT_ITALIC, 1, color, 2)

        # Draw performance stats over frame
        async_mode_message = "Async mode: ON" if is_async_mode else "Async mode: OFF"
        
        cv2.putText(frame, async_mode_message, (10, int(origin_im_size[0] - 20)), cv2.FONT_ITALIC, 1,
                    (10, 10, 200), 2)

        fps_time = time() - start_time
        if fps_time !=0:
            fps = 1 / fps_time
            cv2.putText(frame, 'fps:'+str(round(fps,2)), (50, 50), cv2.FONT_ITALIC, 1, (0, 255, 0), 2)
        
        cv2.imshow("DetectionResults", frame)


        if is_async_mode:
            cur_request_id, next_request_id = next_request_id, cur_request_id
            frame = next_frame

        key = cv2.waitKey(wait_key_code)
        # ESC key
        if key == 27:
            break
        # Tab key
        if key == 9:
            exec_net.requests[cur_request_id].wait()
            is_async_mode = not is_async_mode
            log.info("Switched to {} mode".format("async" if is_async_mode else "sync"))

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    sys.exit(main() or 0)

III. Summary

With OpenVINO acceleration on a CPU alone (no GPU), the frame rate rose from roughly 20 FPS to over 50 FPS, which is a decent result. The only drawback is that the model I trained myself does not detect very well.

OpenVINO also performs reasonably well on embedded boards such as the Raspberry Pi.

If there are any copyright concerns, or if you need the complete code, please contact the author promptly.
