OpenVINO Experiment Guide


Table of Contents

  • 0. Notes
    • 1. Environment
    • 2. OpenVINO
  • 1. Installing OpenVINO
    • 1. Install a virtual environment manager
    • 2. Create a virtual environment
    • 3. Download the OpenVINO notebooks
    • 4. Install dependencies
  • 2. Retraining
    • 1. Download the YOLOv7 source
    • 2. Modify the configuration
      • 2.1 weights
      • 2.2 cfg
      • 2.3 data
      • 2.4 epochs
      • 2.5 img-size
      • 2.6 batch-size
      • 2.7 device
      • 2.8 Possible errors
        • 2.8.1 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))
        • 2.8.2 Hint: This means that multiple copies of the OpenMP runtime have been linked into the program.
      • 2.9 Changes at a glance
    • 3. Retraining
  • 3. Quantization and Optimization
    • 1. Preparation
    • 2. 226-yolov7-optimization.ipynb
      • 2.1 Prerequisite
      • 2.2 Check model inferences
    • 3. Export to ONNX
    • 4. Convert ONNX Model to OpenVINO Intermediate Representation (IR)
    • 5. Verify model inference
    • 6. Select inference device
    • 7. Verify model accuracy
    • 8. Optimize model using NNCF Post-training Quantization API
    • 9. Validate Quantized model inference
    • 10. Validate quantized model accuracy
  • 4. Performance Comparison

0. Notes


This document accompanies the OpenVINO experiment demonstration on August 22, 2023, using Windows as the example platform.

1. Environment

OS: Windows 10

CPU: 13th Gen Intel® Core™ i9-13900HX

GPU: NVIDIA GeForce RTX 4080 Laptop GPU

2. OpenVINO

OpenVINO is a deep learning toolkit from Intel that is compatible with models trained in a variety of frameworks.

It is an open-source toolkit that makes "write once, deploy anywhere" straightforward.

OpenVINO notebooks repository: https://github.com/openvinotoolkit/openvino_notebooks
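To show what that looks like in code before installing anything, here is a minimal sketch of the OpenVINO runtime workflow; it mirrors the API calls used later in this guide, and the model file name and input shape are placeholders:

import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model('model.xml')          # placeholder IR (or ONNX) file
compiled = core.compile_model(model, 'AUTO')  # device: 'CPU', 'GPU', or 'AUTO'

# Run inference on a dummy input; the real shape depends on the model
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
result = compiled(dummy)[compiled.output(0)]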

1. Installing OpenVINO

1. Install a virtual environment manager

This experiment uses Anaconda to manage virtual environments, which makes deploying the project's runtime environment easy.

Anaconda download: https://www.anaconda.com/download#downloads

Choose the version that matches your machine.


2. Create a virtual environment

# Open Anaconda Prompt and run:
conda create -n openvino python=3.9  # "openvino" is the name of the new environment; pick any name you like

# Once the environment is created, activate it:
conda activate openvino

3. Download the OpenVINO notebooks

# Inside the virtual environment, clone the OpenVINO notebooks
git clone --depth=1 https://github.com/openvinotoolkit/openvino_notebooks.git

# Enter the repository
cd openvino_notebooks

4. Install dependencies

python -m pip install --upgrade pip wheel setuptools
pip install -r requirements.txt  # Note: requirements.txt installs the CPU build of torch

# If you need the GPU build, additionally run the command below (CUDA 11.7; adjust to your setup); otherwise skip it
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

At this point, the OpenVINO notebooks and their environment are ready.

2. Retraining

1. Download the YOLOv7 source

# In the directory where you want the project to live
git clone https://github.com/WongKinYiu/yolov7.git

# Then cd into the project and install its dependencies
pip install -r requirements.txt

2. Modify the configuration

Open train.py in the project and scroll down to the if __name__ == '__main__': block.

2.1 weights

Modify the weights argument: in parser.add_argument('--weights', type=str, default='', help='initial weights path'), set default to the path of your pretrained weights file yolov7-tiny.pt.

2.2 cfg

Modify the cfg argument as follows:

parser.add_argument('--cfg', type=str, default='cfg/training/yolov7-tiny.yaml', help='model.yaml path')

2.3 data

This is the dataset configuration. Copy the entire pothole dataset and pothole.yaml into the project (at the same level as train.py).

Modify the data argument as follows:

parser.add_argument('--data', type=str, default='pothole.yaml', help='data.yaml path')

2.4 epochs

This sets the number of retraining epochs; 10 is enough (though, as long as accuracy is still improving, more is better).

parser.add_argument('--epochs', type=int, default=10)

2.5 img-size

This sets the input image size. It is usually based on the longest side of the input images and must be a multiple of 32. In this experiment, some of the larger images are 1104x828, so I set the size to 1280 (rounded up generously).

parser.add_argument('--img-size', nargs='+', type=int, default=[1280, 1280], help='[train, test] image sizes')

Note: in default, the first 1280 (index 0) is the training input size, 1280 x 1280, and the second 1280 (index 1) is the validation input size, 1280 x 1280; images fed to the model are square.
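If you would rather compute the smallest valid size than round up by eye, the calculation is simple; this sketch is mine (valid_img_size is not a YOLOv7 helper):

import math

def valid_img_size(longest_side: int, stride: int = 32) -> int:
    """Round the longest image side up to the nearest multiple of the model stride."""
    return math.ceil(longest_side / stride) * stride

print(valid_img_size(1104))  # 1120; this guide rounds further up to 1280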

2.6 batch-size

The batch size has to account for GPU memory. On a capable card with plenty of VRAM (12 GB or more, say), batch-size can be set to 16; if you get out-of-memory errors, try 8, 4, or 2 instead.

parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')

2.7 device

This selects the training device: the CPU, a single GPU, or multiple GPUs.

I chose GPU training (usually much faster than the CPU) and have only one GPU, so I set

parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')

Note: a default of 0 means training on cuda:0, the first GPU; numbering starts at 0.

To train on the CPU instead, set

parser.add_argument('--device', default='cpu', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')

2.8 Possible errors

2.8.1 raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str))

Set default to 0 in parser.add_argument('--workers', type=int, default=0, help='maximum number of dataloader workers').

2.8.2 Hint: This means that multiple copies of the OpenMP runtime have been linked into the program.

Add the line os.environ['KMP_DUPLICATE_LIB_OK'] = 'True' inside if __name__ == '__main__':, as shown below.

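A minimal sketch of where the line goes, assuming the stock train.py layout:

import argparse
import os

if __name__ == '__main__':
    # Work around "multiple copies of the OpenMP runtime" errors on Windows
    os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'

    parser = argparse.ArgumentParser()
    # ... the modified argument definitions from sections 2.1-2.8 follow here ...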

2.9 Changes at a glance

Fine-tune these to your own setup as needed.

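For reference, here are the modified argparse defaults from sections 2.1-2.8 collected in one place (the weights path is an example; substitute your own):

parser.add_argument('--weights', type=str, default='yolov7-tiny.pt', help='initial weights path')
parser.add_argument('--cfg', type=str, default='cfg/training/yolov7-tiny.yaml', help='model.yaml path')
parser.add_argument('--data', type=str, default='pothole.yaml', help='data.yaml path')
parser.add_argument('--epochs', type=int, default=10)
parser.add_argument('--batch-size', type=int, default=16, help='total batch size for all GPUs')
parser.add_argument('--img-size', nargs='+', type=int, default=[1280, 1280], help='[train, test] image sizes')
parser.add_argument('--device', default='0', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--workers', type=int, default=0, help='maximum number of dataloader workers')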

3. Retraining

Pick any one of these ways to train:

You can click the Run button to the left of main to start training,

or click Run in the top-right corner,

or open a terminal, activate the virtual environment, and start training from the command line, as sketched below.

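With the defaults edited as above, plain python train.py is enough; equivalently, the same settings can be passed as flags without editing the file (a sketch using the paths from this guide):

python train.py --weights yolov7-tiny.pt --cfg cfg/training/yolov7-tiny.yaml --data pothole.yaml --epochs 10 --batch-size 16 --img-size 1280 1280 --device 0 --workers 0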

Training runs as follows: each epoch takes about 30 seconds.

When training finishes, the console prints the location of the trained model (typically under runs/train/exp*/weights/).

3. Quantization and Optimization

1. Preparation

# Prepare Jupyter: in Anaconda Prompt, activate the virtual environment, then
pip install jupyter

# Copy notebook 226 from openvino_notebooks into the YOLOv7 project (at the same level as detect.py)
# Open the YOLOv7 project with Jupyter
jupyter notebook

2. 226-yolov7-optimization.ipynb

Not every cell in 226-yolov7-optimization.ipynb needs to run; only the optimization-related cells do.

2.1 Prerequisite

Only one import needs to be executed:

from pathlib import Path

2.2 Check model inferences

Check model inference:

!python -W ignore detect.py --weights yolov7.pt --conf 0.25 --img-size 1280 --source img1.jpg

When it finishes, it prints: The image with the result is saved in: runs\detect\exp10\img1.jpg

View the inference result:

from PIL import Image

# visualize prediction result
Image.open('runs/detect/exp10/img1.jpg')


3. Export to ONNX

Convert the model from PyTorch's .pt format to ONNX:

!python -W ignore export.py --weights yolov7.pt --grid


4. Convert ONNX Model to OpenVINO Intermediate Representation (IR)

Converting the ONNX model to IR lets OpenVINO optimize it more effectively (the serialization below writes yolov7.xml together with a companion yolov7.bin weights file):

from openvino.tools import mo
from openvino.runtime import serialize

model = mo.convert_model('yolov7.onnx')
# serialize model for saving IR
serialize(model, 'yolov7.xml')

5. Verify model inference

Verify that the converted model runs inference.

The preprocessing cell can be used as-is (just execute it):

import numpy as np
import torch
from PIL import Image
from utils.datasets import letterbox
from utils.plots import plot_one_box


def preprocess_image(img0: np.ndarray):
    """
    Preprocess image according to YOLOv7 input requirements. 
    Takes an image in np.ndarray format, resizes it to a specific size using letterbox resize, and changes the data layout from HWC to CHW.
    
    Parameters:
      img0 (np.ndarray): image for preprocessing
    Returns:
      img (np.ndarray): image after preprocessing
      img0 (np.ndarray): original image
    """
    # resize
    img = letterbox(img0, auto=False)[0]
    
    # Convert
    img = img.transpose(2, 0, 1)
    img = np.ascontiguousarray(img)
    return img, img0


def prepare_input_tensor(image: np.ndarray):
    """
    Converts a preprocessed image to input-tensor format according to YOLOv7 requirements.
    Takes an image as a uint8 np.ndarray in the [0, 255] range and converts it to a float32 array in the [0, 1] range, adding a batch dimension if needed.

    Parameters:
      image (np.ndarray): image for conversion to tensor
    Returns:
      input_tensor (np.ndarray): float tensor ready to use for YOLOv7 inference
    """
    input_tensor = image.astype(np.float32)  # uint8 to fp16/32
    input_tensor /= 255.0  # 0 - 255 to 0.0 - 1.0
    
    if input_tensor.ndim == 3:
        input_tensor = np.expand_dims(input_tensor, 0)
    return input_tensor


# label names for visualization
NAMES = ['person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light',
         'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
         'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee',
         'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard',
         'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple',
         'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch',
         'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
         'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear',
         'hair drier', 'toothbrush']

# colors for visualization
COLORS = {name: [np.random.randint(0, 255) for _ in range(3)]
          for i, name in enumerate(NAMES)}

The postprocessing cell can likewise be used as-is (execute it):

from typing import List, Tuple, Dict
from utils.general import scale_coords, non_max_suppression
from openvino.runtime import Model


def detect(model: Model, image_path: Path, conf_thres: float = 0.25, iou_thres: float = 0.45, classes: List[int] = None, agnostic_nms: bool = False):
    """
    OpenVINO YOLOv7 model inference function. Reads an image, preprocesses it, runs model inference, and postprocesses the results using NMS.
    Parameters:
        model (Model): OpenVINO compiled model.
        image_path (Path): input image path.
        conf_thres (float, *optional*, 0.25): minimal accepted confidence for object filtering
        iou_thres (float, *optional*, 0.45): minimal overlap score for removing duplicate objects in NMS
        classes (List[int], *optional*, None): labels for prediction filtering; if not provided, all predicted labels will be used
        agnostic_nms (bool, *optional*, False): whether to apply class-agnostic NMS
    Returns:
       pred (List): list of detections with (n,6) shape, where n is the number of detected boxes in format [x1, y1, x2, y2, score, label]
       orig_img (np.ndarray): image before preprocessing, can be used for results visualization
       input_shape (Tuple[int]): shape of model input tensor, can be used for output rescaling
    """
    output_blob = model.output(0)
    img = np.array(Image.open(image_path))
    preprocessed_img, orig_img = preprocess_image(img)
    input_tensor = prepare_input_tensor(preprocessed_img)
    predictions = torch.from_numpy(model(input_tensor)[output_blob])
    pred = non_max_suppression(predictions, conf_thres, iou_thres, classes=classes, agnostic=agnostic_nms)
    return pred, orig_img, input_tensor.shape


def draw_boxes(predictions: np.ndarray, input_shape: Tuple[int], image: np.ndarray, names: List[str], colors: Dict[str, int]):
    """
    Utility function for drawing predicted bounding boxes on an image
    Parameters:
        predictions (np.ndarray): list of detections with (n,6) shape, where n is the number of detected boxes in format [x1, y1, x2, y2, score, label]
        input_shape (Tuple[int]): shape of the model input tensor, used to rescale boxes to the original image size
        image (np.ndarray): image for boxes visualization
        names (List[str]): list of names for each class in dataset
        colors (Dict[str, int]): mapping between class name and drawing color
    Returns:
        image (np.ndarray): box visualization result
    """
    if not len(predictions):
        return image
    # Rescale boxes from input size to original image size
    predictions[:, :4] = scale_coords(input_shape[2:], predictions[:, :4], image.shape).round()

    # Write results
    for *xyxy, conf, cls in reversed(predictions):
        label = f'{names[int(cls)]} {conf:.2f}'
        plot_one_box(xyxy, image, label=label, color=colors[names[int(cls)]], line_thickness=1)
    return image

Read the converted model:

from openvino.runtime import Core
core = Core()
# read converted model
model = core.read_model('yolov7.xml')

6. Select inference device

Select the inference device (CPU, integrated GPU, or AUTO):

import ipywidgets as widgets

device = widgets.Dropdown(
    options=core.available_devices + ["AUTO"],
    value='AUTO',
    description='Device:',
    disabled=False,
)

device

# load the model on the selected device
compiled_model = core.compile_model(model, device.value)

boxes, image, input_shape = detect(compiled_model, 'inference/images/img.jpg')
image_with_boxes = draw_boxes(boxes[0], input_shape, image, NAMES, COLORS)
# visualize results
Image.fromarray(image_with_boxes)


7. Verify model accuracy

Evaluate the converted model's accuracy on the validation dataset:

from collections import namedtuple
import yaml
from utils.datasets import create_dataloader
from utils.general import check_dataset, box_iou, xywh2xyxy, colorstr

# read dataset config
DATA_CONFIG = 'data/pothole.yaml'
with open(DATA_CONFIG) as f:
    data = yaml.load(f, Loader=yaml.SafeLoader)

# Dataloader
TASK = 'val'  # which split to evaluate; data[TASK] is the path to its images
Option = namedtuple('Options', ['single_cls'])  # imitation of commandline provided options for single class evaluation
opt = Option(False)
dataloader = create_dataloader(
    data[TASK], 640, 1, 32, opt, pad=0.5,
    prefix=colorstr(f'{TASK}: ')
)[0]

import numpy as np
from tqdm.notebook import tqdm
from utils.metrics import ap_per_class
from openvino.runtime import Tensor


def test(data,
         model: Model,
         dataloader: torch.utils.data.DataLoader,
         conf_thres: float = 0.001,
         iou_thres: float = 0.65,  # for NMS
         single_cls: bool = False,
         v5_metric: bool = False,
         names: List[str] = None,
         num_samples: int = None
        ):
    """
    YOLOv7 accuracy evaluation. Processes the validation dataset and computes metrics.
    
    Parameters:
        model (Model): OpenVINO compiled model.
        dataloader (torch.utils.data.DataLoader): validation dataset.
        conf_thres (float, *optional*, 0.001): minimal confidence threshold for keeping detections
        iou_thres (float, *optional*, 0.65): IOU threshold for NMS
        single_cls (bool, *optional*, False): class agnostic evaluation
        v5_metric (bool, *optional*, False): use YOLOv5 evaluation approach for metrics calculation
        names (List[str], *optional*, None): names for each class in dataset
        num_samples (int, *optional*, None): number of samples for testing
    Returns:
        mp (float): mean precision
        mr (float): mean recall
        map50 (float): mean average precision at 0.5 IOU threshold
        map (float): mean average precision at 0.5:0.95 IOU thresholds
        maps (Dict[int, float]): average precision per class
        seen (int): number of evaluated images
        labels (int): number of labels
    """

    model_output = model.output(0)
    check_dataset(data)  # check
    nc = 1 if single_cls else int(data['nc'])  # number of classes
    iouv = torch.linspace(0.5, 0.95, 10)  # iou vector for mAP@0.5:0.95
    niou = iouv.numel()

    if v5_metric:
        print("Testing with YOLOv5 AP metric...")
    
    seen = 0
    p, r, mp, mr, map50, map = 0., 0., 0., 0., 0., 0.
    stats, ap, ap_class = [], [], []
    for sample_id, (img, targets, _, shapes) in enumerate(tqdm(dataloader)):
        if num_samples is not None and sample_id == num_samples:
            break
        img = prepare_input_tensor(img.numpy())
        height, width = img.shape[2:]

        with torch.no_grad():
            # Run model
            out = torch.from_numpy(model(Tensor(img))[model_output])  # inference output            
            # Run NMS
            targets[:, 2:] *= torch.Tensor([width, height, width, height])  # to pixels

            out = non_max_suppression(out, conf_thres=conf_thres, iou_thres=iou_thres, labels=None, multi_label=True)
        # Statistics per image
        for si, pred in enumerate(out):
            labels = targets[targets[:, 0] == si, 1:]
            nl = len(labels)
            tcls = labels[:, 0].tolist() if nl else []  # target class
            seen += 1

            if len(pred) == 0:
                if nl:
                    stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
                continue
            # Predictions
            predn = pred.clone()
            scale_coords(img[si].shape[1:], predn[:, :4], shapes[si][0], shapes[si][1])  # native-space pred
            # Assign all predictions as incorrect
            correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device='cpu')
            if nl:
                detected = []  # target indices
                tcls_tensor = labels[:, 0]
                # target boxes
                tbox = xywh2xyxy(labels[:, 1:5])
                scale_coords(img[si].shape[1:], tbox, shapes[si][0], shapes[si][1])  # native-space labels
                # Per target class
                for cls in torch.unique(tcls_tensor):
                    ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1)  # target indices
                    pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1)  # prediction indices
                    # Search for detections
                    if pi.shape[0]:
                        # Prediction to target ious
                        ious, i = box_iou(predn[pi, :4], tbox[ti]).max(1)  # best ious, indices
                        # Append detections
                        detected_set = set()
                        for j in (ious > iouv[0]).nonzero(as_tuple=False):
                            d = ti[i[j]]  # detected target
                            if d.item() not in detected_set:
                                detected_set.add(d.item())
                                detected.append(d)
                                correct[pi[j]] = ious[j] > iouv  # iou_thres is 1xn
                                if len(detected) == nl:  # all targets already located in image
                                    break
            # Append statistics (correct, conf, pcls, tcls)
            stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
    # Compute statistics
    stats = [np.concatenate(x, 0) for x in zip(*stats)]  # to numpy
    if len(stats) and stats[0].any():
        p, r, ap, f1, ap_class = ap_per_class(*stats, plot=True, v5_metric=v5_metric, names=names)
        ap50, ap = ap[:, 0], ap.mean(1)  # AP@0.5, AP@0.5:0.95
        mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
        nt = np.bincount(stats[3].astype(np.int64), minlength=nc)  # number of targets per class
    else:
        nt = torch.zeros(1)
    maps = np.zeros(nc) + map
    for i, c in enumerate(ap_class):
        maps[c] = ap[i]
    return mp, mr, map50, map, maps, seen, nt.sum()

mp, mr, map50, map, maps, num_images, labels = test(data=data, model=compiled_model, dataloader=dataloader, names=NAMES)
# Print results
s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Labels', 'Precision', 'Recall', 'mAP@.5', 'mAP@.5:.95')
print(s)
pf = '%20s' + '%12i' * 2 + '%12.3g' * 4  # print format
print(pf % ('all', num_images, labels, mp, mr, map50, map))


8. Optimize model using NNCF Post-training Quantization API

Optimization is a three-step process:

  1. Create a Dataset for quantization.
  2. Run nncf.quantize to get an optimized model.
  3. Serialize the OpenVINO IR model, using the openvino.runtime.serialize function.

Optimize the model through the NNCF interface:

import nncf  # noqa: F811


def transform_fn(data_item):
    """
    Quantization transform function. Extracts and preprocess input data from dataloader item for quantization.
    Parameters:
       data_item: Tuple with data item produced by DataLoader during iteration
    Returns:
        input_tensor: Input data for quantization
    """
    img = data_item[0].numpy()
    input_tensor = prepare_input_tensor(img) 
    return input_tensor


quantization_dataset = nncf.Dataset(dataloader, transform_fn)
quantized_model = nncf.quantize(model, quantization_dataset, preset=nncf.QuantizationPreset.MIXED)

serialize(quantized_model, 'yolov7-tiny_int8.xml')

9. Validate Quantized model inference

Validate inference with the quantized model.

Select the device:

device
int8_compiled_model = core.compile_model(quantized_model, device.value)
boxes, image, input_shape = detect(int8_compiled_model, 'inference/images/img.jpg')
image_with_boxes = draw_boxes(boxes[0], input_shape, image, NAMES, COLORS)
Image.fromarray(image_with_boxes)


10. Validate quantized model accuracy

Validate the quantized model's accuracy:

int8_result = test(data=data, model=int8_compiled_model, dataloader=dataloader, names=NAMES)
mp, mr, map50, map, maps, num_images, labels = int8_result
# Print results
s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Labels', 'Precision', 'Recall', 'mAP@.5', 'mAP@.5:.95')
print(s)
pf = '%20s' + '%12i' * 2 + '%12.3g' * 4  # print format
print(pf % ('all', num_images, labels, mp, mr, map50, map))


As the results show, accuracy is essentially unchanged.

4. 性能比对

Compare the performance of the original model against the quantized model.

Select the device:

device

Benchmark the original model:

# Inference FP32 model (OpenVINO IR)

!benchmark_app -m yolov7.xml -d $device.value -api async

Benchmark the quantized model:

# Inference INT8 model (OpenVINO IR)

!benchmark_app -m yolov7-tiny_int8.xml -d $device.value -api async

The results:

| Performance Comparison | Precision | Recall | mAP@.5 | mAP@.5:.95 | Throughput |
|------------------------|-----------|--------|--------|------------|------------|
| Original model         | 0.685     | 0.532  | 0.556  | 0.259      | 68.90 FPS  |
| Quantized model        | 0.703     | 0.522  | 0.557  | 0.259      | 103.89 FPS |

Clearly, OpenVINO's optimization is effective: with essentially the same accuracy, inference throughput improves substantially (from 68.90 to 103.89 FPS, roughly 1.5x).

