Setting up a YOLOv8 environment on a Mac: object tracking, counting, trajectory plotting, and multithreading


🥇 Copyright: this article was originally written and first published by [Moli Learns AI]. Thanks for reading, and likes, comments, and shares are always appreciated.
🎉 Statement: as one of the bloggers sharing the most practical AI content on the web, ❤️ make every moment count ❤️


Table of Contents

    • 📙 Setting up the YOLOv8 environment on a Mac
    • 📙 Running the code
        • Inference test
        • Model training - export to ONNX
        • Video object detection
        • Using the Mac's built-in camera
        • Persisting Tracks Loop - continuous object tracking
        • Plotting Tracks - drawing trajectories
        • Multithreaded Tracking - multithreading example
    • 📙 YOLO series hands-on posts
        • 🟦 YOLO theory and learning
        • 🟧 YOLOv5 series
        • 🟨 YOLOX series
        • 🟦 YOLOv3 series
        • 🟨 YOLOv8 series
        • 🟦 Continuously updated
    • ❤️ Life is short - come learn AI with Moli

📙 Setting up the YOLOv8 environment on a Mac

  • A base-model Mac is enough for YOLO inference tests and small-dataset training
  • The code here was run on an M1 Pro Mac

The conda environment is set up as follows:


conda create -n yolopy39 python=3.9
conda activate yolopy39

pip3 install torch torchvision torchaudio

# ultralytics requires opencv-python>=4.6.0,
# so I installed this version:
pip3 install opencv-python==4.6.0.66

cd Desktop

mkdir moli

cd moli

git clone https://github.com/ultralytics/ultralytics.git
cd ultralytics
pip install -e .

pwd             
/Users/moli/Desktop/moli/ultralytics
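
A quick way to confirm the environment works is the built-in checks utility; a minimal sketch (the MPS line only matters if your PyTorch build ships Apple-silicon support):

import torch
import ultralytics

ultralytics.checks()                       # prints Ultralytics / Python / torch / device info
print(torch.backends.mps.is_available())   # True when the Metal (MPS) backend is usable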


📙 Running the code

The examples below mainly follow these two official references:

  • https://github.com/ultralytics/ultralytics
  • https://docs.ultralytics.com/modes/track/#persisting-tracks-loop

Inference test

yolo predict model=yolov8n.pt source='https://ultralytics.com/images/bus.jpg'
# Output:
Matplotlib is building the font cache; this may take a moment.
Downloading https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8n.pt to 'yolov8n.pt'...
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.25M/6.25M [01:34<00:00, 69.6kB/s]
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
YOLOv8n summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs

Downloading https://ultralytics.com/images/bus.jpg to 'bus.jpg'...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 134k/134k [00:00<00:00, 470kB/s]
image 1/1 /Users/moli/Desktop/moli/ultralytics/bus.jpg: 640x480 4 persons, 1 bus, 1 stop sign, 221.3ms
Speed: 5.8ms preprocess, 221.3ms inference, 4.0ms postprocess per image at shape (1, 3, 640, 480)
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/predict
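
The same prediction can also be run through the Python API instead of the yolo CLI; a minimal sketch (the save directory will differ on your machine):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # weights are downloaded on first use
results = model.predict("https://ultralytics.com/images/bus.jpg", save=True)
print(results[0].boxes.cls, results[0].boxes.conf)  # detected class ids and confidences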

Model training - export to ONNX

vim train_test.py

from ultralytics import YOLO

# Load a model
model = YOLO("yolov8n.yaml")  # build a new model from scratch
model = YOLO("yolov8n.pt")  # load a pretrained model (recommended for training)

# Use the model
model.train(data="coco8.yaml", epochs=3)  # train the model
metrics = model.val()  # evaluate model performance on the validation set
results = model("https://ultralytics.com/images/bus.jpg")  # predict on an image

# ONNX export is also built in; just call export() with the target format
path = model.export(format="onnx")  # export the model to ONNX format

Running it produces the following output:

python train_test.py 

[W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
engine/trainer: task=detect, mode=train, model=yolov8n.pt, data=coco8.yaml, epochs=3, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/Users/moli/Desktop/moli/ultralytics/runs/detect/train

Dataset 'coco8.yaml' images not found ⚠️, missing path '/Users/moli/Desktop/moli/datasets/coco8/images/val'
Downloading https://ultralytics.com/assets/coco8.zip to '/Users/moli/Desktop/moli/datasets/coco8.zip'...
100%|███████████████████████████████████████████████████████████████████████████████████████████| 433k/433k [00:03<00:00, 135kB/s]
Unzipping /Users/moli/Desktop/moli/datasets/coco8.zip to /Users/moli/Desktop/moli/datasets/coco8...: 100%|██████████| 25/25 [00:00
Dataset download success ✅ (5.4s), saved to /Users/moli/Desktop/moli/datasets


...
...

Logging results to /Users/moli/Desktop/moli/ultralytics/runs/detect/train
Starting training for 3 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        1/3         0G      1.412      2.815      1.755         22        640: 100%|██████████| 1/1 [00:01<00:00,  1.90s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  1.30
                   all          4         17      0.613      0.883      0.888      0.616

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        2/3         0G      1.249      2.621      1.441         23        640: 100%|██████████| 1/1 [00:01<00:00,  1.51s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.24
                   all          4         17      0.598      0.896      0.888      0.618

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
        3/3         0G      1.142      4.221      1.495         16        640: 100%|██████████| 1/1 [00:01<00:00,  1.50s/it]
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.06
                   all          4         17       0.58      0.833      0.874      0.613

3 epochs completed in 0.002 hours.
Optimizer stripped from /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/last.pt, 6.5MB
Optimizer stripped from /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt, 6.5MB

Validating /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt...
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
Model summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  1.72
                   all          4         17      0.599      0.898      0.888      0.618
                person          3         10      0.647        0.5       0.52       0.29
                   dog          1          1      0.315          1      0.995      0.597
                 horse          1          2      0.689          1      0.995      0.598
              elephant          1          2      0.629      0.887      0.828      0.332
              umbrella          1          1      0.539          1      0.995      0.995
          potted plant          1          1      0.774          1      0.995      0.895
Speed: 4.2ms preprocess, 134.0ms inference, 0.0ms loss, 0.8ms postprocess per image
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)
Model summary (fused): 168 layers, 3,151,904 parameters, 0 gradients, 8.7 GFLOPs
val: Scanning /Users/moli/Desktop/moli/datasets/coco8/labels/val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%|██████████| 1/1 [00:00<00:00,  2.02
                   all          4         17      0.599      0.898      0.888      0.618
                person          3         10      0.647        0.5       0.52       0.29
                   dog          1          1      0.315          1      0.995      0.597
                 horse          1          2      0.689          1      0.995      0.598
              elephant          1          2      0.629      0.887      0.828      0.332
              umbrella          1          1      0.539          1      0.995      0.995
          potted plant          1          1      0.774          1      0.995      0.895
Speed: 4.1ms preprocess, 113.0ms inference, 0.0ms loss, 0.7ms postprocess per image
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train2

image 1/1 /Users/moli/Desktop/moli/ultralytics/ultralytics/assets/bus.jpg: 640x480 4 persons, 1 bus, 188.4ms
Speed: 3.9ms preprocess, 188.4ms inference, 1.0ms postprocess per image at shape (1, 3, 640, 480)
Ultralytics YOLOv8.2.77 🚀 Python-3.9.19 torch-2.2.2 CPU (Apple M1 Pro)

# Model export begins

PyTorch: starting from '/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (6.2 MB)
requirements: Ultralytics requirement ['onnx>=1.12.0'] not found, attempting AutoUpdate...

Looking in indexes: http://pypi.douban.com/simple, http://mirrors.aliyun.com/pypi/simple/, https://pypi.tuna.tsinghua.edu.cn/simple/, http://pypi.mirrors.ustc.edu.cn/simple/
Collecting onnx>=1.12.0
  Downloading http://mirrors.ustc.edu.cn/pypi/packages/4e/35/abbf2fa3dbb96b430f6e810e3fb7bc042ed150f371cb1aedb47052c40f8e/onnx-1.16.2-cp39-cp39-macosx_11_0_universal2.whl (16.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.5/16.5 MB 11.4 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.20 in /Users/moli/opt/anaconda3/envs/yolopy39/lib/python3.9/site-packages (from onnx>=1.12.0) (1.26.4)
Collecting protobuf>=3.20.2 (from onnx>=1.12.0)
  Downloading http://mirrors.ustc.edu.cn/pypi/packages/ca/bc/bceb11aa96dd0b2ae7002d2f46870fbdef7649a0c28420f0abb831ee3294/protobuf-5.27.3-cp38-abi3-macosx_10_9_universal2.whl (412 kB)
Installing collected packages: protobuf, onnx
Successfully installed onnx-1.16.2 protobuf-5.27.3

requirements: AutoUpdate success ✅ 22.0s, installed 1 package: ['onnx>=1.12.0']
requirements: ⚠️ Restart runtime or rerun command for updates to take effect


ONNX: starting export with onnx 1.16.2 opset 17...
ONNX: export success ✅ 24.4s, saved as '/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx' (12.2 MB)

Export complete (26.1s)
Results saved to /Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights
Predict:         yolo predict task=detect model=/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx imgsz=640  
Validate:        yolo val task=detect model=/Users/moli/Desktop/moli/ultralytics/runs/detect/train/weights/best.onnx imgsz=640 data=/Users/moli/Desktop/moli/ultralytics/ultralytics/cfg/datasets/coco8.yaml  
Visualize:       https://netron.app

Training and the ONNX export both completed successfully; the run directory now contains:

ls runs/detect/train/       

F1_curve.png			R_curve.png			confusion_matrix_normalized.png	results.csv			train_batch1.jpg		val_batch0_pred.jpg
PR_curve.png			args.yaml			labels.jpg			results.png			train_batch2.jpg		weights
P_curve.png			confusion_matrix.png		labels_correlogram.jpg		train_batch0.jpg		val_batch0_labels.jpg

(yolopy39) moli@molideMacBook-Pro ultralytics % ls runs/detect/train/weights 
best.onnx	best.pt		last.pt
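
The exported best.onnx can be loaded back through the same YOLO class for inference; a short sketch (it assumes onnxruntime is available - Ultralytics usually auto-installs it the first time an ONNX model is run, otherwise pip install onnxruntime):

from ultralytics import YOLO

# Load the ONNX weights produced by model.export(format="onnx")
onnx_model = YOLO("runs/detect/train/weights/best.onnx")

# Predict exactly as with the .pt model
results = onnx_model("https://ultralytics.com/images/bus.jpg")
print(results[0].boxes.xyxy)  # detected boxes in xyxy format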

Video object detection

cat yolov8_1.py
from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model
#model = YOLO("yolov8n-seg.pt")  # Load an official Segment model
#model = YOLO("yolov8n-pose.pt")  # Load an official Pose model
#model = YOLO("path/to/best.pt")  # Load a custom trained model

# Perform tracking with the model
source = 'video/people.mp4'
results = model.track(source, show=True)  # Tracking with default tracker

The code runs with the following result:

(screenshot: tracking results on the people video)

Using the Mac's built-in camera

Just set source = 0:

from ultralytics import YOLO

# Load an official or custom model
model = YOLO("yolov8n.pt")  # Load an official Detect model

#source = 'video/people.mp4'
source = 0
results = model.track(source, show=True)  # Tracking with default tracker

# results = model.track(source, show=True, tracker="bytetrack.yaml")  # with ByteTrack

Example output:

(screenshot: webcam tracking results)

Persisting Tracks Loop - continuous object tracking
  • https://docs.ultralytics.com/modes/track/#tracker-selection

vim yolov8PersistingTracksLoop.py

                  
import cv2

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "./video/test_people.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

Running python3 yolov8PersistingTracksLoop.py gives the following:

(screenshot: persisting-tracks result)
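
The post title also mentions counting. One simple way to count - my own sketch, not part of the official examples - is to accumulate the unique track IDs seen while the persisting-tracks loop runs and overlay the running total on each frame:

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
cap = cv2.VideoCapture("./video/test_people.mp4")

seen_ids = set()  # every track ID observed so far

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    results = model.track(frame, persist=True)

    # boxes.id is None on frames where nothing is currently tracked
    if results[0].boxes.id is not None:
        seen_ids.update(results[0].boxes.id.int().cpu().tolist())

    annotated_frame = results[0].plot()
    cv2.putText(annotated_frame, f"unique objects so far: {len(seen_ids)}",
                (20, 40), cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("YOLOv8 Counting", annotated_frame)

    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()

Note that the tracker re-issues IDs when a track is lost, so this over-counts on difficult footage; a line-crossing counter (as in the Ubuntu 22.04 post listed at the end) is more robust for busy scenes.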

Plotting Tracks - drawing trajectories

vim yolov8PlottingTracks.py


from collections import defaultdict

import cv2
import numpy as np

from ultralytics import YOLO

# Load the YOLOv8 model
model = YOLO("yolov8n.pt")

# Open the video file
video_path = "./video/test_people.mp4"
cap = cv2.VideoCapture(video_path)

# Store the track history
track_history = defaultdict(lambda: [])

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()

    if success:
        # Run YOLOv8 tracking on the frame, persisting tracks between frames
        results = model.track(frame, persist=True)

        # Get the boxes and track IDs
        boxes = results[0].boxes.xywh.cpu()
        track_ids = results[0].boxes.id.int().cpu().tolist()

        # Visualize the results on the frame
        annotated_frame = results[0].plot()

        # Plot the tracks
        for box, track_id in zip(boxes, track_ids):
            x, y, w, h = box
            track = track_history[track_id]
            track.append((float(x), float(y)))  # x, y center point
            if len(track) > 30:  # keep only the last 30 center points per track
                track.pop(0)

            # Draw the tracking lines
            points = np.hstack(track).astype(np.int32).reshape((-1, 1, 2))
            cv2.polylines(annotated_frame, [points], isClosed=False, color=(230, 230, 230), thickness=10)

        # Display the annotated frame
        cv2.imshow("YOLOv8 Tracking", annotated_frame)

        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the video capture object and close the display window
cap.release()
cv2.destroyAllWindows()

Running python3 yolov8PlottingTracks.py gives the following; note the trail drawn behind each pedestrian.

(screenshot: tracking with trajectory lines)

Multithreaded Tracking - multithreading example

vim yolov8MultithreadedTracking.py

  • This example loads two models and runs them in two threads. Because OpenCV's GUI is not thread-safe (cv2.imshow from worker threads is unreliable, especially on macOS), the display windows may never appear, so the code needs further changes - see the sketch after the code below.

import threading

import cv2

from ultralytics import YOLO


def run_tracker_in_thread(filename, model, file_index):
    """
    Runs a video file or webcam stream concurrently with the YOLOv8 model using threading.

    This function captures video frames from a given file or camera source and utilizes the YOLOv8 model for object
    tracking. The function runs in its own thread for concurrent processing.

    Args:
        filename (str): The path to the video file or the identifier for the webcam/external camera source.
        model (obj): The YOLOv8 model object.
        file_index (int): An index to uniquely identify the file being processed, used for display purposes.

    Note:
        Press 'q' to quit the video display window.
    """
    video = cv2.VideoCapture(filename)  # Read the video file

    while True:
        ret, frame = video.read()  # Read the video frames

        # Exit the loop if no more frames in either video
        if not ret:
            break

        # Track objects in frames if available
        results = model.track(frame, persist=True)
        res_plotted = results[0].plot()
        cv2.imshow(f"Tracking_Stream_{file_index}", res_plotted)

        key = cv2.waitKey(1)
        if key == ord("q"):
            break

    # Release video sources
    video.release()


# Load the models
model1 = YOLO("yolov8n.pt")
model2 = YOLO("yolov8n-seg.pt")

# Define the video files for the trackers
video_file1 = "video/test_people.mp4"  # Path to video file, 0 for webcam
#video_file2 = 'video/test_traffic.mp4'  # Path to video file, 0 for webcam, 1 for external camera
video_file2 = 0
# Create the tracker threads
tracker_thread1 = threading.Thread(target=run_tracker_in_thread, args=(video_file1, model1, 1), daemon=True)
tracker_thread2 = threading.Thread(target=run_tracker_in_thread, args=(video_file2, model2, 2), daemon=True)

# Start the tracker threads
tracker_thread1.start()
tracker_thread2.start()

# Wait for the tracker threads to finish
tracker_thread1.join()
tracker_thread2.join()

# Clean up and close windows
cv2.destroyAllWindows()
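
As noted above, the display problem comes from calling cv2.imshow inside worker threads; on macOS the HighGUI windows only behave reliably on the main thread. One possible rework (my own sketch, not the official example) is to have the worker threads only produce annotated frames into queues, keeping all display calls in the main thread:

import queue
import threading

import cv2
from ultralytics import YOLO


def track_to_queue(source, model, frame_queue):
    """Run tracking in a worker thread and push annotated frames into a queue."""
    cap = cv2.VideoCapture(source)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        results = model.track(frame, persist=True)
        frame_queue.put(results[0].plot())
    cap.release()
    frame_queue.put(None)  # sentinel: this stream has finished


q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threading.Thread(target=track_to_queue, args=("video/test_people.mp4", YOLO("yolov8n.pt"), q1), daemon=True).start()
threading.Thread(target=track_to_queue, args=(0, YOLO("yolov8n-seg.pt"), q2), daemon=True).start()

finished = 0
while finished < 2:
    for idx, frame_queue in enumerate((q1, q2), start=1):
        try:
            frame = frame_queue.get(timeout=0.01)
        except queue.Empty:
            continue
        if frame is None:
            finished += 1
            continue
        cv2.imshow(f"Tracking_Stream_{idx}", frame)  # imshow stays on the main thread
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cv2.destroyAllWindows()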


📙 YOLO series hands-on posts


🟦 YOLO theory and learning
🟧 YOLOv5 series
  • 💜 YOLOv5 environment setup | coco128 training example | detailed notes [YOLOv5]
  • 💜 YOLOv5 training on the COCO dataset [YOLOv5 training]
🟨 YOLOX series
  • 💛 YOLOX environment setup | testing | COCO training reproduction [YOLOX hands-on 1]
  • 💛 YOLOX (PyTorch) model ONNX export | inference [YOLOX hands-on 2]
  • 💛 YOLOX (PyTorch) model to ONNX to ncnn inference [YOLOX hands-on 3]
  • 💛 YOLOX (PyTorch) model to TensorRT inference [YOLOX hands-on 4]
🟦 YOLOv3 series
  • 💙 YOLOv3 (darknet) training - testing - model conversion, darknet to ncnn C++ inference [YOLOv3 hands-on overview]
  • 💙 YOLOv3 ncnn model yolov3-spp.cpp [YOLOv3 ncnn inference, with code]
🟨 YOLOv8 series
  • Setting up a YOLOv8 environment on Ubuntu 22.04 and running example code (trajectory tracking, line-crossing people counting, object heatmaps)
🟦 Continuously updated

❤️ Life is short - come learn AI with Moli


  • 🎉 As one of the bloggers sharing the most practical AI content on the web, ❤️ make every moment count ❤️
  • ❤️ If this article helped you even a little, please like or comment to encourage more careful writing

