[Deep Learning] YOLOv8 Training Walkthrough: A Hands-on YOLOv8 Tutorial for SOTA Object Detection and Keypoint Regression


Table of Contents

  • Available resources
  • Installation
  • Model training (detection)
  • Model prediction
  • Model export

Available resources

https://github.com/ultralytics/ultralytics
Official tutorial: https://docs.ultralytics.com/modes/train/

Installation

I recommend cloning the repository and installing it with the command below, because that lets you modify the source code. If you do not need to change the source, a plain pip install ultralytics works just as well.

pip install -e .

With this editable install you can modify the YOLOv8 source directly and the changes take effect immediately. After a successful install, pip list shows the ultralytics package.
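To confirm that the editable install really resolves to your local clone (a quick optional check, not part of the official workflow), you can print where Python loads the package from:

import ultralytics

# With pip install -e ., __file__ should point into the cloned repository,
# which is why source edits take effect without reinstalling.
print(ultralytics.__version__)
print(ultralytics.__file__)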

Model training (detection)

You can create a new project that uses the installed ultralytics package; because the install is editable, source modifications made in the cloned repository take effect in other projects as well.
Download a demo dataset: https://ultralytics.com/assets/coco128.zip

The project ends up containing train_coco.py, coco128.yaml, and the unzipped coco128 dataset directory.
train_coco.py (I use absolute paths):

from ultralytics import YOLO

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer weights

# Train the model
model.train(data='/ssd/xiedong/workplace/yolov8_script/coco128.yaml', epochs=100, imgsz=640)
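A variant the official training docs also support is loading the pretrained checkpoint directly instead of building from YAML first; a minimal sketch reusing the same machine-specific paths as above:

from ultralytics import YOLO

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# Load the architecture and weights from the checkpoint in one step
model = YOLO('yolov8n.pt')

model.train(data='/ssd/xiedong/workplace/yolov8_script/coco128.yaml', epochs=100, imgsz=640)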

coco128.yaml ships with the YOLOv8 source; copy it out and change the dataset root to an absolute path.

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: /ssd/xiedong/workplace/yolov8_script/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush


# Download script/URL (optional)
download: https://ultralytics.com/assets/coco128.zip
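Before launching a long run it can be worth sanity-checking that the paths in the YAML actually resolve. The sketch below is an optional helper of my own (it assumes PyYAML, which ultralytics already depends on, and the same coco128.yaml path used above):

import os
import yaml

cfg_path = '/ssd/xiedong/workplace/yolov8_script/coco128.yaml'
with open(cfg_path) as f:
    cfg = yaml.safe_load(f)

root = cfg['path']
for split in ('train', 'val', 'test'):
    rel = cfg.get(split)
    if not rel:  # 'test' is optional and left empty in the YAML above
        print(f"{split}: not set")
        continue
    full = os.path.join(root, rel)
    count = len(os.listdir(full)) if os.path.isdir(full) else 0
    print(f"{split}: {full} ({count} files)" if count else f"{split}: MISSING or empty -> {full}")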

Running train_coco.py, you should see training start successfully:

Transferred 355/355 items from pretrained weights
Ultralytics YOLOv8.0.119 🚀 Python-3.7.16 torch-1.12.1+cu116 CUDA:0 (NVIDIA A100-PCIE-40GB, 40390MiB)
WARNING ⚠️ Upgrade to torch>=2.0.0 for deterministic training.
yolo/engine/trainer: task=detect, mode=train, model=yolov8n.yaml, data=/ssd/xiedong/workplace/yolov8_script/coco128.yaml, epochs=100, patience=50, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=None, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=0, resume=False, amp=True, fraction=1.0, profile=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, show=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, vid_stride=1, line_width=None, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, boxes=True, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=None, workspace=4, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, label_smoothing=0.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0, cfg=None, v5loader=False, tracker=botsort.yaml, save_dir=runs/detect/train5

                   from  n    params  module                                       arguments                     
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]                 
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]                
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]             
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]                
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]             
  5                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]               
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]           
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]              
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]           
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]                 
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 12                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]                 
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 15                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]                  
 16                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]                
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 18                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]                 
 19                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]              
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]                 
 22        [15, 18, 21]  1    897664  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]          
YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs

Transferred 355/355 items from pretrained weights
TensorBoard: Start with 'tensorboard --logdir runs/detect/train5', view at http://localhost:6006/
AMP: running Automatic Mixed Precision (AMP) checks with YOLOv8n...
AMP: checks passed ✅
train: Scanning /ssd/xiedong/workplace/yolov8_script/coco128/labels/train2017...
train: New cache created: /ssd/xiedong/workplace/yolov8_script/coco128/labels/train2017.cache
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
val: Scanning /ssd/xiedong/workplace/yolov8_script/coco128/labels/train2017.cach
Plotting labels to runs/detect/train5/labels.jpg... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 57 weight(decay=0.0), 64 weight(decay=0.0005), 63 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/train5
Starting training for 100 epochs...

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      1/100      2.55G      1.179      1.595      1.254        127        640: 1
                 Class     Images  Instances      Box(P          R      mAP50  m
                   all        128        929      0.641      0.534      0.612      0.454

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      2/100      2.54G      1.249      1.534      1.247        209        640: 1
                 Class     Images  Instances      Box(P          R      mAP50  m
                   all        128        929      0.693      0.532      0.632      0.469
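If a run gets interrupted, it can be resumed from the last checkpoint saved under the save_dir shown in the log (runs/detect/train5 here); this follows the resume pattern from the official docs:

from ultralytics import YOLO

# Resume with the original training settings stored in the checkpoint
model = YOLO('runs/detect/train5/weights/last.pt')
model.train(resume=True)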

Training parameters that can be adjusted:

Key                Value     Description
model              None      path to model file, i.e. yolov8n.pt, yolov8n.yaml
data               None      path to data file, i.e. coco128.yaml
epochs             100       number of epochs to train for
patience           50        epochs to wait for no observable improvement for early stopping of training
batch              16        number of images per batch (-1 for AutoBatch)
imgsz              640       size of input images as integer or w,h
save               True      save train checkpoints and predict results
save_period        -1        save checkpoint every x epochs (disabled if < 1)
cache              False     True/ram, disk or False. Use cache for data loading
device             None      device to run on, i.e. cuda device=0 or device=0,1,2,3 or device=cpu
workers            8         number of worker threads for data loading (per RANK if DDP)
project            None      project name
name               None      experiment name
exist_ok           False     whether to overwrite existing experiment
pretrained         False     whether to use a pretrained model
optimizer          'auto'    optimizer to use, choices=[SGD, Adam, Adamax, AdamW, NAdam, RAdam, RMSProp, auto]
verbose            False     whether to print verbose output
seed               0         random seed for reproducibility
deterministic      True      whether to enable deterministic mode
single_cls         False     train multi-class data as single-class
rect               False     rectangular training with each batch collated for minimum padding
cos_lr             False     use cosine learning rate scheduler
close_mosaic       0         (int) disable mosaic augmentation for final epochs
resume             False     resume training from last checkpoint
amp                True      Automatic Mixed Precision (AMP) training, choices=[True, False]
fraction           1.0       dataset fraction to train on (default is 1.0, all images in train set)
profile            False     profile ONNX and TensorRT speeds during training for loggers
lr0                0.01      initial learning rate (i.e. SGD=1E-2, Adam=1E-3)
lrf                0.01      final learning rate (lr0 * lrf)
momentum           0.937     SGD momentum/Adam beta1
weight_decay       0.0005    optimizer weight decay 5e-4
warmup_epochs      3.0       warmup epochs (fractions ok)
warmup_momentum    0.8       warmup initial momentum
warmup_bias_lr     0.1       warmup initial bias lr
box                7.5       box loss gain
cls                0.5       cls loss gain (scale with pixels)
dfl                1.5       dfl loss gain
pose               12.0      pose loss gain (pose-only)
kobj               2.0       keypoint obj loss gain (pose-only)
label_smoothing    0.0       label smoothing (fraction)
nbs                64        nominal batch size
overlap_mask       True      masks should overlap during training (segment train only)
mask_ratio         4         mask downsample ratio (segment train only)
dropout            0.0       use dropout regularization (classify train only)
val                True      validate/test during training
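Any of these keys can be passed as keyword arguments to model.train(). For example, a sketch that switches to SGD with a cosine schedule and a custom output directory (the values are illustrative, not tuned recommendations):

from ultralytics import YOLO

model = YOLO('yolov8n.pt')
model.train(
    data='/ssd/xiedong/workplace/yolov8_script/coco128.yaml',
    epochs=50,
    imgsz=640,
    batch=32,            # -1 would enable AutoBatch
    optimizer='SGD',     # default 'auto' picks an optimizer automatically
    lr0=0.01,
    cos_lr=True,         # cosine learning rate scheduler
    project='runs_custom',
    name='coco128_sgd',  # results land in runs_custom/coco128_sgd
)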

Model prediction

https://docs.ultralytics.com/modes/predict/

import cv2
from ultralytics import YOLO
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "2"

# Read the test images and convert from OpenCV's BGR order to RGB
img = cv2.imread("img.png")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

img_1 = cv2.imread("img_1.png")
img_1 = cv2.cvtColor(img_1, cv2.COLOR_BGR2RGB)

model = YOLO('yolov8n.yaml').load('yolov8n.pt')

inputs = [img, img_1]  # list of numpy arrays
results = model(inputs)  # list of Results objects

for img, result in zip(inputs, results):
    boxes = result.boxes  # Boxes object for bbox outputs
    masks = result.masks  # Masks object for segmentation outputs (None for a detection model)
    probs = result.probs  # class probabilities for classification outputs (None for a detection model)
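Continuing from the script above, each Results object also exposes the raw detections and a convenience renderer. A short sketch (the result_*.png filenames are just my choice) that prints each box and saves an annotated image:

import cv2  # already imported above; repeated here for completeness

for i, result in enumerate(results):
    for box in result.boxes:
        xyxy = box.xyxy[0].tolist()   # [x1, y1, x2, y2] in pixels
        conf = float(box.conf[0])     # confidence score
        cls_id = int(box.cls[0])      # class index
        print(result.names[cls_id], round(conf, 3), [round(v, 1) for v in xyxy])

    # plot() draws boxes on a copy of the input array; the inputs above were RGB,
    # so convert back to BGR before writing with OpenCV.
    annotated = cv2.cvtColor(result.plot(), cv2.COLOR_RGB2BGR)
    cv2.imwrite(f"result_{i}.png", annotated)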

Model export

https://docs.ultralytics.com/modes/export/#arguments

from ultralytics import YOLO

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"

model = YOLO('yolov8n.yaml').load('yolov8n.pt')  # build from YAML and transfer weights

# Export the model
model.export(format='onnx')

On success, the export output looks like this:

                   from  n    params  module                                       arguments                     
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]                 
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]                
  2                  -1  1      7360  ultralytics.nn.modules.block.C2f             [32, 32, 1, True]             
  3                  -1  1     18560  ultralytics.nn.modules.conv.Conv             [32, 64, 3, 2]                
  4                  -1  2     49664  ultralytics.nn.modules.block.C2f             [64, 64, 2, True]             
  5                  -1  1     73984  ultralytics.nn.modules.conv.Conv             [64, 128, 3, 2]               
  6                  -1  2    197632  ultralytics.nn.modules.block.C2f             [128, 128, 2, True]           
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]              
  8                  -1  1    460288  ultralytics.nn.modules.block.C2f             [256, 256, 1, True]           
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]                 
 10                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 11             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 12                  -1  1    148224  ultralytics.nn.modules.block.C2f             [384, 128, 1]                 
 13                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']          
 14             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 15                  -1  1     37248  ultralytics.nn.modules.block.C2f             [192, 64, 1]                  
 16                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]                
 17            [-1, 12]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 18                  -1  1    123648  ultralytics.nn.modules.block.C2f             [192, 128, 1]                 
 19                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]              
 20             [-1, 9]  1         0  ultralytics.nn.modules.conv.Concat           [1]                           
 21                  -1  1    493056  ultralytics.nn.modules.block.C2f             [384, 256, 1]                 
 22        [15, 18, 21]  1    897664  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]          
YOLOv8n summary: 225 layers, 3157200 parameters, 3157184 gradients, 8.9 GFLOPs

Transferred 355/355 items from pretrained weights
Ultralytics YOLOv8.0.119 🚀 Python-3.7.16 torch-1.12.1+cu116 CPU
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients, 8.7 GFLOPs

PyTorch: starting from yolov8n.yaml with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (0.0 MB)

ONNX: starting export with onnx 1.14.0 opset 10...
ONNX: export success ✅ 10.7s, saved as yolov8n.onnx (12.2 MB)

Export complete (28.4s)
Results saved to /ssd/xiedong/workplace/yolov8_script
Predict:         yolo predict task=detect model=yolov8n.onnx imgsz=640 
Validate:        yolo val task=detect model=yolov8n.onnx imgsz=640 data=None 
Visualize:       https://netron.app
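To double-check the exported file, you can run it with onnxruntime (assuming onnxruntime is installed; the expected output shape matches the (1, 84, 8400) reported in the export log above):

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("yolov8n.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name  # query the input name instead of hard-coding it

# Dummy NCHW float32 input matching the export shape (1, 3, 640, 640)
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = session.run(None, {input_name: dummy})

print(outputs[0].shape)  # expected: (1, 84, 8400) -> 4 box coords + 80 class scores per prediction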
