AI Project 20: DeepSORT Multi-Object Tracking Based on YOLOv8 Instance Segmentation


This is an original article; please credit the original source when reposting.

As mentioned in earlier posts, there are many approaches to object tracking, and DeepSORT is the one most commonly used.

This post documents a YOLOv8 instance segmentation + DeepSORT visual tracking algorithm. By combining YOLOv8's detection and segmentation with DeepSORT's appearance-feature tracking, the algorithm maintains accurate and stable tracking of targets in complex environments. In computer vision, this kind of tracking is widely used in security surveillance, autonomous driving, and other fields.

Source code: GitHub - MuhammadMoinFaisal/YOLOv8_Segmentation_DeepSORT_Object_Tracking: YOLOv8 Segmentation with DeepSORT Object Tracking (ID + Trails)

Thanks to Muhammad Moin.

I. Environment Setup

Anaconda3 is used here. Install it yourself; the earlier posts in this series cover the setup.

1. Create a virtual environment

conda create -n YOLOv8-Seg-Deepsort python=3.8

2. Activate it

conda activate YOLOv8-Seg-Deepsort
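
To confirm the new environment is active, you can check the interpreter version (it should report Python 3.8.x):

python --version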

II. Download the Code

You can use the upstream source, or my fork, in which I merged YOLOv8_Segmentation_DeepSORT_Object_Tracking and YOLOv8-DeepSORT-Object-Tracking into a single repo.

Download address:

Yinyifeng18/YOLOv8_Segmentation_DeepSORT_Object_Tracking (github.com)

git clone https://github.com/Yinyifeng18/YOLOv8_Segmentation_DeepSORT_Object_Tracking.git

III. Install Dependencies

pip install -e ".[dev]"

If you use the upstream source as-is, the following error appears:

AttributeError: module 'numpy' has no attribute 'float'
 
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

The error occurs because the code depends on an older NumPy: the np.float alias it uses was deprecated in NumPy 1.20 and removed in 1.24. Downgrade NumPy to 1.23.5:

pip install numpy==1.23.5
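
Alternatively, you can patch the offending line instead of pinning NumPy (a sketch, assuming the failure really does come from a deprecated np.float reference in the vendored code):

# Before: the removed alias raises AttributeError on NumPy >= 1.24
x = np.float(0.5)
# After: use the builtin float (or np.float64) instead
x = float(0.5)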

IV. Testing

1. Change into the detect or segment directory (pick one)

cd YOLOv8_Segmentation_DeepSORT_Object_Tracking\ultralytics\yolo\v8\detect

cd YOLOv8_Segmentation_DeepSORT_Object_Tracking\ultralytics\yolo\v8\segment

2. Run the test

python predict.py model=yolov8l.pt source="test3.mp4" show=True

python predict.py model=yolov8x-seg.pt source="test3.mp4" show=True

The first command runs detection with tracking, and the second runs instance segmentation with tracking; the instance-segmentation command was used for the test run here.

If you want to save the output video, just add the parameter save=True.
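
For example, to run the segmentation test and also save the annotated result (combining the flags shown above):

python predict.py model=yolov8x-seg.pt source="test3.mp4" show=True save=True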

V. Code Walkthrough

DeepSORT requires the DeepSORT files, which can be downloaded from:


https://drive.google.com/drive/folders/1kna8eWGrSfzaR6DtNJ8_GchGgPMv3VC8?usp=sharing
  • After downloading the DeepSORT zip file, unzip it and place the deep_sort_pytorch folder inside the ultralytics/yolo/v8/segment folder

  • The resulting directory structure is shown below
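
A sketch of the expected layout (inferred from the imports and the config path in predict.py; deep_sort_pytorch must sit next to predict.py so that deep_sort_pytorch/configs/deep_sort.yaml resolves):

ultralytics/yolo/v8/segment/
├── deep_sort_pytorch/
│   ├── configs/
│   │   └── deep_sort.yaml
│   ├── deep_sort/
│   └── utils/
└── predict.py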

The predict.py code is attached directly below:

# Ultralytics YOLO 🚀, GPL-3.0 license

import hydra
import torch

from ultralytics.yolo.utils import DEFAULT_CONFIG, ROOT, ops
from ultralytics.yolo.utils.checks import check_imgsz
from ultralytics.yolo.utils.plotting import colors, save_one_box

from ultralytics.yolo.v8.detect.predict import DetectionPredictor
from numpy import random


import cv2
from deep_sort_pytorch.utils.parser import get_config
from deep_sort_pytorch.deep_sort import DeepSort
#Deque is basically a double ended queue in python, we prefer deque over list when we need to perform insertion or pop up operations
#at the same time
from collections import deque
import numpy as np
palette = (2 ** 11 - 1, 2 ** 15 - 1, 2 ** 20 - 1)  # seeds for the per-class color hash
data_deque = {}  # per-track-ID deque of recent bottom-edge centers (for trails)

deepsort = None

object_counter = {}   # per-class counts of objects crossing the line heading South

object_counter1 = {}  # per-class counts of objects crossing the line heading North

line = [(100, 500), (1050, 500)]  # virtual counting line, in pixel coordinates
def init_tracker():
    global deepsort
    cfg_deep = get_config()
    cfg_deep.merge_from_file("deep_sort_pytorch/configs/deep_sort.yaml")

    deepsort= DeepSort(cfg_deep.DEEPSORT.REID_CKPT,
                            max_dist=cfg_deep.DEEPSORT.MAX_DIST, min_confidence=cfg_deep.DEEPSORT.MIN_CONFIDENCE,
                            nms_max_overlap=cfg_deep.DEEPSORT.NMS_MAX_OVERLAP, max_iou_distance=cfg_deep.DEEPSORT.MAX_IOU_DISTANCE,
                            max_age=cfg_deep.DEEPSORT.MAX_AGE, n_init=cfg_deep.DEEPSORT.N_INIT, nn_budget=cfg_deep.DEEPSORT.NN_BUDGET,
                            use_cuda=True)
##########################################################################################
def xyxy_to_xywh(*xyxy):
    """Convert absolute xyxy pixel coordinates to center-based (x_c, y_c, w, h)."""
    bbox_left = min([xyxy[0].item(), xyxy[2].item()])
    bbox_top = min([xyxy[1].item(), xyxy[3].item()])
    bbox_w = abs(xyxy[0].item() - xyxy[2].item())
    bbox_h = abs(xyxy[1].item() - xyxy[3].item())
    x_c = (bbox_left + bbox_w / 2)
    y_c = (bbox_top + bbox_h / 2)
    w = bbox_w
    h = bbox_h
    return x_c, y_c, w, h

def xyxy_to_tlwh(bbox_xyxy):
    tlwh_bboxs = []
    for i, box in enumerate(bbox_xyxy):
        x1, y1, x2, y2 = [int(i) for i in box]
        top = x1
        left = y1
        w = int(x2 - x1)
        h = int(y2 - y1)
        tlwh_obj = [top, left, w, h]
        tlwh_bboxs.append(tlwh_obj)
    return tlwh_bboxs

def compute_color_for_labels(label):
    """
    Simple function that adds fixed color depending on the class
    """
    if label == 0: #person
        color = (85,45,255)
    elif label == 2: # Car
        color = (222,82,175)
    elif label == 3:  # Motorbike
        color = (0, 204, 255)
    elif label == 5:  # Bus
        color = (0, 149, 255)
    else:
        color = [int((p * (label ** 2 - label + 1)) % 255) for p in palette]
    return tuple(color)

def draw_border(img, pt1, pt2, color, thickness, r, d):
    """Draw a filled label background with rounded corners (r = corner radius, d = accent line length)."""
    x1,y1 = pt1
    x2,y2 = pt2
    # Top left
    cv2.line(img, (x1 + r, y1), (x1 + r + d, y1), color, thickness)
    cv2.line(img, (x1, y1 + r), (x1, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x1 + r, y1 + r), (r, r), 180, 0, 90, color, thickness)
    # Top right
    cv2.line(img, (x2 - r, y1), (x2 - r - d, y1), color, thickness)
    cv2.line(img, (x2, y1 + r), (x2, y1 + r + d), color, thickness)
    cv2.ellipse(img, (x2 - r, y1 + r), (r, r), 270, 0, 90, color, thickness)
    # Bottom left
    cv2.line(img, (x1 + r, y2), (x1 + r + d, y2), color, thickness)
    cv2.line(img, (x1, y2 - r), (x1, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x1 + r, y2 - r), (r, r), 90, 0, 90, color, thickness)
    # Bottom right
    cv2.line(img, (x2 - r, y2), (x2 - r - d, y2), color, thickness)
    cv2.line(img, (x2, y2 - r), (x2, y2 - r - d), color, thickness)
    cv2.ellipse(img, (x2 - r, y2 - r), (r, r), 0, 0, 90, color, thickness)

    cv2.rectangle(img, (x1 + r, y1), (x2 - r, y2), color, -1, cv2.LINE_AA)
    cv2.rectangle(img, (x1, y1 + r), (x2, y2 - r - d), color, -1, cv2.LINE_AA)
    
    cv2.circle(img, (x1 +r, y1+r), 2, color, 12)
    cv2.circle(img, (x2 -r, y1+r), 2, color, 12)
    cv2.circle(img, (x1 +r, y2-r), 2, color, 12)
    cv2.circle(img, (x2 -r, y2-r), 2, color, 12)
    
    return img

def UI_box(x, img, color=None, label=None, line_thickness=None):
    # Plots one bounding box on image img
    tl = line_thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1  # line/font thickness
    color = color or [random.randint(0, 255) for _ in range(3)]
    c1, c2 = (int(x[0]), int(x[1])), (int(x[2]), int(x[3]))
    cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA)
    if label:
        tf = max(tl - 1, 1)  # font thickness
        t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0]

        img = draw_border(img, (c1[0], c1[1] - t_size[1] -3), (c1[0] + t_size[0], c1[1]+3), color, 1, 8, 2)

        cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA)


def intersect(A,B,C,D):
    # True if segments AB and CD intersect (orientation test via ccw)
    return ccw(A,C,D) != ccw(B,C,D) and ccw(A,B,C) != ccw(A,B,D)

def ccw(A,B,C):
    # True if points A, B, C are in counterclockwise order
    return (C[1]-A[1]) * (B[0]-A[0]) > (B[1]-A[1]) * (C[0]-A[0])


def get_direction(point1, point2):
    direction_str = ""

    # calculate y axis direction
    if point1[1] > point2[1]:
        direction_str += "South"
    elif point1[1] < point2[1]:
        direction_str += "North"
    else:
        direction_str += ""

    # calculate x axis direction
    if point1[0] > point2[0]:
        direction_str += "East"
    elif point1[0] < point2[0]:
        direction_str += "West"
    else:
        direction_str += ""

    return direction_str
def draw_boxes(img, bbox, names, object_id, identities=None, offset=(0, 0)):
    cv2.line(img, line[0], line[1], (46,162,112), 3)

    height, width, _ = img.shape
    # remove tracked point from buffer if object is lost
    for key in list(data_deque):
      if key not in identities:
        data_deque.pop(key)

    for i, box in enumerate(bbox):
        x1, y1, x2, y2 = [int(i) for i in box]
        x1 += offset[0]
        x2 += offset[0]
        y1 += offset[1]
        y2 += offset[1]

        # center of the bottom edge of the box ((y2+y2)/2 is intentionally just y2)
        center = (int((x2 + x1) / 2), int((y2 + y2) / 2))

        # get ID of object
        id = int(identities[i]) if identities is not None else 0

        # create new buffer for new object
        if id not in data_deque:  
          data_deque[id] = deque(maxlen= 64)
        color = compute_color_for_labels(object_id[i])
        obj_name = names[object_id[i]]
        label = f'{id}:{obj_name}'

        # add center to buffer
        data_deque[id].appendleft(center)
        if len(data_deque[id]) >= 2:
          direction = get_direction(data_deque[id][0], data_deque[id][1])
          if intersect(data_deque[id][0], data_deque[id][1], line[0], line[1]):
              cv2.line(img, line[0], line[1], (255, 255, 255), 3)
              if "South" in direction:
                if obj_name not in object_counter:
                    object_counter[obj_name] = 1
                else:
                    object_counter[obj_name] += 1
              if "North" in direction:
                if obj_name not in object_counter1:
                    object_counter1[obj_name] = 1
                else:
                    object_counter1[obj_name] += 1
        UI_box(box, img, label=label, color=color, line_thickness=2)
        # draw trail
        for i in range(1, len(data_deque[id])):
            # check if on buffer value is none
            if data_deque[id][i - 1] is None or data_deque[id][i] is None:
                continue
            # generate dynamic thickness of trails
            thickness = int(np.sqrt(64 / float(i + i)) * 1.5)
            # draw trails
            cv2.line(img, data_deque[id][i - 1], data_deque[id][i], color, thickness)
    
    #4. Display Count in top right corner
        for idx, (key, value) in enumerate(object_counter1.items()):
            cnt_str = str(key) + ":" +str(value)
            cv2.line(img, (width - 500,25), (width,25), [85,45,255], 40)
            cv2.putText(img, f'Number of Vehicles Entering', (width - 500, 35), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)
            cv2.line(img, (width - 150, 65 + (idx*40)), (width, 65 + (idx*40)), [85, 45, 255], 30)
            cv2.putText(img, cnt_str, (width - 150, 75 + (idx*40)), 0, 1, [255, 255, 255], thickness = 2, lineType = cv2.LINE_AA)

        for idx, (key, value) in enumerate(object_counter.items()):
            cnt_str1 = str(key) + ":" +str(value)
            cv2.line(img, (20,25), (500,25), [85,45,255], 40)
            cv2.putText(img, f'Numbers of Vehicles Leaving', (11, 35), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)    
            cv2.line(img, (20,65+ (idx*40)), (127,65+ (idx*40)), [85,45,255], 30)
            cv2.putText(img, cnt_str1, (11, 75+ (idx*40)), 0, 1, [225, 255, 255], thickness=2, lineType=cv2.LINE_AA)
    
    
    
    return img


class SegmentationPredictor(DetectionPredictor):

    def postprocess(self, preds, img, orig_img):
        masks = []
        # TODO: filter by classes
        p = ops.non_max_suppression(preds[0],
                                    self.args.conf,
                                    self.args.iou,
                                    agnostic=self.args.agnostic_nms,
                                    max_det=self.args.max_det,
                                    nm=32)
        proto = preds[1][-1]
        for i, pred in enumerate(p):
            shape = orig_img[i].shape if self.webcam else orig_img.shape
            if not len(pred):
                continue
            if self.args.retina_masks:
                pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], shape).round()
                masks.append(ops.process_mask_native(proto[i], pred[:, 6:], pred[:, :4], shape[:2]))  # HWC
            else:
                masks.append(ops.process_mask(proto[i], pred[:, 6:], pred[:, :4], img.shape[2:], upsample=True))  # HWC
                pred[:, :4] = ops.scale_boxes(img.shape[2:], pred[:, :4], shape).round()

        return (p, masks)

    def write_results(self, idx, preds, batch):
        p, im, im0 = batch
        log_string = ""
        if len(im.shape) == 3:
            im = im[None]  # expand for batch dim
        self.seen += 1
        if self.webcam:  # batch_size >= 1
            log_string += f'{idx}: '
            frame = self.dataset.count
        else:
            frame = getattr(self.dataset, 'frame', 0)

        self.data_path = p
        self.txt_path = str(self.save_dir / 'labels' / p.stem) + ('' if self.dataset.mode == 'image' else f'_{frame}')
        log_string += '%gx%g ' % im.shape[2:]  # print string
        self.annotator = self.get_annotator(im0)

        preds, masks = preds
        det = preds[idx]
        if len(det) == 0:
            return log_string
        # Segments
        mask = masks[idx]
        if self.args.save_txt:
            segments = [
                ops.scale_segments(im0.shape if self.args.retina_masks else im.shape[2:], x, im0.shape, normalize=True)
                for x in reversed(ops.masks2segments(mask))]

        # Print results
        for c in det[:, 5].unique():
            n = (det[:, 5] == c).sum()  # detections per class
            log_string += f"{n} {self.model.names[int(c)]}{'s' * (n > 1)}, "  # add to string

        # Mask plotting
        self.annotator.masks(
            mask,
            colors=[colors(x, True) for x in det[:, 5]],
            im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(self.device).permute(2, 0, 1).flip(0).contiguous() /
            255 if self.args.retina_masks else im[idx])

        det = reversed(det[:, :6])
        self.all_outputs.append([det, mask])
        xywh_bboxs = []
        confs = []
        oids = []
        outputs = []
        # Write results
        for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
            x_c, y_c, bbox_w, bbox_h = xyxy_to_xywh(*xyxy)
            xywh_obj = [x_c, y_c, bbox_w, bbox_h]
            xywh_bboxs.append(xywh_obj)
            confs.append([conf.item()])
            oids.append(int(cls))
        xywhs = torch.Tensor(xywh_bboxs)
        confss = torch.Tensor(confs)
          
        outputs = deepsort.update(xywhs, confss, oids, im0)
        if len(outputs) > 0:
            bbox_xyxy = outputs[:, :4]
            identities = outputs[:, -2]
            object_id = outputs[:, -1]
            
            draw_boxes(im0, bbox_xyxy, self.model.names, object_id, identities)
        return log_string


@hydra.main(version_base=None, config_path=str(DEFAULT_CONFIG.parent), config_name=DEFAULT_CONFIG.name)
def predict(cfg):
    init_tracker()
    cfg.model = cfg.model or "yolov8n-seg.pt"
    cfg.imgsz = check_imgsz(cfg.imgsz, min_dim=2)  # check image size
    cfg.source = cfg.source if cfg.source is not None else ROOT / "assets"

    predictor = SegmentationPredictor(cfg)
    predictor()


if __name__ == "__main__":
    predict()
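
For reference, each row returned by deepsort.update(...) is consumed in write_results above as [x1, y1, x2, y2, track_id, class_id]. A minimal sketch of unpacking one frame's output, assuming that layout:

# outputs comes from deepsort.update(xywhs, confss, oids, im0)
for x1, y1, x2, y2, track_id, class_id in outputs:
    print(f"track {int(track_id)} (class {int(class_id)}): ({int(x1)}, {int(y1)}, {int(x2)}, {int(y2)})")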

What is given here is object segmentation with DeepSORT tracking (ID + trails) plus vehicle counting.

The non-segmentation version lives in the detect directory; test it yourself.

Test results

If there is any infringement, or if you need the complete code, please contact the blogger promptly.

