I. Practical AI Application Projects

Project Name | Project Name
---|---
1. Face recognition and management system | 2. License plate recognition and management system
3. Gesture recognition system | 4. Facial liveness detection system
5. YOLOv8 auto-labeling | 6. Facial expression recognition system
7. Pedestrian fall detection system | 8. PCB defect detection system
9. Safety helmet detection system | 10. Household waste classification and detection
11. Flame and smoke detection system | 12. Road pothole detection system
13. Steel surface defect detection | 14. 102-breed dog detection system
15. Face mask detection system | 16. Tomato ripeness detection
17. Blood cell detection and counting | 18. Ship classification and detection system
19. Smoking behavior detection | 20. Rice pest detection and recognition
21. Vehicle and pedestrian detection and counting | 22. Wheat pest detection and recognition
23. Corn pest detection and recognition | 24. 200-species bird detection and recognition
25. Traffic sign detection and recognition | 26. Apple disease recognition
27. Pneumonia diagnosis system | 28. 100-species Chinese herbal medicine recognition
29. 102-species flower recognition | 30. 100-species butterfly recognition
31. Vehicle and pedestrian tracking system | 32. Rice disease recognition
33. License plate detection and recognition system | 34. Strawberry disease detection and segmentation
35. Ship detection in complex environments | 36. Crack detection and analysis system
37. Field weed detection system | 38. Grape disease recognition
39. Road pothole detection and segmentation | 40. Remote-sensing ground object detection
41. Drone-view object detection | 42. Cassava disease recognition and prevention
43. Wildfire smoke detection | 44. Brain tumor detection
45. Corn disease detection | 46. Orange disease recognition
47. Vehicle tracking and counting | 48. Pedestrian tracking and counting
49. Reflective vest detection and warning | 50. Personnel intrusion alarm
51. High-density face detection | 52. Kidney stone detection
53. Fruit detection and recognition | 54. Vegetable detection and recognition
55. Fruit quality inspection | 56. Non-motorized vehicle helmet detection
57. Bolt and nut detection | 58. Weld defect detection
59. Metal product flaw detection | 60. Chain defect detection
61. Barcode detection and recognition | 62. Traffic light detection
63. Strawberry ripeness detection | 64. Underwater marine life detection
How to Use YOLOv11 for Object Detection
Introduction
Following YOLOv8, YOLOv9, and YOLOv10, the latest iteration, YOLOv11, has just been released! This new version not only builds on the strengths of its predecessors but also introduces several groundbreaking enhancements that set new benchmarks for object detection and computer vision.
Like previous versions, YOLOv11 excels at detecting, classifying, and localizing objects in images and videos. It goes a step further, however, by incorporating significant enhancements that improve performance and adaptability across a wider range of use cases. Let's look at the key improvements that make YOLOv11 stand out in the series.
Key Innovations in YOLOv11
- Enhanced feature extraction: YOLOv11 uses an improved backbone and neck architecture that significantly strengthens feature extraction. This yields more accurate object detection and makes complex vision tasks easier to handle.
- Optimized for efficiency and speed: With a refined architectural design and an optimized training pipeline, YOLOv11 delivers faster processing while maintaining high accuracy. This balance makes it well suited to real-time and large-scale applications.
- Higher accuracy with fewer parameters: YOLOv11m, the medium-sized variant, achieves a higher mean average precision (mAP) on the COCO dataset while using 22% fewer parameters than YOLOv8m, improving computational efficiency without sacrificing performance.
- Adaptability across environments: Whether deployed on edge devices, cloud platforms, or systems powered by NVIDIA GPUs, YOLOv11 offers great flexibility for a wide range of deployment scenarios.
- Broad task support: YOLOv11 extends beyond traditional object detection to support instance segmentation, image classification, pose estimation, and oriented bounding box (OBB) detection, making it a versatile tool for a wide range of computer vision challenges (see the sketch after this list).
Integrating these enhancements makes YOLOv11 a powerful engine for cutting-edge computer vision applications. Stay tuned as we explore how YOLOv11 pushes the boundaries of what is possible in this dynamic field!
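All of these tasks run through the same Python API, so switching tasks is mostly a matter of loading a different checkpoint. A minimal sketch, assuming the weight file names follow the usual Ultralytics "-seg/-cls/-pose/-obb" suffix convention:

from ultralytics import YOLO

det_model  = YOLO("yolo11n.pt")       # object detection
seg_model  = YOLO("yolo11n-seg.pt")   # instance segmentation
cls_model  = YOLO("yolo11n-cls.pt")   # image classification
pose_model = YOLO("yolo11n-pose.pt")  # pose estimation
obb_model  = YOLO("yolo11n-obb.pt")   # oriented bounding boxes (OBB)

# Every one of these models exposes the same predict() call used later in this article.
results = det_model.predict("YourImagePath.png", conf=0.5)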
How to Use YOLOv11 for Image Detection
Step 1: Install the necessary libraries
pip install opencv-python ultralytics
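If loading the YOLO11 weights fails, the most likely cause is an older ultralytics release; YOLO11 support arrived in a relatively recent version (roughly 8.3, treat the exact minimum as an assumption), so upgrading first is a reasonable check:

pip install -U "ultralytics>=8.3.0"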
Step 2: Import the libraries
import cv2
from ultralytics import YOLO
Step 3: Choose a model
model = YOLO("yolo11x.pt")
You can compare the different models and weigh their respective strengths and weaknesses on the Ultralytics model page. In this case we choose yolo11x.pt.
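All detection checkpoints share the same interface, so it is easy to compare the speed/accuracy trade-off before committing to the extra-large model. A minimal sketch, assuming yolo11n/s/m/l/x.pt are the published detection weights:

# Smaller variants are faster but less accurate; larger ones are slower but stronger.
model_nano  = YOLO("yolo11n.pt")   # nano: fastest, suited to edge devices
model_large = YOLO("yolo11x.pt")   # extra-large: most accurate, used in this tutorial
model_nano.info()                  # prints a layer/parameter summary for a quick comparison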
Step 4: Write functions to predict and detect objects in an image
def predict(chosen_model, img, classes=[], conf=0.5):
    # Run inference, optionally restricted to the given class ids
    if classes:
        results = chosen_model.predict(img, classes=classes, conf=conf)
    else:
        results = chosen_model.predict(img, conf=conf)
    return results

def predict_and_detect(chosen_model, img, classes=[], conf=0.5, rectangle_thickness=2, text_thickness=1):
    # Run inference, then draw a bounding box and class label for every detection
    results = predict(chosen_model, img, classes, conf=conf)
    for result in results:
        for box in result.boxes:
            cv2.rectangle(img, (int(box.xyxy[0][0]), int(box.xyxy[0][1])),
                          (int(box.xyxy[0][2]), int(box.xyxy[0][3])), (255, 0, 0), rectangle_thickness)
            cv2.putText(img, f"{result.names[int(box.cls[0])]}",
                        (int(box.xyxy[0][0]), int(box.xyxy[0][1]) - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 0), text_thickness)
    return img, results
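The manual cv2.rectangle / cv2.putText loop makes the box format explicit, but for quick experiments the results returned by Ultralytics also carry a built-in plotting helper. A hedged alternative sketch, assuming the standard results API where plot() returns an annotated BGR image:

def quick_annotate(chosen_model, img, conf=0.5):
    # Let Ultralytics draw the boxes, class names and confidences itself.
    results = chosen_model.predict(img, conf=conf)
    annotated = results[0].plot()  # BGR numpy array, ready for cv2.imshow / cv2.imwrite
    return annotated, results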
Step 5: Detect objects in an image with YOLOv11
# read the image
image = cv2.imread("YourImagePath")
result_img, _ = predict_and_detect(model, image, conf=0.5)
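Note that cv2.imread does not raise an error for a bad path; it silently returns None, which only fails later inside the drawing code. A small defensive variant (the error message is just an illustration):

image = cv2.imread("YourImagePath")
if image is None:
    raise FileNotFoundError("Could not read 'YourImagePath'; check the path")
result_img, _ = predict_and_detect(model, image, conf=0.5)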
Step 6: Display and save the result image
cv2.imshow("Image", result_img)
cv2.imwrite("YourSavePath", result_img)
cv2.waitKey(0)
Full code:
from ultralytics import YOLO
import cv2

def predict(chosen_model, img, classes=[], conf=0.5):
    if classes:
        results = chosen_model.predict(img, classes=classes, conf=conf)
    else:
        results = chosen_model.predict(img, conf=conf)
    return results

def predict_and_detect(chosen_model, img, classes=[], conf=0.5, rectangle_thickness=2, text_thickness=1):
    results = predict(chosen_model, img, classes, conf=conf)
    for result in results:
        for box in result.boxes:
            cv2.rectangle(img, (int(box.xyxy[0][0]), int(box.xyxy[0][1])),
                          (int(box.xyxy[0][2]), int(box.xyxy[0][3])), (255, 0, 0), rectangle_thickness)
            cv2.putText(img, f"{result.names[int(box.cls[0])]}",
                        (int(box.xyxy[0][0]), int(box.xyxy[0][1]) - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 0), text_thickness)
    return img, results

model = YOLO("yolo11x.pt")

# read the image
image = cv2.imread("YourImagePath.png")
result_img, _ = predict_and_detect(model, image, classes=[], conf=0.5)

cv2.imshow("Image", result_img)
cv2.imwrite("YourSavePath.png", result_img)
cv2.waitKey(0)
How to Use YOLOv11 for Video Detection
Step 1: Install the necessary libraries
pip install opencv-python ultralytics
Steps 2 and 3: Import the libraries and load the model
import cv2
from ultralytics import YOLO
model = YOLO("yolo11x.pt")
Step 4: Create a VideoWriter to save the video results
# defining function for creating a writer (for mp4 videos)
def create_video_writer(video_cap, output_filename):
    # grab the width, height, and fps of the frames in the video stream.
    frame_width = int(video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(video_cap.get(cv2.CAP_PROP_FPS))

    # initialize the FourCC and a video writer object
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    writer = cv2.VideoWriter(output_filename, fourcc, fps,
                             (frame_width, frame_height))
    return writer
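One practical caveat: cv2.CAP_PROP_FPS can come back as 0 for some sources (webcams in particular), which leaves the writer with an invalid frame rate. A hedged variant of the helper with a fallback (the 30 fps default is an arbitrary assumption):

def create_video_writer_safe(video_cap, output_filename, default_fps=30):
    frame_width = int(video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = video_cap.get(cv2.CAP_PROP_FPS)
    if fps <= 0:
        fps = default_fps  # some capture sources do not report a frame rate
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    return cv2.VideoWriter(output_filename, fourcc, int(fps), (frame_width, frame_height))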
Step 5: Detect objects in a video with YOLOv11
output_filename = "YourFilename.mp4"
video_path = r"YourVideoPath.mp4"
cap = cv2.VideoCapture(video_path)
writer = create_video_writer(cap, output_filename)

while True:
    success, img = cap.read()
    if not success:
        break
    result_img, _ = predict_and_detect(model, img, classes=[], conf=0.5)
    writer.write(result_img)
    cv2.imshow("Image", result_img)
    cv2.waitKey(1)

writer.release()
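The loop above runs until the video ends. When previewing long videos it is convenient to also allow an early exit with a key press and to release the capture explicitly; a small variation of the same loop (the 'q' key is an arbitrary choice):

while True:
    success, img = cap.read()
    if not success:
        break
    result_img, _ = predict_and_detect(model, img, classes=[], conf=0.5)
    writer.write(result_img)
    cv2.imshow("Image", result_img)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop early
        break

cap.release()
writer.release()
cv2.destroyAllWindows()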
Full code
import cv2
from ultralytics import YOLO

def predict(chosen_model, img, classes=[], conf=0.5):
    if classes:
        results = chosen_model.predict(img, classes=classes, conf=conf)
    else:
        results = chosen_model.predict(img, conf=conf)
    return results

def predict_and_detect(chosen_model, img, classes=[], conf=0.5, rectangle_thickness=2, text_thickness=1):
    results = predict(chosen_model, img, classes, conf=conf)
    for result in results:
        for box in result.boxes:
            cv2.rectangle(img, (int(box.xyxy[0][0]), int(box.xyxy[0][1])),
                          (int(box.xyxy[0][2]), int(box.xyxy[0][3])), (255, 0, 0), rectangle_thickness)
            cv2.putText(img, f"{result.names[int(box.cls[0])]}",
                        (int(box.xyxy[0][0]), int(box.xyxy[0][1]) - 10),
                        cv2.FONT_HERSHEY_PLAIN, 1, (255, 0, 0), text_thickness)
    return img, results

# defining function for creating a writer (for mp4 videos)
def create_video_writer(video_cap, output_filename):
    # grab the width, height, and fps of the frames in the video stream.
    frame_width = int(video_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    frame_height = int(video_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = int(video_cap.get(cv2.CAP_PROP_FPS))

    # initialize the FourCC and a video writer object
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')
    writer = cv2.VideoWriter(output_filename, fourcc, fps,
                             (frame_width, frame_height))
    return writer

model = YOLO("yolo11x.pt")

output_filename = "YourFilename.mp4"
video_path = r"YourVideoPath.mp4"
cap = cv2.VideoCapture(video_path)
writer = create_video_writer(cap, output_filename)

while True:
    success, img = cap.read()
    if not success:
        break
    result_img, _ = predict_and_detect(model, img, classes=[], conf=0.5)
    writer.write(result_img)
    cv2.imshow("Image", result_img)
    cv2.waitKey(1)

writer.release()
Conclusion
In this tutorial we learned how to use YOLOv11 to detect objects in images and videos. If you found this code helpful, a like and a follow are appreciated!