Common YOLO operations (quick reference): heatmaps, feature maps, model structure, training, validation, prediction


Training

from ultralytics import YOLO

model = YOLO(r'yolo11n.yaml') # replace with your model config file

model.load('yolo11n.pt') # pretrained weights, downloadable from the official repo

results = model.train(
    data=r'fish.yaml', # dataset yaml file
    epochs=300,
    batch=8,
    device=0,
    workers=0,
    workspace=4
    )

If you're not sure how to put the dataset yaml file together, see the separate guide (yaml file creation).
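For reference, here is a minimal sketch of what a detection fish.yaml might look like; the paths, class count, and class name below are placeholders for your own dataset:

# fish.yaml - minimal detection dataset config (all values are placeholders)
path: datasets/fish   # dataset root directory
train: images/train   # training images, relative to path
val: images/val       # validation images, relative to path
nc: 1                 # number of classes
names: ['fish']       # class names, one per class index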

Validation

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO


if __name__ == '__main__':
    model = YOLO(r'best.pt') # path to your trained weights
    model.val(data=r'fish.yaml', # dataset yaml file
              split='val', # can be train, val or test, depending on your dataset
              imgsz=640,
              batch=16,
              # iou=0.7,
              # conf=0.4,
              # rect=False,
              # save_json=True, # if you need to compute COCO metrics
              project='runs/val', # output directory
              name='exp', # run name
              )
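model.val() also returns a metrics object if you want the numbers programmatically; a minimal sketch using the attribute names documented for the Ultralytics detection metrics:

metrics = model.val(data=r'fish.yaml', split='val', imgsz=640, batch=16)
print(metrics.box.map)    # mAP50-95
print(metrics.box.map50)  # mAP50
print(metrics.box.maps)   # per-class mAP50-95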

Prediction

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    model = YOLO('best.pt') # weights for inference
    model.predict(source='test', # folder of images to predict
                  imgsz=640,
                  project='runs/detect', # output directory
                  name='exp', # run name
                  save=True,
                  # conf=0.2,
                  # iou=0.7,
                  # agnostic_nms=True,
                  # visualize=True, # visualize model feature maps
                  # line_width=2, # line width of the bounding boxes
                  # show_conf=False, # do not show prediction confidence
                  # show_labels=False, # do not show prediction labels
                  # save_txt=True, # save results as .txt file
                  # save_crop=True, # save cropped images with results
                )
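predict() also returns a list of Results objects if you need the raw detections rather than just the saved images; a short sketch using the documented Results/Boxes attributes:

results = model.predict(source='test', imgsz=640)
for r in results:
    print(r.boxes.xyxy)  # boxes as (x1, y1, x2, y2) in pixel coordinates
    print(r.boxes.conf)  # confidence scores
    print(r.boxes.cls)   # class indices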

Model structure

Quick summary:

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO

if __name__ == '__main__':
    # choose your yaml file
    model = YOLO('yolo11.yaml') # model config to inspect
    model.info(detailed=True)
    try:
        model.profile(imgsz=[640, 640])
    except Exception as e:
        print(e)
    model.fuse()

Detailed output (ONNX):

First install the onnx library:

pip install onnx

from ultralytics import YOLO

# load the trained model
model = YOLO(r"best.pt") # weights file
# export the model to ONNX format
success = model.export(format='onnx')

Then open the link printed in your console.
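If no link appears, the exported graph can be viewed with Netron, a third-party model viewer (a suggestion beyond the original workflow; install it with pip install netron):

import netron
netron.start('best.onnx')  # serves an interactive view of the network graph locally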

Feature maps

import warnings
warnings.filterwarnings('ignore')
from ultralytics import YOLO


pth_path = r"best.pt" # model weights file
test_path = r"test" # image or folder to visualize features for
model = YOLO(pth_path)  # load a custom model
metrics = model.predict(test_path, conf=0.5, save=True, visualize=True)
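With visualize=True, the per-stage feature maps are saved alongside the normal prediction output; predict() prints the exact save directory. A quick way to list what the run produced (the runs/detect/predict path below is an assumption; use the save_dir from your console output):

import glob

for f in glob.glob('runs/detect/predict/**/*', recursive=True):
    print(f)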

Heatmap

import warnings

warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import torch, yaml, cv2, os, shutil, sys, copy
import numpy as np

np.random.seed(0)
import matplotlib.pyplot as plt
from tqdm import trange
from PIL import Image
from ultralytics import YOLO
from ultralytics.nn.tasks import attempt_load_weights
from ultralytics.utils.torch_utils import intersect_dicts
from ultralytics.utils.ops import xywh2xyxy, non_max_suppression
from pytorch_grad_cam import GradCAMPlusPlus, GradCAM, XGradCAM, EigenCAM, HiResCAM, LayerCAM, RandomCAM, EigenGradCAM, \
    KPCA_CAM, AblationCAM
from pytorch_grad_cam.utils.image import show_cam_on_image, scale_cam_image
from pytorch_grad_cam.activations_and_gradients import ActivationsAndGradients


def letterbox(im, new_shape=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True, stride=32):
    # Resize and pad image while meeting stride-multiple constraints
    shape = im.shape[:2]  # current shape [height, width]
    if isinstance(new_shape, int):
        new_shape = (new_shape, new_shape)

    # Scale ratio (new / old)
    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better val mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, stride), np.mod(dh, stride)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (new_shape[1], new_shape[0])
        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        im = cv2.resize(im, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    im = cv2.copyMakeBorder(im, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border
    return im, ratio, (top, bottom, left, right)


class ActivationsAndGradients:
    """ Class for extracting activations and
    registering gradients from targeted intermediate layers.
    This YOLO-aware version shadows the class imported from pytorch_grad_cam above. """

    def __init__(self, model, target_layers, reshape_transform):
        self.model = model
        self.gradients = []
        self.activations = []
        self.reshape_transform = reshape_transform
        self.handles = []
        for target_layer in target_layers:
            self.handles.append(
                target_layer.register_forward_hook(self.save_activation))
            # Because of https://github.com/pytorch/pytorch/issues/61519,
            # we don't use backward hook to record gradients.
            self.handles.append(
                target_layer.register_forward_hook(self.save_gradient))

    def save_activation(self, module, input, output):
        activation = output

        if self.reshape_transform is not None:
            activation = self.reshape_transform(activation)
        self.activations.append(activation.cpu().detach())

    def save_gradient(self, module, input, output):
        if not hasattr(output, "requires_grad") or not output.requires_grad:
            # Hooks can only be registered on tensors that require grad.
            return

        # Gradients are computed in reverse order
        def _store_grad(grad):
            if self.reshape_transform is not None:
                grad = self.reshape_transform(grad)
            self.gradients = [grad.cpu().detach()] + self.gradients

        output.register_hook(_store_grad)

    def post_process(self, result):
        if self.model.end2end:
            logits_ = result[:, :, 4:]
            boxes_ = result[:, :, :4]
            sorted, indices = torch.sort(logits_[:, :, 0], descending=True)
            return logits_[0][indices[0]], boxes_[0][indices[0]]
        elif self.model.task == 'detect':
            logits_ = result[:, 4:]
            boxes_ = result[:, :4]
            sorted, indices = torch.sort(logits_.max(1)[0], descending=True)
            return torch.transpose(logits_[0], dim0=0, dim1=1)[indices[0]], torch.transpose(boxes_[0], dim0=0, dim1=1)[
                indices[0]]
        elif self.model.task == 'segment':
            logits_ = result[0][:, 4:4 + self.model.nc]
            boxes_ = result[0][:, :4]
            mask_p, mask_nm = result[1][2].squeeze(), result[1][1].squeeze().transpose(1, 0)
            c, h, w = mask_p.size()
            mask = (mask_nm @ mask_p.view(c, -1))
            sorted, indices = torch.sort(logits_.max(1)[0], descending=True)
            return torch.transpose(logits_[0], dim0=0, dim1=1)[indices[0]], torch.transpose(boxes_[0], dim0=0, dim1=1)[
                indices[0]], mask[indices[0]]
        elif self.model.task == 'pose':
            logits_ = result[:, 4:4 + self.model.nc]
            boxes_ = result[:, :4]
            poses_ = result[:, 4 + self.model.nc:]
            sorted, indices = torch.sort(logits_.max(1)[0], descending=True)
            return torch.transpose(logits_[0], dim0=0, dim1=1)[indices[0]], torch.transpose(boxes_[0], dim0=0, dim1=1)[
                indices[0]], torch.transpose(poses_[0], dim0=0, dim1=1)[indices[0]]
        elif self.model.task == 'obb':
            logits_ = result[:, 4:4 + self.model.nc]
            boxes_ = result[:, :4]
            angles_ = result[:, 4 + self.model.nc:]
            sorted, indices = torch.sort(logits_.max(1)[0], descending=True)
            return torch.transpose(logits_[0], dim0=0, dim1=1)[indices[0]], torch.transpose(boxes_[0], dim0=0, dim1=1)[
                indices[0]], torch.transpose(angles_[0], dim0=0, dim1=1)[indices[0]]
        elif self.model.task == 'classify':
            return result[0]

    def __call__(self, x):
        self.gradients = []
        self.activations = []
        model_output = self.model(x)
        if self.model.task == 'detect':
            post_result, pre_post_boxes = self.post_process(model_output[0])
            return [[post_result, pre_post_boxes]]
        elif self.model.task == 'segment':
            post_result, pre_post_boxes, pre_post_mask = self.post_process(model_output)
            return [[post_result, pre_post_boxes, pre_post_mask]]
        elif self.model.task == 'pose':
            post_result, pre_post_boxes, pre_post_pose = self.post_process(model_output[0])
            return [[post_result, pre_post_boxes, pre_post_pose]]
        elif self.model.task == 'obb':
            post_result, pre_post_boxes, pre_post_angle = self.post_process(model_output[0])
            return [[post_result, pre_post_boxes, pre_post_angle]]
        elif self.model.task == 'classify':
            data = self.post_process(model_output)
            return [data]

    def release(self):
        for handle in self.handles:
            handle.remove()


class yolo_detect_target(torch.nn.Module):
    def __init__(self, ouput_type, conf, ratio, end2end) -> None:
        super().__init__()
        self.ouput_type = ouput_type
        self.conf = conf
        self.ratio = ratio
        self.end2end = end2end

    def forward(self, data):
        post_result, pre_post_boxes = data
        result = []
        for i in trange(int(post_result.size(0) * self.ratio)):
            if (self.end2end and float(post_result[i, 0]) < self.conf) or (
                    not self.end2end and float(post_result[i].max()) < self.conf):
                break
            # with 'all', both the class score and the box terms should contribute,
            # so these are independent if-blocks rather than if/elif
            if self.ouput_type == 'class' or self.ouput_type == 'all':
                if self.end2end:
                    result.append(post_result[i, 0])
                else:
                    result.append(post_result[i].max())
            if self.ouput_type == 'box' or self.ouput_type == 'all':
                for j in range(4):
                    result.append(pre_post_boxes[i, j])
        return sum(result)


class yolo_segment_target(yolo_detect_target):
    def __init__(self, ouput_type, conf, ratio, end2end):
        super().__init__(ouput_type, conf, ratio, end2end)

    def forward(self, data):
        post_result, pre_post_boxes, pre_post_mask = data
        result = []
        for i in trange(int(post_result.size(0) * self.ratio)):
            if float(post_result[i].max()) < self.conf:
                break
            # independent if-blocks so that 'all' accumulates every term
            if self.ouput_type == 'class' or self.ouput_type == 'all':
                result.append(post_result[i].max())
            if self.ouput_type == 'box' or self.ouput_type == 'all':
                for j in range(4):
                    result.append(pre_post_boxes[i, j])
            if self.ouput_type == 'segment' or self.ouput_type == 'all':
                result.append(pre_post_mask[i].mean())
        return sum(result)


class yolo_pose_target(yolo_detect_target):
    def __init__(self, ouput_type, conf, ratio, end2end):
        super().__init__(ouput_type, conf, ratio, end2end)

    def forward(self, data):
        post_result, pre_post_boxes, pre_post_pose = data
        result = []
        for i in trange(int(post_result.size(0) * self.ratio)):
            if float(post_result[i].max()) < self.conf:
                break
            # independent if-blocks so that 'all' accumulates every term
            if self.ouput_type == 'class' or self.ouput_type == 'all':
                result.append(post_result[i].max())
            if self.ouput_type == 'box' or self.ouput_type == 'all':
                for j in range(4):
                    result.append(pre_post_boxes[i, j])
            if self.ouput_type == 'pose' or self.ouput_type == 'all':
                result.append(pre_post_pose[i].mean())
        return sum(result)


class yolo_obb_target(yolo_detect_target):
    def __init__(self, ouput_type, conf, ratio, end2end):
        super().__init__(ouput_type, conf, ratio, end2end)

    def forward(self, data):
        post_result, pre_post_boxes, pre_post_angle = data
        result = []
        for i in trange(int(post_result.size(0) * self.ratio)):
            if float(post_result[i].max()) < self.conf:
                break
            # independent if-blocks so that 'all' accumulates every term
            if self.ouput_type == 'class' or self.ouput_type == 'all':
                result.append(post_result[i].max())
            if self.ouput_type == 'box' or self.ouput_type == 'all':
                for j in range(4):
                    result.append(pre_post_boxes[i, j])
            if self.ouput_type == 'obb' or self.ouput_type == 'all':
                result.append(pre_post_angle[i])
        return sum(result)


class yolo_classify_target(yolo_detect_target):
    def __init__(self, ouput_type, conf, ratio, end2end):
        super().__init__(ouput_type, conf, ratio, end2end)

    def forward(self, data):
        return data.max()


class yolo_heatmap:
    def __init__(self, weight, device, method, layer, backward_type, conf_threshold, ratio, show_result, renormalize,
                 task, img_size):
        device = torch.device(device)
        model_yolo = YOLO(weight)
        model_names = model_yolo.names
        print(f'model class info:{model_names}')
        model = copy.deepcopy(model_yolo.model)
        model.to(device)
        model.info()
        for p in model.parameters():
            p.requires_grad_(True)
        model.eval()

        model.task = task
        if not hasattr(model, 'end2end'):
            model.end2end = False

        if task == 'detect':
            target = yolo_detect_target(backward_type, conf_threshold, ratio, model.end2end)
        elif task == 'segment':
            target = yolo_segment_target(backward_type, conf_threshold, ratio, model.end2end)
        elif task == 'pose':
            target = yolo_pose_target(backward_type, conf_threshold, ratio, model.end2end)
        elif task == 'obb':
            target = yolo_obb_target(backward_type, conf_threshold, ratio, model.end2end)
        elif task == 'classify':
            target = yolo_classify_target(backward_type, conf_threshold, ratio, model.end2end)
        else:
            raise Exception(f"unsupported task ({task}).")

        target_layers = [model.model[l] for l in layer]
        method = eval(method)(model, target_layers)  # resolve the CAM class from its name
        method.activations_and_grads = ActivationsAndGradients(model, target_layers, None)

        colors = np.random.uniform(0, 255, size=(len(model_names), 3)).astype(np.int32)
        self.__dict__.update(locals())  # expose every constructor local as an attribute

    def post_process(self, result):
        result = non_max_suppression(result, conf_thres=self.conf_threshold, iou_thres=0.65)[0]
        return result

    def draw_detections(self, box, color, name, img):
        xmin, ymin, xmax, ymax = list(map(int, list(box)))
        cv2.rectangle(img, (xmin, ymin), (xmax, ymax), tuple(int(x) for x in color), 2)  # draw the detection box
        cv2.putText(img, str(name), (xmin, ymin - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.8, tuple(int(x) for x in color), 2,
                    lineType=cv2.LINE_AA)  # draw class name and confidence
        return img

    def renormalize_cam_in_bounding_boxes(self, boxes, image_float_np, grayscale_cam):
        """Normalize the CAM to the range [0, 1]
        inside every bounding box, and zero it outside the bounding boxes. """
        renormalized_cam = np.zeros(grayscale_cam.shape, dtype=np.float32)
        for x1, y1, x2, y2 in boxes:
            x1, y1 = max(x1, 0), max(y1, 0)
            x2, y2 = min(grayscale_cam.shape[1] - 1, x2), min(grayscale_cam.shape[0] - 1, y2)
            renormalized_cam[y1:y2, x1:x2] = scale_cam_image(grayscale_cam[y1:y2, x1:x2].copy())
        renormalized_cam = scale_cam_image(renormalized_cam)
        eigencam_image_renormalized = show_cam_on_image(image_float_np, renormalized_cam, use_rgb=True)
        return eigencam_image_renormalized

    def process(self, img_path, save_path):
        # image preprocessing
        try:
            img = cv2.imdecode(np.fromfile(img_path, np.uint8), cv2.IMREAD_COLOR)
        except Exception:
            print(f"Warning... {img_path} read failure.")
            return
        img, _, (top, bottom, left, right) = letterbox(img, new_shape=(self.img_size, self.img_size),
                                                       auto=True)  # set auto=False to force an exact img_size x img_size output
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        img = np.float32(img) / 255.0
        tensor = torch.from_numpy(np.transpose(img, axes=[2, 0, 1])).unsqueeze(0).to(self.device)
        print(f'tensor size:{tensor.size()}')

        try:
            grayscale_cam = self.method(tensor, [self.target])
        except AttributeError as e:
            print(f"Warning... self.method(tensor, [self.target]) failure.")
            return

        grayscale_cam = grayscale_cam[0, :]
        cam_image = show_cam_on_image(img, grayscale_cam, use_rgb=True)

        pred = self.model_yolo.predict(tensor, conf=self.conf_threshold, iou=0.7)[0]
        if self.renormalize and self.task in ['detect', 'segment', 'pose']:
            cam_image = self.renormalize_cam_in_bounding_boxes(pred.boxes.xyxy.cpu().detach().numpy().astype(np.int32),
                                                               img, grayscale_cam)
        if self.show_result:
            cam_image = pred.plot(img=cam_image,
                                  conf=True,  # show confidence scores
                                  font_size=None,  # font size; None = scale with image size
                                  line_width=None,  # line width; None = scale with image size
                                  labels=False,  # hide class labels
                                  )

        # crop away the letterbox padding
        cam_image = cam_image[top:cam_image.shape[0] - bottom, left:cam_image.shape[1] - right]
        cam_image = Image.fromarray(cam_image)
        cam_image.save(save_path)

    def __call__(self, img_path, save_path):
        # remove dir if exist
        if os.path.exists(save_path):
            shutil.rmtree(save_path)
        # make dir if not exist
        os.makedirs(save_path, exist_ok=True)

        if os.path.isdir(img_path):
            for img_path_ in os.listdir(img_path):
                self.process(f'{img_path}/{img_path_}', f'{save_path}/{img_path_}')
        else:
            self.process(img_path, f'{save_path}/result.png')


def get_params():
    params = {
        'weight': r'D:\TestMain\yolov11\yolo11n.pt',  # only the weights are needed now; no cfg required
        'device': 'cuda:0',
        'method': 'GradCAMPlusPlus',
        # GradCAMPlusPlus, GradCAM, XGradCAM, EigenCAM, HiResCAM, LayerCAM, RandomCAM, EigenGradCAM, KPCA_CAM
        'layer': [10, 12, 14, 16, 18],
        'backward_type': 'all',
        # detect:<class, box, all> segment:<class, box, segment, all> pose:<class, box, pose, all> obb:<class, box, obb, all> classify:<all>
        'conf_threshold': 0.2,
        'ratio': 0.02,  # typical range 0.02-0.1
        'show_result': False,  # set True to draw detection results on top of the heatmap
        'renormalize': False,  # set True to restrict the heatmap to detected boxes (detect/segment/pose only)
        'task': 'detect',  # task: detect, segment, pose, obb or classify
        'img_size': 640,  # input image size
    }
    return params


# pip install grad-cam==1.5.4 --no-deps
if __name__ == '__main__':
    model = yolo_heatmap(**get_params())
    model(r'fish', r'out') # first argument: input image or folder; second argument: output folder
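The indices in 'layer' refer to positions in the model's module list. A quick sketch to print them so you can choose which layers to visualize (this assumes the usual Ultralytics layout, where YOLO(...).model.model is the module sequence):

from ultralytics import YOLO

m = YOLO('yolo11n.pt')
for i, layer in enumerate(m.model.model):
    print(i, layer.__class__.__name__)  # index -> module type, e.g. Conv, C2PSA, Detect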
