[Learning Notes] Recording Stereo Camera SLAM Data with CARLA

1. Data Recording

The recording part is adapted from code found online. The code itself is not complicated; most of it is basic CARLA API usage. The key point is that CARLA has no built-in stereo camera, so we assemble a stereo rig ourselves by attaching two cameras with identical orientation, which shows up in the position and rotation used when the cameras are spawned. Each sensor then saves its data through a callback. To make it easier to build the groundtruth later, I also record GNSS and IMU data. The traffic-management part has been commented out, and all traffic lights are forced green, so the vehicle never stops to wait at lights during recording. The code is as follows:


#!/usr/bin/env python
import glob
import os
import sys
try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla
import random
import numpy as np
import cv2
from queue import Queue, Empty
import copy

random.seed(0)
from agents.navigation.behavior_agent import BehaviorAgent

# args
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--host', metavar='H',    default='127.0.0.1', help='IP of the host server (default: 127.0.0.1)')
parser.add_argument('--port', '-p',           default=2000, type=int, help='TCP port to listen to (default: 2000)')
parser.add_argument('--tm_port',              default=8000, type=int, help='Traffic Manager Port (default: 8000)')
parser.add_argument('--ego-spawn', type=list, default=None, help='[x,y] in world coordinate')
parser.add_argument('--top-view',             default=True, help='Setting spectator to top view on ego car')
parser.add_argument('--map',                  default='Town10HD_Opt', help='Town Map')
parser.add_argument('--sync',                 default=True, help='Synchronous mode execution')
parser.add_argument('--sensor-h',             default=2.4, help='Sensor Height')
parser.add_argument('--save-path',            default='/home/zhihe/Documents/Dataset/CARLA/Town10/', help='Path to save the recorded data')
parser.add_argument('--behavior', type=str,   default='normal', help='Choose one of the possible agent behaviors')
args = parser.parse_args()

IM_WIDTH = 1392
IM_HEIGHT = 512

actor_list, sensor_list = [], []
sensor_type = ['rgb','lidar','imu','gnss']
def main(args):
    client = carla.Client(args.host, args.port)
    client.set_timeout(5.0)
    
    traffic_manager = client.get_trafficmanager()
    # world = client.get_world()
    world_name = args.map
    world = client.load_world(world_name)
    # Get all traffic lights in the map
    traffic_lights = world.get_actors().filter('traffic.traffic_light')

    # Force every light to green and freeze it so the ego vehicle never has to wait
    for traffic_light in traffic_lights:
        traffic_light.set_state(carla.TrafficLightState.Green)
        traffic_light.freeze(True)  # freeze the current state so the light stays green

    blueprint_library = world.get_blueprint_library()
    try:
        original_settings = world.get_settings()
        settings = world.get_settings()
        settings.fixed_delta_seconds = 0.05
        settings.synchronous_mode = True
        world.apply_settings(settings)

        traffic_manager.set_synchronous_mode(True)
        spectator = world.get_spectator()

        points_in_map = world.get_map().get_spawn_points()
        start_position = points_in_map[56]
        end_position = points_in_map[84]
        ego_vehicle = world.spawn_actor(random.choice(blueprint_library.filter("model3")), start_position)
        actor_list.append(ego_vehicle)

        if args.sync:
            world.tick()
        else:
            world.wait_for_tick()

        physics_control = ego_vehicle.get_physics_control()
        physics_control.use_sweep_wheel_collision = True
        ego_vehicle.apply_physics_control(physics_control)

        #-------------------------- Sensor setup --------------------------#
        sensor_queue = Queue()
        cam_bp = blueprint_library.find('sensor.camera.rgb')
        # lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
        imu_bp = blueprint_library.find('sensor.other.imu')
        gnss_bp = blueprint_library.find('sensor.other.gnss')

        cam_bp.set_attribute("image_size_x", "{}".format(IM_WIDTH))
        cam_bp.set_attribute("image_size_y", "{}".format(IM_HEIGHT))
        cam_bp.set_attribute("fov", "60")
        # cam_bp.set_attribute('sensor_tick', '0.1')

        cam01 = world.spawn_actor(cam_bp, carla.Transform(carla.Location(y=-1, z=args.sensor_h),carla.Rotation(yaw=0)), attach_to=ego_vehicle)
        cam01.listen(lambda data: sensor_callback(data, sensor_queue, "rgb_left"))
        sensor_list.append(cam01)

        cam02 = world.spawn_actor(cam_bp, carla.Transform(carla.Location(y=1, z=args.sensor_h),carla.Rotation(yaw=0)), attach_to=ego_vehicle)
        cam02.listen(lambda data: sensor_callback(data, sensor_queue, "rgb_right"))
        sensor_list.append(cam02)

        # lidar_bp.set_attribute('channels', '64')
        # lidar_bp.set_attribute('points_per_second', '200000')
        # lidar_bp.set_attribute('range', '32')
        # lidar_bp.set_attribute('rotation_frequency', str(int(1/settings.fixed_delta_seconds))) 
        
        # lidar01 = world.spawn_actor(lidar_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
        # lidar01.listen(lambda data: sensor_callback(data, sensor_queue, "lidar"))
        # sensor_list.append(lidar01)

        imu01 = world.spawn_actor(imu_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
        imu01.listen(lambda data: sensor_callback(data, sensor_queue, "imu"))
        sensor_list.append(imu01)

        gnss01 = world.spawn_actor(gnss_bp, carla.Transform(carla.Location(z=args.sensor_h)), attach_to=ego_vehicle)
        gnss01.listen(lambda data: sensor_callback(data, sensor_queue, "gnss"))
        sensor_list.append(gnss01)
        #-------------------------- Sensors ready --------------------------#

        # Truncate the IMU / GNSS logs from any previous run (create the folders on the first run)
        for sub in ('imu', 'gnss'):
            os.makedirs(args.save_path + sub, exist_ok=True)
            with open(args.save_path + sub + '/' + str(args.map) + '.txt', 'w') as file:
                file.write("")

        # Delete images left over from a previous run
        for directory in (args.save_path + 'rgb/image_left', args.save_path + 'rgb/image_right'):
            os.makedirs(directory, exist_ok=True)
            for file in os.listdir(directory):
                if file.lower().endswith(('.png', '.jpg', '.jpeg', '.gif', '.bmp')):
                    os.remove(os.path.join(directory, file))

        agent = BehaviorAgent(ego_vehicle, behavior=args.behavior)
        agent.set_destination(end_position.location)

        while True:
            # Tick the server
            # agent.update_information(ego_vehicle)
            world.tick()

            # Make the spectator camera follow the ego vehicle (top-down view)
            loc = ego_vehicle.get_transform().location
            spectator.set_transform(carla.Transform(carla.Location(x=loc.x,y=loc.y,z=35),carla.Rotation(yaw=0,pitch=-90,roll=0)))

            w_frame = world.get_snapshot().frame
            print("\nWorld's frame: %d" % w_frame)
            try:
                rgbs_left = []
                rgbs_right = []
                rgb_timestamp = 0
                for _ in range(len(sensor_list)):
                    s_frame, s_name, s_data = sensor_queue.get(True, 1.0)
                    print("    Frame: %d   Sensor: %s" % (s_frame, s_name))
                    # sensor_type = s_name.split('_')[0]
                    if s_name == 'rgb_left':
                        rgb_timestamp = s_data.timestamp
                        rgbs_left.append(_parse_image_cb(s_data))
                    elif s_name == 'rgb_right':
                        rgb_timestamp = s_data.timestamp
                        rgbs_right.append(_parse_image_cb(s_data))
                    elif s_name == 'lidar':
                        lidar = _parse_lidar_cb(s_data)
                    elif s_name == 'imu':
                        imu = s_data
                    elif s_name == 'gnss':
                        gnss = s_data
                
                # Stitch the per-camera images; the imshow visualization below is optional
                rgb_left = np.concatenate(rgbs_left, axis=1)
                rgb_right = np.concatenate(rgbs_right, axis=1)
                # cv2.imshow('vizs', visualize_data(rgb, imu_yaw, gnss))
                # cv2.imshow('vizs', visualize_data(rgb, lidar, imu_yaw, gnss))

                # cv2.waitKey(100)
                mkdir_folder(args.save_path)
                if args.save_path is not None:
                    filename = args.save_path + 'rgb/image_left/' + str(rgb_timestamp) + '.png'
                    cv2.imwrite(filename, np.array(rgb_left[..., ::-1]))

                    filename = args.save_path + 'rgb/image_right/' + str(rgb_timestamp) + '.png'
                    cv2.imwrite(filename, np.array(rgb_right[..., ::-1]))
                    # filename = args.save_path + 'lidar/' + str(w_frame) + '.npy'
                    # np.save(filename, lidar)

                    file_path = args.save_path + 'imu/' + str(args.map) + '.txt'
                    with open(file_path, 'a') as file:
                        file.write(str(imu.timestamp) + ' ' + str(imu.gyroscope.y) + ' ' + str(imu.gyroscope.x) + ' ' + str(imu.gyroscope.z) + '\n')

                    file_path = args.save_path + 'gnss/' + str(args.map) + '.txt'
                    with open(file_path, 'a') as file:
                        file.write(str(gnss.timestamp) + ' ' + str(gnss.latitude) + ' ' + str(gnss.longitude) + ' ' + str(gnss.altitude) + '\n')

            except Empty:
                print("    Some of the sensor information is missed")

            if agent.done():
                break
            
            control = agent.run_step()
            control.manual_gear_shift = False
            ego_vehicle.apply_control(control)

    finally:
        world.apply_settings(original_settings)
        traffic_manager.set_synchronous_mode(False)
        for sensor in sensor_list:
            sensor.destroy()
        for actor in actor_list:
            actor.destroy()
        
        print("All cleaned up!")

def mkdir_folder(path):
    # Create an output folder per sensor type, plus the stereo image subfolders
    for s_type in sensor_type:
        os.makedirs(os.path.join(path, s_type), exist_ok=True)
    for sub in ('rgb/image_left', 'rgb/image_right'):
        os.makedirs(os.path.join(path, sub), exist_ok=True)
    return True

def sensor_callback(sensor_data, sensor_queue, sensor_name):
    # Do stuff with the sensor_data data like save it to disk
    # Then you just need to add to the queue
    sensor_queue.put((sensor_data.frame, sensor_name, sensor_data))

# modified from the World on Rails code
def visualize_data(rgb, imu_yaw, gnss, text_args=(cv2.FONT_HERSHEY_SIMPLEX, 0.3, (255,255,255), 1)):

    canvas = np.array(rgb[...,::-1])

    # if lidar is not None:
    #     lidar_viz = lidar_to_bev(lidar).astype(np.uint8)
    #     lidar_viz = cv2.cvtColor(lidar_viz,cv2.COLOR_GRAY2RGB)
    #     canvas = np.concatenate([canvas, cv2.resize(lidar_viz.astype(np.uint8), (canvas.shape[0], canvas.shape[0]))], axis=1)

    # cv2.putText(canvas, f'yaw angle: {imu_yaw:.3f}', (4, 10), *text_args)
    # cv2.putText(canvas, f'log: {gnss[0]:.3f} alt: {gnss[1]:.3f} brake: {gnss[2]:.3f}', (4, 20), *text_args)

    return canvas
# modified from the World on Rails code
def lidar_to_bev(lidar, min_x=-24,max_x=24,min_y=-16,max_y=16, pixels_per_meter=4, hist_max_per_pixel=10):
    xbins = np.linspace(
        min_x, max_x+1,
        (max_x - min_x) * pixels_per_meter + 1,
    )
    ybins = np.linspace(
        min_y, max_y+1,
        (max_y - min_y) * pixels_per_meter + 1,
    )
    # Compute histogram of x and y coordinates of points.
    hist = np.histogramdd(lidar[..., :2], bins=(xbins, ybins))[0]
    # Clip histogram
    hist[hist > hist_max_per_pixel] = hist_max_per_pixel
    # Normalize histogram by the maximum number of points in a bin we care about.
    overhead_splat = hist / hist_max_per_pixel * 255.
    # Return splat in X x Y orientation, with X parallel to car axis, Y perp, both parallel to ground.
    return overhead_splat[::-1,:]

# modified from the manual_control example
def _parse_image_cb(image):
    array = np.frombuffer(image.raw_data, dtype=np.dtype("uint8"))
    array = np.reshape(array, (image.height, image.width, 4))
    array = array[:, :, :3]
    array = array[:, :, ::-1]
    return array
# modified from the CARLA leaderboard code
def _parse_lidar_cb(lidar_data):
    points = np.frombuffer(lidar_data.raw_data, dtype=np.dtype('f4'))
    points = copy.deepcopy(points)
    points = np.reshape(points, (int(points.shape[0] / 4), 4))
    return points


if __name__ == "__main__":
    try:
        main(args)
    except KeyboardInterrupt:
        print(' - Exited by user.')
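
To run the recorder, start the CARLA server first and then launch the script. The sensor subfolders under --save-path (rgb/image_left, rgb/image_right, imu, gnss) are created automatically on the first run. A typical invocation might look like this (record_stereo.py is a placeholder name for the script above):

python record_stereo.py --map Town10HD_Opt --save-path /path/to/Dataset/CARLA/Town10/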

To make repeated recordings easier, I use predefined spawn points of the map as the start and end of the route. To conveniently see all spawn points of the current map, the following script visualizes each spawn point with its index:

#!/usr/bin/env python

# Copyright (c) 2018 Intel Labs.
# authors: German Ros (german.ros@intel.com)
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

"""Example of automatic vehicle control from client side."""

from __future__ import print_function

import argparse
import collections
import datetime
import glob
import logging
import math
import os
import numpy.random as random
import re
import sys
import weakref
import cv2

try:
    import pygame
    from pygame.locals import KMOD_CTRL
    from pygame.locals import K_ESCAPE
    from pygame.locals import K_q
except ImportError:
    raise RuntimeError('cannot import pygame, make sure pygame package is installed')

try:
    import numpy as np
except ImportError:
    raise RuntimeError(
        'cannot import numpy, make sure numpy package is installed')

# ==============================================================================
# -- Find CARLA module ---------------------------------------------------------
# ==============================================================================
try:
    sys.path.append(glob.glob('../carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass

import carla

client = carla.Client('localhost',2000)
world = client.get_world()
#world.set_weather(world.get_weather().ClearNight)
m = world.get_map()
transform = carla.Transform()
spectator = world.get_spectator()
bv_transform = carla.Transform(transform.location + carla.Location(z=250,x=0), carla.Rotation(yaw=0, pitch=-90))
spectator.set_transform(bv_transform)

spawn_points = m.get_spawn_points()

for i, spawn_point in enumerate(spawn_points):
    world.debug.draw_string(spawn_point.location, str(i), life_time=100)
    world.debug.draw_arrow(spawn_point.location, spawn_point.location + spawn_point.get_forward_vector(), life_time=100)
    
    
# Keep the script alive so the debug labels stay visible (life_time is 100 seconds)
while True:
    world.wait_for_tick()

2. Ground-Truth Processing

The IMU and GNSS records from the recording stage need processing before they can be stitched into a TUM-format groundtruth. The code below does that stitching. It assumes the IMU file holds a timestamp plus RPY angles, and the GNSS file a timestamp plus x/y/z coordinates. When converting to TUM format, the RPY angles are turned into a quaternion; x/y/z should in principle be directly usable, but the stitched result often fails to align with the SLAM trajectory, so I recommend adjusting the order (and scale) of x/y/z according to your actual SLAM output.

import numpy as np
from scipy.spatial.transform import Rotation as R

# Convert RPY (roll/pitch/yaw, in radians) to a quaternion
def rpy_to_quaternion(roll, pitch, yaw):
    r = R.from_euler('xyz', [roll, pitch, yaw], degrees=False)
    q = r.as_quat()  # [qx, qy, qz, qw]
    return q[0], q[1], q[2], q[3]

# Load the IMU and GNSS files
imu_data = np.loadtxt('./Town10/imu/Town10HD_Opt.txt')
gnss_data = np.loadtxt('./Town10/gnss/Town10HD_Opt.txt')

# Check that the timestamps match
assert np.array_equal(imu_data[:, 0], gnss_data[:, 0]), "timestamps do not match"

# List holding the groundtruth trajectory
trajectory = []

# Iterate over every row
for i in range(len(imu_data)):
    timestamp = imu_data[i, 0]
    roll, pitch, yaw = imu_data[i, 1], imu_data[i, 2], imu_data[i, 3]
    tx, ty, tz = gnss_data[i, 1], gnss_data[i, 2], gnss_data[i, 3]
    
    # Convert RPY to a quaternion
    qx, qy, qz, qw = rpy_to_quaternion(roll, pitch, yaw)
    
    # Assemble the TUM row; the axis order and scale factors here were tuned
    # to align with the SLAM output (adjust them for your own runs)
    trajectory.append([timestamp, 100000*tx, 0*tz, 100000*ty, qx, qy, qz, qw])

# Save the result in TUM format
np.savetxt('groundtruth.txt', trajectory, fmt='%.6f', delimiter=' ', header="timestamp tx ty tz qx qy qz qw")
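
The resulting groundtruth.txt follows the TUM trajectory convention: one pose per line, timestamp tx ty tz qx qy qz qw. As a quick sanity check (a minimal sketch against the file written above), you can load it back and confirm the quaternions are unit length:

import numpy as np

# np.savetxt writes the header as a '#' comment line, which loadtxt skips
gt = np.loadtxt('groundtruth.txt')
print(gt.shape)  # (N, 8)
q_norm = np.linalg.norm(gt[:, 4:8], axis=1)
print(q_norm.min(), q_norm.max())  # both should be ~1.0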

3. Running ORB-SLAM2 on a Custom CARLA Dataset

To run ORB-SLAM2 on a custom dataset, you first have to build a times.txt yourself, otherwise the images cannot be read. When recording I used the timestamp as the image filename, and converting that name directly to a double can overflow its precision, so here I limit it to six decimal places.

import os

def save_sorted_and_rename_files(directory_path, output_file):
    # Collect all .png files in the directory
    filenames = [f for f in os.listdir(directory_path) if f.endswith('.png') and os.path.isfile(os.path.join(directory_path, f))]
    
    # Strip the .png suffix, parse the name as a float, and sort numerically
    filenames = sorted(filenames, key=lambda x: float(x.replace('.png', '')))
    
    # Rename the files and write the sorted names into the txt file
    with open(output_file, 'w') as file:
        for original_filename in filenames:
            # Strip .png and keep six decimal places
            try:
                float_value = float(original_filename.replace('.png', ''))
                new_filename = f"{float_value:.6f}.png"
                
                # Rename the file on disk
                original_path = os.path.join(directory_path, original_filename)
                new_path = os.path.join(directory_path, new_filename)
                os.rename(original_path, new_path)
                
                # Write the new name (without the extension)
                file.write(f"{new_filename.replace('.png', '')}\n")
                
            except ValueError:
                # Skip filenames that cannot be parsed as a float
                continue

# Example usage: run this for both image_left and image_right so the stereo pairs keep matching filenames
directory_path = './Town10/rgb/image_left/'  # replace with your directory
output_file = './Town10/rgb/times.txt'       # output file

save_sorted_and_rename_files(directory_path, output_file)
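
After running it, times.txt holds one timestamp per line, sorted ascending and fixed to six decimals, which ORB-SLAM2 parses as doubles. The values below are illustrative only (with fixed_delta_seconds = 0.05 the stamps step by 0.05 s):

12.345678
12.395678
12.445678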

Beyond that, the key point is the camera intrinsics and extrinsics in CARLA; placing a calibration board in the simulator is obviously impractical. I found a solution in the comments of the CARLA GitHub repository, and the same approach is in fact used in the official example scripts. The intrinsics can be computed with the following formulas:

Focal_length = ImageSizeX / (2 * tan(CameraFOV * π / 360))
Center_X = ImageSizeX / 2
Center_Y = ImageSizeY / 2

Here Focal_length is the focal length f, and Center_X and Center_Y are cx and cy. CameraFOV must match the setting used when recording; the code above uses an FOV of 60, which with ImageSizeX = 1392 gives f = 1392 / (2 * tan(30°)) ≈ 1206, cx = 696 and cy = 256. The extrinsics follow from the transforms used when attaching the cameras: in my script the two cameras share the same orientation and differ only in y, and in CARLA units this difference is two meters, i.e. a baseline of 2 m. In the ORB-SLAM2 yaml file, however, the Camera.bf field is not in meters but is the baseline multiplied by the focal length, so we multiply the 2 m baseline by the focal length computed above: 2 × 1206 = 2412. The resulting configuration file is:
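
For reference, here is a minimal sketch that reproduces the numbers in the yaml below from the recording settings (IM_WIDTH, IM_HEIGHT and FOV are the values used in the recording script; the 2 m baseline comes from the two camera transforms y=-1 and y=1):

import math

IM_WIDTH, IM_HEIGHT, FOV = 1392, 512, 60.0  # values from the recording script
BASELINE = 2.0                              # |y_left - y_right| = 2 m

fx = IM_WIDTH / (2.0 * math.tan(FOV * math.pi / 360.0))  # ~1205.5, rounded to 1206
cx, cy = IM_WIDTH / 2.0, IM_HEIGHT / 2.0                 # 696, 256
bf = BASELINE * round(fx)                                # Camera.bf = baseline * fx = 2412

print(round(fx), cx, cy, bf)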

%YAML:1.0

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV) 
Camera.fx: 1206
Camera.fy: 1206
Camera.cx: 696
Camera.cy: 256

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0

Camera.width: 1392
Camera.height: 512

# Camera frames per second 
Camera.fps: 20.0

# stereo baseline times fx
Camera.bf: 2412

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 40

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 2000

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 12
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.6
Viewer.KeyFrameLineWidth: 2
Viewer.GraphLineWidth: 1
Viewer.PointSize: 2
Viewer.CameraSize: 0.7
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -100
Viewer.ViewpointZ: -0.1
Viewer.ViewpointF: 2000

If the configuration file is wrong, the ORB-SLAM2 result differs drastically, most visibly in turns: incorrect intrinsics or extrinsics directly cause large deviations whenever the vehicle turns.
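
Once a run finishes, the trajectory can be compared against the groundtruth from section 2 using evo. A typical invocation might look like the following (assuming evo is installed via pip and the ORB-SLAM2 trajectory was saved in TUM format as CameraTrajectory.txt; -a aligns the trajectories, which matters given the axis-order adjustments above):

evo_ape tum groundtruth.txt CameraTrajectory.txt -va --plot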

Finally, a screenshot of a successful run, evaluated with evo:

[figure: evo evaluation of the resulting trajectory]
