Training RandLA-Net on a Custom Dataset


Paper: https://arxiv.org/abs/1911.11236 (RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds)


Setting Up the Training Environment

  1. git clone https://github.com/QingyongHu/RandLA-Net.git
  2. Set up the Python environment (I use Python 3.9 here):
    conda create -n randlanet python=3.9
    source activate randlanet
    pip install tensorflow==2.15.0 -i https://pypi.tuna.tsinghua.edu.cn/simple  --timeout=120
    pip install -r helper_requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
    pip install Cython -i https://pypi.tuna.tsinghua.edu.cn/simple
    conda install -c conda-forge scikit-learn
    
  3. cd utils/cpp_wrappers/cpp_subsampling/ and run python setup.py build_ext --inplace, which produces grid_subsampling.cpython-39-x86_64-linux-gnu.so
  4. cd utils/nearest_neighbors and run python setup.py build_ext --inplace, which produces nearest_neighbors.cpython-39-x86_64-linux-gnu.so (a quick import check is sketched after this list)
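A minimal sanity check, assuming the two .so files above were built successfully and that you run it from the RandLA-Net repo root, to confirm that TensorFlow and the compiled wrappers import cleanly:

    # quick_check.py (hypothetical file name), run from the repo root
    import tensorflow as tf
    import utils.cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling
    import utils.nearest_neighbors.nearest_neighbors as nearest_neighbors

    print('TensorFlow:', tf.__version__)
    print('loaded:', cpp_subsampling.__name__, nearest_neighbors.__name__)

If both imports succeed, the .so files sit exactly where the modified helper_tool.py (see the next section) expects them.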

Preparing the Dataset

  1. I labeled the dataset with CloudCompare; search online for the specific annotation workflow.
  2. Create make_train_dataset.py to generate the training data:
    # When I wrote this code, only God and I knew what it was doing.
    # Now, only God knows.
    # @File : make_cloud_train_datasets.py
    # @Author : J.
    # @desc : generate the RandLA-Net training dataset
    
    from sklearn.neighbors import KDTree
    from os.path import join, exists, dirname, abspath
    import numpy as np
    import os, glob, pickle
    import sys
    
    BASE_DIR = dirname(abspath(__file__))
    ROOT_DIR = dirname(BASE_DIR)
    sys.path.append(BASE_DIR)
    sys.path.append(ROOT_DIR)
    from helper_ply import write_ply
    from helper_tool import DataProcessing as DP
    
    grid_size = 0.01
    dataset_path = './data/sample/original_data'
    original_pc_folder = join(dirname(dataset_path), 'original_ply')
    sub_pc_folder = join(dirname(dataset_path), 'input_{:.3f}'.format(grid_size))
    os.mkdir(original_pc_folder) if not exists(original_pc_folder) else None
    os.mkdir(sub_pc_folder) if not exists(sub_pc_folder) else None
    
    railway_cnt = 0
    background_cnt = 0
    for pc_path in glob.glob(join(dataset_path, '*.txt')):
        file_name = os.path.basename(pc_path)[:-4]
        if exists(join(sub_pc_folder, file_name + '_KDTree.pkl')):
            continue
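        # each row of the exported .txt is expected to be: x y z r g b label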
        pc = np.loadtxt(pc_path)
        labels = pc[:, -1].astype(np.uint8)
        values, counts = np.unique(labels, return_counts=True)
        # I labeled two classes (background included);
        # count the number of points in each class
        for i in range(len(values)):
            if values[i] == 0:
                background_cnt = background_cnt + counts[i]
            elif values[i] == 1:
                railway_cnt = railway_cnt + counts[i]
    
     
        full_ply_path = join(original_pc_folder, file_name + '.ply')
        #  Subsample to save space
        sub_points, sub_colors, sub_labels = DP.grid_sub_sampling(pc[:, :3].astype(np.float32),
                                                                  pc[:, 3:6].astype(np.uint8), labels, 0.01)
        
        sub_labels = np.squeeze(sub_labels)
        write_ply(full_ply_path, (sub_points, sub_colors, sub_labels), ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])
        # save sub_cloud and KDTree file
        sub_xyz, sub_colors, sub_labels = DP.grid_sub_sampling(sub_points, sub_colors, sub_labels, grid_size)
        sub_colors = sub_colors / 255.0
        sub_labels = np.squeeze(sub_labels)
        sub_ply_file = join(sub_pc_folder, file_name + '.ply')
        write_ply(sub_ply_file, [sub_xyz, sub_colors, sub_labels], ['x', 'y', 'z', 'red', 'green', 'blue', 'class'])
    
        search_tree = KDTree(sub_xyz, leaf_size=50)
        kd_tree_file = join(sub_pc_folder, file_name + '_KDTree.pkl')
        with open(kd_tree_file, 'wb') as f:
            pickle.dump(search_tree, f)
        proj_idx = np.squeeze(search_tree.query(sub_points, return_distance=False))
        proj_idx = proj_idx.astype(np.int32)
        proj_save = join(sub_pc_folder, file_name + '_proj.pkl')
        with open(proj_save, 'wb') as f:
            pickle.dump([proj_idx, labels], f)
    # print the per-class point counts (these feed num_per_class in get_class_weights)
    print("----> background_cnt: " + str(background_cnt))
    print("----> railway_cnt: " + str(railway_cnt))
    
  3. Modify helper_tool.py:
     #import cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling
     #import nearest_neighbors.lib.python.nearest_neighbors as nearest_neighbors
     # change the two imports above to:
     import utils.cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling
     import utils.nearest_neighbors.nearest_neighbors as nearest_neighbors
    
    ...
    # copy one of the existing config classes and give it your own name
    class ConfigSample:
        k_n = 16  # KNN
        num_layers = 5  # Number of layers
        num_points = 16000  # Number of input points
        # includes the background class; to exclude background, adjust ignored_labels instead
        num_classes = 2  # Number of valid classes  
        sub_grid_size = 0.01  # preprocess_parameter # Todo
        batch_size = 4  # batch_size during training
        val_batch_size = 2  # batch_size during validation and test
        train_steps = 500  # Number of steps per epoch
        val_steps = 3  # Number of validation steps per epoch
        sub_sampling_ratio = [4, 4, 4, 4, 2]  # sampling ratio of random sampling at each layer
        d_out = [16, 64, 128, 256, 512]  # feature dimension
        noise_init = 3.5  # noise initial parameter
        max_epoch = 100  # maximum epoch during training
        learning_rate = 1e-2  # initial learning rate
        lr_decays = {i: 0.95 for i in range(0, 500)}  # decay rate of learning rate
        train_sum_dir = 'train_log'
        saving = True
        saving_path = None
      
        augment_scale_anisotropic = True
        augment_symmetries = [True, False, False]
        augment_rotation = 'vertical'
        augment_scale_min = 0.8
        augment_scale_max = 1.2
        augment_noise = 0.001
        augment_occlusion = 'none'
        augment_color = 0.8
    
        @staticmethod
        def get_class_weights(dataset_name):
            # pre-calculate the number of points in each category
            num_per_class = []
            if dataset_name == 'S3DIS':
                num_per_class = np.array([3370714, 2856755, 4919229, 318158, 375640, 478001, 974733,
                                          650464, 791496, 88727, 1284130, 229758, 2272837], dtype=np.int32)
            elif dataset_name == 'Semantic3D':
                num_per_class = np.array([5181602, 5012952, 6830086, 1311528, 10476365, 946982, 334860, 269353],
                                         dtype=np.int32)
            elif dataset_name == 'SemanticKITTI':
                num_per_class = np.array([55437630, 320797, 541736, 2578735, 3274484, 552662, 184064, 78858,
                                          240942562, 17294618, 170599734, 6369672, 230413074, 101130274, 476491114,
                                          9833174, 129609852, 4506626, 1168181])
            # TODO: add a branch for your own dataset (a worked example follows this listing)
            elif dataset_name == 'Sample':
                # number of points in each class, taken from the counts printed by make_train_dataset.py
                num_per_class = np.array([4401119, 148313])
            weight = num_per_class / float(sum(num_per_class))
            ce_label_weight = 1 / (weight + 0.02)
            return np.expand_dims(ce_label_weight, axis=0)
    ...
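For the two classes above, here is a small standalone sketch of what this weighting produces (the counts are the ones printed by make_train_dataset.py, and the formula is the same one used in get_class_weights):

    import numpy as np

    num_per_class = np.array([4401119, 148313])          # [background, railway]
    weight = num_per_class / float(sum(num_per_class))   # ~[0.967, 0.033]
    ce_label_weight = 1 / (weight + 0.02)                # ~[1.013, 19.011]
    print(np.expand_dims(ce_label_weight, axis=0))       # shape (1, 2)

The rare railway class ends up weighted roughly 19 times more heavily than background in the cross-entropy loss, which is exactly what the re-weighting is for.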
    

Training

  1. Create main_Sample.py (copied from main_S3DIS.py):
from os.path import join, exists
from RandLANet import Network
from tester_Railway import ModelTester
from helper_ply import read_ply
from helper_tool import Plot
from helper_tool import DataProcessing as DP
from helper_tool import ConfigSample as cfg
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import numpy as np
import pickle, argparse, os

class Railway:
    def __init__(self):
        self.name = 'Sample'
        # an absolute path is safest here
        self.path = '/home/ab/workspace/train/randla-net-tf2-main/data/sample'
        self.label_to_names = {0: 'background', 1: 'sample'}
        self.num_classes = len(self.label_to_names)
        self.label_values = np.sort([k for k, v in self.label_to_names.items()])
        self.label_to_idx = {l: i for i, l in enumerate(self.label_values)}
        # to ignore the background class, use np.sort([0])
        #self.ignored_labels = np.sort([0]) # TODO
        self.ignored_labels = np.sort([]) # TODO

        self.original_folder = join(self.path, 'original_data')
        self.full_pc_folder = join(self.path, 'original_ply')
        self.sub_pc_folder = join(self.path, 'input_{:.3f}'.format(cfg.sub_grid_size))

        # the train/val/test clouds all live in original_data, so split them here by file name
        self.val_split = ["20240430205457370","20240430205527591"]
        self.test_split = ["20240430205530638"]

        # Initial training-validation-testing files
        self.train_files = []
        self.val_files = []
        self.test_files = []
        cloud_names = [file_name[:-4] for file_name in os.listdir(self.original_folder) if file_name[-4:] == '.txt']
        # split into training / validation / test sets by file name
        for pc_name in cloud_names:
            pc_file=join(self.sub_pc_folder, pc_name + '.ply')
            if pc_name in self.val_split:
                self.val_files.append(pc_file)
            elif pc_name in self.test_split:
                self.test_files.append(pc_file)
            else:
                self.train_files.append(pc_file)
        # Initiate containers
        self.val_proj = []
        self.val_labels = []
        self.test_proj = []
        self.test_labels = []

        self.possibility = {}
        self.min_possibility = {}
        self.class_weight = {}
        self.input_trees = {'training': [], 'validation': [], 'test': []}
        self.input_colors = {'training': [], 'validation': [], 'test': []}
        self.input_labels = {'training': [], 'validation': []}

        # Ascii files dict for testing
        self.ascii_files = {
            '20240430205530638.ply': '20240430205530638-reduced.labels'}

        self.load_sub_sampled_clouds(cfg.sub_grid_size)

    def load_sub_sampled_clouds(self, sub_grid_size):
        tree_path = join(self.path, 'input_{:.3f}'.format(sub_grid_size))
        files = np.hstack((self.train_files, self.val_files, self.test_files))
        for i, file_path in enumerate(files):
            cloud_name = file_path.split('/')[-1][:-4]
            print('Load_pc_' + str(i) + ': ' + cloud_name)
            if file_path in self.val_files:
                cloud_split = 'validation'
            elif file_path in self.train_files:
                cloud_split = 'training'
            else:
                cloud_split = 'test'

            # Name of the input files
            kd_tree_file = join(tree_path, '{:s}_KDTree.pkl'.format(cloud_name))
            sub_ply_file = join(tree_path, '{:s}.ply'.format(cloud_name))

            # read ply with data
            data = read_ply(sub_ply_file)
            sub_colors = np.vstack((data['red'], data['green'], data['blue'])).T
            if cloud_split == 'test':
                sub_labels = None
            else:
                sub_labels = data['class']

            # Read pkl with search tree
            with open(kd_tree_file, 'rb') as f:
                search_tree = pickle.load(f)

            self.input_trees[cloud_split] += [search_tree]
            self.input_colors[cloud_split] += [sub_colors]
            if cloud_split in ['training', 'validation']:
                self.input_labels[cloud_split] += [sub_labels]

        # Get validation and test re_projection indices
        print('\nPreparing reprojection indices for validation and test')

        for i, file_path in enumerate(files):

            # get cloud name and split
            cloud_name = file_path.split('/')[-1][:-4]

            # Validation projection and labels
            if file_path in self.val_files:
                proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name))
                with open(proj_file, 'rb') as f:
                    proj_idx, labels = pickle.load(f)
                self.val_proj += [proj_idx]
                self.val_labels += [labels]

            # Test projection
            if file_path in self.test_files:
                proj_file = join(tree_path, '{:s}_proj.pkl'.format(cloud_name))
                with open(proj_file, 'rb') as f:
                    proj_idx, labels = pickle.load(f)
                self.test_proj += [proj_idx]
                self.test_labels += [labels]
        print('finished')
        return

    # Generate the input data flow
    def get_batch_gen(self, split):
        if split == 'training':
            num_per_epoch = cfg.train_steps * cfg.batch_size
        elif split == 'validation':
            num_per_epoch = cfg.val_steps * cfg.val_batch_size
        elif split == 'test':
            num_per_epoch = cfg.val_steps * cfg.val_batch_size

        # Reset possibility
        self.possibility[split] = []
        self.min_possibility[split] = []
        self.class_weight[split] = []

        # Random initialize
        for i, tree in enumerate(self.input_trees[split]):
            self.possibility[split] += [np.random.rand(tree.data.shape[0]) * 1e-3]
            self.min_possibility[split] += [float(np.min(self.possibility[split][-1]))]
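        # 'possibility' accumulates how much each point has been covered by previous crops;
        # the generator below always starts from the least-covered point of the least-covered
        # cloud, so the random crops of cfg.num_points points stay spatially spread over the data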

        if split != 'test':
            _, num_class_total = np.unique(np.hstack(self.input_labels[split]), return_counts=True)
            self.class_weight[split] += [np.squeeze([num_class_total / np.sum(num_class_total)], axis=0)]

        def spatially_regular_gen():
            # Generator loop
            for i in range(num_per_epoch):  # num_per_epoch
                # Choose the cloud with the lowest probability
                cloud_idx = int(np.argmin(self.min_possibility[split]))
                # choose the point with the minimum of possibility in the cloud as query point
                point_ind = np.argmin(self.possibility[split][cloud_idx])
                # Get all points within the cloud from tree structure
                points = np.array(self.input_trees[split][cloud_idx].data, copy=False)
                # print("points........." + str(points.shape))
                # Center point of input region
                center_point = points[point_ind, :].reshape(1, -1)
                # Add noise to the center point
                noise = np.random.normal(scale=cfg.noise_init / 10, size=center_point.shape)
                pick_point = center_point + noise.astype(center_point.dtype)
                query_idx = self.input_trees[split][cloud_idx].query(pick_point, k=cfg.num_points)[1][0]
                # Shuffle index
                query_idx = DP.shuffle_idx(query_idx)
                # Get corresponding points and colors based on the index
                queried_pc_xyz = points[query_idx]
                queried_pc_xyz[:, 0:2] = queried_pc_xyz[:, 0:2] - pick_point[:, 0:2]
                queried_pc_colors = self.input_colors[split][cloud_idx][query_idx]
                if split == 'test':
                    queried_pc_labels = np.zeros(queried_pc_xyz.shape[0])
                    queried_pt_weight = 1
                else:
                    queried_pc_labels = self.input_labels[split][cloud_idx][query_idx]
                    queried_pc_labels = np.array([self.label_to_idx[l] for l in queried_pc_labels])
                    queried_pt_weight = np.array([self.class_weight[split][0][n] for n in queried_pc_labels])

                # Update the possibility of the selected points
                dists = np.sum(np.square((points[query_idx] - pick_point).astype(np.float32)), axis=1)
                delta = np.square(1 - dists / np.max(dists)) * queried_pt_weight
                self.possibility[split][cloud_idx][query_idx] += delta
                self.min_possibility[split][cloud_idx] = float(np.min(self.possibility[split][cloud_idx]))
                if True:
                    yield (queried_pc_xyz,
                           queried_pc_colors.astype(np.float32),
                           queried_pc_labels,
                           query_idx.astype(np.int32),
                           np.array([cloud_idx], dtype=np.int32))
        gen_func = spatially_regular_gen
        gen_types = (tf.float32, tf.float32, tf.int32, tf.int32, tf.int32)
        gen_shapes = ([None, 3], [None, 3], [None], [None], [None])
        return gen_func, gen_types, gen_shapes

    def get_tf_mapping(self):
        # Collect flat inputs
        def tf_map(batch_xyz, batch_features, batch_labels, batch_pc_idx, batch_cloud_idx):
            batch_features = tf.map_fn(self.tf_augment_input, [batch_xyz, batch_features], dtype=tf.float32)
            input_points = []
            input_neighbors = []
            input_pools = []
            input_up_samples = []

            for i in range(cfg.num_layers):
                neigh_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32)
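                # query_idx was shuffled in the generator, so keeping only the first
                # 1/sub_sampling_ratio[i] of the points below amounts to random sub-sampling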
                sub_points = batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
                pool_i = neigh_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
                up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32)
                input_points.append(batch_xyz)
                input_neighbors.append(neigh_idx)
                input_pools.append(pool_i)
                input_up_samples.append(up_i)
                batch_xyz = sub_points

            input_list = input_points + input_neighbors + input_pools + input_up_samples
            input_list += [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx]

            return input_list

        return tf_map

    # data augmentation
    @staticmethod
    def tf_augment_input(inputs):
        xyz = inputs[0]
        features = inputs[1]
        theta = tf.random_uniform((1,), minval=0, maxval=2 * np.pi)
        # Rotation matrices
        c, s = tf.cos(theta), tf.sin(theta)
        cs0 = tf.zeros_like(c)
        cs1 = tf.ones_like(c)
        R = tf.stack([c, -s, cs0, s, c, cs0, cs0, cs0, cs1], axis=1)
        stacked_rots = tf.reshape(R, (3, 3))

        # Apply rotations
        transformed_xyz = tf.reshape(tf.matmul(xyz, stacked_rots), [-1, 3])
        # Choose random scales for each example
        min_s = cfg.augment_scale_min
        max_s = cfg.augment_scale_max
        if cfg.augment_scale_anisotropic:
            s = tf.random_uniform((1, 3), minval=min_s, maxval=max_s)
        else:
            s = tf.random_uniform((1, 1), minval=min_s, maxval=max_s)

        symmetries = []
        for i in range(3):
            if cfg.augment_symmetries[i]:
                symmetries.append(tf.round(tf.random_uniform((1, 1))) * 2 - 1)
            else:
                symmetries.append(tf.ones([1, 1], dtype=tf.float32))
        s *= tf.concat(symmetries, 1)

        # Create N x 3 vector of scales to multiply with stacked_points
        stacked_scales = tf.tile(s, [tf.shape(transformed_xyz)[0], 1])

        # Apply scales
        transformed_xyz = transformed_xyz * stacked_scales

        noise = tf.random_normal(tf.shape(transformed_xyz), stddev=cfg.augment_noise)
        transformed_xyz = transformed_xyz + noise
        # rgb = features[:, :3]
        # stacked_features = tf.concat([transformed_xyz, rgb], axis=-1)
        return transformed_xyz

    def init_input_pipeline(self):
        print('Initiating input pipelines')
        cfg.ignored_label_inds = [self.label_to_idx[ign_label] for ign_label in self.ignored_labels]
        gen_function, gen_types, gen_shapes = self.get_batch_gen('training')
        gen_function_val, _, _ = self.get_batch_gen('validation')
        gen_function_test, _, _ = self.get_batch_gen('test')
        self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes)
        self.val_data = tf.data.Dataset.from_generator(gen_function_val, gen_types, gen_shapes)
        self.test_data = tf.data.Dataset.from_generator(gen_function_test, gen_types, gen_shapes)

        self.batch_train_data = self.train_data.batch(cfg.batch_size)
        self.batch_val_data = self.val_data.batch(cfg.val_batch_size)
        self.batch_test_data = self.test_data.batch(cfg.val_batch_size)
        map_func = self.get_tf_mapping()

        self.batch_train_data = self.batch_train_data.map(map_func=map_func)
        self.batch_val_data = self.batch_val_data.map(map_func=map_func)
        self.batch_test_data = self.batch_test_data.map(map_func=map_func)

        self.batch_train_data = self.batch_train_data.prefetch(cfg.batch_size)
        self.batch_val_data = self.batch_val_data.prefetch(cfg.val_batch_size)
        self.batch_test_data = self.batch_test_data.prefetch(cfg.val_batch_size)

        iter = tf.data.Iterator.from_structure(self.batch_train_data.output_types, self.batch_train_data.output_shapes)
        self.flat_inputs = iter.get_next()
        self.train_init_op = iter.make_initializer(self.batch_train_data)
        self.val_init_op = iter.make_initializer(self.batch_val_data)
        self.test_init_op = iter.make_initializer(self.batch_test_data)


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--gpu', type=int, default=0, help='GPU ID to use [default: 0]')
    parser.add_argument('--mode', type=str, default='train', help='options: train, test, vis')
    parser.add_argument('--model_path', type=str, default='None', help='pretrained model path')
    FLAGS = parser.parse_args()

    GPU_ID = FLAGS.gpu
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    os.environ['CUDA_VISIBLE_DEVICES'] = str(GPU_ID)
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

    Mode = FLAGS.mode
    dataset = Railway()
    dataset.init_input_pipeline()

    if Mode == 'train':
        model = Network(dataset, cfg)
        model.train(dataset)
    elif Mode == 'test':
        cfg.saving = False
        model = Network(dataset, cfg)
        if FLAGS.model_path != 'None':
            chosen_snap = FLAGS.model_path
        else:
            chosen_snapshot = -1
            logs = np.sort([os.path.join('results', f) for f in os.listdir('results') if f.startswith('Log')])
            chosen_folder = logs[-1]
            snap_path = join(chosen_folder, 'snapshots')
            snap_steps = [int(f[:-5].split('-')[-1]) for f in os.listdir(snap_path) if f[-5:] == '.meta']
            chosen_step = np.sort(snap_steps)[-1]
            chosen_snap = os.path.join(snap_path, 'snap-{:d}'.format(chosen_step))
        print(".............. chosen_snap:" + chosen_snap)
        tester = ModelTester(model, dataset, restore_snap=chosen_snap)
        tester.test(model, dataset)

    else:
        ##################
        # Visualize data #
        ##################

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(dataset.train_init_op)
            while True:
                flat_inputs = sess.run(dataset.flat_inputs)
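                # with cfg.num_layers = 5, flat_inputs holds 4 * 5 per-layer tensors
                # (points, neighbors, pools, up-samples) followed by features (index 20),
                # labels (index 21), point indices (22) and cloud indices (23)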
                pc_xyz = flat_inputs[0]
                sub_pc_xyz = flat_inputs[1]
                labels = flat_inputs[21]
                Plot.draw_pc_sem_ins(pc_xyz[0, :, :], labels[0, :])
                Plot.draw_pc_sem_ins(sub_pc_xyz[0, :, :], labels[0, 0:np.shape(sub_pc_xyz)[1]])

  2. Start training: python main_Sample.py --mode train --gpu 0 (a quick GPU check is sketched below)
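Before a long run it can be worth confirming that this TensorFlow build actually sees the GPU (a generic check, nothing specific to RandLA-Net):

    import tensorflow as tf
    print(tf.config.list_physical_devices('GPU'))   # expect at least one GPU entry

Once training is running, checkpoints land under results/ (the test branch of main_Sample.py picks the newest results/Log*/snapshots automatically when --model_path is not given) and TensorBoard summaries go to the train_log directory set by cfg.train_sum_dir.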

References

  1. https://github.com/QingyongHu/RandLA-Net
  2. https://blog.csdn.net/weixin_40653140/article/details/130285289
