End-to-End Point Cloud Machine Learning (Part 2): Training and Evaluating PointNet with PaddlePaddle


Preface

This came out of the high-formwork (高支模) project: the traditional algorithm used to segment the horizontal and vertical bars is a complicated and time-consuming process, so the question was whether machine learning could take over that work, and that is the job I picked up.

Building on the dataset split described in the previous article, we can now train on the prepared data.

The core backbone network code used in this series mainly comes from the article 点云处理:实现PointNet点云分割.

The original code had a few problems, so some modifications were made here; in short, this uses a PointNet implemented in PaddlePaddle for the segmentation task.

Workflow

Because this PointNet uses fully connected layers, the input must be downsampled to a fixed number of points, so the workflow is:

  1. Read the raw point cloud and labels
  2. Randomly sample the raw point cloud and labels
  3. Split the dataset
  4. Build the model
  5. Train
  6. Save the model
  7. Evaluate on a sample

Details

1. Dependencies

import os
import tqdm
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", module="matplotlib")
from mpl_toolkits.mplot3d import Axes3D
# Paddle-related libraries
import paddle
from paddle.io import Dataset
import paddle.nn.functional as F
from paddle.nn import Conv2D, MaxPool2D, Linear, BatchNorm, Dropout, ReLU, Softmax, Sequential

2. Visualizing the point cloud

def visualize_data(point_cloud, label, title):
    COLORS = ['b', 'y', 'r', 'g', 'pink']
    label_map = ['none', 'Support'] 
    df = pd.DataFrame(
        data={
            "x": point_cloud[:, 0],
            "y": point_cloud[:, 1],
            "z": point_cloud[:, 2],
            "label": label,
        }
    )
    fig = plt.figure(figsize=(15, 10))
    ax = plt.axes(projection="3d")
    ax.scatter(df["x"], df["y"], df["z"])
    for i in range(label.min(), label.max()+1):
        c_df = df[df['label'] == i]
        ax.scatter(c_df["x"], c_df["y"], c_df["z"], label=label_map[i], alpha=0.5, c=COLORS[i])
    ax.legend()
    plt.title(title)
    plt.show()
    input()
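
The per-class loop inside `visualize_data` is just boolean masking by label value. A minimal numpy sketch with hypothetical toy points (labels 0 and 1, matching the two entries of `label_map`; no matplotlib required):

```python
import numpy as np

# Toy stand-in data: 6 points, two classes (0 = none, 1 = Support).
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],
])
labels = np.array([0, 0, 1, 1, 1, 0])

# Same selection the plotting loop performs through the DataFrame.
for cls in range(labels.min(), labels.max() + 1):
    cls_points = points[labels == cls]
    print(cls, len(cls_points))
```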

3. Point cloud downsampling and the dataset

data_path = 'J:\\output\\Data'
label_path = 'J:\\output\\Label'
# Number of points to sample per cloud
NUM_SAMPLE_POINTS = 1024
# Storage for point clouds and labels
point_clouds = []
point_clouds_labels = []

file_list = os.listdir(data_path)
for file_name in tqdm.tqdm(file_list):
    # Build the data and label file paths
    label_name = file_name.replace('.pts', '.seg')
    point_cloud_file_path = os.path.join(data_path, file_name)
    label_file_path = os.path.join(label_path, label_name)
    # Read the data and labels
    point_cloud = np.loadtxt(point_cloud_file_path)
    label = np.loadtxt(label_file_path).astype('int')
    # Skip clouds that have fewer points than the sample size
    if len(point_cloud) < NUM_SAMPLE_POINTS:
        continue
    # Sampling
    num_points = len(point_cloud)
    # Pick the random sample indices
    sampled_indices = random.sample(range(num_points), NUM_SAMPLE_POINTS)
    # Sample the points
    sampled_point_cloud = point_cloud[sampled_indices]
    # Sample the labels
    sampled_label_cloud = label[sampled_indices]
    # Normalize: center the cloud, then scale it into the unit sphere
    norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0)
    norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1))
    # Store
    point_clouds.append(norm_point_cloud)
    point_clouds_labels.append(sampled_label_cloud)
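
The sampling-plus-normalization step above can also be written with numpy fancy indexing; a self-contained sketch on synthetic data (variable names are illustrative only, not from the pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw cloud with more points than the sample size.
raw = rng.normal(size=(5000, 3)) * 10.0 + 3.0
num_sample_points = 1024

# Random downsampling without replacement, same idea as random.sample.
idx = rng.choice(len(raw), size=num_sample_points, replace=False)
sampled = raw[idx]

# Center, then scale by the farthest point: the cloud fits in the unit sphere.
norm = sampled - sampled.mean(axis=0)
norm /= np.linalg.norm(norm, axis=1).max()

print(norm.shape)  # (1024, 3)
print(np.linalg.norm(norm, axis=1).max())  # ~1.0
```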


class MyDataset(Dataset):
    def __init__(self, data, label):
        super(MyDataset, self).__init__()
        self.data = data
        self.label = label

    def __getitem__(self, index):

        data = self.data[index]
        label = self.label[index]
        data = np.reshape(data, (1, 1024, 3))

        return data, label

    def __len__(self):
        return len(self.data)
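
The only per-sample work MyDataset does is the reshape to (1, 1024, 3), which adds the channel axis the first Conv2D expects. That logic can be checked without Paddle using a plain-Python stand-in (a sketch, not the paddle.io.Dataset subclass above):

```python
import numpy as np

class MiniDataset:
    """Plain-Python stand-in for MyDataset, only to illustrate __getitem__."""
    def __init__(self, data, label):
        self.data = data
        self.label = label

    def __getitem__(self, index):
        # Each sample gains a leading channel axis: (1024, 3) -> (1, 1024, 3)
        data = np.reshape(self.data[index], (1, 1024, 3))
        return data, self.label[index]

    def __len__(self):
        return len(self.data)

clouds = [np.zeros((1024, 3)) for _ in range(4)]
point_labels = [np.zeros(1024, dtype='int64') for _ in range(4)]
ds = MiniDataset(clouds, point_labels)
data, label = ds[0]
print(data.shape, label.shape)  # (1, 1024, 3) (1024,)
```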

4. Splitting the dataset

# Dataset split
VAL_SPLIT = 0.2
split_index = int(len(point_clouds) * (1 - VAL_SPLIT))
train_point_clouds = point_clouds[:split_index]
train_label_cloud = point_clouds_labels[:split_index]
total_training_examples = len(train_point_clouds)
val_point_clouds = point_clouds[split_index:]
val_label_cloud = point_clouds_labels[split_index:]
print("Num train point clouds:", len(train_point_clouds))
print("Num train point cloud labels:", len(train_label_cloud))
print("Num val point clouds:", len(val_point_clouds))
print("Num val point cloud labels:", len(val_label_cloud))
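
The 80/20 slice-based split above can be sanity-checked in isolation (the count here is hypothetical, standing in for len(point_clouds)):

```python
# Hypothetical count standing in for len(point_clouds).
num_clouds = 100
VAL_SPLIT = 0.2
split_index = int(num_clouds * (1 - VAL_SPLIT))

ids = list(range(num_clouds))
train_ids = ids[:split_index]
val_ids = ids[split_index:]
print(len(train_ids), len(val_ids))  # 80 20
```

Note that the split is purely positional: samples keep whatever order os.listdir returned, so shuffling the clouds and labels together before slicing would usually give a more representative validation set.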

# Sanity-check the custom dataset
train_dataset = MyDataset(train_point_clouds, train_label_cloud)
val_dataset = MyDataset(val_point_clouds, val_label_cloud)

print('=============custom dataset test=============')
for data, label in train_dataset:
    print('data shape:{} \nlabel shape:{}'.format(data.shape, label.shape))
    break

# Batch size
BATCH_SIZE = 64
# Data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
val_loader = paddle.io.DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False)

5. Building the PointNet network

class PointNet(paddle.nn.Layer):
    def __init__(self, name_scope='PointNet_', num_classes=4, num_point=1024):
        super(PointNet, self).__init__()
        self.num_point = num_point
        self.input_transform_net = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.input_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 9, 
                weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
                bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
            )
        )
        self.mlp_1 = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 64,(1, 1)),
            BatchNorm(64),
            ReLU(),
        )
        self.feature_transform_net = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),

            MaxPool2D((num_point, 1))
        )
        self.feature_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 64*64)
        )
        self.mlp_2 = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128,(1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024,(1, 1)),
            BatchNorm(1024),
            ReLU(),
        )
        self.seg_net = Sequential(
            Conv2D(1088, 512, (1, 1)),
            BatchNorm(512),
            ReLU(),
            Conv2D(512, 256, (1, 1)),
            BatchNorm(256),
            ReLU(),
            Conv2D(256, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, num_classes, (1, 1)),
            Softmax(axis=1)
        )
    def forward(self, inputs):
        batchsize = inputs.shape[0]

        t_net = self.input_transform_net(inputs)
        t_net = paddle.squeeze(t_net, axis=[-2, -1])  # squeeze only the spatial dims so batch size 1 still works
        t_net = self.input_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 3, 3])
      
        x = paddle.reshape(inputs, shape=(batchsize, 1024, 3))
        x = paddle.matmul(x, t_net)
        x = paddle.unsqueeze(x, axis=1)
        x = self.mlp_1(x)

        t_net = self.feature_transform_net(x)
        t_net = paddle.squeeze(t_net, axis=[-2, -1])  # squeeze only the spatial dims so batch size 1 still works
        t_net = self.feature_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 64, 64])

        x = paddle.reshape(x, shape=(batchsize, 64, 1024))
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.matmul(x, t_net)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.unsqueeze(x, axis=-1)
        point_feat = x
        x = self.mlp_2(x)
        x = paddle.max(x, axis=2)

        global_feat_expand = paddle.tile(paddle.unsqueeze(x, axis=1), [1, self.num_point, 1, 1])        
        x = paddle.concat([point_feat, global_feat_expand], axis=1)
        x = self.seg_net(x)
        x = paddle.squeeze(x, axis=-1)
        x = paddle.transpose(x, (0, 2, 1))

        return x
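
Inside forward, the input transform reduces to a batched matrix multiply of the (B, 1024, 3) points with the predicted (B, 3, 3) matrix. Because the last Linear in input_fc is initialized with zero weights and an identity bias, the transform starts out as a no-op; a numpy sketch of just that step (shapes assumed from the code above):

```python
import numpy as np

batchsize, num_point = 2, 1024
rng = np.random.default_rng(0)
x = rng.normal(size=(batchsize, num_point, 3))

# Identity initialization: zero weights plus an eye(3) bias.
t_net = np.tile(np.eye(3), (batchsize, 1, 1))

# Batched matmul: (B, 1024, 3) @ (B, 3, 3) -> (B, 1024, 3)
aligned = np.matmul(x, t_net)
print(np.allclose(aligned, x))  # True: the identity transform leaves points unchanged
```

During training the predicted matrix drifts away from this identity as the T-Net learns to align the inputs.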

# Build the model
model = PointNet()
model.train()
# Optimizer
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
# Loss function
loss_fn = paddle.nn.CrossEntropyLoss()
# Evaluation metric
m = paddle.metric.Accuracy()

6. Training the model

# Number of training epochs
epoch_num = 50
# Save a checkpoint every N epochs
save_interval = 2
# Validate every N epochs
val_interval = 2
best_acc = 0
# Model output directory
output_dir = './output'
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
# Training loop
plot_acc = []
plot_loss = []
for epoch in range(epoch_num):
    total_loss = 0
    for batch_id, data in enumerate(train_loader()):
        inputs = paddle.to_tensor(data[0], dtype='float32')
        labels = paddle.to_tensor(data[1], dtype='int64')
        predicts = model(inputs)

        # Compute the loss and backpropagate
        loss = loss_fn(predicts, labels)
        total_loss += float(loss)  # float() handles both 0-d and 1-element tensors
        loss.backward()
        # Compute accuracy: flatten to one row per point
        predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
        labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
        correct = m.compute(predicts, labels)
        m.update(correct)
        # Optimizer update
        optim.step()
        optim.clear_grad()
    # batch_id is zero-based, so the batch count is batch_id + 1
    avg_loss = total_loss / (batch_id + 1)
    plot_loss.append(avg_loss)
    print("epoch: {}/{}, loss is: {}, acc is:{}".format(epoch, epoch_num, avg_loss, m.accumulate()))
    m.reset()
    # Periodic checkpoint
    if epoch % save_interval == 0:
        paddle.save(model.state_dict(), './output/PointNet_{}.pdparams'.format(epoch))
        paddle.save(optim.state_dict(), './output/PointNet_{}.pdopt'.format(epoch))
    # Mid-training validation
    if epoch % val_interval == 0:
        model.eval()
        with paddle.no_grad():
            for batch_id, data in enumerate(val_loader()):
                inputs = paddle.to_tensor(data[0], dtype='float32')
                labels = paddle.to_tensor(data[1], dtype='int64')
                predicts = model(inputs)
                predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
                labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
                correct = m.compute(predicts, labels)
                m.update(correct)
        val_acc = m.accumulate()
        plot_acc.append(val_acc)
        if val_acc > best_acc:
            best_acc = val_acc
            print("===================================val===========================================")
            print('val best epoch in:{}, best acc:{}'.format(epoch, best_acc))
            print("===================================train===========================================")
            paddle.save(model.state_dict(), './output/best_model.pdparams')
            paddle.save(optim.state_dict(), './output/best_model.pdopt')
        m.reset()
        model.train()
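
The accuracy bookkeeping in the loop flattens the (batch, num_point, num_classes) predictions into one row per point before calling m.compute; the same reduction can be sketched in plain numpy with random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, num_point, num_classes = 2, 8, 4

# Fake per-point class scores and ground-truth labels.
predicts = rng.random((batch, num_point, num_classes))
point_labels = rng.integers(0, num_classes, size=(batch, num_point))

# Flatten so every point counts once, as in the reshape before m.compute.
flat_pred = predicts.reshape(batch * num_point, num_classes)
flat_label = point_labels.reshape(batch * num_point)

acc = (flat_pred.argmax(axis=1) == flat_label).mean()
print(flat_pred.shape, flat_label.shape)  # (16, 4) (16,)
```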

7. Predicting on a point cloud

ckpt_path = 'output/best_model.pdparams'
para_state_dict = paddle.load(ckpt_path)
# Rebuild the network and load the weights
model = PointNet()
model.set_state_dict(para_state_dict)
model.eval()
# Grab a sample from the dataset
point_cloud = point_clouds[0]
show_point_cloud = point_cloud
point_cloud = paddle.to_tensor(np.reshape(point_cloud, (1, 1, 1024, 3)), dtype='float32')
label = point_clouds_labels[0]
# Forward inference
preds = model(point_cloud)
show_pred = paddle.argmax(preds, axis=-1).numpy() + 1

visualize_data(show_point_cloud, show_pred[0], 'pred')
visualize_data(show_point_cloud, label, 'label')

Full pipeline code

import os
import tqdm
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", module="matplotlib")
from mpl_toolkits.mplot3d import Axes3D

# Paddle-related libraries
import paddle
from paddle.io import Dataset
import paddle.nn.functional as F
from paddle.nn import Conv2D, MaxPool2D, Linear, BatchNorm, Dropout, ReLU, Softmax, Sequential

# Colors and label names used for visualization


def visualize_data(point_cloud, label, title):
    COLORS = ['b', 'y', 'r', 'g', 'pink']
    label_map = ['none', 'Support'] 
    df = pd.DataFrame(
        data={
            "x": point_cloud[:, 0],
            "y": point_cloud[:, 1],
            "z": point_cloud[:, 2],
            "label": label,
        }
    )
    fig = plt.figure(figsize=(15, 10))
    ax = plt.axes(projection="3d")
    ax.scatter(df["x"], df["y"], df["z"])
    for i in range(label.min(), label.max()+1):
        c_df = df[df['label'] == i]
        ax.scatter(c_df["x"], c_df["y"], c_df["z"], label=label_map[i], alpha=0.5, c=COLORS[i])
    ax.legend()
    plt.title(title)
    plt.show()
    input()

data_path = 'J:\\output\\Data'
label_path = 'J:\\output\\Label'
# Number of points to sample per cloud
NUM_SAMPLE_POINTS = 1024
# Storage for point clouds and labels
point_clouds = []
point_clouds_labels = []

file_list = os.listdir(data_path)
for file_name in tqdm.tqdm(file_list):
    # Build the data and label file paths
    label_name = file_name.replace('.pts', '.seg')
    point_cloud_file_path = os.path.join(data_path, file_name)
    label_file_path = os.path.join(label_path, label_name)
    # Read the data and labels
    point_cloud = np.loadtxt(point_cloud_file_path)
    label = np.loadtxt(label_file_path).astype('int')
    # Skip clouds that have fewer points than the sample size
    if len(point_cloud) < NUM_SAMPLE_POINTS:
        continue
    # Sampling
    num_points = len(point_cloud)
    # Pick the random sample indices
    sampled_indices = random.sample(range(num_points), NUM_SAMPLE_POINTS)
    # Sample the points
    sampled_point_cloud = point_cloud[sampled_indices]
    # Sample the labels
    sampled_label_cloud = label[sampled_indices]
    # Normalize: center the cloud, then scale it into the unit sphere
    norm_point_cloud = sampled_point_cloud - np.mean(sampled_point_cloud, axis=0)
    norm_point_cloud /= np.max(np.linalg.norm(norm_point_cloud, axis=1))
    # Store
    point_clouds.append(norm_point_cloud)
    point_clouds_labels.append(sampled_label_cloud)
        
#visualize_data(point_clouds[0], point_clouds_labels[0], 'label')

class MyDataset(Dataset):
    def __init__(self, data, label):
        super(MyDataset, self).__init__()
        self.data = data
        self.label = label

    def __getitem__(self, index):

        data = self.data[index]
        label = self.label[index]
        data = np.reshape(data, (1, 1024, 3))

        return data, label

    def __len__(self):
        return len(self.data)

# Dataset split
VAL_SPLIT = 0.2
split_index = int(len(point_clouds) * (1 - VAL_SPLIT))
train_point_clouds = point_clouds[:split_index]
train_label_cloud = point_clouds_labels[:split_index]
total_training_examples = len(train_point_clouds)
val_point_clouds = point_clouds[split_index:]
val_label_cloud = point_clouds_labels[split_index:]
print("Num train point clouds:", len(train_point_clouds))
print("Num train point cloud labels:", len(train_label_cloud))
print("Num val point clouds:", len(val_point_clouds))
print("Num val point cloud labels:", len(val_label_cloud))

# Sanity-check the custom dataset
train_dataset = MyDataset(train_point_clouds, train_label_cloud)
val_dataset = MyDataset(val_point_clouds, val_label_cloud)

print('=============custom dataset test=============')
for data, label in train_dataset:
    print('data shape:{} \nlabel shape:{}'.format(data.shape, label.shape))
    break

# Batch size
BATCH_SIZE = 64
# Data loaders
train_loader = paddle.io.DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
val_loader = paddle.io.DataLoader(val_dataset, batch_size=BATCH_SIZE, shuffle=False)



class PointNet(paddle.nn.Layer):
    def __init__(self, name_scope='PointNet_', num_classes=4, num_point=1024):
        super(PointNet, self).__init__()
        self.num_point = num_point
        self.input_transform_net = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),
            MaxPool2D((num_point, 1))
        )
        self.input_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 9, 
                weight_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.zeros((256, 9)))),
                bias_attr=paddle.framework.ParamAttr(initializer=paddle.nn.initializer.Assign(paddle.reshape(paddle.eye(3), [-1])))
            )
        )
        self.mlp_1 = Sequential(
            Conv2D(1, 64, (1, 3)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 64,(1, 1)),
            BatchNorm(64),
            ReLU(),
        )
        self.feature_transform_net = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024, (1, 1)),
            BatchNorm(1024),
            ReLU(),

            MaxPool2D((num_point, 1))
        )
        self.feature_fc = Sequential(
            Linear(1024, 512),
            ReLU(),
            Linear(512, 256),
            ReLU(),
            Linear(256, 64*64)
        )
        self.mlp_2 = Sequential(
            Conv2D(64, 64, (1, 1)),
            BatchNorm(64),
            ReLU(),
            Conv2D(64, 128,(1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 1024,(1, 1)),
            BatchNorm(1024),
            ReLU(),
        )
        self.seg_net = Sequential(
            Conv2D(1088, 512, (1, 1)),
            BatchNorm(512),
            ReLU(),
            Conv2D(512, 256, (1, 1)),
            BatchNorm(256),
            ReLU(),
            Conv2D(256, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, 128, (1, 1)),
            BatchNorm(128),
            ReLU(),
            Conv2D(128, num_classes, (1, 1)),
            Softmax(axis=1)
        )
    def forward(self, inputs):
        batchsize = inputs.shape[0]

        t_net = self.input_transform_net(inputs)
        t_net = paddle.squeeze(t_net, axis=[-2, -1])  # squeeze only the spatial dims so batch size 1 still works
        t_net = self.input_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 3, 3])
      
        x = paddle.reshape(inputs, shape=(batchsize, 1024, 3))
        x = paddle.matmul(x, t_net)
        x = paddle.unsqueeze(x, axis=1)
        x = self.mlp_1(x)

        t_net = self.feature_transform_net(x)
        t_net = paddle.squeeze(t_net, axis=[-2, -1])  # squeeze only the spatial dims so batch size 1 still works
        t_net = self.feature_fc(t_net)
        t_net = paddle.reshape(t_net, [batchsize, 64, 64])

        x = paddle.reshape(x, shape=(batchsize, 64, 1024))
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.matmul(x, t_net)
        x = paddle.transpose(x, (0, 2, 1))
        x = paddle.unsqueeze(x, axis=-1)
        point_feat = x
        x = self.mlp_2(x)
        x = paddle.max(x, axis=2)

        global_feat_expand = paddle.tile(paddle.unsqueeze(x, axis=1), [1, self.num_point, 1, 1])        
        x = paddle.concat([point_feat, global_feat_expand], axis=1)
        x = self.seg_net(x)
        x = paddle.squeeze(x, axis=-1)
        x = paddle.transpose(x, (0, 2, 1))

        return x

# Build the model
model = PointNet()
model.train()
# Optimizer
optim = paddle.optimizer.Adam(parameters=model.parameters(), weight_decay=0.001)
# Loss function
loss_fn = paddle.nn.CrossEntropyLoss()
# Evaluation metric
m = paddle.metric.Accuracy()
# Number of training epochs
epoch_num = 50
# Save a checkpoint every N epochs
save_interval = 2
# Validate every N epochs
val_interval = 2
best_acc = 0
# Model output directory
output_dir = './output'
if not os.path.exists(output_dir):
    os.makedirs(output_dir)
# Training loop
plot_acc = []
plot_loss = []
for epoch in range(epoch_num):
    total_loss = 0
    for batch_id, data in enumerate(train_loader()):
        inputs = paddle.to_tensor(data[0], dtype='float32')
        labels = paddle.to_tensor(data[1], dtype='int64')
        predicts = model(inputs)

        # Compute the loss and backpropagate
        loss = loss_fn(predicts, labels)
        total_loss += float(loss)  # float() handles both 0-d and 1-element tensors
        loss.backward()
        # Compute accuracy: flatten to one row per point
        predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
        labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
        correct = m.compute(predicts, labels)
        m.update(correct)
        # Optimizer update
        optim.step()
        optim.clear_grad()
    # batch_id is zero-based, so the batch count is batch_id + 1
    avg_loss = total_loss / (batch_id + 1)
    plot_loss.append(avg_loss)
    print("epoch: {}/{}, loss is: {}, acc is:{}".format(epoch, epoch_num, avg_loss, m.accumulate()))
    m.reset()
    # Periodic checkpoint
    if epoch % save_interval == 0:
        paddle.save(model.state_dict(), './output/PointNet_{}.pdparams'.format(epoch))
        paddle.save(optim.state_dict(), './output/PointNet_{}.pdopt'.format(epoch))
    # Mid-training validation
    if epoch % val_interval == 0:
        model.eval()
        with paddle.no_grad():
            for batch_id, data in enumerate(val_loader()):
                inputs = paddle.to_tensor(data[0], dtype='float32')
                labels = paddle.to_tensor(data[1], dtype='int64')
                predicts = model(inputs)
                predicts = paddle.reshape(predicts, (predicts.shape[0]*predicts.shape[1], -1))
                labels = paddle.reshape(labels, (labels.shape[0]*labels.shape[1], 1))
                correct = m.compute(predicts, labels)
                m.update(correct)
        val_acc = m.accumulate()
        plot_acc.append(val_acc)
        if val_acc > best_acc:
            best_acc = val_acc
            print("===================================val===========================================")
            print('val best epoch in:{}, best acc:{}'.format(epoch, best_acc))
            print("===================================train===========================================")
            paddle.save(model.state_dict(), './output/best_model.pdparams')
            paddle.save(optim.state_dict(), './output/best_model.pdopt')
        m.reset()
        model.train()

ckpt_path = 'output/best_model.pdparams'
para_state_dict = paddle.load(ckpt_path)
# Rebuild the network and load the weights
model = PointNet()
model.set_state_dict(para_state_dict)
model.eval()
# Grab a sample from the dataset
point_cloud = point_clouds[0]
show_point_cloud = point_cloud
point_cloud = paddle.to_tensor(np.reshape(point_cloud, (1, 1, 1024, 3)), dtype='float32')
label = point_clouds_labels[0]
# Forward inference
preds = model(point_cloud)
show_pred = paddle.argmax(preds, axis=-1).numpy() + 1

visualize_data(show_point_cloud, show_pred[0], 'pred')
visualize_data(show_point_cloud, label, 'label')

Looking at the results, I ran a quick test on the point cloud data.

The detection target is the horizontal bars. Since the training set is small, the result is fairly mediocre; adding more data later should produce better results.
