Deploying CUDA-PointPillars on NVIDIA AGX Xavier


Background:

CUDA-PointPillars was first brought up and optimized on x86 with an NVIDIA GeForce GTX 1060 using our own LiDAR data. Deploying it to the edge device NVIDIA AGX Xavier ran into a lot of problems, which are recorded here for future reference.

References:

  1. A complete record of pitfalls installing OpenPCDet on NVIDIA Jetson AGX Xavier
  2. Configuring the OpenPCDet environment and deploying PointPillar on NVIDIA Jetson AGX Orin
  3. Installing PyTorch 1.11.0 + torchvision 0.12.0 on Jetson AGX Orin

Procedure:

  1. Install Anaconda for ARM (Archiconda)
    Reference: tutorial on setting up Anaconda on ARM
wget https://github.com/Archiconda/build-tools/releases/download/0.2.3/Archiconda3-0.2.3-Linux-aarch64.sh
sudo bash Archiconda3-0.2.3-Linux-aarch64.sh
  2. Create a virtual environment
conda create -n OpenPCDet_torch18 python=3.6
  3. Download and verify torch
    Download reference: PyTorch for Jetson
    Check the Xavier's JetPack version with sudo jtop: it is JetPack 4.4 (L4T R32.4.3),
    which limits the choice to the PyTorch v1.6.0 ~ PyTorch v1.10.0 wheels.
$ conda activate OpenPCDet_torch18
$ python -m pip install torch-1.8.0-cp36-cp36m-linux_aarch64.whl 
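
The wheel choice above is dictated by the L4T release, which can also be read directly on the board without jtop. A minimal sketch, assuming the standard /etc/nv_tegra_release file that JetPack installs:

# Print the L4T release string on a Jetson board.
from pathlib import Path

release_file = Path("/etc/nv_tegra_release")
if release_file.exists():
    # Typical first line: "# R32 (release), REVISION: 4.3, ..."
    print(release_file.read_text().splitlines()[0])
else:
    print("not a Jetson board, or JetPack is not installed")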

Verify torch:

$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
Illegal instruction (core dumped)

Solution:
Reference: posts on the "Illegal instruction (core dumped)" error and its fix. The crash is reportedly caused by newer NumPy/OpenBLAS builds on aarch64; pinning an older NumPy avoids it (exporting OPENBLAS_CORETYPE=ARMV8 is another commonly reported workaround):

python -m pip install numpy==1.19.3

Verify again:

$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> print(torch.__version__)
1.8.0
>>> print('CUDA available: ' + str(torch.cuda.is_available()))
CUDA available: True
>>> print('cuDNN version: ' + str(torch.backends.cudnn.version()))
cuDNN version: 8000
>>> a = torch.cuda.FloatTensor(2).zero_()
>>> print('Tensor a = ' + str(a))
Tensor a = tensor([0., 0.], device='cuda:0')
>>> b = torch.randn(2).cuda()
>>> print('Tensor b = ' + str(b))
Tensor b = tensor([ 1.4377, -0.4534], device='cuda:0')
>>> c = a + b
>>> print('Tensor c = ' + str(c))
Tensor c = tensor([ 1.4377, -0.4534], device='cuda:0')
  4. Download and verify torchvision
    Download from the torchvision GitHub repository.
    torch 1.8.0 pairs with torchvision 0.9.0.
    On the repository page switch from main to the tags view, open the v0.9.0 release
    ("Mobile support, AutoAugment, improved IO and more"), and download
    Source code (zip).
$ unzip vision-v0.9.0.zip
$ cd vision-v0.9.0/
$ ls
android         CODE_OF_CONDUCT.md  examples    MANIFEST.in  README.rst  setup.py     tox.ini
cmake           CONTRIBUTING.md     hubconf.py  mypy.ini     references  test         version.txt
CMakeLists.txt  docs                LICENSE     packaging    setup.cfg   torchvision
$ export BUILD_VERSION=0.9.0
$ python setup.py install --user
...
Using /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages
Finished processing dependencies for torchvision==0.9.0

$ python -m pip install 'pillow<7'

Verify torchvision:

$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torchvision
>>> print(torchvision.__version__)
0.9.0
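
Since torchvision was built from source here, a quick smoke test of the compiled C++ ops says more than the import alone. A minimal sketch, not from the original post:

# torchvision.ops.nms needs the compiled extension, so it fails loudly if the build is broken.
import torch
import torchvision

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
keep = torchvision.ops.nms(boxes, scores, iou_threshold=0.5)
print("torchvision", torchvision.__version__, "nms kept:", keep)
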
  5. Install cumm and spconv
    spconv download site
    On the NVIDIA Jetson series spconv has to be rebuilt from source, and spconv depends on cumm, so cumm must be installed first.
    cumm download site
$ export CUMM_CUDA_VERSION="10.2"
$ export CUMM_CUDA_ARCH_LIST="7.2"
$ python -m pip install pccm
$ git clone https://github.com/FindDefinition/cumm
$ cd cumm/
$ pip install -e .

Verify cumm:

$ pip list | grep cumm
cumm-cu102                    0.3.7     /home/nvidia/torch_xavier/cumm
$ python 
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cumm
>>> print(cumm.__version__)
0.3.7
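
The CUMM_CUDA_ARCH_LIST="7.2" used above must match the GPU's compute capability (7.2 on Xavier). A minimal sketch to read it from PyTorch, not part of the original post:

# Print the compute capability that CUMM_CUDA_ARCH_LIST should be set to.
import torch

major, minor = torch.cuda.get_device_capability(0)
print("device:", torch.cuda.get_device_name(0))
print('CUMM_CUDA_ARCH_LIST should be: "%d.%d"' % (major, minor))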

Install spconv:

$ export CUMM_CUDA_VERSION="10.2" # change 10.2 to your CUDA version
$ export SPCONV_DISABLE_JIT="1" # build spconv into a wheel instead of JIT-compiling it
$ export CUMM_CUDA_ARCH_LIST="7.2"
$ git clone https://github.com/traveller59/spconv
$ cd spconv/
$ python setup.py bdist_wheel
$ python -m pip install dist/spconv_cu102-2.2.6-cp36-cp36m-linux_aarch64.whl

This fails with:

$ pip install dist/spconv_cu102-2.2.6-cp36-cp36m-linux_aarch64.whl 
Processing ./dist/spconv_cu102-2.2.6-cp36-cp36m-linux_aarch64.whl
Requirement already satisfied: fire in /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages (from spconv-cu102==2.2.6) (0.4.0)
Requirement already satisfied: pccm>=0.4.0 in /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages (from spconv-cu102==2.2.6) (0.4.4)
Requirement already satisfied: pybind11>=2.6.0 in /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages (from spconv-cu102==2.2.6) (2.10.1)
Requirement already satisfied: ccimport>=0.4.0 in /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages (from spconv-cu102==2.2.6) (0.4.2)
Requirement already satisfied: numpy in /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages (from spconv-cu102==2.2.6) (1.19.3)
Requirement already satisfied: cumm-cu102>=0.3.7 in /home/nvidia/torch_xavier/cumm (from spconv-cu102==2.2.6) (0.3.7)
ERROR: Package 'spconv-cu102' requires a different Python: 3.6.15 not in '>=3.7'

Reference: "Package 'zipp' requires a different Python: 3.5.2 not in '>=3.6'". The conclusion is that this spconv is too new: v2.2.6 requires Python >= 3.7. Check the spconv GitHub releases for an older version.
Trying spconv v2.1.25 with cumm v0.3.7 does not work either; the two are incompatible:

$ python setup.py bdist_wheel
Traceback (most recent call last):
  File "setup.py", line 154, in <module>
    from spconv.core import SHUFFLE_SIMT_PARAMS, SHUFFLE_VOLTA_PARAMS, SHUFFLE_TURING_PARAMS
  File "/home/nvidia/torch_xavier/spconv-2.1.25/spconv/__init__.py", line 17, in <module>
    from .core import ConvAlgo, AlgoHint
  File "/home/nvidia/torch_xavier/spconv-2.1.25/spconv/core.py", line 18, in <module>
    from cumm.gemm.algospec.core import TensorOpParams
ImportError: cannot import name 'TensorOpParams'

Uninstall cumm-cu102:

$ pip list | grep cumm
cumm-cu102                    0.3.7     /home/nvidia/torch_xavier/cumm
$ pip uninstall cumm-cu102
Found existing installation: cumm-cu102 0.3.7
Uninstalling cumm-cu102-0.3.7:
  Would remove:
    /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages/cumm-cu102.egg-link
Proceed (Y/n)? y
  Successfully uninstalled cumm-cu102-0.3.7

Check pyproject.toml under spconv-2.1.25:

$ cat pyproject.toml 
[build-system]
requires = ["setuptools>=41.0", "wheel", "pccm>=0.2.21,<0.4.0", "ccimport>=0.3.0,<0.4.0", "cumm>=0.2.3,<0.3.0"]
build-backend = "setuptools.build_meta"

This shows the version constraint between spconv and cumm: spconv 2.1.25 needs cumm >= 0.2.3, < 0.3.0.
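
The same constraint can be read out of any spconv checkout before building, which avoids the mismatched-cumm import error above. A minimal sketch using only the standard library, run from the spconv source directory:

# Print the cumm requirement declared by this spconv checkout.
import re
from pathlib import Path

pyproject = Path("pyproject.toml").read_text()
match = re.search(r'"(cumm[^"]*)"', pyproject)
if match:
    print("this spconv checkout requires:", match.group(1))
else:
    print("no cumm requirement found in pyproject.toml")
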
Re-download cumm-0.2.9 and spconv-2.1.25.
Installing cumm-0.2.9 fails with:

AttributeError: 'BuildMeta' object has no attribute 'includes'

Reference: posts on fixing "AttributeError: 'module' object has no attribute 'xxx'".
Delete the extracted source trees and start over:

$ export CUMM_CUDA_VERSION="10.2"
$ export CUMM_DISABLE_JIT="1"
$ export CUMM_CUDA_ARCH_LIST="7.2"
$ unzip cumm-0.2.9.zip 
$ cd cumm-0.2.9/
$ ls
$ python setup.py bdist_wheel 
$ pip install dist/cumm_cu102-0.2.9-cp36-cp36m-linux_aarch64.whl 
$ cd ..
$ ls
$ cd spconv-2.1.25/
$ ls
$ python setup.py bdist_wheel 
$ pip install dist/spconv_cu102-2.1.25-cp36-cp36m-linux_aarch64.whl 

Verify cumm and spconv:

$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cumm
>>> print(cumm.__version__)
0.2.9
>>> import spconv
>>> print(spconv.__version__)
2.1.25
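
Importing is a weak check for a self-built spconv; a slightly stronger smoke test is to build a small sparse tensor on the GPU and densify it. A minimal sketch for the spconv 2.x API (assumes a CUDA device), not from the original post:

# Build a tiny SparseConvTensor and convert it to a dense tensor.
import torch
import spconv.pytorch as spconv

features = torch.randn(3, 4).cuda()                   # 3 active voxels, 4 channels
indices = torch.tensor([[0, 0, 0, 0],                 # (batch_idx, z, y, x), must be int32
                        [0, 1, 2, 3],
                        [0, 7, 7, 7]], dtype=torch.int32).cuda()
x = spconv.SparseConvTensor(features, indices, spatial_shape=[8, 8, 8], batch_size=1)
print("dense shape:", x.dense().shape)                # expect [1, 4, 8, 8, 8]
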
  6. Install LLVM and llvmlite
    OpenPCDet/requirements.txt needs llvmlite (pulled in by numba), and llvmlite in turn depends on LLVM.
    Check the llvmlite that was installed earlier on x86:
$ pip list | grep llvm
llvmlite                      0.38.1

The llvmlite / LLVM compatibility table says llvmlite v0.38.1 needs LLVM 11.x.x.
Download the prebuilt aarch64 LLVM package from the llvm/llvm-project releases,
extract it, and add it to the environment variables:
$ wget https://github.com/llvm/llvm-project/releases/download/llvmorg-11.0.1/clang+llvm-11.0.1-aarch64-linux-gnu.tar.xz
$ tar -xvJf clang+llvm-11.0.1-aarch64-linux-gnu.tar.xz
$ gedit ~/.bashrc
export PATH=$PATH:/home/nvidia/torch_xavier/clang+llvm-10.0.1-aarch64-linux-gnu/bin # your path to llvm
export LLVM_CONFIG=/home/nvidia/torch_xavier/clang+llvm-10.0.1-aarch64-linux-gnu/bin/llvm-config # your path to llvm-config
$ source ~/.bashrc
python -m pip install llvmlite==0.38.1 -i https://mirror.baidu.com/pypi/simple
Looking in indexes: https://mirror.baidu.com/pypi/simple
ERROR: Could not find a version that satisfies the requirement llvmlite==0.38.1 (from versions: 0.2.0, 0.2.1, 0.2.2, 0.4.0, 0.5.0, 0.6.0, 0.7.0, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0.1, 0.12.1, 0.13.0, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.17.1, 0.18.0, 0.19.0, 0.20.0, 0.21.0, 0.22.0, 0.23.0, 0.23.2, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 0.32.1, 0.33.0, 0.34.0, 0.35.0, 0.36.0)
ERROR: No matching distribution found for llvmlite==0.38.1

The error means the Baidu mirror does not carry llvmlite v0.38.1 (the newest it offers is 0.36.0).
So download LLVM 10.0.1 instead, extract it, and update the environment variables above accordingly.
After that, verify:

$ clang --version
clang version 10.0.1 (http://git.linaro.org/toolchain/jenkins-scripts.git a4a126627ddd5ee3ead2bb9dec4867ca8ad04ad8)
Target: aarch64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/nvidia/torch_xavier/clang+llvm-10.0.1-aarch64-linux-gnu/bin
$ python -m pip install llvmlite==0.36.0 -i https://mirror.baidu.com/pypi/simple

This then fails with:

...
 /usr/bin/ld: 找不到 -ltinfo
    collect2: error: ld returned 1 exit status
    Makefile.linux:20: recipe for target 'libllvmlite.so' failed
    make: *** [libllvmlite.so] Error 1
    10.0.1
    
    SVML not detected
    Traceback (most recent call last):
      File "/tmp/pip-install-4b1eeoy9/llvmlite_900652ca1e99427cbfafd11dc781f0b6/ffi/build.py", line 191, in <module>
        main()
      File "/tmp/pip-install-4b1eeoy9/llvmlite_900652ca1e99427cbfafd11dc781f0b6/ffi/build.py", line 181, in main
        main_posix('linux', '.so')
      File "/tmp/pip-install-4b1eeoy9/llvmlite_900652ca1e99427cbfafd11dc781f0b6/ffi/build.py", line 173, in main_posix
        subprocess.check_call(['make', '-f', makefile])
      File "/home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/subprocess.py", line 311, in check_call
        raise CalledProcessError(retcode, cmd)
    subprocess.CalledProcessError: Command '['make', '-f', 'Makefile.linux']' returned non-zero exit status 2.
    error: command '/home/nvidia/archiconda3/envs/OpenPCDet_torch18/bin/python' failed with exit status 1
    ----------------------------------------
ERROR: Command errored out with exit status 1: /home/nvidia/archiconda3/envs/OpenPCDet_torch18/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-4b1eeoy9/llvmlite_900652ca1e99427cbfafd11dc781f0b6/setup.py'"'"'; __file__='"'"'/tmp/pip-install-4b1eeoy9/llvmlite_900652ca1e99427cbfafd11dc781f0b6/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-2xwwjs7c/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/archiconda3/envs/OpenPCDet_torch18/include/python3.6m/llvmlite Check the logs for full command output.

Solution:

$ sudo apt-get install libedit-dev
$ sudo ldconfig
$ python -m pip install llvmlite==0.36 -i https://mirror.baidu.com/pypi/simple
Looking in indexes: https://mirror.baidu.com/pypi/simple
Collecting llvmlite==0.36
  Using cached https://mirror.baidu.com/pypi/packages/19/66/6b2c49c7c68da48d17059882fdb9ad9ac9e5ac3f22b00874d7996e3c44a8/llvmlite-0.36.0.tar.gz (126 kB)
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: llvmlite
  Building wheel for llvmlite (setup.py) ... done
  Created wheel for llvmlite: filename=llvmlite-0.36.0-cp36-cp36m-linux_aarch64.whl size=19529508 sha256=5bf627b578319bb3b2288d854364f3975ab6f06483139ba1bc7cad959673cd2c
  Stored in directory: /home/nvidia/.cache/pip/wheels/bb/ba/68/ede3b2f96d7bfbb3eb997d475693316961a54d768cace60569
Successfully built llvmlite
Installing collected packages: llvmlite
Successfully installed llvmlite-0.36.0
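
llvmlite builds against whatever LLVM_CONFIG points to, so it is worth confirming which toolchain and version that actually is. A minimal sketch, not from the original post:

# Print the llvm-config binary the llvmlite build picks up and its LLVM version.
import os
import subprocess

llvm_config = os.environ.get("LLVM_CONFIG", "llvm-config")
version = subprocess.check_output([llvm_config, "--version"], universal_newlines=True).strip()
print(llvm_config, "-> LLVM", version)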

Verify:

$ python
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import llvmlite
>>> print(llvmlite.__version__)
0.36.0

Or:

$ pip show llvmlite
Name: llvmlite
Version: 0.36.0
Summary: lightweight wrapper around basic LLVM functionality
Home-page: http://llvmlite.pydata.org
Author: Continuum Analytics, Inc.
Author-email: numba-users@continuum.io
License: BSD
Location: /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages
Requires: 
Required-by: 
  7. Install OpenPCDet
$ git clone https://github.com/open-mmlab/OpenPCDet.git
$ cd OpenPCDet
$ gedit requirements.txt

Remove the packages that were already installed above from requirements.txt; what is left:

numba
tensorboardX
easydict
pyyaml
scikit-image
tqdm
SharedArray
json
#cv2
$ python -m pip install -r requirements.txt -i https://mirror.baidu.com/pypi/simple
$ pip install opencv-python==3.4.18.65 -i https://mirror.baidu.com/pypi/simple

scikit-image tends to fail during installation; just retry it a few times.
The last entry, json, fails with:

ERROR: Could not find a version that satisfies the requirement json (from versions: none)
ERROR: No matching distribution found for json

Solution: json is part of the Python standard library, so there is nothing to install from PyPI and the entry can simply be dropped (jsonpath was installed here instead); then build OpenPCDet:

$ python -m pip install jsonpath
$ python setup.py develop

Verify pcdet:

$ pip show pcdet
Name: pcdet
Version: 0.6.0+707a861
Summary: OpenPCDet is a general codebase for 3D object detection from point cloud
Home-page: UNKNOWN
Author: Shaoshuai Shi
Author-email: shaoshuaics@gmail.com
License: Apache License 2.0
Location: /home/nvidia/torch_xavier/OpenPCDet
Requires: easydict, llvmlite, numba, numpy, pyyaml, scikit-image, SharedArray, tensorboardX, tqdm
Required-by: 

or

$ python 
Python 3.6.15 | packaged by conda-forge | (default, Dec  3 2021, 19:12:04) 
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pcdet
>>> print(pcdet.__version__)
0.6.0+707a861
  8. Deploy CUDA-PointPillars
    NVIDIA-AI-IOT/CUDA-PointPillars
$ git clone https://github.com/NVIDIA-AI-IOT/CUDA-PointPillars.git && cd CUDA-PointPillars

Export Pointpillar Onnx Model: converting the .pth checkpoint to .onnx requires the onnx Python packages.

$ python -m pip install pyyaml scikit-image onnx onnx-simplifier

This throws a pile of errors. Try installing onnx and onnx-simplifier separately (pyyaml and scikit-image are already installed above).
Without pinning a version, pip picks onnx v1.12.0, which also fails to build:

ERROR: Command errored out with exit status 1: /home/nvidia/archiconda3/envs/OpenPCDet_torch18/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-6pzfleu9/onnx_0c5b02c0793d47dbbc479912beb724fa/setup.py'"'"'; __file__='"'"'/tmp/pip-install-6pzfleu9/onnx_0c5b02c0793d47dbbc479912beb724fa/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-o4u93_p5/install-record.txt --single-version-externally-managed --compile --install-headers /home/nvidia/archiconda3/envs/OpenPCDet_torch18/include/python3.6m/onnx Check the logs for full command output.

Solution: downgrade the version.

$ python -m pip install onnx==1.11  -i https://mirror.baidu.com/pypi/simple

Then install onnx-simplifier:

$ python -m pip install onnx-simplifier==0.3

This fails because the installed CMake is too old:

CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
      CMake 3.22 or higher is required.  You are running version 3.20.1

Reference: CMake version too low, a newer version is required.

$ wget https://github.com/Kitware/CMake/releases/download/v3.22.6/cmake-3.22.6.zip
$ unzip cmake-3.22.6.zip
$ cd CMake-3.22.6/
$ ls
$  ./configure 
$  make -j6
$ sudo make install
$ cmake --version
cmake version 3.22.6
CMake suite maintained and supported by Kitware (kitware.com/cmake).

Reinstall onnx-simplifier:

$ python -m pip install onnx-simplifier==0.2
$ pip install onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com

Also see: handling the "WARNING: Ignoring invalid distribution -xpython" error — clean up the leftover ~nnx directories:

 $ cd /home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages
 $ rm -rf '~nnx-1.11.0.dist-info'
 $ rm -rf '~nnx'

Reference: flashing the Jetson Xavier series (Jetson Nano, Jetson Xavier NX, Jetson AGX Xavier) and accelerating inference with ONNX, Jetson Zoo.
Install onnxruntime. Running the exporter then fails with:

Traceback (most recent call last):
  File "exporter.py", line 23, in <module>
    from onnxsim import simplify
  File "/home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages/onnxsim/__init__.py", line 1, in <module>
    from onnxsim.onnx_simplifier import simplify
  File "/home/nvidia/archiconda3/envs/OpenPCDet_torch18/lib/python3.6/site-packages/onnxsim/onnx_simplifier.py", line 7, in <module>
    import onnx.optimizer           # type: ignore
ModuleNotFoundError: No module named 'onnx.optimizer'

Reference: ModuleNotFoundError: No module named 'onnx.optimizer'. onnx.optimizer was removed from onnx after v1.8 (it now lives in the separate onnxoptimizer package), so the old onnx-simplifier only works with onnx 1.8.x.
Reinstall onnx and onnx-simplifier accordingly. The final working combination:

$ pip list | grep onnx
onnx                          1.8.1
onnx-graphsurgeon             0.3.25
onnx-simplifier               0.2.18
onnxruntime                   1.10.0
onnxruntime-gpu               1.8.0

Convert:

python exporter.py --ckpt ./checkpoint_epoch_190.pth 

A lot of errors came up during the conversion, all down to the onnx-simplifier version: onnx-simplifier==0.2 above actually installs v0.2.0. After trying every onnx-simplifier release, only onnx-simplifier v0.2.18 converted the model successfully.
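
Before handing the exported model to TensorRT, it can be sanity-checked with the onnx Python API. A minimal sketch; the filename pointpillar.onnx is an assumption, use whatever the exporter actually wrote:

# Sanity-check the exported ONNX model before building a TensorRT engine.
import onnx

model = onnx.load("pointpillar.onnx")   # hypothetical filename
onnx.checker.check_model(model)         # raises if the graph is malformed
print("opset:", [op.version for op in model.opset_import])
print("inputs:", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])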

  9. Deployment (build and run)

cmake fails with:

FindCUDA.cmake:1799 (add_custom_command): OUTPUT containing a "#" is not allowed

The build path contained a '#'; either move the project to a different path or rename the offending directory.

make fails with:

error: ‘class Params’ has no member named ‘anchor_bottom_heights’; did you mean ‘anchors_bottom_height’?
   checkCudaErrors(cudaMemcpyAsync(anchor_bottom_heights_, params_.anchor_bottom_heights,

The member name used in the code does not match the one defined in the Params struct; rename them so they are consistent.

trt_infer: INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterBEV version 1
ERROR: builtin_op_importers.cpp:3661 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
: failed to parse onnx model file, please check the onnx version and trt support op!

Reference: TensorRT - fixing INVALID_ARGUMENT: getPluginCreator could not find plugin ScatterND version 1.
The model had previously been handled with TensorRT v7.2.2.3, while the Xavier ships v7.1.3.0; but none of TensorRT's built-in plugins is a ScatterBEV plugin anyway.
It turned out that an older version of the CUDA-PointPillars code carries it:

.
├── plugin
│   ├── ScatterBEV.cpp
│   └── ScatterBEV_kernels.cu
├── pointpillar.cpp
├── postprocess.cpp
├── postprocess_kernels.cu
├── preprocess.cpp
└── preprocess_kernels.cu

CUDA-PointPillars implements ScatterBEV as its own custom plugin; with that older version of the code the pipeline finally ran end to end.
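
As an aside, whether a plugin such as ScatterBEV is visible to TensorRT can be checked from Python by listing the plugin registry. A minimal sketch, assuming the TensorRT Python bindings are installed; it only shows plugins registered in the current process, so a plugin compiled into the C++ demo will not appear unless its library has been loaded first:

# List the plugin creators currently registered with TensorRT.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, "")      # register the built-in plugins
names = sorted(c.name for c in trt.get_plugin_registry().plugin_creator_list)
print("\n".join(names))
print("ScatterBEV registered:", "ScatterBEV" in names)

With the old-version plugin built in, the demo runs: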

GPU has cuda devices: 1
----device id: 0 info----
  GPU : Xavier 
  Capbility: 7.2
  Global memory: 31919MB
  Const memory: 64KB
  SM in a block: 48KB
  warp size: 32
  threads in a block: 1024
  block dim: (1024,1024,64)
  grid dim: (2147483647,65535,65535)

load TRT cache.
<<<<<<<<<<<
load file: ../../data/000000.bin
find points num: 20285
points_size : 20285
num_obj: 6
TIME: pointpillar: 118.546 ms.
Bndbox objs: 3
Saved prediction in: ../../eval/000000.txt
...
