SLAM Hands-On Project (1) — ORB-SLAM2 Dense Map Reconstruction


Contents

1 Overall Approach

2 Implementation

3 Running and Results

(1) Downloading the TUM Dataset

(2) associate.py for Matching RGB and Depth Images

(3) Running the Dataset

4 CMakeLists.txt Modifications

5 Complete PointCloudMapping.h and PointCloudMapping.cc

6 Error Analysis

7 Further Thoughts


This article draws on parts of several open-source implementations and troubleshooting posts.

1 Overall Approach

  • Build a dense point cloud from the RGB and depth images, generating points only where the depth value is valid.
  • For each keyframe from the tracking thread, transform that keyframe's point cloud into the world coordinate frame and accumulate it into a global point-cloud map.
  • In the loop-closing thread, adjust the global point-cloud map with the updated poses after global BA.
  • Display the global point-cloud map in a visualization thread.
  • Shut down and save the global point-cloud map from the main thread.

2 Implementation

(1) Create the constructor in PointCloudMapping.cc. The constructor allocates mpGlobalCloud, which stores the dense point cloud, sets the filter parameters, and starts the visualization thread mptViewerThread.

    PointCloudMapping::PointCloudMapping()
    {
        std::cout << "Point cloud mapping has been constructed. " << std::endl;
        mpGlobalCloud = boost::make_shared< PointCloud >( );

        // VoxelGrid downsampling filter (alternative, disabled here)
        //voxel.setLeafSize(resolution, resolution, resolution);

        // StatisticalOutlierRemoval filter (note: not RadiusOutlierRemoval)
        voxel.setMeanK (60);               // number of neighbors considered for each query point
        voxel.setStddevMulThresh (0.6);    // standard-deviation multiplier threshold

        mptViewerThread = make_shared<thread>(bind(&PointCloudMapping::Viewer, this));
    }

Declare the corresponding members in PointCloudMapping.h:

typedef pcl::PointXYZRGBA PointT;
typedef pcl::PointCloud<PointT> PointCloud;

//constructor
PointCloudMapping();
//StatisticalOutlierRemoval filter object (kept under the member name voxel)
pcl::StatisticalOutlierRemoval<PointT> voxel;
//global dense point cloud
PointCloud::Ptr mpGlobalCloud;
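In practice a VoxelGrid downsample is often chained with the statistical outlier removal instead of choosing only one. The sketch below illustrates that combination; FilterCloud and the extra member voxel_grid are hypothetical additions, not part of the code above.

// Hypothetical helper; assumes an extra member `pcl::VoxelGrid<PointT> voxel_grid;` in PointCloudMapping.h.
PointCloud::Ptr PointCloudMapping::FilterCloud(const PointCloud::Ptr& input)
{
    // Downsample to one point per voxel of size `resolution` (0.04 m by default).
    PointCloud::Ptr downsampled(new PointCloud);
    voxel_grid.setLeafSize(resolution, resolution, resolution);
    voxel_grid.setInputCloud(input);
    voxel_grid.filter(*downsampled);

    // Then drop statistical outliers with the member configured in the constructor.
    PointCloud::Ptr cleaned(new PointCloud);
    voxel.setInputCloud(downsampled);
    voxel.filter(*cleaned);
    return cleaned;
}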

(2) Create the InsertKeyFrame function in PointCloudMapping.cc


    void PointCloudMapping::InsertKeyFrame(KeyFrame* kf, cv::Mat &color_img, cv::Mat &depth_img)
    {
        cout << "recieve a keyframe, id = " << kf->mnId << endl;
        unique_lock<mutex> lck(mmKeyFrameMutex);
        mvKeyFrames.push_back(kf);
        GeneratePointCloud(kf, color_img, depth_img);
    }

Declare the corresponding members in PointCloudMapping.h:

//mutex protecting the keyframe list
mutex mmKeyFrameMutex;
//keyframes received from the tracking thread
vector<KeyFrame*> mvKeyFrames;

Question: where should mpPointCloudMapping->InsertKeyFrame be called?

Call it in Tracking::CreateNewKeyFrame:

void Tracking::CreateNewKeyFrame()
{
    ...
    mpLocalMapper->InsertKeyFrame(pKF);

    // Insert the keyframe into the dense point-cloud map
    mpPointCloudMapping->InsertKeyFrame(pKF, mImRGB, mImDepth); 
    
    mpLocalMapper->SetNotStop(false);
    mnLastKeyFrameId = mCurrentFrame.mnId;
    mpLastKeyFrame = pKF;
}

Question: how is the PointCloudMapping object passed into Tracking and LoopClosing?

Pass it in through the Tracking and LoopClosing constructors. The modified Tracking constructor:

Tracking::Tracking(System *pSys, ORBVocabulary* pVoc, FrameDrawer *pFrameDrawer, MapDrawer *pMapDrawer, Map *pMap,shared_ptr<PointCloudMapping> pPointCloud,
    KeyFrameDatabase* pKFDB, const string &strSettingPath, const int sensor): mState(NO_IMAGES_YET), mSensor(sensor), mbOnlyTracking(false), mbVO(false), mpORBVocabulary(pVoc),mpPointCloudMapping(pPointCloud),
    mpKeyFrameDB(pKFDB), mpInitializer(static_cast<Initializer*>(NULL)), mpSystem(pSys), mpViewer(NULL), mpFrameDrawer(pFrameDrawer), mpMapDrawer(pMapDrawer), mpMap(pMap), mnLastRelocFrameId(0)
{}
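The LoopClosing constructor needs the same extra parameter. A minimal sketch, with the parameter placed before bFixScale (the exact position and the rest of the initializer list are up to you):

LoopClosing::LoopClosing(Map* pMap, KeyFrameDatabase* pDB, ORBVocabulary* pVoc,
                         shared_ptr<PointCloudMapping> pPointCloud, const bool bFixScale):
    mpMap(pMap), mpKeyFrameDB(pDB), mpORBVocabulary(pVoc),
    mpPointCloudMapping(pPointCloud), mbFixScale(bFixScale)
    // ... keep the remaining members of the original initializer list unchanged
{
}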

Declare the member in both Tracking.h and LoopClosing.h:

shared_ptr<PointCloudMapping> mpPointCloudMapping;
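The object itself is created in System.cc and handed to both threads. A sketch of the relevant lines in the System constructor (the argument order must match the modified constructors above, and System.h needs a matching member shared_ptr<PointCloudMapping> mpPointCloudMapping;):

// Create the dense mapping object before Tracking and LoopClosing are constructed.
mpPointCloudMapping = make_shared<PointCloudMapping>();

mpTracker = new Tracking(this, mpVocabulary, mpFrameDrawer, mpMapDrawer,
                         mpMap, mpPointCloudMapping, mpKeyFrameDatabase,
                         strSettingsFile, mSensor);

mpLoopCloser = new LoopClosing(mpMap, mpKeyFrameDatabase, mpVocabulary,
                               mpPointCloudMapping, mSensor != MONOCULAR);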

Question: where do the RGB and depth images come from?

The tracking thread already receives imRGB and imD, so caching them there is the most convenient option:

cv::Mat Tracking::GrabImageRGBD(const cv::Mat &imRGB,const cv::Mat &imD, const double &timestamp)
{
    mImGray = imRGB;
    mImRGB = imRGB; 
    mImDepth = imD;
    ...
}
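mImRGB and mImDepth are new member variables (mImGray already exists in stock ORB-SLAM2); a minimal declaration in Tracking.h might look like:

// Cached copies of the current RGB and depth images, handed to the
// dense point-cloud mapping thread when a keyframe is created.
cv::Mat mImRGB;
cv::Mat mImDepth;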

(3) Create the GeneratePointCloud function in PointCloudMapping.cc

    void PointCloudMapping::GeneratePointCloud(KeyFrame* kf, cv::Mat &color_img, cv::Mat &depth_img)
    {
        // cv::imshow("color img", color_img);
        // cv::imshow("depth img", depth_img);
        PointCloud::Ptr tmp (new PointCloud());
        for ( int m=0; m<depth_img.rows; m+=3 )
        {
            for ( int n=0; n<depth_img.cols; n+=3 )
            {
                float d = depth_img.ptr<float>(m)[n];
                if (d < 0.01 || d>5)
                    continue;
                PointT p;
                p.z = d;
                p.x = ( n - kf->cx) * p.z / kf->fx;
                p.y = ( m - kf->cy) * p.z / kf->fy;
                
                p.b = color_img.ptr<uchar>(m)[n*3];
                p.g = color_img.ptr<uchar>(m)[n*3+1];
                p.r = color_img.ptr<uchar>(m)[n*3+2];

                // cout << p.x << " " << p.y << " " << p.z << endl;
                    
                tmp->points.push_back(p);
            }
        }
        
        cout << "The keyframe has point clouds: " << tmp->points.size() << endl;
        kf->mptrPointCloud = tmp;
    }
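The loop above is the standard pinhole back-projection: a pixel (u, v) = (n, m) with depth Z = d is lifted to X = (u - cx) * Z / fx and Y = (v - cy) * Z / fy, with the colour read in OpenCV's BGR channel order; the loop steps by 3 pixels in each direction to keep the cloud small. Note that the depth is read as float and compared against metres (0.01–5 m), so the depth image must already be converted to metric float depth. Stock ORB-SLAM2 does this in Tracking::GrabImageRGBD using the DepthMapFactor from the settings file (5000 for the TUM sequences); once the mImDepth member is introduced, the conversion has to be applied to it as well, roughly:

// Adapted from the conversion already present in Tracking::GrabImageRGBD:
// turn the raw 16-bit TUM depth into metric float depth before it is used anywhere.
if ((fabs(mDepthMapFactor - 1.0f) > 1e-5) || mImDepth.type() != CV_32F)
    mImDepth.convertTo(mImDepth, CV_32F, mDepthMapFactor);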

(4) Create the UpdateCloud function in PointCloudMapping.cc

    void PointCloudMapping::UpdateCloud()
    {
        unique_lock<mutex> lck(mmCloudeUpdateMutex);
        mbLoopBusy = true;
        cout << endl << endl << "start updating the point cloud map after loop closure." << endl << endl;
        PointCloud::Ptr tmp(new PointCloud);

        for(size_t i = 0; i < mvKeyFrames.size(); i++)
        {
            if(!mvKeyFrames[i]->isBad())
            {
                PointCloud::Ptr cloud(new PointCloud);
                pcl::transformPointCloud( *(mvKeyFrames[i]->mptrPointCloud), *cloud, Converter::toMatrix4d(mvKeyFrames[i]->GetPoseInverse()));
                *tmp += *cloud;
            }
        }

        cout << "finished rebuilding the map with the optimized poses" << endl;
        // Optionally filter tmp before swapping it in:
        //voxel.setInputCloud(tmp);
        //voxel.filter(*mpGlobalCloud);
        // Replace the global cloud with the rebuilt one; without this the rebuilt cloud is discarded.
        mpGlobalCloud->swap(*tmp);
        mbLoopBusy = false;
    }

Declare the corresponding members in PointCloudMapping.h:

//after global BA, rebuild the map from the updated keyframe poses
void UpdateCloud();   
//mutexes and flag used above
mutex mmCloudeUpdateMutex;
mutex mmShutDownMutex;
bool mbLoopBusy = false;

Question: where should UpdateCloud() be called?

After global BA, since that is when the keyframe poses have been optimized:

void LoopClosing::RunGlobalBundleAdjustment(unsigned long nLoopKF)
{
    ...
    mpMap->InformNewBigChange();
    mpLocalMapper->Release();
    // Update the dense point cloud after global BA
    mpPointCloudMapping->UpdateCloud();
}

(5) Create the Viewer function in PointCloudMapping.cc

   void PointCloudMapping::Viewer()
    {
        pcl::visualization::CloudViewer viewer("Dense map viewer");
        while(1)
        {
            size_t N = 0;
            {
                unique_lock<mutex> lck(mmKeyFrameMutex);
                if(mbLoopBusy)
                {
                    continue;
                }
                {
                    unique_lock<mutex> lck_shut_down(mmShutDownMutex);
                    if(mbShutDownFlag)
                    {
                        cout << "Viewer has break " << endl;
                        break;
                    }
                }

                N = mvKeyFrames.size();
                if(N == mnLastKeyFrameId)
                {
                    continue;
                }
                else
                {
                    unique_lock<mutex> lck_(mmCloudeUpdateMutex);
                    for(size_t i = mnLastKeyFrameId; i < N; i++)
                    {
                        PointCloud::Ptr p (new PointCloud);
                        pcl::transformPointCloud( *(mvKeyFrames[i]->mptrPointCloud), *p, Converter::toMatrix4d(mvKeyFrames[i]->GetPoseInverse()));
                        //cout<<"处理好第i个点云"<<i<<endl;
                        *mpGlobalCloud += *p;
                    }
                    PointCloud::Ptr tmp(new PointCloud);
                    voxel.setInputCloud(mpGlobalCloud);
                    voxel.filter(*tmp);
                    mpGlobalCloud->swap(*tmp);

                    mnLastKeyFrameId = N;
                    cout << "Total has point clouds: " << mpGlobalCloud->points.size() << endl;
                }
            }
            viewer.showCloud(mpGlobalCloud);
        }
    }
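One practical note (an addition, not in the original code): when there is nothing new to draw, the continue branches make this loop spin at full speed. A short sleep at the top of the while(1) body keeps CPU usage down without affecting the map:

// Requires <chrono>; place as the first statement inside while(1).
std::this_thread::sleep_for(std::chrono::milliseconds(50));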

Declare the corresponding members in PointCloudMapping.h:

//visualization thread body
void Viewer();

//viewer state (mmShutDownMutex and mmCloudeUpdateMutex are the mutexes already declared above)
bool mbShutDownFlag = false;
size_t mnLastKeyFrameId = 0;

(6) Create the SaveCloud function in PointCloudMapping.cc

    void PointCloudMapping::SaveCloud()
    {
        pcl::io::savePCDFile("result.pcd",*mpGlobalCloud);
        cout << "global cloud save finished !" << endl;
    }

Question: where should SaveCloud be called?

rgbd_tum.cc contains the main function of the whole system; save the cloud at the end of main, just before the system shuts down:

int main(int argc, char **argv)
{
    ...
    // Save the dense point cloud
    SLAM.save();

    // Stop all threads
    SLAM.Shutdown(); 

    return 0;
}
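SLAM.save() is not part of stock ORB-SLAM2; it is a small wrapper added to System that simply forwards to the mapping thread. A minimal sketch (declare void save(); in System.h):

void System::save()
{
    mpPointCloudMapping->SaveCloud();
}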

(7) Create the ShutDown function in PointCloudMapping.cc


    void PointCloudMapping::ShutDown()
    {
        {
            unique_lock<mutex> lck(mmShutDownMutex);
            mbShutDownFlag = true;
        }
        mptViewerThread->join();
        cout << "Point cloud mapping has shut down! " << endl;
    }

Question: where should ShutDown be called? In System::Shutdown:

void System::Shutdown()
{
    mpLocalMapper->RequestFinish();
    mpLoopCloser->RequestFinish();
    mpPointCloudMapping->ShutDown();
    ...
}

3 Running and Results

(1) Downloading the TUM Dataset

TUM dataset download page: Computer Vision Group - Dataset Download

(2) associate.py for Matching RGB and Depth Images

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements: 
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file. 
    
    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitary data (e.g., a 3D position and 3D orientation) associated to this timestamp. 
    
    Input:
    filename -- File name
    
    Output:
    dict -- dictionary of (stamp,data) tuples
    
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(","," ").replace("\t"," ").split("\n") 
    list = [[v.strip() for v in line.split(" ") if v.strip()!=""] for line in lines if len(line)>0 and line[0]!="#"]
    list = [(float(l[0]),l[1:]) for l in list if len(l)>1]
    return dict(list)

def associate(first_list, second_list,offset,max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim 
    to find the closest match for every input tuple.
    
    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    
    """
    # Python 2:
    first_keys = first_list.keys()
    second_keys = second_list.keys()
    
    # Python 3 (use list() so that .remove() works below):
    # first_keys = list(first_list.keys())
    # second_keys = list(second_list.keys())
    potential_matches = [(abs(a - (b + offset)), a, b) 
                         for a in first_keys 
                         for b in second_keys 
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    
    matches.sort()
    return matches

if __name__ == '__main__':
    
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them   
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)',default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)',default=0.02)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list,float(args.offset),float(args.max_difference))    

    if args.first_only:
        for a,b in matches:
            print("%f %s"%(a," ".join(first_list[a])))
    else:
        for a,b in matches:
            print("%f %s %f %s"%(a," ".join(first_list[a]),b-float(args.offset)," ".join(second_list[b])))
            
        

Because dict.keys() returns a view in Python 3 (which does not support remove()), the following two lines in the associate() function of associate.py

first_keys = first_list.keys()
second_keys = second_list.keys()

must be changed to

first_keys = list(first_list.keys())
second_keys = list(second_list.keys())
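With that fix in place, the association file used in the next step can be generated as in the standard TUM/ORB-SLAM2 instructions (the paths assume the sequence folder sits in the current directory):

python3 associate.py rgbd_dataset_freiburg3_long_office_household/rgb.txt rgbd_dataset_freiburg3_long_office_household/depth.txt > rgbd_dataset_freiburg3_long_office_household/associations.txt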

(3) Running the Dataset

Replace rgbd_dataset_freiburg3_long_office_household with the name of your own sequence. The sequence folder is assumed to sit directly under the ORB-SLAM2 root directory; if it lives elsewhere, adjust the paths accordingly. Also make sure the settings file matches the sequence (TUM3.yaml holds the freiburg3 intrinsics, TUM1.yaml the freiburg1 ones).

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt ./Examples/RGB-D/TUM3.yaml rgbd_dataset_freiburg3_long_office_household rgbd_dataset_freiburg3_long_office_household/associations.txt

4 CMakeLists.txt Modifications

cmake_minimum_required(VERSION 2.8)	#minimum CMake version
project(ORB_SLAM2)			#project name
set(CMAKE_CXX_STANDARD 14)		#C++ standard; PCL 1.10 needs C++14, otherwise the build may fail

#build type (Debug or Release)
IF(NOT CMAKE_BUILD_TYPE)
  SET(CMAKE_BUILD_TYPE Release)
ENDIF()

#print the build type, e.g. "Build type: Release"
MESSAGE("Build type: " ${CMAKE_BUILD_TYPE})	

#CMAKE_C_FLAGS / CMAKE_CXX_FLAGS set compiler options, e.g. -Wall (enable warnings); -march=native lets GCC target the local CPU's instruction set
#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 -march=native")

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 ")


#include(): load and run CMake code from a module; CheckCXXCompilerFlag checks whether the compiler supports a given flag
include(CheckCXXCompilerFlag)		

# Check C++11 or C++0x support
#the block below automatically checks whether the compiler supports the C++11 standard
CHECK_CXX_COMPILER_FLAG("-std=c++11" COMPILER_SUPPORTS_CXX11)
CHECK_CXX_COMPILER_FLAG("-std=c++0x" COMPILER_SUPPORTS_CXX0X)
if(COMPILER_SUPPORTS_CXX11)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
   add_definitions(-DCOMPILEDWITHC11)
   message(STATUS "Using flag -std=c++11.")
elseif(COMPILER_SUPPORTS_CXX0X)
   set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x")
   add_definitions(-DCOMPILEDWITHC0X)
   message(STATUS "Using flag -std=c++0x.")
else()
   message(FATAL_ERROR "The compiler ${CMAKE_CXX_COMPILER} has no C++11 support. Please use a different C++ compiler.")
endif()


#list(APPEND <list> <element> [<element> ...]) appends new elements to a list
LIST(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules)

#locate OpenCV
find_package(OpenCV 3.0 QUIET)
if(NOT OpenCV_FOUND)
   find_package(OpenCV 2.4.3 QUIET)
   if(NOT OpenCV_FOUND)
      message(FATAL_ERROR "OpenCV > 2.4.3 not found.")
   endif()
endif()


#locate the remaining dependencies (Eigen3, Pangolin, PCL)
find_package(Eigen3 REQUIRED NO_MODULE)
find_package(Pangolin REQUIRED)
find_package(PCL 1.10 REQUIRED)

include_directories(
${PROJECT_SOURCE_DIR}
${PROJECT_SOURCE_DIR}/include
${EIGEN3_INCLUDE_DIR}
${Pangolin_INCLUDE_DIRS}
${PCL_INCLUDE_DIRS}
)

#place all LIBRARY targets in the project's lib directory
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/lib)

#build the shared library (PointCloudMapping.cc is added to the source list)
add_library(${PROJECT_NAME} SHARED
src/System.cc
src/Tracking.cc
src/LocalMapping.cc
src/LoopClosing.cc
src/ORBextractor.cc
src/ORBmatcher.cc
src/FrameDrawer.cc
src/Converter.cc
src/MapPoint.cc
src/KeyFrame.cc
src/Map.cc
src/MapDrawer.cc
src/Optimizer.cc
src/PnPsolver.cc
src/Frame.cc
src/KeyFrameDatabase.cc
src/Sim3Solver.cc
src/Initializer.cc
src/Viewer.cc
src/PointCloudMapping.cc
)


#link the library against its dependencies (PCL_LIBRARIES is added here)
target_link_libraries(${PROJECT_NAME}
${OpenCV_LIBS}
${EIGEN3_LIBS}
${Pangolin_LIBRARIES}
${PCL_LIBRARIES} 
${PROJECT_SOURCE_DIR}/Thirdparty/DBoW2/lib/libDBoW2.so
${PROJECT_SOURCE_DIR}/Thirdparty/g2o/lib/libg2o.so
)

# Build examples

#CMAKE_RUNTIME_OUTPUT_DIRECTORY: output directory for RUNTIME (executable) targets
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/RGB-D)

#add an executable built from the given source files
add_executable(rgbd_tum
Examples/RGB-D/rgbd_tum.cc)
target_link_libraries(rgbd_tum ${PROJECT_NAME})

set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/Stereo)

add_executable(stereo_kitti
Examples/Stereo/stereo_kitti.cc)
target_link_libraries(stereo_kitti ${PROJECT_NAME})

add_executable(stereo_euroc
Examples/Stereo/stereo_euroc.cc)
target_link_libraries(stereo_euroc ${PROJECT_NAME})


set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${PROJECT_SOURCE_DIR}/Examples/Monocular)

add_executable(mono_tum
Examples/Monocular/mono_tum.cc)
target_link_libraries(mono_tum ${PROJECT_NAME})

add_executable(mono_kitti
Examples/Monocular/mono_kitti.cc)
target_link_libraries(mono_kitti ${PROJECT_NAME})

add_executable(mono_euroc
Examples/Monocular/mono_euroc.cc)
target_link_libraries(mono_euroc ${PROJECT_NAME})

5 Complete PointCloudMapping.h and PointCloudMapping.cc

#ifndef POINTCLOUDMAPPING_H
#define POINTCLOUDMAPPING_H

#include "System.h"
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/common/transforms.h>
#include <pcl/io/pcd_io.h>
#include <condition_variable>

namespace ORB_SLAM2
{
class PointCloudMapping
{
public:
    //constructor
    PointCloudMapping();
    //receive a keyframe from the tracking thread and start dense mapping for it
    void InsertKeyFrame(KeyFrame* kf, cv::Mat& color_img, cv::Mat& depth_img);
    //build the keyframe's dense point cloud in the camera frame from its RGB and depth images
    void GeneratePointCloud(KeyFrame* kf, cv::Mat& color_img, cv::Mat& depth_img);
    //after global BA, rebuild the map from the updated keyframe poses
    void UpdateCloud();
    //visualization thread body
    void Viewer();
    void SaveCloud();
    void ShutDown();

    typedef pcl::PointXYZRGBA PointT;
    typedef pcl::PointCloud<PointT> PointCloud;

    PointCloud::Ptr mpGlobalCloud;
    shared_ptr<thread> mptViewerThread;
    vector<KeyFrame*> mvKeyFrames;
    size_t mnLastKeyFrameId = 0;
    mutex mmKeyFrameMutex;
    mutex mmCloudeUpdateMutex;
    mutex mmShutDownMutex;

    bool mbLoopBusy = false;
    bool mbShutDownFlag = false;

    double resolution = 0.04;
    //pcl::VoxelGrid<PointT> voxel;
    pcl::StatisticalOutlierRemoval<PointT> voxel;
};

} 

#endif // POINTCLOUDMAPPING_H

#include "PointCloudMapping.h"
#include "Converter.h"


namespace ORB_SLAM2
{
    PointCloudMapping::PointCloudMapping()
    {
        std::cout << "Point cloud mapping has been constructed. " << std::endl;
        mpGlobalCloud = boost::make_shared< PointCloud >( );

        // VoxelGrid downsampling filter (disabled)
        //voxel.setLeafSize(resolution, resolution, resolution);

        // StatisticalOutlierRemoval filter (disabled here; enable and tune as in Section 2)
        //voxel.setMeanK (60);               // number of neighbors considered for each query point
        //voxel.setStddevMulThresh (0.6);    

        mptViewerThread = make_shared<thread>(bind(&PointCloudMapping::Viewer, this));
    }

    void PointCloudMapping::InsertKeyFrame(KeyFrame* kf, cv::Mat &color_img, cv::Mat &depth_img)
    {
        cout << "recieve a keyframe, id = " << kf->mnId << endl;
        unique_lock<mutex> lck(mmKeyFrameMutex);
        mvKeyFrames.push_back(kf);
        GeneratePointCloud(kf, color_img, depth_img);

    }

    void PointCloudMapping::GeneratePointCloud(KeyFrame* kf, cv::Mat &color_img, cv::Mat &depth_img)
    {

        // cv::imshow("color img", color_img);
        // cv::imshow("depth img", depth_img);
        PointCloud::Ptr tmp (new PointCloud());
        for ( int m=0; m<depth_img.rows; m+=3 )
        {
            for ( int n=0; n<depth_img.cols; n+=3 )
            {
                float d = depth_img.ptr<float>(m)[n];
                if (d < 0.01 || d>5)
                    continue;
                PointT p;
                p.z = d;
                p.x = ( n - kf->cx) * p.z / kf->fx;
                p.y = ( m - kf->cy) * p.z / kf->fy;
                
                p.b = color_img.ptr<uchar>(m)[n*3];
                p.g = color_img.ptr<uchar>(m)[n*3+1];
                p.r = color_img.ptr<uchar>(m)[n*3+2];

                // cout << p.x << " " << p.y << " " << p.z << endl;
                    
                tmp->points.push_back(p);
            }
        }
        
        cout << "The keyframe has point clouds: " << tmp->points.size() << endl;
        kf->mptrPointCloud = tmp;
    }

    void PointCloudMapping::UpdateCloud()
    {
        unique_lock<mutex> lck(mmCloudeUpdateMutex);
        mbLoopBusy = true;
        cout << endl << endl << "start updating the point cloud map after loop closure." << endl << endl;
        PointCloud::Ptr tmp(new PointCloud);

        for(size_t i = 0; i < mvKeyFrames.size(); i++)
        {
            if(!mvKeyFrames[i]->isBad())
            {
                PointCloud::Ptr cloud(new PointCloud);
                pcl::transformPointCloud( *(mvKeyFrames[i]->mptrPointCloud), *cloud, Converter::toMatrix4d(mvKeyFrames[i]->GetPoseInverse()));
                *tmp += *cloud;
            }
        }

        cout << "finished rebuilding the map with the optimized poses" << endl;
        // Optionally filter tmp before swapping it in:
        //voxel.setInputCloud(tmp);
        //voxel.filter(*mpGlobalCloud);
        // Replace the global cloud with the rebuilt one; without this the rebuilt cloud is discarded.
        mpGlobalCloud->swap(*tmp);
        mbLoopBusy = false;
    }

    void PointCloudMapping::Viewer()
    {
        pcl::visualization::CloudViewer viewer("Dense map viewer");
        while(1)
        {
            size_t N = 0;
            {
                unique_lock<mutex> lck(mmKeyFrameMutex);
                if(mbLoopBusy)
                {
                    continue;
                }
                {
                    unique_lock<mutex> lck_shut_down(mmShutDownMutex);
                    if(mbShutDownFlag)
                    {
                        cout << "Viewer has break " << endl;
                        break;
                    }
                }

                N = mvKeyFrames.size();
                if(N == mnLastKeyFrameId)
                {
                    continue;
                }
                else
                {
                    unique_lock<mutex> lck_(mmCloudeUpdateMutex);
                    for(size_t i = mnLastKeyFrameId; i < N; i++)
                    {
                        PointCloud::Ptr p (new PointCloud);
                        pcl::transformPointCloud( *(mvKeyFrames[i]->mptrPointCloud), *p, Converter::toMatrix4d(mvKeyFrames[i]->GetPoseInverse()));
                        //cout<<"处理好第i个点云"<<i<<endl;
                        *mpGlobalCloud += *p;
                    }
                    //PointCloud::Ptr tmp(new PointCloud);
                    //voxel.setInputCloud(mpGlobalCloud);
                    //voxel.filter(*tmp);
                    //mpGlobalCloud->swap(*tmp);

                    mnLastKeyFrameId = N;
                    cout << "Total has point clouds: " << mpGlobalCloud->points.size() << endl;
                }
            }
            viewer.showCloud(mpGlobalCloud);
        }
    }

    void PointCloudMapping::SaveCloud()
    {
        pcl::io::savePCDFile("result.pcd",*mpGlobalCloud);
        cout << "global cloud save finished !" << endl;
    }

    void PointCloudMapping::ShutDown()
    {
        {
            unique_lock<mutex> lck(mmShutDownMutex);
            mbShutDownFlag = true;
        }
        mptViewerThread->join();
        cout << "Point cloud mapping has shut down! " << endl;
    }
}

6 Error Analysis

(1) Compilation errors about undeclared identifiers: some of the reference code omits declarations in the header file; everything that is used must be declared there before the project compiles.

(2) Segmentation fault when running the dataset: the visualization window appears and then the program crashes.

ORB Extractor Parameters: 
- Number of Features: 1000
- Scale Levels: 8
- Scale Factor: 1.2
- Initial Fast Threshold: 20
- Minimum Fast Threshold: 7

Depth Threshold (Close/Far Points): 2.98842

-------
Start processing sequence ...
Images in the sequence: 2488

New map created with 891 points
received a keyframe, id = 1
Segmentation fault (core dumped)

I then tried several fixes:

Fix 1: remove -march=native from the top-level CMakeLists.txt and from Thirdparty/g2o/CMakeLists.txt. This alone did not solve the crash.

#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 -march=native")

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 ")

Fix 2: downgrade Eigen to 3.1.0. This did not solve it either.

Fix 3: the provided reference code ran successfully, which narrowed the problem down to my own changes: System.cc never created the mapping object (see the System.cc sketch in Section 2):

mpPointCloudMapping = make_shared<PointCloudMapping>();

(3) My reconstruction differed noticeably from the reference run, with duplicated and misaligned structure.

Fix: comparing the two runs showed clearly different point counts. Judging from the console output and the displayed cloud, the RGB image and the depth data were not being matched correctly, so valid points were not generated.

The root cause was that the new mImDepth member had not been substituted consistently everywhere the old local depth image was used; after synchronizing the name (so that mImDepth also receives the metric float conversion), dense mapping worked correctly.

cv::Mat Tracking::GrabImageRGBD(const cv::Mat &imRGB,const cv::Mat &imD, const double &timestamp)
{
    mImGray = imRGB;
    mImRGB = imRGB; 
    mImDepth = imD;
    
    // Every later use of the old local images must be updated to mImGray / mImRGB / mImDepth consistently
}

(4) After modifying the code, rebuilding sometimes kept reproducing the previous error; deleting the build directory and rebuilding from scratch solved it.

7 Further Thoughts

(1) Pros and cons of calling mpPointCloudMapping->InsertKeyFrame from the tracking thread versus the local mapping thread

Inserting from the tracking thread

  • Pros: the RGB and depth images are readily at hand when InsertKeyFrame is called there.
  • Cons: it takes time away from the tracking thread.

Inserting from the local mapping thread (see the sketch after the list)

  • Pros: it does not consume tracking-thread time, which better satisfies the real-time requirement; keyframes that have passed through local mapping carry locally optimized poses, so building the dense cloud afterwards should give better results.
  • Cons: the RGB and depth images are not as readily available as in the tracking thread.
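A minimal sketch of the second option, assuming the RGB and depth images are stashed on the keyframe when it is created (mImRGB/mImDepth as hypothetical KeyFrame members and mpPointCloudMapping also held by LocalMapping; none of this is in stock ORB-SLAM2):

// In Tracking::CreateNewKeyFrame(), stash the images on the keyframe:
//     pKF->mImRGB   = mImRGB.clone();
//     pKF->mImDepth = mImDepth.clone();

// In LocalMapping, after the keyframe has been processed and locally optimized:
void LocalMapping::InsertKeyFrameToDenseMap(KeyFrame* pKF)
{
    if (!pKF->isBad())
        mpPointCloudMapping->InsertKeyFrame(pKF, pKF->mImRGB, pKF->mImDepth);
}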
