Camera Calibration and 3D Reconstruction (3): Calibrating a Camera with OpenCV


Cameras have been around for a long, long time. However, with the introduction of cheap pinhole cameras at the end of the 20th century, they became commonplace in our everyday lives. Unfortunately, this cheapness comes with a price: significant distortion. Luckily, these distortions are constants, and with calibration and some remapping we can correct them. Furthermore, with calibration you can also determine the relationship between the camera's natural units (pixels) and real-world units (for example, millimeters).

1. Theory

For distortion, OpenCV takes radial and tangential factors into account.

For radial distortion, the following formulas are used:
$$x_{distorted} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$
$$y_{distorted} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6)$$
So for an undistorted pixel point at $(x, y)$, its position on the distorted image will be $(x_{distorted}, y_{distorted})$. The presence of radial distortion manifests itself as a "barrel" or "fisheye" effect.

Tangential distortion occurs because the lens is not perfectly parallel to the imaging plane. It can be represented by the formulas:
$$x_{distorted} = x + [2 p_1 x y + p_2 (r^2 + 2 x^2)]$$
$$y_{distorted} = y + [p_1 (r^2 + 2 y^2) + 2 p_2 x y]$$

So we have five distortion parameters, which in OpenCV are presented as one row matrix with 5 columns: distortion_coefficients = (k1 k2 p1 p2 k3)
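
To make the model concrete, here is a minimal standalone sketch (not part of the tutorial sample; the coefficient values are hypothetical, loosely echoing the results in section 4.7) that applies the radial and tangential terms to a normalized image point:

#include <cstdio>

// Apply the 5-coefficient model (k1, k2, p1, p2, k3) to a normalized
// image point (x, y), i.e. a point already divided by its Z coordinate.
static void distortPoint(double x, double y, const double d[5],
                         double& xd, double& yd)
{
    const double k1 = d[0], k2 = d[1], p1 = d[2], p2 = d[3], k3 = d[4];
    const double r2 = x * x + y * y;
    const double radial = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2;
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x); // radial + tangential
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y;
}

int main()
{
    // Hypothetical coefficients, similar in magnitude to section 4.7's output.
    const double d[5] = {-0.418, 0.507, 0.0, 0.0, -0.578};
    double xd, yd;
    distortPoint(0.3, 0.2, d, xd, yd);
    std::printf("(0.3, 0.2) -> (%f, %f)\n", xd, yd);
    return 0;
}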

Now, for the unit conversion we use the following formula:
$$\begin{bmatrix} x \\ y \\ w \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix}$$
Here the presence of $w$ is explained by the use of homogeneous coordinates ($w = Z$). The unknown parameters are $f_x$ and $f_y$ (the camera focal lengths) and $(c_x, c_y)$, the optical center expressed in pixel coordinates. If for both axes a common focal length is used with a given aspect ratio (usually 1), then $f_y = f_x \cdot a$ and in the formula above we will have a single focal length $f$. The matrix containing these four parameters is referred to as the camera matrix. While the distortion coefficients are the same regardless of the camera resolution used, the camera matrix should be scaled together with the current resolution from the calibrated resolution.
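
As a sketch of the unit conversion itself (the intrinsic values below are hypothetical, and the input point is assumed to be already undistorted):

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    using namespace cv;
    // Hypothetical camera matrix for a 640x480 calibration resolution.
    Matx33d K(657.5,   0.0, 319.5,
                0.0, 657.5, 239.5,
                0.0,   0.0,   1.0);

    Vec3d P(0.1, -0.05, 2.0); // a 3D point in camera coordinates
    Vec3d p = K * P;          // homogeneous pixel coordinates, w = Z
    std::cout << "pixel: (" << p[0] / p[2] << ", " << p[1] / p[2] << ")\n";

    // If the same camera is later run at 1280x960, f_x, f_y, c_x, c_y scale
    // with the resolution, while the distortion coefficients stay unchanged.
    double s = 1280.0 / 640.0;
    Matx33d K2(K(0,0) * s, 0.0,        K(0,2) * s,
               0.0,        K(1,1) * s, K(1,2) * s,
               0.0,        0.0,        1.0);
    std::cout << "scaled fx: " << K2(0, 0) << "\n";
    return 0;
}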

The process of determining these two matrices is the calibration. The computation of these parameters is done through basic geometric equations. The equations used depend on the chosen calibration object. Currently OpenCV supports several types of calibration objects, among them:

  • the classical black-white chessboard
  • a symmetrical circle pattern
  • an asymmetrical circle pattern

Basically, you need to take snapshots of these calibration patterns with your camera and let OpenCV find them. Each found pattern yields a new equation. To solve the equations, you need at least a predetermined number of pattern snapshots to form a well-posed equation system. This number is higher for the chessboard pattern and lower for the circle ones. For example, in theory the chessboard pattern requires at least two snapshots, since each view of a planar target contributes only a couple of constraints on the intrinsic parameters. However, in practice we have a good amount of noise present in our input images, so for good results you will probably need at least 10 good snapshots of the input pattern.

2. Goal

The sample application will:

  • Determine the distortion matrix
  • Determine the camera matrix
  • Take input from camera, video, and image file lists
  • Read configuration from XML/YAML files
  • Save the results to XML/YAML files
  • Calculate the re-projection error

3. Source code

You may also find the source code in the samples/cpp/tutorial_code/calib3d/camera_calibration/ folder of the OpenCV source library, or download it from here. For usage, run the program with the -h parameter. The program has one essential argument: the name of its configuration file. If none is given, it will try to open one named "default.xml". Below is an example configuration file in XML format. In the configuration file you may choose to use a camera, a video file, or an image list as input. If you opt for the last one, you will need to create a configuration file enumerating the images, as shown in the example. The important part to remember is that the images need to be specified using absolute paths or paths relative to the application's working directory. You can find all of this in the samples directory mentioned above.

The application starts by reading the settings from the configuration file. Although this is an important part, it has nothing to do with the subject of this tutorial: camera calibration. Therefore, I've chosen not to post the code for that part here. The technical background on how to do this can be found in the File Input and Output using XML and YAML files tutorial.

#include <iostream>
#include <sstream>
#include <string>
#include <ctime>
#include <cstdio>

#include <opencv2/core.hpp>
#include <opencv2/core/utility.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;
using namespace std;

class Settings
{
public:
    Settings() : goodInput(false) {}
    enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };
    enum InputType { INVALID, CAMERA, VIDEO_FILE, IMAGE_LIST };

    void write(FileStorage& fs) const                        //Write serialization for this class
    {
        fs << "{"
                  << "BoardSize_Width"  << boardSize.width
                  << "BoardSize_Height" << boardSize.height
                  << "Square_Size"         << squareSize
                  << "Calibrate_Pattern" << patternToUse
                  << "Calibrate_NrOfFrameToUse" << nrFrames
                  << "Calibrate_FixAspectRatio" << aspectRatio
                  << "Calibrate_AssumeZeroTangentialDistortion" << calibZeroTangentDist
                  << "Calibrate_FixPrincipalPointAtTheCenter" << calibFixPrincipalPoint

                  << "Write_DetectedFeaturePoints" << writePoints
                  << "Write_extrinsicParameters"   << writeExtrinsics
                  << "Write_gridPoints" << writeGrid
                  << "Write_outputFileName"  << outputFileName

                  << "Show_UndistortedImage" << showUndistorted

                  << "Input_FlipAroundHorizontalAxis" << flipVertical
                  << "Input_Delay" << delay
                  << "Input" << input
           << "}";
    }
    void read(const FileNode& node)                          //Read serialization for this class
    {
        node["BoardSize_Width" ] >> boardSize.width;
        node["BoardSize_Height"] >> boardSize.height;
        node["Calibrate_Pattern"] >> patternToUse;
        node["Square_Size"]  >> squareSize;
        node["Calibrate_NrOfFrameToUse"] >> nrFrames;
        node["Calibrate_FixAspectRatio"] >> aspectRatio;
        node["Write_DetectedFeaturePoints"] >> writePoints;
        node["Write_extrinsicParameters"] >> writeExtrinsics;
        node["Write_gridPoints"] >> writeGrid;
        node["Write_outputFileName"] >> outputFileName;
        node["Calibrate_AssumeZeroTangentialDistortion"] >> calibZeroTangentDist;
        node["Calibrate_FixPrincipalPointAtTheCenter"] >> calibFixPrincipalPoint;
        node["Calibrate_UseFisheyeModel"] >> useFisheye;
        node["Input_FlipAroundHorizontalAxis"] >> flipVertical;
        node["Show_UndistortedImage"] >> showUndistorted;
        node["Input"] >> input;
        node["Input_Delay"] >> delay;
        node["Fix_K1"] >> fixK1;
        node["Fix_K2"] >> fixK2;
        node["Fix_K3"] >> fixK3;
        node["Fix_K4"] >> fixK4;
        node["Fix_K5"] >> fixK5;

        validate();
    }
    void validate()
    {
        goodInput = true;
        if (boardSize.width <= 0 || boardSize.height <= 0)
        {
            cerr << "Invalid Board size: " << boardSize.width << " " << boardSize.height << endl;
            goodInput = false;
        }
        if (squareSize <= 10e-6)
        {
            cerr << "Invalid square size " << squareSize << endl;
            goodInput = false;
        }
        if (nrFrames <= 0)
        {
            cerr << "Invalid number of frames " << nrFrames << endl;
            goodInput = false;
        }

        if (input.empty())      // Check for valid input
                inputType = INVALID;
        else
        {
            if (input[0] >= '0' && input[0] <= '9')
            {
                stringstream ss(input);
                ss >> cameraID;
                inputType = CAMERA;
            }
            else
            {
                if (isListOfImages(input) && readStringList(input, imageList))
                {
                    inputType = IMAGE_LIST;
                    nrFrames = (nrFrames < (int)imageList.size()) ? nrFrames : (int)imageList.size();
                }
                else
                    inputType = VIDEO_FILE;
            }
            if (inputType == CAMERA)
                inputCapture.open(cameraID);
            if (inputType == VIDEO_FILE)
                inputCapture.open(input);
            if (inputType != IMAGE_LIST && !inputCapture.isOpened())
                    inputType = INVALID;
        }
        if (inputType == INVALID)
        {
            cerr << " Input does not exist: " << input;
            goodInput = false;
        }

        flag = 0;
        if(calibFixPrincipalPoint) flag |= CALIB_FIX_PRINCIPAL_POINT;
        if(calibZeroTangentDist)   flag |= CALIB_ZERO_TANGENT_DIST;
        if(aspectRatio)            flag |= CALIB_FIX_ASPECT_RATIO;
        if(fixK1)                  flag |= CALIB_FIX_K1;
        if(fixK2)                  flag |= CALIB_FIX_K2;
        if(fixK3)                  flag |= CALIB_FIX_K3;
        if(fixK4)                  flag |= CALIB_FIX_K4;
        if(fixK5)                  flag |= CALIB_FIX_K5;

        if (useFisheye) {
            // the fisheye model has its own enum, so overwrite the flags
            flag = fisheye::CALIB_FIX_SKEW | fisheye::CALIB_RECOMPUTE_EXTRINSIC;
            if(fixK1)                   flag |= fisheye::CALIB_FIX_K1;
            if(fixK2)                   flag |= fisheye::CALIB_FIX_K2;
            if(fixK3)                   flag |= fisheye::CALIB_FIX_K3;
            if(fixK4)                   flag |= fisheye::CALIB_FIX_K4;
            if (calibFixPrincipalPoint) flag |= fisheye::CALIB_FIX_PRINCIPAL_POINT;
        }

        calibrationPattern = NOT_EXISTING;
        if (!patternToUse.compare("CHESSBOARD")) calibrationPattern = CHESSBOARD;
        if (!patternToUse.compare("CIRCLES_GRID")) calibrationPattern = CIRCLES_GRID;
        if (!patternToUse.compare("ASYMMETRIC_CIRCLES_GRID")) calibrationPattern = ASYMMETRIC_CIRCLES_GRID;
        if (calibrationPattern == NOT_EXISTING)
        {
            cerr << " Camera calibration mode does not exist: " << patternToUse << endl;
            goodInput = false;
        }
        atImageList = 0;

    }
    Mat nextImage()
    {
        Mat result;
        if( inputCapture.isOpened() )
        {
            Mat view0;
            inputCapture >> view0;
            view0.copyTo(result);
        }
        else if( atImageList < imageList.size() )
            result = imread(imageList[atImageList++], IMREAD_COLOR);

        return result;
    }

    static bool readStringList( const string& filename, vector<string>& l )
    {
        l.clear();
        FileStorage fs(filename, FileStorage::READ);
        if( !fs.isOpened() )
            return false;
        FileNode n = fs.getFirstTopLevelNode();
        if( n.type() != FileNode::SEQ )
            return false;
        FileNodeIterator it = n.begin(), it_end = n.end();
        for( ; it != it_end; ++it )
            l.push_back((string)*it);
        return true;
    }

    static bool isListOfImages( const string& filename)
    {
        string s(filename);
        // Look for file extension
        if( s.find(".xml") == string::npos && s.find(".yaml") == string::npos && s.find(".yml") == string::npos )
            return false;
        else
            return true;
    }
public:
    Size boardSize;              // The size of the board -> Number of items by width and height
    Pattern calibrationPattern;  // One of the Chessboard, circles, or asymmetric circle pattern
    float squareSize;            // The size of a square in your defined unit (point, millimeter,etc).
    int nrFrames;                // The number of frames to use from the input for calibration
    float aspectRatio;           // The aspect ratio
    int delay;                   // In case of a video input
    bool writePoints;            // Write detected feature points
    bool writeExtrinsics;        // Write extrinsic parameters
    bool writeGrid;              // Write refined 3D target grid points
    bool calibZeroTangentDist;   // Assume zero tangential distortion
    bool calibFixPrincipalPoint; // Fix the principal point at the center
    bool flipVertical;           // Flip the captured images around the horizontal axis
    string outputFileName;       // The name of the file where to write
    bool showUndistorted;        // Show undistorted images after calibration
    string input;                // The input ->
    bool useFisheye;             // use fisheye camera model for calibration
    bool fixK1;                  // fix K1 distortion coefficient
    bool fixK2;                  // fix K2 distortion coefficient
    bool fixK3;                  // fix K3 distortion coefficient
    bool fixK4;                  // fix K4 distortion coefficient
    bool fixK5;                  // fix K5 distortion coefficient

    int cameraID;
    vector<string> imageList;
    size_t atImageList;
    VideoCapture inputCapture;
    InputType inputType;
    bool goodInput;
    int flag;

private:
    string patternToUse;


};

static inline void read(const FileNode& node, Settings& x, const Settings& default_value = Settings())
{
    if(node.empty())
        x = default_value;
    else
        x.read(node);
}

enum { DETECTION = 0, CAPTURING = 1, CALIBRATED = 2 };

bool runCalibrationAndSave(Settings& s, Size imageSize, Mat&  cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints, float grid_width, bool release_object);

int main(int argc, char* argv[])
{
    const String keys
        = "{help h usage ? |           | print this message            }"
          "{@settings      |default.xml| input setting file            }"
          "{d              |           | actual distance between top-left and top-right corners of "
          "the calibration grid }"
          "{winSize        | 11        | Half of search window for cornerSubPix }";
    CommandLineParser parser(argc, argv, keys);
    parser.about("This is a camera calibration sample.\n"
                 "Usage: camera_calibration [configuration_file -- default ./default.xml]\n"
                 "Near the sample file you'll find the configuration file, which has detailed help of "
                 "how to edit it. It may be any OpenCV supported file format XML/YAML.");
    if (!parser.check()) {
        parser.printErrors();
        return 0;
    }

    if (parser.has("help")) {
        parser.printMessage();
        return 0;
    }

    //! [file_read]
    Settings s;
    const string inputSettingsFile = parser.get<string>(0);
    FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
    if (!fs.isOpened())
    {
        cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
        parser.printMessage();
        return -1;
    }
    fs["Settings"] >> s;
    fs.release();                                         // close Settings file
    //! [file_read]

    //FileStorage fout("settings.yml", FileStorage::WRITE); // write config as YAML
    //fout << "Settings" << s;

    if (!s.goodInput)
    {
        cout << "Invalid input detected. Application stopping. " << endl;
        return -1;
    }

    int winSize = parser.get<int>("winSize");

    float grid_width = s.squareSize * (s.boardSize.width - 1);
    bool release_object = false;
    if (parser.has("d")) {
        grid_width = parser.get<float>("d");
        release_object = true;
    }

    vector<vector<Point2f> > imagePoints;
    Mat cameraMatrix, distCoeffs;
    Size imageSize;
    int mode = s.inputType == Settings::IMAGE_LIST ? CAPTURING : DETECTION;
    clock_t prevTimestamp = 0;
    const Scalar RED(0,0,255), GREEN(0,255,0);
    const char ESC_KEY = 27;

    //! [get_input]
    for(;;)
    {
        Mat view;
        bool blinkOutput = false;

        view = s.nextImage();

        //-----  If no more image, or got enough, then stop calibration and show result -------------
        if( mode == CAPTURING && imagePoints.size() >= (size_t)s.nrFrames )
        {
          if(runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints, grid_width,
                                   release_object))
              mode = CALIBRATED;
          else
              mode = DETECTION;
        }
        if(view.empty())          // If there are no more images stop the loop
        {
            // if calibration threshold was not reached yet, calibrate now
            if( mode != CALIBRATED && !imagePoints.empty() )
                runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints, grid_width,
                                      release_object);
            break;
        }
        //! [get_input]

        imageSize = view.size();  // Format input image.
        if( s.flipVertical )    flip( view, view, 0 );

        //! [find_pattern]
        vector<Point2f> pointBuf;

        bool found;

        int chessBoardFlags = CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE;

        if(!s.useFisheye) {
            // fast check erroneously fails with high distortions like fisheye
            chessBoardFlags |= CALIB_CB_FAST_CHECK;
        }

        switch( s.calibrationPattern ) // Find feature points on the input format
        {
        case Settings::CHESSBOARD:
            found = findChessboardCorners( view, s.boardSize, pointBuf, chessBoardFlags);
            break;
        case Settings::CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf );
            break;
        case Settings::ASYMMETRIC_CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
            break;
        default:
            found = false;
            break;
        }
        //! [find_pattern]
        //! [pattern_found]
        if ( found)                // If done with success,
        {
              // improve the found corners' coordinate accuracy for chessboard
                if( s.calibrationPattern == Settings::CHESSBOARD)
                {
                    Mat viewGray;
                    cvtColor(view, viewGray, COLOR_BGR2GRAY);
                    cornerSubPix( viewGray, pointBuf, Size(winSize,winSize),
                        Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.0001 ));
                }

                if( mode == CAPTURING &&  // For camera only take new samples after delay time
                    (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
                {
                    imagePoints.push_back(pointBuf);
                    prevTimestamp = clock();
                    blinkOutput = s.inputCapture.isOpened();
                }

                // Draw the corners.
                drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
        }
        //! [pattern_found]
        //----------------------------- Output Text ------------------------------------------------
        //! [output_text]
        string msg = (mode == CAPTURING) ? "100/100" :
                      mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
        int baseLine = 0;
        Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
        Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);

        if( mode == CAPTURING )
        {
            if(s.showUndistorted)
                msg = cv::format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
            else
                msg = cv::format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
        }

        putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ?  GREEN : RED);

        if( blinkOutput )
            bitwise_not(view, view);
        //! [output_text]
        //------------------------- Video capture  output  undistorted ------------------------------
        //! [output_undistorted]
        if( mode == CALIBRATED && s.showUndistorted )
        {
            Mat temp = view.clone();
            if (s.useFisheye)
            {
                Mat newCamMat;
                fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,
                                                                    Matx33d::eye(), newCamMat, 1);
                cv::fisheye::undistortImage(temp, view, cameraMatrix, distCoeffs, newCamMat);
            }
            else
              undistort(temp, view, cameraMatrix, distCoeffs);
        }
        //! [output_undistorted]
        //------------------------------ Show image and check for input commands -------------------
        //! [await_input]
        imshow("Image View", view);
        char key = (char)waitKey(s.inputCapture.isOpened() ? 50 : s.delay);

        if( key  == ESC_KEY )
            break;

        if( key == 'u' && mode == CALIBRATED )
           s.showUndistorted = !s.showUndistorted;

        if( s.inputCapture.isOpened() && key == 'g' )
        {
            mode = CAPTURING;
            imagePoints.clear();
        }
        //! [await_input]
    }

    // -----------------------Show the undistorted image for the image list ------------------------
    //! [show_results]
    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorted && !cameraMatrix.empty())
    {
        Mat view, rview, map1, map2;

        if (s.useFisheye)
        {
            Mat newCamMat;
            fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,
                                                                Matx33d::eye(), newCamMat, 1);
            fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs, Matx33d::eye(), newCamMat, imageSize,
                                             CV_16SC2, map1, map2);
        }
        else
        {
            initUndistortRectifyMap(
                cameraMatrix, distCoeffs, Mat(),
                getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0), imageSize,
                CV_16SC2, map1, map2);
        }

        for(size_t i = 0; i < s.imageList.size(); i++ )
        {
            view = imread(s.imageList[i], IMREAD_COLOR);
            if(view.empty())
                continue;
            remap(view, rview, map1, map2, INTER_LINEAR);
            imshow("Image View", rview);
            char c = (char)waitKey();
            if( c  == ESC_KEY || c == 'q' || c == 'Q' )
                break;
        }
    }
    //! [show_results]

    return 0;
}

//! [compute_errors]
static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                         const vector<vector<Point2f> >& imagePoints,
                                         const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                         const Mat& cameraMatrix , const Mat& distCoeffs,
                                         vector<float>& perViewErrors, bool fisheye)
{
    vector<Point2f> imagePoints2;
    size_t totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());

    for(size_t i = 0; i < objectPoints.size(); ++i )
    {
        if (fisheye)
        {
            fisheye::projectPoints(objectPoints[i], imagePoints2, rvecs[i], tvecs[i], cameraMatrix,
                                   distCoeffs);
        }
        else
        {
            projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, imagePoints2);
        }
        err = norm(imagePoints[i], imagePoints2, NORM_L2);

        size_t n = objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);
        totalErr        += err*err;
        totalPoints     += n;
    }

    return std::sqrt(totalErr/totalPoints);
}
//! [compute_errors]
//! [board_corners]
static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                                     Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();

    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(j*squareSize, i*squareSize, 0));
        break;

    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f((2*j + i % 2)*squareSize, i*squareSize, 0));
        break;
    default:
        break;
    }
}
//! [board_corners]
static bool runCalibration( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                            vector<vector<Point2f> > imagePoints, vector<Mat>& rvecs, vector<Mat>& tvecs,
                            vector<float>& reprojErrs,  double& totalAvgErr, vector<Point3f>& newObjPoints,
                            float grid_width, bool release_object)
{
    //! [fixed_aspect]
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    if( !s.useFisheye && s.flag & CALIB_FIX_ASPECT_RATIO )
        cameraMatrix.at<double>(0,0) = s.aspectRatio;
    //! [fixed_aspect]
    if (s.useFisheye) {
        distCoeffs = Mat::zeros(4, 1, CV_64F);
    } else {
        distCoeffs = Mat::zeros(8, 1, CV_64F);
    }

    vector<vector<Point3f> > objectPoints(1);
    calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
    objectPoints[0][s.boardSize.width - 1].x = objectPoints[0][0].x + grid_width;
    newObjPoints = objectPoints[0];

    objectPoints.resize(imagePoints.size(),objectPoints[0]);

    //Find intrinsic and extrinsic camera parameters
    double rms;

    if (s.useFisheye) {
        Mat _rvecs, _tvecs;
        rms = fisheye::calibrate(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, _rvecs,
                                 _tvecs, s.flag);

        rvecs.reserve(_rvecs.rows);
        tvecs.reserve(_tvecs.rows);
        for(int i = 0; i < int(objectPoints.size()); i++){
            rvecs.push_back(_rvecs.row(i));
            tvecs.push_back(_tvecs.row(i));
        }
    } else {
        int iFixedPoint = -1;
        if (release_object)
            iFixedPoint = s.boardSize.width - 1;
        rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                                cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
                                s.flag | CALIB_USE_LU);
    }

    if (release_object) {
        cout << "New board corners: " << endl;
        cout << newObjPoints[0] << endl;
        cout << newObjPoints[s.boardSize.width - 1] << endl;
        cout << newObjPoints[s.boardSize.width * (s.boardSize.height - 1)] << endl;
        cout << newObjPoints.back() << endl;
    }

    cout << "Re-projection error reported by calibrateCamera: "<< rms << endl;

    bool ok = checkRange(cameraMatrix) && checkRange(distCoeffs);

    objectPoints.clear();
    objectPoints.resize(imagePoints.size(), newObjPoints);
    totalAvgErr = computeReprojectionErrors(objectPoints, imagePoints, rvecs, tvecs, cameraMatrix,
                                            distCoeffs, reprojErrs, s.useFisheye);

    return ok;
}

// Print camera parameters to the output file
static void saveCameraParams( Settings& s, Size& imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                              const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                              const vector<float>& reprojErrs, const vector<vector<Point2f> >& imagePoints,
                              double totalAvgErr, const vector<Point3f>& newObjPoints )
{
    FileStorage fs( s.outputFileName, FileStorage::WRITE );

    time_t tm;
    time( &tm );
    struct tm *t2 = localtime( &tm );
    char buf[1024];
    strftime( buf, sizeof(buf), "%c", t2 );

    fs << "calibration_time" << buf;

    if( !rvecs.empty() || !reprojErrs.empty() )
        fs << "nr_of_frames" << (int)std::max(rvecs.size(), reprojErrs.size());
    fs << "image_width" << imageSize.width;
    fs << "image_height" << imageSize.height;
    fs << "board_width" << s.boardSize.width;
    fs << "board_height" << s.boardSize.height;
    fs << "square_size" << s.squareSize;

    if( !s.useFisheye && s.flag & CALIB_FIX_ASPECT_RATIO )
        fs << "fix_aspect_ratio" << s.aspectRatio;

    if (s.flag)
    {
        std::stringstream flagsStringStream;
        if (s.useFisheye)
        {
            flagsStringStream << "flags:"
                << (s.flag & fisheye::CALIB_FIX_SKEW ? " +fix_skew" : "")
                << (s.flag & fisheye::CALIB_FIX_K1 ? " +fix_k1" : "")
                << (s.flag & fisheye::CALIB_FIX_K2 ? " +fix_k2" : "")
                << (s.flag & fisheye::CALIB_FIX_K3 ? " +fix_k3" : "")
                << (s.flag & fisheye::CALIB_FIX_K4 ? " +fix_k4" : "")
                << (s.flag & fisheye::CALIB_RECOMPUTE_EXTRINSIC ? " +recompute_extrinsic" : "");
        }
        else
        {
            flagsStringStream << "flags:"
                << (s.flag & CALIB_USE_INTRINSIC_GUESS ? " +use_intrinsic_guess" : "")
                << (s.flag & CALIB_FIX_ASPECT_RATIO ? " +fix_aspectRatio" : "")
                << (s.flag & CALIB_FIX_PRINCIPAL_POINT ? " +fix_principal_point" : "")
                << (s.flag & CALIB_ZERO_TANGENT_DIST ? " +zero_tangent_dist" : "")
                << (s.flag & CALIB_FIX_K1 ? " +fix_k1" : "")
                << (s.flag & CALIB_FIX_K2 ? " +fix_k2" : "")
                << (s.flag & CALIB_FIX_K3 ? " +fix_k3" : "")
                << (s.flag & CALIB_FIX_K4 ? " +fix_k4" : "")
                << (s.flag & CALIB_FIX_K5 ? " +fix_k5" : "");
        }
        fs.writeComment(flagsStringStream.str());
    }

    fs << "flags" << s.flag;

    fs << "fisheye_model" << s.useFisheye;

    fs << "camera_matrix" << cameraMatrix;
    fs << "distortion_coefficients" << distCoeffs;

    fs << "avg_reprojection_error" << totalAvgErr;
    if (s.writeExtrinsics && !reprojErrs.empty())
        fs << "per_view_reprojection_errors" << Mat(reprojErrs);

    if(s.writeExtrinsics && !rvecs.empty() && !tvecs.empty() )
    {
        CV_Assert(rvecs[0].type() == tvecs[0].type());
        Mat bigmat((int)rvecs.size(), 6, CV_MAKETYPE(rvecs[0].type(), 1));
        bool needReshapeR = rvecs[0].depth() != 1 ? true : false;
        bool needReshapeT = tvecs[0].depth() != 1 ? true : false;

        for( size_t i = 0; i < rvecs.size(); i++ )
        {
            Mat r = bigmat(Range(int(i), int(i+1)), Range(0,3));
            Mat t = bigmat(Range(int(i), int(i+1)), Range(3,6));

            if(needReshapeR)
                rvecs[i].reshape(1, 1).copyTo(r);
            else
            {
                //*.t() is MatExpr (not Mat) so we can use assignment operator
                CV_Assert(rvecs[i].rows == 3 && rvecs[i].cols == 1);
                r = rvecs[i].t();
            }

            if(needReshapeT)
                tvecs[i].reshape(1, 1).copyTo(t);
            else
            {
                CV_Assert(tvecs[i].rows == 3 && tvecs[i].cols == 1);
                t = tvecs[i].t();
            }
        }
        fs.writeComment("a set of 6-tuples (rotation vector + translation vector) for each view");
        fs << "extrinsic_parameters" << bigmat;
    }

    if(s.writePoints && !imagePoints.empty() )
    {
        Mat imagePtMat((int)imagePoints.size(), (int)imagePoints[0].size(), CV_32FC2);
        for( size_t i = 0; i < imagePoints.size(); i++ )
        {
            Mat r = imagePtMat.row(int(i)).reshape(2, imagePtMat.cols);
            Mat imgpti(imagePoints[i]);
            imgpti.copyTo(r);
        }
        fs << "image_points" << imagePtMat;
    }

    if( s.writeGrid && !newObjPoints.empty() )
    {
        fs << "grid_points" << newObjPoints;
    }
}

//! [run_and_save]
bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints, float grid_width, bool release_object)
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;
    vector<Point3f> newObjPoints;

    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,
                             totalAvgErr, newObjPoints, grid_width, release_object);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr << endl;

    if (ok)
        saveCameraParams(s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs, imagePoints,
                         totalAvgErr, newObjPoints);
    return ok;
}

4. Explanation of the source code

4.1 Reading the configuration file

The content of the configuration file in_VID5.xml is shown below; you can modify it to suit your needs:

<?xml version="1.0"?>
<opencv_storage>
<Settings>
  <!-- Number of inner corners per a item row and column. (square, circle) -->
  <BoardSize_Width> 9</BoardSize_Width>
  <BoardSize_Height>6</BoardSize_Height>
  
  <!-- The size of a square in some user defined metric system (pixel, millimeter)-->
  <Square_Size>50</Square_Size>
  
  <!-- The type of input used for camera calibration. One of: CHESSBOARD CIRCLES_GRID ASYMMETRIC_CIRCLES_GRID -->
  <Calibrate_Pattern>"CHESSBOARD"</Calibrate_Pattern>
  
  <!-- The input to use for calibration. 
		To use an input camera -> give the ID of the camera, like "1"
		To use an input video  -> give the path of the input video, like "/tmp/x.avi"
		To use an image list   -> give the path to the XML or YAML file containing the list of the images, like "/tmp/circles_list.xml"
		-->
  <Input>"images/CameraCalibration/VID5/VID5.xml"</Input>
  <!--  If true (non-zero) we flip the input images around the horizontal axis.-->
  <Input_FlipAroundHorizontalAxis>0</Input_FlipAroundHorizontalAxis>
  
  <!-- Time delay between frames in case of camera. -->
  <Input_Delay>100</Input_Delay>	
  
  <!-- How many frames to use, for calibration. -->
  <Calibrate_NrOfFrameToUse>25</Calibrate_NrOfFrameToUse>
  <!-- Consider only fy as a free parameter, the ratio fx/fy stays the same as in the input cameraMatrix. 
	   Use or not setting. 0 - False Non-Zero - True-->
  <Calibrate_FixAspectRatio> 1 </Calibrate_FixAspectRatio>
  <!-- If true (non-zero) tangential distortion coefficients  are set to zeros and stay zero.-->
  <Calibrate_AssumeZeroTangentialDistortion>1</Calibrate_AssumeZeroTangentialDistortion>
  <!-- If true (non-zero) the principal point is not changed during the global optimization.-->
  <Calibrate_FixPrincipalPointAtTheCenter> 1 </Calibrate_FixPrincipalPointAtTheCenter>
  
  <!-- The name of the output log file. -->
  <Write_outputFileName>"out_camera_data.xml"</Write_outputFileName>
  <!-- If true (non-zero) we write to the output file the feature points.-->
  <Write_DetectedFeaturePoints>1</Write_DetectedFeaturePoints>
  <!-- If true (non-zero) we write to the output file the extrinsic camera parameters.-->
  <Write_extrinsicParameters>1</Write_extrinsicParameters>
  <!-- If true (non-zero) we write to the output file the refined 3D target grid points.-->
  <Write_gridPoints>1</Write_gridPoints>
  <!-- If true (non-zero) we show after calibration the undistorted images.-->
  <Show_UndistortedImage>1</Show_UndistortedImage>
  <!-- If true (non-zero) will be used fisheye camera model.-->
  <Calibrate_UseFisheyeModel>0</Calibrate_UseFisheyeModel>
  <!-- If true (non-zero) distortion coefficient k1 will be equals to zero.-->
  <Fix_K1>0</Fix_K1>
  <!-- If true (non-zero) distortion coefficient k2 will be equals to zero.-->
  <Fix_K2>0</Fix_K2>
  <!-- If true (non-zero) distortion coefficient k3 will be equals to zero.-->
  <Fix_K3>0</Fix_K3>
  <!-- If true (non-zero) distortion coefficient k4 will be equals to zero.-->
  <Fix_K4>1</Fix_K4>
  <!-- If true (non-zero) distortion coefficient k5 will be equals to zero.-->
  <Fix_K5>1</Fix_K5>
</Settings>
</opencv_storage>

Here is the code that reads the in_VID5.xml file:

	Settings s;
    const string inputSettingsFile = parser.get<string>(0);
    FileStorage fs(inputSettingsFile, FileStorage::READ); // Read the settings
    if (!fs.isOpened())
    {
        cout << "Could not open the configuration file: \"" << inputSettingsFile << "\"" << endl;
        parser.printMessage();
        return -1;
    }
    fs["Settings"] >> s;
    fs.release();        

For this, I used simple OpenCV class input operations. After reading the file, there is an additional post-processing function that checks the validity of the input. Only if all inputs are good is the goodInput variable set to true.

4.2 Get the next input; if it fails or we have enough of them, calibrate

After this we have a big loop in which we do the following: get the next image from the image list, camera, or video file. If this fails, or we have enough images, we run the calibration process. In the case of an image list we step out of the loop; otherwise the remaining frames get undistorted (if the option is set) by changing from DETECTION mode to CALIBRATED.

for(;;)
    {
        Mat view;
        bool blinkOutput = false;
        view = s.nextImage();
        //-----  If no more image, or got enough, then stop calibration and show result -------------
        if( mode == CAPTURING && imagePoints.size() >= (size_t)s.nrFrames )
        {
          if(runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints, grid_width, release_object))
              mode = CALIBRATED;
          else
              mode = DETECTION;
        }
        if(view.empty())          // If there are no more images stop the loop
        {
            // if calibration threshold was not reached yet, calibrate now
            if( mode != CALIBRATED && !imagePoints.empty() )
                runCalibrationAndSave(s, imageSize,  cameraMatrix, distCoeffs, imagePoints, grid_width,
                                      release_object);
            break;
        }

For some cameras we may need to flip the input image. Here we do this as well.

4.3 Find the pattern in the current input

The formation of the equations I mentioned above aims at finding the major patterns in the input: in the case of the chessboard, these are the corners of the squares; for the circles, the circles themselves. The positions of these form the result, which is written into the pointBuf vector.

		vector<Point2f> pointBuf;
        bool found;
        int chessBoardFlags = CALIB_CB_ADAPTIVE_THRESH | CALIB_CB_NORMALIZE_IMAGE;
        if(!s.useFisheye) {
            // fast check erroneously fails with high distortions like fisheye
            chessBoardFlags |= CALIB_CB_FAST_CHECK;
        }
        switch( s.calibrationPattern ) // Find feature points on the input format
        {
        case Settings::CHESSBOARD:
            found = findChessboardCorners( view, s.boardSize, pointBuf, chessBoardFlags);
            break;
        case Settings::CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf );
            break;
        case Settings::ASYMMETRIC_CIRCLES_GRID:
            found = findCirclesGrid( view, s.boardSize, pointBuf, CALIB_CB_ASYMMETRIC_GRID );
            break;
        default:
            found = false;
            break;
        }

Depending on the type of the input pattern, you use either the cv::findChessboardCorners or the cv::findCirclesGrid function. For both of them you pass the current image and the size of the board, and you'll get back the positions of the pattern points. Furthermore, they return a boolean variable that states whether the pattern was found in the input (we only need to consider those images where this is true!).

In the case of a camera, we only take camera images after an input delay time has passed. This is done in order to allow the user to move the chessboard around and get different images. Similar images result in similar equations, and similar equations at the calibration step form an ill-posed problem, so the calibration will fail. For square images the positions of the corners are only approximate. We may improve this by calling the cv::cornerSubPix function. winSize is used to control the side length of the search window; its default value is 11, and it can be changed via the command line parameter --winSize=<number>. This will produce better calibration results. After this we add a valid input result to the imagePoints vector, collecting all of the equations into a single container. Finally, for visualization feedback purposes, we draw the found points on the input image using the cv::drawChessboardCorners function.

		if ( found)                // If done with success,
        {
              // improve the found corners' coordinate accuracy for chessboard
                if( s.calibrationPattern == Settings::CHESSBOARD)
                {
                    Mat viewGray;
                    cvtColor(view, viewGray, COLOR_BGR2GRAY);
                    cornerSubPix( viewGray, pointBuf, Size(winSize,winSize),
                        Size(-1,-1), TermCriteria( TermCriteria::EPS+TermCriteria::COUNT, 30, 0.0001 ));
                }
                if( mode == CAPTURING &&  // For camera only take new samples after delay time
                    (!s.inputCapture.isOpened() || clock() - prevTimestamp > s.delay*1e-3*CLOCKS_PER_SEC) )
                {
                    imagePoints.push_back(pointBuf);
                    prevTimestamp = clock();
                    blinkOutput = s.inputCapture.isOpened();
                }
                // Draw the corners.
                drawChessboardCorners( view, s.boardSize, Mat(pointBuf), found );
        }

4.4 Show state and result to the user, plus command line control of the application

This part shows text output on the image.

        string msg = (mode == CAPTURING) ? "100/100" :
                      mode == CALIBRATED ? "Calibrated" : "Press 'g' to start";
        int baseLine = 0;
        Size textSize = getTextSize(msg, 1, 1, 1, &baseLine);
        Point textOrigin(view.cols - 2*textSize.width - 10, view.rows - 2*baseLine - 10);
        if( mode == CAPTURING )
        {
            if(s.showUndistorted)
                msg = cv::format( "%d/%d Undist", (int)imagePoints.size(), s.nrFrames );
            else
                msg = cv::format( "%d/%d", (int)imagePoints.size(), s.nrFrames );
        }
        putText( view, msg, textOrigin, 1, 1, mode == CALIBRATED ?  GREEN : RED);
        if( blinkOutput )
            bitwise_not(view, view);

If we ran calibration and got the camera matrix with the distortion coefficients, we may want to correct the image using the cv::undistort function:

        if( mode == CALIBRATED && s.showUndistorted )
        {
            Mat temp = view.clone();
            if (s.useFisheye)
            {
                Mat newCamMat;
                fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,
                                                                    Matx33d::eye(), newCamMat, 1);
                cv::fisheye::undistortImage(temp, view, cameraMatrix, distCoeffs, newCamMat);
            }
            else
              undistort(temp, view, cameraMatrix, distCoeffs);
        }

Then we show the image and wait for an input key; if this is u, we toggle the distortion removal, if it is g, we start the detection process again, and finally, for the ESC key we quit the application:

        imshow("Image View", view);
        char key = (char)waitKey(s.inputCapture.isOpened() ? 50 : s.delay);
        if( key  == ESC_KEY )
            break;
        if( key == 'u' && mode == CALIBRATED )
           s.showUndistorted = !s.showUndistorted;
        if( s.inputCapture.isOpened() && key == 'g' )
        {
            mode = CAPTURING;
            imagePoints.clear();
        }

4.5 Show the distortion removal for the images too

When you work with an image list it is not possible to remove the distortion inside the loop. Therefore, you must do this after the loop. Taking advantage of this, I'll now expand the cv::undistort function, which in fact first calls cv::initUndistortRectifyMap to find the transformation matrices and then performs the transformation using the cv::remap function. Because, after a successful calibration, the map computation needs to be done only once, by using this expanded form you may speed up your application:

    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorted && !cameraMatrix.empty())
    {
        Mat view, rview, map1, map2;
        if (s.useFisheye)
        {
            Mat newCamMat;
            fisheye::estimateNewCameraMatrixForUndistortRectify(cameraMatrix, distCoeffs, imageSize,
                                                                Matx33d::eye(), newCamMat, 1);
            fisheye::initUndistortRectifyMap(cameraMatrix, distCoeffs, Matx33d::eye(), newCamMat, imageSize,
                                             CV_16SC2, map1, map2);
        }
        else
        {
            initUndistortRectifyMap(
                cameraMatrix, distCoeffs, Mat(),
                getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0), imageSize,
                CV_16SC2, map1, map2);
        }
        for(size_t i = 0; i < s.imageList.size(); i++ )
        {
            view = imread(s.imageList[i], IMREAD_COLOR);
            if(view.empty())
                continue;
            remap(view, rview, map1, map2, INTER_LINEAR);
            imshow("Image View", rview);
            char c = (char)waitKey();
            if( c  == ESC_KEY || c == 'q' || c == 'Q' )
                break;
        }
    }

4.6 Calibration and save

Because the calibration needs to be done only once per camera, it makes sense to save it after a successful calibration. This way, later on you can load these values into your program. Therefore, we first run the calibration, and if it succeeds we save the result into an OpenCV-style XML or YAML file, depending on the extension you give in the configuration file.
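
Loading the saved parameters back later can be as simple as the following sketch; it assumes the node names written by the saveCameraParams function shown in section 3, and the output file name from the example configuration:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    // Open the file written by the calibration run (name from the config).
    cv::FileStorage fs("out_camera_data.xml", cv::FileStorage::READ);
    if (!fs.isOpened())
        return -1;

    cv::Mat cameraMatrix, distCoeffs;
    fs["camera_matrix"] >> cameraMatrix;
    fs["distortion_coefficients"] >> distCoeffs;
    fs.release();

    std::cout << "camera matrix:\n" << cameraMatrix << std::endl;
    std::cout << "distortion coefficients:\n" << distCoeffs << std::endl;
    return 0;
}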

Therefore, in the first function we simply split up these two processes. Because we want to save many of the calibration variables, we create these variables here and pass them on to both the calibration and the saving function. Again, I won't show the saving part, as it has little in common with the calibration.

bool runCalibrationAndSave(Settings& s, Size imageSize, Mat& cameraMatrix, Mat& distCoeffs,
                           vector<vector<Point2f> > imagePoints, float grid_width, bool release_object)
{
    vector<Mat> rvecs, tvecs;
    vector<float> reprojErrs;
    double totalAvgErr = 0;
    vector<Point3f> newObjPoints;
    bool ok = runCalibration(s, imageSize, cameraMatrix, distCoeffs, imagePoints, rvecs, tvecs, reprojErrs,
                             totalAvgErr, newObjPoints, grid_width, release_object);
    cout << (ok ? "Calibration succeeded" : "Calibration failed")
         << ". avg re projection error = " << totalAvgErr << endl;
    if (ok)
        saveCameraParams(s, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs, reprojErrs, imagePoints,
                         totalAvgErr, newObjPoints);
    return ok;
}

We do the calibration with the help of the cv::calibrateCameraRO function. It has the following parameters:

  • The object points: a vector of vector<Point3f>; for each input image it describes how the pattern should look. If we have a planar pattern (like a chessboard), we can simply set all Z coordinates to zero. This is a collection of these important points. Because we use a single pattern for all the input images, we can calculate it just once and replicate it for all the other input views. We calculate the corner points with the calcBoardCornerPositions function:
static void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                                     Settings::Pattern patternType /*= Settings::CHESSBOARD*/)
{
    corners.clear();
    switch(patternType)
    {
    case Settings::CHESSBOARD:
    case Settings::CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(j*squareSize, i*squareSize, 0));
        break;
    case Settings::ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f((2*j + i % 2)*squareSize, i*squareSize, 0));
        break;
    default:
        break;
    }
}

And then we replicate it for all the views:

vector<vector<Point3f> > objectPoints(1);
calcBoardCornerPositions(s.boardSize, s.squareSize, objectPoints[0], s.calibrationPattern);
objectPoints[0][s.boardSize.width - 1].x = objectPoints[0][0].x + grid_width;
newObjPoints = objectPoints[0];
objectPoints.resize(imagePoints.size(),objectPoints[0]);

Note:
If your calibration board is inaccurate, unmeasured, or roughly planar (a chessboard pattern on paper from an off-the-shelf printer is the most convenient calibration target, but most are not accurate enough), a method can be utilized to dramatically improve the accuracy of the estimated camera intrinsics. This new calibration method is invoked if the command line parameter -d=<number> is provided. In the code snippet above, grid_width is in fact the value set by -d=<number>. It is the measured distance between the top-left (0, 0, 0) and the top-right (s.squareSize*(s.boardSize.width-1), 0, 0) corners of the pattern grid points. It should be measured precisely with a ruler or vernier caliper. After calibration, newObjPoints is updated with the refined 3D coordinates of the object points.

  • The image points: a vector of vector<Point2f> which for each input image contains the coordinates of the important points (corners for the chessboard and centers of the circles for the circle patterns). We have already collected these from the cv::findChessboardCorners or cv::findCirclesGrid functions. We just need to pass them on.
  • The size of the image acquired from the camera, video file, or the images.
  • The index of the fixed object point. We set it to -1 to request the standard calibration method. If the new object-releasing method is to be used, set it to the index of the top-right corner point of the calibration board grid. See cv::calibrateCameraRO for a detailed explanation.
int iFixedPoint = -1;
if (release_object)
    iFixedPoint = s.boardSize.width - 1;
  • The camera matrix: if we used the fixed aspect ratio option, we need to set $f_x$:
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    if( !s.useFisheye && s.flag & CALIB_FIX_ASPECT_RATIO )
        cameraMatrix.at<double>(0,0) = s.aspectRatio;
  • The distortion coefficient matrix: initialized with zeros.
distCoeffs = Mat::zeros(8, 1, CV_64F);
  • For all the views the function will calculate rotation and translation vectors, which transform the object points (given in the model coordinate space) to the image points (given in the world coordinate space). The 7th and 8th parameters are output vectors of matrices, containing in the i-th position the rotation and translation vectors for the i-th object point to the i-th image point.
  • The updated output vector of calibration pattern points. This parameter is ignored with the standard calibration method.
  • The final argument is the flag. You need to specify here options like fixing the aspect ratio for the focal length, assuming zero tangential distortion, or fixing the principal point. Here we additionally use CALIB_USE_LU for faster calibration.
rms = calibrateCameraRO(objectPoints, imagePoints, imageSize, iFixedPoint,
                        cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints,
                        s.flag | CALIB_USE_LU);
  • The function returns the average re-projection error. This number gives a good estimation of the precision of the found parameters, and it should be as close to zero as possible. Given the intrinsic, distortion, rotation, and translation matrices, we can calculate the error for one view by using cv::projectPoints to first transform the object points to image points. Then we calculate the absolute norm between what we got with our transformation and what the corner/circle finding algorithm produced. To find the average error, we calculate the arithmetical mean of the errors computed for all the calibration images.
static double computeReprojectionErrors( const vector<vector<Point3f> >& objectPoints,
                                         const vector<vector<Point2f> >& imagePoints,
                                         const vector<Mat>& rvecs, const vector<Mat>& tvecs,
                                         const Mat& cameraMatrix , const Mat& distCoeffs,
                                         vector<float>& perViewErrors, bool fisheye)
{
    vector<Point2f> imagePoints2;
    size_t totalPoints = 0;
    double totalErr = 0, err;
    perViewErrors.resize(objectPoints.size());
    for(size_t i = 0; i < objectPoints.size(); ++i )
    {
        if (fisheye)
        {
            fisheye::projectPoints(objectPoints[i], imagePoints2, rvecs[i], tvecs[i], cameraMatrix,
                                   distCoeffs);
        }
        else
        {
            projectPoints(objectPoints[i], rvecs[i], tvecs[i], cameraMatrix, distCoeffs, imagePoints2);
        }
        err = norm(imagePoints[i], imagePoints2, NORM_L2);
        size_t n = objectPoints[i].size();
        perViewErrors[i] = (float) std::sqrt(err*err/n);
        totalErr        += err*err;
        totalPoints     += n;
    }
    return std::sqrt(totalErr/totalPoints);
}

4.7 Results

Let there be an input chessboard pattern of size 9 × 6. I used an AXIS IP camera to create a couple of snapshots of the board and saved them into a VID5 directory. I put this inside the images/CameraCalibration folder of my working directory and created the following VID5.XML file that describes which images to use:

<?xml version="1.0"?>
<opencv_storage>
<images>
images/CameraCalibration/VID5/xx1.jpg
images/CameraCalibration/VID5/xx2.jpg
images/CameraCalibration/VID5/xx3.jpg
images/CameraCalibration/VID5/xx4.jpg
images/CameraCalibration/VID5/xx5.jpg
images/CameraCalibration/VID5/xx6.jpg
images/CameraCalibration/VID5/xx7.jpg
images/CameraCalibration/VID5/xx8.jpg
</images>
</opencv_storage>

So how do we generate the XML file above?

// genImageXML.cpp -- write the image file names into an OpenCV XML list
#include <opencv2/opencv.hpp>

#include <string>
#include <vector>

using namespace std;
using namespace cv;

int main()
{
	string pattern = "D:/code/PycharmProjects/learn_azureKinect/chessboard_5x7_30mm/*.jpg";
	vector<string> fn;
	glob(pattern, fn, false);   // collect all file names matching the pattern

	FileStorage fs("./VID5.xml", FileStorage::WRITE);
	fs << "images" << "[";
	for (const auto& name : fn)
		fs << name;
	fs << "]";
	fs.release();
	return 0;
}

Then pass images/CameraCalibration/VID5/VID5.xml as the input in the configuration file. Here's a chessboard pattern found during the runtime of the application:
[figure: detected chessboard pattern]
After applying the distortion removal we get:
[figure: image after distortion removal]
The same works for this asymmetrical circle pattern by setting the input width to 4 and height to 11. This time I used a live camera feed by specifying its ID ("1") for the input. Here's how the detected pattern looks:
[figure: detected asymmetrical circle pattern]
In both cases, in the specified output XML/YAML file you'll find the camera matrix and the distortion coefficients:

<camera_matrix type_id="opencv-matrix">
<rows>3</rows>
<cols>3</cols>
<dt>d</dt>
<data>
 6.5746697944293521e+002 0. 3.1950000000000000e+002 0.
 6.5746697944293521e+002 2.3950000000000000e+002 0. 0. 1.</data></camera_matrix>
<distortion_coefficients type_id="opencv-matrix">
<rows>5</rows>
<cols>1</cols>
<dt>d</dt>
<data>
 -4.1802327176423804e-001 5.0715244063187526e-001 0. 0.
 -5.7843597214487474e-001</data></distortion_coefficients>

Add these values as constants to your program, call the cv::initUndistortRectifyMap and the cv::remap functions to remove the distortion, and enjoy distortion-free inputs from cheap and low-quality cameras.
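
A minimal sketch of that last step, hard-coding the matrices from the XML output above (the input file name is a placeholder):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/calib3d.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    using namespace cv;
    // Constants copied from the calibration output above.
    Mat cameraMatrix = (Mat_<double>(3, 3) <<
        657.46697944293521, 0., 319.5,
        0., 657.46697944293521, 239.5,
        0., 0., 1.);
    Mat distCoeffs = (Mat_<double>(5, 1) <<
        -0.41802327176423804, 0.50715244063187526, 0., 0.,
        -0.57843597214487474);

    Mat view = imread("input.jpg"); // placeholder file name
    if (view.empty())
        return -1;

    // Compute the undistortion maps once, then remap each incoming frame.
    Mat map1, map2, rview;
    initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
        getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, view.size(), 1, view.size(), 0),
        view.size(), CV_16SC2, map1, map2);
    remap(view, rview, map1, map2, INTER_LINEAR);

    imshow("undistorted", rview);
    waitKey();
    return 0;
}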

BONUS

Below, we implement monocular camera calibration with Python + OpenCV.

We will now perform the calibration process using OpenCV. To determine the 9 parameters (4 camera intrinsics and 5 distortion coefficients), we need a number of distorted chessboard images: a dataset of at least 10 images, taken with the camera to be calibrated, is recommended.

We start with the imports and set up data for later use in the calibration process. OpenCV's cornerSubPix() function, which performs a high-accuracy search for corner points in the chessboard images, requires a termination criterion. A set of object points is needed to tell OpenCV that we are using an 8 × 8 chessboard (7 × 7 inner corners) as the calibration target.

import cv2
import numpy as np
import glob
 
# Termination criteria for cornerSubPix()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
 
# Create and fill the object points for an 8x8 chessboard (7x7 inner corners)
objp = np.zeros((7 * 7, 3), np.float32)
objp[:,:2] = np.mgrid[0:7, 0:7].T.reshape(-1, 2)
 
# Arrays to store the object points and image points
objpoints = [] # 3d points in real-world space
imgpoints = [] # 2d points in the image plane

With the initial variables set up, we can iterate through the calibration dataset of distorted chessboard images and apply OpenCV's findChessboardCorners() function to locate the corners in each image.

# Collect the image file names in the folder
images = glob.glob('calibration*.png')
 
# Iterate through the images and find the chessboard corners
for fname in images:
    print(fname)
    image = cv2.imread(fname)
    gray = cv2.split(image)[0]  # use the first (blue) channel as the grayscale image
    ret, corners = cv2.findChessboardCorners(gray, (7, 7), None)
    if ret == True:
        objpoints.append(objp)
        corners_SubPix = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        imgpoints.append(corners_SubPix)
        print("Return value: ", ret)
        img = cv2.drawChessboardCorners(gray, (7, 7), corners_SubPix, ret)
        cv2.imshow("Corners", img)
        cv2.waitKey(500)
cv2.destroyAllWindows()

The figure below shows an example output of the findChessboardCorners() function: OpenCV has successfully detected all the inner corners of a distorted chessboard image, which can subsequently be used to perform the calibration.
[figure: corners detected by findChessboardCorners()]
The following code uses OpenCV's calibrateCamera() function to determine the camera intrinsic matrix and the distortion coefficients. The FileStorage API is used to save the parameters to an XML file.

# Camera calibration: cameraMatrix = 3x3 intrinsics; distCoeffs = 5x1 vector of distortion coefficients
# gray.shape[::-1] swaps the single-channel image shape from (h, w) to (w, h) (numpy to OpenCV order)
retval, cameraMatrix, distCoeffs, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
 
# Save the camera intrinsics and distortion coefficients
fs = cv2.FileStorage("intrinsics.xml", cv2.FileStorage_WRITE)
fs.write("image_width", gray.shape[1])
fs.write("image_height", gray.shape[0])
fs.write("camera_matrix", cameraMatrix)
fs.write("distortion_coefficients", distCoeffs)
fs.release()

Inspecting the XML file in a text editor confirms the 4 camera intrinsic values and the 5 distortion coefficients.
[figure: intrinsics.xml contents]
Having successfully obtained our parameters, we can load a new distorted image and apply OpenCV's undistortion functions to straighten it. The image used is taken from a kitchen countertop sample that needs to be undistorted before a visual inspection function is applied; the barrel distortion is clearly visible in the original image.
[figure: distorted sample image]
The code below refines the camera matrix for the image to be processed, then computes and applies the transformation.

# Load the distorted image, keeping all 3 channels
image_dist = cv2.imread('./sample.png')
print("Distorted image shape: ", image_dist.shape)
cv2.imshow("Distorted Image", image_dist)
cv2.waitKey(0)
 
# Compute a refined camera matrix for the given scaling factor and get the valid ROI
h, w = image_dist.shape[:2]
cameraMatrixNew, roi = cv2.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, (w, h), 1, (w, h))
 
# Compute the transformation between the original and the rectified image; the mapping is stored in map1 and map2
map1, map2 = cv2.initUndistortRectifyMap(cameraMatrix, distCoeffs, None, cameraMatrixNew, (w, h), cv2.CV_32FC1)
 
# Map each pixel of the original image to its position in the rectified image;
# map1 and map2 are the maps computed by cv2.initUndistortRectifyMap() above.
image_undist = cv2.remap(image_dist, map1, map2, cv2.INTER_LINEAR)
cv2.imshow("Undistorted Image Full", image_undist)
cv2.waitKey(0)

Alternatively, you can use the undistort function, since internally it calls initUndistortRectifyMap followed by remap.

The figure below shows the transformed, undistorted output. The black patches at the edges are a by-product of the remapping process, as pixels are relocated to achieve straightness.
[figure: undistorted output with black patches at the edges]
OpenCV already anticipated these black patches at the image edges: the getOptimalNewCameraMatrix() call above returned a valid ROI (region of interest), the largest possible rectangular image after the transformation.

# crop undistorted image to valid ROI
print("Valid ROI: ", roi)
x, y, w, h = roi
image_undist = image_undist[y:y+h, x:x+w]
cv2.imshow("Undistorted Image Valid ROI", image_undist)
cv2.waitKey(0)

The resulting image is the cropped valid ROI, which can subsequently be used as input to a visual inspection algorithm.
[figure: cropped valid ROI]

References

https://docs.opencv.org/4.x/d4/d94/tutorial_camera_calibration.html
