
Stereo camera calibration with MATLAB and matching the stereo parameters for ORB-SLAM

This post follows on from "Using a USB stereo camera module with ROS".

Camera Calibration

This post uses MATLAB R2021a.
1. Prepare a black-and-white checkerboard.
2. Capture calibration images.
Start the stereo camera and photograph the checkerboard several times at different poses; around 20 image pairs is usually enough. Save the left-eye and right-eye images into two separate folders.

If your stereo camera outputs a single combined image, you can split it into left and right halves with the MATLAB code below.

clear;
for i = 1:34
    name=['IMG_00',num2str(i,"%02d"),'.jpg'];
    A=imread(['C:\Users\daybeha\Pictures\',name]); % read the original image
    [m ,n,~]=size(A); % size of the original 3-channel image
    n1=n/2;

    % picture_1=zeros(m,n1,3);
    % picture_2=zeros(m,n1,3);
    picture_1=A(:,1:n1,:);
    picture_2=A(:,n1+1:n,:);

    cd('C:\Users\daybeha\Desktop\相机标定\1_left'); % save the cropped left and right images to their folders
    imwrite(picture_1, name);
    cd('C:\Users\daybeha\Desktop\相机标定\2_right');
    imwrite(picture_2, name);
end 
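If you would rather do the split in Python, here is a rough OpenCV equivalent of the script above; the folder names are placeholders and a side-by-side frame layout is assumed.

import cv2
import glob
import os

src_dir, left_dir, right_dir = "raw", "1_left", "2_right"   # placeholder paths
os.makedirs(left_dir, exist_ok=True)
os.makedirs(right_dir, exist_ok=True)

for path in sorted(glob.glob(os.path.join(src_dir, "*.jpg"))):
    img = cv2.imread(path)                  # one side-by-side frame, H x W x 3
    h, w = img.shape[:2]
    name = os.path.basename(path)
    cv2.imwrite(os.path.join(left_dir, name), img[:, : w // 2])    # left half
    cv2.imwrite(os.path.join(right_dir, name), img[:, w // 2 :])   # right half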

3. Install the MATLAB calibration toolbox.
Click APPS -> Get More Apps.
Search for Computer Vision Toolbox and install it.
Once it is installed, go back to the APPS tab, find Stereo Camera Calibrator, and open it.
4. Run the calibration.
Click Add Images.
Select the folders containing your left and right checkerboard images (the third field is the true side length of each square on your checkerboard, in mm).
MATLAB then detects the corners automatically and discards images where the pattern is not fully detected.

An explanation of the calibration interface:
1. Whether to use previously calibrated camera parameters; if you do, load them where arrow 5 points.
2. The representation of the distortion parameters (the 3-coefficient model is usually more accurate).
3. Whether to assume the x and y axes of the image coordinate system are perpendicular; if checked, the intrinsic matrix changes slightly, which you can observe for yourself.
4. Whether to estimate tangential distortion.
5. Optimization presets (an initial intrinsic matrix).
6. Start the calibration.

Click 6 (Calibrate) to start the calibration.
The result after calibration:

1. The calibration error of each image pair; click with the mouse or drag the red line above to select pairs, and delete the ones with large errors.
2. The distribution of relative poses between the camera and the calibration board; the wider the spread, the better.
3. As shown, the samples should vary in both distance and angle. That said, if the board is too far away the corners become hard to see, which is a separate issue.

Export the parameters and save them.

Exporting to a yaml file (you can also paste the values by hand)

Back in the MATLAB workspace you can see the stereoParams variable.
Before exporting to yaml we need a YAML library for MATLAB.

Two download links:
Link 1
Link 2 (after unzipping, just put it in MATLAB's current working directory)

Then run the following code to generate the mystereo.yaml file:
export_yaml.m

% Save the parameters to yaml; note that the relative camera pose is inverted
height = stereoParams.CameraParameters1.ImageSize(1);
width = stereoParams.CameraParameters1.ImageSize(2);
RD1 = stereoParams.CameraParameters1.RadialDistortion;
TD1 = stereoParams.CameraParameters1.TangentialDistortion;
D1 = [RD1(1), RD1(2), TD1(1), TD1(2), RD1(3)];
K1 = stereoParams.CameraParameters1.IntrinsicMatrix;
K1 = K1(:);

% height2 = stereoParams.CameraParameters2.ImageSize(1);
% width2 = stereoParams.CameraParameters2.ImageSize(2);
RD2 = stereoParams.CameraParameters2.RadialDistortion;
TD2 = stereoParams.CameraParameters2.TangentialDistortion;
D2 = [RD2(1), RD2(2), TD2(1), TD2(2), RD2(3)];
K2 = stereoParams.CameraParameters2.IntrinsicMatrix;
K2 = K2(:);

rot = stereoParams.RotationOfCamera2;
trans = stereoParams.TranslationOfCamera2;
% take the inverse of the pose
T=eye(4);
T(1:3,1:3)=rot;
T(1:3,4)=trans;
T=inv(T);
rot=T(1:3,1:3);
rot = rot(:);
% trans=T(1:3,4);

T = T';
T = T(:);

fx = stereoParams.CameraParameters2.IntrinsicMatrix(1,1);
baseline = norm(stereoParams.TranslationOfCamera2)/1000;    % unit: m
bf = fx*baseline;

X = struct('width',width, 'height',height,'K1',K1 ,'D1', D1, ...
    'K2',K2 ,'D2',D2, ...
    'rot',rot,'trans',trans,'T', T, 'baseline',baseline, 'bf',bf);

% manually create a result folder first to save into
fileName = 'mystereo.yaml';
cd('/home/daybeha/Documents/Sensors/camera_calib/Matlab');

YAML.write(fileName, X); % save X to a yaml file
X = YAML.read(fileName); % load X back from the yaml file
disp(X)

Below is a brief rundown of the fields in stereoParams; see the official documentation for the details.
Camera extrinsics:
Rotation matrix (RotationOfCamera2, 3x3) + translation vector (TranslationOfCamera2, 1x3)

(both are expressed as camera 2 relative to camera 1)

Note: the rotation matrix must be transposed before use.

Camera intrinsics (IntrinsicMatrix; transpose it before use)
Radial distortion (RadialDistortion, [k1, k2, k3]) + tangential distortion (TangentialDistortion, [p1, p2]).
Pay close attention to the ordering of the parameters, i.e. [k1, k2, p1, p2, k3]; do not get it wrong, otherwise the subsequent stereo matching will be badly off.
Mean reprojection error
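To summarize the conventions above, here is a minimal NumPy sketch of the MATLAB-to-OpenCV conversion, using made-up example numbers rather than this calibration's actual output: transpose IntrinsicMatrix and RotationOfCamera2, reorder the distortion coefficients, and convert the translation to metres.

import numpy as np

K_matlab = np.array([[730.0,   0.0, 0.0],
                     [  0.0, 729.5, 0.0],
                     [633.3, 325.1, 1.0]])     # stereoParams.CameraParameters1.IntrinsicMatrix (example numbers)
K_opencv = K_matlab.T                          # OpenCV-style intrinsic matrix

k1, k2, k3 = 0.065, 0.120, -0.413              # RadialDistortion (example numbers)
p1, p2 = -0.0011, -0.0002                      # TangentialDistortion (example numbers)
D_opencv = np.array([k1, k2, p1, p2, k3])      # OpenCV order [k1, k2, p1, p2, k3]

R_matlab = np.eye(3)                           # RotationOfCamera2 (placeholder)
t_mm = np.array([-59.58, -0.06, -0.58])        # TranslationOfCamera2, in mm (placeholder)
R_opencv = R_matlab.T                          # transpose before use
t_m = t_mm / 1000.0                            # convert mm to m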

Evaluating the calibration results

Camera calibration and 3D reconstruction (4): evaluating the accuracy of a single-camera calibration

Generating a yaml file for ORB-SLAM2

First, take ORB-SLAM2's EuRoC.yaml as a reference:

%YAML:1.0
#------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#------------------------------------------------------------------------

# Camera calibration and distortion parameters (OpenCV): camera intrinsics
Camera.fx: 435.2046959714599
Camera.fy: 435.2046959714599
Camera.cx: 367.4517211914062
Camera.cy: 252.2008514404297
//
These correspond to the IntrinsicMatrix.
For example, mine is
[729.7486770458635, 0.0, 633.3497291775194]
[0.0, 729.497611002646, 325.11211658624217]
[0.0, 0.0, 1.0]

so the focal lengths and principal point become:
Camera.fx: 729.7486770458635
Camera.fy: 729.497611002646
Camera.cx: 633.3497291775194
Camera.cy: 325.11211658624217
//

Camera.k1: 0.0
Camera.k2: 0.0
Camera.p1: 0.0
Camera.p2: 0.0
//
Leave these at their defaults: the code already undistorts the images, so they are all 0.
//

# image size
Camera.width: 752
Camera.height: 480
//
Mine become:
Camera.width: 1280
Camera.height: 720

//
# Camera frames per second
Camera.fps: 30.0

# stereo baseline times fx
Camera.bf: 47.90639384423901
//
This parameter is a classic pitfall: it is the camera baseline multiplied by the focal length.
My baseline: norm(stereoParams.TranslationOfCamera2) = 59.576167656746300
My focal length: fx = 729.7486770458635
59.576167656746300 * 729.7486770458635 = 4.347562953097317e+04
The ORB-SLAM parameter file uses metres,
while the MATLAB calibration result is in millimetres,
so the value to enter is Camera.bf: 4.347562953097317e+01

//
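//
As a quick cross-check, a small sketch using the numbers above (not part of the original yaml):

t_mm = 59.576167656746300          # norm(stereoParams.TranslationOfCamera2), in mm
fx = 729.7486770458635
bf = fx * (t_mm / 1000.0)          # convert the baseline to metres first
print(bf)                          # ~43.48, the value for Camera.bf
//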
# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

# Close/Far threshold. Baseline times.
ThDepth: 35
//
The depth threshold. It is not an exact value, just a rough estimate; you can leave it unchanged. If you do want to change it, use the following formula:
Roughly estimate the maximum distance at which the camera still works well, say s = 10.
If fx = 100 and Camera.bf = 20,
then ThDepth = s*fx/Camera.bf = 10 * 100 / 20 = 50.
Plug your own parameters into the formula to get an approximate threshold.
//

#------------------------------------------------------------------------
# Stereo Rectification. Only if you need to pre-rectify the images.
# Camera.fx, .fy, etc must be the same as in LEFT.P
#------------------------------------------------------------------------
LEFT.height: 480
LEFT.width: 752
//
Adjust these to your own image size
//
# left image distortion parameters
LEFT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28340811, 0.07395907, 0.00019359, 1.76187114e-05, 0.0]
   //
   [k1, k2, p1, p2, k3]
   This is D1 from mystereo.yaml:
	D1: [0.06512161299595974, 0.11973407972347475, -0.0011078652775439795, -2.1852799094934536E-4,
  -0.41333551502669647]
   //
  
# left camera intrinsics
LEFT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [458.654, 0.0, 367.215, 0.0, 457.296, 248.375, 0.0, 0.0, 1.0]
   //
   This is K1 from mystereo.yaml:
   K1: [729.7486770458635, 0.0, 633.3497291775194, 0.0, 729.497611002646, 325.11211658624217, 0.0, 0.0, 1.0]
   //
   
# left camera rotation matrix
LEFT.R:  !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.999966347530033, -0.001422739138722922, 0.008079580483432283, 0.001365741834644127, 0.9999741760894847, 0.007055629199258132, -0.008089410156878961, -0.007044357138835809, 0.9999424675829176]
   //
   MATLAB gives the rotation of the right camera relative to the left camera, so in my opinion LEFT.R here should be the identity matrix [1, 0, 0, 0, 1, 0, 0, 0, 1], and RIGHT.R below should be rot from mystereo.yaml:
	rot: [0.9999045990168313, -4.565431820900782E-4, -0.01380523209920022, 4.957028071407323E-4, 0.9999958633653085, 0.0028332897836715284, 0.013803881472864573, -0.002839862777345364, 0.9999006890865157]
   //
   
# projection matrix
LEFT.P:  !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, 0,  0, 435.2046959714599, 252.2008514404297, 0,  0, 0, 1, 0]
//
The 3x4 projection matrix (P' = K(RP + t) = KTP):
I did not find this parameter in MATLAB, but the code below obtains R and P together (the variables are named after the yaml keys for readability; rename them before actually using the code):
        cv::Mat LEFT.R, RIGHT.R;
        cv::Mat LEFT.P, RIGHT.P, Q;
        cv::Size newImSize_;
        originalImSize_.width = LEFT.width;
        originalImSize_.height = LEFT.height;

        // R12 and t12 are the rotation and translation blocks of the T matrix in mystereo.yaml
        cv::stereoRectify(LEFT.K,LEFT.D,RIGHT.K,RIGHT.D,newImSize_,
                          R12, t12,
                          LEFT.R, RIGHT.R, LEFT.P, RIGHT.P,Q,
                          cv::CALIB_ZERO_DISPARITY,-1,newImSize_);
      //
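//
For reference, a Python sketch of the same stereoRectify call (my own illustration; K1/D1 are the example values quoted above, while K2, D2, R and t are placeholders for your own right-camera intrinsics and the rot/trans pose, with t in metres):

import cv2
import numpy as np

w, h = 1280, 720
K1 = np.array([[729.7486770458635, 0.0, 633.3497291775194],
               [0.0, 729.497611002646, 325.11211658624217],
               [0.0, 0.0, 1.0]])
D1 = np.array([0.06512161299595974, 0.11973407972347475,
               -0.0011078652775439795, -2.1852799094934536e-4,
               -0.41333551502669647])
K2, D2 = K1.copy(), D1.copy()            # placeholders: use your right-camera values
R = np.eye(3)                            # placeholder: rot (right camera w.r.t. left)
t = np.array([-0.0596, 0.0, 0.0])        # placeholder: trans, in metres

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, D1, K2, D2, (w, h), R, t, flags=cv2.CALIB_ZERO_DISPARITY, alpha=0)
# R1 -> LEFT.R, R2 -> RIGHT.R, P1 -> LEFT.P, P2 -> RIGHT.P
//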

RIGHT.height: 480
RIGHT.width: 752
RIGHT.D: !!opencv-matrix
   rows: 1
   cols: 5
   dt: d
   data: [-0.28368365, 0.07451284, -0.00010473, -3.555907e-05, 0.0]
RIGHT.K: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [457.587, 0.0, 379.999, 0.0, 456.134, 255.238, 0.0, 0.0, 1]
RIGHT.R:  !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [0.9999633526194376, -0.003625811871560086, 0.007755443660172947, 0.003680398547259526, 0.9999684752771629, -0.007035845251224894, -0.007729688520722713, 0.007064130529506649, 0.999945173484644]
RIGHT.P:  !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [435.2046959714599, 0, 367.4517211914062, -47.90639384423901, 0, 435.2046959714599, 252.2008514404297, 0, 0, 0, 1, 0]
//


The RIGHT camera settings are the same as LEFT; the only difference is the RIGHT.P parameter.
Pr in extrinsics.yml looks like this:
Pr: !!opencv-matrix
   rows: 3
   cols: 4
   dt: d
   data: [ 2.8559499458758660e+02, 0., 2.7029193305969238e+02,
       -3.9636548646706200e+04, 0., 2.8559499458758660e+02,
       2.8112063348293304e+02, 0., 0., 0., 1., 0. ]
Modify it, namely the 4th value in data, whose unit needs to be converted from mm to m.
So the value to fill into RIGHT.P is:
   data: [ 2.8559499458758660e+02, 0., 2.7029193305969238e+02,
       -3.9636548646706200e+01, 0., 2.8559499458758660e+02,
       2.8112063348293304e+02, 0., 0., 0., 1., 0. ]
The ORB Parameters are uncontroversial and fairly self-explanatory, so I will not go through them here.
//
#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3
Viewer.ViewpointX: 0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500

Generating a yaml file for ORB-SLAM3

ORB-SLAM3 simplifies the ORB-SLAM2 yaml: it undistorts the images internally and computes the related parameters itself, so you only need to provide the calibrated intrinsics plus the relative pose Stereo.T_c1_c2 (see the sketch below); there is no need to compute the P and R matrices separately.
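A minimal sketch with placeholder values (not output from this calibration) of assembling Stereo.T_c1_c2 from the rot/trans values in mystereo.yaml; whether the inverse is needed depends on your MATLAB version, as the comments in the export scripts note:

import numpy as np

rot = np.eye(3)                           # placeholder: 3x3 rot from mystereo.yaml
trans_m = np.array([0.0596, 0.0, 0.0])    # placeholder: trans from mystereo.yaml, in metres

T = np.eye(4)
T[:3, :3] = rot
T[:3, 3] = trans_m
T_c1_c2 = np.linalg.inv(T)                # drop the inverse if your export already stores the pose ORB-SLAM3 expects
print(T_c1_c2)                            # paste the rows into Stereo.T_c1_c2 (row-major data)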
Again, take ORB-SLAM3's EuRoC.yaml as a reference.

The content is largely the same, so I will not repeat the annotations:

%YAML:1.0

#--------------------------------------------------------------------------------------------
# System config
#--------------------------------------------------------------------------------------------

# When the variables are commented, the system doesn't load a previous session or not store the current one

# If the LoadFile doesn't exist, the system give a message and create a new Atlas from scratch
#System.LoadAtlasFromFile: "Session_MH01_MH02_MH03_Stereo60_Pseudo"

# The store file is created from the current session, if a file with the same name exists it is deleted
#System.SaveAtlasToFile: "Session_MH01_MH02_MH03_Stereo60_Pseudo"

#--------------------------------------------------------------------------------------------
# Camera Parameters. Adjust them!
#--------------------------------------------------------------------------------------------
File.version: "1.0"

# camera model: pinhole
Camera.type: "PinHole"

# Camera calibration and distortion parameters (OpenCV) 
Camera1.fx: 458.654
Camera1.fy: 457.296
Camera1.cx: 367.215
Camera1.cy: 248.375

Camera1.k1: -0.28340811
Camera1.k2: 0.07395907
Camera1.p1: 0.00019359
Camera1.p2: 1.76187114e-05

Camera2.fx: 457.587
Camera2.fy: 456.134
Camera2.cx: 379.999
Camera2.cy: 255.238

Camera2.k1: -0.28368365
Camera2.k2: 0.07451284
Camera2.p1: -0.00010473
Camera2.p2: -3.55590700e-05

Camera.width: 752
Camera.height: 480

# Camera frames per second 
Camera.fps: 20

# Color order of the images (0: BGR, 1: RGB. It is ignored if images are grayscale)
Camera.RGB: 1

Stereo.ThDepth: 60.0

# pose matrix of the form [R t; 0 1]
# corresponds to T in mystereo.yaml
Stereo.T_c1_c2: !!opencv-matrix
  rows: 4
  cols: 4
  dt: f
  data: [0.999997256477797,-0.002317135723275,-0.000343393120620,0.110074137800478,
         0.002312067192432,0.999898048507103,-0.014090668452683,-0.000156612054392,
         0.000376008102320,0.014089835846691,0.999900662638081,0.000889382785432,
         0,0,0,1.000000000000000]

#--------------------------------------------------------------------------------------------
# ORB Parameters
#--------------------------------------------------------------------------------------------

# ORB Extractor: Number of features per image
ORBextractor.nFeatures: 1200

# ORB Extractor: Scale factor between levels in the scale pyramid 	
ORBextractor.scaleFactor: 1.2

# ORB Extractor: Number of levels in the scale pyramid	
ORBextractor.nLevels: 8

# ORB Extractor: Fast threshold
# Image is divided in a grid. At each cell FAST are extracted imposing a minimum response.
# Firstly we impose iniThFAST. If no corners are detected we impose a lower value minThFAST
# You can lower these values if your images have low contrast			
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

#--------------------------------------------------------------------------------------------
# Viewer Parameters
#--------------------------------------------------------------------------------------------
Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1.0
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2.0
Viewer.CameraSize: 0.08
Viewer.CameraLineWidth: 3.0
Viewer.ViewpointX: 0.0
Viewer.ViewpointY: -0.7
Viewer.ViewpointZ: -1.8
Viewer.ViewpointF: 500.0
Viewer.imageViewScale: 1.0

A note on running ORB-SLAM3 v1.0:

If you hit a yaml-related error at runtime, check whether your yaml file contains any indentation that is not made of spaces, in particular the Tab key!
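A quick way to find such characters is a small script like the following (the file name is a placeholder; point it at your own yaml):

with open("my_orbslam3.yaml") as f:       # placeholder path
    for lineno, line in enumerate(f, 1):
        if "\t" in line:
            print(f"tab character on line {lineno}: {line.rstrip()}")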

Now let's test it.

Addendum, 2022-05-06

The conversion above is not very convenient. Since ROS's CameraInfo can compute the baseline and related parameters directly, here is a MATLAB-to-OpenCV workflow that does it in one pass, with no more manual copying.

First, write a matlab-to-opencv function so that the yaml files written from MATLAB are readable by OpenCV.

function matlab2opencv( variable, fileName)

[rows, cols] = size(variable);

% % Beware of Matlab's linear indexing
% variable = variable';
% 
% % Write mode as default
% if ( ~exist('flag','var') )
%     flag = 'w';
% end
% 
% if ( ~exist(fileName,'file') || flag == 'w' )
%     % New file or write mode specified
%     file = fopen( fileName, 'w'); % create the file in write mode if it does not exist
%     fprintf( file, '%%YAML:1.0\n');
% else
%     % Append mode
%     file = fopen( fileName, 'a'); % append if the file already exists
% end

file = fopen( fileName, 'a'); % append mode

if (rows == cols && rows == 1)
    fprintf( file, '%s: ', inputname(1));
else
    % Write variable header
    fprintf( file, '%s: !!opencv-matrix\n', inputname(1));
    fprintf( file, '    rows: %d\n', rows);
    fprintf( file, '    cols: %d\n', cols);
    fprintf( file, '    dt: d\n'); % double type
    fprintf( file, '    data: [ ');
end


% Write variable data
if (rows == cols && rows == 1)
    fprintf( file, '%d', variable);
else
    for i=1:rows*cols
%         fprintf( file, '%.16d', variable(i)); % 16 digits after the decimal point
        fprintf( file, '%d', variable(i));
        if (i == rows*cols), break, end
        if mod(i+1,4) == 0
            fprintf( file, ',\n        ');
        else
            fprintf( file, ', ');
        end
    end
end


if (rows == cols && rows == 1)
    fprintf( file, '\n');
else
    fprintf( file, ']\n');
end

fclose(file);

Then, after calibrating with the method above, run the following code to generate the mystereo.yaml file:

% Use the matlab2opencv function to extract the required values from the MATLAB calibration result and save them to the corresponding yml file (in the order the OpenCV functions expect), so that OpenCV can load them directly with cvLoad etc.

cd('/home/daybeha/Documents/Sensors/camera_calib/Matlab'); % change to the directory with the stereo calibration results
load stereoParams.mat; % load the calibration result

% Save the parameters to yaml; note that the relative camera pose is inverted
height = stereoParams.CameraParameters1.ImageSize(1);
width = stereoParams.CameraParameters1.ImageSize(2);
RD1 = stereoParams.CameraParameters1.RadialDistortion;
TD1 = stereoParams.CameraParameters1.TangentialDistortion;
D1 = [RD1(1), RD1(2), TD1(1), TD1(2), RD1(3)];
K1 = stereoParams.CameraParameters1.IntrinsicMatrix;
% K1 = K1(:);

% height2 = stereoParams.CameraParameters2.ImageSize(1);
% width2 = stereoParams.CameraParameters2.ImageSize(2);
RD2 = stereoParams.CameraParameters2.RadialDistortion;
TD2 = stereoParams.CameraParameters2.TangentialDistortion;
D2 = [RD2(1), RD2(2), TD2(1), TD2(2), RD2(3)];
K2 = stereoParams.CameraParameters2.IntrinsicMatrix;
% K2 = K2(:);

rot = stereoParams.RotationOfCamera2;
trans = stereoParams.TranslationOfCamera2/1000;
trans = trans';

% if the undistorted images displayed with OpenCV look wrong, your MATLAB version may be newer and you should not take the inverse here
% take the inverse
T=eye(4);
T(1:3,1:3)=rot;
T(1:3,4)=trans;
T=inv(T);
rot=T(1:3,1:3);
T = T';



fx = stereoParams.CameraParameters2.IntrinsicMatrix(1,1);
baseline = norm(stereoParams.TranslationOfCamera2)/1000;    % unit: m
bf = fx*baseline;



fileName = 'mystereo.yaml';
file = fopen( fileName, 'w'); % create the file in write mode if it does not exist
fprintf( file, '%%YAML:1.0\n');

% save each matrix to the yml file with matlab2opencv
matlab2opencv(width,fileName); % matlab2opencv.m must be in the same folder as this script or already on the MATLAB path
matlab2opencv(height,fileName);
matlab2opencv(K1,fileName);
matlab2opencv(D1,fileName);
matlab2opencv(K2,fileName);
matlab2opencv(D2,fileName);
matlab2opencv(rot,fileName);
matlab2opencv(trans,fileName);
matlab2opencv(T,fileName);
matlab2opencv(baseline,fileName);
matlab2opencv(bf,fileName);

Note: check whether T needs to be inverted!

Example mystereo.yaml file:

%YAML:1.0
width: 1280
height: 720
K1: !!opencv-matrix
    rows: 3
    cols: 3
    dt: d
    data: [7.345998e+02, 0, 6.317633e+02,
        0, 7.348341e+02, 3.183312e+02, 0,
        0, 1]
D1: !!opencv-matrix
    rows: 1
    cols: 5
    dt: d
    data: [1.203863e-01, -1.229253e-01, -1.374659e-03,
        -1.154086e-03, -4.694199e-02]
K2: !!opencv-matrix
    rows: 3
    cols: 3
    dt: d
    data: [7.319376e+02, 0, 6.515635e+02,
        0, 7.326062e+02, 2.989715e+02, 0,
        0, 1]
D2: !!opencv-matrix
    rows: 1
    cols: 5
    dt: d
    data: [1.044217e-01, -4.885175e-02, -8.480718e-04,
        -1.273430e-03, -1.533052e-01]
rot: !!opencv-matrix
    rows: 3
    cols: 3
    dt: d
    data: [9.999588e-01, 6.880603e-04, 9.052271e-03,
        -6.845675e-04, 9.999997e-01, -3.889435e-04, -9.052536e-03,
        3.827306e-04, 9.999590e-01]
trans: !!opencv-matrix
    rows: 3
    cols: 1
    dt: d
    data: [-5.962796e+01, -6.345052e-02, -5.821155e-01]
T: !!opencv-matrix
    rows: 4
    cols: 4
    dt: d
    data: [9.999588e-01, -6.845675e-04, -9.052536e-03,
        5.962019e+01, 6.880603e-04, 9.999997e-01, 3.827306e-04,
        1.047009e-01, 9.052271e-03, -3.889435e-04, 9.999590e-01,
        1.121835e+00, 0, 0, 0,
        1]
baseline: 5.963084e-02
bf: 4.364605e+01
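To act on the note above about inverting T, here is a small check (a sketch of my own; it assumes mystereo.yaml is in OpenCV FileStorage format like the example above and that rot, trans and T use the same length unit). It reports whether the stored T equals [rot | trans] or its inverse:

import cv2
import numpy as np

fs = cv2.FileStorage("mystereo.yaml", cv2.FILE_STORAGE_READ)
rot = fs.getNode("rot").mat()
trans = fs.getNode("trans").mat().reshape(3)
T = fs.getNode("T").mat()
fs.release()

Rt = np.eye(4)
Rt[:3, :3] = rot
Rt[:3, 3] = trans
print("T == [R|t]      :", np.allclose(T, Rt, atol=1e-3))
print("T == inv([R|t]) :", np.allclose(T, np.linalg.inv(Rt), atol=1e-3))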

Then, since I do not know how to compute the post-rectification pose matrices in MATLAB,
we go back to C++ and use OpenCV to compute the rectified R and P matrices and convert them into the CameraInfo file format:
compute_P.cpp

#include <iostream>

#include <opencv2/opencv.hpp>
#include <fstream>


using namespace std;
using namespace cv;

int img_cols = 1280;  // camera resolution
int img_rows = 720;
Size imageSize = Size(img_cols, img_rows);

Rect validROIL, validROIR;// after rectification the images are cropped; validROI is the cropped region, inside which all pixels are valid

Mat mapLx, mapLy, mapRx, mapRy;
Mat Rl, Rr, Pl, Pr, Q;              // rectification rotations R, projection matrices P, reprojection matrix Q

Mat cameraMatrixL, cameraMatrixR, distCoeffL, distCoeffR;
// Mat cameraMatrixL = (Mat_<double>(3, 3) << 458.654, 0.0, 367.215, 0.0, 457.296, 248.375,
//         0.0, 0.0, 1.0);
// Mat distCoeffL = (Mat_<double>(5, 1) << -0.28340811, 0.07395907, 0.00019359, 1.76187114e-05,0);

// Mat cameraMatrixR = (Mat_<double>(3, 3) << 457.587, 0.0, 379.999, 0.0, 456.134, 255.238,
//         0.0, 0.0, 1.0);
// Mat distCoeffR = (Mat_<double>(5, 1) << -0.28368365, 0.07451284, -0.00010473, -3.55590700e-05,0);

//the R, t between the left and right cameras can be obtained from stereoCalibrate() or the MATLAB calibration toolbox
Mat trans, R;
// Mat trans = (Mat_<double>(3, 1) << -59.49161163342028, 0.010760053363461395, -1.2873374693176298); // translation vector t
// Mat R = (Mat_<double>(3, 3) << 0.9999522267540742, -6.925008919407085E-4, -0.009750110362602303,
//         6.897833651297106E-4, 0.9999997223155306, -2.820775973100296E-4,
//         0.009750302994135816, 2.753386576112956E-4, 0.9999524267584664); // rotation matrix R

bool loadAndWriteCamInfo(std::string filename)
{
    // Load settings related to stereo calibration
    cv::FileStorage fsSettings(filename, cv::FileStorage::READ);
    if(!fsSettings.isOpened())
    {
        cerr << "ERROR: Wrong path to settings" << endl;
        return false;
    }

    img_cols = fsSettings["width"];
    img_rows = fsSettings["height"];
    imageSize = Size(img_cols, img_rows);

    fsSettings["K1"] >> cameraMatrixL;
    fsSettings["K2"] >> cameraMatrixR;
    fsSettings["D1"] >> distCoeffL;
    fsSettings["D2"] >> distCoeffR;

    fsSettings["rot"] >> R;
    fsSettings["trans"] >> trans;

    cout << "width: " << img_cols << endl
         << "height: " << img_rows << endl
         << "K1: " << cameraMatrixL << endl
         << "K2: " << cameraMatrixR << endl
         << "D1: " << distCoeffL << endl
         << "D2: " << distCoeffR << endl
         << "rot: " << R << endl
         << "trans: " << trans << endl;

    /// 3. Compute the undistortion and rectification parameters
            //    Rodrigues(rec, R); // Rodrigues transform
            // After stereo calibration, call OpenCV's stereoRectify to obtain the rectification rotation matrices R, the projection matrices P and the reprojection matrix Q
            // flags: either 0 or CV_CALIB_ZERO_DISPARITY; with CV_CALIB_ZERO_DISPARITY the principal points of the two rectified images get the same pixel coordinates, otherwise the images may be shifted horizontally or vertically to maximize the useful area
            // alpha: scaling parameter. If negative or omitted, no scaling is done; 0 keeps only valid pixels (no black borders), 1 keeps the whole image, and values in between interpolate
    stereoRectify(cameraMatrixL, distCoeffL, cameraMatrixR, distCoeffR, imageSize, R, trans, Rl, Rr, Pl, Pr, Q, CALIB_ZERO_DISPARITY, 0, imageSize, &validROIL, &validROIR);
    // // Then call initUndistortRectifyMap to compute the undistortion + rectification maps
    // initUndistortRectifyMap(cameraMatrixL, distCoeffL, Rl, Pl, imageSize, CV_32FC1, mapLx, mapLy);
    // initUndistortRectifyMap(cameraMatrixR, distCoeffR, Rr, Pr, imageSize, CV_32FC1, mapRx, mapRy);
    // Using the maps above, remap to obtain the rectified images
    // cv::remap(left_img,left_img,mapLx,mapLy,cv::INTER_LINEAR);
    // cv::remap(right_img,right_img,mapRx,mapRy,cv::INTER_LINEAR);

    cout << "Rl: \n" << Rl << endl;
    cout << "Rr: \n" << Rr << endl;
    cout << "Pl: \n" << Pl << endl;
    cout << "Pr: \n" << Pr << endl;
    cout << "Q: \n" << Q << endl;


    ofstream file_left_info,file_right_info;
    file_left_info.open("left.yaml", ios::out | ios::trunc );
    file_right_info.open("right.yaml", ios::out | ios::trunc );
    file_left_info << "%YAML:1.0" << endl;
    file_left_info << "image_width: "  << img_cols << endl;
    file_left_info << "image_height: "  << img_rows << endl;
    file_left_info << "camera_name: "  << "my_stereo" << endl;
    file_left_info << "distortion_model: "  << "plumb_bob" << endl;
    file_left_info << "camera_matrix: " <<endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "3" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << cameraMatrixL.reshape(1,1)  <<endl;
    file_left_info << "distortion_coefficients: "  << endl
                   << "  rows: " << "1" <<endl
                   << "  cols: " << distCoeffL.size[1] <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << distCoeffL  <<endl;
    file_left_info << "rectification_matrix: "   << endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "3" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << Rl.reshape(1,1)  <<endl;

    file_left_info << "projection_matrix: "  << endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "4" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << Pl.reshape(1,1)  <<endl;

    file_right_info << "%YAML:1.0" << endl;
    file_right_info << "image_width: "  << img_cols << endl;
    file_right_info << "image_height: "  << img_rows << endl;
    file_right_info << "camera_name: "  << "my_stereo" << endl;
    file_right_info << "distortion_model: "  << "plumb_bob" << endl;
    file_right_info << "camera_matrix: " <<endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "3" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << cameraMatrixR.reshape(1,1)  <<endl;
    file_right_info << "distortion_coefficients: " << endl
                   << "  rows: " << "1" <<endl
                   << "  cols: " << distCoeffR.size[1] <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << distCoeffR  <<endl;
    file_right_info << "rectification_matrix: "   << endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "3" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << Rr.reshape(1,1)  <<endl;
    file_right_info << "projection_matrix: "  << endl
                   << "  rows: " << "3" <<endl
                   << "  cols: " << "4" <<endl
                   << "  dt: " << "d" << endl
                   << "  data: " << Pr.reshape(1,1)  <<endl;


    file_left_info.close();
    file_right_info.close();

    fsSettings.release();


    return true;
}

int main(int argc, char* argv[])
{
    //-- Zuo丶    Load Camera YAML
    loadAndWriteCamInfo("../mystereo.yaml");
}

CMakeLists.txt

cmake_minimum_required(VERSION 2.8.3)
project(StereoRectify)

IF(NOT CMAKE_BUILD_TYPE)
   set(CMAKE_BUILD_TYPE Release)
ENDIF()
MESSAGE("Build type: " ${CMAKE_BUILD_TYPE})

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall  -O3 -march=native")


# the whole project must use the same OpenCV version, including DBoW, the source code and the ROS side
# both OpenCV 3 and 4 run fine
find_package(OpenCV 4.2)
if(NOT OpenCV_FOUND)
   message(FATAL_ERROR "OpenCV >= 4.2 not found.")
endif()

add_executable(compute_P compute_P.cpp)
target_link_libraries(compute_P ${OpenCV_LIBS})

After building, run ./compute_P; it writes two files, left.yaml and right.yaml.
Example right.yaml:

%YAML:1.0
image_width: 640
image_height: 480
camera_name: my_stereo
distortion_model: plumb_bob
camera_matrix: 
  rows: 3
  cols: 3
  dt: d
  data: [274.2904, 0, 319.8862, 0, 274.6972, 240.9309, 0, 0, 1]
distortion_coefficients: 
  rows: 1
  cols: 4
  dt: d
  data: [0.004087323, -0.001881896, -9.958729e-05, -0.001207858]
rectification_matrix: 
  rows: 3
  cols: 3
  dt: d
  data: [0.9997110831949438, -0.01647589230136512, -0.01750128881156914, 0.01646703841877354, 0.9998641979190275, -0.0006498964951845734, 0.01750961972479053, 0.0003615143339274061, 0.9998466295009848]
projection_matrix: 
  rows: 3
  cols: 4
  dt: d
  data: [286.8489174684013, 0, 329.6765251159668, 0, 0, 286.8489174684013, 240.5877666473389, 0, 0, 0, 1, 0]

Based on left.yaml and right.yaml, use the following code to see what the rectified images look like:

import cv2
import numpy as np

# Load the images
img1 = cv2.imread('/home/daybeha/Documents/rosbag_tools/calibration/results/left/53.jpg')
img2 = cv2.imread('/home/daybeha/Documents/rosbag_tools/calibration/results/right/53.jpg')
h, w = img1.shape[:2]

# create cv2.FileStorage objects
fs_settings_l = cv2.FileStorage('./build/left.yaml', cv2.FILE_STORAGE_READ)
fs_settings_r = cv2.FileStorage('./build/right.yaml', cv2.FILE_STORAGE_READ)

# check that the files opened successfully
if not fs_settings_l.isOpened() or not fs_settings_r.isOpened():
    print("ERROR: Wrong path to settings")
    exit(-1)

# read the data from the files
# camera matrix
camera_matrix_l = fs_settings_l.getNode("camera_matrix").mat()
camera_matrix_r = fs_settings_r.getNode("camera_matrix").mat()

# distortion coefficients
dist_coeffs_l = fs_settings_l.getNode("distortion_coefficients").mat()
dist_coeffs_r = fs_settings_r.getNode("distortion_coefficients").mat()

# rotation matrix
R_l = fs_settings_l.getNode("rectification_matrix").mat()
R_r = fs_settings_r.getNode("rectification_matrix").mat()

# projection matrix
P_l = fs_settings_l.getNode("projection_matrix").mat()
P_r = fs_settings_r.getNode("projection_matrix").mat()

mapx_l, mapy_l = cv2.initUndistortRectifyMap(camera_matrix_l, dist_coeffs_l, R_l, P_l, (w, h), cv2.CV_32FC1)
mapx_r, mapy_r = cv2.initUndistortRectifyMap(camera_matrix_r, dist_coeffs_r, R_r, P_r, (w, h), cv2.CV_32FC1)

img1 = cv2.remap(img1, mapx_l, mapy_l, cv2.INTER_LINEAR)
img2 = cv2.remap(img2, mapx_r, mapy_r, cv2.INTER_LINEAR)
cv2.imshow("left", img1)
cv2.imshow("right", img2)
cv2.waitKey()
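As a further check, continuing the script above (img1 and img2 are already rectified at this point), draw a few horizontal lines across the pair; after a good rectification, corresponding points should lie on the same image row:

# continues from the script above
pair = np.hstack((img1, img2))
for y in range(0, pair.shape[0], 40):
    cv2.line(pair, (0, y), (pair.shape[1] - 1, y), (0, 255, 0), 1)
cv2.imshow("rectified pair", pair)
cv2.waitKey()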

References

ROS practice tutorial (8): using a USB vision sensor (camera) with ROS
MATLAB stereo calibration (detailed walkthrough)
Stereo camera calibration: from MATLAB to OpenCV
Stereo calibration: calibrating camera parameters automatically with MATLAB
Calibrating a camera with MATLAB to obtain the camera intrinsics
Stereo camera calibration and ORB-SLAM2 stereo parameters explained

Converting MATLAB stereo calibration results (.mat) to OpenCV CvMat
