Camera calibration technology

Off-line camera calibration

Off-line camera calibration techniques require accurate camera intrinsic and extrinsic parameters as the input and premise of the reconstruction algorithm. At present, the most widely used off-line calibration algorithm is the one proposed by Tsai in 1987 [Tsai 1987]. Tsai's method uses a three-dimensional calibration object carrying special, non-coplanar calibration marks to provide correspondences between image points and their three-dimensional space points, from which the calibration parameters are computed. Zhang proposed another practical method in 1998, which only requires images of a planar calibration pattern taken from at least two different viewpoints. The camera calibration toolbox from the California Institute of Technology [Bouguet 2007] implements both methods effectively and has been integrated into Intel's computer vision library OpenCV [OpenCV 2004]. Through such calibration algorithms, the projection matrix of the camera can be computed, providing three-dimensional measurement information about the scene. Even without the absolute translation, rotation, and scale of the real scene, measurement and reconstruction can be achieved up to a similarity transformation.
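The projection matrix mentioned above takes the standard pinhole form P = K[R | t], where K holds the intrinsic parameters and R, t the extrinsic ones. The following is a minimal numpy sketch of this model (the function names are illustrative, not taken from any of the cited toolboxes); in practice OpenCV's `cv2.calibrateCamera` estimates K, R, and t from planar-pattern views in the Zhang style.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Build P = K [R | t], mapping homogeneous 3D points to homogeneous
    image points under the pinhole camera model."""
    return K @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])

def project(P, X):
    """Project an (N, 3) array of 3D points to (N, 2) pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])   # homogeneous coordinates
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:]                # perspective division
```

For example, with focal length 800 and principal point (320, 240), a point on the optical axis at depth 2 projects exactly to the principal point.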

Online camera calibration

In many cases, such as when calibration equipment is unavailable or the camera parameters change continually, there are not enough data to support off-line calibration, and online camera calibration techniques are needed to reconstruct such scenes from multiple views. The main difference between the online and off-line calibration frameworks lies in how the camera is calibrated, that is, how its parameters are estimated. In most of the literature, online calibration is called self-calibration. Self-calibration methods can be roughly divided into two categories: self-calibration based on scene constraints and self-calibration based on geometric constraints.

① Self-calibration based on scene constraints

Appropriate scene constraints can often greatly simplify self-calibration. For example, the parallel lines that are ubiquitous in buildings and other man-made scenes provide vanishing-point and vanishing-line information along three mutually orthogonal directions, from which algebraic or numerical solutions for the camera intrinsic parameters can be derived [Caprile 1990]. Vanishing points can be found by voting and searching for maxima: Barnard used the Gaussian sphere to construct the solution space [Barnard 1983], and Quan, Lutton, Rother and others gave further optimization strategies [Quan 1989, Lutton 1994, Rother 2000]. The literature [Quan 1989] gives a direct algorithm for searching the solution space, and an improved algorithm by Heuvel adds an enforced orthogonality condition [Heuvel 1998]. Caprile gave a geometric method for estimating parameters from three orthogonal vanishing points, and Hartley used calibration curves to compute the focal length [Hartley 2003]. Liebowitz and others further constructed constraints on the absolute conic from the vanishing-point positions and solved for the calibration matrix by Cholesky decomposition [Liebowitz 1999].
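As a concrete instance of these vanishing-point constraints: if the camera is assumed to have zero skew, square pixels, and a known principal point p (say, the image center), then for vanishing points v1, v2 of two orthogonal scene directions the absolute-conic constraint reduces to (v1 - p)·(v2 - p) + f² = 0, so the focal length follows in closed form. A small sketch under exactly those assumptions (the function name is illustrative):

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, principal_point):
    """Recover focal length f from two vanishing points of orthogonal
    scene directions, assuming zero skew, unit aspect ratio, and a known
    principal point p: orthogonality implies (v1 - p).(v2 - p) + f^2 = 0."""
    p = np.asarray(principal_point, dtype=float)
    d = np.dot(np.asarray(v1, dtype=float) - p, np.asarray(v2, dtype=float) - p)
    if d >= 0.0:
        # the dot product must be negative for a real focal length
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return float(np.sqrt(-d))
```

For example, with f = 800 and principal point (320, 240), the orthogonal scene directions (1, 0, 1) and (-1, 0, 1) project to vanishing points (1120, 240) and (-480, 240), and the constraint recovers f = 800 exactly.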

② Self-calibration based on geometric constraints

Self-calibration based on geometric constraints does not need external scene constraints; it relies only on the internal geometric constraints among multiple views to complete the calibration task. The theory and algorithm of self-calibration using the absolute quadric were first put forward by Triggs [Triggs 1997]. Solving for camera parameters based on the Kruppa equations began with the work of Faugeras and Maybank [Faugeras 1992, Maybank 1992]. Hartley gave an alternative derivation of the Kruppa equations based on the fundamental matrix [Hartley 1997], and the literature [Sturm 2000] discussed the uncertainty of the Kruppa equations theoretically. Stratified (hierarchical) self-calibration techniques upgrade a projective reconstruction to a metric one [Faugeras 1992]. One of the main difficulties of self-calibration is that it cannot be applied without restriction to arbitrary image or video sequences: certain motion sequences and spatial feature distributions lead to degeneracies and singular solutions in the self-calibration framework. The literature [Sturm 1997] gives a detailed discussion and classification of these degeneracies; for the existence of some special solvable cases and their solutions, see the literature [Wiles 1996] and related work.