2.1. Introduction

Much work on camera calibration has been done in the photogrammetry community (see [3, 6] to cite a few) and, more recently, in computer vision ([12, 11, 33, 10, 37, 35, 22, 9] to cite a few). According to the dimension of the calibration objects, these techniques can be roughly classified into four categories.

3D reference object-based calibration. Camera calibration is performed by observing a calibration object whose geometry in 3D space is known with very good precision. Calibration can be done very efficiently [8]. The calibration object usually consists of two or three planes orthogonal to each other. Sometimes, a plane undergoing a precisely known translation is also used [33], which equivalently provides 3D reference points. This approach requires an expensive calibration apparatus and an elaborate setup.

2D plane-based calibration. Techniques in this category require observation of a planar pattern shown at a few different orientations [42, 31]. Unlike Tsai's technique [33], knowledge of the plane motion is not necessary. Because almost anyone can make such a calibration pattern, the setup for camera calibration is easy.

1D line-based calibration. Calibration objects in this category are composed of a set of collinear points [44]. As will be shown, a camera can be calibrated by observing such a 1D object moving around a fixed point, such as a string of balls hanging from the ceiling.

Self-calibration. Techniques in this category do not use any calibration object and can be considered a 0D approach because only image point correspondences are required. By simply moving a camera in a static scene, the rigidity of the scene provides, in general, two constraints [22, 21] on the camera's internal parameters per camera displacement, using image information alone. Therefore, if images are taken by the same camera with fixed internal parameters, correspondences between three images are sufficient to recover both the internal and external parameters, which allows us to reconstruct 3D structure up to a similarity [20, 17]. Although no calibration objects are necessary, a large number of parameters must be estimated, resulting in a much harder mathematical problem.
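The counting behind the claim that three images suffice can be made explicit (a standard argument, sketched here under the assumption of general motion so that the constraints are independent). With five unknown intrinsic parameters and two constraints per camera displacement, $n$ images taken by the same camera give

\[ 2\binom{n}{2} = n(n-1) \ge 5 \quad\Longrightarrow\quad n \ge 3, \]

so two images yield only two constraints, while three images yield six.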

Other techniques exist: vanishing points for orthogonal directions [4, 19] and calibration from pure rotation [16, 30].
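To make the plane-based (2D) approach concrete, the following is a minimal, self-contained numpy sketch of the homography constraints that Section 2.4 develops in detail; it is an illustration, not the author's implementation. All names and numbers (`K_true`, the sample rotations and translation) are assumptions chosen for the example, and the homographies are synthetic and noise-free, so a single linear solve recovers the intrinsics exactly.

```python
import numpy as np

# Illustrative ground-truth intrinsics: focal lengths, zero skew, principal point.
K_true = np.array([[800.0,   0.0, 320.0],
                   [  0.0, 780.0, 240.0],
                   [  0.0,   0.0,   1.0]])

def rotation(ax, ay):
    """Rotation about the x axis by ax, then about the y axis by ay (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [ 0,          1, 0         ],
                   [-np.sin(ay), 0, np.cos(ay)]])
    return Ry @ Rx

def v_ij(H, i, j):
    """Row vector v such that v . b = h_i^T B h_j, where
    b = (B11, B12, B22, B13, B23, B33) parameterizes the symmetric
    matrix B = K^-T K^-1 (the image of the absolute conic)."""
    hi, hj = H[:, i], H[:, j]
    return np.array([hi[0]*hj[0],
                     hi[0]*hj[1] + hi[1]*hj[0],
                     hi[1]*hj[1],
                     hi[2]*hj[0] + hi[0]*hj[2],
                     hi[2]*hj[1] + hi[1]*hj[2],
                     hi[2]*hj[2]])

# Each view of the plane Z=0 gives H = K [r1 r2 t] (up to scale), and hence
# two linear constraints on b: h1^T B h2 = 0 and h1^T B h1 = h2^T B h2.
rows = []
for ax, ay in [(0.3, 0.1), (-0.2, 0.4), (0.1, -0.3)]:   # three orientations
    R = rotation(ax, ay)
    t = np.array([0.1, -0.2, 2.0])
    H = K_true @ np.column_stack([R[:, 0], R[:, 1], t])
    rows.append(v_ij(H, 0, 1))
    rows.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))

# b spans the null space of the stacked constraint matrix.
_, _, Vt = np.linalg.svd(np.array(rows))
B11, B12, B22, B13, B23, B33 = Vt[-1]

# Closed-form recovery of the intrinsics from B (valid up to the scale of b).
v0    = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
lam   = B33 - (B13**2 + v0*(B12*B13 - B11*B23)) / B11
alpha = np.sqrt(lam / B11)
beta  = np.sqrt(lam * B11 / (B11*B22 - B12**2))
gamma = -B12 * alpha**2 * beta / lam
u0    = gamma * v0 / beta - B13 * alpha**2 / lam

print(round(alpha, 2), round(beta, 2), round(u0, 2), round(v0, 2))
# -> 800.0 780.0 320.0 240.0 (the true intrinsics)
```

With real images the homographies would be estimated from detected pattern points and contaminated by noise, so this linear solution serves only as the starting point for the nonlinear refinement described in Section 2.4.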

No single calibration technique is best for all conditions; the choice depends on the situation the user needs to deal with. A few recommendations follow:

- Calibration with apparatus versus self-calibration. Whenever possible, a camera should be precalibrated with a calibration apparatus. Self-calibration usually cannot achieve accuracy comparable to that of precalibration, because it must estimate a large number of parameters, resulting in a much harder mathematical problem. When precalibration is impossible (e.g., scene reconstruction from an old movie), self-calibration is the only choice.

- Partial versus full self-calibration. Partial self-calibration refers to the case where only a subset of the camera's intrinsic parameters is to be calibrated. Along the same line as the previous recommendation, partial self-calibration is preferred whenever possible because fewer parameters need to be estimated. Take, for example, 3D reconstruction with a camera with variable focal length: it is preferable to precalibrate the pixel aspect ratio and the pixel skew, leaving only the focal length to be estimated during self-calibration.

- Calibration with 3D versus 2D apparatus. Highest accuracy can usually be obtained by using a 3D apparatus, so it should be used when accuracy is indispensable and when it is affordable to make and use a 3D apparatus. According to feedback from computer vision researchers and practitioners around the world in the last couple of years, calibration with a 2D apparatus seems to be the best choice in most situations because of its ease of use and good accuracy.

- Calibration with 1D apparatus. This technique is relatively new, and it is hard for the moment to predict how popular it will be. It should, however, be especially useful for calibrating a camera network. To calibrate the relative geometry between multiple cameras as well as their intrinsic parameters, all of the cameras involved must simultaneously observe a number of points. This is hardly possible with a 3D or 2D calibration apparatus[1] if one camera is mounted at the front of a room while another is at the back. It is not a problem for 1D objects: we can, for example, use a string of balls hanging from the ceiling.

[1] An exception is when the apparatus is made transparent, but then the cost is much higher.

This chapter is organized as follows. Section 2.2 describes the camera model and introduces the concept of the absolute conic, which is important for camera calibration. Section 2.3 presents the calibration techniques using a 3D apparatus. Section 2.4 describes a calibration technique by observing a freely moving planar pattern (2D object). Its extension for stereo calibration is also addressed. Section 2.5 describes a relatively new technique that uses a set of collinear points (1D object). Section 2.6 briefly introduces the self-calibration approach and provides references for further reading. Section 2.7 concludes the chapter with a discussion on recent work in this area.
