Huajian DENG, Hao WANG, Xiaoya HAN, Yang LIU, Zhonghe JIN
1 Micro-Satellite Research Center, Zhejiang University, Hangzhou 310027, China
2 Zhejiang Key Laboratory of Micro-nano Satellite Research, Hangzhou 310027, China
3 Beijing Institute of Tracking and Telecommunications Technology, Beijing 100094, China
Abstract: Inadequate geometric accuracy of cameras is the main constraint to improving the precision of infrared horizon sensors with a large field of view (FOV). An enormous FOV with a blind area in the center greatly limits the accuracy and feasibility of traditional geometric calibration methods. A novel camera calibration method for infrared horizon sensors is presented and validated in this paper. Three infrared targets are used as control points, and the camera is mounted on a rotary table. As the table rotates, these control points become evenly distributed across the entire FOV. Compared with traditional methods that combine a collimator and a rotary table, which cannot effectively cover a large FOV and impose strict requirements on experimental equipment, this method is easier to implement at a low cost. A corresponding three-step parameter estimation algorithm is proposed to avoid precisely measuring the positions of the camera and the control points. Experiments were conducted with 10 infrared horizon sensors to verify the effectiveness of the calibration method. The results show that the proposed method is highly stable, and that its calibration accuracy is at least 30% higher than those of existing methods.
Key words: Infrared horizon sensor; Ultra-field infrared camera; Camera calibration
The infrared horizon sensor provides almost uninterrupted fine attitude knowledge at a relatively low cost (Mazzini, 2016; Nguyen et al., 2018; Modenini et al., 2020), and it is therefore widely used in various space missions (Deng et al., 2017; Gou and Cheng, 2018). To capture the full Earth in low orbit, an infrared horizon sensor equipped with an infrared panoramic annular lens (PAL) camera was designed and has been validated in orbit (Wang H et al., 2021). The PAL camera has a maximum field of view (FOV) of 180° and a ±30° circular blind area in the center (Niu et al., 2007). Its imaging diagram, when it is facing the Earth at an orbital altitude of 500 km, is shown in Fig. 1. Edge points of the Earth are extracted and reprojected to the camera frame to calculate the direction of the Earth's center. Clearly, the reprojection process depends on the geometric calibration accuracy of the infrared PAL camera, so a highly accurate camera calibration of the infrared horizon sensor is essential.
Fig. 1 Imaging diagram of the horizon sensor
Cameras with a super large FOV, including the PAL camera, are also called ultra-field cameras (Zhang S et al., 2020b). The most widely used calibration method for ultra-field infrared cameras is the infrared plane calibration board (IPCB) method, which uses IPCB pictures at different attitudes and positions to resolve the camera distortion parameters (Kannala and Brandt, 2006; Scaramuzza et al., 2006). Because infrared cameras have a specific detection range, many types of IPCBs have been designed to provide higher-contrast infrared features and achieve more accurate control point positioning (Sheng et al., 2010; Vidas et al., 2012; Dias et al., 2013; Zhang Y et al., 2013; Shibata et al., 2017; Usamentiaga et al., 2017; Li XY et al., 2019). Generally, the IPCB method performs well when calibrating ultra-field infrared cameras (Chen et al., 2019; Wang ZA et al., 2020; Zhang S et al., 2020a), but several challenges remain when it is applied to the camera calibration of infrared horizon sensors.
Normally, an IPCB with large chessboard squares is needed to deal with an ultra-field infrared camera of low resolution (Chen et al., 2019). The IPCB must surround the central area of the ultra-field infrared camera so that the control points can cover the camera's entire FOV. However, as shown in Fig. 1, the large blind area of the infrared PAL camera causes a discontinuity in the FOV, which makes placing the IPCB difficult. In particular, it is almost impossible to place conventional IPCBs in the narrow areas above and below the circular blind area. In this case, the distortion parameters will fit only the image area covered by the control points rather than the entire FOV, which is unacceptable in our application.
Another standard calibration method for high-precision cameras uses a rotary table and a collimator, and is widely applied in star tracker calibration tasks (Liebe, 2002; Sun et al., 2013; Zhang H et al., 2017; Fan et al., 2020). Star trackers, which are the most accurate attitude measurement devices on satellites, are equipped with a narrow-field visible light camera. Wei et al. (2014) proposed a calibration method based on integrated modeling of intrinsic and extrinsic parameters to calibrate star trackers. This method is insensitive to errors incurred in installation and alignment, which makes it appealing for calibrating infrared horizon sensors. However, several problems arise in applying this method to infrared horizon sensors.
The camera should be fully covered by parallel light when the collimator method is applied. However, this is difficult to achieve when calibrating cameras with a large FOV, because a small misalignment between the camera and the center of the rotary table may leave the camera outside the parallel light coverage. Thus, either the camera must be placed exactly at the center of the rotary table, or the infrared collimator must have a sufficiently large aperture. Furthermore, the frame of the rotary table must not block the camera's FOV. These requirements are too strict to be practical. Therefore, in our work, we use multiple infrared targets instead of the traditional collimator. Because it is only necessary to ensure that the infrared targets are within the camera's FOV, the additional requirements on the experimental equipment no longer exist. The targets can easily cover the infrared horizon sensor's super-large FOV at a low cost. In addition, by using multiple targets, we can obtain enough data in less experimental time, which means that higher accuracy can be achieved efficiently.
A complete model of the camera's imaging process during rotation is established to obtain the required camera parameters. By accounting for the distance between the camera and the center of the rotary table, the proposed model describes the effect of the camera's motion on the imaging process when rotating. In principle, the positions of the camera and the infrared targets are required as known parameters in our method and would have to be precisely measured beforehand. Because measuring these parameters directly is troublesome, a parameter estimation algorithm is introduced to simplify this operation.
Many existing algorithms solve similar parameter estimation problems in star tracker calibration tasks (Li YT et al., 2014; Wei et al., 2014; Zhang CF et al., 2018; Ye et al., 2019). These algorithms rely on the close connection between the ideal model and the distorted model of the star tracker's camera. However, the ideal model of an ultra-field camera varies with its design, and the widely used distorted model (Scaramuzza et al., 2006) is not directly related to the ideal model, which greatly limits the universality of these algorithms. They must be adapted for use in camera calibration of infrared horizon sensors. Inspired by the existing two-step algorithm (Wei et al., 2014), by far the most widely used algorithm, a three-step parameter estimation algorithm is proposed to deal with the unknown parameters. The ideal imaging model and the distorted imaging model of the ultra-field camera are developed for preliminary parameter estimation and optimal parameter estimation, respectively. The connection between the two models is established using polynomial fitting. As a result, the required parameters can be estimated with high accuracy and robustness.
Compared with traditional camera calibration methods, the proposed method has fewer requirements for experimental equipment, wider applicability, and higher accuracy. It effectively solves the problem of camera calibration of infrared Earth sensors with a large FOV. Experimental results validate the excellent robustness, accuracy, and stability of the proposed method.
The infrared camera calibration system consists of a two-axis rotary table with a position accuracy of ±3'', a computer, the infrared cameras to be calibrated, and infrared targets. The setup of the calibration system is shown in Fig. 2. The infrared targets and the two-axis rotary table are placed on a stable platform that is isolated from vibration. The infrared camera is mounted on the two-axis rotary table and points at the infrared targets. The two-axis rotary table drives the infrared camera to rotate to different angles, and the camera takes pictures of the infrared targets at each pose. The proposed method places no additional requirements on the structure of the rotary table: the camera can be mounted anywhere on the rotary table, and the targets can easily cover the camera's large FOV as the rotary table rotates.
Fig. 2 Setup of the infrared camera calibration system
A comparison of the calibration methods using a collimator and infrared targets is shown in Fig. 3, revealing the superiority of the infrared target method. As shown in Fig. 2, the internal frame of the rotary table (P) is established with the rotation center (O) of the rotary table as the origin, the inner axis of the rotary table as the Z axis (Z), and the outer axis as the X axis (X). The Y axis (Y) of frame P is normal to the X-Z plane. Assume that the camera's optical center Oc is on the Z axis, and that the camera's optical axis is parallel to the Z axis. Let the rotary table rotate around the X axis at an angle of W1. To ensure that the FOV of the ultra-field infrared camera is not blocked by the rotary table itself, there may be a large distance between the optical center of the camera and the rotation center of the rotary table. Thus, as shown in Fig. 3, the position of Oc changes from Oc1 to Oc2 when the rotary table rotates.
In the collimator method, the angle change W2 measured by the camera is equal to the rotation angle W1 of the rotary table. As shown in Fig. 3, the aperture of the collimator should be large enough, and Oc and O should coincide as closely as possible; otherwise, the camera will leave the parallel light coverage. This places high demands on the experimental equipment and greatly limits the versatility of the collimator method. In the infrared target method, however, the angle change W3 of the infrared target measured by the camera is equal to W1 + W4, which is affected by the position change of Oc. Therefore, the infrared target method can use the camera's motion to cover the camera's full FOV more easily, rather than being disturbed by that motion as in the collimator method. In summary, the infrared target solution is less demanding on the experimental equipment, more widely adaptable, and particularly suitable for the calibration of ultra-field cameras.
Fig. 3 Comparison of calibration methods using a collimator (top) and infrared targets (bottom)
The specific experimental procedure is as follows. First, the rotary table and infrared targets are adjusted so that the infrared targets are directly in front of the camera. Then the rotary table rotates 90° around the X axis in 5° intervals; at each X axis interval, the rotary table rotates 360° around the Z axis in 10° intervals. The infrared camera takes pictures of the infrared targets at each pose. This process produces a large amount of experimental data, and a computer controls the rotary table and collects the infrared camera data in sequence, so that automatic calibration is realized. The workflow of the calibration system is shown in Fig. 4; this automation greatly improves the calibration efficiency. The final effective calibration data include about 260 images.
Fig. 4 Workflow of the infrared camera calibration system
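To make the pose schedule concrete, the short Python snippet below (an illustrative sketch, with the step values taken from the text) enumerates the commanded rotary table angles; many poses leave no target in the FOV, which is why only about 260 of the captured images are ultimately usable.

```python
# Commanded poses: X axis from 0 to 90 deg in 5-deg steps; at each X angle,
# the Z axis sweeps a full turn in 10-deg steps (step values from the text).
poses = [(wx, wz) for wx in range(0, 95, 5) for wz in range(0, 360, 10)]
print(len(poses))  # 684 commanded poses; only ~260 images contain usable targets
```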
The actual image and the obtained infrared image of the infrared targets are shown in Fig. 5. The infrared targets consist of three circular ceramic heating plates with a diameter of 9 mm, which are low-cost, off-the-shelf commodities. More data can be obtained in a shorter time by using multiple infrared targets. The heating plates are energized and hung in the air by their own power supply wires during calibration. They are placed on a vibration-free platform in an enclosed room to avoid vibration and interference, so the position accuracy of the targets during the rotation of the table can be ensured. After the plates reach thermal equilibrium, their surface temperature is maintained between 200 and 500 °C, depending on power consumption, and remains steady for several hours. The heating plates are made of multilayer alumina ceramics, which have a small thermal expansion coefficient and a high sintering temperature, so they do not deform during operation.
The grayscale images of the 9-mm three-target combination taken by an infrared PAL camera at a distance of about 1 m are shown in Fig. 5b. Because the circular heating plate is symmetric, its energy distribution is also symmetric. The high contrast of the infrared targets ensures a high signal-to-noise ratio, and the energy distribution can be easily measured in the form of gray pixel values. Therefore, positioning control points based on the energy distribution is a sound approach. The specific steps of the control point positioning algorithm are as follows (a brief sketch follows the list):
Fig. 5 Combination of three infrared targets with a diameter of 9 mm: (a) actual image; (b) infrared image
1. Separate the infrared targets and the background in the image based on a grayscale threshold T. Here, the average grayscale of the entire image plus 3.5 times its standard deviation is taken as T. Then the threshold grayscale image is obtained as follows:

\[ F(u,v) = \begin{cases} f(u,v), & f(u,v) \ge T \\ 0, & f(u,v) < T \end{cases} \tag{1} \]

where (u, v) denotes the coordinates of the pixel, f(u, v) denotes the original grayscale of the pixel, and F(u, v) denotes the threshold grayscale of the pixel.
2. Divide the image into 4-connected components according to the threshold grayscale. The infrared target areas are then identified according to the size and grayscale of the connected components, and images that do not contain the infrared targets are excluded.
3. The weighted centroid positioning algorithm is applied to complete the control point positioning (Stone, 1989):

\[ u_c = \frac{\sum_{v=1}^{m}\sum_{u=1}^{n} u\,F(u,v)}{\sum_{v=1}^{m}\sum_{u=1}^{n} F(u,v)}, \qquad v_c = \frac{\sum_{v=1}^{m}\sum_{u=1}^{n} v\,F(u,v)}{\sum_{v=1}^{m}\sum_{u=1}^{n} F(u,v)} \tag{2} \]

where m denotes the number of rows, n denotes the number of columns, and (uc, vc) denotes the control point position.
4. Match the actual targets and the image spots according to their relative position features. Because of the ultra-field camera's distortion, tangential features deform noticeably in different regions of the image. Therefore, radial features, which are less prone to distortion, are used here. As shown in Figs. 2 and 5, the infrared targets are placed at different distances in the radial direction of the camera with a margin. The positions of the infrared targets and the rotation angles of the rotary table are deliberately planned to ensure that the targets do not cross the central area during rotation. Once the control points are positioned, the radial distance between each control point and the image center is calculated, and control point identification and matching are completed according to this radial distance.
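A minimal Python sketch of steps 1-3 is given below, assuming a grayscale image as a NumPy array; SciPy's default labeling uses 4-connectivity, and the minimum blob size is an illustrative choice, not a value from the paper.

```python
import numpy as np
from scipy import ndimage

def locate_control_points(img, min_area=4):
    """Steps 1-3: threshold, label 4-connected components, and return
    the weighted centroid (uc, vc) of each candidate target."""
    T = img.mean() + 3.5 * img.std()          # step 1: global threshold
    F = np.where(img >= T, img, 0.0)          # thresholded grayscale image, Eq. (1)
    labels, n = ndimage.label(F > 0)          # step 2: 4-connected components
    centroids = []
    for lab in range(1, n + 1):
        mask = labels == lab
        if mask.sum() < min_area:             # reject small noise blobs
            continue
        v, u = np.nonzero(mask)               # v: row indices, u: column indices
        w = F[mask]                           # grayscale values as weights
        centroids.append(((u * w).sum() / w.sum(),   # weighted centroid, Eq. (2)
                          (v * w).sum() / w.sum()))
    return centroids
```

Step 4 then sorts these centroids by their radial distance from the image center to match them to the physical targets.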
Camera calibration aims to obtain the intrinsic parameters of the camera. Before estimating the parameters, an accurate integrated model is needed to describe the whole imaging process. The imaging process of the infrared target method differs from that of the collimator method (Wei et al., 2014), because the motion of the camera affects the imaging process. Because coordinate transformation errors from different frames accumulate, their effect on the result can be large. Thus, a novel and more accurate calibration model is proposed, in which the distance between the camera and the center of the rotary table is considered to describe the effect of the camera's motion on the imaging process when rotating. The infrared camera calibration system and the frames used in the following integrated modeling are shown in Fig. 6. The frames are defined as follows:
Fig. 6 The infrared camera calibration system
Frame P (defined in Section 2.2) rotates with the rotary table. When the rotary table is at its initial position, frame P is defined as the inertial coordinate frame (B). The origin of the camera coordinate frame (C) is the optical center of the camera (Oc). The X axis (Xc) of frame C is the row direction of the imaging sensor, and the Y axis (Yc) of frame C is the column direction of the imaging sensor. The Z axis (Zc) of frame C is normal to the Xc-Yc plane. The image coordinate frame takes the imaging center of the image sensor (Os) as the origin, and the pixel coordinate frame takes the upper left corner of the image sensor as the origin. In both cases, the row and column of the image sensor are taken as the X axis and Y axis, respectively, and the unit is pixels.
The installation position deviation and angle deviation between frame C and frame P are taken as one set of extrinsic parameters. The positions of the infrared targets in frame B are taken as another set of extrinsic parameters. The principal point and the distortion coefficients of the ultra-field camera are taken as intrinsic parameters. The integrated calibration model is established in Sections 3.1.1 and 3.1.2.
3.1.1 Extrinsic parameter model
When the rotary table rotates at the ith set of angles, let ^C X^i_Dj be the position of the jth infrared target Dj in frame C. The expression is as follows:

\[ {}^{C}X_{D_j}^{i} = R_{PC}\left(R_{BP_i}\,{}^{B}X_{D_j} - {}^{P}X_{C}\right) \tag{3} \]

where ^B X_Dj is the position of the jth infrared target in frame B, and ^P X_C is the position of the camera's optical center Oc in frame P. R_BPi represents the rotation matrix from frame B to frame P under the ith set of rotation angles. The rotation of the rotary table applied in this study can be described by two sets of independent parameters, ωxi and ωzi, and its expression is as follows:

\[ R_{BP_i} = R_z(\omega_{zi})\,R_x(\omega_{xi}) \tag{4} \]
where R(·) represents the elementary rotation matrix about the indicated axis by the given Euler angle. R_PC represents the rotation matrix from frame P to frame C, which can be described by three independent parameters α, β, and φ, and its expression is as follows:

\[ R_{PC} = R_z(\varphi)\,R_y(\beta)\,R_x(\alpha) \tag{5} \]
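The extrinsic chain of Eq. (3) can be sketched in Python as follows. The gimbal composition order in R_BP follows the reconstruction of Eq. (4) and should be treated as an assumption, as should the function and variable names.

```python
import numpy as np

def rot_x(a):
    """Passive rotation about the X axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def rot_z(a):
    """Passive rotation about the Z axis by angle a (rad)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

def target_in_camera(XD_B, XC_P, wx, wz, R_PC):
    """Eq. (3): position of one target in frame C at one rotary table pose.
    XD_B: target position in frame B; XC_P: optical center in frame P."""
    R_BP = rot_z(wz) @ rot_x(wx)   # Eq. (4), assumed outer-X / inner-Z order
    return R_PC @ (R_BP @ XD_B - XC_P)
```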
3.1.2 Intrinsic parameter model
Assume that an infrared target at ^C X_D in frame C is projected as a point p' = [u', v']^T on the image coordinate frame according to the imaging relationship, which can be expressed as

\[ {}^{C}X_{D} = \lambda\, g(u', v') \tag{6} \]

where λ represents the scaling factor of the infrared target from frame C to the image coordinate frame, and g represents the imaging function that describes the imaging relationship.
To simplify the parameter calculation, an ideal imaging model and a distorted imaging model are established here for initial estimation and later refinement, respectively.
1. Ideal imaging model
To obtain a large FOV, the PAL camera obeys the equidistance projection relationship (Niu et al., 2007), which can be expressed as

\[ \rho = f\,\theta \tag{7} \]

where f is the focal length, θ is the field angle of the target, and ρ is the image height. Despite the distortion introduced in lens design and manufacturing, the camera generally does not deviate much from this imaging principle, so Eq. (7) roughly fits the imaging function g.
Regardless of the imaging principle, the incident vector can be expressed as the following unit vector:

\[ {}^{C}X_{D} = \lambda_1\,[\sin\theta\cos\varphi,\ \sin\theta\sin\varphi,\ \cos\theta]^{T} \tag{8} \]

where λ1 represents the scaling factor and φ represents the azimuth of the target. The relationship between the image coordinates and the azimuth is as follows:

\[ u' = \rho\cos\varphi, \qquad v' = \rho\sin\varphi \tag{9} \]
In addition, the deviation of the principal point needs to be considered. The point p' on the image coordinate frame is transformed into p = [u, v]^T on the pixel coordinate frame by an affine transformation. The rotation matrix in the affine transformation is treated as an identity matrix here, so the affine transformation can be described as

\[ [u,\ v]^{T} = [u',\ v']^{T} + [u_0,\ v_0]^{T} \tag{10} \]

where [u0, v0]^T is the position of the imaging center Os in the pixel coordinate frame. The length unit of both frames is one pixel, so there is no scaling relationship.
2. Distorted imaging model
To describe the distortion of the ultra-field camera more accurately, the polynomial distortion model proposed by Scaramuzza et al. (2006) is used, and the imaging function g is well fitted by the polynomial

\[ {}^{C}X_{D} = \lambda_2\,\left[u',\ v',\ a_0 + a_2\rho^2 + \cdots + a_N\rho^N\right]^{T}, \qquad \rho = \sqrt{u'^2 + v'^2} \tag{11} \]

where λ2 is the scaling factor, and a0, a2, ..., aN are the coefficients of each order. According to Chen et al. (2019), Wang ZA et al. (2020), and Zhang S et al. (2020a), a good calibration effect can be obtained for an ultra-field lens when N is set to 4; subsequent experiments also verified this. In addition, considering the image sensor's imaging process, the inconsistency of the pixel length and width directions, and the tilt deviation of the two pixel axes (Forsyth and Ponce, 2011), the transformation from the image coordinate frame to the pixel coordinate frame can be described as

\[ \begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} 1 & s \\ 0 & k \end{bmatrix} \begin{bmatrix} u' \\ v' \end{bmatrix} + \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} \tag{12} \]

where k represents the pixel aspect ratio and s represents the tilt coefficient of the pixel axes.
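Both imaging models can be sketched as short functions under the reconstructions of Eqs. (7)-(12); the skew handling in backproject_polynomial mirrors the assumed form of Eq. (12). As discussed in Section 3.2.3, the polynomial model makes the pixel-to-ray direction cheap, while the forward projection would require a polynomial root search.

```python
import numpy as np

def project_equidistance(X_c, f, u0, v0):
    """Ideal model, Eqs. (7)-(10): frame-C direction -> pixel coordinates."""
    theta = np.arccos(X_c[2] / np.linalg.norm(X_c))   # field angle
    phi = np.arctan2(X_c[1], X_c[0])                  # azimuth
    rho = f * theta                                   # equidistance, Eq. (7)
    return u0 + rho * np.cos(phi), v0 + rho * np.sin(phi)

def backproject_polynomial(u, v, a, u0, v0, k=1.0, s=0.0):
    """Distorted model, Eqs. (11)-(12): pixel -> unit incident ray in frame C.
    a = [a0, a2, a3, a4]; the first-order coefficient is zero in this model."""
    vp = (v - v0) / k                # undo pixel aspect ratio (assumed form)
    up = (u - u0) - s * vp           # undo axis tilt (assumed form)
    rho = np.hypot(up, vp)
    z = a[0] + a[1] * rho**2 + a[2] * rho**3 + a[3] * rho**4
    ray = np.array([up, vp, z])
    return ray / np.linalg.norm(ray)
```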
A three-step calibration algorithm is proposed to deal with the unknown parameters, which are too many to be estimated directly. A summary of the three-step calibration algorithm is shown in Fig. 7. The positions of the control points obtained from the images and the corresponding rotation angles are used as input data. Assuming that the camera follows the lens's optical design, the first step provides a reasonable estimate of most parameters with the ideal imaging model. In the second step, the parameters of the ideal imaging model are transformed into parameters of the distorted imaging model by polynomial fitting. In the third step, the distorted imaging model is used to estimate the camera's distortion coefficients and thereby optimize the estimates of all parameters. The three steps are presented in detail in Sections 3.2.1, 3.2.2, and 3.2.3.
Fig. 7 Summary of the estimation algorithm
3.2.1 First step: preliminary estimation of a subset of the parameters
When the rotary table rotates at the ith set of angles, the position of the jth infrared target Dj in the pixel coordinate frame is measured by the infrared camera. The estimated value ^C X̂^i_Dj of the target position in the camera coordinate frame is obtained from the extrinsic parameter model, which can be expressed as

\[ {}^{C}\hat{X}_{D_j}^{i} = E\left({}^{B}X_{D_j},\ {}^{P}X_{C},\ \omega_{xi},\ \omega_{zi},\ \alpha,\ \beta,\ \varphi\right) \tag{13} \]

where E represents the coordinate transformation relationship described by Eq. (3).
Then the estimated value [û_ij, v̂_ij]^T of the target image position in the pixel coordinate frame is obtained from the ideal imaging model, which can be expressed as

\[ [\hat{u}_{ij},\ \hat{v}_{ij}]^{T} = F\left({}^{C}\hat{X}_{D_j}^{i}\right) \tag{14} \]

where F represents the inverse process of the ideal imaging relationship described by Eqs. (7), (8), and (10). Due to the simplicity of the functional relationship, this inverse process is not difficult to express analytically.
A nonlinear least-squares estimation problem is established, and the optimization objective is the minimization of the following cost function:

\[ J = \sum_{i}\sum_{j}\left[\left(\hat{u}_{ij} - u_{ij}\right)^2 + \left(\hat{v}_{ij} - v_{ij}\right)^2\right] \tag{15} \]

where (u_ij, v_ij) is the measured position of the jth target at the ith set of rotation angles.
The variables to be estimated in the first step are

\[ \left\{\alpha,\ \beta,\ \varphi,\ f,\ u_0,\ v_0,\ {}^{B}X_{D_j},\ {}^{B}Y_{D_j},\ {}^{B}Z_{D_j},\ {}^{P}X_{C},\ {}^{P}Y_{C},\ {}^{P}Z_{C}\right\} \tag{16} \]

where j = 1, 2, ..., K. The positions here are relative values, so we assume ^B Z_D1 = 10 according to the actual situation. The Levenberg-Marquardt algorithm is used to solve this nonlinear optimal estimation problem. Ignoring the installation errors, the initial guesses of α, β, and φ are given according to the corresponding relations of the coordinate frames. The initial guess of (u0, v0) is located at the center of the image coordinates, while the initial guess of f is the lens design value. The initial guesses of ^B X_Dj, ^B Y_Dj, ^B Z_Dj, ^P X_C, ^P Y_C, and ^P Z_C are set roughly according to the site conditions.
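A minimal sketch of the first-step estimation follows, assuming the helper functions from the earlier sketches and SciPy's Levenberg-Marquardt solver; the parameter packing is illustrative, not the authors' exact layout.

```python
import numpy as np
from scipy.optimize import least_squares

def step1_residuals(x, K, poses, measured, euler_to_R):
    """Residuals behind the cost function of Eq. (15) for the ideal model.
    x packs the step-1 variables of Eq. (16)."""
    alpha, beta, phi, f, u0, v0 = x[:6]
    XD_B = x[6:6 + 3 * K].reshape(K, 3)      # target positions in frame B
    XC_P = x[6 + 3 * K:]                     # optical center in frame P
    R_PC = euler_to_R(alpha, beta, phi)      # assumed Euler factorization
    res = []
    for (wx, wz), px in zip(poses, measured):    # px: (K, 2) measured pixels
        for j in range(K):
            Xc = target_in_camera(XD_B[j], XC_P, wx, wz, R_PC)
            u_hat, v_hat = project_equidistance(Xc, f, u0, v0)
            res += [u_hat - px[j, 0], v_hat - px[j, 1]]
    return np.asarray(res)

# x0 packs the rough initial guesses described above; in practice the scale
# ambiguity is removed by holding the first target's Z coordinate at 10.
# sol = least_squares(step1_residuals, x0, method='lm',
#                     args=(K, poses, measured, euler_to_R))
```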
3.2.2 Second step: transformation of parameters between the two imaging models
The parameters of the ideal imaging model and the distorted imaging model do not correspond to each other directly. A conversion process is essential to make full use of the results of the preliminary parameter estimation. A simple solution is as follows: when 0 < θ < π/2, the following approximation is available according to Eqs. (8), (9), and (11):

\[ \tan\theta = \frac{\rho}{a_0 + a_2\rho^2 + a_3\rho^3 + a_4\rho^4} \tag{17} \]
For the f obtained in the first step, a set of discrete θ's is used as the input to Eq. (7), and a corresponding set of ρ's is obtained. Then, a set of approximate solutions of a0, a2, a3, a4 is obtained by polynomial fitting according to Eq. (17). These approximate solutions are used as initial guesses in the following step.
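A sketch of this conversion under the reconstructed relationship of Eq. (17): for the focal length f from the first step, sample θ, form ρ = fθ, and fit a0, a2, a3, a4 by linear least squares.

```python
import numpy as np

def ideal_to_polynomial(f, theta_max=1.4, n=200):
    """Second step: fit [a0, a2, a3, a4] so the polynomial model of Eq. (11)
    reproduces the equidistance model rho = f * theta through Eq. (17)."""
    theta = np.linspace(0.05, theta_max, n)   # stay inside (0, pi/2)
    rho = f * theta                           # ideal image heights, Eq. (7)
    z = rho / np.tan(theta)                   # axial component implied by Eq. (17)
    A = np.stack([np.ones_like(rho), rho**2, rho**3, rho**4], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                             # initial guesses for the third step
```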
3.2.3 Third step: optimal estimation of all parameters
Adopting the distorted imaging model based on the first step, the estimated value of [û_ij, v̂_ij]^T is obtained. It can be expressed as

\[ [\hat{u}_{ij},\ \hat{v}_{ij}]^{T} = G\left({}^{C}\hat{X}_{D_j}^{i}\right) \tag{18} \]

where G represents the inverse process of the distorted imaging relationship described by Eqs. (11) and (12). However, this inverse process is difficult to express analytically. The difficulty comes mainly from finding a suitable root of the polynomial in Eq. (11). The distorted imaging model implies that the reprojection process from a point in the pixel coordinate frame to a vector in the camera coordinate frame is convenient, whereas the projection process from a vector in the camera coordinate frame to a point in the pixel coordinate frame is complex and nonlinear. The opposite situation exists in other kinds of models (Kannala and Brandt, 2006). The computational complexity is unavoidable due to the extensive use of high-order polynomials in the distorted model, but keeping the computational convenience in the reprojection process is more practical, as this helps achieve better real-time performance in measurement applications (Wang H et al., 2021).
The same nonlinear least-squares estimation problem as in Eq. (15) is used. The variables to be estimated in the third step are

\[ \left\{\alpha,\ \beta,\ \varphi,\ u_0,\ v_0,\ a_0,\ a_2,\ a_3,\ a_4,\ k,\ s,\ {}^{B}X_{D_j},\ {}^{B}Y_{D_j},\ {}^{B}Z_{D_j},\ {}^{P}X_{C},\ {}^{P}Y_{C},\ {}^{P}Z_{C}\right\} \tag{19} \]

where j = 1, 2, ..., K. The Levenberg-Marquardt algorithm is applied, while the Jacobian matrix needs to be approximated numerically due to the complexity of calculating the analytical formula. This is realized using the MATLAB function lsqnonlin. The parameters obtained in the first and second steps are adopted as initial values, and the initial guesses for k and s are set to 1 and 0, respectively.
4.1.1 Configurations of simulations
The infrared camera used for the simulations is modeled on a real infrared PAL camera, including the FOV and the blind areas. Its main specifications are shown in Table 1. A combination of three calibration targets is chosen here. The values of the model parameters and their initial guesses during the iteration process are shown in Table 2. The two-axis rotary table is set to rotate around the X axis in steps of 5° and around the Z axis in steps of 10°.
Table 1 Specifications of the infrared panoramic annular lens (PAL) camera
Table 2 Simulation parameters and the initial values
Because the true values are known in the simulations, the root mean square error (RMSE) between the reprojected points, which are based on the calibrated model, and the corresponding true points is used to evaluate the calibration effect in the simulations (Scaramuzza et al., 2006). This RMSE is also called the real reprojection error (RRE):

\[ \mathrm{RRE} = \sqrt{\frac{1}{2M}\sum_{i}\sum_{j}\left[\left(\hat{u}_{ij} - u_{ij}\right)^2 + \left(\hat{v}_{ij} - v_{ij}\right)^2\right]} \tag{20} \]

where (u_ij, v_ij) represents the true value and M is the total number of control points. The RRE is thus the average distance error in the x and y directions, which is slightly different from the Euclidean distance. We performed 200 sets of independent simulations for each condition, and the results shown are the average values.
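Under this reading of Eq. (20), the error is the root mean square over the 2M coordinate residuals, which the short sketch below makes explicit; the same function evaluates the MRE of Section 4.2 when the reference points are measurements rather than ground truth.

```python
import numpy as np

def reprojection_error(reprojected, reference):
    """RRE/MRE of Eqs. (20)-(21): RMS over the x and y residuals of all
    M points, slightly smaller than an RMS Euclidean distance."""
    d = np.asarray(reprojected) - np.asarray(reference)   # shape (M, 2)
    return float(np.sqrt(np.mean(d ** 2)))
```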
4.1.2 Simulation results
In the simulations, we investigated the feasibility and robustness of the proposed algorithm in the case of inaccurate control point positioning. To this end, Gaussian noise with a mean of 0 and a standard deviation of σ pixels was added to the x and y coordinates of each control point. The noise level was varied from σ = 0.1 pixels to σ = 2 pixels with a step size of 0.1 pixels. The simulation results are shown in Fig. 8. The average RRE of the optimal estimation is much smaller than the noise level under all simulation conditions, which fully demonstrates the good robustness of the proposed method. The average RRE increases linearly as the noise level increases in both the optimal estimation and the preliminary estimation, while the gap between the RRE of the preliminary estimation and that of the optimal estimation gradually decreases. This indicates that the distorted imaging model shows greater advantages in low-noise conditions.
Fig. 8 Performance under different noise levels
Furthermore, for σ = 2 pixels, which is larger than the noise caused by control point positioning, the average RRE of the optimal estimation is 0.254 pixels. The real points, measurement points, and reprojected points of one simulation are shown in Fig. 9. Although the measurement points are very noisy relative to the real points, after correction by the proposed method, the reprojected points approximate the real points very well.
Fig. 9 Distribution of the real points, measurement points, and reprojected points
The iteration process of one calibration with σ = 2 pixels is shown in Fig. 10. Although the initial cost function J in the preliminary estimation is extremely high due to the poor initial parameter guesses, it still converges within a few cycles, and J decreases further in the optimal estimation, which demonstrates the effectiveness of the proposed parameter estimation algorithm.
Fig. 10 Iteration process of the calibration algorithm
4.2.1 Experimental results
We used 10 infrared horizon sensors to validate the accuracy and stability of the proposed method. All control points obtained in the experiment with one infrared horizon sensor are shown in Fig. 11. Due to the use of three closely spaced targets and the high-density rotation of the rotary table, the control points of the three targets covered the camera's entire FOV evenly, as shown in Figs. 11 and 1.
Fig. 11 Distribution of all control points obtained
The reprojection error distribution of one experiment with three targets is shown in Fig. 12. The errors of all three targets were uniformly distributed and concentrated around zero. This indicates that the control point positioning accuracies of the three infrared targets are similar, and that the constraints provided by the three infrared targets are treated equally by the estimation algorithm. The calibration results of one infrared horizon sensor are shown in Table 3. These parameters are injected into each infrared horizon sensor's software to help it achieve higher accuracy.
Fig. 12 Distribution of reprojection errors
Because the true values were not known in the experiments, the RMSE between the reprojected points and the corresponding measurement points was used to evaluate the calibration effect in the experiments. This RMSE is also called the measurement reprojection error (MRE):

\[ \mathrm{MRE} = \sqrt{\frac{1}{2M}\sum_{i}\sum_{j}\left[\left(\hat{u}_{ij} - u_{ij}\right)^2 + \left(\hat{v}_{ij} - v_{ij}\right)^2\right]} \tag{21} \]

where (u_ij, v_ij) here represents the measured position.
The experimental results of the 10 cameras are shown in Fig. 13 and Table 4. Comparison experiments were performed with the same set of calibration photos, solving the parameters separately with a single target and with all three targets. The average MRE is 0.175 pixels when using only one target, and 0.148 pixels when using three targets, an improvement of 15% in calibration accuracy. The experiments with three targets also showed better stability, with a standard deviation of 0.0028 pixels versus 0.014 pixels for the single-target calibration. Compared to a single target, the combination of three targets triples the calibration data without additional time consumption. In addition, the position relationship constraint among the three targets suppresses the random noise of a single target. These factors lead to the better performance of the calibration system when three targets are used.
Fig. 13 Calibration experiments of 10 infrared horizon sensors
During the calibration experiments, the positions of the three targets with respect to the rotary table were kept constant, so we can evaluate the estimation accuracy of the extrinsic parameters based on their distribution. According to the estimation algorithm in Section 3.2, the estimates of the extrinsic parameters are relative values and are based on the assumption of ^B Z_D1 = 10. The estimates of the extrinsic parameters are shown in Table 5. The error distributions are relatively small, so the extrinsic parameters are estimated effectively by the proposed algorithm.
4.2.2 Comparison with other geometric calibration methods
Table 3 Infrared horizon sensor calibration results
Table 4 Results of calibration experiments
Table 5 Estimation of extrinsic parameters
Existing results of other infrared camera calibration methods are shown in Table 6 for comparison. The evaluation criterion was described in Eq. (21), and the criteria used by the compared methods are similar, so the results are comparable. The method proposed in this study achieves higher calibration accuracy than the other ultra-field infrared camera calibration methods. Even compared with the best available result, 0.219 pixels (Zhang S et al., 2020a), our method achieves at least a 30% improvement.
As shown in the last row of Table 6, because of the severe distortion and low control point positioning accuracy of ultra-field cameras, the geometric calibration of ultra-field infrared cameras is still less accurate than that of narrow-field infrared cameras (Usamentiaga et al., 2017). The distortion model and control point positioning algorithm of the ultra-field infrared camera will be researched more thoroughly in future work.
Table 6 Comparison of infrared camera calibration methods
A high-accuracy camera geometric calibration method for infrared horizon sensors with a large FOV has been proposed. Control points were evenly distributed throughout the entire FOV with the help of a two-axis rotary table and multiple infrared targets. Thus, the calibration accuracy over the entire FOV was ensured. The three-step parameter estimation algorithm based on integrated modeling achieved highly accurate parameter estimation.
Simulation results showed that the proposed calibration algorithm was effective and robust at different noise levels. Experiments using 10 infrared PAL cameras demonstrated that the proposed method achieved not only high accuracy, with an average MRE of 0.148 pixels, but also good stability, with a standard deviation of 0.0028 pixels. The combination of three targets proposed in this paper improved the calibration accuracy by 15% compared to a single target. The calibration accuracy of this method is at least 30% better than those of other existing methods.
The camera calibration method proposed in this paper helps infrared horizon sensors achieve higher accuracy, and it will be an important foundation for other measurement applications with ultra-field infrared cameras.
Contributors
Huajian DENG and Hao WANG designed the research. Huajian DENG acquired and processed the data. Huajian DENG and Hao WANG drafted the paper. Xiaoya HAN, Yang LIU, and Zhonghe JIN offered advice. Huajian DENG and Hao WANG revised and finalized the paper.
Compliance with ethics guidelines
Huajian DENG, Hao WANG, Xiaoya HAN, Yang LIU, and Zhonghe JIN declare that they have no conflict of interest.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.