

Monocular Vision Based Relative Localization For Fixed-wing Unmanned Aerial Vehicle Landing


        Yuwen Xu, Yunfeng Cao and Zhouyu Zhang

        (College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 210000, China)

Abstract: Autonomous landing has become a core technology of unmanned aerial vehicle (UAV) guidance, navigation, and control systems in recent years. This paper discusses vision-based relative position and attitude estimation between a fixed-wing UAV and the runway, which is a key issue in autonomous landing. Images taken by an airborne camera are used, and a runway detection method based on long-line features and gradient projection is proposed, which solves the problems that the traditional Hough transform requires considerable calculation time and easily detects end points by mistake. Under the premise that the width and length of the runway are known, the position and attitude estimation algorithm takes the image processing results as input and adopts an estimation scheme based on orthogonal iteration. The method takes the object-space error as the error function and effectively improves the accuracy of the linear algorithm through iteration. The experimental results verify the effectiveness of the proposed algorithms.

        Keywords: autonomous landing; visual navigation; Region of Interest (ROI); edge detection; orthogonal iteration

        0 Introduction

Autonomous landing of an unmanned aerial vehicle (UAV) is a key part of the autonomous flight process, and data indicate that accidents are most likely to occur at this stage[1]. Vision-based UAV landing has received much attention due to its strong autonomy. The basic task of such technology is to extract enough image features and use them to solve the attitude and position parameters for the UAV's control system[2]. Visual landing methods can be divided into two main categories: 1) those using cooperative targets deployed on the landing site or runway[3-7], such as artificially placed landing marks; 2) those using noncooperative targets extracted from images, such as runway edges or the horizon[8-10]. Visual landing methods based on noncooperative targets do not require manually placed landing marks and thus have wider applicability[11]. Since most fixed-wing UAVs must land on a runway, runway-based visual navigation has been a hot topic in this field.

Sasa et al.[12] proposed a method using the runway edges and the horizon for position and attitude estimation. The Berkeley team[13] tracked the centerline of the runway to determine the yaw angle and horizontal offset of a fixed-wing UAV. Anitha et al.[14] extracted the runway edges and centerline to calculate the UAV's position parameters given the known width of the runway. Li Hong et al.[15] processed runway images to detect the three feature lines of the runway by the Hough transform, then evaluated the offset and yaw angle of the aircraft. Analysis of the above methods leads to the conclusion that, because line detection results have better accuracy and robustness than point detection results, runway-based visual navigation methods mostly estimate the position and attitude parameters from feature lines. However, a problem remains: the position and attitude information obtainable from the runway edgelines alone is limited without the help of the horizon or inertial navigation.

In recent years, the Perspective-N-Point (PNP) problem has been widely studied in photogrammetry, visual measurement, robot control, etc., and scholars have done much research on the accuracy and real-time performance of PNP algorithms. Based on these achievements, this paper selects runway corners as features to realize six-degree-of-freedom parameter estimation. The main work consists of two parts: 1) a runway detection algorithm based on long-line features and gradient projection is proposed, and runway corners are obtained as the intersections of the edgelines in the region of interest, to improve the accuracy and robustness of point detection; 2) under the premise of known runway width and length, the detected image coordinates are taken as the input of a PNP problem, and Orthogonal Iteration (OI) is adopted to improve the accuracy of linear algorithms while avoiding the time cost of a large number of iterations.

        The rest of the paper is organized as follows. Section 1 presents algorithm details for corner extraction. Section 2 introduces the process of OI algorithm. In Section 3, the performance of the above algorithms is validated, and appropriate initial value of iteration is determined for further discussion. Section 4 provides conclusions.

        1 Image Processing for Localization

The research in this paper rests on two assumptions: 1) the UAV has been guided to within about 2 km of the runway, with its direction aligned with the runway; 2) a forward-looking landing camera is mounted under the UAV. The runway is clearly visible in the forward-looking images during this phase. It appears trapezoidal from the perspective of the UAV, bounded by obvious sidelines and almost horizontal endlines. Clearly, the extraction of line features is more stable and accurate than that of point features at this stage. This section introduces in detail a runway detection algorithm based on long-line and gray-scale features; corner coordinates are then obtained by solving for the intersections of multiple edgelines. The process can be divided into the following steps: first, locate the region of interest (ROI) using saliency analysis; then detect the two sidelines on the binary image using the Hough transform; finally, find the approximate positions of the upper and lower boundaries by gradient projection.

Fig. 1 is a block diagram of the image processing algorithms used to extract runway corners as feature points.

Fig. 1 Block diagram of image processing algorithms

        1.1 Region of Interest

The image collected by the monocular camera has a high resolution. If straight lines are detected directly on the original image, the amount of calculation is large and the detection is easily disturbed by the complex background. Hence, the runway area should first be extracted by ROI segmentation. By analogy with the human visual system, when we see a complex scene we can quickly focus on a specific area of the image: the region of interest. When the UAV's landing flight enters the parameter-solving stage, the runway in the forward-looking image contrasts strongly with the surrounding environment in color and brightness, so we locate the runway area by saliency analysis. The saliency analysis method proposed by Hou et al.[16-17] is widely used as a global saliency measure based on the spectral residual (SR) of the Fast Fourier Transform (FFT), which favors regions with a unique appearance within the entire image. This method quickly constructs a saliency map by analyzing the amplitude spectrum of the input image; its biggest advantage is that no prior knowledge is required. Given an image I(x, y), the calculation proceeds in the following steps.

Step 1: Calculate the amplitude spectrum and phase spectrum according to Eqs. (1) and (2):

$$A(f)=\left|F\left[I(x,y)\right]\right|$$

(1)

$$P(f)=\varphi\left(F\left[I(x,y)\right]\right)$$

(2)

where F[·] denotes the two-dimensional discrete Fourier transform, |·| denotes the amplitude, and φ(·) denotes the phase. The log spectrum L(f) of the image can then be obtained as follows:

$$L(f)=\ln\left(A(f)\right)$$

(3)

Step 2: The spectral residual R(f) is defined as

$$R(f)=L(f)-h_n(f)*L(f)$$

(4)

where h_n(f) is an n×n mean filter, applied to the log spectrum by convolution, defined by

$$h_n(f)=\frac{1}{n^2}\begin{bmatrix}1&1&\cdots&1\\1&1&\cdots&1\\\vdots&\vdots&\ddots&\vdots\\1&1&\cdots&1\end{bmatrix}$$

(5)

Step 3: The saliency map S(x, y) is given as

$$S(x,y)=g(x)*\left\{F^{-1}\left[\exp\left(R(f)+\mathrm{i}P(f)\right)\right]\right\}^{2}$$

(6)

where g(x) is a Gaussian filter and F^{-1}[·] denotes the inverse Fourier transform.
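As a concrete illustration, the following Python/NumPy sketch implements Eqs. (1)-(6) under common assumptions: a grayscale input (already downscaled by the factor δ discussed in Section 3.1.1), a 3×3 mean filter for h_n, and illustrative Gaussian smoothing parameters. It is a re-implementation sketch, not the authors' code.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray):
    """Saliency map via the spectral residual of the FFT (Eqs. (1)-(6))."""
    f = np.fft.fft2(gray.astype(np.float64))
    amplitude = np.abs(f)                      # A(f), Eq. (1)
    phase = np.angle(f)                        # P(f), Eq. (2)
    log_amp = np.log(amplitude + 1e-8)         # L(f), Eq. (3); eps avoids log(0)
    # Spectral residual: subtract the locally averaged log spectrum, Eqs. (4)-(5)
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    # Back-transform the residual with the original phase and square, Eq. (6)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency, (9, 9), 2.5)   # g(x)
    return cv2.normalize(saliency, None, 0.0, 1.0, cv2.NORM_MINMAX)
```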

Once the saliency map S(x, y) has been generated, its high-saliency characteristics must be used to extract the runway target from the image. The sliding window algorithm is the classic and most common choice and can extract the target accurately, but its poor real-time performance makes it unsuitable for landing scenes. Instead, a threshold is set to binarize the saliency map, which enhances the contrast of the highly salient runway area with the background. Fig. 2 shows the saliency map after binarization. The candidate boxes are then extracted by region labeling[18], which avoids the blind traversal of the sliding window algorithm and reduces the computational complexity.

As shown in Fig. 3(a), a large number of candidate boxes are extracted from the binarized saliency map. Two restrictions effectively filter out the area containing the runway (see the sketch after this list):

1) Area constraint: since the UAV has been guided near the runway, the target box containing the runway already occupies a large area in the forward-view image. Hence, an area threshold can reasonably be set and small candidate boxes removed.

2) Aspect ratio constraint: a runway usually has a predetermined size; for example, the runway in the experimental images is 1000 m long and 60 m wide. During the landing process, the aspect ratio of the target box only changes within a certain range. Hence, an aspect ratio threshold can reasonably be set, and candidate boxes that obviously cannot contain the runway are deleted.
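A minimal sketch of this ROI extraction stage, assuming the saliency map from the previous sketch. The threshold factor, minimum area, maximum aspect ratio, and the width/height aspect convention are illustrative values standing in for the paper's thresholds, and OpenCV's connected-components routine stands in for the region-labeling method of Ref. [18].

```python
import cv2
import numpy as np

def detect_runway_roi(saliency, tau_factor=3.0, min_area=2000, max_aspect=1.5):
    """Binarize the saliency map, label regions, and filter candidate boxes."""
    tau = tau_factor * saliency.mean()          # binarization threshold tau
    binary = (saliency > tau).astype(np.uint8)
    # Region labeling via connected components instead of a sliding window
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    best = None
    for i in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < min_area:                     # 1) area constraint
            continue
        if h == 0 or w / h > max_aspect:        # 2) aspect ratio constraint
            continue
        if best is None or area > best[4]:      # keep the largest surviving box
            best = (x, y, w, h, area)
    return best                                 # (x, y, w, h, area) or None
```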

        1.2 Runway Detection

When the UAV is close to the runway, the runway's edge and plane features are clearly presented in the forward-looking image, the long-line feature being the most obvious. Threshold segmentation followed by line extraction is often adopted in runway detection, and as one of the most effective straight-line extraction methods, the Hough transform has been widely used for this purpose. However, some problems remain that affect the accuracy of the detection results:

1) Although the Hough transform can effectively detect the two sidelines with the long-line feature, the end points of the detected line segments may not be the runway corners, owing to noise points and breakpoints.

2) The upper edge of the runway is short, and many mark lines near the lower edge act as interference. If threshold segmentation and line extraction were still used to detect the endlines, the result could be unreliable.

Therefore, the Hough transform is used for sideline detection and the gradient feature of the runway plane is used to locate the endlines, so that the image coordinates of the runway corners can be obtained from a correct mathematical description of the four edges.

1.2.1 Detection of sidelines

The long-line feature of the sidelines is the most significant feature of the runway, and the Hough transform can provide line parameters with high positioning accuracy and good stability. Before line detection, the runway edges must first be separated from the image by threshold segmentation, and the key issue is determining the segmentation threshold. This paper chooses the Otsu segmentation algorithm, which maximizes the separation of target and background in the statistical sense and is simple and fast. Fig. 4 compares the Otsu algorithm with the Niblack algorithm[19], a classic local binarization algorithm. It can be seen that the segmentation result of the Otsu algorithm eliminates useless details and retains more edge information.

In the Hough transform, the lines through an individual non-background point in the image are represented as a sinusoidal curve in the polar coordinate system, where the variable ρ is the distance from the origin to the line along a vector perpendicular to the line, and θ is the angle between the x-axis and this vector. If a single line passes through several such points, the sinusoidal curves representing all possible lines through those points intersect at a single point in the (θ, ρ) plane. The problem of detecting lines thus becomes the problem of finding the Hough peaks in the (θ, ρ) plane. The sideline detection result using the Hough transform and the corresponding (θ, ρ) plane are illustrated in Fig. 5. The two points of maximum intersection of the sinusoidal curves highlighted in Fig. 5(a) correspond to the two runway sidelines in Fig. 5(b).

It is not difficult to see that the two longitudinal boundaries in the binary image have obvious length and direction characteristics, which can be used to avoid the huge calculation of traversing every possible θ. Combined with this prior knowledge, the calculation range of θ is reduced to two intervals, [10°, 30°] and [-30°, -10°]. As shown in Fig. 6, each sideline corresponds to the Hough peak in one interval.
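The following sketch combines Otsu segmentation with the restricted-θ Hough search. The Canny edge step, the vote threshold, and the mapping of the [-30°, -10°] window into OpenCV's [0, π) angle convention are implementation assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def detect_sidelines(roi_gray):
    """Detect the two runway sidelines as the strongest Hough peaks in two
    restricted theta windows (Section 1.2.1)."""
    # Otsu's threshold maximizes target/background separation statistically
    _, binary = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    sidelines = []
    # [10, 30] deg, and [150, 170] deg standing in for [-30, -10] deg
    for lo, hi in [(10, 30), (150, 170)]:
        lines = cv2.HoughLines(edges, rho=1, theta=np.pi / 180, threshold=80,
                               min_theta=np.deg2rad(lo),
                               max_theta=np.deg2rad(hi))
        if lines is not None:
            rho_, theta_ = lines[0][0]          # strongest peak in this window
            sidelines.append((float(rho_), float(theta_)))
    return sidelines                            # up to two (rho, theta) pairs
```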

1.2.2 Detection of endlines

The runway edge in the binarized image is not continuous; there are many breaks. Although the Hough transform can locate the two sidelines, determining their endpoints also requires upper and lower boundary constraints. The runway plane usually differs markedly in gray scale from the surrounding environment, and obvious mark lines lie on it; together these make the runway zone an area where the gray scale changes sharply. Since the direction of the UAV is aligned with the runway, the upper and lower edges of the runway can be approximated by two rows of the image. The gradient magnitude of each pixel in the ROI image is calculated and summed along each row. From the row distribution of the gradient magnitude, the beginning and ending rows of the area in which pixels with large gradient magnitudes are concentrated can be found.

Different gradient operators yield different gradient magnitudes, and a good operator should distinguish the image rows occupied by the runway area from the others as clearly as possible. This paper employs a new operator to process the image. Fig. 7 shows its calculation templates together with those of two other commonly used gradient operators; the processing results are shown in Fig. 8. The curve obtained by the new operator has an obvious rise in the first half and a sharp drop in the second half, similar to that obtained by the Sobel operator, while requiring less calculation than the Sobel operator.

        Fig.8 Image gradient distribution obtained by three operators
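A sketch of the endline search and the final corner computation. The Sobel operator stands in for the paper's operator of Fig. 7, and the peak-fraction rule for picking the boundary rows is an illustrative choice.

```python
import cv2
import numpy as np

def detect_endline_rows(roi_gray, ratio=0.5):
    """Row projection of gradient magnitude (Section 1.2.2): return the first
    and last rows whose summed gradient exceeds a fraction of the peak."""
    gx = cv2.Sobel(roi_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(roi_gray, cv2.CV_64F, 0, 1, ksize=3)
    row_energy = np.hypot(gx, gy).sum(axis=1)   # gradient magnitude per row
    strong = np.where(row_energy > ratio * row_energy.max())[0]
    return int(strong[0]), int(strong[-1])      # upper and lower endline rows

def corner_from_row(rho, theta, row):
    """Intersect a sideline x*cos(theta) + y*sin(theta) = rho with the
    horizontal endline y = row to obtain a runway corner."""
    x = (rho - row * np.sin(theta)) / np.cos(theta)
    return x, float(row)
```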

        2 Orthogonal Iteration Algorithm

After the image processing methods have provided the feature point information, the position and attitude parameters required by the UAV flight control system must be calculated. What the landing control system needs is the relative position and attitude between the UAV and the runway. In this study, since the camera is installed on the body of the UAV, it is reasonable to assume that the camera reference frame (CRF) and the body-fixed UAV reference frame coincide. The reference frames involved in the position and attitude estimation process are defined as follows:

1) Earth-fixed reference frame (ERF). The ERF defined in this paper is built on the runway plane. Select the intersection of the centerline and the starting line of the runway as the origin, the centerline as the x-axis, and the z-axis perpendicular to the ground, pointing downwards. The y-axis is determined by the right-hand rule.

2) Camera reference frame (CRF). The CRF is used in place of the body-fixed UAV reference frame in this paper. The optical center is the origin and the optical axis is the z-axis.

3) Image pixel reference frame (IRF). The upper left corner of the image is the origin; image columns correspond to the horizontal coordinate and image rows to the vertical coordinate.

Therefore, the position and attitude estimation problem of the UAV is summarized as follows[20]: find the transformation between the ERF and the CRF from the three-dimensional coordinates of a set of feature points in the ERF and their corresponding two-dimensional image coordinates. Assuming that the internal parameters of the camera are known, this is a typical PNP problem.

Methods of solving the PNP problem can generally be divided into two categories: linear and non-linear algorithms. The accuracy and robustness of linear algorithms are easily affected by image point errors, while non-linear algorithms suffer from complicated calculation processes. The position and attitude estimates used in UAV landing must be not only accurate but also stable and robust, and at the same time computationally efficient enough to meet real-time requirements. The Orthogonal Iteration (OI) algorithm[21-22] takes the object-space collinearity error as the error function for iterative optimization and has a fast convergence speed. Therefore, this paper adopts an estimation algorithm based on OI.

        2.1 Object-Space Collinearity Error

The three-dimensional coordinates of a feature point expressed in the ERF are denoted by P_i^w (i = 1, 2, …, n), and the corresponding point in the CRF by Q_i^c. For each feature point, there is a rigid transformation relationship:

$$Q_i^c=RP_i^w+t$$

(7)

where R denotes the rotation matrix from the ERF to the CRF and t denotes the translation vector. Let v_i denote the normalized homogeneous image coordinates of the i-th feature point. Under ideal imaging, Q_i^c lies on the line of sight through v_i, so its orthogonal projection onto that line equals itself:

$$RP_i^w+t=V_i\left(RP_i^w+t\right)$$

(8)

        where

$$V_i=\frac{v_iv_i^{\mathrm{T}}}{v_i^{\mathrm{T}}v_i}$$

(9)

Lu et al.[21] and Zhang et al.[22] refer to Eq. (8) as the object-space collinearity equation and further propose to determine R and t by minimizing the sum of the squared errors:

$$E(R,t)=\sum_{i=1}^{n}\left\|\left(I-V_i\right)\left(RP_i^w+t\right)\right\|^{2}$$

(10)

The optimal value of t for a fixed R can be expressed in closed form according to Eq. (11). The OI algorithm rewrites E(R, t) as a function of R alone and computes the rotation matrix by solving an absolute orientation problem in each iteration.

$$t(R)=\frac{1}{n}\left(I-\frac{1}{n}\sum_{i=1}^{n}V_i\right)^{-1}\sum_{i=1}^{n}\left(V_i-I\right)RP_i^w$$

(11)

        2.2 Absolute Orientation Problem

        Absolute orientation refers to solving the conversion relationship between the two reference frames based on the three-dimensional coordinates of a set of feature points expressed in two reference frames respectively.

Given corresponding pairs P_i^w and Q_i^c of three or more noncollinear feature points, the absolute orientation problem is expressed as a constrained optimization problem:

$$\min_{R,t}\sum_{i=1}^{n}\left\|Q_i^c-\left(RP_i^w+t\right)\right\|^{2}\quad\text{s.t.}\quad R^{\mathrm{T}}R=I$$

(12)

        There is a solution method based on the singular value decomposition (SVD)[23]. The basic idea of this method is to convert rigid transformation into pure rotation, thus simplifying the problem.

        First, calculate the respective centers of two point sets:

$$\bar{P}=\frac{1}{n}\sum_{i=1}^{n}P_i^w,\qquad\bar{Q}=\frac{1}{n}\sum_{i=1}^{n}Q_i^c$$

(13)

After that, define the following cross-covariance matrix Σ_PQ between Q_i^c and P_i^w:

$$\Sigma_{PQ}=\frac{1}{n}\sum_{i=1}^{n}\left(Q_i^c-\bar{Q}\right)\left(P_i^w-\bar{P}\right)^{\mathrm{T}}$$

(14)

Let UDV^T be an SVD of Σ_PQ; then the optimal rotation matrix can be directly calculated by the following formula:

$$R=USV^{\mathrm{T}}$$

        (15)

        where

$$S=\begin{cases}I,&\det(U)\det(V)=1\\\operatorname{diag}(1,1,-1),&\det(U)\det(V)=-1\end{cases}$$

(16)
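Eqs. (12)-(16) translate directly into a few lines of NumPy. The following sketch follows the SVD method of Ref. [23] under the convention Q ≈ RP + t; recovering the translation from the centroids is a standard complement, not a step spelled out in the paper.

```python
import numpy as np

def absolute_orientation(P, Q):
    """SVD solution of the absolute orientation problem, Eqs. (12)-(16).
    P, Q: (n, 3) arrays of corresponding points with Q ~ R @ P + t."""
    P_bar, Q_bar = P.mean(axis=0), Q.mean(axis=0)              # Eq. (13)
    sigma_pq = (Q - Q_bar).T @ (P - P_bar) / len(P)            # Eq. (14)
    U, _, Vt = np.linalg.svd(sigma_pq)                         # UDV^T
    S = np.eye(3)
    S[2, 2] = np.sign(np.linalg.det(U) * np.linalg.det(Vt))    # Eq. (16)
    R = U @ S @ Vt                                             # Eq. (15)
    t = Q_bar - R @ P_bar            # translation between the two point sets
    return R, t
```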

        2.3 Iteration Process

The detailed algorithm steps are summarized as follows (a code sketch follows the steps):

Step 1: Obtain R(0) by an initialization algorithm. Set R(k) = R(0), k = 0, and set the error threshold θ;

Step 2: Compute the optimal translation t(k) from R(k) according to Eq. (11), and project the feature points onto their lines of sight to obtain Q_i^c;

Step 3: Compute the sum of the squared errors E(R, t). If E(R, t) < θ, stop the iteration and output the results; else, go to the next step.

Step 4: k = k + 1, obtain R(k) by solving the absolute orientation problem and return to Step 2.
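Putting Sections 2.1-2.3 together, below is a minimal sketch of the OI loop following Lu et al. [21], reusing absolute_orientation from the previous sketch. Pw holds the ERF feature points, v the normalized homogeneous image points, and R0 the initial rotation from a linear method; the tolerance and iteration cap are illustrative.

```python
import numpy as np

def orthogonal_iteration(Pw, v, R0, tol=1e-8, max_iter=50):
    """Orthogonal iteration for the PNP problem (Steps 1-4 above)."""
    n = len(Pw)
    I = np.eye(3)
    # Line-of-sight projection matrices V_i = v v^T / (v^T v), Eq. (9)
    V = np.stack([np.outer(vi, vi) / (vi @ vi) for vi in v])
    factor = np.linalg.inv(I - V.mean(axis=0)) / n   # constant part of Eq. (11)

    def optimal_t(R):                                # Step 2, Eq. (11)
        return factor @ sum((V[i] - I) @ R @ Pw[i] for i in range(n))

    R = R0                                           # Step 1
    for _ in range(max_iter):
        t = optimal_t(R)
        Qc = np.array([V[i] @ (R @ Pw[i] + t) for i in range(n)])
        err = sum(np.linalg.norm((I - V[i]) @ (R @ Pw[i] + t)) ** 2
                  for i in range(n))                 # E(R, t), Eq. (10)
        if err < tol:                                # Step 3
            break
        R, _ = absolute_orientation(Pw, Qc)          # Step 4, Section 2.2
    return R, optimal_t(R)
```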

        2.4 Position and Attitude Parameter

        (17)

In this paper, the relative position of the UAV with respect to the runway is obtained from the coordinates O_w, which represent the origin O_c of the CRF expressed in the ERF:

$$O_w=-R^{-1}t$$

(18)

The rotation matrix R is an orthogonal matrix, so R^{-1} = R^T. The rotation matrix R contains the attitude change of the CRF, that is, the attitude information of the UAV:

$$R=\begin{bmatrix}c\theta c\psi&c\theta s\psi&-s\theta\\ s\varphi s\theta c\psi-c\varphi s\psi&s\varphi s\theta s\psi+c\varphi c\psi&s\varphi c\theta\\ c\varphi s\theta c\psi+s\varphi s\psi&c\varphi s\theta s\psi-s\varphi c\psi&c\varphi c\theta\end{bmatrix}$$

(19)

where cx = cos(x) and sx = sin(x). φ, θ, ψ are the relative rotation angles, referred to as the roll angle, pitch angle, and yaw angle respectively in a flight control system[24]. Hence, the attitude parameters can be further solved by Eq. (20):

$$\varphi=\arctan\left(\frac{R_{23}}{R_{33}}\right),\quad\theta=-\arcsin\left(R_{13}\right),\quad\psi=\arctan\left(\frac{R_{12}}{R_{11}}\right)$$

(20)
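Once R and t are available, the landing parameters follow from Eqs. (18)-(20). The sketch below assumes the yaw-pitch-roll Euler convention implied by Eq. (19) and uses arctan2 for quadrant-safe angle recovery.

```python
import numpy as np

def pose_from_Rt(R, t):
    """Recover position O_w and attitude (roll, pitch, yaw) from R and t."""
    Ow = -R.T @ t                                    # Eq. (18), R^-1 = R^T
    phi = np.arctan2(R[1, 2], R[2, 2])               # roll, from R23/R33
    theta = -np.arcsin(np.clip(R[0, 2], -1.0, 1.0))  # pitch, from R13
    psi = np.arctan2(R[0, 1], R[0, 0])               # yaw, from R12/R11
    return Ow, np.degrees([phi, theta, psi])         # Eq. (20)
```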

        3 Simulation and Result Analysis

The proposed algorithms are simulated and analyzed in this section, including the image processing algorithms and the position and attitude estimation algorithms. MATLAB 2018a/b was selected as the simulation platform.

        3.1 Simulation Results of Image Processing

Since it is difficult to obtain images of real landing scenes, the flight simulator FlightGear[25] was used to generate videos of UAV autonomous landing, which were then segmented into images. The resolution of the images was set to 1224×520, and 100 images taken by the UAV within 2 km of the runway were selected for the experiment.

3.1.1 ROI detection

In the ROI detection algorithm, the image scaling factor δ and the threshold τ used to obtain the binary saliency map are important parameters that influence both the detection results and the processing time.

Experiment setup: the image resolution is 1224×520, and τ is three times the average intensity of the saliency map. Set the scaling factor δ to 0.2, 0.4, 0.6, 0.8, and 1, respectively.

Reducing δ means less image data and less processing time. In addition, Fig. 9 shows that the number of detected candidate boxes decreases as δ decreases, which also reduces the processing time.

        Fig.9 Number of candidate boxes per frame (δ changes)

Fig. 10 shows how the shape of the ROI changes for different values of δ. Further observation of the processing results shows that when δ is small, the ROI containing the runway area is irregular in shape and easily includes messy background information, which increases the difficulty of subsequent ROI filtering. Taking processing time and subsequent operations into account, this paper chooses δ = 0.5. The ROI can then be filtered out by searching for the largest candidate box with an aspect ratio below 1.5.

Experiment setup: the image resolution is 1224×520 and δ = 0.5. Calculate the average intensity of the saliency map as E. Starting from E, gradually increase the binarization threshold τ.

When τ is small, as shown in Fig. 11, complex background information interferes with the detection, especially when the runway is far away; when τ is large, as shown in Fig. 12, runway plane details interfere with the detection, especially when the runway is close.

The results of multiple experiments were counted, and the accuracy of ROI detection under different thresholds τ was obtained. Fig. 13 shows that the accuracy is highest when τ is between 3 and 4 times E. Hence, this paper sets τ to three times the average intensity of the saliency map.

        Fig.13 Accuracy of ROI detection results (τ changes)

Experiment setup: the image resolution is 1224×520, δ = 0.5, and τ is three times the average intensity of the saliency map.

Table 1 shows the processing results of the proposed method compared with those of the sliding window method. The method proposed in this paper extracted more candidate boxes and took less time. After filtering by the area and aspect ratio restrictions, the results shown in Table 2 indicate that the algorithms can effectively detect the ROI (from far to near).

        Table 1 Comparison results of candidate box extraction

        Table 2 ROI detection results of partial frames

3.1.2 Corner detection

100 images (from far to near) within 2 km of the runway were selected to test the runway detection algorithm described above, and the corner coordinates were further determined from the equations of the detected lines. Table 3 shows the detection results of partial frames from far to near.

        Table 3 Corner detection results of partial frames

Two other methods are used for comparison with the proposed algorithm. One adopts only the Hough transform for line segment detection; the other applies the Hough transform after mathematical morphology (MM) image processing. The comparison results are shown in Table 4, and the effectiveness of the methods is compared in two respects: success rate and real-time performance.

Table 4 Comparison results of different detection methods

It can be seen that the algorithm proposed in this paper effectively improves the accuracy of the corner detection results, with almost no increase in processing time compared with the method using only the Hough transform.

        3.2 Simulation Results of Parameter Estimation

The simulation experiments on position and attitude estimation for UAV landing examine the feasibility and robustness of the presented algorithm. Five points on the runway, shown in Fig. 14, were selected as feature points. Supposing the runway is 1000 m long and 60 m wide, the coordinates of the five feature points in the ERF (xw, yw, zw) are given in Table 5.

        Fig.14 Five feature points in ERF

        Table 5 Coordinates of five feature points in ERF

The estimation errors of the landing parameters mainly come from errors in feature point extraction. The errors were compared by simultaneously adding different levels of random noise to the image coordinates of the feature points, with the standard deviation (SD) denoting the noise level.
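For reference, the noise-injection step of this experiment can be sketched as below. The feature point layout is assumed from Fig. 14 (the four runway corners plus the midpoint of the starting line, since Table 5 is not reproduced here), and the focal length is an illustrative value, not one reported in the paper.

```python
import numpy as np

# Assumed layout of the five feature points on a 1000 m x 60 m runway (Fig. 14):
# four corners plus the midpoint of the starting line, in ERF coordinates.
Pw = np.array([[0.0, -30.0, 0.0], [0.0, 30.0, 0.0], [0.0, 0.0, 0.0],
               [1000.0, -30.0, 0.0], [1000.0, 30.0, 0.0]])

def project_with_noise(Pw, R, t, f=1000.0, sd=0.1, seed=0):
    """Project ERF points into the image and add Gaussian noise with the
    given standard deviation SD (in pixels); f is an assumed focal length."""
    rng = np.random.default_rng(seed)
    Qc = (R @ Pw.T).T + t                  # ERF -> CRF, Eq. (7)
    uv = f * Qc[:, :2] / Qc[:, 2:3]        # pinhole projection onto the image
    return uv + rng.normal(0.0, sd, uv.shape)
```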

3.2.1 Initial value

The initial value of the rotation matrix R affects the accuracy and speed of the OI algorithm under noisy conditions. The experiment compares the reprojection error of the OI algorithm with different initialization methods. The method Linear_OI obtains the initial value of R from the camera matrix according to the least-squares solution[26]. Considering that the rotation angles of the UAV change only within a small range during landing[27], the method Constant_OI sets the initial value of R to a fixed value (such as the matrix for which φ = 0, θ = 5°, ψ = 0). The EPnP_OI and RPnP_OI methods initialize the orientation parameters by the EPnP[28] and RPnP[29] algorithms, respectively. The comparison results are shown in Table 6.

        Table 6 Error comparison of different initialization algorithms

It can be seen that the estimation results optimized by orthogonal iteration are more accurate than those of the linear algorithm, and that the accuracy of the iteration results differs with the initialization algorithm. The results of Linear_OI, EPnP_OI, and RPnP_OI are more stable and accurate than those of Constant_OI.

Table 7 compares the processing speeds of the different algorithms. From the results, Linear_OI has higher accuracy than the linear algorithms and the lowest computational complexity among the initialization methods compared. Since the position and attitude estimation algorithm used for fixed-wing UAV landing must take both accuracy and real-time performance into account, this paper chooses Linear_OI as the solution method for the position and attitude parameters.

        Table 7 Time comparison of different initialization algorithms

3.2.2 Accuracy and robustness

The Linear_OI method was chosen to further study the effect of feature point extraction error on the different parameters. Figs. 15-18 show the estimation errors of the six parameters caused by different levels of coordinate error. The average errors over the entire landing process are given in Table 8.

        Fig.15 Errors caused by coordinate error with SD=0

        Fig.16 Errors caused by coordinate error with SD=0.01

        Fig.17 Errors caused by coordinate error with SD=0.1

        Fig.18 Errors caused by coordinate error with SD=1

        Table 8 Error comparison of different parameters

It is obvious that the estimation errors grow as SD increases. The results demonstrate that errors in the feature point coordinates have a greater impact on the attitude parameters than on the position parameters, and the figures show that the coordinate error of the feature points should not exceed SD = 0.1.

        4 Conclusions

This paper discusses vision-based relative position and attitude estimation between a fixed-wing UAV and the runway, divided into two parts: a corner detection method based on the Hough transform and gradient projection, and a position and attitude estimation method based on orthogonal iteration. Experiments verified the effectiveness of the corner detection method and showed that the estimation method has good real-time performance, accuracy, and global convergence when the error of the image point coordinates is small. How to reduce the impact of larger feature point coordinate errors remains an open problem in practice.
