

        A fast, accurate and dense feature matching algorithm for aerial images

        2021-01-06 12:19:14

        LI Ying, GONG Guanghong, and SUN Lin

        School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China

        Abstract: Three-dimensional (3D) reconstruction based on aerial images has broad prospects, and feature matching is an important step of it. However, for high-resolution aerial images, traditional algorithms usually suffer from long matching times, mismatches, and sparse feature pairs. Therefore, an algorithm is proposed to realize fast, accurate and dense feature matching. The algorithm consists of four steps. Firstly, we achieve a balance between the feature matching time and the number of matching pairs by appropriately reducing the image resolution. Secondly, to further screen out mismatches, a feature screening algorithm based on similarity judgment or local optimization is proposed. Thirdly, to make the algorithm more widely applicable, we combine the results of different algorithms to obtain dense results. Finally, all matching feature pairs in the low-resolution images are restored to the original images. Comparisons between the original algorithms and our algorithm show that the proposed algorithm can effectively reduce the matching time, screen out mismatches, and increase the number of matches.

        Keywords: feature matching, feature screening, feature fusion, aerial image, three-dimensional (3D) reconstruction.

        1. Introduction

        Three-dimensional (3D) reconstruction based on aerial images is a key research problem in photogrammetry and remote sensing, computer vision, and other fields. Pix4D [1] and ContextCapture3D [2] are common aerial photogrammetry software packages for 3D reconstruction.

        Both the hardware and the software sides of 3D reconstruction are of common concern. Nowadays, with the popularity of 3D reconstruction applications, more and more people want to realize 3D reconstruction on low-cost devices, such as laptop and tablet computers. These devices usually have low-end configurations, but they are easy to carry and can provide services anytime and anywhere. In terms of software, by optimizing and improving algorithms, researchers keep studying fast and accurate methods for practical 3D reconstruction.

        Taking the generation of dense 3D point clouds as an example, feature matching is an important step of 3D reconstruction. The number of features, the matching time, and the accuracy of the feature pairs directly affect subsequent steps such as aerial triangulation and the generation of sparse point clouds.

        Moreover, aerial images usually have high resolution, large coverage areas, many kinds of objects, and repeated textures. These characteristics lead to long matching times, uneven distribution of matching pairs, and confusion between adjacent features. Considering that deep-learning-based algorithms are usually limited by hardware and datasets, existing 3D reconstruction methods usually apply traditional feature matching algorithms directly for feature matching and aerial triangulation, resulting in excessively long processing times, few feature points, or even reconstruction failures.

        With increasing requirements on time, accuracy and large-scale scenes, it is particularly important to find a fast, accurate and dense feature matching algorithm for aerial images and low-cost devices. Therefore, such an algorithm is proposed in this paper. The main contributions of this work are as follows: (i) The algorithm realizes fast, accurate and dense feature matching of aerial images on low-cost devices. (ii) By finding the appropriate resolution of aerial images, feature matching can be performed quickly on low-cost devices while ensuring the number and distribution of feature pairs; the matching point pairs are then restored to the original images by an effective algorithm. (iii) Considering that aerial images are prone to mismatching due to their large coverage areas, a region-based feature screening algorithm is proposed to further screen the features. (iv) To ensure the quality of 3D reconstruction, it is necessary to further improve the number and distribution uniformity of feature pairs, and a feature fusion algorithm is proposed for this purpose.

        2. Related works

        Feature matching algorithms usually include feature pairing and feature screening, in which feature pairing includes feature detection, feature description and feature matching. Local feature point detection and description are the most important problems. Lowe [3] proposed the scale-invariant feature transform (SIFT) algorithm, which has scale invariance but whose main feature directions may not be accurate enough. Bay et al. [4] proposed the speeded up robust features (SURF) algorithm, which varies the size and scale of the Gaussian blur templates so as to avoid down-sampling and improve processing speed. Using the features from accelerated segment test (FAST) [5] algorithm, the oriented FAST and rotated binary robust independent elementary features (BRIEF) (ORB) algorithm [6] extracts features and obtains their main direction with the intensity centroid. The ORB algorithm has rotation invariance, but does not have scale invariance. In addition, the binary robust invariant scalable keypoints (BRISK) algorithm [7], the fast retina keypoint (FREAK) algorithm [8], the KAZE (a Japanese word meaning wind) algorithm [9] and the accelerated-KAZE (AKAZE) algorithm [10] all achieve good results, but they also have some limitations. Tareen et al. [11] presented a comprehensive comparison of the SIFT, SURF, KAZE, AKAZE, ORB, and BRISK algorithms, with experiments on diverse images taken from benchmark datasets.

        Considering the limitations of traditional image matching algorithms, related improved algorithms have been proposed. Some researchers chose to improve a single algorithm [12-14]. For example, many researchers improved the SIFT algorithm [15]. Considering that the SIFT algorithm is sensitive to nonlinear radiation distortions, a radiation-variation insensitive feature transform (RIFT) was proposed [16]. Color information [17] and scale-orientation joint restriction criteria [18] were also used to achieve robust feature matching.

        Some researchers chose to combine multiple algorithms to improve their results. Ma et al. [19] used the FAST algorithm and the ORB algorithm to realize an improved oriented feature description. Combined with decision tree theory, Wu [20] proposed a feature detection method based on grayscale-information FAST operators and then used the BRIEF feature description method to describe the points. Aiming at the fact that ORB descriptors do not have scale invariance, an improved feature point matching algorithm borrowing the idea of BRISK was proposed [21].

        Considering different application situations, researchers have put forward many new methods. To accommodate repetitive texture and unknown distortion, Li et al. [22] proposed a novel region descriptor formed by four feature points to improve matching accuracy. Zhu et al. [23] combined the second-order characteristics of points with the Hessian matrix to detect more feature points.

        However, the above algorithms are usually suitable for ordinary low-resolution images. Considering the time and texture characteristics of high-resolution images, further research is needed [24]. Xi et al. [25] used several different point feature extraction operators to extract features from aerial images of different scenes and analyzed their performance and adaptability. Based on a competency criterion and on scale and location distribution constraints, Amin et al. [26] proposed a novel method to extract uniform and robust local features from remote sensing images. Moreover, researchers adopted different algorithms for different applicable images, such as coastal remote sensing images [27], images with low contrast or homogeneous textures [28,29], and high-resolution aerial images with rich edges [30]. For aerial images, time cost, as one of the most important evaluation indices of matching algorithms, is also worth considering. To reduce computational complexity, Song et al. [31] achieved faster matching through an iterative transform simulation.

        In addition, feature screening is usually carried out to improve accuracy. There are three traditional screening algorithms: the nearest-to-next-neighbor ratio (the ratio algorithm), the crosscheck algorithm, and the random sample consensus (RANSAC) method [32]. Among them, the RANSAC algorithm sets objective functions according to specific problems and actual conditions, and classifies all feature points into inliers and outliers. The accuracy of RANSAC is high, but the number of retained feature points is not stable. There have also been many attempts to improve these algorithms to remove mismatches. Based on the SURF bidirectional matching method, Gui et al. [33] presented a point-pattern matching method using the shape context. Xi et al. [34] improved the RANSAC algorithm by adding image gray-level information. Li et al. [35] proposed a mismatch-removal algorithm called locality affine-invariant matching. Considering viewpoint differences, Song et al. [31] proposed a homography matrix evaluation method based on a geometric approach. Based on the scale-invariant feature transform matching algorithm, Gao et al. [36] proposed and compared several improved false-match elimination algorithms.

        The evaluation of a feature matching algorithm is an important way to judge its advantages and disadvantages. Xi et al. [34] used the root mean square error (RMSE) to evaluate the quality of image matching. Li et al. [16] used the number of correct matches (NCM), RMSE, mean error (ME), and success rate (SR) as evaluation metrics. Song et al. [31] also used the NCM to evaluate their algorithm. Gesto-Diaz et al. [37] evaluated 28 different combinations of detectors and descriptors, and assessed the matching results based on the receiver operating characteristic curve associated with all combinations.

        In conclusion, obtaining fast, accurate and dense feature pairs through algorithm improvement is a key problem for high-resolution aerial images. In this work, we propose an algorithm with four steps, which are described in detail in the following sections.

        3. Algorithm

        The traditional steps of feature matching mainly include feature pairing and feature screening. In this paper, we implement a fast, accurate and dense feature matching algorithm that includes four steps: image down-sampling matching, region-based feature screening, fusion of feature pairs from different matching algorithms, and restoration of matching pairs to their original image locations. The flow chart is shown in Fig. 1.

        Fig. 1 Algorithm flow chart

        3.1 Down-sampling matching of aerial images

        Searching for tie points to calculate aerial triangulation is the first main step of 3D reconstruction from aerial images. As shown in Table 1, we use ContextCapture3D [2] to reconstruct sparse point clouds from 133 aerial images at different resolutions. It is found that as the resolution of the images increases, the number of tie points decreases.

        Considering that the SURF [4] algorithm and the RANSAC [32] algorithm are usually used for image feature matching, in order to verify the influence of image resolution on feature matching, we use aerial images to perform down-sampling matching experiments. For an image of N×M pixels, if the down-sampling coefficient is k, a new image is formed by taking one pixel every k pixels in each row and column of the original image. We separately record the image results and data results, including (i) the number of matching pairs before screening, (ii) the number of matching pairs after screening, (iii) the pairing time, and (iv) the screening time, as shown in Table 2.
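        The down-sampling rule above is simply strided pixel selection. As a minimal sketch (the function name `downsample` is ours, not from the paper), it can be written with NumPy slicing:

```python
import numpy as np

def downsample(image: np.ndarray, k: int) -> np.ndarray:
    """Keep one pixel every k pixels along each row and column."""
    return image[::k, ::k]

# A 6x8 "image" shrinks to 3x4 when k = 2.
img = np.arange(48).reshape(6, 8)
small = downsample(img, 2)
```

        Note that this keeps rows and columns 0, k, 2k, ..., so an N×M image becomes roughly ⌈N/k⌉×⌈M/k⌉ pixels.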

        Table 1 3D reconstruction results with different resolution images

        As shown in Table 2, the higher the image resolution, the more feature points there are before screening, and the longer the pairing and screening times. However, the number of feature points after screening first increases and then decreases. These results are consistent with those obtained by ContextCapture3D.

        Table 2 Aerial image down-sampling matching experiment

        Therefore, in order to balance the real-time performance and the number of matches, the resolution of aerial images can be appropriately reduced according to the actual situation. In Section 4.1, experiments and analyses are carried out on ten different aerial image pairs to obtain the relationship between image resolution, the number of matching pairs, and the matching time. Furthermore, we give the appropriate resolution of aerial images.

        3.2 Region-based feature pair screening

        In the above experimental results, although the RANSAC algorithm has been used for point pair screening, there are still some mismatched point pairs. Taking the 460×307-pixel image result above as an example, the feature matching is not completely accurate, as shown by the circles in Fig. 2. It can be found that when feature points are close together, mismatching is likely to occur; sometimes one point even corresponds to multiple points. At the same time, due to locally similar textures around different feature points, general screening algorithms cannot completely remove the mismatched pairs. This problem is particularly evident in aerial images with repeating textures, such as buildings or forests.

        The number of feature pairs after RANSAC [32] screening is 164. At this point, if the ratio algorithm or the crosscheck algorithm is further applied for screening, the number of remaining feature pairs may decrease sharply. Therefore, we propose a region-based feature pair screening algorithm, which is designed for the high similarity between adjacent aerial images. It includes a screening algorithm based on similarity judgment and a screening algorithm based on local optimization.

        Fig. 2 Mismatching result

        3.2.1 Similarity judgment based screening algorithm

        The screening algorithm based on similarity judgment uses image similarity indices such as the peak signal to noise ratio (PSNR), the structural similarity (SSIM) index and the perceptual hash to further screen the matched pairs. By comparing the image similarity of the areas surrounding each matched pair, we can judge whether the two points belong to the same area, and then eliminate the point pairs that do not.

        PSNR can be used to judge the similarity of images. The larger the PSNR value between two images, the more similar they are. The PSNR value P can be calculated by the following equation:

        P = 10 log10((2^n − 1)^2 / MSE)

        where MSE is the mean square error between the two surrounding areas, and n is the number of bits per sampled value. For example, the corresponding n value of a 0-255 grayscale image is 8.
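        A small sketch of this PSNR index (the function name `psnr` is our own; identical patches are reported as infinitely similar):

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, n: int = 8) -> float:
    """P = 10*log10((2**n - 1)**2 / MSE); larger means more similar."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical patches
    return float(10.0 * np.log10(((2 ** n - 1) ** 2) / mse))
```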

        SSIM is an indicator measuring the similarity between two images. It is designed around the visual characteristics of human eyes, and is therefore more in line with human visual perception than traditional methods. The larger the SSIM value, the more similar the two images. The SSIM value S(x, y) is calculated over windows; if the size of the window is n×n pixels, then S(x, y) can be calculated by the following equation:

        S(x, y) = ((2 μx μy + c1)(2 σxy + c2)) / ((μx^2 + μy^2 + c1)(σx^2 + σy^2 + c2))

        where x and y are the two compared windows; μx and μy represent the means of x and y; σx and σy represent the standard deviations of x and y; σxy represents the covariance of x and y; c1 and c2 are constants to avoid the denominator being 0.
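        The SSIM formula over a single window can be sketched as follows. The default constants c1 and c2 are our assumption, taken from the common convention c1 = (0.01·255)^2, c2 = (0.03·255)^2 for 8-bit images; the paper does not specify them:

```python
import numpy as np

def ssim(x: np.ndarray, y: np.ndarray,
         c1: float = 6.5025, c2: float = 58.5225) -> float:
    """SSIM of two equally sized windows; 1.0 means identical."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # variances (sigma^2)
    cov = ((x - mx) * (y - my)).mean()             # covariance sigma_xy
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```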

        The perceptual hash generates a "fingerprint" string for each image and then compares the fingerprints of different images: the closer the fingerprints, the more similar the images. Mean hash is a kind of perceptual hash, which typically scales photos down and simplifies colors to reduce the complexity of the information. In this work, we improve the mean hash calculation: in order to preserve image information, we do not change the size or color. The procedure for comparing image similarity using the improved mean hash value H is as follows.

        Step 1  Calculate the grayscale average G_avg of all pixels in the image.

        Step 2  For each pixel in the image, compare its gray level G with G_avg. If G ≥ G_avg, the result R is 1; if G < G_avg, R is 0.

        Step 3  Combine the R of each pixel to form a string, which is the fingerprint of the image. Ensure that all images use the same combination order.

        Step 4  Compare the fingerprints of different images and calculate the difference value H. The bigger the difference, the more different the images.
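        The four steps above can be sketched as follows (function names are ours; the fingerprint difference H is computed as a Hamming distance, which we take to be the intended comparison):

```python
import numpy as np

def mean_hash(gray: np.ndarray) -> np.ndarray:
    """Steps 1-3: threshold every pixel against the image's grayscale mean,
    then flatten in a fixed (row-major) order to form the fingerprint."""
    return (gray >= gray.mean()).astype(np.uint8).ravel()

def hash_difference(h1: np.ndarray, h2: np.ndarray) -> int:
    """Step 4: number of differing fingerprint bits; bigger = more different."""
    return int(np.count_nonzero(h1 != h2))
```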

        Using the above three similarity judgment indices, we can screen out the mismatches. The screening algorithm based on similarity judgment contains the following steps.

        Step 1  Obtain the feature pairs to be screened and their related data in the two images.

        Step 2  Set i = 1, and obtain the feature points of pair number i in both images.

        Step 3  Select square regions of n×n pixels centered on the two feature points.

        Step 4  Calculate P, S(x, y) or H for the two square regions, and set a threshold T corresponding to the chosen index.

        Step 5  Compare P, S(x, y) or H with the corresponding T. If the image similarity is high enough, retain the matching pair; otherwise, regard pair number i as a mismatch and delete it.

        Step 6  Set i = i + 1 and repeat Step 2 to Step 5 until i = N, where N is the number of matched pairs. Finish the loop.
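        The loop above can be sketched as one pass over the pairs; this is our own compact reorganization, with PSNR standing in for the chosen similarity index and all names (`screen_by_similarity`, `_psnr`) hypothetical:

```python
import numpy as np

def _psnr(a, b, n_bits=8):
    # Peak signal-to-noise ratio between two equally sized patches.
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10 * np.log10((2 ** n_bits - 1) ** 2 / mse))

def screen_by_similarity(pairs, img1, img2, n=20, threshold=8.0, score=_psnr):
    """Keep a pair only when the n x n regions centered on its two points
    are similar enough under the chosen index. Points are (x, y) tuples."""
    half = n // 2
    kept = []
    for (x1, y1), (x2, y2) in pairs:
        p1 = img1[y1 - half:y1 + half, x1 - half:x1 + half]
        p2 = img2[y2 - half:y2 + half, x2 - half:x2 + half]
        if p1.shape != p2.shape or p1.size == 0:
            continue  # region falls outside the image; treat as unverifiable
        if score(p1, p2) >= threshold:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```

        For the mean-hash index, the comparison in the last `if` would be inverted (retain when the difference H is below the threshold).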

        3.2.2 Local optimization based screening algorithm

        When a local area contains a large number of feature points, mismatches will occur and are difficult to screen out. Therefore, in order to improve the matching accuracy, we use a screening algorithm based on local optimization to choose the best pairs in local areas. The algorithm determines the most similar feature points according to the distance between feature points and their degree of similarity. The screening algorithm based on local optimization contains the following steps.

        Step 1  Select a distance threshold D.

        Step 2  Set i = 1.

        Step 3  Obtain the feature points of pair number i in both images.

        Step 4  Set j = i + 1.

        Step 5  Obtain the feature points of pair number j in both images.

        Step 6  Calculate the distance d between the point of pair number i and the point of pair number j in the first image.

        Step 7  Compare D with d. If D ≥ d, calculate the matching degree of these two pairs, retain the more similar one, and turn to Step 8. If D < d, turn to Step 9.

        Step 8  Determine whether the retained pair is pair number i. If yes, turn to Step 9. If no, turn to Step 11.

        Step 9  Set j = j + 1.

        Step 10  Determine whether pair number j − 1 is the last pair. If yes, turn to Step 11. If no, turn to Step 5.

        Step 11  Set i = i + 1.

        Step 12  Determine whether pair number i is the last pair. If yes, finish the loop. If no, turn to Step 3.
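        Steps 1-12 above amount to a pairwise suppression: within any neighborhood of radius D in the first image, only the most similar pair survives. A compact sketch (names are ours; pair similarities are assumed to be precomputed scores, higher = better):

```python
import math

def screen_by_local_optimization(pairs, similarity, D=8.0):
    """pairs: list of ((x1, y1), (x2, y2)); similarity[i] scores pair i.
    Within distance D in the first image, keep only the better pair."""
    alive = [True] * len(pairs)
    for i in range(len(pairs)):
        if not alive[i]:
            continue
        for j in range(i + 1, len(pairs)):
            if not alive[j]:
                continue
            (xi, yi), _ = pairs[i]
            (xj, yj), _ = pairs[j]
            if math.hypot(xi - xj, yi - yj) <= D:
                if similarity[i] >= similarity[j]:
                    alive[j] = False      # pair i wins; drop pair j
                else:
                    alive[i] = False      # pair i loses; move to next i
                    break
    return [p for p, a in zip(pairs, alive) if a]
```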

        3.3 Fusion of different matching algorithms

        Different feature algorithms have different applicability, so the matching feature pairs from different algorithms can differ considerably. For example, for the same two images, the SURF algorithm [4] and the ORB algorithm [6] will produce different matches. In order to increase the number of feature pairs and make the algorithm more widely applicable, we can use two or more of the traditional image matching algorithms in Section 2 to match the images and fuse the matching results together. Therefore, we put forward a fusion algorithm for the feature pairs of different matching algorithms to make the feature pairs dense.

        As an example, we fuse SURF [4] + RANSAC [32] and ORB [6] + RANSAC [32]. The process is shown in Fig. 3 and the steps are as follows.

        Fig. 3 Fusion of the feature pairs of SURF and ORB

        Step 1  Use the two algorithms to obtain two sets of matching pairs, called Sequence 1 and Sequence 2.

        Step 2  Successively extract the feature pairs in Sequence 1 and Sequence 2, and read the coordinate information of the point pairs.

        Step 3  Compare the coordinates to determine whether there is any coincident pair or coincident point.

        Step 4  If there are coincident pairs or points, retrieve the two pairs' information and judge which pair has the higher similarity. Then, mark the less similar pair.

        Step 5  Check the tags of all pairs in Sequence 1 and Sequence 2. If a pair is unmarked, add it to Sequence 3. Sequence 3 is the fusion result.
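        The fusion steps above can be sketched as follows (a simplified reading: function names are ours, "coincident" is taken to mean exact coordinate equality, and per-pair similarity scores are assumed to be available from the matchers):

```python
def fuse(seq1, seq2, sim1, sim2):
    """Merge two match sequences. Pairs are ((x1, y1), (x2, y2)); when a
    point coincides across sequences, keep the pair with the higher score."""
    marked1 = [False] * len(seq1)
    marked2 = [False] * len(seq2)
    for i, (a1, b1) in enumerate(seq1):
        for j, (a2, b2) in enumerate(seq2):
            if a1 == a2 or b1 == b2:          # coincident pair or point
                if sim1[i] >= sim2[j]:
                    marked2[j] = True          # mark the less similar pair
                else:
                    marked1[i] = True
    # Sequence 3: every unmarked pair from both sequences.
    return [p for p, m in zip(seq1, marked1) if not m] + \
           [p for p, m in zip(seq2, marked2) if not m]
```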

        3.4 Matching pairs restoration

        The following operations have been completed by the above process: fast feature matching after down-sampling, elimination of erroneous pairs through screening, and an increase of point pairs through fusion of the methods. Finally, the matched pairs need to be restored to the original images to achieve high-resolution feature pair matching. Since each point actually corresponds to a small surrounding area after restoration (the specific size is related to k), the most similar pair within the surrounding area is used as the matching pair after restoration. The specific steps are as follows.

        Step 1  As in Section 3.1, for an image of N×M pixels, if the down-sampling coefficient is k, we take one pixel every k pixels in each row and column to form a down-sampled image. Therefore, in this step, multiply the coordinates of each point in pair i by k to achieve a preliminary restore operation, where i = 1 at first.

        Step 2  Take each restored feature point as the center, and take the range of (k+1)×(k+1) pixels around it as the region to be matched.

        Step 3  Calculate the descriptors of each point in the surrounding region, and match the points in the surrounding region in turn with the points in the corresponding region.

        Step 4  Find the most similar point pair in the two regions and reserve it as the final matching pair i.

        Step 5  If pair i is not the last pair, set i = i + 1, and repeat the restore operation from Step 1 to Step 4.
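        For one pair, the restoration can be sketched as below. This is an illustration, not the paper's implementation: patch sum-of-squared-differences stands in for the real descriptors of Step 3, and the search window of half-width ⌊k/2⌋ approximates the (k+1)×(k+1) region:

```python
import numpy as np

def restore_pair(img1, img2, p1, p2, k, patch=3):
    """Scale a low-resolution match (x, y) points by k, then refine by testing
    candidate offsets around both restored points and keeping the candidate
    pair whose small patches agree best (lowest SSD)."""
    (x1, y1) = (p1[0] * k, p1[1] * k)
    (x2, y2) = (p2[0] * k, p2[1] * k)
    half, r = patch // 2, k // 2
    best, best_ssd = ((x1, y1), (x2, y2)), float("inf")
    for dy1 in range(-r, r + 1):
        for dx1 in range(-r, r + 1):
            for dy2 in range(-r, r + 1):
                for dx2 in range(-r, r + 1):
                    a = img1[y1 + dy1 - half:y1 + dy1 + half + 1,
                             x1 + dx1 - half:x1 + dx1 + half + 1]
                    b = img2[y2 + dy2 - half:y2 + dy2 + half + 1,
                             x2 + dx2 - half:x2 + dx2 + half + 1]
                    if a.shape != b.shape or a.size == 0:
                        continue  # candidate falls outside the image
                    ssd = float(np.sum((a.astype(float) - b.astype(float)) ** 2))
                    if ssd < best_ssd:
                        best = ((x1 + dx1, y1 + dy1), (x2 + dx2, y2 + dy2))
                        best_ssd = ssd
    return best
```

        The exhaustive candidate search is O(k^4) per pair, which is acceptable only because k is small (the paper's experiments use k = 4).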

        4. Experiments

        Experimental results and data analyses are provided in this section. Section 4.1 introduces the influence of image down-sampling. Section 4.2 presents the region-based feature pair screening. Section 4.3 provides the fusion algorithm results. Section 4.4 describes the restoration to the original resolution. Section 4.5 summarizes the experimental results of all steps.

        4.1 Influence of image down-sampling

        To quantitatively analyze the impact of image resolution on matching points and time, we define the average time spent on each point pair as AToEP, and the AToEP value A can be calculated as

        A = TT / NOM

        where TT is the total time and NOM is the number of output matching pairs.

        Ten aerial image pairs taken from different angles and in different scenes are selected. They are processed into images of different resolutions to calculate A and NOM. The results are shown in Fig. 4 and Fig. 5. A small A and a large NOM are desirable.

        Fig. 4 AToEP changes of different image pairs

        Fig. 5 Matching pair number changes of different image pairs

        From the results shown above, images with resolutions between 500×500 pixels and 2000×2000 pixels achieve a good balance between time and the number of point pairs: they not only have low AToEP values, but also a large number of matching pairs. Therefore, the images can be reduced to a certain range, such as 500×500 pixels to 2000×2000 pixels, after which feature matching and feature screening can be done.

        4.2 Region-based feature pair screening

        A high threshold for PSNR or SSIM, or a low threshold for the mean hash, indicates strict requirements on image similarity, so few feature pairs survive the final screening. Meanwhile, when selecting the size of the local areas to match, some parallax occlusion occurs because different images are taken from different angles. As a result, when the side length of the local area is large, the time spent is long and the final retained pairs are sparse; when the area is too small, a large number of correct pairs are screened out. The same is true for the screening algorithm based on local optimization.

        After experimental comparison, three aspects are comprehensively considered: the quantity of retained features, the quality of retained features, and the time for error pair removal.

        For the screening algorithm based on similarity judgment, the local area side length is 20 pixels, the threshold value of PSNR is 8, the threshold value of SSIM is 0.1, and the threshold value of mean hash is 200. For the screening algorithm based on local optimization, the distance D between the points is 8 pixels.

        We analyze the effect of the algorithm qualitatively and quantitatively.

        The qualitative analysis compares the experimental image results visually and judges whether the wrong matching point pairs have been removed. We conduct the qualitative experimental comparison with Fig. 2, in which the mismatching pairs in the circles are obvious. If the proposed screening algorithm is effective, we can see the reduction of mismatches in the images.

        Meanwhile, we use the NCM to carry out the quantitative analysis of accuracy. The method to calculate the matching accuracy is as follows.

        Step 1  Obtain the correspondence between the two images by calculating the homography matrix.

        Step 2  Use this transformation to get the corresponding positions in image 2 of the feature points in image 1.

        Step 3  Calculate the distance between each corresponding point and its matching point in image 2.

        Step 4  Considering the occlusion between the images and the calculation error of the homography matrix, when the distance is less than 9 pixels (two percent of the image length), we regard it as a correct matching pair.

        In Section 4.1, we defined the number of output matching pairs as NOM. Here, we denote the number of correct matching pairs as NCM and the number of wrong matching pairs as NWM. The three indices satisfy the following relation:

        NOM = NCM + NWM.
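        The NCM computation described in Steps 1-4 can be sketched as follows (names are ours; the homography H is assumed to be already estimated from the matches):

```python
import numpy as np

def count_correct_matches(H, pts1, pts2, tol):
    """Project image-1 points with homography H; a match counts toward NCM
    when the projected point lies within tol pixels of its partner in image 2."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    ones = np.ones((len(pts1), 1))
    proj = (H @ np.hstack([pts1, ones]).T).T      # homogeneous projection
    proj = proj[:, :2] / proj[:, 2:3]             # back to Cartesian
    dist = np.linalg.norm(proj - pts2, axis=1)    # Step 3: residual distances
    return int(np.count_nonzero(dist < tol))      # Step 4: threshold at tol
```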

        We carry out screening experiments based on the similarity judgment algorithm and the local optimization algorithm, in which the similarity judgment experiments use the PSNR value, the SSIM value, and the improved mean hash value. In order to verify the accuracy, the feature pair quantity and the screening time, we compare the image results and the experimental data, including (i) the number of matching pairs before screening, (ii) NOM, (iii) NCM, (iv) NWM, and (v) the screening time, as shown in Table 3.

        Table 3 Region-based feature pair screening experiment

        According to the experimental results, the above algorithms can effectively screen out mismatching pairs and improve the matching accuracy in a relatively short time. Comparison of the results shows that the proposed screening algorithms produce relatively consistent results. It is worth noting that the pixel threshold used here in calculating NCM can be adjusted according to the occlusion and the shooting distance of the adjacent images. Moreover, we can adjust the thresholds of the algorithms or select different algorithms according to specific requirements such as real-time performance or feature pair quantity.

        4.3 Fusion algorithm results

        Since both the ORB algorithm and the SURF algorithm have the advantages of short matching time and stable matching effect, they are selected for the fusion experiments. Meanwhile, in order to improve the speed and the accuracy, the down-sampled images in Fig. 3 and the screening algorithm based on local optimization are adopted. We perform experiments on the SURF algorithm, the ORB algorithm, and the fusion of the two. Table 4 shows the statistics of image results, number of feature pairs, and time.

        Table 4 Experimental results of the algorithm based on feature pair fusion

        In the above experiment, the number of SURF matching pairs is 118, and the number of ORB matching pairs is 107; in the end, a total of 221 pairs are obtained. It can be seen that, after the fusion of the two algorithms, the final feature pairs cover a wider area and are much denser than those of either algorithm alone.

        4.4 Restoration to the original resolution

        We restore the results of the fusion of the SURF and ORB algorithms in Section 4.3 to the original images; the comparison is shown in Table 5.

        Table 5 Experimental results of restoration

        4.5 Experimental results of all steps

        In order to verify the feasibility of the algorithm, we carry out experiments on aerial image pairs of three different regions, as shown in Fig. 6.

        The initial resolution of the aerial image pairs is 3680×2456 pixels. Using the conclusion obtained in Section 4.1, we first down-sample the images to 920×614 pixels. Then we use the traditional algorithm (SURF+RANSAC) and the proposed algorithm to perform feature point matching. In order to observe the application of aerial image feature matching results in 3D reconstruction, a structure from motion (SfM) algorithm is run and sparse point cloud reconstruction results are obtained. The experimental results are shown in Table 6, including image results, SfM results, time results, (i) NOM, and (ii) NCM.

        From the above experimental verification, we can see that the number and distribution of feature point pairs directly affect the generation of sparse point clouds. Compared with the traditional algorithm, the proposed algorithm can quickly obtain accurate, dense and uniformly distributed feature point pairs, which is very helpful for 3D reconstruction.

        Fig. 6 Three groups of tested aerial images

        Table 6 Comparison between SURF+RANSAC and our algorithm

        5. Conclusions

        In this paper, a fast, accurate and dense feature matching algorithm is realized by down-sampling and matching the aerial images, screening out mismatches based on regions, fusing the pairs of different matching algorithms, and restoring the matching pairs to the original images. This algorithm can solve the feature matching problems of high-resolution aerial images on low-cost devices, such as long matching times, few matching points and uneven distribution. Our experiments show that the matching time is greatly reduced, mismatches are removed, and the matching pairs are increased and evenly distributed.

        In the future, we will conduct follow-up research, for example, on the automatic selection of fusion algorithms for different image textures. In terms of matching time, the time pressure brought by combining two or more algorithms will be studied further.
