

        A multi-source image fusion algorithm based on gradient regularized convolution sparse representation


        WANG Jian,QIN Chunxia,ZHANG Xiufei,YANG Ke,and REN Ping

        1.School of Electronics and Information Engineering,Northwestern Polytechnical University,Xi’an 710072,China;2.No.365 Institute,Northwestern Polytechnical University,Xi’an 710065,China

Abstract: Image fusion based on the sparse representation (SR) has become a primary research direction of transform-domain methods. However, SR-based image fusion algorithms have high computational complexity and neglect the local features of an image, resulting in limited detail retention and a high sensitivity to registration misalignment. In order to overcome these shortcomings and the noise existing in the images during the fusion process, this paper proposes a new signal decomposition model, namely a multi-source image fusion algorithm based on the gradient regularized convolution SR (CSR). The main innovation of this work is using a sparse optimization function to perform a two-scale decomposition of the source image into high-frequency and low-frequency components. The sparse coefficients are obtained by the gradient regularized CSR model, and the optimal high-frequency component of the fused image is obtained by taking the maximum of the sparse coefficients. The best low-frequency component is obtained by a fusion strategy using the extreme or the average value. The final fused image is obtained by adding the two optimal components. Experimental results demonstrate that this method greatly improves the ability to maintain image details and reduces the sensitivity to image registration misalignment.

Keywords: gradient regularization, convolution sparse representation (CSR), image fusion.

1. Introduction

Image fusion technology is very useful for obtaining images that are more conducive to human visual perception and computer processing [1]. The simplest approach is pixel-level fusion. Wang et al. [2] proposed a fast weighted guided filter image fusion algorithm. Yang et al. [3] presented a fusion framework based on block-matching and 3D (BM3D) multi-scale transform. Ma et al. [4] proposed a fusion algorithm for infrared and visible images using a generative adversarial network, termed FusionGAN. In the above methods, if the representation is not shift-invariant, then under a small geometric distortion or image registration misalignment the decompositions of different source images will not show the same features, so non-registered areas of the fused image are prone to the pseudo-Gibbs phenomenon. In fact, Blanc and Zhang discussed related methods that can be insensitive to image misregistration in image fusion [5,6].

Recently, with the development of compressed sensing, the sparse representation (SR) [7] and dictionary learning have provided new tools for image fusion. Image fusion refers to the process of integrating multiple images of the same scene collected by different sensors into a comprehensive image for observation or further processing [8,9]. The SR is an image representation theory that selects a sparse linear combination of atoms from a given overcomplete dictionary to describe an image or image block. It has been successfully applied to many image processing problems, such as denoising, interpolation and recognition [10].

Due to its sparsifying capability, SR-based image fusion [11–15] has received more and more attention. Liu et al. [11] proposed a convolution SR (CSR) image fusion framework for multi-focus and multi-modal images, in which each source image is decomposed into a base layer and a detail layer. Yang et al. [12] established a sparse decomposition model under the discrete cosine transform (DCT) dictionary and used the orthogonal matching pursuit (OMP) method for the SR. Its advantages are good image fusion and low computational complexity, but its sparsifying ability is poor and it depends too heavily on the geometric features of the image. Li et al. [13] merged sparsity constraints to ensure that the obtained sparse coefficients have group sparsity. Chen et al. [14] added gradient sparsity constraints to the sparse solution, which makes the sparse coefficients reflect sharp edges accurately. In order to take advantage of the correlations present in the image, Yang and Li [15] used simultaneous OMP (SOMP) to jointly decompose segments from multiple sources on the same dictionary, making non-zero coefficients from different sources appear in the same locations. An overcomplete dictionary can be learned from a large number of training samples similar to the input image [16–19], thereby increasing the adaptability of the representation. However, a single general dictionary does not accurately reflect the complex structures in the input image. Therefore, Kim et al. [20] clustered the training samples into a number of structural groups and then trained a specific sub-dictionary on each group. This makes each sub-dictionary best suited to a particular structure, so the entire dictionary has a strong representation ability. Wang et al. [21] constructed a spectral dictionary and a spatial detail dictionary for the fusion of multi-spectral and panchromatic images, respectively. Zhang et al. [22] designed an efficient multi-focus image fusion method based on the SR, which takes into account the detailed information of each image block and its spatial neighborhood. In [23], a multi-focus image fusion algorithm based on the discrete wavelet transform (DWT) and the SR is proposed, which can better preserve the focus area of the source image and reduce artifacts. A joint sparse model was designed in [24–26] to retain as much texture detail information as possible.

In the above image fusion algorithms, the SR only reflects the global characteristics of the image and ignores the local features, thus affecting the final fusion effect. In addition, the computational complexity of dictionary learning in the above SR methods is high. Therefore, this paper proposes a multi-source image fusion algorithm based on the gradient regularized CSR, which is not sensitive to image misregistration and outperforms traditional methods in terms of both subjective visual quality and objective evaluation.

2. Gradient regularized CSR

The traditional block SR assumes that a signal is a sparse linear combination of dictionary atoms,

$$s=Dx \tag{1}$$

and the sparse coefficients are estimated by solving

$$\min_{x}\ \frac{1}{2}\|Dx-s\|_2^2+\lambda\|x\|_1 \tag{2}$$

where D is a dictionary matrix, x is a sparse coefficient vector, s represents the input source image and λ is the regularized parameter.
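For illustration only, the unconstrained problem (2) can be solved with a generic iterative soft-thresholding (ISTA) scheme. The sketch below is a minimal NumPy solver under assumed names and shapes, not the algorithm used in this paper:

```python
import numpy as np

def ista_sparse_code(D, s, lam, n_iter=200):
    """Solve (2) by iterative soft thresholding (ISTA).

    D: (n, K) dictionary matrix, s: (n,) signal or vectorized patch,
    lam: l1 regularization weight. Illustrative generic solver.
    """
    x = np.zeros(D.shape[1])
    t = 1.0 / np.linalg.norm(D, 2) ** 2   # step size 1/L with L = ||D||_2^2
    for _ in range(n_iter):
        grad = D.T @ (D @ x - s)          # gradient of the data-fidelity term
        v = x - t * grad
        x = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)  # soft threshold
    return x
```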

The CSR [27] is commonly used for noise reduction with a convolutional basis. It yields a single optimal representation over the whole image, which preserves image details well and, in particular, is translation invariant. The mathematical expression is as follows:

$$\min_{\{x_m\}}\ \frac{1}{2}\Big\|\sum_{m=1}^{M}d_m*x_m-s\Big\|_2^2+\lambda\sum_{m=1}^{M}\alpha_m\|x_m\|_1 \tag{3}$$

where d_m (m ∈ {1,2,...,M}) is a set of M dictionary filters; M is the number of filters; ∗ denotes convolution; x_m is the set of sparse coefficient maps; and α_m represents the weight of the l_1 norm of the m-th coefficient map.

The method decomposes the signal into a sum of convolutions of the filters with their corresponding coefficient maps. To represent the low-frequency components of the image, an l_2-norm gradient regularization term on the coefficient maps is added to (3) [28]. Considering the edge-smoothing effect of this gradient regularization, a total variation regularization (TVR) alternative is adopted, and (3) becomes

$$\min_{\{x_m\}}\ \frac{1}{2}\Big\|\sum_{m=1}^{M}d_m*x_m-s\Big\|_2^2+\lambda\sum_{m=1}^{M}\alpha_m\|x_m\|_1+\frac{\mu}{2}\sum_{m=1}^{M}\beta_m\sum_{l=0}^{1}\|g_l*x_m\|_2^2 \tag{4}$$

where β_m represents the coefficient weight of the l_2 norm of the m-th map; μ is the regularized parameter; g_0 and g_1 are gradient filters along the rows and columns of the image, with g_0 = [−1 1] and g_1 = [−1 1]^T.
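To make the model concrete, the following NumPy sketch (illustrative code with assumed array shapes and per-map weight vectors, not the authors' implementation) evaluates the gradient regularized CSR objective of (4):

```python
import numpy as np
from scipy.signal import fftconvolve

def gradreg_csr_objective(d, x, s, lam, mu, alpha, beta):
    """Evaluate the gradient-regularized CSR objective (4).

    d: (M, kh, kw) dictionary filters, x: (M, H, W) coefficient maps,
    s: (H, W) source image, alpha/beta: per-map weights of length M.
    """
    g0 = np.array([[-1.0, 1.0]])   # row gradient filter g_0
    g1 = g0.T                      # column gradient filter g_1
    recon = sum(fftconvolve(xm, dm, mode="same") for dm, xm in zip(d, x))
    data = 0.5 * np.sum((recon - s) ** 2)                       # data term
    l1 = lam * sum(a * np.abs(xm).sum() for a, xm in zip(alpha, x))
    grad = 0.5 * mu * sum(
        b * (np.sum(fftconvolve(xm, g0, mode="same") ** 2)
             + np.sum(fftconvolve(xm, g1, mode="same") ** 2))
        for b, xm in zip(beta, x))                              # TVR term
    return data + l1 + grad
```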

3. Alternating direction method of multipliers (ADMM) algorithm of gradient regularization

The ADMM is a dual convex optimization algorithm, which decomposes a problem into several subproblems to be solved alternately [29]. Linear operators D_m and G_l (l ∈ {0,1}) are defined such that D_m x_m = d_m ∗ x_m and G_l x_m = g_l ∗ x_m. Stacking D = (D_1 D_2 ··· D_M) and x = (x_1^T x_2^T ··· x_M^T)^T, the final term of (4) is transformed into

$$\frac{\mu}{2}\sum_{l=0}^{1}\|\Gamma_lx\|_2^2 \tag{5}$$

because α and Γ_l (l ∈ {0,1}) are block matrices,

$$\alpha=\begin{pmatrix}\alpha_1I&&&\\&\alpha_2I&&\\&&\ddots&\\&&&\alpha_MI\end{pmatrix},\qquad \Gamma_l=\begin{pmatrix}\sqrt{\beta_1}\,G_l&&&\\&\sqrt{\beta_2}\,G_l&&\\&&\ddots&\\&&&\sqrt{\beta_M}\,G_l\end{pmatrix} \tag{6}$$

where I is the identity matrix.

The final item of (5) can be further transformed as

$$\frac{\mu}{2}\sum_{l=0}^{1}\|\Gamma_lx\|_2^2=\frac{\mu}{2}\|\Gamma x\|_2^2,\qquad \Gamma=\begin{pmatrix}\Gamma_0\\ \Gamma_1\end{pmatrix} \tag{7}$$

Carrying out the Fourier transform of (4), we have

$$\min_{\{x_m\}}\ \frac{1}{2}\Big\|\sum_{m=1}^{M}\hat d_m\odot\hat x_m-\hat s\Big\|_2^2+\lambda\sum_{m=1}^{M}\alpha_m\|x_m\|_1+\frac{\mu}{2}\sum_{m=1}^{M}\beta_m\sum_{l=0}^{1}\big\|\hat g_l\odot\hat x_m\big\|_2^2 \tag{8}$$

where $\hat{(\cdot)}$ denotes the discrete Fourier transform and ⊙ denotes element-wise multiplication; by Parseval's theorem the quadratic terms are equivalent to their spatial-domain counterparts, while the l_1 term remains in the spatial domain.

The auxiliary variables y_0, y_1 and y_2 are introduced, and (8) is transformed into the constrained problem

$$\min_{x,y_0,y_1,y_2}\ \frac{1}{2}\|Dx-s\|_2^2+\lambda\|\alpha y_0\|_1+\frac{\mu}{2}\big(\|y_1\|_2^2+\|y_2\|_2^2\big)\quad\text{s.t.}\quad y_0=x,\ y_1=\Gamma_0x,\ y_2=\Gamma_1x \tag{9}$$

where α is the block weighting matrix of (6).

The dual variables, namely the Lagrange multipliers u_0, u_1 and u_2, are introduced, and the constrained optimization problem (9) is transformed into the form of a non-constrained optimization problem solved through iteration:

$$x^{(j+1)}=\arg\min_{x}\ \frac{1}{2}\|Dx-s\|_2^2+\frac{\rho}{2}\big\|x-y_0^{(j)}+u_0^{(j)}\big\|_2^2+\frac{\rho}{2}\sum_{l=0}^{1}\big\|\Gamma_lx-y_{l+1}^{(j)}+u_{l+1}^{(j)}\big\|_2^2 \tag{10}$$

$$y_0^{(j+1)}=\arg\min_{y_0}\ \lambda\|\alpha y_0\|_1+\frac{\rho}{2}\big\|x^{(j+1)}-y_0+u_0^{(j)}\big\|_2^2 \tag{11}$$

$$y_{l+1}^{(j+1)}=\arg\min_{y_{l+1}}\ \frac{\mu}{2}\|y_{l+1}\|_2^2+\frac{\rho}{2}\big\|\Gamma_lx^{(j+1)}-y_{l+1}+u_{l+1}^{(j)}\big\|_2^2,\quad l\in\{0,1\} \tag{12}$$

$$u_0^{(j+1)}=u_0^{(j)}+x^{(j+1)}-y_0^{(j+1)} \tag{13}$$

$$u_{l+1}^{(j+1)}=u_{l+1}^{(j)}+\Gamma_lx^{(j+1)}-y_{l+1}^{(j+1)},\quad l\in\{0,1\} \tag{14}$$

where ρ > 0 is the penalty parameter and j is the number of the iteration.

Equation (11) is given by the soft-thresholding (shrinkage) operator

$$y_0^{(j+1)}=\operatorname{sign}\big(x^{(j+1)}+u_0^{(j)}\big)\odot\max\big(\big|x^{(j+1)}+u_0^{(j)}\big|-\lambda\alpha/\rho,\ 0\big)$$

applied element-wise.

Transforming the x-subproblem (10) into the DFT domain gives

$$\min_{\hat x}\ \frac{1}{2}\big\|\hat D\hat x-\hat s\big\|_2^2+\frac{\rho}{2}\big\|\hat x-\hat y_0^{(j)}+\hat u_0^{(j)}\big\|_2^2+\frac{\rho}{2}\sum_{l=0}^{1}\big\|\hat\Gamma_l\hat x-\hat y_{l+1}^{(j)}+\hat u_{l+1}^{(j)}\big\|_2^2 \tag{15}$$

The formula (15) is a problem of quadratic optimization. The partial derivative with respect to $\hat x$ is obtained and set to 0, which yields the linear system

$$\Big(\hat D^H\hat D+\rho I+\rho\sum_{l=0}^{1}\hat\Gamma_l^H\hat\Gamma_l\Big)\hat x=\hat D^H\hat s+\rho\big(\hat y_0-\hat u_0\big)+\rho\sum_{l=0}^{1}\hat\Gamma_l^H\big(\hat y_{l+1}-\hat u_{l+1}\big) \tag{16}$$

which is diagonal plus rank-one at each frequency and can be solved efficiently, as shown in Section 5.2.
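A schematic Python sketch of the iterations (10)–(14) is given below. This is an illustrative skeleton under assumed naming, not the authors' implementation: the per-map weights α_m and β_m are folded into λ and the operators for brevity, and the quadratic x-update (10)/(15) is delegated to a caller-supplied routine (in practice, the DFT-domain solve of (16)):

```python
import numpy as np

def soft_threshold(v, gamma):
    # Proximal operator of the l1 norm: closed-form solution of update (11)
    return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

def admm_gradreg(solve_x, G0, G1, x0, lam, mu, rho, n_iter=100):
    """ADMM skeleton for the split problem (9).

    solve_x(t0, t1, t2) must return the minimizer of the quadratic
    x-subproblem (10) with targets t0 = y0-u0, t1 = y1-u1, t2 = y2-u2.
    G0, G1 are callables applying the gradient operators Gamma_0, Gamma_1.
    """
    x = x0.copy()
    y0, y1, y2 = x.copy(), G0(x), G1(x)
    u0, u1, u2 = np.zeros_like(y0), np.zeros_like(y1), np.zeros_like(y2)
    for _ in range(n_iter):
        x = solve_x(y0 - u0, y1 - u1, y2 - u2)     # update (10)
        y0 = soft_threshold(x + u0, lam / rho)     # update (11)
        y1 = rho * (G0(x) + u1) / (mu + rho)       # update (12), l = 0
        y2 = rho * (G1(x) + u2) / (mu + rho)       # update (12), l = 1
        u0 += x - y0                               # update (13)
        u1 += G0(x) - y1                           # update (14), l = 0
        u2 += G1(x) - y2                           # update (14), l = 1
    return x
```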

4. Multi-source fusion with gradient regularized CSR

        The principle of the algorithm in this paper is shown in Fig.1.

        Fig.1 Multi-source fusion algorithm based on gradient regularized CSR

To begin with, a sparse optimization function is used to decompose each source image into high-frequency and low-frequency components. The sparse coefficients of the high-frequency components are obtained by the gradient regularized CSR model and fused with the maximum value strategy, while the low-frequency components are fused by an extreme or average value strategy.

5. Improved algorithm

If the image registration is out of alignment, the decompositions of different source images will not show the same features. Blanc and Zhang discussed the use of image registration in image fusion [5,6]. Researchers usually study the image registration problem separately from image fusion, and many effective methods for image registration have been proposed. The gradient regularization method considers the consistency between image blocks and uses convolutional sparse coding to achieve the SR of the entire image; that is, representing the image as the sum of convolutions of the filters with their corresponding feature responses increases the robustness to regional registration misalignment. When the area step sizes in the horizontal and vertical directions are one pixel, the SR of the image using the algorithm in this paper is translation invariant. In addition, using the maximum and average strategies can also alleviate the above-mentioned regional misregistration problems.

        The principle of the multi-source image fusion algorithm based on gradient regularization of the CSR is as follows:

Suppose K source images I_k (k ∈ {1,2,...,K}) are geometrically registered, and there is a set of dictionary filters d_m (m ∈ {1,2,...,M}). The proposed multi-source image fusion algorithm of the gradient regularized CSR consists of four parts.

        5.1 Two-scale image decomposition

The source image I_k is decomposed into a high-frequency component I_k^h and a low-frequency component I_k^l. This two-scale decomposition method has been applied in many image fusion methods [30]. The high-frequency part of the image mainly reflects the detailed information, while the low-frequency part mainly represents the spectral information in the image; the latter is obtained by

$$I_k^l=\arg\min_{b}\ \|I_k-b\|_2^2+\eta\big(\|g_x*b\|_2^2+\|g_y*b\|_2^2\big) \tag{18}$$

where I_k is the input source image; ∗ represents convolution; η is the regularized parameter, set to 5; I_k^l is the low-frequency part; g_x and g_y are gradient filters along the image rows and columns, with g_x = [−1 1] and g_y = [−1 1]^T, respectively. The formula (18) is a Tikhonov regularization problem; this regularization is a widely used method for ill-posed problems, and (18) can be solved effectively through the fast Fourier transform (FFT):

$$\min_{\hat b}\ \|\hat I_k-\hat b\|_2^2+\eta\big(\|\hat g_x\odot\hat b\|_2^2+\|\hat g_y\odot\hat b\|_2^2\big) \tag{19}$$

The formula (19) is a problem of quadratic optimization. The standard approach is least squares linear regression; however, if no $\hat b$ satisfies (19), or more than one does, i.e., the solution is not unique, the problem is said to be ill-posed. To solve the above optimization problem, the partial derivative with respect to $\hat b$ is set to 0:

$$\hat I_k^l=\frac{\hat I_k}{1+\eta\big(\hat g_x^{*}\odot\hat g_x+\hat g_y^{*}\odot\hat g_y\big)} \tag{20}$$

where the division is element-wise and $(\cdot)^{*}$ denotes complex conjugation.

The inverse Fourier transform of (20) is performed to obtain the low-frequency component, and the high-frequency component is obtained by subtracting the low-frequency component from the source image: I_k^h = I_k − I_k^l.
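A minimal NumPy sketch of this two-scale decomposition, assuming periodic boundary conditions (a convenience of the FFT solve, not stated in the paper):

```python
import numpy as np

def two_scale_decompose(img, eta=5.0):
    """Two-scale decomposition of (18)-(20), solved in the Fourier domain.

    Returns (low, high) with img = low + high.
    """
    H, W = img.shape
    # DFTs of the gradient filters gx = [-1 1], gy = [-1 1]^T, zero-padded to H x W
    gx = np.zeros((H, W)); gx[0, 0], gx[0, 1] = -1.0, 1.0
    gy = np.zeros((H, W)); gy[0, 0], gy[1, 0] = -1.0, 1.0
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = 1.0 + eta * (np.abs(Gx) ** 2 + np.abs(Gy) ** 2)
    low = np.real(np.fft.ifft2(np.fft.fft2(img) / denom))   # solution (20)
    return low, img - low
```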

        5.2 Multi-source image fusion algorithm with high frequency component

The high-frequency part reflects details of the image such as edges and texture. The sparse coefficient maps c_{k,m} (m ∈ {1,2,...,M}) can be obtained by

$$\min_{\{c_{k,m}\}}\ \frac{1}{2}\Big\|\sum_{m=1}^{M}d_m*c_{k,m}-I_k^h\Big\|_2^2+\lambda\sum_{m=1}^{M}\alpha_m\|c_{k,m}\|_1+\frac{\mu}{2}\sum_{m=1}^{M}\beta_m\sum_{l=0}^{1}\big\|g_l*c_{k,m}\big\|_2^2 \tag{22}$$

where c_{k,m} is the sparse coefficient set under the l_1 and l_2 norm penalties, and I_k^h is the detail layer of the k-th source image. The formula (22) can be solved by the gradient regularization ADMM algorithm of Section 3. As in (5), the convolution sum can be collected into a block operator, so that the first term of (22) can be transformed into

$$\frac{1}{2}\big\|Cc_k-I_k^h\big\|_2^2 \tag{23}$$

where C = (C_1 C_2 ··· C_M) is the block matrix with C_m c_{k,m} = d_m ∗ c_{k,m}, and c_k = (c_{k,1}^T c_{k,2}^T ··· c_{k,M}^T)^T stacks the sparse coefficient maps obtained by solving the CSR model with the method in [27]. The formula (22) is transformed by the Fourier transform into

$$\min_{\{c_{k,m}\}}\ \frac{1}{2}\Big\|\sum_{m=1}^{M}\hat d_m\odot\hat c_{k,m}-\hat I_k^h\Big\|_2^2+\lambda\sum_{m=1}^{M}\alpha_m\|c_{k,m}\|_1+\frac{\mu}{2}\sum_{m=1}^{M}\beta_m\sum_{l=0}^{1}\big\|\hat g_l\odot\hat c_{k,m}\big\|_2^2 \tag{24}$$

where $\hat{(\cdot)}$ is the result of the FFT of the corresponding variable.

Auxiliary variables y_0, y_1 and y_2 are introduced as in (9), which gives the constrained optimization problem (25). By using dual variables and introducing the Lagrange multipliers u_0, u_1 and u_2, the constrained optimization problem of (25) is transformed into the form of a non-constrained optimization problem through iteration, with subproblems of the same form as (10)–(14) applied to c_k,

where ρ > 0 is the penalty parameter, as in (10). Equation (27), the l_1 subproblem, is given, as in (11), by the soft-thresholding solution

$$y_0^{(j+1)}=\operatorname{sign}\big(c_k^{(j+1)}+u_0^{(j)}\big)\odot\max\big(\big|c_k^{(j+1)}+u_0^{(j)}\big|-\lambda\alpha/\rho,\ 0\big) \tag{28}$$

The formula (29), the DFT-domain c_k subproblem, is a problem of quadratic optimization like (19). Setting the partial derivative with respect to $\hat c_k$ to 0 gives the linear system

$$\Big(\hat C^H\hat C+\rho I+\rho\sum_{l=0}^{1}\hat\Gamma_l^H\hat\Gamma_l\Big)\hat c_k=\hat C^H\hat I_k^h+\rho\big(\hat y_0-\hat u_0\big)+\rho\sum_{l=0}^{1}\hat\Gamma_l^H\big(\hat y_{l+1}-\hat u_{l+1}\big) \tag{30}$$

For a linear system with the mathematical expression

$$(J+aa^H)x=b \tag{31}$$

where J represents a diagonal matrix, a and b are column vectors, and aa^H is the rank-one offset matrix, the Sherman-Morrison equation can be used, which is as follows:

$$\big(J+aa^H\big)^{-1}=J^{-1}-\frac{J^{-1}aa^HJ^{-1}}{1+a^HJ^{-1}a} \tag{32}$$

where J is an invertible matrix and a is a column vector.

In combination with (31) and (32), it can be derived that x = (J+aa^H)^{-1}b. Since a^HJ^{-1}b is a scalar, it can be obtained that

$$x=J^{-1}b-\frac{a^HJ^{-1}b}{1+a^HJ^{-1}a}J^{-1}a \tag{33}$$

Equation (30) is similar to (31): at each frequency, the ρI and gradient terms contribute a diagonal matrix J, while $\hat C^H\hat C$ contributes a rank-one term aa^H, so the solution of (30) can be obtained by using the Sherman-Morrison equation (33).
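The saving can be checked numerically; the sketch below (illustrative naming) solves (31) via (33) with O(n) work and verifies the result against a dense solve:

```python
import numpy as np

def sherman_morrison_solve(j_diag, a, b):
    """Solve (J + a a^H) x = b with J diagonal, via (33).

    j_diag: diagonal of J (1-D array), a, b: column vectors.
    Costs O(n) instead of the O(n^3) of a dense solve.
    """
    jb = b / j_diag                                       # J^{-1} b
    ja = a / j_diag                                       # J^{-1} a
    scale = (np.conj(a) @ jb) / (1.0 + np.conj(a) @ ja)   # scalar in (33)
    return jb - scale * ja

# Quick check against a dense solve
rng = np.random.default_rng(0)
n = 6
j_diag = rng.normal(size=n) + 2.0
a = rng.normal(size=n) + 1j * rng.normal(size=n)
b = rng.normal(size=n) + 1j * rng.normal(size=n)
x = sherman_morrison_solve(j_diag, a, b)
assert np.allclose((np.diag(j_diag) + np.outer(a, np.conj(a))) @ x, b)
```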

Then, the inverse Fourier transform of $\hat c_{k,m}$ is performed to obtain c_{k,m}. Assume that c_{k,1:M}(x,y) represents the content of c_{k,m} at the location (x,y) in the spatial domain; c_{k,1:M}(x,y) is an M-dimensional vector. According to the SR fusion method adopted in [12,15], the l_1 norm of c_{k,1:M}(x,y) is used as the activity level measure of the source image. The activity level map A_k(x,y) is obtained by

$$A_k(x,y)=\big\|c_{k,1:M}(x,y)\big\|_1 \tag{34}$$

In order to make this method insensitive to registration misreading, a window-based average strategy is applied to obtain the final activity level map $\bar A_k(x,y)$:

$$\bar A_k(x,y)=\frac{\displaystyle\sum_{p=-r}^{r}\sum_{q=-r}^{r}A_k(x+p,y+q)}{(2r+1)^2} \tag{35}$$

where r determines the size of the window. The larger the value of r, the more robust the fusion is to image registration misreading; at the same time, however, some small details may be lost. In multi-focus images, the edges of objects have different degrees of sharpness in the different source images, which makes the edge positions of the source images not exactly the same; therefore, a relatively large r is more suitable for multi-focus image fusion. In multi-modal image fusion, a relatively small r is more suitable because of the small-scale details in the source images. Using the "maximum" or "minimum" strategy for multi-focus or multi-modal images respectively, the combined coefficient maps are

$$c_{F,1:M}(x,y)=c_{k^{*},1:M}(x,y),\qquad k^{*}=\arg\max_{k}\ \bar A_k(x,y) \tag{36}$$

with $\arg\min_k$ replacing $\arg\max_k$ for the "minimum" strategy.

Finally, the fusion result of the high-frequency part is reconstructed:

$$I_F^h=\sum_{m=1}^{M}d_m*c_{F,m} \tag{37}$$
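Steps (34)–(37) can be sketched as follows in NumPy (illustrative code with an assumed (K, M, H, W) stacking of the coefficient maps; the "maximum" strategy of (36) is shown):

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import fftconvolve

def fuse_high_frequency(coeffs, d, r=3):
    """Fuse detail layers per (34)-(37).

    coeffs: (K, M, H, W) sparse coefficient maps of the K source images,
    d: (M, kh, kw) dictionary filters, r: half-size of the averaging window.
    """
    activity = np.abs(coeffs).sum(axis=1)        # (34): l1 norm over the M maps
    activity = uniform_filter(activity, size=(1, 2 * r + 1, 2 * r + 1))  # (35)
    winner = activity.argmax(axis=0)             # (36): per-pixel "maximum" strategy
    fused = np.take_along_axis(coeffs, winner[None, None], axis=0)[0]    # (M, H, W)
    # (37): reconstruct the fused detail layer from the selected maps
    return sum(fftconvolve(cm, dm, mode="same") for dm, cm in zip(d, fused))
```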

        5.3 Multi-source image fusion algorithm with low-frequency components

The low-frequency part mainly represents the spectral information in the image. For the low-frequency components, different fusion methods are suitable for different types of fusion images. For multi-focus image fusion, the most important thing is to extract the details of the source images. Because some details still remain in the base layer, the "maximum" fusion strategy is selected for multi-focus fusion to extract them, and the fusion result of the low-frequency component is

$$I_F^l(x,y)=\max\big(I_1^l(x,y),I_2^l(x,y),\ldots,I_K^l(x,y)\big) \tag{38}$$

However, the "maximum" fusion strategy may be inconsistent with human vision for multi-modal medical images, because the gray values at the same location may be very different. The average fusion strategy can not only preserve the texture details of the source images, but also meet the consistency of the human visual system:

$$I_F^l(x,y)=\frac{1}{K}\sum_{k=1}^{K}I_k^l(x,y) \tag{39}$$
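Both low-frequency strategies, (38) and (39), reduce to one line each; a sketch assuming a (K, H, W) stack of base layers:

```python
import numpy as np

def fuse_low_frequency(lows, mode="average"):
    """Fuse base layers per (38)/(39): 'maximum' for multi-focus images,
    'average' for multi-modal images. lows: (K, H, W) stack."""
    return lows.max(axis=0) if mode == "maximum" else lows.mean(axis=0)
```

The two-scale reconstruction of Section 5.4 then amounts to adding this fused base layer to the fused detail layer of Section 5.2.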

        5.4 Two-scale image reconstruction

After obtaining the low- and high-frequency fusion components according to the image type, the high-frequency component I_F^h and the low-frequency component I_F^l are combined to get the fused image I_F(x,y):

$$I_F(x,y)=I_F^l(x,y)+I_F^h(x,y) \tag{40}$$

6. Results and analysis

The experimental platform for this paper is a notebook with an Intel(R) Core(TM) i7-3610QM CPU @ 2.30 GHz, 8.0 GB memory and the 64-bit Windows 7 operating system. All the algorithms mentioned are implemented in Matlab 2014b.

In order to verify the feasibility and effectiveness of the algorithm, four groups each of multi-focus images (Fig. 2(a)), multi-modal medical images (Fig. 2(b)) and infrared and visible images (Fig. 2(c)) are used. The size of each source image is 256×256. In the multi-source image fusion data used in this paper, all the source images are registered, which means that the objects in all images are geometrically aligned. The proposed algorithm is compared with fusion algorithms based on the nonsubsampled contourlet transform (NSCT) [31], the dual-tree complex wavelet transform (DTCWT) [32], guided filtering (GFF) [33,34], image matting (IM) [35], the SR [36], the curvelet transform combined with the SR (CVT-SR) [37,38], and the combination of the NSCT and a pulse coupled neural network (NSCT-PCNN) [39]. The number of decomposition layers of the DTCWT, NSCT and NSCT-PCNN is set to four, and the numbers of directions on the successive decomposition layers are 4, 8, 8 and 16. The "averaging" rule is applied to the low-frequency subbands and the "maximum absolute value" rule is applied to the high-frequency subbands. The experiments compare both subjective and objective evaluation indicators.

        Fig.2 Source image groups

6.1 Experimental analysis and parameter setting

For multi-focus image fusion, the regularization parameter λ of the l_1 norm and the regularization parameter μ of the gradient l_2 norm are determined first: as shown in Table 1, the quality of the fusion is best when μ = 0.001 and λ = 0.001. With μ and λ fixed, the size r of the activity sliding window is then determined: as shown in Table 2, the four objective evaluation indices are best when r is 23, i.e., a 23×23 sliding window. For multi-modal medical image fusion and infrared and visible image fusion, λ and μ are determined in the same way; with μ and λ fixed, the window size r is determined as shown in Table 3. When r is 3, the image fusion performance best matches the human visual system.

        Table 1 Average objective evaluation indicators of multi-focus images when sliding window r is fixed and μ and λ change

        Table 2 Average objective evaluation indicators of multi-focus images when sliding window r changes and μ and λ are fixed

In contrast to the subjectivity of human visual analysis, objective evaluation indices such as the mutual information (MI), the structural similarity metric Q_Y, the peak signal-to-noise ratio (PSNR) and the edge preservation degree Q^{AB/F} are introduced to evaluate the performance of the different fusion methods quantitatively.
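Of these indices, the PSNR and MI are straightforward to compute; the sketch below gives minimal reference implementations (Q_Y and Q^{AB/F} are omitted because they require the structural and edge models of their original papers):

```python
import numpy as np

def psnr(fused, ref, peak=255.0):
    """Peak signal-to-noise ratio between a fused image and a reference."""
    mse = np.mean((fused.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def mutual_information(a, b, bins=256):
    """MI between two images, estimated from their joint gray-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0  # avoid log(0) on empty histogram cells
    return np.sum(p[nz] * np.log2(p[nz] / (px[:, None] * py[None, :])[nz]))
```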

        Table 3 Objective evaluation indicators of multi-modal medical and infrared and visible light images when sliding window r changes and μ and λ are fixed

In order to compare the complexity of the algorithms, each group of experiments is repeated five times, and the average time consumption is shown in each table.

        6.2 Experiment and analysis of multi-focus image fusion

This section uses the four sets of multi-focus images shown in Fig. 2(a). It can be clearly seen from Fig. 3 that the fusion results of the DTCWT, CVT-SR and NSCT algorithms have ringing effects, which reduce the image edge contrast to a certain extent, as shown in Fig. 3(a1)–Fig. 3(d1), Fig. 3(a2)–Fig. 3(d2) and Fig. 3(a6)–Fig. 3(d6).

        Fig. 3 Multi-focus source images and fusion results of different methods

The result images of the GFF algorithm, as shown in Fig. 3(a3)–Fig. 3(d3), have artifacts in the edge regions of the fused images. Although the multi-directional filtering of the IM-based method has a strong resolving ability, the fused image is affected by the severe artifacts generated by the down-sampling operation, so the contrast of the fused image is significantly reduced, as shown in Fig. 3(a4)–Fig. 3(d4). Although the NSCT method has the advantage of multi-scale analysis, the fused image obtained by this method loses part of the edge information, and the false contours of the focus area in these images are obvious, which makes the contour of the focus area blurred. Similarly, the fusion method based on the NSCT-PCNN brings a great visual improvement to the merged image and can extract more significant features from the source image, but the result still has artifacts, and it is insensitive to weak edges and cannot accurately extract the boundary information of the focus area, as shown in Fig. 3(a7) and Fig. 3(b7). The SR-based algorithm operates on local image blocks rather than on the entire image, so some details are smoothed or even lost in the fused image (Fig. 3(a5)). In contrast, the fused image of the method proposed herein optimally extracts the focus area from the source image by accurately locating the boundary of the focus area. From Fig. 3(a8)–Fig. 3(d8), it can be seen that the contour of the focal region is clear and complete. In addition, the contrast of the fused image obtained by this method is higher than that of the other fusion methods, the transition between the fused region and the background is natural, and few artifacts are introduced in the fusion process, which is convenient for identifying different targets in a complex background.

An objective assessment of the four groups of multi-focus image fusion results is shown in Table 4. The proposed algorithm retains most of the focus information, which reflects its stability and reliability to a certain extent.

        Table 4 Objective indicators of different methods for multi-focus images

        6.3 Experiment and analysis of multi-modal medical image fusion

Four groups of computed tomography/magnetic resonance imaging (CT/MRI) multi-modal medical source images (Fig. 2(b)) are used to verify the effectiveness of the proposed algorithm compared with the NSCT, DTCWT, GFF, IM, SR, CVT-SR and NSCT-PCNN respectively. Fig. 4 shows the four sets of CT/MRI multi-modal medical source images and the fused images of the above algorithms. The methods based on the CVT-SR and NSCT-PCNN are superior to the other methods in brightness and contrast, but local details are lost, as shown in Fig. 4(e6), Fig. 4(e7), Fig. 4(f6), Fig. 4(f7), Fig. 4(g6), Fig. 4(g7), Fig. 4(h6) and Fig. 4(h7). The local middle part of the images obtained by the DTCWT and IM methods is ambiguous, as shown in Fig. 4(e2), Fig. 4(f2), Fig. 4(g2) and Fig. 4(h2). The fusion methods of the GFF and NSCT not only lose a lot of details, but also cause serious artifacts, as shown in Fig. 4(e1), Fig. 4(e3), Fig. 4(f3) and Fig. 4(g3). The method based on the SR loses local edge information (Fig. 4(e5)). Comparing the fusion results in Fig. 4(h6) and Fig. 4(h7), the CVT-SR in Fig. 4(h6) introduces different degrees of artifacts, while the proposed algorithm result in Fig. 4(h7) does not. Through the comparison of the above fused images, the fused image obtained in this paper is superior to those obtained by the above algorithms: it not only extracts a large amount of detailed information from the source images, but also avoids visible artifacts and brightness distortion.

        Fig.4 Multi-modal medical source images and fusion results of different methods

The proposed method is compared with the other methods on the four groups of medical images in Table 5. The proposed algorithm is superior to the other fusion methods, because it can extract details from the source images and highlight the significant features.

        Table 5 Objective indicators of different methods for multi-modal medical images


        6.4 Experiment and analysis of infrared and visible image fusion

For the infrared and visible image fusion experiments, four sets of registered infrared and visible images (Fig. 2(c)) are selected to verify the correctness of the algorithm proposed in this paper. It can be seen from Fig. 5 that the GFF-based fused images show severe distortion, as shown in Fig. 5(i3) and Fig. 5(k3). The contrast of the fusion results obtained by the DTCWT and NSCT algorithms is reduced and the bright square panel becomes blurred, which does not reflect the partial texture information of the visible light image well, as shown in Fig. 5(i1) and Fig. 5(i2). The edges of the fused images obtained by the fusion method based on the NSCT-PCNN are lost, as shown in Fig. 5(i7), Fig. 5(j7), Fig. 5(k7) and Fig. 5(l7). The details of the visible light appearing in the fused images based on the CVT-SR, SR and GFF are lost, and serious artifacts appear, as shown in Fig. 5(k5) and Fig. 5(k6). Although the fused image obtained by the NSCT-based fusion algorithm is greatly improved visually, a small amount of detail information is lost, as shown in Fig. 5(i3), Fig. 5(j3), Fig. 5(k3) and Fig. 5(l3).

        Fig.5 Fusion results of infrared and visible source images with different methods

In the visible light image of the third group in Fig. 2, the railings and street lamps on the side of the road can be seen. In the infrared images, the moving people, the car and the house can be seen. However, a small amount of distortion occurs in localized areas of the fused image obtained by the IM algorithm (Fig. 5(k4)). In Fig. 5(k8), the fused image can clearly distinguish the infrared targets, which is better than the above algorithms in visual effect, and better represents the texture details of the visible images. In addition, in the comparison of these four sets of objective data, as shown in Table 6, some indicators of the proposed algorithm may be slightly lower than those of the comparison algorithms. However, integrating the subjective visual and objective evaluation indicators, the improved fusion algorithm is better than the other methods.

Table 6 also lists the average evaluation criteria values of the different fusion methods on the four pairs of test infrared and visible images. The largest criteria values confirm the subjective assessment, which means that the images obtained by the proposed method generally incorporate more information from the visible image together with important targets from the infrared image.

To further compare the performance of the algorithms, Fig. 6 shows the objective evaluation criteria values of the different image fusion methods on the four groups of test images. It clearly shows that the proposed algorithm consistently scores higher than the other methods in terms of fusion metrics. Regarding running time, image fusion methods in the spatial domain can realize fast multi-scale decomposition and reconstruction through simple subtraction and addition of image pixel values, whereas pixel-level multi-scale methods in the frequency domain realize decomposition and reconstruction through the Fourier transform and inverse Fourier transform of the image pixel values, which have higher time complexity.

        Table 6 Objective indicators of infrared and visible image with different methods

        Fig. 6 Objective indicators of fused images obtained based on different fusion algorithms

Therefore, it can be seen that the proposed algorithm's complexity is higher than that of the NSCT, GFF, IM and DTCWT, but lower than that of the SR, CVT-SR and NSCT-PCNN.

7. Conclusions

The fusion results of the proposed method are compared with seven mainstream fusion algorithms. The gradient regularized CSR is introduced into the multi-source image fusion algorithm, which greatly compensates for the shortcomings of SR-based multi-source fusion algorithms in image detail preservation. The experimental results show that the proposed algorithm is superior to the traditional multi-source image fusion algorithms, as shown in Fig. 6. The regularization parameters of the gradient regularized CSR and the size of the window are studied; however, the settings of these parameters are not unique, and the proposed algorithm needs to be further optimized to improve fusion performance. In the future, convolutional sparsity is expected to have greater potential in the field of image fusion.
