
        Analysis and Evaluation of IKONOS Image Fusion Algorithm Based on Land Cover Classification

        Asian Agricultural Research, 2015, Issue 1

        Xia JING, Yan BAO

        1. College of Geomatics, Xi'an University of Science and Technology, Xi'an 710054, China; 2. The Key Laboratory of Urban Security and Disaster Engineering, Ministry of Education, Beijing 100124, China

        1 Introduction

        Image fusion can produce multi-spectral images with high spatial resolution, achieving complementation between a variety of information resources, raising people's awareness of remote sensing data, and increasing the scientificity and accuracy of decision-making[1]. Currently, most studies of remote sensing image fusion algorithms focus on selecting a suitable fusion algorithm based on the type of sensor, or on improving the original algorithm in order to improve the quality of the fused image. These studies rarely consider the application purpose. Zhang Ningyu et al. use Brovey fusion and wavelet fusion to analyze the impact on the amount of information of QuickBird images, and find that the Brovey fusion method does not apply to the processing of QuickBird remote sensing images[2]. Li Chunhua et al. perform a quantitative analysis of spectral fidelity and high-frequency information integration, and find that fusion algorithms based on statistical theory are in general better than fusion methods based on the filtering principle, so they are more suitable for the fusion of QuickBird high-resolution images[3]. To minimize the spectral distortion of the IKONOS 1-m fusion image, Kalpoma KA and Kudoh J use the steepest descent method to establish the spectral response relationship between the panchromatic band and the multi-spectral bands[4]. Liu Jun et al. develop a remote sensing image fusion method based on the fast discrete Curvelet transform, and fusion experiments and quantitative evaluation on IKONOS, QuickBird and WorldView-2 multi-spectral and panchromatic images show that the method has obvious advantages over traditional methods[5]. Wang Yanliang and Tang Yan discuss remote sensing image fusion methods based on the multi-band wavelet transform, and the experimental results show that the method not only improves the clarity and resolution of the image, but also retains the spectral information of the original image[6-7]. There are also experts and scholars who use single- or multi-sensor images as experimental data and compare the fusion effect of different algorithms in terms of spectral fidelity and the clarity of spatial structure information[8-12]. Various image fusion algorithms can achieve complementation between spatial resolution and spectral resolution and improve the classification accuracy of remote sensing images, but different fusion algorithms have different advantages and limitations, so it is necessary to further study which kind of fusion algorithm is more conducive to improving the classification accuracy of remote sensing images. With IKONOS panchromatic and multi-spectral data as the object of study, this paper uses 5 algorithms (IHS, Brovey, PCA, SFIM, Gram-Schmidt) for image fusion and classification, and evaluates the image fusion effect in order to find the IKONOS image fusion algorithm suitable for land cover classification.

        2 Image fusion algorithm and evaluation

        2.1 Image fusion algorithms Pohl et al. believe that the most commonly used image fusion algorithms at present can be divided into color synthesis, arithmetic algorithms and image transformations[13], specifically including the IHS transformation, Brovey transformation and PCA transformation. In recent years, image fusion algorithms such as the SFIM transformation[14], Gram-Schmidt transformation[15], Ehlers transformation[16] and pan-sharpening transformation[17] have also been frequently reported.

        2.1.1 IHS transformation. The IHS transformation can be realized using two methods. The first is the direct method, which transforms the 3-band image into the specified IHS space. The second is the substitution method, in which a data set consisting of three RGB bands is first transformed into the separated IHS color space; one of the IHS components is then replaced by another band image, and this fourth band image undergoes enhancement processing so that it has the same variance and mean as the component it replaces. Finally, the fusion image is generated through the inverse IHS transformation.
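        As an illustration of the substitution procedure, below is a minimal NumPy sketch of one common linear ("fast") IHS substitution variant; the paper does not specify which IHS variant was used, and the function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def ihs_fusion(ms, pan):
    """Component-substitution IHS fusion (linear 'fast IHS' variant).

    ms  : float array (3, H, W), multi-spectral bands (R, G, B)
          already resampled onto the panchromatic grid.
    pan : float array (H, W), high-resolution panchromatic band.
    """
    # Intensity component of the linear IHS model.
    intensity = ms.mean(axis=0)

    # Match the panchromatic band to the intensity component
    # (same mean and standard deviation) before substitution.
    pan_matched = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()

    # Replacing I with the matched pan and inverting the linear IHS
    # transform reduces to adding the same difference image to every band.
    return ms + (pan_matched - intensity)
```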

        2.1.2 Brovey transformation. The Brovey transformation is also known as the color standardization (normalization) transformation. Each multi-spectral band is first normalized by the sum of the multi-spectral bands, and the normalized multi-spectral image is then multiplied by the high-resolution image to obtain the Brovey fusion image. It is calculated as follows:

        F_i = Pan × M_i / ΣM_k

        where F_i is the corresponding band data after fusion; Pan is the high-resolution panchromatic image data; M_i is one band of the multi-spectral data; ΣM_k is the sum of the multi-spectral bands.
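        A minimal sketch of the Brovey computation, assuming NumPy arrays co-registered to the panchromatic grid (names and the eps guard are illustrative):

```python
import numpy as np

def brovey_fusion(ms, pan, eps=1e-6):
    """Brovey (color-normalized) fusion: F_i = Pan * M_i / sum_k(M_k).

    ms  : float array (n_bands, H, W), multi-spectral bands resampled
          onto the panchromatic grid.
    pan : float array (H, W), high-resolution panchromatic band.
    """
    band_sum = ms.sum(axis=0) + eps   # eps guards against division by zero
    return pan * ms / band_sum
```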

        2.1.3 Principal component analysis. Principal component analysis (PCA) reduces multiple correlated bands to several integrated components by a dimension-reduction technique. Firstly, the principal component transformation is performed on the multi-spectral image; then the high-resolution image is stretched to the same variance and mean as the first principal component; the stretched high-resolution image replaces the first principal component of the multi-band image; finally, the image fusion is completed by applying the inverse PCA transformation to the replaced data. The direct transformation formula is as follows:

        M_PCA = T × M

        where M is the original multi-spectral image data; M_PCA is the principal component image data obtained by the principal component transformation; T is the principal component transformation matrix obtained by calculating the covariance matrix of the original multi-band image data. The inverse transformation formula is as follows:

        F = T⁻¹ × M'_PCA

        where F is the image data after fusion; M'_PCA is the principal component image data after the first principal component has been replaced; T⁻¹ is the inverse of the principal component transformation matrix.
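        The sketch below shows these PCA substitution steps with NumPy; the eigen-decomposition of the band covariance matrix plays the role of T, and all names are illustrative rather than from the paper:

```python
import numpy as np

def pca_fusion(ms, pan):
    """PCA component-substitution fusion.

    ms  : float array (n_bands, H, W), multi-spectral bands resampled
          onto the panchromatic grid.
    pan : float array (H, W), high-resolution panchromatic band.
    """
    n_bands, h, w = ms.shape
    x = ms.reshape(n_bands, -1)              # bands as variables, pixels as samples
    mean = x.mean(axis=1, keepdims=True)
    cov = np.cov(x - mean)                   # band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]        # components sorted by variance
    t = eigvecs[:, order].T                  # forward transformation matrix T

    pcs = t @ (x - mean)                     # principal components: PC = T (M - mean)

    # Stretch pan to the mean and variance of the first principal
    # component, then substitute it for PC1.
    pan_flat = pan.reshape(-1)
    pc1 = pcs[0]
    pcs[0] = (pan_flat - pan_flat.mean()) / pan_flat.std() * pc1.std() + pc1.mean()

    fused = t.T @ pcs + mean                 # inverse transform: F = T^-1 PC' + mean
    return fused.reshape(n_bands, h, w)
```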

        2.1.4 Smoothing filter-based intensity modulation. The main steps of smoothing filter-based intensity modulation (SFIM)[14] are to first carry out strict registration of the high-resolution and low-resolution images, then perform a neighborhood smoothing convolution on the high-resolution image, and use the result as the mean (smoothed) image. The transformation formula is as follows:

        IMAGE_SFIM = IMAGE_low × IMAGE_high / IMAGE_mean

        where IMAGE_low is the image obtained by resampling the low-resolution image onto the high-resolution grid; IMAGE_mean is the image obtained through the neighborhood smoothing convolution of the high-resolution image; IMAGE_high is the high-resolution image.
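        A minimal sketch of SFIM, assuming SciPy's uniform (mean) filter as the neighborhood smoothing convolution; the window size is an assumption tied to the pan/multi-spectral resolution ratio, not a value given in the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sfim_fusion(ms_low, pan, kernel=7, eps=1e-6):
    """Smoothing filter-based intensity modulation (SFIM).

    ms_low : float array (n_bands, H, W), low-resolution multi-spectral
             bands resampled (co-registered) onto the panchromatic grid.
    pan    : float array (H, W), high-resolution panchromatic band.
    kernel : smoothing window size, roughly the resolution ratio between
             the panchromatic and multi-spectral pixels.
    """
    # IMAGE_mean: neighborhood-smoothed version of the high-resolution band.
    pan_mean = uniform_filter(pan, size=kernel)
    # IMAGE_SFIM = IMAGE_low * IMAGE_high / IMAGE_mean
    return ms_low * pan / (pan_mean + eps)
```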

        2.1.5 Gram-Schmidt transformation. The Gram-Schmidt transformation orthogonalizes a matrix or multidimensional images to eliminate redundant information, and the key steps are as follows[15]: (i) using the low spatial resolution multi-spectral image to simulate the high-resolution panchromatic image; (ii) using the simulated image as the first component of the Gram-Schmidt transformation and carrying out the Gram-Schmidt transformation of the simulated band together with the low-resolution bands; (iii) adjusting the statistics of the high-resolution image to match the first component GS1 after the Gram-Schmidt transformation, in order to produce the modified high-resolution image; (iv) using the modified high-resolution image to replace the first component after the Gram-Schmidt transformation to produce a new data set, and applying the inverse Gram-Schmidt transformation to the new data set to produce the spatially enhanced multi-spectral image. A simplified sketch of these four steps is given after this paragraph.
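        The sketch below simulates the panchromatic band as an unweighted band average, which is one common choice and an assumption here, not necessarily the weighting used by the software employed in the paper:

```python
import numpy as np

def gram_schmidt_fusion(ms, pan):
    """Gram-Schmidt component-substitution fusion (simplified sketch).

    ms  : float array (n_bands, H, W), multi-spectral bands resampled
          onto the panchromatic grid.
    pan : float array (H, W), high-resolution panchromatic band.
    """
    n_bands, h, w = ms.shape
    bands = ms.reshape(n_bands, -1)
    means = bands.mean(axis=1, keepdims=True)
    centered = bands - means

    # Step (i): simulate a low-resolution panchromatic band
    # (here a simple unweighted average of the multi-spectral bands).
    simulated = centered.mean(axis=0)

    # Step (ii): Gram-Schmidt transformation with the simulated band
    # as the first component.
    vectors = np.vstack([simulated, centered])
    gs = np.empty_like(vectors)
    coeffs = np.zeros((vectors.shape[0], vectors.shape[0]))
    for k in range(vectors.shape[0]):
        gs_k = vectors[k].copy()
        for i in range(k):
            phi = np.dot(vectors[k], gs[i]) / np.dot(gs[i], gs[i])
            coeffs[k, i] = phi
            gs_k -= phi * gs[i]
        gs[k] = gs_k

    # Step (iii): adjust the pan band to the statistics of GS1 and substitute it.
    pan_flat = pan.reshape(-1) - pan.mean()
    gs1 = gs[0]
    gs[0] = pan_flat / pan_flat.std() * gs1.std() + gs1.mean()

    # Step (iv): inverse Gram-Schmidt transformation with the replaced component.
    recon = np.empty_like(vectors)
    for k in range(vectors.shape[0]):
        recon[k] = gs[k] + coeffs[k, :k] @ gs[:k]
    fused = recon[1:] + means
    return fused.reshape(n_bands, h, w)
```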

        2.2 Analysis and evaluation of image fusion effect The purpose of image fusion is to achieve complementarity between spatial resolution and spectral resolution while minimizing the loss of original information. The evaluation of the fusion effect is often conducted by visual inspection combined with mathematical statistical methods. The evaluation data selected in this paper is an IKONOS image of Shihezi City acquired on July 25, 2008, and the basic information of the experimental data is shown in Table 1.

        Table 1 Basic information of experimental data

        2.2.1 Visual evaluation. For a better comparison of the visual effects of the different fusion methods, this paper extracts some sub-sections from the whole scene image (Fig. 1). As can be seen from Fig. 1, the spatial resolution of the image after fusion is significantly improved; the land and road borders are clear; the spatial texture information is greatly enhanced and details are more prominent. Although there are only small differences between the five fusion algorithms in improving the spatial texture information, there are obvious differences in information fidelity. The fidelity of the SFIM fusion method is the best, while the distortion of the image obtained by the IHS transformation is obvious.

        2.2.2 Quantitative analysis. The visual evaluation is susceptible to the observer's experience and observing conditions, so this paper selects 6 statistical parameters for measuring the amount of information (near-infrared band information entropy, average gradient, deviation index, correlation coefficient before and after fusion, mean and standard deviation) to conduct a quantitative analysis of spatial texture information enhancement and spectral information fidelity. The calculation formulas of the statistical parameters are given in Table 2, and the quantitative evaluation results are shown in Table 3.
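        Since the formulas of Table 2 are not reproduced here, the sketch below computes the six statistics with their commonly used definitions (an assumption); fused_nir and orig_nir stand for the fused and original near-infrared bands:

```python
import numpy as np

def entropy(img, bins=256):
    """Information entropy of an image (in bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def average_gradient(img):
    """Average gradient: mean magnitude of local differences (clarity)."""
    dx = np.diff(img, axis=1)[:-1, :]
    dy = np.diff(img, axis=0)[:, :-1]
    return np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0))

def deviation_index(fused, original, eps=1e-6):
    """Mean relative deviation of the fused band from the original band."""
    return np.mean(np.abs(fused - original) / (original + eps))

def correlation(fused, original):
    """Correlation coefficient between the fused and original bands."""
    return np.corrcoef(fused.ravel(), original.ravel())[0, 1]

# Example usage for one fused near-infrared band (assumed float arrays):
# stats = {
#     "mean": fused_nir.mean(),
#     "std": fused_nir.std(),
#     "entropy": entropy(fused_nir),
#     "avg_gradient": average_gradient(fused_nir),
#     "deviation_index": deviation_index(fused_nir, orig_nir),
#     "correlation": correlation(fused_nir, orig_nir),
# }
```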

        As can be seen from Table 3, the means of the images after IHS transformation and Brovey transformation are less than the mean of the original image. After fusion using the other transformation methods, the mean of the image increases, and the mean for the PCA fusion method increases the most, so the image is brightest after PCA fusion. The difference in mean between the fusion image obtained through SFIM transformation and the original image is the smallest, so from the mean evaluation indicator, the brightness of the image after SFIM fusion is most consistent with the original image and the fusion effect is the most ideal. If the information entropy of the fusion image is regarded as the evaluation indicator of fusion quality, the information entropy of the fusion images obtained by the 5 transformation methods is improved to varying degrees compared with the original image. Regarding the deviation of the fusion image from the original image, the deviation index of the fusion image after IHS transformation is the highest, indicating that the correlation between the fusion image obtained through IHS transformation and the original image is the lowest. The deviation index of the SFIM transformation is the smallest, followed by the Gram-Schmidt transformation and the Brovey transformation, indicating that the SFIM transformation retains the most spectral information of the original image. The average gradient indicates the relative clarity of the image and reflects the texture richness of the image after fusion. Table 3 shows that the ability of all 5 methods in this paper to express the details of ground objects is improved, and there are large differences in average gradient between the original image and the images fused by Gram-Schmidt, PCA or SFIM transformation, especially for the Gram-Schmidt transformation, indicating that the Gram-Schmidt fusion algorithm has the strongest ability to express the contrast of tiny image details and its image clarity is better. From the correlation between the original image and the image after fusion, the fusion image after IHS transformation is the least correlated with the original image, followed by the Brovey transformation. The correlation coefficients between the original image and the fusion images after SFIM, Gram-Schmidt and PCA transformation are all more than 0.9, indicating that there are great similarities in spectral characteristics between the original image and the images fused using these three methods, and the spectral characteristics of the original image are well preserved.

        Table 2 Calculation formula of various parameters

        Table 3 Quantitative evaluation of IKONOS multi-spectral and panchromatic band fusion results

        3 The classification accuracy evaluation of the images after fusion using different algorithms

        To select the image fusion algorithm most suitable for land cover classification of remote sensing images, this paper first uses the maximum likelihood method to classify the fusion images after the different transformations, and then uses the confusion matrix to analyze the classification results for a further accuracy evaluation of the 5 image fusion algorithms.

        3.1 Maximum likelihood classification The maximum likelihood method is a supervised classification algorithm commonly used in remote sensing image classification, with good statistical properties[18]. This paper first uses a hand-held GPS to conduct a field survey of the different land use types within the study area to determine the samples for supervised classification, and then uses the maximum likelihood supervised classification method to classify the IKONOS fusion images after the five kinds of transformation into four land cover types (road, shelter forest, cotton and grapes). The classification results are shown in Fig. 2, and we can find that the fusion images after Gram-Schmidt transformation and SFIM transformation have a good classification effect, while the fusion image after IHS transformation has the worst classification effect.
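        A minimal sketch of a Gaussian maximum likelihood classifier with equal priors is given below; it illustrates the discriminant rule only, not the exact software workflow used in the paper, and all names are illustrative:

```python
import numpy as np

def train_ml(samples):
    """Estimate per-class Gaussian parameters from training pixels.

    samples : dict mapping class name -> array (n_pixels, n_bands) of
              training spectra (e.g. road, shelter forest, cotton, grapes).
    """
    params = {}
    for name, x in samples.items():
        mean = x.mean(axis=0)
        cov = np.cov(x, rowvar=False)
        params[name] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify_ml(image, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    image : array (H, W, n_bands) of the fused image.
    """
    h, w, n_bands = image.shape
    pixels = image.reshape(-1, n_bands)
    names = list(params)
    scores = np.empty((len(names), pixels.shape[0]))
    for k, name in enumerate(names):
        mean, inv_cov, logdet = params[name]
        d = pixels - mean
        # Discriminant function with equal priors:
        # -log|C| - (x - m)^T C^-1 (x - m)
        scores[k] = -logdet - np.einsum("ij,jk,ik->i", d, inv_cov, d)
    labels = np.argmax(scores, axis=0).reshape(h, w)
    return labels, names
```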

        3.2 Accuracy evaluation For a more objective evaluation of the classification accuracy of the different fusion algorithms, this paper uses ground truth ROIs based on the field survey data to establish the confusion matrix, and calculates the overall accuracy and Kappa coefficient of the fusion images after the different transformations (Table 4). As can be seen from Table 4, the land cover classification accuracy of the fusion images obtained by SFIM transformation and Gram-Schmidt transformation is high, with overall classification accuracies of more than 98%. The classification accuracy of the fusion image after Gram-Schmidt transformation is slightly better than that of the fusion image after SFIM transformation; the classification accuracy of the fusion image after IHS transformation is the lowest, and the overall accuracy is only 83.14%.
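        The overall accuracy and Kappa coefficient can be computed from the confusion matrix as sketched below (standard definitions, assumed here since Table 4 only reports the results):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Confusion matrix from ground-truth and predicted labels (0..n_classes-1)."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(truth.ravel(), predicted.ravel()):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Proportion of correctly classified reference pixels."""
    return np.trace(cm) / cm.sum()

def kappa_coefficient(cm):
    """Cohen's Kappa: agreement corrected for chance agreement."""
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / (n * n)
    return (p_observed - p_expected) / (1.0 - p_expected)
```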

        Table 4 Evaluation of fusion image classification accuracy

        4 Conclusions and discussions

        In this paper, with the IKONOS panchromatic and multi-spectral images as the object of study, we use five algorithms (IHS, Brovey, PCA, SFIM and Gram-Schmidt) for image fusion. We first use visual inspection and quantitative analysis to evaluate the image fusion results, then use the maximum likelihood method to classify the fused remote sensing images, and conduct an accuracy analysis of the classification results on this basis. In terms of spatial information improvement and spectral information fidelity, the SFIM transformation and the Gram-Schmidt transformation are better; the Gram-Schmidt transformation has a stronger ability than the SFIM transformation to express the contrast of tiny image details, while in terms of spectral information fidelity, the SFIM transformation is slightly better than the Gram-Schmidt transformation. Among the five image fusion algorithms, the classification accuracy of the fusion image obtained by the Gram-Schmidt transformation is the highest, with an overall accuracy of 98.95% and a Kappa coefficient of 0.98. The classification accuracy of the fusion image after Gram-Schmidt transformation is slightly better than that of the fusion image after SFIM transformation; the classification accuracy of the fusion image after IHS transformation is the lowest, with an overall accuracy of only 83.14% and a Kappa coefficient of 0.76. Therefore, the IKONOS fusion images obtained by the Gram-Schmidt transformation and the SFIM transformation are more conducive to improving land cover classification accuracy. Among the various image fusion algorithms, this paper only compares five algorithms and performs a qualitative and quantitative evaluation of the image fusion effect based on a specific object of study and application purpose, in order to select the image fusion algorithm best suited for this study. However, there is a need to further verify whether this conclusion still holds when considering more image fusion algorithms or using other classification methods.

        [1] ZHOU QX, JING ZL, JIANG SZ. Comments on research and development of multi-source information fusion for remote sensing images[J]. Journal of Astronautics, 2002, 23(5): 89-94. (in Chinese).

        [2] ZHANG NY, WU QY. Information influence on QuickBird images by Brovey fusion and wavelet fusion[J]. Remote Sensing Technology and Application, 2006, 21(1): 67-70. (in Chinese).

        [3] LI CH, XU HQ. Spectral fidelity in high-resolution remote sensing image fusion[J]. Geo-Information Science, 2008, 10(4): 520-526. (in Chinese).

        [4] Kalpoma KA, Kudoh J. Image fusion processing for IKONOS 1-m color imagery[J]. IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(10): 3075-3076.

        [5] LIU J, LI DR, SHAO ZF. Fusion of remote sensing images based on fast discrete Curvelet transform[J]. Geomatics and Information Science of Wuhan University, 2011, 36(3): 333-337. (in Chinese).

        [6] WANG YL, TANG Y. Remote sensing image fusion based on multi-band wavelets[J]. Geomatics & Spatial Information Technology, 2010, 33(6): 16-19. (in Chinese).

        [7] TANG Y. Forest remote sensing image fusion based on multi-band wavelets[J]. Journal of Northeast Forestry University, 2010, 38(8): 68-70. (in Chinese).

        [8] QIU DH, WU YL, ZHANG RS. Fusion evaluation of WorldView-1 and SPOT5[J]. Journal of Southwest University (Natural Science Edition), 2010, 32(6): 168-172. (in Chinese).

        [9] ZHANG JW, SONG XD, YANG JA, et al. Comparison study on different fusion methods based on ETM+ image: a case study in Ansai and Yongshou County[J]. Journal of Northwest Forestry University, 2010, 25(5): 152-156. (in Chinese).

        [10] LI JJ, HE LH, DAI JF, et al. Analysis of pixel-level remote sensing image fusion methods[J]. Geo-Information Science, 2008, 10(1): 128-134. (in Chinese).

        [11] HAN SS, LI HT, GU HY. Study on image fusion for high spatial resolution remote sensing images[J]. Science of Surveying and Mapping, 2009, 34(5): 60-62. (in Chinese).

        [12] Klonus S, Ehlers M. Performance of evaluation methods in image fusion[C]//12th International Conference on Information Fusion, Seattle, WA, USA, July 6-9, 2009.

        [13] Pohl C, van Genderen JL. Multisensor image fusion in remote sensing: concepts, methods and applications[J]. International Journal of Remote Sensing, 1998, 19(5): 823-854.

        [14] Liu JG. Smoothing filter-based intensity modulation: a spectral preserve image fusion technique for improving spatial details[J]. International Journal of Remote Sensing, 2000, 21(8): 3461-3472.

        [15] LI CG, LIU LY, WANG JH, et al. Comparison of two methods of fusing remote sensing images with fidelity of spectral information[J]. Journal of Image and Graphics, 2004, 9(11): 1376-1387. (in Chinese).

        [16] ZHAO ZM, MA W, WANG RS. Evaluation and analysis of three methods of fusing remote sensing images with high fidelity of spectral information[J]. Geology and Exploration, 2010, 46(4): 705-710. (in Chinese).

        [17] Ehlers M, Klonus S, Åstrand PJ, et al. Multi-sensor image fusion for pansharpening in remote sensing[J]. International Journal of Image and Data Fusion, 2010, 1(1): 25-45.

        [18] TANG GA, ZHANG MS, LIU YM, et al. Digital remote sensing image processing[M]. Beijing: Science Press, 2004: 104-107. (in Chinese).
