

        Adaptive Neighboring Selection Algorithm Based on Curvature Prediction in Manifold Learning


Lin Ma, Cai-Fa Zhou, Xi Liu, Yu-Bin Xu

(1. Communication Research Center, Harbin Institute of Technology, Harbin 150080, China; 2. Science and Technology on Information Transmission and Dissemination in Communication Networks Laboratory, Shijiazhuang 050081, China)

        1 Introduction

With the development of information technology, mapping data from an input space into a low-dimensional space has become inevitable in many computer science problems, especially in computer vision and pattern recognition [1]. Manifold learning is a powerful family of methods inspired by the cognitive psychology of human beings [2-3], and it tackles the problem of discovering the intrinsically low-dimensional structure embedded in high-dimensional data sets [4-6].

There are many dimensionality reduction methods based on the manifold assumption, and different approaches concentrate on preserving different characteristics of the manifold. In Ref. [7], Roweis and Saul proposed Locally Linear Embedding (LLE), a method that preserves the local linear structure of a high-dimensional data set when embedding the data into a low-dimensional space. Isometric Mapping (ISOMAP) tries to preserve the global geometry of the manifold [2]. Another method, the Laplacian Eigenmap (LE), is derived from LLE [8]. These three methods share a common feature: the Euclidean distance is used to capture the structural features of the manifold. According to whether they rely on the locally linear structure of the high-dimensional data set, LLE and LE are classified as local methods, while ISOMAP is a global method. Besides these three, there are numerous other dimensionality reduction methods designed for specific purposes and applications; they are less popular and will not be considered in this paper.

It is therefore known that many manifold learning methods are based on LLE, LE, or ISOMAP [9-10], and that these methods share the same or a similar basic framework [7-8,11]. Neighboring selection is the first step in this framework. Generally, two strategies are applied: one is K nearest neighbors (KNN); the other is the ε-ball [2,6]. Because the KNN strategy is easily applied even to data sets whose sampling density is not uniform, we analyze this strategy in this paper, and we aim to propose an algorithm that can adaptively select the optimal neighborhood, both locally and globally, based on the features of the manifold.
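To make the two strategies concrete, the following minimal Python sketch (our own illustration, not code from this paper) selects the neighborhood of a point x_i under each rule; note that KNN always returns exactly K points, while the ε-ball may return a varying number of points in sparse or dense regions.

    import numpy as np

    def knn_neighbors(X, i, k):
        """Indices of the k nearest neighbors of X[i] (KNN strategy)."""
        d = np.linalg.norm(X - X[i], axis=1)  # Euclidean distances to x_i
        d[i] = np.inf                         # exclude the point itself
        return np.argsort(d)[:k]

    def eps_ball_neighbors(X, i, eps):
        """Indices of all points within radius eps of X[i] (ε-ball strategy)."""
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf
        return np.where(d <= eps)[0]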

Concerning adaptive neighborhood selection for manifold learning, several previous results have been reported. In Ref. [11], Kouropteva et al. proposed an automatic method for selecting the optimal parameters of LLE; however, the method applies only to LLE, and it can only find a relatively optimal fixed neighborhood size for the whole data set. In Ref. [12], the author suggested an adaptive neighborhood selection method that considers the local curvature of the manifold, computes the relative change of curvature, and recommends contraction and expansion rules according to this relative change. Although the method works when applied to Local Tangent Space Alignment (LTSA) and several other methods, the complexity of the algorithm, in both time and space, increases greatly compared with LTSA without adaptive neighborhood selection.

In this paper, we propose an adaptive neighborhood selection method that can find both the local and the global optimal neighborhood of the manifold while keeping the increase in algorithmic complexity relatively low. In addition, the proposed algorithm is compatible with many other manifold learning methods, provided that they share the same or a similar basic framework.

The rest of this paper is organized as follows. Section 2 analyzes the curvature prediction of the manifold, which is the key to the adaptive neighborhood selection method. Section 3 investigates the implementation of the algorithm and its systematic error. Section 4 presents the experimental results of the adaptive neighborhood selection algorithm applied to ISOMAP and LLE. Finally, conclusions are drawn in Section 5.

        2 Curvature Prediction of the Manifold

        2.1 Manifold Learning

Manifold learning is based on the perceptual psychology of human beings. From a theoretical perspective, this means that adjacent neighborhoods must have ample overlap to maintain and strengthen the efficiency of information transmission across the local manifold [13]. To keep the topology of the data set stable, the value of K must be larger than d, the dimension of the low-dimensional space [8]. Thus, the lower bound of K is d+1.

Concerning the relationship between K and the curvature of the manifold, the value of K should decrease where the curvature of the local manifold increases; otherwise, the value of K can increase. Under the assumption that the high-dimensional data lie on a smooth manifold, finding a method to predict the curvature of the manifold is a significant task. From Riemannian differential geometry, the curvature at a point of a multivariate function can be computed from the Jacobian matrix of the function [14]. Therefore, to calculate the curvature of the manifold, working out the functional relationship underlying the input data set is another key task. However, since neither the dependent nor the independent variables of this relationship are known, it is difficult to obtain the functional form of the manifold explicitly. To overcome this, an approximate method for estimating the Jacobian matrix of a discrete data set was proposed in Ref. [13].

2.2 Theoretical Basis of Curvature Prediction

It is assumed that the data points are sampled from a smooth manifold M = f(Ω), where f: Ω ⊂ R^d → R^m is a smooth mapping defined on an open connected set Ω. Considering a chosen point x_i and its local structure on the manifold, we have

    x_i = f(τ_i), τ_i ∈ Ω,    (1)

where x_i is a point in the data set.

A neighboring point of x_i can be expressed by a first-order Taylor expansion as follows:

    f(τ) = f(τ_i) + J_{τ_i}(τ − τ_i) + O(‖τ − τ_i‖²),    (2)

where J_τ is the Jacobian matrix of f at the point τ_i.

From Riemannian geometry, the derivative of a multivariate function can be expressed by the determinant of its Jacobian matrix, as in Eq. (3).

Since no exact functional relationship for the manifold is available, neither the Jacobian matrix nor the parameter point τ can be obtained directly. Using the approach proposed in Ref. [13], the Jacobian matrix can be estimated from a limited set of neighboring points of x_i. In this paper, we assume that N_i = {x_{i1}, x_{i2}, …, x_{iN}} is the nearest-neighbor set of x_i = f(τ_i). Using principal component analysis (PCA) and singular value decomposition (SVD), we obtain Eq. (4).
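A hedged reconstruction of this estimate, following the local tangent-space computation of Ref. [13] (the centering by the neighborhood mean and the symbols $U_i$, $\Sigma_i$, $V_i$ are our notation rather than the paper's), reads:

$$
X_i = \left[\, x_{i1}-\bar{x}_i,\ x_{i2}-\bar{x}_i,\ \dots,\ x_{iN}-\bar{x}_i \,\right] = U_i \Sigma_i V_i^{\mathsf{T}}, \qquad \bar{x}_i = \frac{1}{N}\sum_{j=1}^{N} x_{ij},
$$

so that the first $d$ left singular vectors of $X_i$ span an estimate of the tangent space at $x_i$, and the Jacobian is approximated, up to an orthogonal change of local coordinates, by $J_{\tau_i} \approx U_i^{(d)} \Sigma_i^{(d)}$.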

        2.3 Computation of Curvature

Based on the above analysis, the problem can now be tackled with Riemannian geometry and PCA, and a feasible approach for computing the curvature at a data point is developed as follows. From Eq. (4), Eq. (5) can be deduced; from Eq. (5), Eq. (6) follows; thus Eq. (7) can be worked out, and by simple deduction we obtain Eq. (8).

In Eq. (8), J_inf is the inferior (smallest singular) value of the Jacobian matrix. Thus, the curvature of the data points on the manifold can be estimated by Eq. (8). Another key issue is how to choose the size of the neighborhood of x_i used for this estimate; a simple strategy is indicated in Ref. [16].

Therefore, we compute the inferior value of the Jacobian matrix and treat it as the approximate curvature of each point on the manifold.
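As one plausible reading of this procedure, the following Python sketch estimates the local Jacobian by an SVD of the centered neighborhood and takes the smallest retained singular value as the curvature proxy; the function name, the fixed neighborhood size, and the use of numpy are our assumptions, not the paper's.

    import numpy as np

    def approx_curvature(X, i, n_neighbors, d):
        """Curvature proxy at X[i]: the inferior (smallest retained) singular
        value of the local Jacobian estimate from the centered neighborhood."""
        dist = np.linalg.norm(X - X[i], axis=1)
        dist[i] = np.inf
        nbrs = np.argsort(dist)[:n_neighbors]    # N_i, the neighborhood of x_i
        Xi = X[nbrs] - X[nbrs].mean(axis=0)      # center the neighborhood (local PCA)
        s = np.linalg.svd(Xi, compute_uv=False)  # singular values, descending order
        return s[d - 1]                          # treated here as J_inf of Eq. (8)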

        3 Adaptive Selection of K Nearest Neighbors

After the curvature of a point has been worked out, a significant problem is the relationship between the value of K and the value of the curvature. First, K is bounded below and above: from Ref. [9], the lower bound of K is d+1, and the upper bound of K is estimated as 6D [16]. We then compute the adaptive value of K by Eq. (10).

Here K_o and K_i are the initial value of K and the number of nearest neighbors of x_i, respectively, and ΔJ_τ is the change of the curvature of the data points within one neighborhood. Another parameter is δ_o, a criterion that measures the change of curvature against the change in the number of nearest neighbors. Thus, we obtain the adaptive neighborhood selection rule, Eq. (11).

In Eq. (11), K_i is defined by Eq. (10). By this rule, the adaptive neighborhood of every data point can be estimated. However, determining the initial value of K is difficult; based on the outcomes of several manifold learning algorithms, we roughly set K_o = 8.
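A sketch of the rule, under stated assumptions: only the bounds d+1 ≤ K_i ≤ 6D, the initial K_o = 8, and the role of δ_o come from the text; the concrete update step below is a hypothetical stand-in for Eqs. (10) and (11).

    import numpy as np

    def adaptive_k(curv, d, D, k0=8, delta0=0.1):
        """Assign a neighborhood size K_i to every point from its curvature proxy.
        The update below is a hypothetical stand-in for Eqs. (10)-(11); only the
        bounds d+1 <= K_i <= 6*D and the initial K_o = 8 come from the text."""
        k_min, k_max = d + 1, 6 * D
        rel = (curv - curv.mean()) / max(curv.mean(), 1e-12)  # relative curvature change
        k = np.where(rel > delta0, k0 - np.ceil(rel / delta0),    # high curvature: shrink K
            np.where(rel < -delta0, k0 + np.ceil(-rel / delta0),  # low curvature: grow K
                     float(k0)))
        return np.clip(k, k_min, k_max).astype(int)               # the matrix K_AN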

After applying the adaptive neighborhood selection algorithm within different manifold learning methods, we need an efficient way to judge which neighborhood selection method is better, i.e., how to measure the quality of the result of the adaptive neighborhood selection algorithm. In general, there are different definitions of "optimality". We rely on the quantitative measures introduced below to characterize this term, in order to avoid the subjective evaluation that often accompanies the human visual checks used in many cases.

We need a method to estimate the "quality" of the input-output mapping, i.e., a parameter that represents how well the high-dimensional structure is mapped into the embedded space. In Ref. [7], the residual variance is regarded as a suitable parameter for this purpose. The residual variance, however, can be computed in different ways based on different ideas; the two most common methods are introduced in this paper.

One method is based on the standard linear correlation coefficient. The residual variance equals 1 − ρ², where ρ is the standard linear correlation coefficient taken over all entries of D_X and D_Y; D_X and D_Y are the matrices of Euclidean distances (between pairs of points) in X and Y, respectively, and X and Y are the input high-dimensional data set and the output embedded low-dimensional data set. We express the residual variance as

    ξ_rvar = 1 − ρ²(D_X, D_Y).    (12)

Another definition of the residual variance is based on the sum of the residual eigenvalues. In this paper, we use Eq. (12) to measure the quality of the input-output mapping in Section 4. Generally, the lower the residual variance, the better the high-dimensional data are represented in the embedded space.
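A minimal sketch of Eq. (12), assuming scipy's pdist for the pairwise Euclidean distances:

    import numpy as np
    from scipy.spatial.distance import pdist

    def residual_variance(X, Y):
        """Residual variance of Eq. (12): 1 - rho^2, where rho is the linear
        correlation between pairwise Euclidean distances in X and in Y."""
        dx = pdist(X)                    # entries of D_X (condensed form)
        dy = pdist(Y)                    # entries of D_Y (condensed form)
        rho = np.corrcoef(dx, dy)[0, 1]  # standard linear correlation coefficient
        return 1.0 - rho ** 2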

Since most typical manifold learning algorithms use a fixed value of K for the nearest neighbors, we design the algorithm so as to remain compatible with existing typical manifold learning algorithms.

According to the analysis in Section 2, the adaptive neighborhood selection method can be described as follows (a code sketch of the whole pipeline is given after the list):

·Select N according to Eq. (9), the size of the neighborhood of x_i used to compute the approximate curvature;

·Calculate K_i according to Eq. (11), the adaptive neighborhood selection rule, and store it in the matrix K_AN;

·Compute the embedding from X to Y by using the LLE and ISOMAP algorithms, respectively, according to K_AN;

·Compute the embedding from X to Y by using the LLE and ISOMAP algorithms, respectively, with a fixed value of K;

·Calculate ξ_rvar for the different algorithms and parameters according to Eq. (12).
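A minimal end-to-end sketch of these steps, reusing the functions sketched above and assuming scikit-learn's LocallyLinearEmbedding and Isomap as stand-ins for the paper's LLE and ISOMAP implementations. Both take a single global n_neighbors, so the sketch collapses the per-point matrix K_AN to its median; that simplification is ours, not the paper's.

    import numpy as np
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap, LocallyLinearEmbedding

    X, _ = make_swiss_roll(n_samples=800, random_state=0)  # 800 points, as in Section 4
    d, D = 2, X.shape[1]

    # Steps 1-2: curvature proxy and adaptive K_i for every point (sketches above).
    curv = np.array([approx_curvature(X, i, n_neighbors=12, d=d) for i in range(len(X))])
    k_an = adaptive_k(curv, d=d, D=D)

    # Steps 3-4: embeddings with the adaptive K (median of K_AN) and a fixed K.
    for name, K in [("adaptive", int(np.median(k_an))), ("fixed", 8)]:
        Y_lle = LocallyLinearEmbedding(n_neighbors=K, n_components=d).fit_transform(X)
        Y_iso = Isomap(n_neighbors=K, n_components=d).fit_transform(X)
        # Step 5: residual variance of Eq. (12) for each embedding.
        print(name, K, residual_variance(X, Y_lle), residual_variance(X, Y_iso))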

        4 Implementation and Performance Analysis

We apply the adaptive neighborhood selection algorithm to the task of mapping the Swiss roll into two-dimensional space using the LLE and ISOMAP algorithms, and we analyze the existence of an optimal K for each algorithm. We embed 800 data points from R³ to R², calculate the residual variance for different values of K, and plot the changing tendency of the residual variance as K varies (Fig. 1).

As shown in Fig. 1, there is an optimal value of K that minimizes the residual variance. We also find that the residual variance fluctuates as K changes. According to the basic principle of manifold learning, the value of K must be large enough to ensure the efficiency of information transmission; on the other hand, for local manifold learning algorithms such as LLE, the value of K cannot be too large, because as K increases the locally linear structure of the manifold no longer satisfies the requirements of the corresponding algorithms. Previous work also indicates that K should not be too large, since the complexity of the algorithms grows rapidly with K. According to the results in Fig. 1, the optimal values of K for the LLE and ISOMAP algorithms are 14 and 8, respectively.

        Fig.1 Residual variance tendency
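The sweep behind Fig. 1 can be reproduced along the following lines, continuing the sketch above; the grid of K values here is our choice, since the exact grid used in the paper is not stated.

    # Sweep K and record the residual variance of each embedding (cf. Fig. 1).
    ks = list(range(4, 21))
    rv_lle = [residual_variance(X, LocallyLinearEmbedding(n_neighbors=k,
              n_components=2).fit_transform(X)) for k in ks]
    rv_iso = [residual_variance(X, Isomap(n_neighbors=k,
              n_components=2).fit_transform(X)) for k in ks]
    best_lle = ks[int(np.argmin(rv_lle))]  # optimal K for LLE (14 in the paper)
    best_iso = ks[int(np.argmin(rv_iso))]  # optimal K for ISOMAP (8 in the paper)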

Then we apply the adaptive neighborhood selection algorithm to the selection of K in the LLE and ISOMAP algorithms. For LLE, the computed adaptive value of K is 10, which does not equal the optimal value for the LLE algorithm but is an inflection point of the curve. However, according to Fig. 1, the residual variance can also be small when the value of K is very small. The visual results of manifold learning on the Swiss roll resolve this apparent paradox: the visual result is poor when the value of K is either too small or too large (Fig. 2). Since we compare the visual embeddings of the Swiss roll with the original smooth manifold, the basic structure of the manifold should be preserved when a manifold learning algorithm is applied to it.

Fig. 2 shows the results of testing LLE and ISOMAP on a Swiss roll data set. 800 points are generated uniformly in a rectangle (top left) and mapped into a Swiss roll configuration in R³. LLE and ISOMAP recover the rectangular structure correctly provided that the neighborhood parameter is not too large (in this case K = 8 and K = 10, respectively).

        Fig.2 Manifold learning of Swiss roll

From the quantitative analysis, the residual variances of LLE and ISOMAP with K = 8, 10, and 12, respectively, are shown in Table 1.

        Table 1 Part of residual variance

After applying the algorithm proposed in this paper, the optimal values of K are 14 and 8 for LLE and ISOMAP, respectively. Thus, the optimal residual variances of LLE and ISOMAP are 0.2799 and 0.2788, respectively. The relative improvement of embedding quality is 45.45% for LLE and 1.00% for ISOMAP, where the relative improvement is calculated as (Resi_max − Resi_optimal)/Resi_max, and Resi_max and Resi_optimal stand for the maximum residual variance in Table 1 and the optimal residual variance of LLE and ISOMAP, respectively.

We also compute different values of K when dividing the data points into several groups. Because the original data points are sampled from a smooth and continuous manifold, the residual variance obtained by treating all data points as one group is smaller than the residual variance obtained by dividing them into several parts. This change can be seen in Fig. 3, which shows the changing tendency of the residual variance under group division; the parameter G stands for the number of groups used in the test. We find that the residual variance reaches its minimum when G equals 1. Group division might still be useful when the adaptive neighborhood selection algorithm is applied to a general manifold that is non-smooth or non-continuous.

        Fig.3 Residual variance changing tendency of group dividing

        5 Conclusions

The selection of the neighborhood region is one of the key steps in manifold learning for embedding a high-dimensional smooth manifold into a low-dimensional space. We propose an adaptive neighborhood selection algorithm that selects an optimal value of K depending on the curvature, or the approximate curvature, of the data points on the manifold, and we also take the embedding dimension into account when determining the related parameters of the algorithm. We test the algorithm by applying it to the LLE and ISOMAP algorithms, and we analyze its performance by computing and plotting the residual variance curves of the embedding results, together with the visual results of the manifold learning methods. We conclude that the adaptive neighborhood selection algorithm works when applied to the LLE and ISOMAP algorithms, that it might be applicable to other manifold learning methods, and that it could be used to analyze the characteristics of non-smooth and non-continuous manifolds. We would like to address several of these issues in future research.

[1] Chen H T, Chang H W, Liu T L. Local discriminant embedding and its variants. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005, 2: 846-853.

[2] Seung H S, Lee D D. The manifold ways of perception. Science, 2000, 290(5500): 2268-2269.

[3] Lai Z. Sparse local discriminant projections for discriminant knowledge extraction and classification. IET Computer Vision, 2012, 6(6): 551-559.

[4] Tenenbaum J B, de Silva V, Langford J C. A global geometric framework for nonlinear dimensionality reduction. Science, 2000, 290(5500): 2319-2323.

[5] Wan M, Yang G, Lai Z, et al. Feature extraction based on fuzzy local discriminant embedding with applications to face recognition. IET Computer Vision, 2011, 5(5): 301-308.

[6] Ben X, Meng W, Wang K. Two linear subpattern dimensionality reduction algorithms. Journal of Harbin Institute of Technology (New Series), 2012, 19(5): 47-53.

[7] Roweis S T, Saul L K. Nonlinear dimensionality reduction by locally linear embedding. Science, 2000, 290(5500): 2323-2326.

[8] Belkin M, Niyogi P. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Tech. Rep. TR-2002-01. Chicago: Dept. of Computer Science, University of Chicago, 2002.

[9] Li B. The Study of the Manifold Learning Based Feature Extraction Methods and Their Applications. Hefei: University of Science and Technology of China, 2008.

[10] Chahooki M, Charkari N. Shape retrieval based on manifold learning by fusion of dissimilarity measures. IET Image Processing, 2012, 6(4): 327-336.

[11] Kouropteva O, Okun O, Pietikäinen M. Selection of the optimal parameter value for the locally linear embedding algorithm. Proceedings of the 1st International Conference on Fuzzy Systems and Knowledge Discovery (FSKD'02). Singapore, 2002. 359-363.

[12] Wang J. Research on Manifold Learning: Theories and Approaches. Hangzhou: College of Science, Zhejiang University, 2006.

[13] Zha H, Zhang Z. Spectral analysis of alignment in manifold learning. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing. Piscataway: IEEE, 2005. 1069-1072.

[14] Gudmundsson S. An Introduction to Riemannian Geometry. http://www.matematik.lu.se/matematiklu/personal/sigma/Riemann.pdf, 2012-09-26.

[15] Wang Q G. Research on Manifold Learning Algorithms and A Few Applications. Chongqing: College of Optoelectronics, Chongqing University, 2009. 14-15.

[16] Bernstein M, de Silva V, Langford J C, et al. Graph Approximations to Geodesics on Embedded Manifolds. http://isomap.stanford.edu/BdSLT.pdf, 2012-12-01.
