

        Sensing Matrix Optimization for Multi-Target Localization Using Compressed Sensing in Wireless Sensor Network

China Communications, 2022, Issue 3

Xinhua Jiang, Ning Li*, Yan Guo, Jie Liu, Cong Wang

1 College of Communications Engineering, Army Engineering University, Nanjing 210007, China

2 College of Field Engineering, Army Engineering University, Nanjing 210007, China

*The corresponding author, email: js_ningli@sina.com

Abstract: In multi-target localization based on Compressed Sensing (CS), the sensing matrix's characteristic is significant to the localization accuracy. To improve the performance of the CS-based localization approach, we propose a sensing matrix optimization method in this paper, which conducts the optimization under the guidance of the t%-averaged mutual coherence. First, we study sensing matrix optimization and model it as a constrained combinatorial optimization problem. Second, the t%-averaged mutual coherence is adopted as the optimality index to evaluate the quality of different sensing matrixes, where the threshold t is derived through K-means clustering. With the settled optimality index, a hybrid metaheuristic algorithm named Genetic Algorithm-Tabu Local Search (GA-TLS) is proposed to address the combinatorial optimization problem and obtain the final optimized sensing matrix. Extensive simulation results reveal that CS localization approaches using different recovery algorithms benefit from the proposed sensing matrix optimization method, with much less localization error compared to traditional sensing matrix optimization methods.

Keywords: compressed sensing; hybrid metaheuristic; K-means clustering; multi-target localization; t%-averaged mutual coherence; sensing matrix optimization

I. INTRODUCTION

Wireless Sensor Networks (WSNs) [1], consisting of clusters of specific sensors, are deployed in the area of interest to gather different kinds of data (e.g., brightness, humidity, temperature). Among these data, location information serves as a crucial foundation without which the targeted service would be totally out of reach for clients. Hence, the localization issue in WSNs is highly focused on and extensively studied in the literature.

A well-known and widely used localization method is to equip the targets with a Global Positioning System (GPS) receiver and obtain their location information through GPS. However, the targets may be located in indoor or underground scenarios, where the localization performance of GPS significantly deteriorates. Besides, when the targets carry no GPS receiver, GPS cannot function at all.

Therefore, many other localization schemes [2-5] using different parameters, such as Received Signal Strength (RSS), Time of Arrival (TOA), Time Difference of Arrival (TDOA), Angle of Arrival (AOA), and their combinations, have been developed to address the problem without aid from GPS. Among these schemes, RSS-based localization has attracted much attention for its low cost and intrinsic simplicity in hardware, while the others demand auxiliary devices to collect specific data. All these alternative methods face a common barrier: a colossal amount of collected data is needed to achieve localization in WSNs, which dramatically challenges the sensors' capabilities in data processing, memory, and endurance. In a typical scenario, the sensors used are resource-constrained, so these solutions may fail under this restraint. Consequently, locating targets with much less data in WSNs is an essential and critical problem.

Compressed Sensing (CS) [6-8], a revolutionary signal reconstruction theory, provides a brand-new perspective on the question of multi-target localization. According to CS theory, only a small number of measurements suffice for the recovery of sparse or compressible signals, giving a possible solution to the problem mentioned above. In the application of CS theory, the sensing matrix is a crucial component to which the signal recovery accuracy is closely related. Therefore, much work has been done to optimize the sensing matrix. Donoho points out in [6] that the sensing matrix is supposed to exhibit a certain quantitative degree of linear independence among all small groups of columns; moreover, the linear combinations of small groups of the matrix's columns should give vectors that look much like random noise, at least as far as the ℓ1 and ℓ2 norms are concerned. The conventional criteria used to evaluate the sensing matrix in CS theory are the Restricted Isometry Property (RIP) [9] and mutual coherence [10], where the latter is simpler to calculate than the former.

In order to reduce the column correlation, many methods have been proposed for designing the sensing matrix or improving its properties. However, since the measurement matrix used in CS-based multi-target localization is a constrained binary sparse matrix determined by the sensors' locations, the design methods for sensing matrix optimization seem inappropriate in this context. Therefore, many relevant works choose to optimize the sensing matrix by improving its property. In essence, they do the optimization by multiplying the sensing matrix by a transformation matrix generated based on different principles. However, the effectiveness of these methods may deteriorate when measurement noise is considered, because multiplying by the transformation matrix lowers the Signal-to-Noise Ratio (SNR) within the noisy measurements, leading to a higher probability of wrong estimation.

In this paper, we optimize the sensing matrix by iteratively improving the measurement matrix while keeping it a constrained binary sparse matrix, which is modeled as a constrained combinatorial optimization problem. To effectively evaluate the "quality" of different sensing matrixes, we resort to the t%-averaged mutual coherence raised in [11], and we propose a deterministic method to derive the proper threshold value t so that the optimization effect can be improved. Furthermore, a hybrid metaheuristic algorithm named Genetic Algorithm-Tabu Local Search (GA-TLS) is proposed to address the combinatorial optimization problem above, thereby optimizing the sensing matrix.

        The main contributions of this work can be summarized as follows.

· To improve the performance of the CS-based multi-emitter localization algorithm, we study the sensing matrix optimization problem. To avoid introducing an extra transformation matrix that may lower the SNR within the RSS measurements, we optimize the sensing matrix by solving a constrained combinatorial optimization problem, where we iteratively improve the measurement matrix while keeping it a constrained binary sparse matrix.

· We analyze the characteristic of the sensing matrix in the context of CS-based multi-emitter localization and adopt the t%-averaged mutual coherence as the optimality index to effectively evaluate different measurement matrixes. To decide an appropriate threshold t in the optimality index, we apply K-means clustering to classify the off-diagonal elements of the Gram matrix and derive the threshold value from the boundaries between different clusters. Based on the settled optimality index, GA-TLS is proposed to address the constrained combinatorial optimization problem and thereby optimize the sensing matrix.

· Extensive simulations are conducted to reveal that CS localization methods using different recovery algorithms benefit from the proposed sensing matrix optimization method, with much less localization error compared to traditional sensing matrix optimization methods.

The remainder of this paper is organized as follows. In Section II, related work concerning existing sensing matrix optimization methods is presented. In Section III, the system model of the multi-emitter localization scenario is established, and the problem of sensing matrix optimization is formulated. The GA-TLS based sensing matrix optimization method is proposed in Section IV. In Section V, extensive numerical simulation results and performance analysis are given. The conclusion and future work are summarized in Section VI.

Notations used in this paper are introduced as follows. Capital boldfaced and lowercase boldfaced letters represent matrixes and vectors, respectively, and capital italicized letters signify collections. ‖·‖_p denotes the ℓ_p norm of a vector. |·| represents the absolute value of a scalar or the determinant of a matrix. ⌈·⌉ means rounding a scalar up.

II. RELATED WORK

Many methods have been proposed to optimize the sensing matrix, either by designing it or by improving its properties. Since the sensing matrix is the product of the measurement matrix and the sparse dictionary, and the latter is decided by the signal to be recovered, researchers have been focusing on designing the measurement matrix to construct the required sensing matrix. In general, the designed measurement matrices can be divided into two categories. The first is the random measurement matrix, such as the Gaussian random matrix [12] and the Bernoulli random matrix [13], whose elements are all independent and identically distributed. In practice, the application of the random measurement matrix is quite limited because generating random numbers places high demands on hardware and storage. By contrast, the other kind, named the deterministic measurement matrix, is free from these disadvantages because its elements are precomputed and deterministic. In recent years, many approaches have been proposed to construct deterministic measurement matrices. In [14], optimal codebooks and specific codes were used to generate the deterministic CS matrix. Literature [15] constructed a deterministic measurement matrix based on Bose balanced incomplete block designs, and an embedding operation was utilized for more flexibility. In [16], the sparse fast Fourier transform was applied to the deterministic measurement matrix. Besides, a deterministic construction method of bipolar measurement matrices based on a binary sequence family was presented in [17].

However, since the measurement matrix used in CS-based multi-target localization is a constrained binary sparse matrix determined by the sensors' locations, the design methods above seem inappropriate in this context. At the same time, many works achieved sensing matrix optimization by improving its property. In [18], singular value decomposition was used to improve the sensing matrix, which ensured the sparsity of the original signal and the RIP of the new sensing matrix, but its computational cost was rather large. Literature [19] optimized the sensing matrix by multiplying it by the product of the orthogonal form of the sensing matrix and its pseudo-inverse. The optimization method used in [20] was based on an equiangular non-coherent unit norm tight frame, which used a random matrix as the initial preprocessing matrix; by relaxing the Gram matrix iteratively and applying matrix algebraic decomposition, an optimal Frobenius norm tight frame was attained as the optimal sensing matrix. In essence, these works do the optimization by multiplying the sensing matrix by a transformation matrix generated based on different principles. However, their effectiveness may deteriorate when measurement noise is considered, because multiplying by the transformation matrix lowers the SNR within the noisy measurements, leading to a higher probability of wrong estimation. Literature [21] selected the criterion of mutual coherence and improved the sensing matrix by solving a convex optimization problem, thus avoiding an extra transformation matrix.

III. SYSTEM MODEL AND PROBLEM FORMULATION

In this section, we first establish the system model within which the multi-emitter localization is achieved, and then formulate the problem of sensing matrix optimization.

        3.1 System Model

Above all, we consider the problem of multi-emitter localization in a two-dimensional square area with side length L, covered by a WSN. The targets to be localized are randomly distributed in the area of interest and are assumed to be stationary radio transmitting devices with omnidirectional antennas. As shown in Figure 1, the whole square area is divided into N = n² sub-squares, numbered orderly from 1 to N. If a target (denoted by the filled red star) falls into a certain sub-square, we approximately take that sub-square's centre coordinate as the target's location; for instance, if a target is located in the n_T-th sub-square, its coordinate (x_T, y_T) is given by:

        Figure 1. Targets to be localized in the area of interest.

        Figure 2. Flowchart of the proposed sensing matrix optimization method.

Within the area of interest, we assume that there exist K stationary emitters, whose location set is {t_k}_{k=1}^{K}, where K ≪ N. M sensors are deployed at the locations {r_m}_{m=1}^{M} to sense the signals from the targets, where M < N. Considering that the signals from all targets are aggregated at the sensors, we model the RSS measurement at each sensor as the sum of the RSS of the signals from all targets. In practice, a series of consecutive RSS samples is collected at each sensor. We average these samples and treat the mean as the final RSS measurement to reduce the interference from the signal's time-varying feature, which is given by:

where p_k represents the transmitting power of the k-th target, ε_m denotes the measurement noise at the m-th sensor, and f(r, t) is an energy decay function using the path loss model in [22], which can be expressed by:

where d = ‖r − t‖_2 represents the distance between the target and the sensor, d_0 denotes the reference distance, and γ signifies the path loss coefficient that depends on the surroundings. Usually, the value of γ ranges from 2 to 5.
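Since the exact decay expression from [22] is not reproduced here, the following minimal Python sketch assumes a standard power-law form f(r, t) = (d0/d)^γ with d = ‖r − t‖_2; the function name and the clipping at d0 are illustrative assumptions, not the paper's definition (the paper's own simulations are in MATLAB, Python is used here only for illustration).

```python
import numpy as np

def energy_decay(r, t, d0=1.0, gamma=3.0):
    """Hypothetical power-law decay f(r, t) = (d0 / d)^gamma with d = ||r - t||_2."""
    d = np.linalg.norm(np.asarray(r, float) - np.asarray(t, float))
    d = max(d, d0)          # clip distances below the reference distance d0
    return (d0 / d) ** gamma
```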

Here, to simplify the expression of the signal model, we rewrite (2) in matrix form as:

where s = [s_1, s_2, ..., s_M]^T and ε = [ε_1, ε_2, ..., ε_M]^T. Our goal is to estimate the locations of the K targets with the noisy measurement vector y = [y_1, y_2, ..., y_M]^T and the deterministic sensor locations.

        3.2 Problem Formulation

As illustrated above, the area of interest is divided into N sub-squares, whose centre coordinates are denoted by {θ_j}_{j=1}^{N}, where θ_j = [x_j, y_j]^T. Here, define the sensing matrix D ∈ R^{M×N}; with the sensor locations settled, the element in the i-th row and j-th column of D can be denoted by:

Define w = [w_1, w_2, ..., w_N]^T; if the k-th target falls into the j-th sub-square, we have t_k = θ_j and w_j = p_k, else w_j = 0. Then, (4) can be rewritten as:

The vector w is sparse because the number of its nonzero elements K ≪ N. Hence, the problem of location estimation is transformed into one of sparse signal recovery, which amounts to recovering the sparse signal w by solving the following ℓ0-minimization problem with the known matrix D and the measurement vector y.

Once the indexes of the nonzero elements in w are estimated, the targets' locations {t_k}_{k=1}^{K} are determined.

There exist many signal recovery algorithms, such as Basis Pursuit (BP) [23], Orthogonal Matching Pursuit (OMP) [24], and Sparse Bayesian Learning (SBL) [25], to tackle this problem. Nevertheless, high similarity among the columns of D would severely confuse any recovery algorithm and lead to poor localization results. We notice that, in the context of CS-based multi-target localization, the columns of the sensing matrix are highly correlated due to the sensor deployment and the distribution of candidate target locations. Therefore, the sensing matrix used in this scenario should be optimized for better localization performance. In CS theory, the sensing matrix D can be expressed by:

where Φ is a 0-1 sparse measurement matrix decided by the sensor deployment. Each row of Φ has only one nonzero element, and each column has at most one nonzero element. Ψ is the sparse dictionary, and its element in the i-th row and j-th column is f(τ_i, θ_j), where τ represents the potential sensor location. Theoretically, the size of Φ can be M × ∞ and that of Ψ can be ∞ × N, because the number of potential sensor locations is infinite: sensors are free to be placed anywhere within the target area. The element φ_ij = 1 of Φ means that the i-th sensor is located at the j-th potential site. Hence, (8) signifies that D is formed by choosing M rows from Ψ. In this regard, we restrict the number of candidate sensor positions to a finite value N and arrange them orderly like the grids shown in Figure 1. The sizes of Φ and Ψ are thus narrowed to M × N and N × N, respectively. Therefore, we need to find the distribution of the nonzero elements of the measurement matrix that alleviates the column similarity in D to the greatest extent.
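To make the structure D = ΦΨ concrete, the sketch below (reusing the energy_decay helper assumed earlier) builds Ψ over the N grid centres, encodes a sensor deployment as a binary row-selection matrix Φ, and forms D = ΦΨ; the row-major grid numbering and all helper names are illustrative assumptions.

```python
import numpy as np

def grid_centres(n, side):
    """Centre coordinates of the N = n*n sub-squares, numbered row by row (assumed)."""
    step = side / n
    axis = (np.arange(n) + 0.5) * step
    return np.array([[x, y] for y in axis for x in axis])     # shape (N, 2)

def build_psi(candidates, grid, d0=1.0, gamma=3.0):
    """Sparse dictionary Psi with psi[i, j] = f(tau_i, theta_j)."""
    return np.array([[energy_decay(c, g, d0, gamma) for g in grid] for c in candidates])

def build_phi(sensor_sites, N):
    """Binary measurement matrix Phi: one nonzero per row, selecting a candidate site."""
    M = len(sensor_sites)
    Phi = np.zeros((M, N))
    Phi[np.arange(M), np.asarray(sensor_sites) - 1] = 1.0     # 1-based site indices
    return Phi

# Example: D = Phi @ Psi picks the M rows of Psi that correspond to the deployed sensors.
n, side, M = 10, 10.0, 30
grid = grid_centres(n, side)                  # candidate target locations and sensor sites
Psi = build_psi(grid, grid)
rng = np.random.default_rng(0)
sites = rng.choice(np.arange(1, n * n + 1), size=M, replace=False)
D = build_phi(sites, n * n) @ Psi             # M x N sensing matrix
```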

IV. A SENSING MATRIX OPTIMIZATION METHOD BASED ON GA-TLS FOR MULTI-TARGET LOCALIZATION USING COMPRESSED SENSING

The problem formulated in Section III is a constrained combinatorial optimization problem. In this section, we propose a hybrid metaheuristic algorithm named GA-TLS to address it.

Above all, the t%-averaged mutual coherence is adopted as the optimality index to evaluate different sensing matrixes. Then, the threshold t is derived by classifying the off-diagonal elements of the Gram matrix with K-means clustering. Next, GA-TLS is utilized to address the constrained combinatorial optimization problem and thereby optimize the sensing matrix. At last, the accuracy of CS-based multi-emitter localization can be improved with the optimized sensing matrix. Figure 2 demonstrates the flowchart of the proposed sensing matrix optimization method.

        Figure 3. Similarity between the 85-th column and other columns in D.

        4.1 Quantitative Index for Sensing Matrix

Above all, we seek an appropriate index to quantify the "quality" of different sensing matrixes in supporting sparse signal recovery. A well-known index is the mutual coherence μ, whose definition is given as follows [10].

Definition 1. For a matrix D, the similarity between its i-th and j-th columns can be expressed by:

Thus, the mutual coherence of D is denoted by:

which signifies the largest absolute normalized inner product among all inner products between any two different columns of D. With this index, we can quantify the strongest column similarity within a certain matrix and judge its vulnerability: in the worst case, the indexes of the nonzero elements in w include those of the columns holding the strongest similarity, and the recovery algorithms used in CS would then be severely confused in reconstructing the sparse signal. Under such circumstances, the larger μ{D} is, the worse the reconstruction result would be.
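As a quick reference, a small sketch of Definition 1 and the resulting mutual coherence, computed from the column-normalized Gram matrix (Python, illustration only):

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute normalized inner product between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # column-normalize
    G = np.abs(Dn.T @ Dn)                               # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)                            # ignore self-products
    return G.max()
```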

The concept of mutual coherence can also be illustrated by referring to the Gram matrix, which is given by:

According to [10, 26], many recovery algorithms like BP and OMP are guaranteed to find the exact solution to the problem in (7) once the following inequality is satisfied:
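The inequality itself is not reproduced above; the coherence-based guarantee usually cited from [10, 26], and presumably the one intended here, is

‖w‖_0 < (1/2)(1 + 1/μ{D}).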

With the given sparse vector w, the smaller μ{D} is, the more likely this inequality can be satisfied, and the better the estimation result would be. From this perspective, the mutual coherence may be treated as a quantitative index to evaluate the sensing matrix.

However, since the index of mutual coherence captures only the maximum column similarity within D, it reflects the worst case in recovering the sparse signal and thus may not be a fair evaluation criterion of the actual "quality" of D. If, instead of focusing only on the maximum value, we refer to an averaged one, the index reflects the real performance of D in promoting exact signal recovery more reasonably. Hence, we turn to the t%-averaged mutual coherence [11].

According to [11], the definition of the t%-averaged mutual coherence is given as follows.

Definition 2. For a matrix D, its t%-averaged mutual coherence is defined as the average of the top t% of absolute normalized inner products between different columns in D; specifically, let

where G_t is the set of the top t% of off-diagonal entries of G.

(13) filters out the relatively small elements in G and targets only the larger ones. The meaning of μ_t%{D} varies with the threshold: when t = 100, it is simply the average of all off-diagonal elements of G. As t decreases, μ_t% gradually increases, and in the limit where only the single largest off-diagonal entry is retained, μ_t% = μ. In the following, the t%-averaged mutual coherence is adopted as the quantitative index on which the sensing matrix optimization is based. One may ask, however, what value of t fits our optimization problem and how to select it; we discuss the selection of t in the next part.
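A direct implementation of Definition 2 can be sketched as follows (same numpy conventions as the earlier snippets; keeping at least one entry when t% of the off-diagonal count rounds to zero is an added assumption):

```python
import numpy as np

def t_percent_avg_coherence(D, t):
    """Average of the top t% of absolute normalized inner products between columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    off = G[~np.eye(G.shape[0], dtype=bool)]            # all off-diagonal entries
    k = max(1, int(np.ceil(off.size * t / 100.0)))      # number of top-t% entries retained
    return np.sort(off)[-k:].mean()
```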

        4.2 Threshold t Selection by K-means Clustering

The problem of threshold t selection amounts to deciding which part of G should be targeted and averaged. In the context of multi-emitter localization using the CS approach, we find that the column similarity of D has the following feature: suppose d_i is the i-th column of D; for the columns that are "close" to it, the similarity is high, while for those "far away" from the i-th column, the similarity stays relatively low. The "distance" between columns here is calculated with respect to their indexes; e.g., the "distance" between the i-th and j-th columns is given by:

where c_i and c_j represent the 2-D coordinates resolved from the indexes i and j, respectively, and the coordinate c_i is expressed by:

To make this clearer, we represent the similarity between the i-th column d_i and the other columns in D with the colour depth in Figure 3. To reflect the real characteristics of the scenario, D is obtained in a random manner with n = 10, N = 100, M = 30, i = 85, and the targets and sensors randomly distributed within the area. As presented in Figure 3, a lighter colour means a stronger similarity between columns and vice versa. The grid where the diamond stays denotes d_85; for grids representing other columns, the closer they are to the diamond, the stronger the similarity. For instance, the coordinates of the red cross and the red square are [7,5] and [4,7], standing for d_47 and d_64 respectively by (15), and the colour around the red square is relatively lighter than that around the cross, which means that a shorter distance leads to a stronger similarity.
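The index-to-coordinate convention of (15) is not written out above, but the examples (47 ↔ [7,5], 64 ↔ [4,7] with n = 10) are consistent with a row-major, 1-based numbering; a sketch under that assumption:

```python
import numpy as np

def index_to_coord(j, n):
    """Resolve grid index j (1-based, row-major, as inferred from the Figure 3 examples)
    into 2-D coordinates [x, y]; with n = 10, j = 47 maps to [7, 5] and j = 64 to [4, 7]."""
    x = (j - 1) % n + 1
    y = (j - 1) // n + 1
    return np.array([x, y])

def column_distance(i, j, n):
    """The 'distance' of eq. (14) between the i-th and j-th columns of D."""
    return np.linalg.norm(index_to_coord(i, n) - index_to_coord(j, n))
```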

From Figure 3, there exists a cluster of neighboring columns that hold a high correlation with d_85, and this feature is shared by all columns of D, so we can infer that the entries in a certain off-diagonal part of G are quite large and would greatly deteriorate the CS approach's performance. Therefore, these entries should be targeted and improved, and the threshold t should be set so as to pick them out. In Figure 3, three column clusters are distinguished by different colours: the "light area" in yellow, the "dark area" in deep blue, and the "middle area" between them. We can infer that the off-diagonal elements of G can likewise be classified into three categories, among which the category holding the highest averaged value determines the scope to be optimized. To this end, the K-means clustering algorithm [27], capable of obtaining a nearly optimal clustering result very quickly for 1-dimensional data [28], is utilized to classify all the off-diagonal entries of G and thereby decide the threshold t.

Above all, we classify all the off-diagonal entries of G into C classes, S_1, S_2, ..., S_C. Each class S_c has one focal point o_c. We adopt the Euclidean distance as the criterion to evaluate the similarity between sample points. By the least squares method and the Lagrangian principle, o_c should equal the average of all samples in S_c to minimize the sum of squared distances between the focal point and all points in S_c, which can be expressed by:

Then we define the clustering criterion function as:

which signifies the sum of squared distances between all samples and their focal points. According to the preceding analysis, we set C = 3, and we minimize the clustering criterion function as follows.

First, choose C elements from all the off-diagonal entries of G and take them as the initial focal points. Second, assign the remaining samples to the categories holding different focal points by the criterion of minimum distance, which yields C clusters. Next, update each focal point with the mean value of the entries in its cluster, and recalculate the clustering criterion function J. If both the set of focal points and J remain unchanged, we consider all sample points well classified; otherwise, we return to the second step and iterate the procedure above. At last, we have:

where S* is the classified collection holding the highest averaged column correlation.
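Equation (18) is not reproduced above; assuming that t is taken as the percentage of off-diagonal Gram entries falling into the cluster S* with the highest mean, a 1-D K-means sketch reads (all names and the stopping rule are illustrative assumptions):

```python
import numpy as np

def derive_threshold_t(D, n_clusters=3, n_iter=100, seed=0):
    """Cluster the off-diagonal entries of the normalized Gram matrix into C = 3 groups
    and return t as the percentage occupied by the cluster with the highest mean."""
    rng = np.random.default_rng(seed)
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    samples = G[~np.eye(G.shape[0], dtype=bool)]
    centers = rng.choice(samples, n_clusters, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([samples[labels == c].mean() if np.any(labels == c)
                                else centers[c] for c in range(n_clusters)])
        if np.allclose(new_centers, centers):       # focal points unchanged: stop
            break
        centers = new_centers
    best = np.argmax(centers)                       # cluster with highest averaged correlation
    return 100.0 * np.sum(labels == best) / samples.size
```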

4.3 Sensing Matrix Optimization Using GA-TLS

As discussed before, for a better localization result, μ_t%{D} should be minimized by optimizing the sensing matrix. Considering the constraints posed on Φ, the sensing matrix optimization problem can be transformed into an NP-hard constrained combinatorial optimization problem:

Algorithms used to address this problem can be categorized into exact and approximate algorithms. Though exact algorithms give the optimal solution, they do so at the expense of unreasonable computational time due to the exponential search space. Approximate algorithms give a quasi-optimal solution within reasonable computational time and can be further divided into heuristic and metaheuristic algorithms. Heuristic algorithms, like Local Search (LS), are efficient in exploring the neighboring solution space but easily get caught in a local optimum. Metaheuristic algorithms are able to escape from local optima and search extensively for the global optimum. For example, GA, a population-based metaheuristic algorithm first proposed by Holland [29] under the inspiration of the "survival of the fittest", produces a quasi-optimal solution by combining good solutions with higher fitness: in each iteration, individuals with higher fitness have a larger chance to survive than those with lower fitness, leading to the final ideal solution. Since the GA is not good at deeply searching for the optimum in a certain area, we enhance it by adopting a hybrid strategy.

Combining different algorithms from the metaheuristic, heuristic, or exact families has proved to be good practice in recent years, as they complement each other. In the following, a hybrid metaheuristic algorithm called GA-TLS is proposed, where TLS is used to strengthen the local search capability of the GA. Compared to the original LS, TLS achieves higher time efficiency in the optimization process.

The GA-TLS starts with a randomly generated population, with each individual representing a measurement matrix. Since the size of the measurement matrix Φ is quite large, for simplicity, we denote each measurement matrix by the "gene sequence" x ∈ R^M saved in each individual. Each gene sequence contains M different integers ranging from 1 to N; if x_m = n, then φ_mn = 1, else φ_mn = 0. In this way, various measurement matrixes are condensed into different gene sequences, and the equality constraint is also satisfied. Suppose the population size is N_p; then its gene pool can be expressed by

Next, a loop comprised of fitness calculation, tabu local search, parent selection, crossover, and mutation is iteratively executed to update the population until the end criterion is met. We choose the top Q% of individuals in terms of fitness and conduct the TLS on them. The "top" is stressed because an individual with higher fitness is more likely to survive, so the TLS effort spent on it is not wasted. The larger Q is, the better the optimization effect, but the computational cost also grows, especially when the fitness calculation is complex. Meanwhile, if Q is rather small, the enhancement becomes less effective. To strike a balance, we set Q = 10 in this paper.

In the LS algorithm, the definition of the solution neighborhood is crucial. For an individual gene sequence x, its neighborhood is given by B = {x′ | x′_i = x_i, x′_j ∈ A, i = 1, ..., j−1, j+1, ..., M}, A = {y | 1 ≤ y ≤ N, y ≠ x_j, y ∈ Z}, where the gene site j is randomly decided. The original LS operator iteratively searches the neighborhood space for the local optimal solution until the candidates in the neighborhood are exhausted. Although the result of GA-LS is satisfying, the computational cost is rather expensive, especially when the fitness calculation is complex, which is given by:

In this regard, we analyze the characteristics of the fitness function and propose the GA-TLS algorithm, in which TLS achieves higher time efficiency in the optimization process than the original LS.
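The fitness formula itself is not shown above, but Section 5.2 states that the GA fitness is 1 − μ_t%{D}; a sketch under that reading, reusing build_phi and t_percent_avg_coherence from the earlier snippets (x is the gene sequence):

```python
def fitness(x, Psi, t):
    """F(x) = 1 - mu_t%{Phi Psi}, assuming the reading from Section 5.2; Psi is the
    N x N dictionary and x the M distinct 1-based candidate-site indices."""
    D = build_phi(x, Psi.shape[0]) @ Psi
    return 1.0 - t_percent_avg_coherence(D, t)
```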

The entry x_j determines the location of the nonzero element in the j-th row of Φ, and thus decides which row of Ψ is involved in the fitness calculation. With the other entries in x unchanged, the combination of the other selected M − 1 rows of Ψ is settled. The off-diagonal entry g_{m,n} (m ≠ n) of the Gram matrix G is given by

By the Cauchy-Schwarz inequality, g_{m,n} is maximized when the two involved rows of Ψ are proportional (the elements of Ψ are positive); an empirical study shows that when such proportionality cannot be attained, g_{m,n} is maximized when the ratio between the rows equals the best ratio lying in the range (min{R}, max{R}), and g_{m,n} becomes smaller as the ratio moves away from this range. Taking all the off-diagonal entries of G into account, we conclude that the greater the difference between ψ_{x_j} and the other M − 1 selected rows, the smaller the sum of the off-diagonal entries of G, thus bringing down μ_t%{ΦΨ} and raising the fitness F(x).

Based on the conclusion above, we generate a tabu list for the TLS step so that some ineffective solutions can be ruled out of the solution neighborhood. In the multi-emitter localization scenario, the difference between ψ_{x_i} and ψ_{x_j} grows as the distance between the x_i-th and x_j-th candidate sensor locations increases, and vice versa. By the predefined deployment of candidate sensor locations, the indexes of the locations closest to the x_i-th candidate location are the (x_i − 1)-th, (x_i + 1)-th, (x_i − n)-th and (x_i + n)-th, so the tabu list is formed by

An x′ with x′_j ∈ T is less likely to bring higher fitness, so we rule such candidates out of the solution neighborhood B; by doing this, some ineffective solutions are omitted and the computational cost is saved.
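A sketch of the reduced TLS neighborhood under this reading of the tabu list (grid neighbours ±1 and ±n of the currently selected sites are excluded, together with already-used indices; the exact construction in the omitted equation may differ):

```python
def tls_candidate_values(x, n, N):
    """Values allowed at the chosen gene site after applying the tabu list:
    exclude grid neighbours of all currently selected sites and the sites themselves."""
    tabu = set()
    for xi in x:
        for cand in (xi - 1, xi + 1, xi - n, xi + n):
            if 1 <= cand <= N:
                tabu.add(cand)
    used = set(x)
    return [y for y in range(1, N + 1) if y not in tabu and y not in used]
```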

Parental gene sequences are selected from the population according to individual fitness, which is decided by the fitness function F(x). Specifically, the survival probability is proportional to the value of F(x). Hence, individuals with higher fitness are more likely to be chosen as parents.

Here, as illustrated before, the threshold t is obtained through the K-means clustering algorithm, with a slight modification: to make the threshold more representative, we calculate t_1, t_2, ..., t_{N_p} from D_1, D_2, ..., D_{N_p}, respectively, and average them to obtain the final threshold t.

After the parents are selected, they pair up randomly with probability P_c for reproduction, during which crossover between parental gene sequences happens. The crossover mechanism we adopt is conditional multi-point crossover: several gene sites are randomly selected under the constraint that the genes chosen on one side are not contained in the gene sequence of the other, so the inequality constraint in (19) remains satisfied. The parents then exchange the genes at these sites, producing two offspring. For those parents who do not pair up, the offspring directly copy their genes. The crossover process enables parental features to be passed on to the offspring.

The parent selection and crossover operators pick out the preferred individuals in line with the principle of "survival of the fittest". However, this tends to lower the population diversity, because after only a few generations the majority of the population would share similar gene sequences. Such premature convergence would keep the algorithm from finding the optimal solution. Therefore, a disturbance operator named mutation is added after the crossover within the loop, which triggers individual variation with a small probability P_m. To produce the variation, we randomly choose a gene site on the sequence and replace the original integer with a different one from 1 to N. Besides, the end criterion is defined as the number of evolution generations reaching the threshold e_max.
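A minimal sketch of the conditional multi-point crossover and the mutation operator described above; restricting the mutated gene to values not already present in the sequence is an added assumption made here to preserve the distinctness of the M entries:

```python
import numpy as np

def conditional_crossover(p1, p2, n_points, rng):
    """Swap genes only at sites where the exchanged values do not already appear
    in the other parent, so each offspring keeps M distinct indices."""
    c1, c2 = list(p1), list(p2)
    swapped = 0
    for j in rng.permutation(len(p1)):
        if swapped == n_points:
            break
        if p1[j] not in p2 and p2[j] not in p1:
            c1[j], c2[j] = p2[j], p1[j]
            swapped += 1
    return c1, c2

def mutate(x, N, Pm, rng):
    """With probability Pm, replace one randomly chosen gene by a different value in 1..N
    that does not already occur in the sequence (distinctness kept by assumption)."""
    x = list(x)
    if rng.random() < Pm:
        j = rng.integers(len(x))
        choices = [v for v in range(1, N + 1) if v not in x]
        x[j] = int(rng.choice(choices))
    return x
```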

Algorithm 1. GA-TLS based sensing matrix optimization method.
1: Input: Np, Pc, Pm, emax, e = 0.
2: Randomly generate the gene pool X of the initial population.
3: Derive G1, G2, ..., GNp from X by (8) and (11).
4: Calculate t1, t2, ..., tNp by (18) after applying the K-means clustering, and obtain the threshold t by averaging t1, t2, ..., tNp.
5: repeat
6:   Calculate the μt%-based fitness of all individuals in X.
7:   Choose the top Q% of individuals in terms of fitness and conduct the TLS on them.
8:   Select parental individuals from the population by the μt%-based fitness of all individuals.
9:   Pair the parental individuals up with probability Pc and execute the conditional crossover operator.
10:  Produce the individual variation with probability Pm.
11:  Update the population X.
12:  e ← e + 1
13: until e ≥ emax.
14: Choose the best individual x* with the highest fitness from X and resolve the optimized measurement matrix Φ* from x*.
15: Output: D* = Φ*Ψ.

With the ongoing loop of parent selection, crossover, and mutation, the quasi-optimal solution x* is obtained from the latest population once the end criterion is satisfied, and the optimized sensing matrix D* is derived at last. The parameters N_p, P_c and P_m are vital to the GA and should be appropriately tuned beforehand. The whole procedure of the proposed sensing matrix optimization method is summarized in Algorithm 1.

In the proposed sensing matrix optimization algorithm, the K-means clustering and the hybrid metaheuristic optimization process account for the majority of the computational cost. For the K-means, the computational complexity is O(CN²L), where L is the number of iterations, N² is the size of the data to be classified, and C = 3 here. Ordinarily, L ≪ N², so its complexity can be written as O(N²). For the GA, since the population size and the number of iterations remain constant, its complexity mainly depends on the fitness calculation step, where the heapsort dominates, so the computational complexity of the GA is O(N² log₂ N). In GA-LS, an extra round of local search in the solution neighborhood is added, leading to a complexity of O(N³ log₂ N). In the TLS operator, about αM neighboring solutions are ruled out by the tabu list on average, leaving N − αM candidates, so the computational complexity of GA-TLS is O((N − αM)N² log₂ N), where the constant α ∈ (1, 5).

V. NUMERICAL RESULTS

In this section, extensive simulations are conducted to verify the effectiveness and robustness of the proposed sensing matrix optimization method for CS-based multi-target localization.

        5.1 Simulation Setup

The simulation platform used in this paper is MATLAB 2016b. Part of the simulation parameter setting is summarized in Table 1. By default, we set the target area to a 10 m × 10 m square region and divide it into N = 100 sub-squares with a side length of 1 m, so n = 10. K = 4 targets are randomly distributed within the area of interest, and M = 30 sensor locations are selected from S = 100 uniformly distributed candidate locations, as decided by the measurement matrix used. In addition, the noise added to the RSS measurements is modelled as a vector ε obeying a zero-mean Gaussian distribution N(0, σ²), with the elements of ε independent of each other. In our simulations, the SNR is calculated as 10 lg(‖Dθ‖₂²/(Mσ²)) in dB, and its default value is 20 dB. To effectively compare the above-mentioned optimization methods, we define the averaged localization error as the evaluation index, which is given by:

        Table 1. Simulation parameters.

where t_k and t̂_k are the real and estimated locations of the k-th target, respectively. When calculating Ave.Err, for fairness of comparison, we find the one-to-one correspondence between the real locations and the estimated ones that minimizes Ave.Err. All numerical results shown in the performance comparison are averages over 500 Monte-Carlo simulations.
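A sketch of this error metric with the error-minimizing one-to-one matching found by brute force over permutations (adequate for small K such as the default K = 4; a Hungarian-type solver would scale better):

```python
import numpy as np
from itertools import permutations

def avg_localization_error(true_locs, est_locs):
    """Average localization error under the best one-to-one matching of estimates to targets."""
    true_locs = np.asarray(true_locs, float)
    est_locs = np.asarray(est_locs, float)
    K = len(true_locs)
    best = np.inf
    for perm in permutations(range(K)):
        err = np.mean(np.linalg.norm(true_locs - est_locs[list(perm)], axis=1))
        best = min(best, err)
    return best
```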

        5.2 Characteristic Analysis of the Optimized Sensing Matrix

Under the default setting, the threshold t derived through the procedure illustrated above is 9.87. Therefore, the fitness function used in the GA is 1 − μ_9.87%{D}.

The simulation result in Figure 4 shows the performance comparison among GA, GA-TLS, and GA-LS. Each curve is obtained by averaging 20 experimental results, and the average running times of all algorithms over 200 iterations are also presented. With increasing evolution generations, μ_t%{D} keeps decreasing under the optimization of all three algorithms until the end of the iterations. As shown in Figure 4, the convergence speed of GA is much slower than that of GA-LS and GA-TLS. Besides, GA-LS and GA-TLS achieve better optimization results than GA at the cost of more computational time. Relative to the GA baseline, GA-LS brings an improvement of about 0.023 at an extra cost of 235.1 s, whereas the proposed GA-TLS improves the optimization result by 0.015 with only an extra 32.2 s, thus achieving higher time efficiency. Readers may make a trade-off between the optimization result and the time cost according to the specific scenario. In our context, we incorporate GA-TLS into the sensing matrix optimization method and evaluate its performance in the following.

Figure 5 clearly demonstrates the improvement brought by the proposed sensing matrix optimization method. The orange and blue bars show the distribution of the off-diagonal entries of G before and after the optimization, respectively. Evidently, before optimization, the high-valued elements of G take up a larger share of the whole, and the majority of the off-diagonal elements of G are "pushed" to the low-value area by GA-TLS. The comparison shown in Figure 5 proves that the column similarity in D is indeed cut down, thus improving the accuracy of target localization.

Figure 4. Performance comparison among GA, GA-TLS and GA-LS in optimizing μt%{D}.

        Figure 5. Histogram of the off-diagonal entries of G before and after optimization.

        Figure 6. Average localization error versus signal-to-noise ratio.

        Figure 7. Average localization error versus target number K based on OMP.

        Figure 8. Average localization error versus measurement number M based on OMP.

        5.3 Performance Comparison Among Different Sensing Matrix Optimization Methods

In this section, we compare the performance of different sensing matrix optimization methods by testing their optimized sensing matrixes in three CS approaches (BP, OMP, and SBL). Two typical traditional sensing matrix optimization methods [19, 20] are included in the comparison; we use the shorthand ORTH for the former and UNTF for the latter, reflecting their content. In the first simulation, we fix the other parameters and vary the SNR from 10 to 40 dB. Based on BP, OMP, and SBL with no optimization, respectively, the contrast between the proposed sensing matrix optimization method and the two traditional methods is presented in Figure 6. With increasing SNR, all CS approaches obtain better localization results. One interesting conclusion is that the conventional optimization methods, ORTH and UNTF, only work with CS algorithms like OMP and fail to improve algorithms like BP or SBL: in (a) and (c), with ORTH and UNTF, the localization performance of BP and SBL is the same as or even worse than that without optimization, and only (b), tested with OMP, shows that these two methods help improve the localization accuracy. In contrast, the proposed optimization method effectively decreases Ave.Err under all SNR conditions and for all three CS approaches. Another conclusion is that increasing SNR sharply improves the performance of ORTH and UNTF, as shown in (b) and (c). In particular, in (b), when SNR > 30 dB, ORTH and UNTF help OMP reach a better localization accuracy than the proposed method does. This observation matches the analysis in Section II: both ORTH and UNTF optimize the sensing matrix by multiplying it by a transformation matrix, and when the noise level cannot be ignored, the optimization effect may deteriorate because multiplying by the transformation matrix lowers the SNR within the RSS measurements. Meanwhile, the robustness of the proposed method is verified: with it, the localization performance of the CS approaches varies less radically as the SNR decreases to 10 dB, especially when tested with the OMP algorithm.

Next, we focus on the impact of the number of targets on localization accuracy. Given the former simulation result, we examine the average localization error of OMP under different target numbers with the other parameters unchanged. In Figure 7, as the number of targets goes up from 1 to 7, the number of nonzero elements in the target location vector to be recovered gets larger, making it more difficult to estimate the targets' locations. Hence, there is an upward trend in the Ave.Err of OMP. All three optimization methods help lower the Ave.Err of OMP; nevertheless, the proposed optimization method keeps the CS localization approach at a relatively lower Ave.Err compared to the other two methods as the target number varies. When the target number is relatively small, the performance gap among the three optimization methods narrows, while when K = 7, an extra accuracy improvement of 0.2 m can be achieved by using the proposed optimization method.

        Figure 9. Average localization error of different CS approaches as a function of threshold t with the proposed sensing matrix optimization method.

In the third simulation, we only tune the measurement number M and keep the other parameters constant. As can be seen in Figure 8, with an increasing measurement number, more information about the targets' locations is gathered, leading to a more precise localization result for the CS approach. The effect brought by the proposed sensing matrix optimization method is similar to what is demonstrated in Figure 7: the OMP algorithm benefits more from it than from ORTH and UNTF under different measurement numbers. We also find that, to achieve the same localization accuracy, the CS approaches with the proposed optimization method need fewer sensors to collect sufficient RSS data; in general, a reduction of about 60% can be gained by the GA-TLS based sensing matrix optimization algorithm compared to the case with no optimization.

At last, we evaluate the correctness of the threshold t selection. As illustrated before, K-means clustering is utilized to decide the threshold t from a statistical perspective, which yields t = 9.87 under the default setting. In Figure 9, for different thresholds t, the optimized sensing matrix shows fluctuating performance in promoting localization accuracy. Still, it can be seen that when t = 9.87, all three CS approaches obtain better localization results than in the other cases. Considering that the number of simulations is limited, the curves may not reflect the exact trend as t varies from 1 to 99, but it can be judged from Figure 9 that the optimal threshold lies around 9.87. Hence, the threshold selection method adopted here can effectively decide the proper threshold.

VI. CONCLUSION AND FUTURE WORK

In this paper, we study the sensing matrix optimization problem under the CS-based localization framework in WSNs. We model it as a constrained combinatorial optimization problem and propose a hybrid metaheuristic algorithm, GA-TLS, to address it. During this process, we iteratively improve the properties of the measurement matrix while keeping it a constrained binary sparse matrix. The t%-averaged mutual coherence serves as the optimality index to evaluate the quality of various sensing matrixes. Besides, the K-means clustering algorithm decides the appropriate threshold t based on the sensing matrix's characteristics in this scenario.

This paper shows that, compared with traditional sensing matrix optimization methods for CS-based localization, the proposed method avoids introducing an extra transformation matrix that may lower the SNR within the RSS measurements, and therefore performs better in improving the localization accuracy. Extensive simulations conducted in the paper verify the effectiveness and robustness of the proposed sensing matrix optimization method. In future work, one may consider whether the exact relation between the threshold t and the CS performance can be derived by rigorous theoretical analysis. If this problem is settled, the optimal t can be obtained, and the CS performance can be improved to the greatest extent.
