JIANG Wen, FU Xiongjun, CHANG Jiayun, and QIN Rui
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
Abstract: As a core part of the electronic warfare (EW) system, de-interleaving is used to separate interleaved radar signals. As interleaved radar pulses become more complex and denser, intelligent classification of radar signals has become very important. The self-organizing feature map (SOFM) is an excellent artificial neural network, which has huge advantages in the intelligent classification of complex data. However, the de-interleaving process based on SOFM faces two problems: the initialization of the map size relies on prior information, and the network topology cannot be adaptively adjusted. In this paper, an SOFM with self-adaptive network topology (SANT-SOFM) algorithm is proposed to solve these problems. The SANT-SOFM algorithm first proposes an adaptive proliferation algorithm to adjust the map size, so that the initialization of the map size no longer depends on prior information but is gradually adjusted with the input data. Then, structural optimization algorithms are proposed to gradually optimize the topology of the SOFM network in the iterative process, constructing an optimal SANT. Finally, the optimized SOFM network is used for de-interleaving radar signals. Simulation results show that SANT-SOFM achieves excellent performance in complex EW environments, and the probability of obtaining the optimal map size exceeds 95% in the absence of prior information.
Key words: de-interleaving, self-organizing feature map (SOFM), self-adaptive network topology (SANT).
With the development of electronic warfare (EW), the emission signals intercepted by the EW system have been interleaved into complex pulse trains. To separate these trains effectively, an advanced de-interleaving theory has become critical.
The literature contains many de-interleaving approaches, which can be divided into two categories: pulse repetition interval (PRI) analysis and feature clustering. Mahmoud [1] introduced PRI analysis into the de-interleaving process. Mostafa [2] introduced a technique based on cumulative differences (C-DIF) of PRI histograms, and Milojevic [3] and Mostafa [4] improved this method by using sequential differences (S-DIF) of histograms, which require less calculation. On the other hand, some scholars utilized hidden Markov models [5] and pulse correlation [6] for this problem. In the latest research, Torun and Orhan [7] proposed a method to de-interleave signals with stagger PRI and dwell-switch PRI types, which has some reference value for the special de-interleaving process.
With the appearance of complex PRI patterns in evolving modern EW environments, it has become increasingly difficult to de-interleave complex pulse trains using PRI analysis. Thus, clustering algorithms depending on individual parameters of radar pulses have become indispensable to assist the de-interleaving process. Liu and Zhang [8] proposed an improved algorithm based on clustering and S-DIF, which can effectively analyze and estimate radar signals' PRI in complex EW environments. Amini and Amineh [9] introduced multi-density clustering into the de-interleaving process, which reduced the dependence of clustering results on the initial prototype. Notably, the multi-prototype clustering technique proposed by Silva and Wunsch [10] also achieved good results.
Compared to conventional clustering approaches, the self-organizing feature map (SOFM) [11,12] has been a more effective method for radar pulse clustering. SOFM is a topology-preserving mapping network, which is used to simulate the brain's regional functions, self-organization and neuronal excitability. Granger and Savaria [13] introduced SOFM into the de-interleaving process and evaluated its performance. Based on [13], Ata'a [14] and Gencol [15] further improved this process, achieving intelligent and automatic de-interleaving of radar signals. Ata'a [14] used three radar features, including angle-of-arrival (AOA), pulse width (PW) and radio frequency (RF), while Gencol [15] added estimated pulse amplitude (PA) as a fourth information source to assist the clustering process. Notably, Dai and Lei [16] proposed a noise suppression algorithm and realized SOFM-based de-interleaving in noisy environments.
The studies mentioned above all focus on how to use the SOFM algorithm, ignoring the improvement of the algorithm itself. In SOFM, the network topology has a strong impact on clustering performance, making the map size and the network structure important factors affecting SOFM. However, the conventional network has the following problems: the setting of the map size depends on prior information, and the network structure cannot be self-adjusted. Therefore, the conventional network topology cannot meet the needs of modern electronic warfare, and a new network with a self-adaptive topology has become the key to solving these problems.
In order to address the above dilemmas, a new network, presented as SOFM with self-adaptive network topology (SANT-SOFM), is proposed to make the network self-adaptive. The new network [17–19] possesses the following innovations. Firstly, the topology of the SOFM network is no longer fixed, but is dynamically adjusted as the input changes. The network is initialized to a topology with a small map size and gradually optimized by the proposed algorithms. Secondly, an adaptive proliferation algorithm is proposed to adjust the map size, so that the map size no longer depends on prior information but gradually adjusts itself with the input data. Thirdly, structural optimization algorithms (including neuronal elimination, merging and division) are used to optimize the topology of the SOFM network, constructing an optimal SANT. Simulation results show that the proposed algorithm can not only adapt to complex and variable EW environments, but also obtain better clustering effects, thus improving the de-interleaving performance effectively.
SOFM has been used to solve various problems, such as vector quantization [20], clustering [21,22], and classification [23,24]. In this section, we briefly introduce the SOFM-based de-interleaving algorithm.
SOFM consists of an input layer and an output layer, as shown in Fig. 1. The input layer contains N nodes, corresponding to N inputs. The output layer contains M = m1 × m2 neurons, which means the map size is m1 × m2.
The input vectors are mapped from the input layer to the output layer, and SOFM uses a large number of inputs to adjust the neuronal weights, eventually making the neurons sensitive only to specific patterns.
Fig. 1 SOFM network
The SOFM-based de-interleaving algorithm is shown as follows.
Step 1 Parameter settings: X1, X2, ..., XN are the input radar feature vectors, Y1, Y2, ..., YM are the neurons, and the weight of Yi is Wi.
Step 2 Initialize the map size m1 × m2, the weights Wi (i = 1, 2, ..., M), the number of iterations T, the learning coefficient η(t), and the neighborhood radius Ng(t).
Step 3 Normalize the input and weights.
Step 4 Input Xk.
Step 5 Select the winning neuron g by finding the minimum distance dg, i.e., dg = min{||Xk − Wi||, i = 1, 2, ..., M}.
Step 6 Adjust the neuronal weights in g's neighborhood.
Step 7 Normalize the new weights.
Step 8 If k = N, go to Step 9; otherwise, set k = k + 1 and return to Step 4.
Step 9 Update η(t) and Ng(t).
Step 10 Let t = t + 1 and k = 1, and return to Step 4 until t = T.
Step 11 The final weights W1, ..., WM are the clustering centers.
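For concreteness, the following Python/NumPy sketch illustrates Steps 1–11. The Gaussian neighborhood function, the linear decay schedules for η(t) and Ng(t), and all function and parameter names are common choices assumed here for illustration; they are not taken from the paper.

import numpy as np

def train_sofm(X, m1=4, m2=4, T=100, eta0=0.6, radius0=2.0, seed=0):
    """Train an m1 x m2 SOFM on the feature matrix X and return the final
    weights, which serve as the clustering centers (Step 11)."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    X = X / np.linalg.norm(X, axis=1, keepdims=True)           # Step 3: normalize inputs
    W = rng.random((m1 * m2, n))
    W /= np.linalg.norm(W, axis=1, keepdims=True)              # Step 2/3: init and normalize weights
    grid = np.array([(i, j) for i in range(m1) for j in range(m2)], dtype=float)
    for t in range(T):                                         # Step 10: repeat until t = T
        eta = eta0 * (1.0 - t / T)                             # Step 9: update eta(t) (assumed linear decay)
        radius = max(radius0 * (1.0 - t / T), 0.5)             # Step 9: update Ng(t) (assumed linear decay)
        for x in X:                                            # Steps 4-8: sweep over all inputs
            g = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # Step 5: winning neuron g
            grid_dist = np.linalg.norm(grid - grid[g], axis=1)
            h = np.exp(-(grid_dist ** 2) / (2.0 * radius ** 2))
            W += eta * h[:, None] * (x - W)                    # Step 6: update g's neighborhood
            W /= np.linalg.norm(W, axis=1, keepdims=True)      # Step 7: renormalize weights
    return W

# Usage: assign each pulse to its nearest final weight vector (cluster center).
# X = np.column_stack([aoa, rf, pw])      # e.g., AOA, RF, PW features of the pulses
# W = train_sofm(X, m1=4, m2=3)
# Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
# labels = np.argmin(np.linalg.norm(Xn[:, None, :] - W[None], axis=2), axis=1)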
Numerous studies have shown that the network topology of SOFM has a strong impact on the clustering performance, making the map size and the network structure important factors affecting SOFM. However, the conventional network has the following problems.
(i) Initialization of the map size relies on prior information. In conventional SOFM, the map size can only be preset to a fixed value based on prior information. In modern EW environments, it is difficult for the interceptor to obtain sufficient prior information to initialize the map size, which seriously weakens the role of SOFM as an optimal vector quantizer. In order to initialize the map size without prior information, the network topology should be self-adaptive under unsupervised learning conditions.
(ii) The network structure cannot be self-adaptively adjusted. In modern EW environments, the emission signals are complex and variable, resulting in irregular changes in the intercepted pulse trains. The conventional network topology cannot be adaptively adjusted with these changes, which seriously affects the performance of SOFM. Therefore, it is critical to construct a new SOFM network with an adaptive topology.
To solve the above problems, the SANT-SOFM algorithm is proposed. The algorithm realizes dynamic adjustment of the network topology by regulating the neurons. First, the SOFM network is initialized to a topology with a small map size. Second, a proliferation algorithm is proposed to promote the rapid growth of neurons while preserving the data topology, preliminarily determining the map size and the distribution of neurons. Third, the Kohonen algorithm [11] is used to train the network. Finally, structural optimization algorithms are used to optimize the topology of the network.
In order to effectively initialize the map size of the SOFM network, the distribution of neurons should be determined. Therefore, this paper designs a proliferation algorithm to initially determine the topology of the SOFM network. The algorithm first initializes the network to a smaller topology, and then gradually expands the network based on the degree of similarity between the input data and the winning neuron. In order to effectively measure the similarity between the input and the winner, this paper designs a similarity threshold that can be adaptively adjusted using fuzzy operations [19]. If the similarity between the input and the winner is higher than the threshold, the input signal is classified into the winning category and the winning neuron is updated; otherwise, the input signal is regarded as a new signal category, and the network needs to be expanded. Through the proliferation algorithm, the topology of the SOFM network gradually adapts to the input data, thereby effectively determining the map size in the absence of prior information. The proliferation algorithm is divided into the following steps.
Step 1 Initialize the network.
The output layer is initialized to a small network, where the map size is m1 × m2, the neurons are Y1, Y2, ..., YM, and the weight of Yi is Wi.
Step 2 Input data.
Xk = [xk1, xk2, ..., xkn] (k = 1, 2, ..., N) is the input feature vector used to train the network, N is the number of vectors, and n is the dimension.
Step 3 Normalize the input and weights.
Step 4 Select the winner g.
Step 5 Calculate the complement codes.
Since complement-coding can effectively suppress the excessive proliferation of neurons [19], this paper complement-codes the input and the winning neuron before calculating the similarity between the two vectors. The complement code of the normalized input Xk = [xk1, ..., xkn] is Xkc = [xk1, ..., xkn, 1 − xk1, ..., 1 − xkn], and the winning neuron's weight is coded in the same way, so the coded vectors have dimension 2n.
Step 6 Calculate the distance d = (d1, ..., d2n).
Step 7 Calculate the similarity threshold ρ of the winning neuron g.
The threshold ρ is adaptively adjusted as the distance di changes. The closer the input and the winning neuron are, the lower the threshold will be, and vice versa.
Step 8 Judge the similarity according to the similarity criterion, i.e., compare the fuzzy-AND similarity of the complement-coded input and winner with the threshold ρ.
Here "∧" is the fuzzy operator AND, which is an effective method for measuring the similarity between two vectors [19]; its operation rule is the element-wise minimum, (x ∧ y)i = min(xi, yi).
Step 9 If the similarity criterion is satisfied, the weight of the winning neuron g is updated towards the input, where β is the initial learning rate of the SOFM network.
Step 10 If the similarity criterion is not satisfied, a set of neurons is generated. The rules of generation are as follows.
Assume that the neighboring neurons of the winner g are {yi, i = 1, 2, ..., 6}, and the normalized weight of yi is Wi. The similar neighboring neuron l is obtained by selecting the neighbor whose weight is closest to that of the winner g.
As shown in Fig. 2, a new neuron is generated between neurons g and l, and the weight of the new neuron is initialized from the weights of g and l.
The voids created by the generation are filled with neurons, and the weight of each filled neuron i is initialized from the weights of its neighboring neurons.
Fig. 2 Proliferation algorithm
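To make the proliferation decision concrete, the sketch below follows Steps 4–10 for a single input, assuming features scaled to [0, 1], fuzzy-ART-style complement coding, a fixed vigilance-style threshold ρ in place of the adaptive threshold of Step 7, and a simple appended neuron instead of the grid insertion between g and l shown in Fig. 2. All names are illustrative; this is not the paper's exact procedure.

import numpy as np

def complement_code(v):
    """Step 5: complement code [v, 1 - v], doubling the dimension from n to 2n."""
    return np.concatenate([v, 1.0 - v])

def proliferation_step(x, W, rho=0.8, beta=0.5):
    """Classify x into the winning neuron or grow a new one.
    W is an (M, n) array of weights in [0, 1]; returns (W, index of x's neuron)."""
    d = np.linalg.norm(W - x, axis=1)
    g = int(np.argmin(d))                               # Step 4: winning neuron g
    xc, wc = complement_code(x), complement_code(W[g])
    # Step 8: fuzzy AND (element-wise minimum) measures the overlap of the two vectors
    similarity = np.minimum(xc, wc).sum() / xc.sum()
    if similarity >= rho:                               # Step 9: similar enough, update the winner
        W[g] = (1.0 - beta) * W[g] + beta * x           # move the winner towards the input (assumed rule)
        return W, g
    W = np.vstack([W, x])                               # Step 10: grow a new neuron for the new class
    return W, W.shape[0] - 1

# Usage: start from a small (e.g., 2 x 2) network and feed the pulses one by one.
# rng = np.random.default_rng(0)
# W = rng.random((4, 3))                                # 2 x 2 initial map, 3 features
# for x in X:                                           # X: pulse features scaled to [0, 1]
#     W, label = proliferation_step(x, W)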
The map size of the expanded SOFM network has been adapted to the input data, but the network topology is not yet optimal. For example, there are a large number of redundant neurons that are not activated. Therefore, the topology of the network needs to be further optimized. This paper first uses the traditional Kohonen algorithm [11] to train the network so that the input is mapped in an orderly manner. When the training is completed, the algorithms in this section are used to gradually optimize the network topology.
4.2.1 Parameter settings
The parameters used to optimize the network topology are shown as follows.
(i) M is the number of neurons in the current network. The value of M changes as the network changes.
(ii) D_Th is the distance threshold of neighboring neurons, which is used to limit the similarity of neighboring neurons.
Due to the normalization of the input vectors, the similarity criterion ignores the contribution of the vector length to the differences between vectors. In this section, the distance threshold D_Th is introduced as a criterion for network optimization to further distinguish these differences (see the sketch after this parameter list).
A_f is the adjustment coefficient [18], which is used to adjust the distance threshold. A large number of experiments show that when A_f is set to 0.3, the proposed algorithm obtains a good sorting effect. n is the number of Euclidean distances between different vectors.
(iii) C(1)(i) (i = 1, 2, ..., M) is used to record the winning number of the neuron i.
(iv) C(1)_Th is the neuronal extinction threshold. If the winning number of the neuron i is less than the extinction threshold, the signal class represented by the neuron i is considered to have disappeared, and i needs to be eliminated.
(v) C(2)(i) (i = 1, 2, ..., M) is used to record the number of inputs that are classified to the neuron i but differ from i (the distance is greater than D_Th).
(vi) C(2)_Th is the neuronal splitting threshold. If C(2)(i) > C(2)_Th, the neuron i corresponds to multiple signal classes. Therefore, the neuron i should be divided.
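As an illustration of how these parameters might be computed, the sketch below assumes that D_Th is the adjustment coefficient A_f times the mean of the M(M − 1)/2 pairwise Euclidean distances between the current weight vectors; the paper's exact expression for D_Th is not reproduced above, so this form and the helper names are assumptions.

import numpy as np
from itertools import combinations

def distance_threshold(W, A_f=0.3):
    """Assumed D_Th: A_f times the mean pairwise distance between weight vectors."""
    if len(W) < 2:
        return 0.0
    dists = [np.linalg.norm(W[i] - W[j]) for i, j in combinations(range(len(W)), 2)]
    return A_f * float(np.mean(dists))

def count_statistics(X, W, D_Th):
    """C(1)(i): winning count of neuron i; C(2)(i): inputs won by i but farther than D_Th."""
    M = len(W)
    C1 = np.zeros(M, dtype=int)
    C2 = np.zeros(M, dtype=int)
    for x in X:
        d = np.linalg.norm(W - x, axis=1)
        g = int(np.argmin(d))
        C1[g] += 1                     # neuron g wins this input
        if d[g] > D_Th:
            C2[g] += 1                 # the input is assigned to g but differs from it
    return C1, C2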
4.2.2 Neuronal elimination
If the winning number of the neuron i is less than C(1)_Th, i.e., it satisfies C(1)(i) < C(1)_Th,
it means that the neuron i cannot represent any of the current signal classes. In other words, the signal class represented by the neuron i has disappeared. Therefore, the neuron i should be eliminated to achieve the forgetting function, as shown in Fig. 3.
Fig. 3 Neuronal elimination
4.2.3 Neuronal merging
If the distance between neighboring neurons i and k is less than the distance threshold, i.e., it satisfies ||Wi − Wk|| < D_Th,
it means that the two neurons represent the same signal class, and neurons i and k should be merged. The rule of merging is to eliminate neurons i and k and generate a new neuron at the neuron i's position, as shown in Fig. 4. The weight of the new neuron is obtained from the weights of the two merged neurons.
Fig. 4 Neuronal merging
4.2.4 Neuronal division
If the data in the class i, represented by the neuron i, differ greatly, i.e., they satisfy C(2)(i) > C(2)_Th,
it means that the class i contains multiple signal classes due to excessive merging. Therefore, a new neuron should be split from the neuron i to represent the new signal class.
The division rule is to find the neuron i's similar neighboring neuron l (the method is the same as that used to find l in Step 10 of the proliferation algorithm) and to grow a new neuron M + 1 between i and l, as shown in Fig. 5. The weight of the new neuron is initialized from the weights of i and l.
Fig. 5 Neuronal division
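The three optimization rules of 4.2.2–4.2.4 can be sketched together as follows. The sketch operates on a flat list of neuron weights rather than the two-dimensional grid of Figs. 3–5, performs a single merge per pass, and uses simple averaging and midpoint initialization where the paper's weight equations are not reproduced above; these choices and all names are assumptions made for illustration.

import numpy as np

def optimize_topology(W, C1, C2, D_Th, C1_Th=5, C2_Th=50):
    """One optimization pass: elimination, merging, then division."""
    W = [np.asarray(w, dtype=float).copy() for w in W]
    C1, C2 = list(C1), list(C2)

    # 4.2.2 Neuronal elimination: drop neurons whose winning count is below C(1)_Th.
    keep = [i for i in range(len(W)) if C1[i] >= C1_Th]
    W = [W[i] for i in keep]
    C1 = [C1[i] for i in keep]
    C2 = [C2[i] for i in keep]

    # 4.2.3 Neuronal merging: merge the first pair of neighbors closer than D_Th
    # (assumed rule: the merged weight is the mean of the two weights).
    merged = False
    for i in range(len(W)):
        for k in range(i + 1, len(W)):
            if np.linalg.norm(W[i] - W[k]) < D_Th:
                W[i] = 0.5 * (W[i] + W[k])
                del W[k], C1[k], C2[k]
                merged = True
                break
        if merged:
            break

    # 4.2.4 Neuronal division: split a neuron whose far-input count exceeds C(2)_Th
    # (assumed rule: the new neuron starts halfway towards the most similar neighbor l).
    for i in range(len(W)):
        if C2[i] > C2_Th and len(W) > 1:
            dists = [np.linalg.norm(W[i] - W[l]) if l != i else np.inf
                     for l in range(len(W))]
            l = int(np.argmin(dists))
            W.append(0.5 * (W[i] + W[l]))
    return np.array(W)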
In the following section, a comparative study of SOFM [15] and SANT-SOFM is carried out to evaluate the radar pulse de-interleaving performance of the two algorithms. The performance evaluation involves the network topology, the number of sample hits, and the clustering result. Each experiment in the comparative study is repeated 100 times under the condition of random input orders, and the result of each experiment is the mean of the repetitions.
In these experiments, the learning coefficient is initialized to 0.6 and the number of iterations is set to 100. For the other parameters, grid searches are used for parameter tuning. The interval [0, 3] with a step size of 1 is used to search for the optimal initial neighborhood radius. The two-dimensional grid search [15] based on principal component analysis (PCA) is used to initialize the SOFM network.
For SANT-SOFM, the initial map size is 2 × 2. C(1)_Th and C(2)_Th are set to 5 and 50, respectively.
The parameters of the experimental data and the distribution of the normalized radar features are shown in Table 1 and Fig. 6, respectively.
Table 1 Signal parameters
Fig. 6 Distribution of radar features
In order to simulate the dynamic and variable emission signals in complex EW environments, the parameters of the radar models constructed in this paper are all dynamically changed, as shown in Table 1. Signals 1–4 are used to simulate the four sub-signals of multi-mode and multi-parameter radars, and signals 5–7 are used to simulate single-mode radars. It is worth pointing out that the parameters of these radar models have certain similarities. This similarity of the parameters is advantageous for accurately evaluating the clustering performance of the two algorithms.
In order to investigate the performance of the SANT-SOFM algorithm, the simulations in this paper are implemented in both noisy and noise-free environments. In Section 5.2, simulations are performed in noise-free environments; for the noisy-environment simulations, 15% Gaussian noise signals are added to the experimental data.
Simulation results of the comparative study are shown in Fig. 7 to Fig. 10. Fig. 7(a) and Fig. 7(b) show the topological maps of SOFM with map sizes of 4 × 4 and 4 × 3, respectively. The activated neurons and the hit number of each neuron are marked in the topology. Fig. 8(a) shows the topological map that is expanded from a 2 × 2 initial network using the proliferation algorithm. The topology in Fig. 8(a) is optimized using the optimization algorithms proposed in this paper, and the result of the optimization is shown in Fig. 8(b). The comparative study of the two algorithms is shown in Fig. 9 to compare clustering effects, and the clustering results of SANT-SOFM in noisy environments are shown in Fig. 10.
Fig. 7 Topology of SOFM and its hit numbers
Fig. 7 shows that the network topology of SOFM has a significant impact on the mapping results. When the map size is set to 4 × 4, as shown in Fig. 7(a), 11 neurons in the network are activated while only seven signal sources exist; when the map size is 4 × 3, as shown in Fig. 7(b), the number of activated neurons is reduced to eight. These results illustrate that the network topology is an important factor and that different map sizes bring different clustering effects.
To find the best topology, the conventional SOFM relies on prior information to initialize the map size of the network. However, in most cases, it is very difficult to obtain sufficient prior information for choosing an optimal map size. Therefore, this paper designs a network with an adaptive topology and achieves automatic acquisition of the optimal map size.
Fig. 8 Topology of SANT-SOFM and its hit numbers
The new topology is first initialized to a small network and then expanded by the proposed proliferation algorithm. After the proliferation, as shown in Fig. 8(a), the map size is gradually adapted to the input data, and the activated neurons are distributed along the outer edge of the topological map. The more times a neuron is hit by the samples, the more active it is.
Based on the neuronal activity, the neuronal optimization algorithm is introduced to further adjust the network topology, and the result of the optimization is shown in Fig. 8(b). As shown in Fig. 8(b), the number of activated neurons stabilizes at seven, which is equal to the number of signal sources, thus obtaining the dynamically optimal topology.
Fig. 9 Clustering results
The comparative study of SOFM and SANT-SOFM is shown in Fig. 9 to compare the clustering effects of the different algorithms. Due to the high similarity and the complex variability of the signal sources used in this article, the clustering performance of the two algorithms is significantly disturbed, as shown in Fig. 9.
SOFM is seriously affected by the disturbance. When the map size is 4 × 4, as shown in Fig. 9(a), the seven signal sources are classified into 11 categories, and the classified categories overlap significantly. When the map size is adjusted to 4 × 3, as shown in Fig. 9(b), the number of clusters drops to eight, but the category overlapping problem becomes even more serious. The results illustrate that the traditional SOFM suffers a significant drop in clustering performance when sorting complex and variable signal sources. Since the network topology of SOFM is fixed, these signals are easily misclassified regardless of the adjustment of the initial map size, causing the classified categories to overlap each other.
As for SANT-SOFM, its topology can be self-adaptively adjusted with the structure of the input data, making the input data gradually approach the clustering centers during the mapping process. Therefore, SANT-SOFM effectively alleviates the overlapping problem, as shown in Fig. 9(c), resulting in better clustering performance.
Fig. 10 Performance in noisy environments
In order to evaluate the performance of the proposed algorithm in different environments, the simulations in Fig. 10 are performed in Gaussian noise environments. As shown in Fig. 10(a), the clustering quality of SANT-SOFM is significantly reduced in noisy environments. Since the noise signals change the structure of the input data, the topology of the network cannot be optimized to the optimal state, resulting in signal misclassification. Therefore, it is very important to use de-noising methods to remove the noise before the clustering process. In this paper, the algorithm in [16] is used to remove noise signals, and the de-noised clustering results are shown in Fig. 10(b). As shown in Fig. 10(b), most of the noise has been removed and the remaining noise is classified as a new category. The clustering performance of the proposed algorithm is significantly improved due to the noise removal. Simulation results show that the SANT-SOFM algorithm has excellent performance in both noisy and noise-free environments.
The de-interleaving performance has been analyzed above; we now further analyze the computational complexity. The complexity is derived from network initialization (CI), neuronal proliferation (CP), network optimization (CO) and Kohonen training (N × CK), where the symbols in parentheses represent the calculation amount of each part. The resulting computational complexity of the two algorithms is shown in Table 2.
Table 2 Computational complexity
Among these four parts, Kohonen training is the one that affects the complexity most. The computation amount of Kohonen training is related to the number N of activated neurons: the larger the value of N, the higher the calculation amount.
The number of activated neurons in SOFM, N1, is greater than the number of signal sources, while in SANT-SOFM, the number N2 is approximately equal to the number of signal sources. Therefore, in Kohonen training, the computational complexity of SOFM is higher than that of SANT-SOFM. However, adaptive adjustment of the topology has a certain cost, and both proliferation and optimization consume a certain amount of computing resources. Therefore, compared to SOFM, the computational complexity of SANT-SOFM does not change significantly.
In view of the above advantages, SANT-SOFM has great advantages in clustering and vector quantization, which makes it suitable for de-interleaving similar signals. With the development of modern EW, emission signals have become complex and variable, making the signals intercepted by EW systems interleave into complex pulse trains. To separate the pulse trains effectively, an advanced de-interleaving algorithm is needed, and SANT-SOFM has a great advantage in this respect. Therefore, research on SANT-SOFM is very promising.
This work presents the idea of using a self-adaptive network to replace the traditional output layer, showcased in SANT-SOFM, which solves the map-size initialization dilemma and the structural adjustment problem. This is accomplished by using neuronal proliferation and adjustment algorithms to optimize the network topology. Simulation results show that the proposed algorithm improves the de-interleaving performance and addresses the dilemmas in SOFM-based de-interleaving systems.