Peng Tang, Yitao Xu, Guofeng Wei, Yang Yang, Chao Yue
College of Communications Engineering, Army Engineering University of PLA, Nanjing 210007, China
Abstract: Specific emitter identification distinguishes individual transmitters by analyzing received signals and extracting the inherent features of their hardware circuits. Feature extraction is a key part of traditional machine learning-based methods, but manual extraction is generally limited by prior professional knowledge. At the same time, it has been noted that the performance of most specific emitter identification methods degrades in low signal-to-noise ratio (SNR) environments. In this paper, the deep residual shrinkage network (DRSN) is proposed for specific emitter identification, particularly at low SNRs. The soft threshold can preserve more key features to improve performance, and an identity shortcut can speed up the training process. We collect signals with a receiver to create a dataset in an actual environment. The DRSN is trained to automatically extract features and classify transmitters. Experimental results show that the DRSN obtains the best accuracy under different SNRs and has less running time, which demonstrates its effectiveness in identifying specific emitters.
Keywords: specific emitter identification; IoT devices; deep learning; soft threshold; deep residual shrinkage networks
The Internet of Things (IoT) connects ubiquitous terminal devices to make it easy to access and interact with them [1]. It has been widely used in everyday life and industrial production. With the development of smart terminal devices, 5G, unmanned aerial vehicle (UAV) communication, and IoT, the radio environment has become increasingly complex [2, 3]. The rapid growth of mobile communication services and cognitive radio has significantly increased the demand for radio frequency spectrum [4–6]. Efficient utilization of spectrum resources is also an important topic for future 6G communications [7, 8]. The shortage of spectrum resources has become one of the major challenges of radio. Dynamic spectrum sharing (DSS) is one of the effective methods to solve this problem, and it has been widely adopted in various commercial wireless standards [9, 10]. But within an open spectrum sharing system, legitimate users are subject to more attacks than ever before. Malicious users may send out interference signals or eavesdrop on wireless communication systems. Such illegal uses of spectrum not only impact wireless network performance, but may also mislead legitimate users and worsen the spectrum environment [11, 12]. Therefore, high security is essential to wireless communication systems. However, the traditional key-based identity authentication mechanism uses cryptographic algorithms to generate digital results, and there is a risk of key leakage, which is not enough to ensure the security of wireless communications [13]. These issues have inspired research on combining physical layer security with traditional authentication and encryption mechanisms to further improve the security of wireless networks and spectrum sharing systems [14, 15].
1.2.1 Specific Emitter Identification
Specific emitter identification (SEI) compares fingerprint information with a feature library to determine the specific individual that emitted a given signal. The information that can reflect the identity of targets is referred to as the emitter fingerprint. Emitter fingerprints are inherent features of transmitter hardware, arising mainly from imperfections introduced during the manufacturing process [16]. Emitters from the same manufacturer and batch still have subtle differences, which makes it possible to characterize individual transmitters by their signals [17, 18]. Talbot et al. proposed a typical system for specific emitter identification [19], as shown in Figure 1. Firstly, the signal is collected through the radio frequency (RF) receiving subsystem; then the received signals undergo a variety of preprocessing steps, such as filtering, denoising, and pulse detection. Next, fingerprint feature extraction is performed to obtain the subtle features containing the individual information of the transmitters. Finally, the extracted features are compared with the database, and a classification algorithm is used to determine and identify the specific emitter of the signal.
Figure 1. Structural diagram of a typical system for SEI.
Based on the collected signal used for identifying emitters, SEI can be divided into two categories [20]: 1) SEI methods based on transient signals, which usually occur when the emitter switches its working mode. Transient signals have good resolution, but their short duration makes them easy to submerge in noise, so a receiver with a very high sampling rate is required, which greatly increases the experimental cost. 2) SEI methods based on steady-state signals, which are generally collected when the emitter is in a stable working state. Compared with transient signals, steady-state signals have a long duration and are easy to obtain, which improves the practicability of SEI. The second class is the main research direction of SEI at present. In addition, feature extraction and classifier design are important parts of SEI. According to the different feature extraction methods and classifiers, SEI methods can be categorized into methods based on traditional machine learning (ML) and deep learning (DL) [21]. Next, the two kinds of SEI methods are introduced, with a focus on DL-based SEI.
Figure 2. Traditional ML-Based SEI method with training and identification.
1.2.2 Traditional ML-Based SEI Method
As shown in Figure 2, in the traditional ML-based SEI method, features are usually extracted manually from the pre-processed signal in the time domain or a transform domain. ML-based classifiers then identify specific emitters from the extracted features. Common machine learning classifiers include the k-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), and so on. Kennedy et al. used transient signals for spectrum feature selection, and KNN was applied to identify 8 transmitters; when the SNR is 10 dB, the accuracy of the model is 97% [22]. Zhang et al. proposed the EM2 algorithm, exploiting the energy entropy and the first- and second-order color moments of the Hilbert spectrum as identification features. In addition, SVM was applied for identification, with good performance in single-hop and relay scenarios [23]. ML-based SEI methods require researchers to understand the types and features of emitter signals and can be summarized as feature engineering. They require data cleaning, manual feature extraction, and advanced detection equipment, and the initial workload of model construction is huge. Moreover, due to the increasing precision of electronic components and the increasingly complex electromagnetic environment, manually designed features are usually insufficient to distinguish transmitters. Any change in the signal may require redesigning and re-extracting the features, reducing recognition efficiency and accuracy. Therefore, finding distinguishing features has become a long-term challenge for ML-based SEI methods [24].
1.2.3 DL-Based SEI Method
DL can automatically extract more comprehensive features from input data, and in recent years it has been widely used in speech recognition [25], image recognition [26], modulation recognition [27], fault classification [28], financial services [29, 30], path planning [31], and other fields. The DL-based SEI method is shown in Figure 3. After signal pre-processing and conversion, DL models automatically extract features from the input signals, and DL-based classifiers are designed to identify specific emitters. Common DL models include the Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), and so on. According to the input signals of the neural network, we divide DL-based SEI methods into: 1) DL-based SEI methods in the time domain; 2) DL-based SEI methods in the transform domain.
Figure 3. DL-based SEI method with training and identification.
1) DL-based SEI methods in the time domain: This kind of SEI method inputs the time-domain baseband signals directly into the DL model. Riyaz et al. proposed a CNN-based radio frequency fingerprint (RFF) identification method using I/Q signals, and the results showed that CNN is superior to other ML methods [32]. Hossein et al. used real data collected from USRP and ZigBee devices to compare the RFF identification capabilities of the Deep Neural Network (DNN), CNN, and Long Short-Term Memory (LSTM) at multiple SNR levels [33]. Merchant et al. input baseband signals into a one-dimensional CNN (CNN1D) to identify ZigBee devices [34]. Yu et al. proposed a multisampling CNN (MSCNN), extracting RFFs from selected regions to identify 54 target ZigBee devices, with accuracy exceeding 97% at high SNRs [35]. Wang et al. proposed an efficient SEI method based on a complex-valued neural network (CVNN) and network compression, which can not only process complex baseband signals directly for better performance, but also reduce the complexity and size of the model [36].
2) DL-based SEI methods in the transform domain: This kind of SEI method usually needs to transform the baseband signal by spectrum analysis, signal decomposition, etc., and the transformed signals are then fed into the DL model. Lin et al. converted the complex-valued signal waveform into a contour stellar image (CSI), which can convey deep statistical information of the raw wireless signal waveform, and used GAN-based data augmentation to improve the identification performance [37]. Li et al. converted I/Q signals into differential contour stellar (DCS) images; compared with CSI, DCS is more robust as an RFF [38]. Peng et al. proposed a CNN based on the differential constellation trace figure (DCTF) to classify 54 target ZigBee devices, in which the baseband signal was converted to a DCTF. Simulation results showed that when the SNR is 30 dB, the accuracy exceeds 99% [39, 40]. Pan et al. performed the Hilbert-Huang transform on the received signals and converted the Hilbert spectrum into grayscale images, which were fed into a deep residual network (ResNet) for recognition [41]. The ResNet-based method can effectively solve the model degradation problem of deep neural networks.
The emitter signals collected from actual environments usually contain a lot of noise. When processing noisy signals, the learning ability of the neural network usually decreases, and the convolution kernels may detect fewer features due to the interference of noise [42]. It has been noted that the performance of most SEI methods degrades in low SNR environments. The DRSN [43] enables each sample to have a unique threshold for noise through the soft shrinkage sub-network, which helps achieve higher classification accuracy at low SNRs. Motivated by this, an SEI method based on the DRSN is proposed in this paper. It can reduce the influence of noise and highlight the features of emitters. The main contributions of this paper are summarized as follows:
• We propose an SEI method based on the DRSN with improved accuracy and classification performance on actually collected signals. The learnable soft threshold can preserve more signal features at low SNRs, thus achieving higher emitter identification accuracy. At the same time, an identity shortcut in the DRSN is used to skip convolutional layers, which not only preserves key features but also speeds up the training process, thereby reducing the number of epochs.
• Eight ZigBee devices are used as experimental targets, and their I/Q signals are collected in the laboratory to create a dataset. The model can be trained directly on I/Q data, and its effectiveness is verified using the ZigBee signals.
• The DRSN and other related models are compared under different SNRs. We evaluate the performance of the DRSN model, and the results show that the proposed model has obvious performance advantages at low SNRs and still outperforms other models at high SNRs.
The rest of this paper is organized as follows. Section II gives a detailed description of the data collection and preprocessing process. In Section III, we elaborate on the related theory of the DRSN and propose the DRSN-based SEI method. Section IV contains experimental settings, results, and discussions. Finally, the paper is concluded in Section V.
As shown in Figure 4, the data collection system includes a receiver, a computing platform, and 8 target ZigBee devices. ZigBee uses IEEE 802.15.4 as its physical layer standard, and its transmitted signal is spread-spectrum modulated. With the advantages of low power consumption, low cost, and self-organization, it is widely used in wireless communication networks such as IoT and industrial monitoring systems. As shown in Figure 4(a), this paper uses CC2530 ZigBee devices operating in the 2.4 GHz band as the signal acquisition targets. The Signal Hound BB60C real-time spectrum analyzer and radio frequency recorder (abbreviated as BB60C) can directly collect and store signal I/Q data. As shown in Figure 4(b), the PC and the BB60C are connected via USB 3.0, which provides power and data storage space for the BB60C; together they constitute the signal receiving terminal. The relevant working parameters are set in the Spike spectrum analysis software supporting the BB60C. The working center frequency is 2.405 GHz, the RF bandwidth is 10 MHz, and the sampling rate is 40 Msample/s. The acquisition time for each ZigBee device is 15 seconds. A total of 8 devices from the same manufacturer and batch have been collected, and the size of the raw I/Q data is 12 GB.
Figure 4. (a) Target ZigBee devices; (b) BB60C and PC.
The real signal from the receiver can be expressed as:
$$r(n) = s(n) + v(n),$$
where $r(n)$ is the received signal, $s(n)$ is the transmitted signal, and $v(n)$ is noise. The BB60C converts the analog signal into digital I/Q samples through sampling. The collected I/Q signals are converted into the time-domain signal:
$$r(n) = r_I(n) + j\, r_Q(n),$$
where $r_I(n)$ and $r_Q(n)$ represent the I and Q data of the received signal, respectively.
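As a small illustration of this signal model, the sketch below (NumPy; the file name and the interleaved float32 layout are assumptions about the recording format, not details from the paper) assembles the complex time-domain sequence from the stored I and Q streams:

```python
import numpy as np

# Hypothetical layout: I/Q samples exported as interleaved float32 pairs.
# Adjust the dtype/layout to match the actual Spike/BB60C export format.
raw = np.fromfile("zigbee_device1.iq", dtype=np.float32)
r_i, r_q = raw[0::2], raw[1::2]   # de-interleave the I and Q streams

# r(n) = r_I(n) + j * r_Q(n): complex time-domain signal used for preprocessing
r = r_i + 1j * r_q
print(r.shape, r.dtype)
```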
ZigBee networks usually work in non-beacon mode. In this mode, the slave nodes periodically confirm whether they are in the network and transmit data, and most of the time they stay dormant to achieve low power consumption. A section of the collected ZigBee signal is shown in Figure 5. ZigBee devices are dormant most of the time, and there is a lot of noise in the original signal. We used MATLAB 2020b to simulate AWGN channels with SNRs in the range of {-5, 0, 5, 10, 15, 20, 25} dB. The noise-only segments are discarded and the effective data transmission part (all steady-state signals) is retained. The effective transmission segments are then sliced, with 2048 points used for each new sample.
Figure 5. ZigBee time-domain signal.
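The authors perform the AWGN simulation and slicing in MATLAB 2020b; purely as an illustrative sketch of the same preprocessing (the steady-state extraction is assumed to have been done already, and the function names are hypothetical), an equivalent NumPy version could look like:

```python
import numpy as np

def add_awgn(x, snr_db):
    """Add complex AWGN so that the resulting SNR (in dB) matches snr_db."""
    p_signal = np.mean(np.abs(x) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    noise = np.sqrt(p_noise / 2.0) * (np.random.randn(*x.shape) + 1j * np.random.randn(*x.shape))
    return x + noise

def slice_samples(x, length=2048):
    """Cut a steady-state transmission segment into non-overlapping 2048-point samples."""
    n = len(x) // length
    return x[: n * length].reshape(n, length)

# Example usage: `burst` is one extracted steady-state transmission (complex ndarray)
# samples_10db = slice_samples(add_awgn(burst, snr_db=10))
```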
After the raw data is normalized, all values lie within the same order of magnitude, which is suitable for comprehensive comparative evaluation. Normalization can also improve the accuracy of the neural network and help improve the learning efficiency and convergence speed of the model. The data is scaled to [0, 1] by linear (min-max) normalization:
$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}},$$
where $x_{\min}$ and $x_{\max}$ are the minimum and maximum values of the sample.
Figure 6. ZigBee time-domain sample after normalization.
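A minimal sketch of this min-max scaling (applied here per sample; whether the normalization is done per sample or over the whole dataset is not stated in the paper):

```python
import numpy as np

def minmax_normalize(x):
    """Linearly scale the data to [0, 1]: x' = (x - min) / (max - min)."""
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min + 1e-12)  # epsilon guards against a zero range
```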
ResNet is composed of a series of residual building units (RBUs). As shown in Figure 7, an RBU introduces an identity shortcut, which enables the information of the previous RBU to be passed directly to the next RBU. This allows ResNet to avoid the gradient vanishing and network degradation problems caused by increasing network depth [26]. The RBU can be expressed as:
Figure 7. Residual building unit (RBU).
$$x_{l+1} = x_l + F(x_l, W_l),$$
where $x_l$ is the identity shortcut part and $F(x_l, W_l)$ is the residual part. $F(x_l, W_l)$ is the basic component of the ResNet, consisting of the batch normalization layer (BN), rectified linear unit layer (ReLU), and convolutional layer (Conv).
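For concreteness, here is a sketch of one RBU in Keras using 1D convolutions on the time-domain samples; the pre-activation BN-ReLU-Conv ordering, kernel size, and filter count are assumptions rather than the paper's exact configuration, and the shortcut is assumed to already match the number of output channels:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_building_unit(x, filters, kernel_size=3):
    """One RBU: identity shortcut plus a residual path F(x_l, W_l) of BN-ReLU-Conv blocks."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    return layers.add([shortcut, y])   # x_{l+1} = x_l + F(x_l, W_l)
```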
BN can make the intermediate outputs of each layer of the neural network more stable, resulting in a smoother optimization landscape. It can improve network optimization efficiency and generalization capability. First, the mean $\mu$ and variance $\sigma^2$ of the mini-batch data are calculated:
$$\mu = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_i - \mu\right)^2.$$
Then, the standardized data are obtained as
$$\hat{x}_i = \frac{x_i - \mu}{\sqrt{\sigma^2 + \varepsilon}},$$
where $m$ is the size of the mini-batch, $x_i$ is the $i$-th input of the mini-batch, and $\varepsilon$ is a very small positive constant that keeps the denominator greater than zero. On the basis of standardization, a learnable scale parameter $\gamma$ and shift parameter $\beta$ are introduced to obtain the batch-normalized output $y_i$:
$$y_i = \gamma \hat{x}_i + \beta.$$
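A tiny NumPy sketch of this training-time computation (inference uses running statistics instead, which is omitted here):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Mini-batch normalization over axis 0, following the equations above."""
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized data
    return gamma * x_hat + beta            # learnable scale and shift
```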
As an activation function, ReLU performs a nonlinear transformation, which enhances the representation and learning capabilities of the network. ReLU is expressed as follows:
$$y = \max(0, x),$$
where $x$ and $y$ are the input and output of ReLU, respectively.
The convolution kernels in the Conv layer convolve multiple input feature maps, and an output feature map is then obtained through a nonlinear activation function:
$$y_j = f\Big(\sum_{i \in M_j} x_i * k_{ij} + b_j\Big),$$
where $x_i$ is the $i$-th channel of the input feature maps, $y_j$ is the $j$-th channel of the output feature maps, $M_j$ is the set of input feature maps, $k_{ij}$ is the convolution kernel, $b_j$ is the bias parameter, and $f(\cdot)$ is a nonlinear activation function, for which ReLU is generally used.
The collected emitter signals usually contain noise or redundant information, which may adversely affect the recognition task. Soft thresholding is a core step in many noise reduction algorithms. It removes features whose absolute value is less than the threshold and shrinks features whose absolute value is greater than the threshold toward zero. The soft threshold function is as follows:
$$y = \begin{cases} x - \lambda, & x > \lambda \\ 0, & -\lambda \le x \le \lambda \\ x + \lambda, & x < -\lambda, \end{cases}$$
where $x$ and $y$ are the input and output, respectively, and $\lambda$ represents the threshold. $\lambda$ must be positive and cannot be greater than the maximum value of the input $x$. Additionally, the amount of noise or redundancy typically differs from sample to sample [44]. To address this, we use the deep residual shrinkage network, which adds residual shrinkage units (RSUs) to the ResNet.
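The piecewise soft threshold above is equivalent to the compact sign-magnitude form used in the short sketch below:

```python
import numpy as np

def soft_threshold(x, lam):
    """y = sign(x) * max(|x| - lam, 0): zeros features with |x| <= lam, shrinks the rest toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```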
As shown in Figure 8, the RSU consists of two branches. One branch is connected to the global average pooling layer (GAP) to obtain the global mean value of the features. The other is composed of a fully connected layer (FC), ReLU, FC, and sigmoid in turn. The sigmoid function restricts the scaling coefficient to the range (0, 1) and is expressed as:
Figure 8. Residual shrinkage unit (RSU).
$$\alpha_i = \frac{1}{1 + e^{-z_i}},$$
where $z_i$ is the feature of the $i$-th neuron in the second FC layer, and $\alpha_i$ is the $i$-th scaling parameter. After that, the thresholds are calculated by
$$\lambda_i = \alpha_i \cdot \underset{m,n}{\operatorname{average}} \left| x_{m,n,i} \right|,$$
where $\lambda_i$ is the threshold for the $i$-th channel of the feature map, and $m$, $n$, and $i$ are the indexes of the width, height, and channel of the feature map $x$, respectively. The threshold learned by the sub-network in the red dashed box is the product of the two branch outputs, so that each sample has its own threshold.
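Putting the residual path, the threshold sub-network, and the soft thresholding together, a sketch of one RSU in Keras could look like the following; the channel counts, kernel size, and the assumption that the shortcut already matches the number of filters are illustrative choices, not the paper's exact design:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_shrinkage_unit(x, filters, kernel_size=3):
    """RSU sketch: BN-ReLU-Conv residual path, a GAP/FC/sigmoid sub-network that learns
    channel-wise thresholds, soft thresholding, and an identity shortcut."""
    shortcut = x
    y = layers.BatchNormalization()(x)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv1D(filters, kernel_size, padding="same")(y)

    # Threshold sub-network: alpha_i from GAP -> FC -> ReLU -> FC -> sigmoid,
    # then lambda_i = alpha_i * average |x_{m,n,i}| per channel.
    abs_mean = layers.GlobalAveragePooling1D()(layers.Lambda(tf.abs)(y))
    alpha = layers.Dense(filters, activation="relu")(abs_mean)
    alpha = layers.Dense(filters, activation="sigmoid")(alpha)
    tau = layers.Reshape((1, filters))(layers.multiply([alpha, abs_mean]))

    # Soft thresholding: y = sign(y) * max(|y| - lambda, 0)
    y = layers.Lambda(lambda t: tf.sign(t[0]) * tf.maximum(tf.abs(t[0]) - t[1], 0.0))([y, tau])
    return layers.add([shortcut, y])
```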
Since the performance of most SEI models degrades in low SNR environments, we propose an SEI method based on the DRSN. The overall process of the method is shown in Figure 9, and it mainly includes three parts: (1) data collection; (2) data preprocessing; (3) DRSN model training and evaluation. Specifically, a receiver is used to collect the signals from the transmitters. The collected signals are made into a dataset through signal analysis and preprocessing. Then we build a DRSN and initialize the network parameters. The optimal model is obtained by continuously optimizing the network on the training data. Finally, identification of specific emitters and evaluation of the models are carried out on the test set using the classification results.
Figure 9. The process of the proposed DRSN-based SEI method and the structure of the DRSN.
As introduced above, we train a DRSN model on the signals to recognize specific emitters. As shown in Figure 9, the DRSN is composed of typical components such as Conv, RSU, BN, ReLU, GAP, and FC layers. Conv is applied to extract signal features. Zero padding is used in the RSU to ensure that the features from different convolution kernels have the same size. BN performs feature normalization. GAP reduces the number of weights in the FC layer and reduces the possibility of overfitting.
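As an illustration of this structure, an end-to-end DRSN classifier could be assembled as follows; this reuses the `residual_shrinkage_unit` sketch above, the filter count and single-channel 1×2048 input shape are assumptions, and only the 0.0001 L2 coefficient and the use of two RSUs come from the paper's experimental settings:

```python
from tensorflow.keras import layers, models, regularizers

def build_drsn(input_length=2048, channels=1, num_classes=8, filters=16, num_rsu=2):
    """Sketch of the DRSN pipeline: Conv -> stacked RSUs -> BN -> ReLU -> GAP -> softmax FC."""
    inputs = layers.Input(shape=(input_length, channels))
    x = layers.Conv1D(filters, 3, padding="same",
                      kernel_regularizer=regularizers.l2(1e-4))(inputs)  # 0.0001 L2 penalty
    for _ in range(num_rsu):                      # two RSUs are used in the experiments
        x = residual_shrinkage_unit(x, filters)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inputs, outputs, name="drsn_sei")
```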
Based on the above process and the DRSN framework, we use a gradient descent algorithm to minimize the cross-entropy error and train the DRSN model over multiple iterations. The detailed DRSN algorithm is presented in Algorithm 1.
In this section, the performance of our proposed DRSN model is analyzed on the ZigBee dataset. The experimental settings and implementation platform are described, and the DRSN is compared with other models and evaluated across different numbers of devices and SNRs. Details are given below.
Division of dataset: For each SNR, every device has 2000 samples, and the training, validation, and test sets are divided according to the ratio 6:2:2.
Optimizer: The mini-batch gradient descent method overcomes the slow training of batch gradient descent and the low accuracy of stochastic gradient descent. We choose Adam as the optimizer, where batch_size is set to 128 and the preset number of training epochs is 200.
Algorithm 1. The proposed DRSN algorithm for SEI.
Input: Time-domain signal samples {X} with the size of 1×2048 and labels {Y}.
Output: The predicted label for each sample.
1: Select time-domain signal samples in the ZigBee dataset and mix them randomly.
2: Divide the dataset into training, validation, and testing sets with the ratio 6:2:2, then feed it into the DRSN for training.
3: Compute the output of a convolutional layer: $y_j = f\big(\sum_{i \in M_j} x_i * k_{ij} + b_j\big)$.
4: Compute the output of the RSU by repeating the following updates: $\alpha_i = \frac{1}{1+e^{-z_i}}$, $\lambda_i = \alpha_i \cdot \underset{m,n}{\operatorname{average}}|x_{m,n,i}|$.
5: Compute the output of BN by repeating the following updates: $\mu = \frac{1}{m}\sum_{i=1}^{m} x_i$, $\sigma^2 = \frac{1}{m}\sum_{i=1}^{m}(x_i-\mu)^2$, $\hat{x}_i = \frac{x_i-\mu}{\sqrt{\sigma^2+\varepsilon}}$, $y_i = \gamma\hat{x}_i + \beta$.
6: Update the output of the FC layer to compute the categorical cross-entropy error.
7: Return the predicted label for each sample.
Learning rate: If the initial learning rate is set too small, many iterations are needed for the model to reach the optimal state and training is slow. If the learning rate is fixed, after multiple iterations the performance of the model no longer improves and the learning rate may no longer suit the model. Therefore, a learning-rate decay strategy is used during training in order to obtain the optimal model quickly and accurately. The initial learning rate is set to 0.001, and the callback function ReduceLROnPlateau is used to decay the learning rate. Moreover, the callback function EarlyStopping is applied to prevent the model from overfitting.
L2 regularization: L2 regularization adds a penalty term to the loss function, heavily penalizing weight vectors with large values and favoring more diffuse weight vectors. The coefficient of the penalty term is set to 0.0001, the same as in the classic ResNets.
Loss function: The loss function measures how close the predicted value is to the true value, back-propagates the error through the network model, and guides the update of the network parameters. We use categorical cross-entropy as the loss function, which reflects the distance between the true and predicted outputs. The smaller the categorical cross-entropy, the closer the predicted value is to the true value and the smaller the error.
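Tying the optimizer, learning-rate schedule, regularization, and loss described above together, a hedged Keras training sketch follows; the callback patience values and the ReduceLROnPlateau decay factor are assumptions, since the text only fixes the initial rate, batch size, epoch budget, and loss:

```python
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.optimizers import Adam

model = build_drsn()   # from the earlier sketch
model.compile(optimizer=Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=5),               # learning-rate decay
    EarlyStopping(monitor="val_loss", patience=20, restore_best_weights=True),   # anti-overfitting
]

# x_train / y_train etc. are the 6:2:2 splits of the normalized ZigBee dataset
# history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
#                     batch_size=128, epochs=200, callbacks=callbacks)
```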
The workstation used in our experiments contains one Intel(R) Core(TM) i9-10980XE central processing unit (CPU) and one NVIDIA GeForce RTX 2080 Ti graphics processing unit (GPU), with 64 GB of random access memory (RAM). The operating system is Ubuntu 18.04.5 (Linux), with CUDA 10.1, cuDNN 7.6.5, TensorFlow 2.1, and Keras 2.3.1 used to accelerate the training and testing of the DL models on the GPU.
4.3.1 Performance of DRSN within Different RSUs
Deeper networks usually improve model performance, but too many parameters lead to more complex models and longer optimization time. In order to balance performance and complexity, we evaluate DRSNs built with different numbers of RSUs; the details are shown in Table 1. The influence of the number of RSUs (RSU-NUM) on recognition performance is evaluated in this experiment. The structure of the RSU is shown in Figure 8. The experiment is carried out with 8 devices and SNR = 10 dB, with RSU-NUM in the range of {1, 2, 3, 4} and other conditions unchanged. The effects of RSU-NUM on the network model and performance are shown in Table 1. When RSU-NUM = 1, the accuracy is the lowest and the error is the largest, which indicates that the shallow network cannot fully extract the fingerprint features from the samples and that the self-learning of the network is insufficient. As RSU-NUM increases, the recognition accuracy also increases. However, when RSU-NUM ≥ 2, the accuracy and error of the different networks are very close. In addition, the added RSUs make the network more complex, and the number of parameters and the training time continue to increase. In order to balance accuracy, network parameters, and training time, we uniformly use two RSUs to construct the DRSN. Therefore, the structure of the DRSN used in the experiments is shown in Figure 9.
Table 1. Effect of RSU-NUM on Performance (SNR = 10 dB).
4.3.2 Model Comparison within Different Devices
Our proposed DRSN is compared with the works based on ResNet [41], CNN(1D) [34], CNN(2D) [35], DNN [33], and SVM [23]. The different algorithms are compared on 4 and 8 devices, respectively, and the accuracy is shown in Table 2. First of all, the DRSN model proposed in this paper achieves the highest recognition accuracy on both 4 and 8 devices. Secondly, it is clear that as the number of devices increases, the recognition accuracy decreases, which is expected, since the more devices there are, the more likely they are to have similar features. Then, the recognition accuracy of the DL-based models on 4 and 8 devices is higher than that of the ML-based model (SVM). Last but not least, the performance of the CNNs is better than that of the DNN: when identifying 8 devices, the accuracy of the DNN is 77%, while the accuracy of the CNN-based SEI methods exceeds 85%. In addition, since the SVM classifier is relatively simple and its running time is very short, we focus on the convergence time of the neural networks. It can be seen from Table 2 that the DRSN needs less time to converge and its recognition speed is faster than the other DL-based models. This is because the DRSN uses an identity shortcut to facilitate the parameter update process.
4.3.3 Methods Comparison under Different SNRs
Our proposed DRSN-based SEI method is compared with the methods based on ResNet, CNN(1D), CNN(2D), DNN, and SVM under different SNRs. All models have been trained 5 times. The accuracy of each network model at each SNR is shown in Figure 10. Firstly, it can be seen from Figure 10 that the DRSN model obtains the best recognition accuracy for 8 devices under all SNRs; in particular, when the SNR is -5 dB and 0 dB, the performance of the DRSN is better than that of the other models. Secondly, under different SNRs, the accuracy of the methods based on DNN and SVM is lower than that of the other algorithms. Finally, as the SNR increases, the recognition accuracy of each model improves. When the SNR is 5 dB, the accuracy of the first four models exceeds 80%, and when the SNR is 15 dB, the accuracy exceeds 90%. In summary, the DRSN can effectively reduce the influence of redundant information such as noise and extract key features. As the SNR improves, the DRSN is still able to extract more representative information.
Figure 10. Accuracy comparison of DRSN, ResNet, CNN(1D), CNN(2D), DNN and SVM under different SNRs.
Table 2. Accuracy and running time of different methods (SNR = 10 dB).
Figure 11(a)-(d) shows example confusion matrices for the 8-class classification problem, where the rows and columns of each matrix correspond to the true and predicted classes, respectively. Diagonal cells indicate correctly classified samples, while off-diagonal cells indicate misclassified samples. The number of samples in each cell is displayed as a percentage; values of 0% are not displayed. It can be seen from Figure 11(a)-(d) that when the SNR is 10 dB, apart from the poorer identification of the label-3 and label-4 devices, the remaining devices can be identified well. Moreover, compared with ResNet and the CNNs, the DRSN recognizes the device types more accurately, and its probability of prediction errors is lower. As shown in Figure 11(e)-(h), the high-dimensional features of the FC layer are reduced and visualized in 2D space through t-distributed stochastic neighbor embedding (t-SNE). Compared with the other DL models, in the DRSN the features of each class are more compact and there is less feature overlap between different classes, resulting in better classification. At the same time, consistent with their poorer recognition accuracy, the features of the label-3 and label-4 devices overlap more, while the features of the remaining 6 devices are relatively concentrated in space.
Figure 11. The normalized confusion matrices and 2D visualizations of high-dimensional features using t-SNE when the SNR is 10 dB. (a)-(d) are the normalized confusion matrices of DRSN, ResNet, CNN(1D), and CNN(2D), respectively. (e)-(h) are the t-SNE visualizations of DRSN, ResNet, CNN(1D), and CNN(2D), respectively.
4.3.4 Performance Evaluation of DRSN
Although the SEI task is a multi-class classification problem as a whole, it can be regarded as a binary classification problem for each category. Therefore, the precision, recall, and F1 score are used to evaluate the performance of the DRSN model via the confusion matrix. These performance metrics are defined as follows:
$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}},$$
where $TP$, $FP$, and $FN$ are the numbers of true positives, false positives, and false negatives, respectively. According to the normalized confusion matrix of the DRSN in Figure 11(a), the detailed performance metrics of the DRSN are shown in Table 3.
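As a quick check of these definitions, a short sketch computing the per-class metrics directly from a confusion matrix (rows = true classes, columns = predicted classes):

```python
import numpy as np

def per_class_metrics(cm):
    """Precision, recall, and F1 score for each class of a confusion matrix."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp      # predicted as this class but actually another
    fn = cm.sum(axis=1) - tp      # actually this class but predicted as another
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```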
Table 3. Performance metrics of DRSN (SNR = 10 dB).
In this paper, the deep residual shrinkage network is proposed to identify emitters. The soft threshold is learned and set automatically via the shrinkage sub-network in the DRSN, which not only avoids the professional knowledge required to set the threshold manually, but also enables each sample to learn its own threshold. In this way, the impact of noise on performance is reduced and the identification accuracy for the devices is improved. In addition, an identity shortcut is applied to speed up the training process. The BB60C is used to collect signals from ZigBee devices, and the generated dataset is used for training and testing. The experimental results show that, compared with ResNet, CNN(1D), CNN(2D), and DNN, the DRSN obtains the best accuracy under different SNRs and has less running time. In summary, the learnable soft threshold in our proposed DRSN model can effectively improve identification accuracy at low SNRs and maintain robustness across different SNRs.
ACKNOWLEDGEMENT
This work was supported by the National Natural Science Foundation of China (No. U20B2038, No. 61871398, No. 61901520 and No. 61931011), the Natural Science Foundation for Distinguished Young Scholars of Jiangsu Province (No. BK20190030), and the National Key R&D Program of China under Grant 2018YFB1801103.