

        Supplementary materials for


        1 Modulation classification methods

        Modulation classification methods are divided into manual and automatic techniques. Manual modulation classification relies on down-converting the received high-frequency signal and determining the modulation type with instruments such as an oscilloscope, spectrum analyzer, or demodulator; it recognizes only a limited set of modulation types and has high complexity. Compared with complex manual recognition, existing automatic modulation recognition methods let machines carry out the classification (Wang et al., 2019).

        2 Architecture of the signal transceiver system

        The read data source is sent through the transmit (TX) channel in the universal software radio peripheral (USRP) hardware driver (UHD) sink. The USRP is a flexible and powerful general-purpose software radio peripheral; its low cost and wide bandwidth make it cost-effective (Zitouni and George, 2016). During transmission, signal processing inside the USRP proceeds in two stages. In the first stage, the high-speed digital signal processing field-programmable gate array (FPGA) on the motherboard converts the digital baseband signal from the computer into a digital intermediate frequency signal. After transmit control and digital up-conversion in the FPGA, the signal is converted to the analog domain by the digital-to-analog converter (DAC) module. In the second stage, the daughterboard filters the analog intermediate frequency (IF) signal to smooth it and then mixes it with the crystal oscillator signal to obtain the radio frequency (RF) signal. The signal radiated by the antenna propagates through the radio environment and is then received through the receive (RX) channel. The daughterboard's low-noise amplifier and crystal oscillator down-convert the signal from RF to IF, filtering and smoothing it to prevent aliasing. The analog-to-digital converter (ADC) on the motherboard then performs the analog-to-digital conversion and sends the samples to the FPGA for digital down-conversion and receive control. GNU's Not Unix (GNU) Radio establishes communication with the USRP by calling the application programming interface (API) provided by the UHD driver (Liu et al., 2017). The QT graphical user interface (QT GUI) time sink module in GNU Radio displays the signal: given a complex-valued input, it plots both the real and imaginary parts, and the plot shows whether transmission and reception have completed. The in-phase and quadrature components of the acquired signal are transferred to the computer through the file sink module and saved as a file of the corresponding modulation type. The tight integration of software and hardware platforms facilitates complex signal processing.
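
        The two-stage mixing described above can be sketched in numpy. This is a minimal illustration of the up-conversion/down-conversion principle, not the actual UHD or GNU Radio API; the sample rate, IF, and symbol values are assumed for illustration.

```python
import numpy as np

fs = 1e6          # sample rate in Hz (assumed)
f_if = 100e3      # intermediate frequency in Hz (assumed)
t = np.arange(1024) / fs

# Complex baseband: a QPSK-like symbol stream, 256 samples per symbol (illustrative)
symbols = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
baseband = np.repeat(symbols, 256)

# Digital up-conversion: multiply baseband by a complex carrier,
# mirroring what the FPGA's DUC stage does before the DAC
if_signal = baseband * np.exp(2j * np.pi * f_if * t)

# Receive side: down-convert back to baseband with the conjugate carrier
# (in hardware, a low-pass filter would follow to remove image products)
recovered = if_signal * np.exp(-2j * np.pi * f_if * t)
```

        Because the same carrier is used on both sides, `recovered` equals the original baseband exactly; in a real link, oscillator offsets and channel effects would make equalization necessary.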

        3 Network in the signal recognition system framework

        The designed network introduces batch normalization (BN) to mitigate the gradient explosion and vanishing caused by backward gradient propagation. In addition, the data distribution after BN is more stable, so subsequent layers can learn features from a well-behaved distribution, which accelerates convergence of the loss function. We select the LeakyReLU activation function to enhance the nonlinearity of the network; for negative inputs, LeakyReLU has a small slope, which solves the problem of neurons that stop learning when their inputs are negative. The network also introduces a dropout mechanism to increase the sparsity and randomness of the design and to avoid spending training time on unimportant features.
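
        The three components above can be sketched as plain numpy functions. This is a simplified illustration (inference-style BN without learned scale/shift, and a fixed dropout rate chosen here for demonstration), not the paper's exact network.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean and unit variance over the batch."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def leaky_relu(x, alpha=0.01):
    """Small negative slope keeps neurons learning when inputs are negative."""
    return np.where(x > 0, x, alpha * x)

def dropout(x, rate=0.5, rng=np.random.default_rng(0)):
    """Randomly zero activations during training; rescale to keep the expectation."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.random.default_rng(1).normal(size=(8, 4))   # a toy batch of 8 samples
h = dropout(leaky_relu(batch_norm(x)))
```

        The ordering BN → activation → dropout shown here follows common practice; the stabilized post-BN distribution is what lets the later layers converge faster.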

        4 Autoencoder

        As one of the mainstream architectures in deep learning, the autoencoder aims to minimize reconstruction error so that the output reproduces the input (Bengio et al., 2013). However, the basic autoencoder learns with only a single hidden layer, which easily yields a linear mapping. Researchers proposed the deep autoencoder, which stacks multiple hidden layers and trains the network by backpropagation, to solve the problem that a single-hidden-layer autoencoder simply copies the input to the output (Hinton and Salakhutdinov, 2006). The deep autoencoder can efficiently learn hidden-layer representations of the input data and obtain more objective and complete features. The model contains an encoder and a decoder. The encoder encodes the input data x as a latent variable h through feature extraction, capturing the most significant features of the neural network. The decoder converts the extracted features into x_R through the decoding operation, restoring the features to the original dimension.
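
        The encoder/decoder structure can be sketched as follows. The dimensions, random weights, and tanh nonlinearity are assumptions for illustration; a trained model would learn W_enc and W_dec by minimizing the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed): 16-dim input compressed to a 4-dim latent code
d_in, d_latent = 16, 4
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))

def encoder(x):
    # h = f(x): capture the most significant features in a lower dimension
    return np.tanh(x @ W_enc)

def decoder(h):
    # x_R = g(h): restore the latent code to the original dimension
    return h @ W_dec

x = rng.normal(size=(32, d_in))            # a toy batch of 32 samples
x_R = decoder(encoder(x))
reconstruction_error = np.mean((x - x_R) ** 2)  # the quantity training minimizes
```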

        5 Deep residual network

        A neural network learns more sophisticated features as more layers are added, and training then achieves better recognition. However, when the layers are too deep, the network faces degradation: performance rapidly saturates or even decreases. Suppose a model with m layers has been trained, but the optimal network needs only n layers (where m > n); the remaining m - n layers are then redundant. Because of the nonlinear activation function, these m - n redundant layers cause irreversible information loss and make the network degenerate. He et al. (2016) proposed the deep residual network to reduce the effect of redundant layers on network degradation. It divides the network into two parallel parts. One part keeps the original network design. The other is a shortcut connection: an identity mapping from the starting layer of the original block that is added, across multiple hidden layers, to the output layer of the block. The two act together as the input to the next layer. In a deep residual network, let x be the input, F(x) the output of the original branch, and H(x) the final output of the block; then H(x) = F(x) + x. If F(x) corresponds to a redundant layer, it lacks a positive effect on the output, but H(x), through the presence of x, still guarantees an output at least consistent with acting on x directly, namely H(x) ≈ x. Calling F(x) = H(x) - x the residual term, we then need F(x) ≈ 0. Since parameters are generally initialized near zero, learning F(x) ≈ 0 is simpler than directly learning H(x) ≈ x when parameters are updated. Moreover, the residual model avoids the gradient vanishing that occurs when the gradient is backpropagated. In backpropagation, with ε denoting the loss function, the chain rule gives ∂ε/∂x = (∂ε/∂H(x)) · (∂F(x)/∂x + 1); the constant term 1 ensures that the gradient does not vanish even when ∂F(x)/∂x is small.
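
        The identity H(x) = F(x) + x can be sketched as a residual block in numpy. The layer sizes, weights, and the use of two linear layers with a LeakyReLU as the residual branch are assumptions for illustration; a redundant branch (F ≈ 0) leaves the block acting as the identity, which is exactly the degradation-avoiding behavior described above.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(8, 8))   # residual-branch weights (assumed)
W2 = rng.normal(scale=0.1, size=(8, 8))

def F(x):
    """The residual branch: two linear layers with a LeakyReLU in between."""
    h = x @ W1
    h = np.where(h > 0, h, 0.01 * h)
    return h @ W2

def H(x):
    """Residual block output: identity shortcut plus residual branch."""
    return F(x) + x

x = rng.normal(size=(1, 8))
out = H(x)
```

        If the branch weights are driven to zero (F(x) = 0), H(x) reduces to x, so a redundant block costs nothing; and since ∂H/∂x = ∂F/∂x + I, the gradient through the shortcut never vanishes.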
