

        A Quantized Kernel Least Mean Square Scheme with Entropy-Guided Learning for Intelligent Data Analysis

China Communications, 2017, Issue 7

        Xiong Luo , Jing Deng , Ji Liu , Weiping Wang , Xiaojuan Ban , Jenq-Haur Wang

        1 School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China

        2 Beijing Key Laboratory of Knowledge Engineering for Materials Science, Beijing 100083, China

3 Department of Computer Science and Information Engineering, National Taipei University of Technology, Taipei 10608, Taiwan

* The corresponding authors, emails: xluo@ustb.edu.cn; shiya666888@126.com

        I. INTRODUCTION

Kernel methods have become increasingly popular in machine learning, and they are powerful tools for performing complex computations in network services and applications. For example, kernel learning algorithms can be employed to improve the detection of cyber-based attacks on computer networks [1]. In recent years, enormous research effort has been devoted to the development of kernel learning methods, such as the support vector machine [2], kernel principal component analysis [3], and many others. By using kernel methods to map the original input space into a high-dimensional feature space and then performing linear learning in that feature space, these nonlinear algorithms achieve strong optimization performance. In particular, kernel adaptive filtering (KAF) is a powerful class of nonlinear filters developed in reproducing kernel Hilbert space (RKHS), which exploits the linear structure of RKHS to apply mature linear adaptive algorithms to nonlinear problems in the input space [4]. Several typical nonlinear adaptive filtering algorithms have been generated by mapping linear algorithms into RKHS, such as the kernel least mean square (KLMS) algorithm [5], the kernel recursive least square (KRLS) algorithm [6], and many others [7]-[12]. There has also been a surge of interest in applications of kernel learning methods [13]-[15].

Generally, these nonlinear adaptive filtering algorithms generate a growing radial basis function (RBF) network via a radially symmetric Gaussian kernel. This growing structure increases computing costs and memory requirements, especially under continuous adaptation [7]. Hence, some online sparsification methods have been proposed to address this issue, using a criterion function with thresholds to decide whether a new sample should be added to the center set. Although these sparsification criteria can dramatically reduce the network scale, their high computational cost still imposes obstacles to practical application. To address this, a quantization approach for constraining the network size was developed, and the quantized kernel least mean square (QKLMS) algorithm was proposed [7]. The basic idea of quantization is to represent the whole input data with a smaller body of data by partitioning the input space into smaller regions. Unlike sparsification, the quantization approach is computationally simple, the quantization size is relatively easy to determine, and redundant data are exploited to improve performance by quantizing them to the nearest center.

However, the methods mentioned above do not consider any pretreatment of the input dataset. Recently, entropy-based optimization techniques have been integrated into machine learning algorithms [16]. It is feasible to reduce the input and output datasets by measuring the importance degree of every input vector according to its information entropy, and deleting the input and corresponding output vectors when they are insignificant or likely to cause errors, so that both the memory footprint and the scale of the network can be compressed. Most current nonparametric entropy estimation techniques rely on estimating the probability density function (PDF). These estimates are then substituted into the expression of the entropy, and have been widely applied to the estimation of the Shannon entropy [17]. Meanwhile, an optimization method using the consecutive square entropy technique was proposed [18]. It incorporates Parzen's window function into the square entropy to realize the entropy estimation. Parzen's window approach is a nonparametric method for estimating the PDF of a finite set of patterns [19], and it can fully reflect the distribution characteristics of data. Compared with the traditional information entropy, the square entropy using Parzen's window as the PDF of the dataset can better describe the uncertainty of the dataset, and thus describes the importance degree of each input vector more accurately.
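For concreteness, a minimal sketch of the Parzen window estimate with a Gaussian window is given below; the function name and the window width h are illustrative choices, not taken from [18] or [19].

```python
import numpy as np

def parzen_pdf(x, samples, h):
    """Parzen window PDF estimate at point x: the average of Gaussian
    windows of width h centered at each observed sample."""
    samples = np.asarray(samples, dtype=float)
    kernels = np.exp(-(x - samples) ** 2 / (2.0 * h ** 2))
    return kernels.sum() / (len(samples) * h * np.sqrt(2.0 * np.pi))
```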

In light of the above analysis, we improve QKLMS by optimizing the initial input data. Through the combination of square entropy and the QKLMS algorithm, a novel kernel optimization scheme with entropy-guided learning, called EQ-KLMS, is proposed. In EQ-KLMS, on the basis of the optimal entropy weights, the adaptive filter can be run with high precision and low computational cost. In addition, data analysis is now widely used in practical applications. Some learning-based methods, such as the extreme learning machine (ELM) [20] and the least squares support vector machine (LSSVM) [2], have been used for data analysis, and each has its own advantages and deficiencies in data prediction. With the development of data analytics and decision making via data-driven learning schemes [21], kernel learning methods have been widely used in intelligent data analysis, achieving many good results. This article therefore also focuses on intelligent data analysis, using our proposed scheme to address data prediction.

The rest of this article is organized as follows. In Section 2, we introduce adaptive filters, the QKLMS algorithm, and the entropy estimation technique. In Section 3, the details of the proposed EQ-KLMS scheme are presented. Experiments and discussion are provided in Section 4, and the conclusion is summarized in Section 5.


II. BACKGROUND

        2.1 Adaptive filters

An adaptive filter is a class of filtering structures equipped with a built-in mechanism that enables the filter to automatically adjust its free parameters according to statistical variations in the environment.

The basic structure of an adaptive filter is shown in Figure 1. The filter embodies a set of adjustable parameters, e.g., the weight vector ω(i-1). Here y(i) is the actual response to the input vector u(i) applied to the filter at time i. The difference between the actual response y(i) and the desired response d(i) is the error signal e(i). Then e(i) is used to produce an adjustment to the parameter vector ω(i-1) of the filter. This adaptive filtering process is repeated until the parameter adjustments become small enough to stop the adaptation.
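To make this update loop concrete, here is a minimal sketch of a linear LMS adaptive filter in Python with NumPy; the step size and zero initialization are illustrative choices, not values from the paper.

```python
import numpy as np

def lms(inputs, desired, eta=0.01):
    """Linear LMS adaptive filter: w(i) = w(i-1) + eta * e(i) * u(i)."""
    n, dim = inputs.shape
    w = np.zeros(dim)                 # initial weight vector w(0)
    errors = np.empty(n)
    for i in range(n):
        y = w @ inputs[i]             # actual response y(i)
        e = desired[i] - y            # error e(i) = d(i) - y(i)
        w += eta * e * inputs[i]      # parameter adjustment
        errors[i] = e
    return w, errors
```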

        2.2 Quantized kernel least mean square algorithm

KLMS is essentially the linear least mean square algorithm performed in RKHS, and it is the simplest member of the KAF family. KLMS produces a growing RBF network by allocating a new kernel unit for every new example, with the input as the center. This network structure, which grows with each new sample, is the biggest obstacle to its wide application.

Quantization techniques have been employed in many fields. The QKLMS algorithm uses an online vector quantization (VQ) method to compact the RBF structure of KLMS by reducing the network size [7]. QKLMS can be implemented by simply quantizing the feature vector φ(i) in the weight-update equation ω(i) = ω(i-1) + ηe(i)φ(i) of KLMS, where the error e(i) = d(i) - ω(i-1)^T φ(i), φ(·) denotes the mapping of the input vector into the high-dimensional feature space, d(i) is the desired signal, and η is the step size [5].

The basic idea of the quantization approach can be described briefly as follows. When a new input vector u(i) arrives, we first compute the Euclidean distance between u(i) and the codebook C(i-1). If the distance is less than the given threshold, we keep the codebook unchanged and quantize u(i) to the closest code vector. Otherwise, we update the codebook as C(i) = {C(i-1), u(i)} and allocate a new kernel unit for u(i). In this way, the scale of the network is compressed.
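The following sketch implements this quantization step for QKLMS with a Gaussian kernel of width delta, where gamma plays the role of the quantization threshold γ. It follows the description in [7], but the variable names and structure are our own, so treat it as an illustration rather than a reference implementation.

```python
import numpy as np

def gauss(a, b, delta):
    """Gaussian kernel with width delta."""
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * delta ** 2))

def qklms_train(inputs, desired, eta, delta, gamma):
    """QKLMS sketch: codebook C holds centers, alpha their coefficients."""
    C, alpha = [], []
    for u, d in zip(inputs, desired):
        # prediction of the current RBF network at u
        y = sum(a * gauss(c, u, delta) for c, a in zip(C, alpha))
        e = d - y
        if C:
            dists = [np.linalg.norm(u - c) for c in C]
            j = int(np.argmin(dists))
            if dists[j] <= gamma:
                alpha[j] += eta * e   # quantize u to the nearest center
                continue
        C.append(u)                   # distance exceeds gamma: new unit
        alpha.append(eta * e)
    return C, alpha
```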

        2.3 Entropy estimation with Parzen’s windows

Generally, the greater the entropy, the less valid information is contained in the system [22]. The consecutive square entropy of a random variable is closely related to the PDF over all possible values of that variable, and its expression is given in [18]. Remarkably, the PDF in the consecutive square entropy can be taken to be a Parzen's window function, which is defined in [23].

Fig.1 The basic structure of adaptive filter

To reduce the influence of outliers in the input data on the model accuracy, a method of measuring the amount of information contained in each input vector based on entropy weight is adopted. The steps are as follows:

1) Compute the consecutive square entropy e_j of each input vector, using Parzen's window function as the PDF estimate.

2) Define the validity coefficient h_j = 1 - e_j; the entropy weight of each input vector is then calculated from the validity coefficients:

w_j = h_j / (h_1 + h_2 + ... + h_n) = (1 - e_j) / Σ_k (1 - e_k).

It can be seen that, for an input vector, the greater the square entropy, the lower the validity coefficient, the smaller the entropy weight, and the less important the vector is in the whole system.

Now we give an example to demonstrate the relationship between entropy and weight. Taking the dataset [1 21 3 4 5 6 7 8], we construct the input as [1 21 3 4 5; 21 3 4 5 6; 3 4 5 6 7] and the output as [4 5 6 7 8], where each column of the input represents an input vector. The calculated consecutive square entropies are [0.7452 0.7054 0.5311 0.5311 0.5311], and the entropy weights of the input vectors are [0.1302 0.1506 0.2397 0.2397 0.2397]. It can be seen that the element 21 does not follow the overall distribution of the data and can be considered an outlier. The first two input vectors contain this outlier, and the results show that their entropies are larger than those of the others.
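The transformation from entropies to weights in this example can be checked with a few lines of code; the square entropies themselves are taken from the text, since computing them requires the estimator of [18], which we do not reproduce here.

```python
import numpy as np

# Square entropies of the five input vectors, as quoted above
e = np.array([0.7452, 0.7054, 0.5311, 0.5311, 0.5311])
h = 1.0 - e            # validity coefficients h_j = 1 - e_j
w = h / h.sum()        # entropy weights
print(np.round(w, 4))  # [0.1303 0.1506 0.2397 0.2397 0.2397],
                       # matching the weights above up to rounding
```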

Generally, a weight is used to measure the importance of a component, so we transform the entropy into an entropy weight. In the prediction process, we use the first l data points to predict the next one; the more orderly the data, the smaller the deviation and the higher the success rate, which helps uncover the relationship between input and output. That is to say, a vector with a large entropy weight plays a positive role in the training process, while a vector with a small entropy weight plays a negative role. Hence the entropy weight can be used to measure the significance of a vector in the training system.

        III. EQ-KLMS

        3.1 Implementation

In EQ-KLMS, we first calculate the entropy weight of each input vector to measure its importance in the system. We then remove the input vectors, and their corresponding outputs, whose entropy weights are less than the average value. In this way, input vectors that are insignificant or may lead to errors are deleted before training, which compresses the dataset as well as improves the learning accuracy. Finally, the QKLMS model is trained with the modified training set until the range of the parameter adjustments becomes small enough, that is, until the weight vector of the adaptive filter has stabilized. At that point the hidden relationships between inputs and outputs have been found, and we can use the model to predict the output for a given input more accurately.

Algorithm 1 The framework of EQ-KLMS

The implementation of EQ-KLMS is divided into five steps: constructing the inputs and outputs, calculating the entropy weight of every input vector, modifying the dataset according to the entropy weights, training the QKLMS model with the modified training set, and performing data analysis. The framework of EQ-KLMS is shown in Algorithm 1, and a sketch of the pipeline follows below.
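A compact sketch of these five steps is given here, reusing the qklms_train sketch from Section 2.2; the window length l and the entropy_weight callable are placeholders for the input construction and entropy estimator described above, and eta, delta, gamma correspond to η, δ, γ.

```python
import numpy as np

def eq_klms(series, l, eta, delta, gamma, entropy_weight):
    """EQ-KLMS sketch: entropy-guided pruning followed by QKLMS training.
    `entropy_weight` should implement the estimator of Section 2.3."""
    # Step 1: construct inputs (l consecutive values) and outputs (next value)
    X = np.array([series[i:i + l] for i in range(len(series) - l)])
    y = np.asarray(series[l:])
    # Step 2: entropy weight of every input vector
    w = np.array([entropy_weight(x) for x in X])
    # Step 3: drop vectors (and their outputs) with below-average weight
    keep = w >= w.mean()
    # Step 4: train QKLMS on the modified training set; step 5 (data
    # analysis) then evaluates the trained network on test inputs
    return qklms_train(X[keep], y[keep], eta, delta, gamma)
```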

Remark 1: If the prediction fails, we replace the output with the desired output, so that the error is 0. The weight from the last moment then does not change, which avoids the impact of the failed prediction.

        3.2 Complexity analysis

It is clear that the core of our scheme is the calculation of the entropy weights and the training of the QKLMS model. If there are n input vectors, we need n computations to obtain the entropy weight of each vector, so the time complexity of computing the entropy weights is O(n). In addition, the computational costs of the online VQ and of updating α(i) are also O(n) [8]. Hence, the computational complexity of EQ-KLMS is O(n).

        IV. EXPERIMENT AND DISCUSSION

        4.1 Dataset and metrics

We use the actual dataset obtained in [25] to conduct data prediction. We evaluate the performance through the prediction results, the computational time, and the mean absolute error (MAE). Here,

MAE = (1/N) Σ_j |u_j − u_j′|,

where u_j represents the real value, u_j′ denotes the predicted value, and N is the number of predicted values. According to Remark 1, if the prediction fails, the error is set to 0 and the output is replaced by the desired output. This means the MAE is only calculated over the successful prediction range, which differs from the conventional method.

Assuming that M is the total number of test items and that m predictions are successful, the successful prediction rate (SPR) is m/M × 100%.
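Under the assumption that a prediction counts as successful when its absolute error is within the threshold ε (the text does not spell out the criterion), the two metrics can be sketched as follows.

```python
import numpy as np

def mae_and_spr(u_true, u_pred, eps):
    """MAE over successful predictions only (cf. Remark 1), plus SPR."""
    err = np.abs(np.asarray(u_true) - np.asarray(u_pred))
    success = err <= eps                 # assumed success criterion
    spr = 100.0 * success.mean()         # m / M * 100%
    mae = err[success].mean() if success.any() else 0.0
    return mae, spr
```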

To verify the effectiveness of our scheme, we run KLMS and QKLMS under the same experimental conditions. In addition, since we calculate the entropy weight for all the training inputs, we also provide a comparison with ELM and LSSVM, which are two offline algorithms.

In the first experiment, we extract 4,000 continuous temperature data items of Haikou city in 2013, using the first 3,005 data points to generate the training input set (5×3,000) and the corresponding desired output set (3,000×1), and the following 605 data points to generate the testing input set (5×600) and the corresponding desired output set (600×1). In the second experiment, we extract 5,000 continuous humidity data items of Xi'an city in 2013, using the first 3,005 data points to generate the training input set (5×3,000) and the corresponding desired output set (3,000×1), and the following 1,005 data points to generate the testing input set (5×1,000) and the corresponding desired output set (1,000×1). The experiments are conducted in the MATLAB computing environment on an Intel(R) Core(TM) i5-3317U 1.70 GHz CPU. We test the SPR under different thresholds ε, where ε represents the requirement on the prediction accuracy.
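Assuming the l×n input matrices are built from sliding windows, with each column holding l consecutive values and the output being the value that follows (consistent with the 5×3,000 and 5×600 shapes quoted above), the split can be sketched as follows.

```python
import numpy as np

def window_sets(series, l=5):
    """Columns of X are l consecutive values; y holds the next value."""
    X = np.column_stack([series[i:i + l] for i in range(len(series) - l)])
    y = np.asarray(series[l:])
    return X, y

# Temperature experiment: first 3,005 items -> 5x3,000 training set,
# next 605 items -> 5x600 testing set (3,005 + 605 = 3,610 <= 4,000)
# X_tr, y_tr = window_sets(temperature[:3005])
# X_te, y_te = window_sets(temperature[3005:3610])
```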

        4.2 Results in temperature dataset

After conducting some tests, the parameters are selected as kernel parameter δ = 0.02 and step-size parameter η = 0.007, since they achieve the best accuracy. Figures 2 and 3 show how the quantization factor γ affects the performance of the five algorithms. When γ increases, the MAEs of QKLMS and EQ-KLMS increase gradually, but their network sizes decrease dramatically. Since γ has no impact on KLMS, ELM, or LSSVM, the lines of those three algorithms overlap in Figure 3. We therefore take the compromise value γ = 0.03, at which both the MAE and the network size achieve reasonable values. Because the sum of the b_i is 1 and each element in the vector has equal weight, we set b_i = 1/m to simplify the calculation.


The prediction results for the temperature dataset with ε = 0.15 are shown in Figure 4. We find that the prediction values of EQ-KLMS are in good agreement with the actual values, whereas the prediction accuracy of ELM is relatively worse than that of the other schemes. Specifically, Figure 5 gives the prediction error of EQ-KLMS at every sampling point, showing that the prediction error converges to a value within a sufficiently small interval.

Figure 6 shows the MAE as the threshold ε changes; a smaller MAE indicates a better prediction effect. On the whole, the prediction effect of EQ-KLMS is the best, and the prediction error of ELM is the largest. As ε increases, the performance of LSSVM approaches that of QKLMS. Figure 7 shows the SPR as ε changes. Clearly, the greater the threshold, the higher the SPR, and among the five algorithms the SPR of EQ-KLMS is the highest.

We also compare the computational time for ε = 0.15 in Table I. The computational time of LSSVM exceeds that of the other methods, and the time EQ-KLMS spends is close to that of QKLMS. Hence, even though EQ-KLMS spends part of its time computing entropy, it significantly reduces the computational cost thanks to the optimized training set. The details of the calculation are shown in Table II: EQ-KLMS significantly compresses the scale of the network and improves the prediction accuracy. Compared with the running time of the whole algorithm, the entropy computation takes only a small part; its share of the computing time of EQ-KLMS is 0.3048/2.4649 = 12.37%.

        4.3 Results in humidity dataset

After conducting some tests, the parameters of KLMS are selected as δ = 0.0021 and η = 0.0029, as they achieve the best accuracy. The influence of the quantization factor γ on the MAE and network size of EQ-KLMS and QKLMS is similar to that in the previous experiment, so we set γ = 0.15 here. The choice of b_i is the same as above.

        Fig.2 The MAE with different quantization factor γ

        Fig.3 The network size with different quantization factor γ

        Fig.4 The prediction results for temperature item

Figure 8 shows the humidity data at every sampling point and the prediction results for ε = 0.15. Except for ELM, the prediction values of the schemes almost follow the actual humidity data, but at some sampling points KLMS, QKLMS, and LSSVM perform worse, with larger errors than EQ-KLMS. Meanwhile, the prediction error of EQ-KLMS at every sampling point is shown in Figure 9; it also converges to a small value.

Figure 10 shows the corresponding MAE as the threshold ε changes; the MAE of EQ-KLMS is clearly the minimum. Figure 11 shows the SPR under different thresholds ε; EQ-KLMS obviously has the maximal SPR.

We can conclude that the quantization approach with entropy-guided learning improves the prediction accuracy. Considering the prediction effect and the computational time simultaneously, EQ-KLMS is a competitive choice under the current computational framework.

        Table I Computational time for temperature dataset

        Table II Computational details of EQ-KLMS for temperature dataset

        Fig.5 The prediction errors for temperature item

        Fig.6 The MAE with different threshold ε for temperature item

        V. CONCLUSION

To improve the learning accuracy and reduce the computing time, we combined the entropy-guided learning technique with the quantization approach to develop a novel kernel optimization scheme, EQ-KLMS. Through the calculation of entropy weights, we first modify the training set by removing the inputs, and their corresponding outputs, with larger uncertainty, which are insignificant or likely to cause errors in the learning process. Thus the memory footprint and the scale of the network are compressed, the computational effort and the data storage requirement decrease when running the kernel algorithm, and the learning error is reduced as well. The experimental results on a data analysis task have demonstrated the effectiveness of our proposed scheme. However, EQ-KLMS calculates the entropy weight for all the training inputs, which is one of its limitations; in future work we will embed the entropy weight calculation into the training process. In addition, applying our proposed scheme to other complex datasets (e.g., stock trading datasets) will be an interesting issue.

        ACKNOWLEDGEMENT

This work was partially supported by the National Key Technologies R&D Program of China under Grant No. 2015BAK38B01, the National Natural Science Foundation of China under Grant Nos. 61174103 and 61603032, the National Key Research and Development Program of China under Grant Nos. 2016YFB0700502, 2016YFB1001404, and 2017YFB0702300, the China Postdoctoral Science Foundation under Grant No. 2016M590048, the Fundamental Research Funds for the Central Universities under Grant No. 06500025, the University of Science and Technology Beijing - National Taipei University of Technology Joint Research Program under Grant No. TW201610, and the Foundation from the National Taipei University of Technology of Taiwan under Grant No. NTUT-USTB-105-4.

[1] J.M. Fossaceca, T.A. Mazzuchi, S. Sarkani, “MARK-ELM: Application of a novel multiple kernel learning framework for improving the robustness of network intrusion detection”, Expert Systems with Applications, vol.42, no.8, pp. 4062-4080, May, 2015.

        [2] R. Mall, J.A. Suykens, “Very sparse LSSVM reductions for large-scale data”,IEEE Transactions on Neural Networks and Learning Systems, vol.26,no.5, pp. 1086-1097, May, 2015.

        Fig.7 The SPR with different threshold ε for temperature item

        Fig.8 The prediction results for humidity item

        Fig.9 The prediction errors for humidity item

[3] P. Honeine, “Online kernel principal component analysis: A reduced-order model”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol.34, no.9, pp. 1814-1826, September, 2012.

[4] W.F. Liu, J.C. Principe, S. Haykin, Kernel Adaptive Filtering, Wiley, Hoboken, NJ, USA, 2011.

[5] W.F. Liu, P. Pokharel, J.C. Principe, “The kernel least mean square algorithm”, IEEE Transactions on Signal Processing, vol.56, no.2, pp. 543-554, February, 2008.

[6] Y. Engel, S. Mannor, R. Meir, “The kernel recursive least-squares algorithm”, IEEE Transactions on Signal Processing, vol.52, no.8, pp. 2275-2285, August, 2004.

[7] B.D. Chen, S. Zhao, P. Zhu, J.C. Principe, “Quantized kernel least mean square algorithm”, IEEE Transactions on Neural Networks and Learning Systems, vol.23, no.1, pp. 22-32, January, 2012.

[8] B.D. Chen, S. Zhao, P. Zhu, J.C. Principe, “Quantized kernel recursive least squares algorithm”, IEEE Transactions on Neural Networks and Learning Systems, vol.24, no.9, pp. 1484-1491, September, 2013.

[9] S.Y. Nan, L. Sun, B.D. Chen, Z.P. Lin, K.A. Toh, “Density-dependent quantized least squares support vector machine for large data sets”, IEEE Transactions on Neural Networks and Learning Systems, vol.28, no.1, pp. 94-106, January, 2017.

[10] X.G. Xu, H. Qu, J.H. Zhao, X.H. Yang, B.D. Chen, “Quantized kernel least mean square with desired signal smoothing”, Electronics Letters, vol.51, no.18, pp. 1457-1459, September, 2015.

[11] S. Zhao, B.D. Chen, J.C. Principe, “Fixed budget quantized kernel least mean square algorithm”, Signal Processing, vol.93, no.9, pp. 2759-2770, September, 2013.

[12] S. Zhao, B.D. Chen, Z. Cao, P.P. Zhu, J.C. Principe, “Self-organizing kernel adaptive filtering”, EURASIP Journal on Advances in Signal Processing, vol.2016, no.1, December, 2016.

[13] X. Luo, D. Zhang, L.T. Yang, J. Liu, X. Chang, H. Ning, “A kernel machine-based secure data sensing and fusion scheme in wireless sensor networks for the cyber-physical systems”, Future Generation Computer Systems, vol.61, pp. 85-96, August, 2016.

[14] X. Luo, J. Liu, D. Zhang, X. Chang, “A large-scale web QoS prediction scheme for the industrial Internet of Things based on a kernel machine learning algorithm”, Computer Networks, vol.101, pp. 81-89, June, 2016.

[15] Y. Xu, X. Luo, W. Wang, W. Zhao, “Efficient DV-Hop localization for wireless cyber-physical social sensing system: A correntropy-based neural network learning scheme”, Sensors, vol.17, no.1, 135, January, 2017.

[16] P. Tang, D. Chen, Y. Hou, “Entropy method combined with extreme learning machine method for the short-term photovoltaic power generation forecasting”, Chaos Solitons and Fractals, vol.89, pp. 243-248, October, 2015.

        Table III Computational time for humidity dataset

        Table IV Computational details of EQ-KLMS for humidity dataset

        Fig.10 The MAE with different threshold ε for humidity item

        Fig.11 The SPR with different threshold ε for humidity item

[17] J. Beirlant, E.J. Dudewicz, L. Györfi, E.C. van der Meulen, “Nonparametric entropy estimation: An overview”, International Journal of Mathematical and Statistical Sciences, vol.6, no.1, pp. 1-14, 1997.

[18] Z.B. Liu, “A maximum margin learning machine based on entropy concept and kernel density estimation”, Journal of Electronics & Information Technology, vol.33, no.9, pp. 2187-2191, September, 2011.

[19] E. Parzen, “On estimation of a probability density function and mode”, Annals of Mathematical Statistics, vol.33, no.3, pp. 1065-1076, September, 1962.

[20] G.B. Huang, H. Zhou, X. Ding, R. Zhang, “Extreme learning machine for regression and multiclass classification”, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol.42, no.2, pp. 513-529, April, 2012.

[21] C. Wu, Y. Chen, F. Li, “Decision model of knowledge transfer in big data environment”, China Communications, vol.13, no.7, pp. 100-107, July, 2016.

[22] C.E. Shannon, “Communication theory of secrecy systems”, The Bell System Technical Journal, vol.28, no.4, pp. 656-715, October, 1949.

[23] Z.B. Liu, W.J. Zhao, “One-class learning machine based on entropy”, Computer Applications and Software, vol.30, no.11, pp. 99-101, November, 2013.

[24] M.P. Wand, M.C. Jones, Kernel Smoothing, Chapman and Hall/CRC, Boca Raton, FL, USA, 1994.

        [25] https://www.wunderground.com/history
