Mohammadhossein Ghahramani, Yan Qiao, MengChu Zhou, Adrian O'Hagan, and James Sweeney
Abstract—Smart manufacturing refers to optimization techniques implemented in production operations by means of advanced analytics. With the widespread deployment of industrial Internet of Things (IIoT) sensors in manufacturing processes, there is a growing need for optimal and effective approaches to data management. Embracing machine learning and artificial intelligence to take advantage of manufacturing data can lead to efficient and intelligent automation. In this paper, we conduct a comprehensive analysis based on evolutionary computing and neural network algorithms toward making semiconductor manufacturing smart. We propose a dynamic algorithm for gaining useful insights about semiconductor manufacturing processes and addressing various challenges. We elaborate on the utilization of a genetic algorithm and a neural network to propose an intelligent feature selection algorithm. Our objective is to provide an advanced solution for controlling manufacturing processes and to gain perspective on various dimensions that enable manufacturers to access effective predictive technologies.
OVER recent decades, the manufacturing industry has witnessed tremendous advances in the form of four major paradigm shifts. In the latest industrial revolution, Industry 4.0, manufacturing has embraced the industrial Internet of Things (IIoT) [1], [2] and machine learning (ML) to enable machinery to boost performance through self-optimization [3]–[7]. Employing computer control over manufacturing phases can make industry processes smart. Broadly speaking, smart manufacturing (SM) can be defined as a data-driven approach that leverages IoT devices and various monitoring sensors. Deploying modern technologies, e.g., IoT coupled with cloud computing, in manufacturing provides access to valuable data at different levels, i.e., the manufacturing enterprise, manufacturing equipment, and manufacturing processes. With the prodigious amount of manufacturing data at hand, computational intelligence (CI) enables us to transform data into real-time manufacturing insights. Manufacturing can then be controlled by leading-edge CI and artificial intelligence (AI), with tasks modelled based on experimental observations to enhance productivity while reducing costs.
Cost-effective and sustainable manufacturing has become the focus of academia and industry. In pursuing it, identifying which factors play a pivotal role in process outcomes is of great importance. An integrated model based on manufacturing processes and data analytics is demonstrated in Fig. 1. The model is divided into different layers and can be considered a computer-integrated manufacturing (CIM) model from which computational intelligence can take control of the entire production process. At the business planning level, all decisions regarding the end product are made. The operational decisions related to optimizing processes are managed at the operation management level. At the monitoring level, different sensor-based monitoring approaches, e.g., anomaly detection methods, are employed. Finally, data acquisition and real-time processing are performed at the production process level and the sensing level, respectively.
Fig. 1. Different levels of automation and their corresponding data analytics (ERP: enterprise resource planning; MES: manufacturing execution systems; SCADA: supervisory control and data acquisition; HMI: human machine interface; PLC: programmable logic controller; CNC: computer numerical control; RTU: remote terminal unit).
The approach implemented in this work aims to mitigate cost and production risks and to promote the sustainable development of semiconductor manufacturing. Moving towards an optimal system, i.e., one that is adaptive and intelligent, is not a trivial task; however, embedding intelligent algorithms in automation and semiconductor production can both reduce cost and enhance product quality. The main focus of smart manufacturing studies is on product life-cycle management, manufacturing process management, industry-specific communication protocols, and manufacturing strategies. Recent advances in technology-based solutions, e.g., IoT, cloud/fog computing, and big data, can expedite and simplify the production process and make new manufacturing developments possible [8]–[12]. These advances should drive the evolution of manufacturing architectures into integrated networks of automation devices and enable the smart characteristics of being self-adaptive, self-sensing, and self-organizing. Providing such solutions requires addressing several challenges, e.g., data volume, data quality, and data merging.
Traditional fault detection and diagnosis systems interpret sensory signals as single values [13]. These values are then fed into a model to verify product status. The main drawback of this approach is that it fails to determine the most important features/operations involved in semiconductor production and may result in the loss of sensory data. Moreover, sensory data might contain noise, outliers, and missing values and can be characterized by heterogeneous structures. To address these concerns, we propose an intelligent and dynamic algorithm consisting of a feature extraction phase. Generally speaking, predicting the quality of products is an imbalanced classification problem, and semiconductor manufacturing is no exception. To be specific, the dataset is imbalanced because the defective rate in manufacturing processes is quite low in practice. To address this potential issue, a proper imbalance-handling technique needs to be taken into account to improve model performance. Such an implementation is discussed in the following sections. Moreover, we propose an integrated algorithm to solve a multiobjective problem based on an artificial neural network (ANN) and a genetic algorithm (GA) to establish a fault diagnosis solution by extracting the most relevant features and then using these features as input for classifiers. It should be mentioned that multiobjective evolutionary algorithms (MOEAs) are divided into different categories, i.e., decomposition-based and dominance-based methods. In this work, a decomposition method (the weighted sum technique) based on a binary GA and an ANN is proposed. This approach is applicable to all kinds of manufacturing analyses in the context of feature extraction/selection, dynamic optimization, and fault detection. Specifically, we investigate the following:
1) How a hybrid model based on an evolutionary algorithm combined with ANN can be proposed to model nonlinearity;
2) How to integrate the capabilities of ML combined with AI to implement a highly flexible and personalized smart manufacturing environment;
3) Whether combining ML with AI can outperform the traditional methods.
Given the extracted features, various classification methods are tested, and the one with the minimum classification error rate is selected. A comparison between the proposed solution and traditional methods is also presented. The integrated approach is shown to outperform the others in terms of the accuracy and performance of a manufacturing system. This model can also be useful for fault detection without requiring specialized knowledge. In this implementation, we have encountered several issues, e.g., handling imbalanced data and balancing exploration against exploitation in an optimization process. To address such concerns, various scenarios are discussed throughout the paper.
Modern embedded systems, an emerging area of ML, AI, and IoT, can be a promising solution for efficient, cost-effective manufacturing production. Semiconductor manufacturing is a highly interdisciplinary, complex, and costly process comprising various phases. Failures during the manufacturing phases result in faulty products. Hence, detecting the causes of failures is crucial for effective policymaking and is a challenging task at the business planning stage, as demonstrated in Fig. 1. This can be achieved by fully exploring production phases and extracting the relevant manufacturing features involved in production. Therefore, fault detection and feature extraction are of great importance. Accordingly, we deal with implementing a model for feature extraction and classification in semiconductor manufacturing. The solution involves developing a model for monitoring processes based on ML and AI algorithms. The overall output of manufacturing processes can then be enhanced by extracting the most relevant features. Consequently, interpreting these features (manufacturing processes) provides us with the ability to identify the root cause of a defect quickly. Such an efficient model contributes to cost reduction and productivity improvement.
While the most significant challenge in this work is the feature extraction/feature selection task, some other data-related issues, such as imbalanced data and outliers, must first be addressed. These data preparation steps aim to transform raw data into meaningful and useful data that can be used to distinguish data patterns and enable us to implement effective strategies. To solve the imbalanced classification issue, we have adopted a synthetic minority over-sampling algorithm to boost the small number of defective cases and assign a higher cost to the misclassification of defective products than to that of normal products. A confidence interval is defined, and outliers are identified based on this measurement and eliminated. Then, the initial set of data is fed into a feature selection algorithm. Feature extraction aims to project high-dimensional data sets into lower-dimensional ones in which relevant features can be preserved. These features are then used to distinguish patterns.
The proposed dynamic feature selection model is based on an integrated algorithm including a meta-heuristic method (GA) and an artificial neural network. We have implemented a binary GA to determine the optimal number of features and their associated cost, which are used to create a predictive model. Our goal is a solution with low cost values in a search process. The cost function is defined using a multilayer perceptron and considered an embedded part of the feature selection algorithm. GA consists of different phases, i.e., parent selection, crossover, mutation, and creating the final population (the selected features) [14]. Parent selection is a crucial part of GA and consists of a finite repetition of different operations, i.e., selection of parent strings, recombination, and mutation. The objective in the reproductive phase is to select cost-efficient chromosomes from the population, which create offspring for the next generation. To balance exploration against exploitation and to avoid premature convergence, we have proposed a selection scheme that combines different crossover operations. Premature convergence is heavily related to the loss of diversity. The proposed solution also eliminates the cost scaling issue and adjusts the selection pressure throughout the selection phase. We adjust the balance between exploration and exploitation by recombining crossover operators and adjusting their probabilities. A discussion of determining the exploration and exploitation rate is presented in the following sections. Consequently, offspring are created by adjusting such probabilities throughout the mating pool via a hybrid roulette-tournament pick operator. Selected features are fed to a predictive model to determine fault status. It is worth mentioning that the algorithm considers two major conflicting objectives: minimizing the number of features and maximizing the classification performance. The result of the proposed model is then compared with traditional approaches. The experiments have verified the effectiveness and efficiency of our approach as compared to those in the literature. In summary, the overall objective is to propose an AI-based multi-objective feature selection method together with an efficient classification algorithm to scrutinise manufacturing processes.
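The weighted-sum decomposition of these two objectives can be written compactly. The paper does not state its exact cost expression, so the following is a hedged reconstruction in which the weight α and the error term E(·) are our notational assumptions:

```latex
% Hedged reconstruction of the weighted-sum cost; alpha and E are assumed notation.
\min_{\mathbf{x}\in\{0,1\}^{m}} \; C(\mathbf{x})
  = \alpha\, E(\mathbf{x}) \;+\; (1-\alpha)\,\frac{\lVert\mathbf{x}\rVert_{1}}{m},
\qquad \alpha \in [0,1],
```

where x is the binary chromosome, E(x) is the ANN classification error on the selected features, ‖x‖₁ counts the selected features, and m is the total number of features.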
The remainder of this paper is organized as follows: some related work on manufacturing processes, feature extraction, and the application of AI is described in Section II; a preprocessing procedure is discussed in Section III; the proposed approach with its associated discussions is given in Section IV; the experimental settings and the classification results are shown in Section V; and the future work and conclusions are presented in Section VI.
Recently, the rapid evolution of high-throughput technologies has resulted in the exponential growth of manufacturing data [15]. Since traditional approaches to data management are impractical due to high dimensionality, proposing an effective and efficient data management strategy has become crucial. To do so, ML can help develop strategies to identify patterns in high-dimensional datasets automatically. The key to leveraging manufacturing data lies in the constant monitoring of processes, which can be associated with different issues, e.g., noisy signals. Dimensionality reduction and feature selection/extraction methods, e.g., principal component analysis (PCA), linear discriminant analysis (LDA), and canonical correlation analysis (CCA), play a critical role in dealing with noise and redundant features and must be considered as a preprocessing stage of manufacturing data analysis, which leads to better insights and robust decisions [16]. Some previous manufacturing fault detection studies have focused on utilizing the mentioned techniques for extracting the most relevant features and for classification. Feature selection methods can be divided into three main categories, i.e., filter, wrapper, and embedded methods. Filter methods act by ranking the features. In wrapper methods, features are selected based on the performance of predictors. Finally, embedded methods include variable selection as part of the training process without splitting the data into training and testing sets.
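As an illustration of the filter category, the following minimal Python sketch ranks features by mutual information using scikit-learn; the synthetic data and the choice of ranking criterion are our assumptions, not details taken from the cited works:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for high-dimensional sensor data (590 features, as in SECOM).
X, y = make_classification(n_samples=500, n_features=590, n_informative=20,
                           random_state=0)

# Filter method: rank features by mutual information with the target,
# independently of any downstream classifier.
selector = SelectKBest(mutual_info_classif, k=40).fit(X, y)
X_reduced = selector.transform(X)                  # shape (500, 40)
top_features = selector.get_support(indices=True)  # indices of kept features
```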
In [17], the authors utilized PCA to extract features to decrease the computational cost and complexity. Given the extracted features, they implemented a classification algorithm to infer whether a semiconductor device is a defective or normal sample. To that end, they adopted a k-nearest neighbor (KNN) classification method. Cherry et al. have built another model based on multiway PCA (MPCA) to monitor stream data [18]. A decision tree algorithm was developed in [19] to explore various types of defective devices. A KNN method was utilized in [20], with Euclidean distance used to measure similarities among features. Verdier et al. improved the performance of a KNN algorithm tailored for fault detection in semiconductor manufacturing by defining a similarity measurement based on Mahalanobis distance [21]. A support vector machine (SVM) is used to detect semiconductor failures in [22]; the authors developed their approach based on an RBF kernel to address the high dimensionality issue. In [23], an incremental clustering method is adopted for fault detection. A Bayesian model has been proposed to infer a manufacturing process, with the authors considering the root causes of manufacturing problems; however, their approach relies heavily on expert knowledge of the related field. Zheng et al. have proposed a convolutional neural network [24]. They decomposed multivariate time-series datasets into univariate ones; features were then extracted, and an MLP-based method was implemented for data classification. Lee et al. have compared the performance of different fault detection models, including feature extraction algorithms and classification approaches [25]. They revealed that developing an algorithm based on features that are not suitable for a specific model can significantly deteriorate classifier performance. Therefore, it is desirable to consider both feature extraction and classification stages simultaneously to maximize a model's performance.
Most studies in the literature have focused on using PCA and KNN algorithms for manufacturing data classification. However, PCA-based approaches project features to another space based on a linear combination of the original features; therefore, they cannot be interpreted in the original feature space [26]. Moreover, most of the PCA-related work has considered linear PCA, which is not efficient in exploring non-linear patterns. Although these techniques try to cover maximum variance among manufacturing variables, inappropriate selection of parameters, e.g., the number of principal components, may result in great data loss. KNN is a memory-based classifier; hence, in cases of high-dimensional data sets, its performance degrades dramatically with data size. To overcome the mentioned concerns, an efficient global search method (e.g., evolutionary computation (EC) techniques) should be considered to better address feature selection problems [27]. These techniques are well known for their global search ability. Derrac et al. [28] have proposed a cooperative co-evolutionary algorithm for feature selection based on a GA. The proposed method addresses the feature selection task in a single process. However, it should be mentioned that EC algorithms are stochastic methods, which may produce different solutions when using different starting points; therefore, the proposed model suffers from an instability issue. Zamalloa et al. [29] have utilized a GA-based method to rank features, with features then selected given the rank orders. A potential drawback of this work is that the proposed method might lead to data loss. Moreover, this solution does not consider the correlation among features.
To address the mentioned concerns, we have proposed a solution based on a dynamic feature selection method consisting of different modes to provide information on the variables that are crucial for fault diagnosis. To that end, we have integrated an ANN into our model in order to examine nonlinear relationships among features. Advanced computing and AI can provide manufacturing with a higher degree of intelligence and low-cost sensing and improve efficiency [30]. The process of conducting intelligent manufacturing can be regarded in two ways. Firstly, the manufacturing industry has become a great contributor to the service industry, and secondly, the lines between cyber and physical systems are becoming blurred. Hence, architectural approaches like service-oriented architectures (cloud manufacturing) can be taken into account in manufacturing modes and systems. In such distributed and heterogeneous systems, manufacturing resources can be aggregated based on an efficient service-oriented manufacturing model and processed/monitored in an effective way. The application of those solutions can pave the way for large-scale analysis and lead to high productivity. Developing a successful model includes various steps, e.g., data cleansing and data transformation, to reveal insights. As the quality of data affects the analysis, it is essential to employ a data preprocessing procedure. Such a discussion is presented next.
The data set used in this work is obtained from a semiconductor factory: the semiconductor manufacturing (SECOM) dataset. It consists of various operation observations, i.e., wafer fabrication production data, including 590 features (operation measurements). The target feature is binomial (Failure and Success), referring to the production status, and encoded as 0 and 1. The first step in data analysis is data cleansing to address a variety of data quality issues, e.g., noise, outliers, inconsistency, and missing values. We have dealt with missing values and noise resulting from inexact data collection, as these can negatively affect later processing. Outlier labelling methods and the T-squared statistic (T²) have been utilized, and any observation beyond the resulting interval has been eliminated.
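A minimal sketch of such an outlier screen follows, assuming Hotelling's T² is computed on a PCA-reduced representation after missing values have been imputed; the 99% chi-square cutoff is an illustrative choice, since the paper does not specify its interval:

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

def t_squared_filter(X, n_components=10, alpha=0.01):
    """Keep rows whose Hotelling's T^2 falls below a chi-square cutoff.

    X is assumed to be a numeric matrix with missing values already imputed.
    """
    scores = PCA(n_components=n_components).fit_transform(X)
    # T^2: sum of squared, variance-normalized principal-component scores.
    t2 = np.sum((scores / scores.std(axis=0)) ** 2, axis=1)
    cutoff = stats.chi2.ppf(1 - alpha, df=n_components)
    return X[t2 <= cutoff]
```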
The observations labeled as Failure are relatively rare (104 cases) compared to the Success class. Hence, we face an imbalanced classification issue. In other words, the Success class (the majority) outnumbers the Failure class (the minority), and the two classes do not make up equal portions of our data set. Two distinctive approaches can be considered to deal with this issue: 1) skew-insensitive methods; and 2) re-sampling methods. The first category addresses the problem by assigning a cost to the training data set, while the second adjusts the original data set such that a more balanced class distribution is achieved. Re-sampling methods have become standard approaches and have been dominantly utilized recently [31]–[33]. They can be classified into different categories, e.g., sampling strategies, wrapper approaches, and ensemble-based methods. Implementing a proper method is crucial; otherwise, problems such as data loss and overfitting can arise and result in a poor outcome. Our goal in this phase is to balance the class distribution. To do so, we have utilized a synthetic minority over-sampling technique. There are various over-sampling algorithms, such as SMOTE, Borderline-SMOTE, and Safe-Level-SMOTE, to name a few. These methods create synthetic samples based on the nearest neighbour approach and can be negatively impacted by the overgeneralization issue. To overcome these problems, a density-based SMOTE [34], [35] technique is utilized in this work: an over-sampling method in which the Failure class is over-sampled by generating synthetic instances, making the distribution more balanced.
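The mechanics of this over-sampling step can be sketched with the imbalanced-learn library. The density-based SMOTE variant of [34], [35] is not available there, so standard SMOTE below is an illustrative stand-in, and the synthetic data merely mimics the SECOM imbalance ratio:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for the imbalanced SECOM labels (~7% Failure cases).
X, y = make_classification(n_samples=1500, n_features=50, weights=[0.93],
                           random_state=0)
print("before:", Counter(y))

# SMOTE interpolates synthetic minority (Failure) samples between existing
# minority instances and their nearest minority-class neighbours.
X_bal, y_bal = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print("after: ", Counter(y_bal))   # the two classes are now balanced
```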
As stated, the data set consists of nearly 600 features. Data sets with high dimensions can cause serious challenges in learning processes, such as overfitting, known as the curse of dimensionality. To address these challenges, the dimensionality needs to be reduced, and different approaches have been proposed in the literature. Generally speaking, dimensionality reduction can be considered an approach to eliminate redundant (or noisy) features. It can be divided into two categories: feature extraction and feature selection. The former refers to methods (e.g., PCA and LDA) that map original features to a new feature space of lower dimensionality, while the latter aims to select a subset of features such that the trained model (based on the selected features) minimizes redundancy and maximizes relevance to the target feature. PCA (a classic approach to dimensionality reduction), multidimensional scaling, and independent component analysis (ICA) all suffer from a global linearity issue. To address this shortcoming, nonlinear techniques have been proposed: kernel PCA, Laplacian eigenmaps, and semidefinite embedding. Since reconstructing observations (after the projection phase) in these nonlinear methods is not a trivial task, finding the corresponding pattern is sometimes impractical. In a feature extraction approach, observations are projected into another space where there is no physical meaning between the newly generated features and the original ones. Hence, feature selection methods are superior in terms of readability and interpretability in this sense. Therefore, to avoid the complexity and uncertainty that feature extraction techniques bring, a feature selection approach has been opted for in this work. To this end, we have proposed an integrated approach consisting of a metaheuristic algorithm (GA) and an artificial neural network. GA is a heuristic search method inspired by Charles Darwin's theory of natural evolution. Since selecting features can be considered a binary problem, we have developed our model based on a binary GA that treats candidate features (chromosomes in GA terminology) as bit-strings.
GA relies on a population of individuals to explore a search space. Each individual is a set of chromosomes, encoded as strings of 0 (if the corresponding feature is not selected) and 1 (if the feature is selected). GA utilizes an initial population and genetic operators, e.g., crossover and mutation, to generate a new generation by recombining a population's chromosomes. Fitter individuals are then selected according to a cost function (objective function) in a reproduction phase. GA maintains its effectiveness through two mechanisms: exploration and exploitation. The former can be considered a process of exploring a search space (by genetic search operators, e.g., the crossover operation), while the latter is the process of employing a mutation operator and modifying offspring chromosomes. A balance between these two abilities should be maintained. To that end, the beneficial aspects of existing solutions (individuals with lower costs) should be exploited. Moreover, exploring the feature space in order to find an optimal solution (optimal features) is crucial. While the crossover operation is the main search operator, the mutation operator is employed to avoid premature convergence. The level of exploration/exploitation can be controlled by the selection process, e.g., via a selection pressure parameter. Selecting an appropriate pressure measurement (β in this work) can maintain a balance between exploration and exploitation; such discussions are provided in Section IV. Parameter β is used in the parent selection stage, and candidate individuals are taken into account in generation production. This operation is repeated iteratively until the termination criteria (number of iterations or number of function evaluations (NFE)) are met. The best individual (the one with the minimum cost) is selected, and in this way the optimal features are identified. Fig. 2 displays our proposed feature selection model.
As mentioned earlier, our objective is to modify the output of each iteration (a subset of features) by searching the feature space and finding proper values for the input features such that the measured cost is minimized. As shown in Fig. 2, our proposed feature selection model consists of different phases. It starts with defining an initial population, i.e., individuals comprising m-dimensional chromosomes.
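The skeleton of such a binary GA is sketched below. The population size, operator rates, and the Boltzmann-style roulette weighting (a simplification of the paper's hybrid roulette-tournament operator, with β acting as the selection pressure) are illustrative assumptions; cost(·) is the ANN wrapper, for which a matching sketch appears with the experimental discussion below:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(cost, m, pop_size=50, p_cross=0.8, p_mut=0.02,
           beta=2.0, max_nfe=5000):
    """Binary GA: each individual is an m-bit string (1 = feature selected)."""
    pop = rng.integers(0, 2, size=(pop_size, m))
    costs = np.array([cost(ind) for ind in pop])
    nfe = pop_size
    while nfe < max_nfe:                            # terminate by NFE
        # Selection pressure beta: larger beta favours low-cost parents.
        w = np.exp(-beta * costs / costs.mean())
        parents = pop[rng.choice(pop_size, size=pop_size, p=w / w.sum())]
        # Single-point crossover on consecutive parent pairs (main search operator).
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < p_cross:
                cut = rng.integers(1, m)
                children[i, cut:] = parents[i + 1, cut:]
                children[i + 1, cut:] = parents[i, cut:]
        # Bit-flip mutation maintains diversity and avoids premature convergence.
        flips = rng.random(children.shape) < p_mut
        children = np.where(flips, 1 - children, children)
        child_costs = np.array([cost(ind) for ind in children])
        nfe += pop_size
        # Elitist replacement: keep the best pop_size of parents + children.
        merged = np.vstack([pop, children])
        merged_costs = np.concatenate([costs, child_costs])
        keep = np.argsort(merged_costs)[:pop_size]
        pop, costs = merged[keep], merged_costs[keep]
    return pop[np.argmin(costs)]                    # best feature bit-string
```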
As mentioned earlier, in this work we deal with a classification problem with a relatively large number of variables. It has been widely discussed [37] that irrelevant variables may deteriorate the performance of algorithms. The application of feature extraction/selection methods makes it possible to choose a subset of features and thus helps achieve reliable performance. Most studies in the literature have considered feature selection as a single-objective problem, while our solution is based on a multi-objective approach. In this section, different approaches, i.e., conventional feature extraction methods and the model proposed in this work, are compared. Our objective is to demonstrate that an intelligent algorithm can outperform other competing classification methods.
Different scenarios in the context of feature extraction are available to remove irrelevant features. All such solutions are considered a preprocessing task intended to increase learning accuracy. These conventional methods can be categorized into filter, wrapper, embedded, and hybrid techniques. Filter methods are divided into univariate and multivariate layers; the relevance of features is evaluated based on ranking techniques. Wrapper methods, e.g., sequential selection and heuristic search algorithms, are essentially search algorithms in which relevant features are selected by training and testing a classification model. Embedded methods are performed based on dependencies among features. Finally, hybrid methods are based on a combination of other approaches and consist of different phases. These methods have some serious drawbacks that can make their results unrealistic. Filter methods do not consider feature dependencies or the relationship between independent and dependent features. There is a high risk of overfitting in the wrapper approach. Embedded methods are more of a local discrimination approach than a global one, and hybrid methods are computationally expensive. Next, we compare the results of the models implemented in this work.
The experiments were conducted on a computer with a quad-core Intel i9-7900X 8 GHz processor and 32 GB of memory, equipped with an NVIDIA GeForce GTX 1080 GPU with 8 GB of memory. The parallel algorithm was implemented in CUDA.
The proposed algorithm for feature selection is based on an adaptive and dynamic GA combined with a neural network. Our meta-heuristic method evaluates various subsets of features to optimize our defined cost function, whose calculation is delegated to a multilayer perceptron. We consider the volume of our data and the number of features and samples when defining the initial population size. We choose the number of neurons by trial and error. It should be mentioned that we use the neural network as a cost function, and in this context the main objective is to decrease the cost function's values. The algorithm takes the initial solutions (manufacturing operations) and obtains the optimal features after a series of iterative computations (given the termination criteria, e.g., the number of function evaluations). Fig. 4 displays the cost values in each iteration.
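The following sketch shows how an MLP-based cost could be wired into the GA skeleton above. The hidden-layer size, the cross-validation scheme, and the weight alpha of the weighted-sum decomposition are illustrative choices, since the paper only states that the number of neurons was tuned by trial and error:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def make_cost(X, y, alpha=0.9):
    """Weighted-sum cost: alpha * (MLP cross-validated error on the selected
    features) + (1 - alpha) * (fraction of features kept)."""
    m = X.shape[1]
    def cost(bits):
        idx = np.flatnonzero(bits)
        if idx.size == 0:                       # an empty subset is invalid
            return np.inf
        mlp = MLPClassifier(hidden_layer_sizes=(30,), max_iter=300)
        acc = cross_val_score(mlp, X[:, idx], y, cv=3).mean()
        return alpha * (1.0 - acc) + (1 - alpha) * idx.size / m
    return cost

# Usage with the GA sketch from Section IV (X_bal, y_bal from preprocessing):
# best_bits = evolve(make_cost(X_bal, y_bal), m=X_bal.shape[1])
```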
Fig. 4. Cost function value versus NFE.
Fig. 5. ROC curves for different classification methods: (a) linear discriminant; (b) random forest; (c) logistic regression; (d) Gaussian SVM; (e) k-NN; and (f) SVM with RBF kernel.
Finally, we have examined various classification techniques, and the most appropriate one is selected. To do so, different classification models, e.g., Gaussian support vector machine, random forest, linear discriminant, k-NN, and SVM with an RBF kernel, have been tested. The classifiers' performance is evaluated according to their classification accuracies: the ability of each method to predict the correct class is measured and expressed as a percentage. ROC curves are used to determine the predictive performance of the examined classification algorithms. The area under a ROC curve (AUC) can be considered an evaluation criterion for selecting the best classification algorithm; the closer the AUC is to 1, the more correctly the classification has been carried out. Fig. 5 shows the ROC curves resulting from implementing the different classification methods.
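This comparison can be reproduced in outline as follows; the hyperparameters are scikit-learn defaults rather than the paper's tuned settings, and X_sel denotes the data restricted to the GA-selected features:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

# X_sel, y: data restricted to the GA-selected features (see Section IV).
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3,
                                          stratify=y, random_state=0)
models = {
    "linear discriminant": LinearDiscriminantAnalysis(),
    "random forest":       RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-NN":                KNeighborsClassifier(),
    "SVM (RBF kernel)":    SVC(kernel="rbf", probability=True),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name:20s} AUC = {roc_auc_score(y_te, prob):.3f}")
```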
Some statistical results (e.g., the percentage of correct predictions) are also provided in Table I. Given the demonstrated results, the Gaussian SVM has been selected as the classification model. Some other feature selection methods are also utilized to compare their results with our proposed approach; the corresponding discussion is presented next.
TABLE I Comparison of Different Machine Learning Classification Models
As discussed in the previous sections, most studies on manufacturing data analysis have considered PCA-based approaches, which aim to detect the directions of greatest variation. Together with PCA, we have tested the most popular algorithms for feature extraction, e.g., family-wise error rate (FWE), false discovery rate (FDR), sequential forward selection (SFS), sequential backward selection (SBS), filtration feature selection (FFS), correlation-based feature selection (CFS), Lasso regression, and ensemble methods [38]. We have implemented these traditional methods to reduce the dimensionality of our data set and compared the results. To do so, the extracted features are used as the input for the chosen classifier. Fig. 6 displays the analysis based on the Lasso regression method.
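The Lasso baseline, for instance, can be sketched as follows; standardization and the cross-validated regularization path are standard practice rather than details taken from the paper:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# The L1 penalty shrinks coefficients of irrelevant sensor measurements to
# exactly zero; the surviving coefficients define the selected feature subset.
X_std = StandardScaler().fit_transform(X)      # scaling matters for L1 penalties
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{selected.size} of {X.shape[1]} features retained by the Lasso")
```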
The experimental results (Table II) show that our proposed model is superior to those conventional ones. The corresponding accuracy rate of the proposed model is over 90%. An ROC comparison between our method and two of the traditional techniques is shown in Fig. 7.
The goal of manufacturing enterprises is to develop cost-effective and competitive products. Manufacturing intelligence can significantly improve effectiveness by bridging business and manufacturing models with the help of low-cost sensor data. It aims to achieve a high level of intelligence through the latest appropriate technology-based computing, advanced analytics, and new levels of Internet connectivity. The landscape of Industry 4.0 includes achieving visibility into real-time processes, mutual recognition, and establishing an effective relationship among the workforce, equipment, and products. Most work in the area of manufacturing data analysis is based on PCA-based approaches, which are unable to recognize nonlinear relationships among features or extract complex patterns. To address this concern, we have proposed a dynamic feature selection method based on GA and ANN. We have compared the results achieved in this work with traditional approaches to demonstrate our proposed solution's effectiveness. As part of our future work, we plan to consider other MOEAs, e.g., dominance-based algorithms [39]–[46], to solve our optimization problem so that both feature selection objective functions are optimized simultaneously. Moreover, we plan to compare the current model with other evolutionary algorithms proposed for feature selection [47]–[51].
GA has different parameters, and the performance of a GA-based model depends on them. We have discussed how they have been selected throughout this work. Table III reveals the impact of different parameter settings.
Fig. 6. Selecting features based on a conventional method, i.e., Lasso regression. The panels show the Lasso coefficient estimates and the curve of the measurements for the degrees of freedom of the Lasso.
TABLE II Algorithm Comparisons
Fig. 7. Comparing ROC results from (a) the proposed method vs. (b) PCA vs. (c) Lasso regression.
TABLE III Comparing the Results of Our Hybrid Model Given Different Parameter Settings