

        Variable Selection and Parameter Estimation with M-Estimation Method for Ultra-high Dimensional Linear Model

        2021-10-20

        應用數學 (Applied Mathematics), 2021, Issue 4

        ZHU Yanling(朱艷玲),WANG Kai(汪凱),ZHAO Mingtao(趙明濤)

        (School of Statistics and Applied Mathematics, Anhui University of Finance and Economics, Bengbu 233000, China)

        Abstract: In this paper, we consider variable selection and parameter estimation for the linear regression model in the ultra-high dimensional case. By unifying least squares, least absolute deviation, quantile regression and Huber regression into a general framework, the proposed penalized likelihood M-estimator is shown to enjoy good large-sample properties. In the numerical simulations, variable selection and parameter estimation perform best when forward regression screening is combined with the local linear approximation (LLA) method. In the ultra-high dimensional case, the proposed general method is robust and effective for both variable selection and parameter estimation.

        Key words: Ultra-high dimensionality; M-estimation; Penalized likelihood; Variable selection

        1.Introduction

        Consider the classical linear regression model Y = Xβ + ε, where Y = (y1, y2, ..., yn)T is the response vector, X = (X1, X2, ..., Xpn) = (x1, x2, ..., xn)T = (xij)n×pn is an n×pn design matrix, and ε = (ε1, ε2, ..., εn)T is a random vector. When the dimension pn is high, it is often assumed that only a small number of predictors contribute to the response, which ideally amounts to assuming that the parameter vector β is sparse. To exploit sparsity, variable selection can improve estimation accuracy by effectively identifying the subset of important predictors, and it also enhances the interpretability of the model.
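        As a concrete illustration (not part of the original derivation), the four loss functions ρ that the paper's M-estimation framework unifies can be sketched as follows; the function names and the Huber tuning constant delta are illustrative choices:

```python
import numpy as np

def rho_ls(r):
    """Least-squares loss: rho(r) = r^2 / 2."""
    return 0.5 * r**2

def rho_lad(r):
    """Least-absolute-deviation loss: rho(r) = |r|."""
    return np.abs(r)

def rho_quantile(r, tau=0.5):
    """Quantile (check) loss: rho(r) = r * (tau - 1{r < 0})."""
    return r * (tau - (r < 0))

def rho_huber(r, delta=1.345):
    """Huber loss: quadratic near zero, linear in the tails."""
    r = np.abs(r)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta))

# The M-estimation objective averages the chosen loss over the residuals.
def m_objective(rho, y, X, beta):
    return np.mean(rho(y - X @ beta))
```

        With tau = 0.5 the check loss reduces to half the LAD loss, and as delta grows the Huber loss approaches the least-squares loss, which is why all four fit into a single convex framework.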

        In this paper, we assume that the loss function ρ is convex; hence the objective function is convex and any local minimizer obtained is a global minimizer. In Section 2, we discuss some theoretical properties of the LLA estimator. In Section 3, we give a simple and efficient algorithm and report numerical simulation results. The proofs are given in Section 4.
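        As a sketch of the LLA idea (assuming, as in the standard local linear approximation of Zou and Li, a SCAD-type penalty; a = 3.7 is the conventional choice), the penalty p_λ(|β_j|) is replaced at the current estimate by the linear majorant p'_λ(|β̃_j|)|β_j|, so each LLA step solves a weighted L1 problem:

```python
import numpy as np

def scad_derivative(t, lam, a=3.7):
    """Derivative p'_lambda(t) of the SCAD penalty, for t >= 0."""
    t = np.abs(t)
    return np.where(t <= lam,
                    lam,
                    np.maximum(a * lam - t, 0.0) / (a - 1.0))

def lla_weights(beta_current, lam, a=3.7):
    """One LLA step: per-coordinate L1 weights p'_lambda(|beta_j|).

    Coordinates already estimated as large get a small (or zero)
    weight, so they are barely penalized; coordinates near zero
    keep the full lasso weight lam.
    """
    return scad_derivative(beta_current, lam, a)
```

        Plugging these weights into any weighted-lasso solver for the convex loss ρ gives the next iterate; with a zero initial estimate the first LLA step reduces to the ordinary lasso.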

        2.Main Results

        3.Simulation Results

        In this section we evaluate the performance of the M-estimator proposed in (1.1) by simulation studies.

        The RSIS+LLA method tends to make a more robust selection in the first step and performs well on the estimation-error and prediction-error indicators, but it loses part of its ability to identify important variables, so important variables are easily omitted, which causes large errors. In fact, robustness of both selection and estimation can be guaranteed by using the LAD loss function in the second-step variable selection.
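        The robustness of the LAD loss can be illustrated with an intercept-only model (an illustrative example, not from the paper): the least-squares solution is the sample mean while the LAD solution is the sample median, and a single gross outlier moves only the former.

```python
import numpy as np

# Five clean observations around 1 plus one gross outlier.
y = np.array([0.9, 0.95, 1.0, 1.05, 1.1, 100.0])

# Intercept-only model: the LS minimizer is the mean,
# the LAD minimizer is the median.
beta_ls = y.mean()       # pulled far from 1 by the outlier
beta_lad = np.median(y)  # stays near 1
```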

        The HOLP+LLA method performs almost as well as FR+LLA on three indicators, namely prediction error, correct exclusion of unimportant variables, and misidentification of important variables, but it is slightly worse on the first indicator, estimation error.
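        A minimal sketch of the HOLP screening step (assuming HOLP here denotes Wang and Leng's high-dimensional ordinary least-squares projection; the dimensions and signal strengths below are illustrative): for p > n one ranks the entries of the minimum-norm interpolator X^T (X X^T)^{-1} y and keeps the top-d coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 120
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 10.0                      # true support {0, 1, 2}
y = X @ beta + 0.01 * rng.standard_normal(n)

# HOLP scores: the minimum-norm interpolator X^T (X X^T)^{-1} y.
scores = X.T @ np.linalg.solve(X @ X.T, y)

# Keep the d coordinates with the largest absolute scores.
d = 10
selected = np.argsort(-np.abs(scores))[:d]
```

        Solving the n×n system involving X X^T costs O(n²p + n³), which is what keeps the screening step feasible when p is much larger than n.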

        Tab.1 Numerical Results for ε ~ N(0,1)

        Example 2 We consider the case of the LAD loss function and the error term ε ~ t5. The estimation results are shown in Tab.2.

        Tab.2 Numerical Results for ε ~ t5

        When the error term follows a heavy-tailed distribution, all six methods in Tab.2 perform better than under the standard normal error on the first indicator (estimation error) and the third indicator (correct exclusion of unimportant variables). The second indicator, prediction error, is slightly worse, and the fourth indicator is essentially unchanged. The overall conclusion is consistent with Example 1, i.e. the FR+LLA method performs slightly better.

        Example 3 We consider the case of the LAD loss function and the error term ε ~ 0.9N(0,1) + 0.1N(0,9). The estimation results are shown in Tab.3.

        Tab.3 Numerical Results for ε ~ 0.9N(0,1) + 0.1N(0,9)

        Synthesizing the simulation results of Examples 1 to 3, we see that when the number of explanatory variables exceeds the sample size, the proposed scheme, forward regression (FR) screening in the first step followed by the proposed local linear approximation (LLA) penalized method in the second step, performs quite well on all four indicators. This also shows that for ultra-high dimensional data models, the FR+LLA screening method we provide is feasible and effective, and can be applied to a wider range of data to obtain satisfactory results.
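        The first stage of the two-step scheme, forward regression screening, can be sketched as a greedy search that at each step adds the candidate predictor giving the largest drop in residual sum of squares (a minimal illustration on synthetic data; the sizes and signal strengths are illustrative):

```python
import numpy as np

def forward_regression(X, y, k):
    """Greedy FR screening: grow the active set one variable at a
    time, each time adding the predictor that minimizes the OLS
    residual sum of squares on the enlarged set."""
    n, p = X.shape
    active = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in active:
                continue
            Xs = X[:, active + [j]]
            coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ coef) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        active.append(best_j)
    return active

rng = np.random.default_rng(1)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 5.0                        # true support {0, 1, 2}
y = X @ beta + 0.01 * rng.standard_normal(n)

selected = forward_regression(X, y, k=3)
```

        In the full pipeline the screened set would then be passed to the second-step LLA-penalized M-estimation for final selection and estimation.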

        4.Proofs of Main Results
