

A two-dimensional robust neural network with random weights and its applications


CHEN Jiaying, CAO Feilong

(College of Sciences, China Jiliang University, Hangzhou 310018, China)


Abstract: The main feature of the two-dimensional neural network with random weights is that it takes matrix data directly as input, which preserves the structural information of the matrix data itself and thereby improves the recognition rate. However, the two-dimensional neural network with random weights often performs poorly on face recognition problems whose images contain outliers. To address this problem, a new face recognition method, the two-dimensional robust neural network with random weights, is proposed, and the expectation-maximization algorithm is used to solve for the network parameters. Experimental results show that the proposed method handles face recognition with outliers well.

Key words: artificial neural networks; two-dimensional neural networks with random weights; face recognition; expectation-maximization algorithm

Face recognition is a research focus in pattern recognition, image processing, machine vision, neural networks and related fields, and has been widely applied to identity recognition, document verification, bank and customs surveillance, access-control systems, video conferencing, machine intelligence, medicine and other areas. Traditional face recognition generally consists of four steps: face detection, image preprocessing, feature extraction, and classifier construction and classification. Among these, feature extraction and classification are the key problems in face recognition research.

According to the source of the face samples, face recognition can be divided into two classes: recognition based on static face images and recognition based on dynamic face information. All methods discussed in this paper are based on static face images. Existing face recognition methods mainly include methods based on geometric features [1-2], methods based on statistical features [3-10], model-based methods [11-14], and methods based on neural networks. Carpenter first applied neural networks to pattern recognition [15], and Cottrell et al. [16] used a BP neural network for face recognition. Subsequently, LIN, KUNG et al. [17] combined neural networks with statistical methods to propose the probabilistic decision-based neural network (PDBNN) model and applied it to face recognition; [18] and [19] proposed recognition methods based on radial basis function (RBF) neural networks, and [20] and [21] proposed methods based on support vector machines (SVM). All of these methods, however, take vector inputs. In 2014, LU et al. [22] took the structural information of the image itself into account, generalized the vector-input one-dimensional network with random weights to the two-dimensional case, proposed the two-dimensional neural network with random weights (2DNNRW), and applied it successfully to face recognition. In 2015, CAO et al. [23] proposed a probabilistic robust random-weight model (probabilistic neural networks with random weights, PRNNRW) that handles noisy data well. In practical applications, face images often contain outliers, so a method that exploits structural information while also handling face recognition with outliers is clearly worth studying; that is the aim of this paper.

The paper is organized as follows. Section 1 briefly introduces the one-dimensional neural network with random weights. Section 2 presents the two-dimensional robust neural network with random weights and the method for solving its output weights. Section 3 reports experiments on four different face databases, whose results demonstrate the effectiveness of the proposed method. Section 4 concludes the paper.

1 Neural networks with random weights

Generally, a single-hidden-layer feedforward neural network with L hidden nodes can be expressed as

f_L(x) = ∑_{j=1}^{L} β_j g(w_j·x + b_j),

where x is the input vector, w_j and b_j are the input weight vector and bias of the j-th hidden node, g is the activation function, and β_j is the output weight of the j-th hidden node.

In supervised learning, the hidden-layer parameters and output weights of a neural network are trained from the training samples. A commonly used learning method is the error back-propagation (BP) algorithm, which adjusts the weights and thresholds by gradient descent. However, BP converges slowly and often returns a local minimum. Reference [24] first proposed a fast learning algorithm for single-hidden-layer feedforward networks, namely the neural network with random weights (NNRW). Later, PAO et al. [25-27] proposed a similar method, the random vector functional-link network, and proved its universal approximation capability. Recently, further randomized learning algorithms have been proposed in [28-30]. The main idea of NNRW is as follows: given a sample set, the input weights and biases are random variables drawn from some distribution, and the output weights are then computed by the least-mean-square-error algorithm.
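The idea of NNRW can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' code: the network size, the sigmoid activation and the toy two-cluster data are assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nnrw_fit(X, Y, L=50):
    """NNRW: random input weights and biases, least-squares output weights.
    X: (N, d) inputs, Y: (N, l) one-hot targets, L: number of hidden nodes."""
    d = X.shape[1]
    W = rng.standard_normal((d, L))   # random input weights (never trained)
    b = rng.standard_normal(L)        # random biases
    G = sigmoid(X @ W + b)            # (N, L) hidden-layer output matrix
    beta = np.linalg.pinv(G) @ Y      # least-mean-square output weights
    return W, b, beta

def nnrw_predict(X, W, b, beta):
    return sigmoid(X @ W + b) @ beta

# toy usage: two well-separated Gaussian clusters
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
Y = np.vstack([np.tile([1.0, 0.0], (20, 1)), np.tile([0.0, 1.0], (20, 1))])
W, b, beta = nnrw_fit(X, Y, L=20)
acc = np.mean(nnrw_predict(X, W, b, beta).argmax(1) == Y.argmax(1))
```

Only `beta` is learned; the hidden layer stays random, which is what makes training a single linear solve.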

Although NNRW is fast, for face recognition the samples must first be converted into column vectors, which destroys the correlation between neighboring pixels and degrades recognition. Moreover, NNRW does not handle data containing outliers well. To address these problems, this paper proposes a two-dimensional robust neural network with random weights (2DRNNRW).

2 Two-dimensional robust neural networks with random weights

When NNRW is used for face recognition, the face images or image features must first be converted into column vectors, which destroys the correlations among the elements of the original images or features and thus degrades classification. Meanwhile, 2DNNRW [22] does not perform well on face recognition with outliers. To obtain a model that is robust to outliers and recognizes well, this paper proposes a two-dimensional robust neural network with random weights (2DRNNRW), solves for the output weights with the expectation-maximization (EM) algorithm [31], and applies the model successfully to face recognition.

According to [22], 2DNNRW can be expressed as

f(X) = ∑_{j=1}^{L} β_j g(u_j^T X v_j + b_j),  (5)

where X ∈ R^{m×n} is the matrix input, u_j ∈ R^m and v_j ∈ R^n are the randomly generated left and right projection vectors of the j-th hidden node, b_j is its bias, g is the activation function, and β_j is the output weight.

Given a face-image training sample set

{(X_i, y_i) | X_i ∈ R^{m×n}, y_i ∈ R^l, i = 1, 2, …, N},

where N is the number of training samples and l is the number of classes, from (5) we obtain the linear system

Gβ = Y,  (6)

where β = [β_1, β_2, …, β_L]^T, Y = [y_1, y_2, …, y_N]^T, and G ∈ R^{N×L} is the hidden-layer output matrix with entries

G_ij = g(u_j^T X_i v_j + b_j), i = 1, 2, …, N, j = 1, 2, …, L.  (7)

As in the one-dimensional case, the output weight β of 2DNNRW can be obtained by solving the least-mean-square-error optimization problem

min_β ‖Gβ − Y‖_F^2,  (8)

where ‖A‖_F = (∑_i ∑_j a_ij^2)^{1/2} denotes the Frobenius norm of A ∈ R^{m×n} and a_ij is the entry in row i and column j of A.
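The matrix-input construction of (5) and (7) can be sketched as follows. This is an illustration under assumed shapes and a sigmoid activation, not the authors' code: each hidden feature is g(u_j^T X v_j + b_j), so every image X_i stays a matrix instead of being flattened.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_matrix_2d(X_list, U, V, b):
    """Hidden-layer output matrix G of a 2DNNRW, eq. (7):
    G[i, j] = g(u_j^T X_i v_j + b_j), keeping each X_i as a matrix."""
    N, L = len(X_list), U.shape[1]
    G = np.empty((N, L))
    for i, X in enumerate(X_list):
        for j in range(L):
            G[i, j] = sigmoid(U[:, j] @ X @ V[:, j] + b[j])
    return G

m, n, L, N = 8, 6, 10, 5               # illustrative image size, nodes, samples
U = rng.standard_normal((m, L))        # random left projection vectors u_j
V = rng.standard_normal((n, L))        # random right projection vectors v_j
b = rng.standard_normal(L)
X_list = [rng.standard_normal((m, n)) for _ in range(N)]
G = hidden_matrix_2d(X_list, U, V, b)
```

Note the parameter count per node is m + n + 1 instead of the m·n + 1 a flattened input would require.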

According to [32], the smaller the norm of the output weights and the training error, the better the generalization performance of the network. Based on this, [29] proposed the regularized model

min_β ‖Gβ − Y‖_F^2 + μ‖β‖_F^2,  (9)

where μ > 0 is a constant that balances the training-error term against the penalty term. Owing to the F-norm penalty, model (9) enjoys better generalization and stability. However, when outliers are present, the F-norm loss on the error in models (8) and (9) lacks robustness.

In practice, sample data usually contain outliers and noise. Taking their influence into account, (6) becomes

Y = Gβ + e,  (10)

where G is given by (7), Y = [y_1, y_2, …, y_N]^T is the class-label matrix, and e is the error matrix.

To suppress the influence of outliers, the error matrix is required to be sparse, which leads to the model

min_{β,e} ‖β‖_F^2 + τ‖e‖_0 s.t. Y = Gβ + e,  (11)

where τ > 0 is the regularization parameter and ‖e‖_0 counts the nonzero entries of e.

Since solving model (11) is NP-hard [33], and [34] shows that minimizing the l1 norm yields sparse solutions, the l0 norm in (11) is replaced by the l1 norm, giving a new robust random-weight model:

min_{β,e} ‖β‖_F^2 + τ‖e‖_1 s.t. Y = Gβ + e.  (12)

        假設(shè)輸出權(quán)β滿足Gaussian分布,誤差e滿足Laplace分布,根據(jù)文獻(xiàn)[23]可以將求解模型(12)的問(wèn)題等價(jià)轉(zhuǎn)化為一個(gè)最大后驗(yàn)概率估計(jì)問(wèn)題,即

        Y=Gβ+e

        (13)

        eik|λ~L(eik|0,λ),

        i=1,2…N,j=1,2…L,k=1,2…L.

        (14)

The Laplace probability density function of e is

p(e_ik | 0, λ) = (λ/2) exp(−λ|e_ik|).  (15)

To solve model (13), we use the following property of the Laplace distribution [35]: a Laplace density is a scale mixture of Gaussian densities with an exponential mixing density, i.e.

L(e | 0, λ) = ∫_0^∞ N(e | 0, w) (λ^2/2) exp(−λ^2 w/2) dw.  (16)

To ease the solution, a latent variable W ∈ R^{N×l}, related to the target label matrix Y and following an exponential prior distribution, is introduced. By the property (16) of the Laplace distribution [35], the Laplace density of each e_ik can be written as

p(e_ik | λ) = ∫_0^∞ N(e_ik | 0, w_ik) p(w_ik | λ) dw_ik, p(w_ik | λ) = (λ^2/2) exp(−λ^2 w_ik/2),  (17)

i = 1, 2, …, N, k = 1, 2, …, l.
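The scale-mixture property (16) can be checked numerically. The sketch below (illustrative, with an arbitrary λ and evaluation point chosen here) integrates the Gaussian density against the exponential mixing density Exp(λ²/2) on a grid and compares the result with the Laplace density (λ/2)exp(−λ|e|).

```python
import numpy as np

lam, e = 1.5, 0.7                          # arbitrary rate parameter and point
w = np.linspace(1e-6, 60.0, 200001)        # grid over the latent scale variable

gauss = np.exp(-e**2 / (2.0 * w)) / np.sqrt(2.0 * np.pi * w)   # N(e | 0, w)
expo = (lam**2 / 2.0) * np.exp(-(lam**2 / 2.0) * w)            # Exp(w | lam^2/2)

f = gauss * expo
mixture = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))   # trapezoid rule

laplace = (lam / 2.0) * np.exp(-lam * abs(e))                  # Laplace density
```

The two values agree to within the quadrature error, which is what licenses replacing each e_ik by a Gaussian with a latent variance w_ik.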

Therefore, model (13) can be transformed equivalently into maximizing the posterior

max_β p(β | Y, W).  (18)

Model (18) is solved with the EM algorithm [31], which finds the maximum a posteriori estimate of the model parameters by alternating an E-step and an M-step, with the latent variable W treated as missing data. Denote the estimate of β at the c-th iteration by β_c.

In the E-step, the expectation of the log posterior over the latent variable W is computed over all the data:

Q(β, β_c) = E_W(log p(β | Y, W) | Y, β_c).  (19)

By Bayes' rule, the log posterior involving the latent variable W can be written as

log p(β | Y, W) = −(1/2) ∑_{i=1}^{N} ∑_{k=1}^{l} (y_ik − G_i β_k)^2 / w_ik − (λ_β/2) ‖β‖_F^2 + const,  (20)

where G_i denotes the i-th row of G and β_k the k-th column of β.

According to Proposition 18 in [36], the conditional expectation of 1/w_ik given the current estimate β_c is

E(1/w_ik | Y, β_c) = λ / |y_ik − G_i β_k^c|,  (21)

and substituting (21) into (19) yields

Q(β, β_c) = −(1/2) ∑_{i=1}^{N} ∑_{k=1}^{l} E(1/w_ik | Y, β_c) (y_ik − G_i β_k)^2 − (λ_β/2) ‖β‖_F^2 + const.  (22)

Next, in the M-step, β is obtained by maximizing (19) with respect to β: setting the partial derivative to zero gives the closed-form solution

β_k = (G^T Λ_k^c G + λ_β I)^{−1} G^T Λ_k^c y_k, k = 1, 2, …, l,  (23)

where Λ_k^c = diag(E(1/w_1k | Y, β_c), …, E(1/w_Nk | Y, β_c)) and y_k is the k-th column of Y.

The complete algorithm is summarized as follows.

Algorithm: two-dimensional robust neural network with random weights (2DRNNRW)

Input: sample set {(X_i, y_i) | X_i ∈ R^{m×n}, y_i ∈ R^l, i = 1, 2, …, N}, number of hidden nodes L, activation function g, regularization parameters λ > 0 and λ_β > 0, stopping tolerance ε, and maximum number of iterations max_iter.

Step 1: randomly generate the left projection vectors u_j ∈ R^m, right projection vectors v_j ∈ R^n and biases b_j, j = 1, 2, …, L;

Step 2: compute the hidden-layer output matrix G by (7);

Step 3: randomly initialize the output weight β_0;

Step 4: solve for the output weight β_c with the EM algorithm, alternating the E-step (21) and the M-step (23), until ‖β_c − β_{c−1}‖_F < ε or c = max_iter.

Output: β.
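The training procedure can be sketched end-to-end as follows. This is a minimal illustration of the E-step (21) / M-step (23) alternation under assumed shapes and toy data, not the authors' implementation; in particular, the small constant added to the absolute residuals (to avoid division by zero) and all sizes are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_2drnnrw(X_list, Y, L=30, lam=0.01, lam_beta=0.1,
                eps=1e-4, max_iter=50):
    """EM solver for the output weights of a 2DRNNRW (sketch)."""
    m, n = X_list[0].shape
    N, l = Y.shape
    # random projections and hidden-layer output matrix G, eq. (7)
    U = rng.standard_normal((m, L))
    V = rng.standard_normal((n, L))
    b = rng.standard_normal(L)
    G = np.empty((N, L))
    for i, X in enumerate(X_list):
        G[i] = sigmoid(np.einsum('mj,mn,nj->j', U, X, V) + b)
    # initialize output weights
    beta = rng.standard_normal((L, l)) * 0.01
    # EM iterations
    for _ in range(max_iter):
        beta_old = beta.copy()
        R = Y - G @ beta                      # residuals
        S = lam / (np.abs(R) + 1e-8)          # E-step: E[1/w_ik], eq. (21)
        beta = np.empty_like(beta_old)
        for k in range(l):                    # M-step: weighted ridge, eq. (23)
            Lam = S[:, k]                     # diagonal of Lambda_k
            A = G.T @ (Lam[:, None] * G) + lam_beta * np.eye(L)
            beta[:, k] = np.linalg.solve(A, G.T @ (Lam * Y[:, k]))
        if np.linalg.norm(beta - beta_old) < eps:
            break
    return U, V, b, beta

# toy usage: two classes of random 6x5 "images" with different means
X_list = [rng.normal(0, 1, (6, 5)) for _ in range(30)] + \
         [rng.normal(3, 1, (6, 5)) for _ in range(30)]
Y = np.vstack([np.tile([1.0, 0.0], (30, 1)), np.tile([0.0, 1.0], (30, 1))])
U, V, b, beta = fit_2drnnrw(X_list, Y, L=40)
```

Down-weighting entries with large residuals (small E[1/w_ik]) is exactly where the robustness to outliers enters.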

3 Experimental results and analysis

All experiments were run in MATLAB 7.11.0 on a computer with an Intel(R) Core(TM) i3-4150 CPU @ 3.50 GHz and 4.00 GB of memory. Data from four face databases were used to verify the effectiveness of the proposed method. Table 1 lists the relevant information about the databases, and Figure 1 shows some sample images. All reported results are averages over 30 runs.

The parameters are set to λ = 0.01 and λ_β = 0.1.

Table 1 Datasets used in the experiments

2DNNRW, PRNNRW and the proposed 2DRNNRW were used to classify the data from the databases in Table 1; the classification accuracies are shown in Table 2. For the datasets without outliers, the accuracies of 2DNNRW and 2DRNNRW are higher than that of PRNNRW, which indicates that classification algorithms taking matrices as input are more advantageous in face recognition.

Then 10% outliers were artificially added to the face databases listed in Table 1, and 2DNNRW, PRNNRW and the proposed 2DRNNRW were used to classify the corrupted data; the results are recorded in Table 3. Table 3 shows that the recognition accuracies of the two robust methods, PRNNRW and 2DRNNRW, are much higher than that of 2DNNRW, so both are clearly effective on data containing outliers. However, since PRNNRW and 2DRNNRW are solved iteratively, they are more time-consuming than 2DNNRW. Tables 3 and 4 show that, compared with PRNNRW, 2DRNNRW takes less time and achieves higher accuracy. Hence 2DRNNRW has a clear advantage for face recognition with outliers.
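Corruption of this kind can be mimicked as follows. The paper does not specify how the 10% outliers were generated, so the salt-and-pepper-style pixel corruption below is purely an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def add_outliers(X, fraction=0.10):
    """Replace a given fraction of pixels with extreme values (outliers).
    Assumed scheme: set the chosen pixels to 0.0 or 1.0 at random."""
    Xc = X.copy()
    idx = rng.choice(Xc.size, size=int(fraction * Xc.size), replace=False)
    flat = Xc.reshape(-1)                      # view into the copy
    flat[idx] = rng.choice([0.0, 1.0], size=idx.size)
    return Xc

img = rng.normal(0.5, 0.1, (32, 32))           # stand-in for a face image
corrupted = add_outliers(img, 0.10)
changed = float(np.mean(img != corrupted))     # fraction of altered pixels
```

Such corruptions violate the Gaussian-noise assumption behind the F-norm loss, which is why the Laplace error model above helps.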

Table 2 Comparison of average recognition rates on the datasets without outliers

Dataset     PRNNRW   2DNNRW   2DRNNRW
YALE        0.8955   0.9675   0.9630
PIE         0.6497   0.7473   0.7601
OLIVETTI    0.8583   0.9237   0.9210
FERET       0.8025   0.8468   0.8435

Figure 1 Sample images from each of the face databases (YALE, PIE, OLIVETTI, FERET)

On the YALE and OLIVETTI databases, the number of hidden nodes was set to 200, 400, 600, 800, 1000, 1200 and 1400, and PRNNRW, 2DNNRW and 2DRNNRW were used to classify the faces. Figures 2 and 3 and Tables 5 and 6 show that all three models stabilize as the number of nodes grows, but the recognition accuracy of the proposed 2DRNNRW is clearly higher than that of the other two methods.

Table 3 Comparison of average recognition rates on the datasets with outliers

Dataset     PRNNRW   2DNNRW   2DRNNRW
YALE        0.8447   0.0862   0.9175
PIE         0.6327   0.0202   0.7196
OLIVETTI    0.8288   0.0278   0.8855
FERET       0.7819   0.0154   0.8210

Table 4 Comparison of average testing time on the datasets with outliers

Dataset     PRNNRW    2DNNRW   2DRNNRW
YALE        15.8487   0.2128   9.0055
PIE         78.8913   0.7901   50.8900
OLIVETTI    39.7345   0.7043   18.8375
FERET       64.2475   0.4566   34.4675

4 Conclusion

Since traditional face recognition classifiers take vector data as input, face images or matrix-form features must first be converted into vectors, which inevitably destroys the correlations among the elements of the images or features and degrades classification. Although some two-dimensional classifiers have appeared in recent years, they do not perform well on face images containing outliers. To address these problems, this paper proposed a two-dimensional robust neural network with random weights. Left and right projection vectors replace the high-dimensional input weights of the single-hidden-layer feedforward network, preserving the structural information of the matrix input; meanwhile, the output weights are solved with a mixed regularization model combining an l1 penalty and an F-norm regularization term. Under the assumption that the model parameters and the outliers follow certain distributions, the model is solved with the EM algorithm. Experimental results show that the proposed method handles face recognition with outliers well.

Figure 2 Recognition rates of PRNNRW, 2DNNRW and 2DRNNRW with different numbers of hidden nodes on the YALE database

Table 5 Comparison of average recognition rates under different numbers of hidden nodes on the YALE dataset with outliers

Algorithm   200      400      600      800      1000     1200     1400
PRNNRW      0.7118   0.8134   0.8390   0.8435   0.8610   0.8516   0.8610
2DNNRW      0.1045   0.0748   0.0809   0.0752   0.0911   0.0805   0.0829
2DRNNRW     0.8028   0.8809   0.9024   0.9159   0.9285   0.9280   0.9191

Table 6 Comparison of average recognition rates under different numbers of hidden nodes on the OLIVETTI dataset with outliers

Algorithm   200      400      600      800      1000     1200     1400
PRNNRW      0.1757   0.5398   0.7027   0.7955   0.8350   0.8607   0.8682
2DNNRW      0.0307   0.0393   0.0267   0.0257   0.0270   0.0258   0.0259
2DRNNRW     0.2592   0.7417   0.8468   0.8773   0.8940   0.8990   0.9075

Figure 3 Recognition rates of PRNNRW, 2DNNRW and 2DRNNRW with different numbers of hidden nodes on the OLIVETTI database

References

        [1]LAM K M, YAN Hong. Locating and extracting the eye in human face images [J]. Pattern Recognition,1996,29(5):771-779.

        [2]DENG J Y, LAI Feipei. Region-based template deformation and masking for eye-feature extraction and description [J]. Pattern Recognition,1997,30(3):403-419.

        [3]TURK M, PENTLAND A P. Eigenfaces for recognition [J]. Journal of Cognitive Neuroscience,1991,3(1):71-86.

        [4]SHERMINA J. Illumination invariant face recognition using discrete cosine transform and principal component analysis[C]// 2011 International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT). Tamil Nadu: IEEE,2011:826-830.

        [5]LU Jiwen, TAN Yappen, WANG Gang. Discriminative multi-manifold analysis for face recognition from a single training sample per person[J].IEEE Transactions on Pattern Analysis and Machine Intelligence,2013,35(1):39-51.

        [6]ONKARE R P, CHAVAN M S, PRASAD S R. Efficient Principal Component Analysis for Recognition of Human Facial Expressions [J]. International Journal of Advance Research in Computer Science and Management Studies,2015,3(2):53-60.

        [7]LU Guifu, ZOU Jian, WANG Yong. Incremental complete LDA for face recognition [J]. Pattern Recognition,2012,45(7):2510-2521.

        [8]ZHOU Changjun, WANG Lan, ZHANG Qiang, et al. Face recognition based on PCA image reconstruction and LDA [J]. Optik-International Journal for Light and Electron Optics,2013,124(22):5599-5603.

        [9]OH S K, YOO S H, PEDRYCZ W. Design of face recognition algorithm using PCA-LDA combined for hybrid data pre-processing and polynomial-based RBF neural networks: Design and its application [J]. Expert Systems with Applications,2013,40(5):1451-1466.

        [10]BANSAL A, MEHTA K, ARORA S. Face recognition using PCA and LDA algorithm[C]// 2012 Second International Conference on Advanced Computing & Communication Technologies (ACCT). Rohtak, Haryana: IEEE,2012:251-254.

        [11]SHARIF M, SHAH J H, MOHSIN S, et al. Sub-holistic hidden markov model for face recognition [J]. Research Journal of Recent Sciences,2013,2(5):10-14.

        [12]CHUK T, NG A C W, COVIELLO E, et al. Understanding eye movements in face recognition with hidden Markov model[C]// Proceedings of the 35th Annual Conference of the Cognitive Science Society. Berlin: Cognitive Science Society,2013:328-333.

        [13]MILBORROW S, NICOLLS F. Locating Facial Features with an Extended Active Shape Model[M]. Berlin Heidelberg: Springer,2008:504-513.

        [14]COOTES T F, EDWARDS G J, TAYLOR C J. Active appearance models [J]. IEEE Transactions on Pattern Analysis & Machine Intelligence,2001(6):681-685.

        [15]CARPENTER G A. Neural network models for pattern recognition and associative memory [J]. Neural Networks,1989,2(4):243-257.

        [16]FLEMING M K, COTTRELL G W. Categorization of faces using unsupervised feature extraction [C]// 1990 IJCNN International Joint Conference on Neural Networks. Maui: IEEE,1990:65-70.

        [17]LIN Shanghung, KUNG Sunyuan, LIN Longji. Face recognition/detection by probabilistic decision-based neural network [J]. IEEE Transactions on Neural Networks,1997,8(1):114-132.

[18]ER M J, CHEN Weilong, WU Shiqian. High-speed face recognition based on discrete cosine transform and RBF neural networks [J]. IEEE Transactions on Neural Networks,2005,16(3):679-691.

        [19]MIGNON A, JURIE F. Reconstructing faces from their signatures using RBF regression [C]// 2013 Conference on British Machine Vision. Bristol, United Kingdom: [s.n.],2013:1-12.

        [20]OSUNA E, FREUND R, GIROSI F. Training support vector machines: an application to face detection [C]// In 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Juan: IEEE,1997:130-136.

        [21]WEI Jin, ZHANG Jianqi, ZHANG Xiang. Face recognition method based on support vector machine and particle swarm optimization [J]. Expert Systems with Applications,2011, 38(4):4390-4393.

        [22]LU Jin, ZHAO Jianwei, CAO Feilong. Extended feed forward neural networks with random weights for face recognition [J]. Neurocomputing,2014,136:96-102.

        [23]CAO Feilong, YE Hailiang, WANG Dianhui. A probabilistic learning algorithm for robust modeling using neural networks with random weights [J]. Information Sciences,2015,313:62-78.

        [24]SCHMIDT W F, KRAAIJVELD M, DUIN R P W. Feedforward neural networks with random weights [C]// 11th IAPR International Conference on Pattern Recognition Methodology and Systems. Hague: IEEE,1992:1-4.

        [25]PAO Y H, TAKEFUJI Y. Functional-link net computing: theory, system architecture, and functionalities [J]. IEEE Computer Journal,1992,25(5):76-79.

        [26]PAO Y H, PARK G H, SOBAJIC D J. Learning and generalization characteristics of the random vector functional-link net [J]. Neurocomputing,1994,6(2):163-180.

        [27]IGELNIK B, PAO Y H. Stochastic choice of basis functions in adaptive function approximation and the functional-link net [J]. IEEE Transactions on Neural Networks,1995,6(6):1320-1329.

        [28]ALHAMDOOSH M, WANG Dianhui. Fast de-correlated neural network ensembles with random weights [J]. Information Sciences,2014,264:104-117.

        [29]CAO Feilong, TAN Yuanpeng, CAI Miaomiao. Sparse algorithms of random weight networks and applications [J]. Expert Systems with Applications,2014,41(5):2457-2462.

        [30]SCARDAPANE S, WANG Dianhui, PANELLA M, et al. Distributed learning for random vector functional-link networks [J]. Information Sciences,2015,301:271-284.

        [31]DEMPSTER A P, LAIRD N M, RUBIN D B. Maximum likelihood from incomplete data via the EM algorithm [J]. Journal of the Royal Statistical Society, Series B (Methodological),1977,39(1):1-38.

        [32]BARTLETT P L. The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network [J]. IEEE Transactions on Information Theory,1998,44(2):525-536.

        [33]NATARAJAN B K. Sparse approximate solutions to linear systems [J]. SIAM Journal on Computing,1995,24(2):227-234.

        [34]DONOHO D L. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution [J]. Communications on Pure and Applied Mathematics,2006,59(6):797-829.

        [35]LANGE K, SINSHEIMER J S. Normal/independent distributions and their applications in robust regression [J]. Journal of Computational and Graphical Statistics,1993,2(2):175-198.

        [36]ZHANG Zhihua, WANG Shusen, LIU Dehua, et al. EP-GIG priors and applications in Bayesian sparse learning [J]. The Journal of Machine Learning Research, 2012,13(1):2031-2061.

Article ID: 1004-1540(2016)02-0239-08

        DOI:10.3969/j.issn.1004-1540.2016.02.020

Received: 2015-12-14. Journal of China Jiliang University website: zgjl.cbpt.cnki.net

Foundation item: National Natural Science Foundation of China (Nos. 61272023, 91330118).

About the authors: CHEN Jiaying (1989-), female, born in Jinchang, Gansu Province, master's student; research interests: matrix recovery and neural networks. E-mail: 1041074676@qq.com. Corresponding author: CAO Feilong, male, professor. E-mail: flcao@cjlu.edu.cn

CLC number: TP183

Document code: A

A novel two-dimensional robust neural network with random weights and its applications

        CHEN Jiaying, CAO Feilong

        (College of Sciences, China Jiliang University, Hangzhou 310018, China)

Abstract: The major advantage of the two-dimensional neural network with random weights (2DNNRW) is that it uses matrix data directly as input, preserving the structural information of the matrix data itself; hence, compared with the neural network with random weights (NNRW), the recognition rate is improved. However, the existing 2DNNRW does not perform well on face recognition with outliers. This paper proposes a two-dimensional robust neural network with random weights (2DRNNRW) and uses the expectation-maximization (EM) algorithm to calculate the network parameters. Experiments on different face databases demonstrate that the proposed algorithm deals effectively with the problem of face recognition with outliers.

Key words: artificial neural networks; neural networks with random weights; face recognition; expectation-maximization algorithm
