
Journal of Computer Applications, 2019, Issue 9 (published 2019-10-31)

CLC number: TP391.4; Document code: A

        Real-time facial expression recognition based on convolutional neural network with multi-scale kernel feature

        LI Minze1, LI Xiaoxia1,2*, WANG Xueyuan1,2, SUN Wei2

        1.School of Information Engineering, Southwest University of Science and Technology, Mianyang Sichuan 621010, China;

        2.Key Laboratory of Special Environmental Robotics in Sichuan Province (Southwest University of Science and Technology), Mianyang Sichuan 621010, China

Abstract:

Aiming at the problems of insufficient generalization ability, poor stability and difficulty in meeting the real-time requirement in facial expression recognition, a real-time facial expression recognition method based on a convolutional neural network with multi-scale kernel features was proposed. Firstly, an improved lightweight face detection network, MSSD (MobileNet + Single Shot multiBox Detector), was proposed, and the detected face coordinate information was tracked by a Kernel Correlation Filter (KCF) model to improve detection speed and stability. Then, three branches were formed from linear bottleneck layers with three different kernel sizes and fused by channel concatenation into a multi-scale kernel convolution unit, whose feature diversity was exploited to improve the accuracy of expression recognition. Finally, to improve the generalization ability of the model and prevent over-fitting, different linear transformations were applied as data augmentation to expand the dataset, and the model trained on the FER-2013 facial expression dataset was transferred to the small-sample CK+ dataset for retraining. The experimental results show that the recognition rate of the proposed method reaches 73.0% on the FER-2013 dataset, 1.8 percentage points higher than that of the winner of the Kaggle expression recognition challenge, and 99.5% on the CK+ dataset. For 640×480 video, the face detection speed of the proposed method reaches 158 frames per second, 6.3 times that of the mainstream face detection network MTCNN (MultiTask Cascaded Convolutional Neural Network), while the overall speed of face detection plus expression recognition reaches 78 frames per second. The proposed method can therefore achieve fast and accurate facial expression recognition.

Key words: facial expression recognition; convolutional neural network; face detection; kernel correlation filter; transfer learning

…recognition speed, so depthwise separable convolutions are used to build the network. In the MSSD network, the input first passes through a standard convolution layer with a 3×3 kernel and a stride of 2, followed by 13 depthwise separable convolution layers; the output end is connected to four standard convolution layers with alternating 1×1 and 3×3 kernels and one max-pooling layer. However, since pooling layers lose some useful features, stride-2 convolutions are used in the network's standard convolution layers in place of pooling.

Shallow network features have smaller receptive fields and carry more detail, which favors the detection of small targets, so the MSSD face detection network fuses shallow and deep features. Experiments showed that fusing the shallow features of layer 7 with the deep features works best, so the network fuses the features of layers 7, 15, 16, 17, 18 and 19. The feature maps of these six layers are each reshaped into a one-dimensional vector and then concatenated, enabling multi-scale face detection.
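As a concrete illustration of this fusion step, the minimal Keras sketch below reshapes several feature maps to one-dimensional vectors and concatenates them, as the text describes. The tensor shapes and the helper name fuse_features are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the flatten-and-concatenate fusion described above.
# Shapes are placeholders; the real network takes maps from layers 7 and 15-19.
import tensorflow as tf
from tensorflow.keras import layers

def fuse_features(feature_maps):
    """Reshape each feature map to a 1-D vector, then concatenate them."""
    flat = [layers.Flatten()(fmap) for fmap in feature_maps]
    return layers.Concatenate(axis=-1)(flat)

# Dummy shallow and deep maps standing in for the six fused layers.
inp = layers.Input(shape=(38, 38, 64))
deep = layers.Conv2D(128, 3, strides=2, padding='same')(inp)
model = tf.keras.Model(inp, fuse_features([inp, deep]))
```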

2.2 Face detection combined with a tracking model

To further increase detection speed, the face detection network is combined with a tracking model in a detect-track-detect pattern. This combination not only speeds up face detection effectively but also handles multi-angle and partially occluded faces. The tracking model is KCF, a tracking algorithm based on statistical learning: it collects samples with circulant matrices and accelerates the computation with the fast Fourier transform, which greatly improves both tracking quality and speed. First, the MSSD model detects the face and the KCF tracking model is updated; then the detected face coordinates are fed into KCF as the base sample box, and tracking proceeds with a strategy of one detection frame followed by ten tracked frames; finally, to prevent tracking loss, the MSSD model is updated again and the face is re-detected. Figure 3 shows the face detection flow combined with tracking.
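The detect-track-detect loop might be organized as in the sketch below, which uses OpenCV's KCF tracker (shipped with opencv-contrib-python; in recent builds the factory may live under cv2.legacy). The function detect_faces is a hypothetical placeholder for the MSSD detector; the 1-detect/10-track ratio follows the strategy in the text.

```python
import cv2

CYCLE = 11  # one detection frame followed by ten tracked frames

def detect_and_track(video_path, detect_faces):
    """detect_faces(frame) -> list of (x, y, w, h) boxes; a stand-in for MSSD."""
    cap = cv2.VideoCapture(video_path)
    tracker, frame_idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % CYCLE == 0 or tracker is None:
            # Detection frame: run MSSD and (re)initialize the KCF model.
            boxes = detect_faces(frame)
            if boxes:
                tracker = cv2.TrackerKCF_create()
                tracker.init(frame, tuple(boxes[0]))
        else:
            # Tracking frame: a fast KCF update on the previous face box.
            ok, box = tracker.update(frame)
            if not ok:
                tracker = None  # track lost: fall back to detection next frame
        frame_idx += 1
    cap.release()
```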

3 Multi-scale kernel feature network for facial expression recognition

3.1 Depthwise separable convolution

Howard et al. [24] proposed MobileNet in 2017, factorizing the standard convolution into two parts, a depthwise convolution and a pointwise convolution, which together form the depthwise separable convolution. Figures 4(a) and 4(b) compare the standard convolution kernel with the depthwise separable one.

Assume an input feature map of size $D_F \times D_F$ with $M$ channels, a kernel size of $D_K \times D_K$, and $N$ kernels.

For the same input and output, the standard convolution requires $D_K \times D_K \times M \times N \times D_F \times D_F$ operations, while the depthwise separable convolution requires $D_K \times D_K \times 1 \times M \times D_F \times D_F + 1 \times 1 \times M \times N \times D_F \times D_F$ operations.

From the above, the ratio of the computational cost of the depthwise separable convolution to that of the standard convolution is:

$$\frac{D_K \times D_K \times 1 \times M \times D_F \times D_F + 1 \times 1 \times M \times N \times D_F \times D_F}{D_K \times D_K \times M \times N \times D_F \times D_F} = \frac{1}{N} + \frac{1}{D_K^2} \tag{1}$$

For a convolution with a 3×3 kernel, the computation can thus be reduced to about 1/9 of the original. Such a structure greatly reduces the computational cost and effectively improves training and recognition speed.
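In code, a depthwise separable layer is simply a depthwise convolution followed by a 1×1 pointwise convolution. The Keras sketch below is one plausible rendering; the batch-normalization and ReLU placement follows the standard MobileNet block and is an assumption here.

```python
from tensorflow.keras import layers

def separable_block(x, filters, stride=1):
    # Depthwise 3x3: one filter per input channel (the D_K*D_K*1*M*D_F*D_F term).
    x = layers.DepthwiseConv2D(3, strides=stride, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Pointwise 1x1: mixes the M channels into N (the 1*1*M*N*D_F*D_F term).
    x = layers.Conv2D(filters, 1, padding='same', use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)
```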

3.2 Multi-scale kernel convolution unit

The multi-scale kernel convolution unit is built mainly on depthwise separable convolutions. Each branch adopts the linear bottleneck structure of MobileNetV2 [25], improved by replacing its non-linear activation function with PReLU [26]. Figure 5 shows the improved linear bottleneck (bottleneck_p) structure.

The depthwise convolution (Dw_Conv in the figure) performs feature extraction, while the pointwise convolutions (Conv 1×1 in the figure) act as bottleneck layers that scale the channel count. The output pointwise convolution is linear: since it compresses the channel count, applying a further non-linearity there would destroy many useful features. Figure 6 shows the structure of the multi-scale kernel convolution unit, which contains three branches, each an improved linear bottleneck with a stride of 2. Connecting three branches with different depthwise kernel sizes in parallel yields the multi-scale kernel convolution unit, which fuses the diverse features extracted at different kernel sizes and thereby effectively improves the expression recognition rate.
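A hedged sketch of the improved linear bottleneck (bottleneck_p): expand with a non-linear 1×1 convolution, filter spatially with a depthwise convolution, then project back with a purely linear 1×1 convolution. PReLU replaces the original activations as described; the expansion factor of 6 matches Section 3.3, while the exact batch-normalization placement is an assumption.

```python
from tensorflow.keras import layers

def bottleneck_p(x, out_channels, kernel=3, stride=1, expansion=6):
    expanded = int(x.shape[-1]) * expansion
    # 1x1 expansion point convolution, non-linear (PReLU).
    y = layers.Conv2D(expanded, 1, padding='same', use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    # Depthwise convolution: spatial feature extraction at the given kernel size.
    y = layers.DepthwiseConv2D(kernel, strides=stride, padding='same',
                               use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    # Linear 1x1 projection: no activation, so channel compression keeps features.
    y = layers.Conv2D(out_channels, 1, padding='same', use_bias=False)(y)
    return layers.BatchNormalization()(y)
```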

To demonstrate the effectiveness of multi-scale kernel features and to choose the kernel sizes, ten comparison experiments were run with the network structures shown in Table 1, which reports the effectiveness evaluation on FER-2013. Experiment 1 replaces the multi-scale kernel convolution unit with a standard 3×3 convolution; experiments 2-6 use the same kernel size in all three branches of the unit; experiments 7-10 vary the kernel sizes across the three branches. Experiments 1-6 show that a network with a single-scale kernel convolution unit of an appropriate kernel size achieves a higher recognition rate than one without it; experiments 2-6 show that, for single-scale units, a 3×3 kernel works better than the other sizes; experiments 2-10 show that, except in experiment 9, the multi-scale unit is more effective than the single-scale unit, while experiment 9 indicates that the three kernels of the multi-scale unit must not all be large.

Based on this analysis, the kernel sizes of the multi-scale kernel convolution unit were set to the three optimal scales of 3×3, 11×11 and 19×19; multi-scale kernel convolution improves the recognition rate by 3.2 percentage points over standard convolution.
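With bottleneck_p from the previous sketch, the three-branch unit then reduces to concatenating three stride-2 branches at the selected kernel sizes:

```python
from tensorflow.keras import layers

def multi_conv2d(x, branch_filters=16):
    # Three parallel bottleneck_p branches with 3x3, 11x11 and 19x19 depthwise
    # kernels, fused by channel concatenation (3 x 16 = 48 output channels).
    branches = [bottleneck_p(x, branch_filters, kernel=k, stride=2)
                for k in (3, 11, 19)]
    return layers.Concatenate(axis=-1)(branches)
```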

In the multi-scale kernel convolution unit, every convolution layer uses the PReLU activation function except the compressing pointwise convolution, which uses none. Equations (2) and (3) give the expressions of ReLU [27] and PReLU respectively, where $i$ indexes the channels.

$$\mathrm{ReLU}(x_i) = \begin{cases} x_i, & x_i > 0 \\ 0, & x_i \le 0 \end{cases} \tag{2}$$

$$\mathrm{PReLU}(x_i) = \begin{cases} x_i, & x_i > 0 \\ a_i x_i, & x_i \le 0 \end{cases} \tag{3}$$

ReLU sets all negative values to zero and leaves the rest unchanged. When a large gradient flows through a ReLU during training, it can shift the input distribution so drastically that most inputs become negative; the neuron then dies permanently, its gradient stays at zero, and the network weights downstream of it can no longer be updated. PReLU corrects the data distribution so that part of the negative values is preserved, which resolves this problem of ReLU; moreover, the parameter $a_i$ in Eq. (3) is learned during training and adapts to the data, giving greater flexibility and adaptability.
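In Keras, PReLU implements exactly this learnable slope; tying the parameter across the spatial axes, so that there is one $a_i$ per channel as in Eq. (3), is a common choice and an assumption here.

```python
from tensorflow.keras import layers

# One learnable negative-slope parameter a_i per channel,
# shared over the height and width axes.
prelu = layers.PReLU(shared_axes=[1, 2])
```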

On this basis, different activation functions were compared for multi-scale kernel feature expression recognition. Table 2 lists the recognition rates of different activation functions on the FER-2013 dataset: PReLU is 1.8 percentage points higher than ReLU, so PReLU is chosen as the activation function.

3.3 Multi-scale kernel feature network

Table 3 shows the structure of the multi-scale kernel feature network for facial expression recognition, where multi_conv2d and bottleneck_p(1~5) denote, respectively, the multi-scale kernel convolution unit and the improved linear bottleneck layers introduced in Section 3.2. The input first passes through a multi-scale kernel convolution unit (multi_conv2d) with an expansion factor of 6; each branch convolves with 16 kernels at a stride of 2, giving 16 output channels per branch, and after the three branch features are fused the output channel count becomes 48. Then follow 12 improved linear bottlenecks, each with a 3×3 depthwise kernel and batch normalization during training. Next come a standard convolution layer with a 1×1 kernel and a stride of 1 and an average pooling layer with a 3×3 kernel. The classifier at the output adopts the fully convolutional strategy: a standard convolution layer with a stride of 1, a 1×1 kernel and 7 output channels (the 7 expression classes) replaces the fully connected layer to speed up expression recognition.
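Putting the pieces together, the pipeline in Table 3 might read as below, reusing the multi_conv2d and bottleneck_p sketches from Section 3.2. The 48×48 grayscale input size, the per-stage channel widths and the all-stride-1 bottleneck stack are simplifying assumptions; Table 3 holds the authors' actual configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_expression_net(input_shape=(48, 48, 1), num_classes=7):
    inp = layers.Input(shape=input_shape)
    x = multi_conv2d(inp, branch_filters=16)          # 3 branches -> 48 channels
    for _ in range(12):                               # 12 improved bottlenecks,
        x = bottleneck_p(x, 48, kernel=3, stride=1)   # all 3x3 depthwise kernels
    x = layers.Conv2D(128, 1, strides=1, padding='same')(x)  # 1x1 standard conv
    x = layers.AveragePooling2D(pool_size=3)(x)       # 3x3 average pooling
    # Fully convolutional classifier: a 1x1, 7-channel conv replaces the FC layer.
    x = layers.Conv2D(num_classes, 1, strides=1)(x)
    x = layers.GlobalAveragePooling2D()(x)            # collapse to (batch, 7)
    return tf.keras.Model(inp, layers.Softmax()(x))
```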

4 Experimental results and analysis

The experimental configuration is as follows:

Central Processing Unit (CPU): Intel Core i7-7700K, 4.20GHz, with 16GB of RAM; Graphics Processing Unit (GPU): GeForce GTX 1080Ti with 12GB of video memory.

4.1 Datasets

Three datasets are used in the experiments: WIDER FACE [30], CK+ [13] and FER-2013 [14].

WIDER FACE is a face detection benchmark containing 32203 images with 393703 labeled faces that vary in scale, pose, occlusion, expression, illumination and makeup. The images are grouped into 61 event classes; from each class, 40% are randomly chosen for training, 10% for validation and 50% for testing, i.e. 12881 training, 3220 validation and 16102 test images.

The CK+ facial expression dataset contains 593 image sequences from 123 subjects. The last frame of every sequence carries action unit labels, and 327 of the sequences have expression labels covering seven classes: anger, contempt, disgust, fear, happiness, sadness and surprise. Since other expression datasets have no contempt class, it is removed here for compatibility with them.

FER-2013 is the facial expression dataset provided by the Kaggle facial expression recognition challenge. It contains 35887 expression images in 7 basic classes: anger, disgust, fear, happiness, sadness, surprise and neutral. The challenge organizers split FER-2013 into three parts: 28709 training images, 3589 public test images and 3589 private test images. During training, the public test set serves as the validation set and the private test set as the final test set. The dataset covers faces of different ages and angles at relatively low resolution, with many images occluded by hands, hair or scarves; it is very challenging and close to real-world conditions.

4.2 Data augmentation

To make the expression recognition model robust to disturbances such as noise and angle changes, the experimental datasets were augmented: every image was expanded with different linear transformations, as shown in Figure 7. The augmentation transforms are random horizontal flipping, horizontal and vertical shifts of ratio 0.1, random zooming of ratio 0.1, random rotation within (-10, 10) degrees, and normalization to zero mean and unit variance; blank regions produced by the transforms are filled from the nearest pixels.
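These transforms map almost one-to-one onto the options of Keras' ImageDataGenerator; the sketch below is one hedged way to express them (that Keras was the training framework is an assumption, not stated in the text).

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,                 # random horizontal flipping
    width_shift_range=0.1,                # horizontal shift, ratio 0.1
    height_shift_range=0.1,               # vertical shift, ratio 0.1
    zoom_range=0.1,                       # random zoom, ratio 0.1
    rotation_range=10,                    # random rotation in (-10, 10) degrees
    featurewise_center=True,              # zero mean ...
    featurewise_std_normalization=True,   # ... and unit variance
    fill_mode='nearest')                  # fill blank regions from nearest pixels

# The featurewise statistics must be estimated before training, e.g.:
# datagen.fit(x_train)
```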

4.3 Face detection results

For the MSSD face detection network combined with tracking, the MobileNet base network of MSSD is first pre-trained on the large-scale 1000-class ImageNet [31] image database; the pre-trained model is then transferred into the MSSD network and fine-tuned on the WIDER FACE face detection benchmark; finally it is evaluated on the WIDER FACE test set. Figure 8 shows the detection results for some test images: the MSSD face detection network handles multiple scales, multiple angles and occlusion well and is highly stable.

For detection speed, a 640×480 video was used for testing, the average processing speed was computed over its first 3000 frames, and the result was compared with mainstream face detection network models. Table 4 compares the face detection speeds of different methods. The MSSD network detects faces at 63 frames/s; combined with the KCF tracker, the speed reaches 158 frames/s. MTCNN (MultiTask Cascaded Convolutional Neural Network) is a mainstream face detection network, and the detection speed of the proposed method is 6.3 times that of MTCNN, a very clear advantage.

4.4 Facial expression recognition results

The facial expression recognition experiments were trained and tested on the FER-2013 and CK+ datasets. In all training runs the weights and biases were randomly initialized, the batch size was 16 and the initial learning rate was 0.01, with an automatic early stopping strategy: when over-fitting appears, training stops automatically after 20 further epochs and the model is saved.
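Under the same Keras assumption, the early stopping strategy might be expressed with callbacks as below. Batch size 16, the 0.01 learning rate and the 20-epoch patience come from the text; the SGD optimizer, the monitored quantity (validation loss) and the file name are assumptions.

```python
import tensorflow as tf

callbacks = [
    # Stop 20 epochs after the validation loss stops improving, i.e. at the
    # onset of over-fitting, and keep the best weights.
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20,
                                     restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor='val_loss',
                                       save_best_only=True),
]

model = build_expression_net()  # from the Section 3.3 sketch
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(datagen.flow(x_train, y_train, batch_size=16),
#           validation_data=(x_val, y_val), callbacks=callbacks, epochs=300)
```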

The model was trained on the FER-2013 training set (28709 images); the public test set (3589 images) served as the validation set for tuning the model's weight parameters, and the private test set (3589 images) was used for the final evaluation. The result was then compared with state-of-the-art expression recognition networks. The first part of Table 5 compares the recognition rates of different methods on FER-2013: the proposed method outperforms the other mainstream methods with a recognition rate of 73.0%, 1.8 percentage points higher than that of Tang [16], the winner of the Kaggle facial expression recognition challenge, while recognizing at 154 frames/s.

The experiments on CK+ use transfer learning: the weight parameters trained on FER-2013 serve as the pre-trained model, which is then fine-tuned on CK+, and performance is evaluated with 10-fold cross-validation. The second part of Table 5 compares the recognition rates of different methods on CK+; the proposed method achieves the highest rate, 99.5%.
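The transfer step then amounts to initializing from the FER-2013 weights and fine-tuning on each CK+ fold. Below is a minimal sketch under the same Keras assumption, with a hypothetical weights file name and a reduced fine-tuning learning rate as assumptions; CK+ has six classes here because contempt was removed, so the mismatched output layer is skipped when loading.

```python
import tensorflow as tf

model = build_expression_net(num_classes=6)   # CK+ without the contempt class
model.load_weights('fer2013_weights.h5', by_name=True, skip_mismatch=True)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss='categorical_crossentropy', metrics=['accuracy'])
# Fine-tune and evaluate within each of the 10 cross-validation folds:
# model.fit(x_fold_train, y_fold_train, batch_size=16, ...)
```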

Tables 6 and 7 give the confusion matrices of the recognition results on FER-2013 and CK+ respectively. On FER-2013, happiness is recognized best, at 90.0%, followed by surprise and disgust, while the recognition rates for fear and sadness are relatively low. Table 6 shows that the cause is that these two expressions are easily confused with each other. For a more intuitive analysis, Figure 9 shows fear and sadness images from FER-2013: in this dataset the two classes are extremely easy to confuse, and even a human annotator struggles to judge them accurately. CK+ is smaller, carries far less label noise than FER-2013 and consists entirely of clear frontal expression photos, so the proposed method recognizes every class except disgust at 100%, with only 3% of disgust samples misclassified as anger, for an overall recognition rate of 99.5%.

5 Conclusion

Addressing the insufficient generalization ability, poor stability and non-real-time speed of facial expression recognition, a real-time and stable facial expression recognition method based on a convolutional neural network with multi-scale kernel features was proposed. Face detection in a detect-plus-track pattern achieves fast and stable detection at 158 frames/s, and the multi-scale kernel feature expression recognition network reaches high recognition rates of 73.0% and 99.5% on the FER-2013 and CK+ datasets respectively. The whole system uses a lightweight network structure with an overall processing speed of up to 78 frames/s; both accuracy and speed meet practical requirements. In follow-up research, methods such as deconvolution can be used to visualize the features of each layer, and combining effective high- and low-level features can further improve the network's accuracy. In addition, expression datasets closer to real environments can be used for training, with extra classes such as pain added, so that the theoretical work connects with practice and the method can be applied in real scenarios such as medical monitoring.

References

[1]EKMAN P. Constants across cultures in the face and emotion [J]. Journal of Personality and Social Psychology, 1971, 17(2): 124-129.

        [2]ZHAO X, ZHANG S. Facial expression recognition based on local binary patterns and kernel discriminant isomap [J]. Sensors, 2011, 11(10): 9573-9588.

        [3]KUMAR P, HAPPY S L, ROUTRAY A. A real-time robust facial expression recognition system using HOG features [C]// CAST 2016: Proceedings of the 2016 International Conference on Computing, Analytics and Security Trends. Piscataway, NJ: IEEE, 2016: 289-293.

[4]劉帥師, 田彥濤, 萬川. 基于Gabor多方向特征融合與分塊直方圖的人臉表情識別方法 [J]. 自動化學報, 2011, 37(12): 1455-1463. (LIU S S, TIAN Y T, WAN C. Facial expression recognition method based on Gabor multi-orientation features fusion and block histogram [J]. Acta Automatica Sinica, 2011, 37(12): 1455-1463.)

        [5]BERRETTI S, del BIMBO A, PALA P, et al. A set of selected SIFT features for 3D facial expression recognition [C]// ICPR 2010: Proceedings of the 2010 20th International Conference on Pattern Recognition. Piscataway, NJ: IEEE, 2010: 4125-4128.

        [6]CHEON Y, KIM D. Natural facial expression recognition using differential-AAM and manifold learning [J]. Pattern Recognition, 2009, 42(7): 1340-1350.

[7]尹星云, 王洵, 董蘭芳, 等. 用隱馬爾可夫模型設計人臉表情識別系統 [J]. 電子科技大學學報, 2003, 32(6): 725-728. (YIN X Y, WANG X, DONG L F, et al. Design of recognition for facial expression by hidden Markov model [J]. Journal of University of Electronic Science and Technology of China, 2003, 32(6): 725-728.)

        [8]VAPNIK V N, LERNER A Y. Recognition of patterns with help of generalized portraits [J]. Avtomatika I Telemekhanika, 1963, 24(6): 774-780.

        [9]ROWEIS S T. Nonlinear dimensionality reduction by locally linear embedding [J]. Science, 2000, 290(5500): 2323-2326.

        [10]HART P E. The condensed nearest neighbor rule [J]. IEEE Transactions on Information Theory, 1968, 14(3): 515-516.

        [11]KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [C]// NIPS ‘12: Proceedings of the 25th International Conference on Neural Information Processing Systems. North Miami Beach, FL, USA: Curran Associates, 2012: 1097-1105.

        [12]LYONS M J, AKAMATSU S, KAMACHI M G, et al. Coding facial expressions with Gabor wavelets[C]// AFGR 1998: Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway, NJ: IEEE, 1998: 200-205.

        [13]LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression [C]// CVPRW 2010: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2010: 94-101.

[14]GOODFELLOW I J, ERHAN D, CARRIER P L, et al. Challenges in representation learning: a report on three machine learning contests [J]. Neural Networks, 2015, 64: 59-63.

        [15]DHALL A, GOECKE R, LUCEY S, et al. Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark [C]// ICCVW 2011: Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops. Piscataway, NJ: IEEE, 2011: 2106-2112.

        [16]TANG Y. Deep learning using linear support vector machines [EB/OL]. arXiv:1306.0239[2018-12-21]. https://arxiv.org/pdf/1306.0239.pdf.

        [17]AL-SHABI M, CHEAH W P, CONNIE T. Facial expression recognition using a hybrid CNN-SIFT aggregator [EB/OL]. arXiv: 1608. 02833[2018-08-17]. https://arxiv.org/ftp/arxiv/papers/1608/1608.02833.pdf.

        [18]FANG H, PARTHALIN N M, AUBREY A J, et al. Facial expression recognition in dynamic sequences: an integrated approach [J]. Pattern Recognition, 2014, 47(3): 1271-1281.

        [19]JEON J, PARK J-C, JO Y J, et al. A real-time facial expression recognizer using deep neural network [C]// IMCOM ‘16: Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. New York: ACM, 2016: Article No. 94.

        [20]NEHAL O, NOHA A, FAYEZ W. Intelligent real-time facial expression recognition from video sequences based on hybrid feature tracking algorithms [J]. International Journal of Advanced Computer Science and Applications, 2017, 8(1): 245-260.

        [21]LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector [C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9905. Berlin: Springer, 2016: 21-37.

[22]HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596.

        [23]SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. arXiv: 1409. 1556[2019-01-10]. https://arxiv.org/pdf/1409.1556.pdf.

[24]HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications [EB/OL]. arXiv: 1704.04861 [2018-12-17]. https://arxiv.org/pdf/1704.04861.pdf.

        [25]SANDLER M, HOWARD A, ZHU M, et al. Inverted residuals and linear bottlenecks: mobile networks for classification, detection and segmentation [EB/OL]. arXiv:1801.04381[2018-12-16]. https://arxiv.org/pdf/1801.04381v2.pdf.

        [26]HE K, ZHANG X, REN S, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification [EB/OL]. arXiv: 1502. 01852[2018-12-06]. https://arxiv.org/pdf/1502.01852.pdf.

        [27]JARRETT K, KAVUKCUOGLU K, RANZATO M, et al. What is the best multi-stage architecture for object recognition? [C]// ICCV 2009: Proceedings of the IEEE 12th International Conference on Computer Vision. Piscataway, NJ: IEEE, 2009: 2146-2153.

        [28]LIEW S S, KHALIL-HANI M, BAKHTERI R. Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems [J]. Neurocomputing, 2016, 216(C): 718-734.

[29]CLEVERT D-A, UNTERTHINER T, HOCHREITER S. Fast and accurate deep network learning by Exponential Linear Units (ELUs) [EB/OL]. arXiv: 1511.07289 [2019-01-22]. https://arxiv.org/pdf/1511.07289.pdf.

        [30]YANG S, LUO P, LOY C C, et al. WIDER FACE: a face detection benchmark [C]// CVPR 2016: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 5525-5533.

        [31]DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database [C]// CVPR 2009: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2009: 248-255.

        [32]YANG S, LUO P, LOY C C, et al. From facial parts responses to face detection: a deep learning approach [C]// ICCV 2015: Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2015: 3676-3684.

        [33]ZHANG K, ZHANG Z, LI Z, et al. Joint face detection and alignment using multitask cascaded convolutional networks [J]. IEEE Signal Processing Letters, 2016, 23(10):1499-1503.

        [34]SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning [C]// AAAI 2017: Proceedings of the 31st AAAI Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press, 2017: 23-38.

        [35]GUO Y, TAO D, YU J, et al. Deep neural networks with relativity learning for facial expression recognition [C]// ICMEW 2016: Proceedings of the 2016 IEEE International Conference on Multimedia and Expo Workshops. Piscataway, NJ: IEEE, 2016: 1-6.

        [36]YAN J, ZHENG W, CUI Z, et al. A joint convolutional bidirectional LSTM framework for facial expression recognition [J]. IEICE Transactions on Information and Systems, 2018, 101(4): 1217-1220.

[37]FERNANDEZ P D M, PEÑA F A G, REN T I, et al. FERAtt: facial expression recognition with attention net [EB/OL]. arXiv: 1902.03284 [2019-02-08]. https://arxiv.org/pdf/1902.03284.pdf.

        [38]SONG X, BAO H. Facial expression recognition based on video [C]// AIPR 2017: Proceedings of the 2016 IEEE Applied Imagery Pattern Recognition Workshop. Washington, DC: IEEE Computer Society, 2016, 1: 1-5.

        [39]ZHANG K, HUANG Y, DU Y, et al. Facial expression recognition based on deep evolutional spatial-temporal networks [J]. IEEE Transactions on Image Processing, 2017, 26(9): 4193-4203.

This work is partially supported by the National Natural Science Foundation of China (61771411), the Sichuan Science and Technology Project (2019YJ0449), and the Graduate Innovation Fund of Southwest University of Science and Technology (18ycx123).

        LI Minze, born in 1992, M. S. candidate. His research interests include deep learning, computer vision.

        LI Xiaoxia, born in 1976, Ph. D., professor. Her research interests include pattern recognition, computer vision.

        WANG Xueyuan, born in 1974, Ph. D., associate professor. His research interests include image processing, machine learning.

        SUN Wei, born in 1995, M. S. candidate. His research interests include image processing, deep learning.
