
        基于卷積神經(jīng)網(wǎng)絡(luò)的花生籽粒完整性識別算法及應(yīng)用

Zhao Zhiheng1, Song Huan1, Zhu Jiangbo1, Lu Lei1, Sun Lei2

(1. School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China; 2. Shanghai Anxi Machinery Manufacturing Co., Ltd., Shanghai 201109, China)

        針對現(xiàn)有色選設(shè)備在花生顆粒篩選過程中處理速度慢、準(zhǔn)確率低的缺點,提出基于卷積神經(jīng)網(wǎng)絡(luò)的花生籽粒完整性識別算法。以完好花生、表皮破損花生和果仁破損花生的分類為例,構(gòu)建花生圖像庫;搭造卷積神經(jīng)網(wǎng)絡(luò),提取花生圖像特征;為提高分類準(zhǔn)確率和實時性,從訓(xùn)練集構(gòu)成、減小過擬合、加快訓(xùn)練收斂速度、簡化網(wǎng)絡(luò)結(jié)構(gòu)等幾方面對卷積神經(jīng)網(wǎng)絡(luò)進(jìn)行優(yōu)化;最終利用含2個卷積層、2個池化層、2個全連接層的3層神經(jīng)網(wǎng)絡(luò)實現(xiàn)了上述3類花生的分類。試驗結(jié)果表明:該方法對花生分類的準(zhǔn)確率達(dá)到98.18%,平均檢測一幅單?;ㄉ鷪D像的時間為18 ms,與現(xiàn)有色選設(shè)備相比有效提高了色選設(shè)備篩選的準(zhǔn)確率和實時性。

        農(nóng)產(chǎn)品;圖像處理;識別;卷積神經(jīng)網(wǎng)絡(luò);特征提??;色選系統(tǒng);花生顆粒篩選

0 Introduction

A color sorter is a relatively new type of agricultural-product processing machine based on color sorting technology [1-2]. Exploiting the different optical characteristics of agricultural products, it detects discolored or surface-defective items and foreign matter in bulk material and automatically sorts them out [3-4]. Its advantages are most obvious where qualified and unqualified products are very similar and hard to distinguish by traditional screening, or where high screening throughput is required [5-6]. Many researchers have studied crop-screening algorithms for color sorting systems. Wang et al. analyzed how the color-component values of cherry images change under different illumination conditions and designed a color rating system for cherries [7]. Pearson et al. analyzed the differences between diseased and normal maize kernels in the color components of the RGB, HSV, and CIE Lab color models, and designed a maize sorting system based on chromaticity and selected color-difference components, reaching an accuracy of 90% [8]. Zhao et al. used the gray-band ratio as the classification feature, according to the characteristics of watermelon seeds, to sort out qualified seeds with an accuracy of 95% [9]. When screening crops, all three methods rely on specific color values. In practice, however, crop types vary and individual differences are large, so screening by fixed color-value thresholds alone introduces errors.

        近年來深度學(xué)習(xí)[10]迅猛發(fā)展,Hinton[11]、Bengio[12-13]等研究團隊相繼提出深度神經(jīng)網(wǎng)絡(luò)結(jié)構(gòu),其研究成果開啟了學(xué)術(shù)界和工業(yè)界的深度學(xué)習(xí)浪潮[14-18]。卷積神經(jīng)網(wǎng)絡(luò)(convolutional neural network, CNN)是一種具有代表性的深度學(xué)習(xí)方法,已廣泛應(yīng)用于圖像識別領(lǐng)域[19-22]。本文將卷積神經(jīng)網(wǎng)絡(luò)應(yīng)用于花生籽粒完整性識別,并改進(jìn)和優(yōu)化神經(jīng)網(wǎng)絡(luò),以期提高識別的準(zhǔn)確率和實時性。

Taking the classification of intact peanuts, peanuts with damaged skin, and peanuts with damaged kernels as an example, a convolutional neural network is first established; the network is then optimized with L2-norm regularization, an exponentially decaying learning rate, and a moving average model to improve classification accuracy; finally, the network structure is simplified to improve real-time performance.

1 Construction of the CNN-based peanut kernel integrity recognition algorithm

        1.1 數(shù)據(jù)采集及預(yù)處理

The image classification algorithm studied in this paper is applied to a color sorter. Peanut kernels are taken as the research object and divided into three classes according to their integrity: intact peanuts, peanuts with damaged skin, and peanuts with damaged kernels.

        色選系統(tǒng)實地采集407張有效的花生樣品圖像,每?;ㄉ鷪D像的分辨率為100×100像素,按上述特征分類并手工添加標(biāo)簽,然后將這些圖像分為訓(xùn)練集和測試集,其中訓(xùn)練集占80%共325張,測試集占20%共82張,且訓(xùn)練集和測試集中上述3類花生圖像呈均勻分布。訓(xùn)練集中部分花生圖像如圖1所示,從上至下依次為完好花生、表皮破損花生、果仁破損花生。

Because the camera is subject to environmental interference during image capture, the raw images usually contain various kinds of noise [23] that disturb the subsequent classification, so the raw images must be filtered before classification (a minimal filtering sketch follows Fig. 2 below).

        圖1 訓(xùn)練集部分花生圖像

Fig.2 Peanut images before and after filtering
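According to the English abstract, median filtering was adopted after comparing mean, median, and Gaussian filtering; the filter parameters are not stated, so the kernel size in this small OpenCV sketch is only an assumption, and a random array stands in for a captured peanut image.

```python
import cv2
import numpy as np

# A made-up noisy 100x100 color image stands in for a captured peanut image.
noisy = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
denoised = cv2.medianBlur(noisy, 3)   # 3x3 median filter; the kernel size is an assumption
print(denoised.shape)
```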

        1.2 卷積神經(jīng)網(wǎng)絡(luò)的構(gòu)建

        典型卷積神經(jīng)網(wǎng)絡(luò)的結(jié)構(gòu)[28-29]包含卷積層、池化層和全連接層。參照文獻(xiàn)[30]中卷積神經(jīng)網(wǎng)絡(luò)的結(jié)構(gòu),建立如圖3所示的卷積神經(jīng)網(wǎng)絡(luò),各層參數(shù)如表1所示。

        卷積神經(jīng)網(wǎng)絡(luò)訓(xùn)練過程最耗時的部分就是卷積運算。卷積運算處理的圖像數(shù)據(jù)通常都是以矩陣形式有序儲存的,且這些圖像數(shù)據(jù)之間耦合性低,故需要運算速度快、數(shù)據(jù)吞吐量大、存儲空間大的硬件平臺。

A GPU+CPU platform is used in this paper. The GPU has very strong numerical-processing capability and is dedicated to image processing in the PC. The CPU and GPU form a co-processing environment: the CPU runs the complex sequential code while the GPU runs massively parallel computations, which greatly increases processing speed; in addition, PC memory is much larger than that of an embedded system.

A Gigabyte GV-N75TWF2OC graphics card with an NVIDIA GTX 750 Ti GPU (4 GB/128-bit GDDR5 memory, PCI-E 3.0 interface) and an Intel Core i3-2120 CPU are used. The software environment consists of Linux, Python 3.5, Anaconda, CUDA, the cuDNN library, and the TensorFlow deep learning framework, and the deep learning programs are written in Python on this basis.

1.3 Evaluation metric

        使用準(zhǔn)確率(accuracy)指標(biāo)來評價所提出分類算法的性能,定義如下:

Fig.3 Structure of the convolutional neural network

Table 1 Parameters of the convolutional neural network

The constructed convolutional neural network was trained on the CPU+GPU platform; after 40 iterations the classification accuracy on the test set stabilized at 90.91%. For comparison, a traditional BP neural network [31] with an 8-5-3 structure (8 input units, 5 hidden units, 3 output units) was trained on the same data set under the same conditions; after 100 training epochs its classification accuracy was 85.45%. The constructed convolutional neural network therefore effectively improves the accuracy of peanut integrity classification.

2 Optimization of the CNN-based peanut integrity recognition algorithm

        為了進(jìn)一步提高花生籽粒完整性識別的準(zhǔn)確率和實時性,需要對所建立的卷積神經(jīng)網(wǎng)絡(luò)進(jìn)行優(yōu)化。

2.1 L1 and L2 norm regularization

Over-fitting means that once a model becomes overly complex, it can "memorize" the random noise in every training sample while failing to "learn" the general trend of the training data. To avoid over-fitting, this paper adopts regularization, whose idea is to add a term describing model complexity to the loss function. Let J(θ) be the loss function describing the model's performance on the training data; instead of optimizing J(θ) directly, the quantity J(θ) + λR(w) is optimized, where w is the weight vector and R(w) describes the model complexity. R(w) has two common forms, L1 regularization and L2 regularization, given by Eqs. (2) and (3): R(w) = Σ|wi| and R(w) = Σwi², respectively. The coefficient λ represents the proportion of the model-complexity loss in the total loss and is set to 0.1 in this paper.

Both L1 and L2 regularization therefore restrict the magnitude of the weight vector so that the model cannot fit the random noise in the training data arbitrarily.
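As an illustration only (not the paper's code), the regularized objective J(θ) + λR(w) with the L2 form of R(w) and λ = 0.1 can be assembled in TensorFlow roughly as follows; the tiny one-layer model and the random data are stand-ins for the peanut CNN and its training batch.

```python
# L2-regularized loss: total_loss = J(theta) + lambda * sum(w_i^2), with lambda = 0.1.
import tensorflow as tf

lam = 0.1                                     # regularization coefficient from the paper
w = tf.Variable(tf.random.normal([4, 3]))     # stand-in weight matrix
x = tf.random.normal([8, 4])                  # 8 fake samples with 4 features each
labels = tf.constant([0, 1, 2, 0, 1, 2, 0, 1])

logits = tf.matmul(x, w)
data_loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits))            # J(theta): fit to the training data
l2_term = lam * tf.reduce_sum(tf.square(w))   # lambda * R(w): penalty on large weights
total_loss = data_loss + l2_term              # the quantity actually minimized
print(float(total_loss))
```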

        2.2 指數(shù)衰減法

        神經(jīng)網(wǎng)絡(luò)在訓(xùn)練的過程中采用反向傳播算法即梯度下降及鏈?zhǔn)角髮?dǎo)法則來優(yōu)化神經(jīng)網(wǎng)絡(luò),梯度下降算法中一個重要的參數(shù)是學(xué)習(xí)率,學(xué)習(xí)率決定了參數(shù)移動到最優(yōu)值的速度快慢。如果學(xué)習(xí)率過大,很可能會越過最優(yōu)值;反之如果學(xué)習(xí)率過小,優(yōu)化的效率可能過低,長時間算法無法收斂。本文采用指數(shù)衰減的方法設(shè)置學(xué)習(xí)率,首先使用較大的學(xué)習(xí)率來快速得到一個較優(yōu)的解,然后隨著迭代的繼續(xù)逐步減小學(xué)習(xí)率,使得模型在訓(xùn)練后期更加穩(wěn)定。學(xué)習(xí)率隨迭代次數(shù)變化的計算公式為

        式中為優(yōu)化時使用的學(xué)習(xí)率;為迭代次數(shù);0為初始學(xué)習(xí)率;為衰減系數(shù),0<1,本文設(shè)置其數(shù)值為0.99;為衰減速度。在實際編程中選用TensorFlow中的tf.train. exponential_decay函數(shù)實現(xiàn)指數(shù)衰減法。

2.3 Moving average model

A moving average model is used to reduce the influence of noise in the training data on the model; it is computed as

θ(t+1) = β·θ(t) + (1−β)·x(t)    (5)

where θ(t+1) is the output after the current iteration, θ(t) is the output after the previous iteration, x(t) is the input value of the current iteration, and β is the decay rate (0 < β < 1). Eq. (5) shows that the decay rate determines how fast the model is updated: the larger β is, the more slowly the model updates. The tf.train.ExponentialMovingAverage function in TensorFlow is used to implement the moving average model; it provides a num_updates parameter for setting the decay rate dynamically, as in Eq. (6):

β = min(β0, (1 + num_updates) / (10 + num_updates))    (6)

where the initial value β0 is set to 0.99.
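A minimal eager-mode sketch of tf.train.ExponentialMovingAverage with the num_updates mechanism described above; the parameter values being averaged are made up.

```python
# Moving average: shadow <- beta*shadow + (1-beta)*theta, with beta capped by
# min(0.99, (1+num_updates)/(10+num_updates)) as in Eq. (6).
import tensorflow as tf

step = tf.Variable(0, trainable=False)     # number of updates so far
theta = tf.Variable(5.0)                   # a made-up model parameter
ema = tf.train.ExponentialMovingAverage(decay=0.99, num_updates=step)
ema.apply([theta])                         # creates the shadow variable for theta

for value in (4.0, 6.0, 5.5):              # fake parameter values after three updates
    theta.assign(value)
    step.assign_add(1)
    ema.apply([theta])                     # update the shadow (averaged) value
    print(float(ema.average(theta)))
```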

        2.4 神經(jīng)網(wǎng)絡(luò)結(jié)構(gòu)的簡化

        初步構(gòu)建的卷積神經(jīng)網(wǎng)絡(luò)結(jié)構(gòu)中包括4個卷積層和4個池化層,網(wǎng)絡(luò)結(jié)構(gòu)較為復(fù)雜,而本文需要將該算法應(yīng)用到色選機上,對傳送帶上的物料進(jìn)行實時判斷和處理,對實時性要求很高。又因本文篩選物料為花生,圖像信息較為簡單,故可以對網(wǎng)絡(luò)結(jié)構(gòu)進(jìn)行簡化以提高處理實時性。本文從減少卷積層和池化層的角度對該網(wǎng)絡(luò)結(jié)構(gòu)進(jìn)行了優(yōu)化,優(yōu)化后的網(wǎng)絡(luò)結(jié)構(gòu)如圖4所示,網(wǎng)絡(luò)各層參數(shù)如表2所示。采用簡化后的卷積神經(jīng)網(wǎng)絡(luò)在CPU+ GPU平臺上測試,迭代40次后,在測試集上分類準(zhǔn)確率穩(wěn)定達(dá)到87.42%。

Fig.4 Structure of the simplified convolutional neural network

Table 2 Parameters of the simplified neural network
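As with Table 1, the parameters in Table 2 are not reproduced, so the channel counts and kernel sizes in this sketch of the simplified structure (2 convolutional, 2 pooling, and 2 fully connected layers) are assumptions.

```python
# Sketch of the simplified network: 2 conv + 2 pooling + 2 fully connected layers.
# Channel counts and kernel sizes are assumed, not taken from Table 2.
import tensorflow as tf

simplified = tf.keras.Sequential([
    tf.keras.Input(shape=(100, 100, 3)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),   # fully connected layer 1
    tf.keras.layers.Dense(3)                         # fully connected output: 3 peanut classes
])
simplified.summary()
```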

        3 算法優(yōu)化結(jié)果

3.1 L1 and L2 norm regularization

Fig. 5 compares the classification accuracy of the network optimized with only L1-norm regularization, the network optimized with only L2-norm regularization, and the initially constructed network.

As Fig. 5 shows, with L2-norm regularization the accuracy is clearly higher than the original accuracy and rises almost monotonically with the number of training iterations, so over-fitting is alleviated. With L1 regularization, although the accuracy is at first lower than the original, its fluctuation decreases and it also rises almost monotonically as training proceeds, so over-fitting is likewise alleviated. The obvious gap between the L1 and L2 results can be explained as follows: 1) L1-norm regularization makes the parameters sparser, i.e. drives more parameters to zero, whereas L2-norm regularization does not; 2) the L1 regularization term is not differentiable everywhere, whereas the L2 term is, and since optimization requires the partial derivatives of the loss function, optimizing a loss with an L2 regularization term is more straightforward. These results show that on the data set built in this paper L2-norm regularization effectively improves recognition accuracy, so L2-norm regularization is adopted to optimize the convolutional neural network.

Fig.5 Comparison of accuracy before and after norm regularization

        3.2 指數(shù)衰減法

Fig. 6 compares the test-set classification accuracy of the network optimized with only the exponential decay method against the initially constructed network. The accuracy of the optimized algorithm increases rapidly in the early stage of training and more slowly later. This is because the exponential decay method starts with a relatively large learning rate and gradually reduces it as the number of iterations grows.

Fig.6 Comparison of accuracy before and after exponential decay optimization

3.3 Moving average model

Fig. 7 compares the test-set classification accuracy of the network optimized with only the moving average model against the initially constructed network. The accuracy of the optimized model is lower than that of the unoptimized one in the early stage of training but clearly higher in the last 10 training iterations; in addition, the moving average model increases the stability of the model and reduces the fluctuation of the accuracy.

Fig.7 Comparison of accuracy before and after moving average optimization

        3.4 綜合優(yōu)化方案

        根據(jù)上述測試結(jié)果,最終選用的優(yōu)化方案為:L2范數(shù)正則化+指數(shù)衰減學(xué)習(xí)率+滑動平均模型+簡化網(wǎng)絡(luò)結(jié)構(gòu)。對比最初構(gòu)建的卷積神經(jīng)網(wǎng)絡(luò)算法與最終優(yōu)化算法在測試集上的分類準(zhǔn)確率如圖8所示??芍罱K優(yōu)化模型的準(zhǔn)確率明顯提高,在37次訓(xùn)練后準(zhǔn)確率達(dá)到98.18%,且穩(wěn)定不變,滿足色選系統(tǒng)的性能需求。

        運用優(yōu)化前的卷積神經(jīng)網(wǎng)絡(luò)算法測試數(shù)據(jù)集中407張單?;ㄉ鷪D像,共用時12.51 s,平均一幅單?;ㄉ鷪D像的處理時間為30.7 ms。運用優(yōu)化后算法的測試用時為7.44 s,即平均每張花生圖像的處理時間為18.3 ms。對比傳統(tǒng)的嵌入式平臺,對一張單?;ㄉ鷪D像進(jìn)行簡單的中值濾波所需時間在數(shù)百ms量級[32]。可知基于CPU+GPU平臺的深度學(xué)習(xí)算法極大的提高了運算速度,滿足了色選設(shè)備在篩選物料時的實時性要求。

Fig.8 Comparison of accuracy before and after combined optimization

4 Field test of the color sorting system

        色選系統(tǒng)工作原理如圖9所示。在色選系統(tǒng)履帶尾部采用上下2組工業(yè)線陣CCD相機同時拍攝花生的正反面圖像,以全方位識別破損。拍攝的線陣圖像經(jīng)拼接、邊緣檢測和分割后得到單粒花生圖像[32],上述過程用時1~2 s,再使用本文的分類算法進(jìn)行篩選,在檢測到表皮破損和果仁破損的花生時通過控制空氣噴槍動作將其剔除,調(diào)節(jié)傳送帶速度保證花生在指定區(qū)域完成篩選。

Fig.9 Working principle of the color sorting system

        實測結(jié)果表明采用本文識別算法的色選系統(tǒng)表現(xiàn)較為穩(wěn)定,實時性滿足要求,分類識別準(zhǔn)確率與本文結(jié)果相近,多次試驗準(zhǔn)確率均在95%以上。由于受空氣噴槍動作精度、力度影響,實際花生瑕疵品篩選精度在90%左右,較應(yīng)用傳統(tǒng)分類算法的色選系統(tǒng)篩選精度有明顯提高。

        5 結(jié) 論

This paper applies a convolutional neural network-based image classification algorithm to crop screening in color sorting equipment. Compared with traditional classification algorithms based on color values, the deep learning-based algorithm not only offers higher accuracy and speed but is also suitable for screening complex materials with rich colors and varied shapes. L2-norm regularization, the exponential decay method, and the moving average model are used to optimize the convolutional neural network and improve classification accuracy, and the network structure is simplified to improve real-time performance. Experimental results show that the optimized convolutional neural network achieves a classification accuracy of 98.18% and a processing speed of 18.3 ms per single-peanut image. The field-test results verify that applying deep learning to crop screening is practical and feasible.

        [1] 林茂先. 新型雜糧色選機的應(yīng)用[J]. 糧油加工,2014(10):24-27. Lin Maoxian. Application of a new type of hybrid grain selection machine[J]. Cereals and Oils Processing, 2014(10): 24-27. (in Chinese with English abstract)

        [2] 姚惠源,方輝.色選技術(shù)在糧食和農(nóng)產(chǎn)品精加工領(lǐng)域的應(yīng)用及發(fā)展趨勢[J].糧食與食品工業(yè), 2011, 18(2):4-6. Yao Huiyuan, Fang Hui. Application and development trend of color selection technology in food and agricultural products finishing[J]. Cereal and Food Industry, 2011, 18(2): 4-6. (in Chinese with English abstract)

        [3] 白穎杰. 基于機器視覺的圖像處理與特征識別方法的研究[D]. 重慶:重慶大學(xué),2010. Bai Yingjie. Research on Image Processing and Feature Recognition Method Based on Machine Vision[D]. Chongqing: Chongqing University, 2010. (in Chinese with English abstract)

        [4] 張五一, 趙強松, 王東云. 機器視覺的現(xiàn)狀及發(fā)展趨勢[J].中原工學(xué)院學(xué)報, 2008(1): 9-12,15. Zhang Wuyi, Zhao Qiangsong, Wang Dongyun. Actualities and developing trend of machine vision[J]. Journal of Zhongyuan University of Technology, 2008(1): 9-12, 15. (in Chinese with English abstract)

        [5] 王潤濤, 張長利, 房俊龍, 等. 基于機器視覺的大豆籽粒精選技術(shù)[J]. 農(nóng)業(yè)工程學(xué)報,2011, 27(8):355-359. Wang Runtao, Zhang Changli, Fang Junlong, et al. Soybean seeds selection based on computer vision[J].Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2011, 27(8): 355-359. (in Chinese with English abstract)

        [6] 周竹, 黃懿, 李小昱, 等. 基于機器視覺的馬鈴薯自動分級方法[J]. 農(nóng)業(yè)工程學(xué)報, 2012, 28(7): 178-183. Zhou Zhu, Huang Yi, Li Xiaoyu, et al. Automatic detecting and grading method of potatoes based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2012, 28(7): 178-183. (in Chinese with English abstract)

        [7] Wang Qi, Wang Hui, Xie Lijuan, et al. Outdoor color rating of sweet cherries using computer vision[J]. Computers and Electronics in Agriculture, 2012, 87: 113-120.

        [8] Pearson T, Dan M, Pearson J. A machine vision system for high speed sorting of small spots on grains[J]. Journal of Food Measurement & Characterization, 2012, 6(1/2/3/4): 27-34.

        [9] 趙吉文,魏正翠,汪洋,等. 基于灰度帶比例的優(yōu)質(zhì)西瓜子識別算法研究與實現(xiàn)[J]. 農(nóng)業(yè)工程學(xué)報,2011,27(4):340-344. Zhao Jiwen, Wei Zhengcui, Wang Yang, et al. Research and implementation of recognition algorithm based on gray scale of watermelon seeds[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2011, 27(4): 340-344. (in Chinese with English abstract)

[10] 何希平,劉波. 深度學習理論與實踐[M]. 北京:科學出版社,2017. He Xiping, Liu Bo. Deep Learning Theory and Practice[M]. Beijing: Science Press, 2017. (in Chinese)

[11] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.

[12] Bengio Y, Vincent P, Janvin C. A neural probabilistic language model[J]. Journal of Machine Learning Research, 2003, 3: 1137-1155.

        [13] Bengio Y, Lecun Y. Scaling learning algorithms towards AI[C]// Large-Scale Kernel Machines. 2007:321-359.

        [14] 盧宏濤,張秦川. 深度卷積神經(jīng)網(wǎng)絡(luò)在計算機視覺中的應(yīng)用研究綜述[J]. 數(shù)據(jù)采集與處理,2016,31(1):1-17. Lu Hongtao, Zhang Qinchuan. Applications of deep convolutional neural network in computer vision[J]. Journal of Data Acquisition and Processing, 2016, 31(1):1-17. (in Chinese with English abstract)

        [15] 孫志軍,薛磊,許陽明,等. 深度學(xué)習(xí)研究綜述[J]. 計算機應(yīng)用研究,2012,29(8):2806-2810. Sun Zhijun, Xue Lei, Xu Yangming, et al. Overview of deep learning[J]. Application Research of Computers, 2012, 29(8): 2806-2810. (in Chinese with English abstract)

        [16] 楊斌,鐘金英. 卷積神經(jīng)網(wǎng)絡(luò)的研究進(jìn)展綜述[J]. 南華大學(xué)學(xué)報:自然科學(xué)版,2016,30(3):66-72. Yang Bin, Zhong Jinying. Review of convolution neural network[J]. Journal of University of South China(Science and Technology), 2016, 30(3): 66-72. (in Chinese with English abstract)

        [17] 胡正平,陳俊嶺,王蒙,等. 卷積神經(jīng)網(wǎng)絡(luò)分類模型在模式識別中的新進(jìn)展[J]. 燕山大學(xué)學(xué)報,2015,39(4): 283-291.Hu Zhengping, Chen Junling, Wang Meng, et al. Recent progress on convolutional neural network in pattern recognition[J]. Journal of Yanshan University, 2015, 39(4): 283-291. (in Chinese with English abstract)

        [18] 高震宇,王安,劉勇,等. 基于卷積神經(jīng)網(wǎng)絡(luò)的鮮茶葉智能分選系統(tǒng)研究[J/OL]. 農(nóng)業(yè)機械學(xué)報,2017,48(7):53-58. Gao Zhenyu, Wang An, Liu Yong, et al. Intelligent fresh-tea- leaves sorting system research based on convolution neural network[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(7): 53-58. (in Chinese with English abstract)

        [19] Szegedy C, Toshev A, Erhan D. Deep neural networks for object detection[J]. Advances in Neural Information Processing Systems, 2013, 26: 2553-2561.

        [20] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//IEEE Conference on Computer Vision and Pattern Recognition, 2015: 1-9.

[21] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems, 2012: 1097-1105.

        [22] 黃凱奇,任偉強,譚鐵牛. 圖像物體分類與檢測算法綜述[J]. 計算機學(xué)報,2014,37(6),1225-1240. Huang Kaiqi, Ren Weiqiang, Tan Tieniu. A review on image object classification and detection[J]. Chinese Journal of Computers, 2014, 37(6): 1225-1240. (in Chinese with English abstract)

        [23] Bailey D G. The advantages and limitations of high level synthesis for FPGA based image processing[C]// Proceedings of the 9th International Conference on Distributed Smart Cameras, 2015: 134-139.

[24] 王科俊,熊新炎,任楨. 高效均值濾波算法[J]. 計算機應用研究,2010,27(2):434-438. Wang Kejun, Xiong Xinyan, Ren Zhen. High efficiency mean value filtering algorithm[J]. Application Research of Computers, 2010, 27(2): 434-438. (in Chinese with English abstract)

        [25] 趙高長,張磊,武風(fēng)波. 改進(jìn)的中值濾波算法在圖像去噪中的應(yīng)用[J]. 應(yīng)用光學(xué),2011,32(4):678-682. Zhao Gaochang, Zhang Lei, Wu Fengbo. Application of improved median filtering algorithm in image denoising[J]. Journal of Applied Optics, 2011, 32(4): 678-682. (in Chinese with English abstract)

[26] 王海菊,譚常玉,王坤林,等. 自適應高斯濾波圖像去噪算法[J]. 福建電腦, 2017,33(11):5-6. Wang Haiju, Tan Changyu, Wang Kunlin, et al. Adaptive Gaussian filtering image denoising algorithm[J]. Fujian Computer, 2017, 33(11): 5-6. (in Chinese)

        [27] 姒紹輝,胡伏原,顧亞軍,等. 一種基于不規(guī)則區(qū)域的高斯濾波去噪算法[J]. 計算機科學(xué),2014(11):313-316. Si Shaohui, Hu Fuyuan, Gu Yajun, et al. Improved denoising algorithm based on non-regular area gaussian filtering[J]. Computer Science, 2014(11): 313-316. (in Chinese with English abstract)

[28] 常亮,鄧小明,周明全. 圖像理解中的卷積神經網絡[J]. 自動化學報,2016,42(9):1302-1303. Chang Liang, Deng Xiaoming, Zhou Mingquan. Convolutional neural networks in image understanding[J]. Acta Automatica Sinica, 2016, 42(9): 1302-1303. (in Chinese with English abstract)

        [29] Bouvrie J. Notes on convolutional neural networks[EB/OL]. [2018-05-01].https://pdfs.semanticscholar.org/714a/c6c7dbb83d69b8118e5138b3a50d8feb789b.pdf?_ga=2.255005896.1551754364.1538209923-2104266169.1536045423.

        [30] 劉園園. 基于卷積神經(jīng)網(wǎng)絡(luò)的花卉圖像分類算法的研究[D].北京:華北電力大學(xué),2017. Liu Yuanyuan. Research on Flower Image Classification Algorithm Based on Convolutional Neural Network[D]. Beijing: North China Electric Power University, 2017. (in Chinese with English abstract)

        [31] 王樹文,張長利,房俊龍. 基于計算機視覺的番茄損傷自動檢測與分類研究[J]. 農(nóng)業(yè)工程學(xué)報,2005,21(8):98-101. Wang Shuwen, Zhang Changli, Fang Junlong. Automatic detection and classification of tomato damage based on computer vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2005, 21(8): 98-101.(in Chinese with English abstract)

        [32] 馬涌. 基于機器視覺的顆粒狀農(nóng)作物色選系統(tǒng)研究[D]. 哈爾濱:哈爾濱工業(yè)大學(xué),2016. Ma Yong. Research on Granular Plant Color Selection System Based on Machine Vision[D]. Harbin: Harbin Institute of Technology, 2016.(in Chinese with English abstract)

Identification algorithm and application of peanut kernel integrity based on convolutional neural network

Zhao Zhiheng1, Song Huan1, Zhu Jiangbo1, Lu Lei1, Sun Lei2

(1. School of Electrical Engineering and Automation, Harbin Institute of Technology, Harbin 150001, China; 2. Shanghai Anxi Machinery Manufacturing Co., Ltd., Shanghai 201109, China)

Aiming at the shortcomings of existing color sorter machines for crop sorting, such as slow processing speed, low accuracy, and dependence on empirical values, a granular crop integrity identification algorithm based on a convolutional neural network was proposed. Taking the classification of intact peanuts, skin-damaged peanuts and kernel-damaged peanuts as an example, the three types of peanut images were acquired. After comparing the filtering effects of mean filtering, median filtering and Gaussian filtering, median filtering was adopted for image preprocessing. 407 effective peanut images were divided into the above three categories and manually labeled. The images were then divided into a training set and a test set, with the three types of peanut images evenly distributed in both sets. A convolutional neural network with 4 convolutional layers, 4 pooling layers and 3 fully connected layers was built to extract the peanut image features. The accuracy of peanut classification tested on the combined CPU (central processing unit) and GPU (graphics processing unit) platform was 90.91%. In contrast, the classification accuracy of a traditional BP neural network was 85.45%. It can be seen that the convolutional neural network algorithm constructed in this paper effectively improved the accuracy of granular crop recognition. In order to further improve the accuracy and real-time performance of the classification algorithm, it was necessary to optimize the established convolutional neural network. Over-fitting refers to the fact that when a model is overly complex, it can "memorize" the random noise in each training sample while failing to "learn" the general tendency of the training data. In this paper, regularization was used to reduce over-fitting, and the experimental results of L1 regularization and L2 regularization were compared. It was shown that L2 regularization on this data set effectively improved the classification accuracy and reduced over-fitting. During training, the neural network was optimized with the back-propagation algorithm, namely gradient descent and the chain rule. The learning rate is an important parameter of the gradient descent algorithm. In this paper, the exponential decay method was used to set the learning rate: a large learning rate was used first to quickly obtain a good solution, and then, as the iterations continued, the learning rate was gradually reduced, making the model more stable in the later stage of training. With this method the accuracy increased rapidly in the early stage of training and more slowly later, and was higher overall than before optimization, achieving the expected effect. The moving average model was used to reduce the influence of noise in the training data on the model and to accelerate training convergence; the experiments showed that the accuracy fluctuation was reduced and the model stability was enhanced. Since the algorithm needed to be applied in the color sorting system to judge and process the material on the conveyor belt in real time, the real-time requirement was high. Considering that the image information of peanuts is relatively simple, the network structure could be simplified to improve real-time performance. The simplified convolutional neural network consisted of 2 convolutional layers, 2 pooling layers, and 2 fully connected layers.
The final optimization scheme included L2 norm regularization, an exponentially decaying learning rate, the moving average model and the simplified network structure. The accuracy of the optimized classification algorithm on the peanut data set was 98.18%, and the average processing time for detecting one peanut image was 18.3 ms, which demonstrates that the optimized convolutional neural network significantly improved the classification accuracy and real-time performance. The research work in this paper shows that the application of deep learning in the crop sorting field is feasible and effective.

        agricultural products; image processing; recognition; convolutional neural network; feature extraction; color sorting system; peanut particle screening

doi: 10.11975/j.issn.1002-6819.2018.21.023

CLC number: TP391.41          Document code: A          Article ID: 1002-6819(2018)-21-0195-07

Received: 2018-05-01          Revised: 2018-09-26

Supported by the National Science and Technology Major Project of China (2014zx04001171)

Zhao Zhiheng, from Harbin, Heilongjiang, China; professor and doctoral supervisor; research interests: electromagnetic fields and embedded systems. Email: zhzhhe@hit.edu.cn


Zhao Zhiheng, Song Huan, Zhu Jiangbo, Lu Lei, Sun Lei. Identification algorithm and application of peanut kernel integrity based on convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(21): 195-201. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2018.21.023 http://www.tcsae.org
