Estimating growth related traits of winter wheat at seedling stages based on RGB images and convolutional neural network
Ma Juncheng1, Liu Hongjie2, Zheng Feixiang1, Du Keming1※, Zhang Lingxian3, Hu Xin2, Sun Zhongfu1
(1. Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China; 2. Wheat Research Institute, Shangqiu Academy of Agriculture and Forestry Sciences, Shangqiu 476000, China; 3. College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China)
To address the problems that existing computer-vision methods for estimating seedling-stage growth parameters of winter wheat are susceptible to noise and depend heavily on hand-crafted features, this study combined image processing and deep learning to propose a convolutional neural network (CNN) based method for estimating growth parameters of winter wheat at the seedling stage. Taking visible-light (RGB) images of the seedling-stage winter wheat canopy as input, a CNN model suited to estimating seedling-stage growth parameters was constructed; the relationship between canopy images and growth parameters was established by learning, enabling accurate field-scale estimation of the leaf area index (LAI) and above-ground biomass (AGB) of winter wheat at the seedling stage. To verify the effectiveness of the method, a linear regression model with canopy cover (CC) as the predictor variable and two feature-based regressors, random forest (RF) and support vector machine regression (SVM), were used for comparison, with the coefficient of determination (R²) and the normalized root mean square error (NRMSE) used to quantify estimation accuracy. The results showed that the proposed method outperformed all comparison methods: for AGB, R² was 0.791 7 and NRMSE was 24.37%; for LAI, R² was 0.825 6 and NRMSE was 23.33%. The study can serve as a reference for seedling-stage growth monitoring and precision field management of winter wheat.
crops; growth; parameter estimation; winter wheat; seedling stage; leaf area index; above ground biomass; convolutional neural network
Leaf area index (LAI) and above-ground biomass (AGB) are two key parameters characterizing the growth of winter wheat [1]. Field-scale estimation of LAI and AGB is of great significance for seedling-stage growth monitoring and precision field management. Traditional measurement of LAI and AGB requires destructive field sampling followed by manual measurement and analysis, which is inefficient and labor-intensive and cannot meet the demand for high-throughput, automated plant phenotyping [2-4]. Remote sensing is currently one of the main non-destructive approaches to measuring winter wheat growth parameters: vegetation indices computed from canopy spectral data are regressed against ground-truth measurements, enabling non-destructive estimation of LAI and AGB [1,5-8]. However, because spectral data acquisition requires dedicated instruments, this approach has drawbacks in cost and convenience [2,9].
Visible-light (RGB) images are inexpensive and easy to acquire [10-14]. With computer vision techniques, digital features extracted from RGB images can be fitted accurately to LAI and AGB [11,15-18]. For example, Chen Yuqing et al. [4] developed a rapid LAI measurement system for winter wheat on the Android platform, which segments the canopy using the H and V components of the HSV image and then computes LAI from the segmented canopy image; the system's measurements showed a good linear relationship with measured LAI. Cui Rixian et al. [19] extracted canopy cover and several other color features from RGB images and estimated above-ground biomass of winter wheat using stepwise regression and a BP neural network, showing that canopy cover combined with a BP neural network enables accurate AGB estimation. Although these computer-vision methods have achieved promising results, two problems remain [20-21]: 1) susceptibility to noise, since field images of winter wheat contain substantial noise caused by uneven illumination and complex backgrounds, which severely affects the accuracy of image segmentation and feature extraction; and 2) strong dependence on image features, where hand-designed features typically generalize poorly, making these methods difficult to extend to new settings.
The convolutional neural network (CNN) is one of the most effective deep learning methods available. It takes images directly as input and achieves high recognition accuracy [22-24], and has been widely applied to weed and pest recognition [25-26], plant disease and stress diagnosis [20,24], and agricultural image segmentation [27-29]. This study investigates CNN-based estimation of winter wheat growth parameters at the seedling stage: taking seedling-stage RGB canopy images as input, a CNN automatically learns features from the canopy images and models the relationship between the images and the growth parameters, enabling rapid field-scale estimation of LAI and AGB at the seedling stage and providing effective support for growth monitoring and precision field management of winter wheat.
The experiment was conducted from October 2017 to June 2018 at the field experiment station of the Shangqiu Academy of Agriculture and Forestry Sciences, Henan Province, China. The winter wheat cultivar was Guomai 301, sown on October 14, 2017. Twelve plots of 2.4 m × 5 m were established, with three 1 m × 1 m image sampling areas in each plot. A Canon 600D digital camera (18 effective megapixels; maximum image resolution 5 184 × 3 456 pixels) was used to photograph each sampling area. During acquisition, the camera was mounted on a tripod 1.5 m directly above the sampling area with the lens pointing vertically downward; optical zoom was not used and the flash was kept off. Seventeen acquisition sessions were carried out during the experiment, yielding 612 RGB canopy images of seedling-stage winter wheat; the acquisition dates are listed in Table 1.
Table 1 Acquisition dates of winter wheat canopy images at the seedling stage
Images were saved in JPG format at the original resolution of 5 184 × 3 456 pixels. After acquisition, the portions of each image outside the image sampling area were removed by manual cropping.
The canopy image dataset was divided into training, validation, and test sets. To enlarge the dataset and avoid overfitting, the data were augmented: the original images were rotated by 90°, 180°, and 270°, and then flipped horizontally and vertically. To help the estimation model cope with illumination noise under field conditions, the canopy images were also converted to HSV space and the V channel was adjusted to vary image brightness, simulating changes in field illumination and further enlarging the dataset [27]. Augmentation expanded the original dataset 26-fold, to 15 912 canopy images, of which 8 486, 2 122, and 5 304 were assigned to the training, validation, and test sets, respectively (training and test sets were split 7:3, with the validation set taken as 20% of the training set). After augmentation, considering the network structure, practical efficiency, training time, computational cost, and hardware constraints, the images were resized to 96 × 96 pixels to reduce the number of CNN parameters.
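The augmentation steps above can be sketched as follows. The brightness factors are illustrative assumptions, and the HSV V-channel adjustment is implemented as an equivalent clipped scaling of the RGB values: scaling RGB by a factor scales V = max(R, G, B) by the same factor while leaving hue and saturation unchanged.

```python
import numpy as np

def augment(image, brightness_factors=(0.8, 1.2)):
    """Generate augmented copies of an RGB image (H x W x 3, uint8):
    rotations by 90/180/270 degrees, horizontal and vertical flips,
    and brightness perturbations emulating HSV V-channel adjustment."""
    variants = [image]
    # Rotations by 90, 180 and 270 degrees.
    for k in (1, 2, 3):
        variants.append(np.rot90(image, k))
    # Horizontal and vertical flips of the original image.
    variants.append(np.flip(image, axis=1))
    variants.append(np.flip(image, axis=0))
    # Brightness perturbation: scaling RGB scales V = max(R, G, B)
    # and leaves hue and saturation unchanged.
    out = []
    for v in variants:
        out.append(v)
        for f in brightness_factors:
            out.append(np.clip(v.astype(np.float32) * f, 0, 255).astype(np.uint8))
    return out

# Example: a dummy 96 x 96 canopy image yields 6 geometric variants,
# each with 2 extra brightness variants (18 images in total).
img = np.zeros((96, 96, 3), dtype=np.uint8)
aug = augment(img)
print(len(aug))  # 18
```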
Seedling-stage LAI and AGB data were collected at the same time as the images. AGB was obtained by destructive sampling: five winter wheat plants were randomly selected in each plot (all outside the image sampling areas), oven-dried, and weighed. The mean dry mass of the five plants multiplied by the corresponding plant density gave the measured AGB for the plot. LAI was calculated using the specific leaf weight method [30].
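As a worked example of the plot-level AGB computation described above (mean dry mass of the five sampled plants multiplied by plant density); the dry masses and density below are hypothetical values for illustration only:

```python
# Hypothetical dry masses (g) of the five sampled plants and plant density.
dry_masses_g = [1.8, 2.1, 1.9, 2.0, 2.2]  # g per plant (illustrative)
density = 300.0                            # plants per square metre (illustrative)

mean_dry_mass = sum(dry_masses_g) / len(dry_masses_g)  # g per plant
agb = mean_dry_mass * density                          # g per square metre
print(round(agb, 1))  # 600.0
```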
The structure of the CNN model is shown in Fig. 1. The input is a seedling-stage canopy image of size 96 × 96 × 3 (width × height × color channels). The model contains four convolutional layers, three pooling layers, and two fully connected layers. The convolutional layers use 5 × 5 kernels to extract image features, with 32, 64, 128, and 256 kernels in the four layers, respectively [23]. To keep the feature-map sizes integral, zero padding (padding = 1) is applied in convolutional layer 2. The pooling layers use 2 × 2 windows with a stride of 2 and average pooling. Fully connected layer 1 has 500 hidden neurons with a dropout rate of 0.5; fully connected layer 2 has 2 neurons, matching the number of estimated parameters, also with a dropout rate of 0.5. The output layer gives the canopy LAI and AGB.
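The feature-map sizes implied by this architecture can be checked with a short calculation: valid 5 × 5 convolutions throughout, except for the padding in convolutional layer 2, each of the first three followed by 2 × 2 average pooling with stride 2. Without the padding in layer 2, the map after convolutional layer 3 would be 17 × 17 and could not be pooled to an integer size, which is why the padding is needed.

```python
def conv_out(size, kernel=5, padding=0):
    """Output size of a convolution with the given kernel and padding."""
    return size + 2 * padding - kernel + 1

def pool_out(size, window=2, stride=2):
    """Output size of a pooling layer."""
    return (size - window) // stride + 1

s = 96
s = pool_out(conv_out(s))             # conv1 (32 kernels): 92 -> pool: 46
s = pool_out(conv_out(s, padding=1))  # conv2 (64 kernels, padding=1): 44 -> pool: 22
s = pool_out(conv_out(s))             # conv3 (128 kernels): 18 -> pool: 9
s = conv_out(s)                       # conv4 (256 kernels): 5
print(s)  # 5: the 5 x 5 x 256 maps feed the 500-neuron fully connected layer
```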
The CNN model was trained with stochastic gradient descent (SGD), with the momentum factor fixed at 0.9 throughout training. The learning rate and mini-batch size were determined by grid search, choosing the combination with the highest estimation accuracy. The initial learning rate was set to 0.001 and dropped by a factor of 10 every 20 epochs; the mini-batch size was set to 32 and the maximum number of training epochs to 300.
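The step-decay learning-rate schedule described above (a factor-of-10 drop every 20 epochs from an initial rate of 0.001) can be expressed as a simple function of the epoch index; the epoch-from-zero convention here is an assumption:

```python
def learning_rate(epoch, base_lr=0.001, drop_every=20, drop_factor=0.1):
    """Step-decay schedule: the rate is multiplied by drop_factor
    after every drop_every epochs (epochs counted from 0)."""
    return base_lr * drop_factor ** (epoch // drop_every)

print(learning_rate(0))   # 0.001
print(learning_rate(20))  # dropped by a factor of 10
print(learning_rate(45))  # dropped by a factor of 100
```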
To verify the effectiveness of the proposed estimation method, conventional estimation methods were used for comparison. Previous studies have shown a good linear relationship between canopy cover (CC) and winter wheat growth parameters [12,15,19,31-32]; therefore, a linear regression (LR) model with CC as the predictor variable (LR-CC) was adopted as one comparison method. CC was computed as the proportion of vegetation pixels among all pixels in the canopy image [12]. Random forest (RF) and support vector machine regression (SVM), two conventional regressors combined with feature extraction, were also used for comparison.
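A minimal sketch of the LR-CC baseline: CC is the fraction of vegetation pixels, classified here with a simple green-dominance rule that stands in for Canopeo's calibrated color-ratio thresholds (an illustrative assumption), and the linear model is fitted with `numpy.polyfit` on hypothetical sample data.

```python
import numpy as np

def canopy_cover(image):
    """Fraction of vegetation pixels in an RGB image (H x W x 3, uint8).
    The green-dominance rule below is an illustrative stand-in for
    Canopeo's calibrated thresholds."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    vegetation = (g > r) & (g > b) & (2 * g - r - b > 20)
    return vegetation.mean()

# Fit LR-CC: growth parameter = a * CC + b, on hypothetical plot data.
cc = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # canopy cover per plot
lai = np.array([0.4, 1.0, 1.6, 2.2, 2.8])      # measured LAI (illustrative)
a, b = np.polyfit(cc, lai, 1)
print(a, b)  # slope and intercept of the LR-CC model
```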
Because the acquired seedling-stage canopy images contain background noise, the canopy must first be segmented to remove the background before image features are extracted for the comparison methods. Canopeo [15,17] was used for canopy segmentation, and features were then extracted from the segmented canopy images. The features comprised two color features, the first moment (mean) and second moment (std) of each of the nine color components of the RGB, HSV, and L*a*b* color spaces, and four texture features, energy, correlation, contrast, and homogeneity, for a total of 54 image features. After extraction, Pearson correlation analysis was used to select the features most strongly correlated with the parameter to be estimated for model construction.
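The color moments and the Pearson-based feature selection can be sketched as follows; the gray-level co-occurrence texture features are omitted for brevity, and the correlation threshold is an illustrative assumption (the paper selects features by relative correlation strength rather than a fixed cutoff).

```python
import numpy as np

def color_moments(image):
    """First moment (mean) and second moment (std) of each channel of an
    H x W x 3 image. Applied to the RGB, HSV and L*a*b* versions of an
    image, this yields the 18 color features (9 components x 2 moments)."""
    pixels = image.reshape(-1, image.shape[-1]).astype(np.float64)
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])

def select_features(X, y, threshold=0.5):
    """Indices of the columns of X whose absolute Pearson correlation
    with y exceeds the threshold (threshold is illustrative)."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.where(np.abs(r) > threshold)[0]

# Deterministic toy data: column 0 is perfectly correlated with y,
# column 1 alternates and is nearly uncorrelated.
y = np.arange(20, dtype=float)
X = np.column_stack([2.0 * y + 1.0, np.tile([1.0, -1.0], 10)])
print(select_features(X, y))  # [0]
```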
Note: 3@96×96 denotes 3 feature maps of 96×96 pixels, and similarly below. Convolutional layers 1 to 4 use 5×5 kernels, with 32, 64, 128, and 256 kernels, respectively; fully connected layer 1 has 500 neurons and fully connected layer 2 has 2 neurons. Local connections use the ReLU activation function.
Linear regression analysis between the growth parameters estimated by the models and the measured values was used to evaluate estimation accuracy quantitatively, with the coefficient of determination (R²) and the normalized root mean square error (NRMSE) as evaluation metrics.
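The two evaluation metrics can be computed as below. NRMSE is taken here as the RMSE normalized by the mean of the measured values, a common convention that the paper does not spell out, so the normalizer is an assumption.

```python
import numpy as np

def r_squared(measured, estimated):
    """Coefficient of determination R^2 of estimated vs measured values."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    ss_res = np.sum((measured - estimated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def nrmse(measured, estimated):
    """Normalized RMSE in percent, normalized by the mean of the
    measured values (the normalizer is an assumption)."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    rmse = np.sqrt(np.mean((measured - estimated) ** 2))
    return 100.0 * rmse / measured.mean()

measured = [1.0, 2.0, 3.0, 4.0]
estimated = [1.1, 1.9, 3.2, 3.8]
print(round(r_squared(measured, estimated), 3))  # 0.98
print(round(nrmse(measured, estimated), 2))      # 6.32
```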
The CNN model was implemented in MATLAB R2018a. The software environment was Windows 10 Professional; the hardware comprised an Intel Xeon E5-2620 CPU at 2.1 GHz, 32 GB of RAM, and an NVIDIA Quadro P4000 GPU.
Fig. 2 shows the CNN training process with SGD. The training and validation losses decreased steadily as the number of iterations increased, and the model converged within a small number of iterations, indicating effective training. The trained CNN model was then used to estimate seedling-stage canopy AGB and LAI; the results are shown in Figs. 3 and 4.
Fig.2 Training and validation loss curves
Fig.3 AGB estimation results based on the CNN
The estimated growth parameters show a good linear relationship with the measured values. For AGB, the CNN model achieved high accuracy on the training and validation sets, with R² above 0.9 and NRMSE below 5%; on the test set the accuracy dropped somewhat relative to the training and validation sets but remained good, with R² of 0.791 7 and NRMSE of 24.37%. The LAI results were similar: R² exceeded 0.98 on the training and validation sets with NRMSE below 25%, and on the test set R² was 0.825 6 and NRMSE was 23.33%. These results indicate that the CNN-based model can accurately estimate seedling-stage growth parameters of winter wheat.
Fig.4 LAI estimation results based on the CNN
2.2.1 Comparison with the LR-CC method
Before canopy segmentation with Canopeo, the canopy images were resized to 1 000 × 1 000 pixels to reduce computation and improve efficiency. Since each plot contained three image sampling areas, the CC value of a plot was taken as the mean of its three sampling areas. On this basis, a CC dataset was built for constructing the LR-CC model. After outlier detection (removal of CC values biased by excessive illumination), the CC dataset was divided into a training set of 144 samples and a test set of 48 samples. The seedling-stage LAI and AGB estimation results of the LR-CC model are shown in Fig. 5.
Fig.5 Growth parameter estimation results based on the linear regression model
LR-CC estimated AGB with R² of 0.724 6 and NRMSE of 29.31%, and LAI with R² of 0.794 9 and NRMSE of 35.18%. Overall, LR-CC performed worse than the CNN model.
2.2.2 Comparison with the RF and SVM methods
Before estimating the growth parameters with RF and SVM, image features were selected using the Pearson correlation coefficient. The 54 extracted features were correlated with the measured AGB and LAI data, and the features with higher correlations were selected for model construction; the selection results are given in Tables 2 and 3.
Table 2 Feature selection results for winter wheat seedling canopy images (correlation with AGB)
Note: ** represents significance at the 0.01 level. The same below.
Table 3 Feature selection results for winter wheat seedling canopy images (correlation with LAI)
The correlation analysis showed that 16 of the original features were highly correlated with AGB and 7 with LAI; accordingly, a 16-feature dataset was built for AGB estimation and a 7-feature dataset for LAI estimation. The two datasets were split into training and test sets with the same ratio as the CC dataset, and RF and SVM models were used to estimate seedling-stage AGB and LAI; the results are shown in Fig. 6.
As Fig. 6 shows, for AGB estimation RF and SVM performed similarly to the LR-CC model: RF achieved R² of 0.773 8 and NRMSE of 28.85%, slightly better than SVM, which achieved R² of 0.645 5 and NRMSE of 53.73%. For LAI, neither RF nor SVM was accurate: RF achieved R² of 0.18 and NRMSE of 29.65%, and SVM achieved R² of 0.189 4 and NRMSE of 74.68%, far worse than the LR-CC model.
2.2.3 Discussion
The comparison shows that the proposed CNN-based method estimates field-scale seedling-stage AGB and LAI more accurately than the conventional estimation methods. The CNN-based method requires no segmentation of the wheat images and is thus more direct: it takes canopy images as input and automatically learns and selects features from the training data, avoiding the image segmentation and hand-crafted feature extraction steps of conventional methods. Moreover, features learned by a CNN generalize better [20,21,33], which further increases the method's potential for practical use under field conditions. In contrast, the three comparison methods, LR-CC, RF, and SVM, require image segmentation to extract the canopy before feature extraction. Illumination and background noise in the field strongly affect segmentation, and the long, narrow leaves of winter wheat make canopy segmentation particularly difficult. Canopeo [15,17] is one of the most widely used canopy segmentation methods, but it is based on color information, which is easily disturbed by illumination and background noise [21], reducing the accuracy and robustness of the comparison methods. In addition, all three comparison methods rely on manually designed low-level image features, whose limited generalization ability makes these methods difficult to apply under real field conditions.
Based on image processing and deep learning, this study proposed a CNN-based method for estimating growth parameters of winter wheat at the seedling stage. The main conclusions are as follows:
1) Taking RGB canopy images as input, a CNN model suited to estimating seedling-stage growth parameters of winter wheat was proposed, achieving accurate field-scale estimation of AGB and LAI, with R² of 0.791 7 and NRMSE of 24.37% for AGB, and R² of 0.825 6 and NRMSE of 23.33% for LAI.
2) The estimation accuracy was compared quantitatively against a linear regression model using canopy cover as the predictor variable, random forest, and support vector machine regression. The linear regression model achieved R² of 0.724 6 and NRMSE of 29.31% for AGB, and R² of 0.794 9 and NRMSE of 35.18% for LAI; random forest achieved R² of 0.773 8 and NRMSE of 28.85% for AGB, and R² of 0.18 and NRMSE of 29.65% for LAI; support vector machine regression achieved R² of 0.645 5 and NRMSE of 53.73% for AGB, and R² of 0.189 4 and NRMSE of 74.68% for LAI. The proposed CNN-based method was more accurate than all comparison methods and is better suited to estimating seedling-stage growth parameters of winter wheat under real field conditions.
The proposed CNN-based method achieves accurate field-scale estimation of winter wheat growth parameters and can support seedling-stage growth monitoring and precision field management of winter wheat.
[1] Xu Xu, Chen Guoqing, Wang Liang, et al. Monitoring leaf area index and biomass above ground of winter wheat based on sensitive spectral waveband and corresponding image characteristic[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(22): 169-175. (in Chinese with English abstract)
[2] Zhang L, Verma B, Stockwell D, et al. Density weighted connectivity of grass pixels in image frames for biomass estimation[J]. Expert Systems with Applications, 2018, 101: 213-227.
[3] Walter J, Edwards J, McDonald G, et al. Photogrammetry for the estimation of wheat biomass and harvest index[J]. Field Crops Research, 2018, 216: 165-174.
[4] Chen Yuqing, Yang Wei, Li Minzan, et al. Measurement system of winter wheat LAI based on Android mobile platform[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(Supp): 123-128. (in Chinese with English abstract)
[5] Gao Lin, Yang Guijun, Yu Haiyang, et al. Retrieving winter wheat leaf area index based on unmanned aerial vehicle hyperspectral remote sensing[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(22): 113-120. (in Chinese with English abstract)
[6] Schirrmann M, Hamdorf A, Garz A, et al. Estimating wheat biomass by combining image clustering with crop height[J]. Computers and Electronics in Agriculture, 2016, 121: 374-384.
[7] Rasmussen J, Ntakos G, Nielsen J, et al. Are vegetation indices derived from consumer-grade cameras mounted on UAVs sufficiently reliable for assessing experimental plots?[J]. European Journal of Agronomy, 2016, 74: 75-92.
[8] Su Wei, Zhang Mingzheng, Zhan Junge, et al. Estimation method of crop leaf area index based on airborne LiDAR data[J]. Transactions of the Chinese Society for Agricultural Machinery, 2016, 47(3): 272-277. (in Chinese with English abstract)
[9] Li Ming, Zhang Changli, Fang Junlong. Extraction of leaf area index of wheat based on image processing technique[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2010, 26(1): 205-209. (in Chinese with English abstract)
[10] Hamuda E, Mc Ginley B, Glavin M, et al. Automatic crop detection under field conditions using the HSV colour space and morphological operations[J]. Computers and Electronics in Agriculture, 2017, 133: 97-107.
[11] González-Esquiva J M, Oates M J, García-Mateos G, et al. Development of a visual monitoring system for water balance estimation of horticultural crops using low cost cameras[J]. Computers and Electronics in Agriculture, 2017, 141: 15-26.
[12] Casadesús J, Villegas D. Conventional digital cameras as a tool for assessing leaf area index and biomass for cereal breeding[J]. Journal of Integrative Plant Biology, 2014, 56(1): 7-14.
[13] Ma Juncheng, Du Keming, Zheng Feixiang, et al. A segmenting method for greenhouse cucumber downy mildew images based on visual spectral and support vector machine[J]. Spectroscopy and Spectral Analysis, 2018, 38(6): 1863-1868. (in Chinese with English abstract)
[14] Ma J, Li X, Wen H, et al. A key frame extraction method for processing greenhouse vegetables production monitoring video[J]. Computers and Electronics in Agriculture, 2015, 111: 92-102.
[15] Chung Y S, Choi S C, Silva R R, et al. Case study: Estimation of sorghum biomass using digital image analysis with Canopeo[J]. Biomass and Bioenergy, 2017, 105: 207-210.
[16] Neumann K, Klukas C, Friedel S, et al. Dissecting spatiotemporal biomass accumulation in barley under different water regimes using high-throughput image analysis[J]. Plant, Cell and Environment, 2015, 38(10): 1980-1996.
[17] Patrignani A, Ochsner T E. Canopeo: A powerful new tool for measuring fractional green canopy cover[J]. Agronomy Journal, 2015, 107(6): 2312-2320.
[18] Virlet N, Sabermanesh K, Sadeghi-Tehran P, et al. Field scanalyzer: An automated robotic field phenotyping platform for detailed crop monitoring[J]. Functional Plant Biology, 2017, 44(1): 143-153.
[19] Cui Rixian, Liu Yadong, Fu Jindong. Estimation of winter wheat biomass using visible spectral and BP based artificial neural network[J]. Spectroscopy and Spectral Analysis, 2015, 35(9): 2596-2601. (in Chinese with English abstract)
[20] Ma Juncheng, Du Keming, Zheng Feixiang, et al. Disease recognition system for greenhouse cucumbers based on deep convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(12): 186-192. (in Chinese with English abstract)
[21] Ma J, Du K, Zhang L, et al. A segmentation method for greenhouse vegetable foliar disease spots images using color information and region growing[J]. Computers and Electronics in Agriculture, 2017, 142: 110-117.
[22] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]// Advances in Neural Information Processing Systems, 2012: 1097-1105.
[23] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[C]// International Conference on Learning Representations, 2014: 1-14.
[24] Ghosal S, Blystone D, Singh A K, et al. An explainable deep machine vision framework for plant stress phenotyping[J]. Proceedings of the National Academy of Sciences of the United States of America, 2018, 115(18): 4613-4618.
[25] Ferreira A D S, Freitas D M, Silva G G D, et al. Weed detection in soybean crops using ConvNets[J]. Computers and Electronics in Agriculture, 2017, 143: 314-324.
[26] Ding W, Taylor G. Automatic moth detection from trap images for pest management[J]. Computers and Electronics in Agriculture, 2016, 123: 17-28.
[27] Xiong X, Duan L, Liu L, et al. Panicle-SEG: A robust image segmentation method for rice panicles in the field based on deep learning and superpixel optimization[J]. Plant Methods, 2017, 13(1): 1-15.
[28] Duan Lingfeng, Xiong Xiong, Liu Qian, et al. Field rice panicles segmentation based on deep full convolutional neural network[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(12): 202-209. (in Chinese with English abstract)
[29] Liu Libo, Cheng Xiaolong, Lai Junchen. Segmentation method for cotton canopy image based on improved fully convolutional network model[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(12): 193-201. (in Chinese with English abstract)
[30] Liu Rongyuan, Wang Jihua, Yang Guijun, et al. Comparison of ground-based LAI measuring methods on winter wheat[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2011, 27(3): 220-224. (in Chinese with English abstract)
[31] Baresel J P, Rischbeck P, Hu Y, et al. Use of a digital camera as alternative method for non-destructive detection of the leaf chlorophyll content and the nitrogen nutrition status in wheat[J]. Computers and Electronics in Agriculture, 2017, 140: 25-33.
[32] Ma J, Li Y, Du K, et al. Estimating above ground biomass of winter wheat at early growth stages using digital images and deep convolutional neural network[J]. European Journal of Agronomy, 2019, 103: 117-129.
[33] Ma J, Du K, Zheng F, et al. A recognition method for cucumber diseases using leaf symptom images based on deep convolutional neural network[J]. Computers and Electronics in Agriculture, 2018, 154: 18-24.
Estimating growth related traits of winter wheat at seedling stages based on RGB images and convolutional neural network
Ma Juncheng1, Liu Hongjie2, Zheng Feixiang1, Du Keming1※, Zhang Lingxian3, Hu Xin2, Sun Zhongfu1
(1. Institute of Environment and Sustainable Development in Agriculture, Chinese Academy of Agricultural Sciences, Beijing 100081, China; 2. Wheat Research Institute, Shangqiu Academy of Agriculture and Forestry Sciences, Shangqiu 476000, China; 3. College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China)
Leaf area index (LAI) and above-ground biomass (AGB) are two critical traits indicating the growth of winter wheat. Current non-destructive methods for measuring LAI and AGB are subject to two limitations: they are susceptible to environmental noise and depend heavily on manually designed features. In this study, an easy-to-use method for estimating growth-related traits of winter wheat at early growth stages was proposed using digital images captured under field conditions and a convolutional neural network (CNN). RGB images of the winter wheat canopy in 12 plots were captured with a low-cost camera at the early growth stages at the field station of the Shangqiu Academy of Agriculture and Forestry Sciences, Henan, China. Using the canopy images as input, a CNN structure suitable for estimating growth-related traits was explored and trained to learn the relationship between the canopy images and the corresponding traits. Based on the trained CNN, LAI and AGB of winter wheat at early growth stages were estimated. For comparison with the CNN, conventional methods for estimating LAI and AGB were used in conjunction with a collection of color and texture feature extraction techniques: a linear regression model with canopy cover as the predictor variable (LR-CC), random forest (RF), and support vector machine regression (SVR). Because the canopy images were captured at early growth stages, they contained pixels of non-vegetation elements such as soil; it was therefore necessary to segment the vegetation for the comparison methods before feature extraction, which was done with Canopeo. Linear regression was used to compare the accuracy of the methods.
The normalized root mean square error (NRMSE) and the coefficient of determination (R²) were used as the criteria for model evaluation. The results showed that the CNN outperformed the comparison methods on both metrics, with strong correlations between the measured traits and those estimated by the CNN. For LAI, R² was 0.825 6 and NRMSE was 23.33%; for AGB, R² was 0.791 7 and NRMSE was 24.37%. Compared with the other methods, the CNN provided a more direct estimate of AGB and LAI: vegetation segmentation was unnecessary because the CNN learned to use the informative features and ignore the uninformative ones, which both reduced the computational cost and increased the efficiency of the estimation. In contrast, the performance of the comparison methods depended heavily on the segmentation results, since accurate segmentation guarantees accurate inputs to feature extraction; however, canopy images captured under real field conditions suffer from uneven illumination and complicated backgrounds, making robust vegetation segmentation a major challenge. These findings indicate that AGB and LAI of winter wheat at early growth stages can be estimated robustly by a CNN, which can support growth monitoring and field management of winter wheat.
crops; growth; parameter estimation; winter wheat; seedling stages; leaf area index; above ground biomass; convolutional neural network
Received: 2018-09-27
Revised: 2019-01-17
Supported by the National Natural Science Foundation of China (31801264) and the National Key Research and Development Program of China (2016YFD0300606 and 2017YFD0300402)
Ma Juncheng, assistant researcher, Ph.D., whose research focuses on computer-vision-based crop information acquisition and analysis. Email: majuncheng@caas.cn
※Du Keming, assistant researcher, Ph.D., whose research focuses on the agricultural Internet of Things. Email: dukeming@caas.cn
10.11975/j.issn.1002-6819.2019.05.022
S512.1+1;TP391.41
A
1002-6819(2019)-05-0183-07
Ma Juncheng, Liu Hongjie, Zheng Feixiang, Du Keming, Zhang Lingxian, Hu Xin, Sun Zhongfu. Estimating growth related traits of winter wheat at seedling stages based on RGB images and convolutional neural network [J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2019, 35(5): 183-189. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2019.05.022 http://www.tcsae.org