
Rapid detection technology for broken-winged broiler carcass based on machine vision

Wu Jiangchun, Wang Huhu※, Xu Xinglian

Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, Issue 22

(Key Laboratory of Meat Processing and Quality Control, Ministry of Education, Nanjing Agricultural University, Nanjing 210095, China)

Abstract: To achieve rapid detection of broken-winged chicken carcasses during broiler slaughter and improve production efficiency, this study collected 1 053 images of broiler carcasses on a slaughter line with a machine vision system and constructed a method for rapidly identifying broken-wing defects. Front views of the carcasses were acquired with the machine vision device, and after image preprocessing, 11 characteristic values were extracted: the distances from the left and right ends of the carcass to its centroid and their difference (d1, d2, dc), the heights of the lowest points of the two wings and their difference (h1, h2, hc), the areas of the two wings and their ratio (S1, S2, Sr), rectangularity (R), and width-length ratio (rate); principal component analysis then reduced these to 8 principal components. Linear discriminant, quadratic discriminant, random forest, support vector machine, BP neural network, and VGG16 models were established and compared by F1-score and total accuracy. Among all model combinations, VGG16 achieved the highest F1-score and total accuracy, 94.35% and 93.28% respectively, with an average prediction speed of 10.34 images/s. The VGG16 model classifies well and can provide a technical reference for the rapid identification and classification of broken wings on chicken carcasses.

Keywords: machine vision; machine learning; chicken carcass; broken-wing detection

0 Introduction

In recent years, under the influence of avian influenza at home and abroad, African swine fever, and the COVID-19 pandemic, China has gradually phased out live-poultry market trading in favor of centralized slaughter, and the broiler industry has been upgrading from live-bird sales to chilled chicken and cooked products [1]. As market demand for chicken rises, consumers' quality expectations rise with it; purchase decisions for chilled chicken are often made by sensory evaluation, and carcasses with obvious appearance defects are rejected. During broiler slaughter, factors such as breed, rearing period, slaughter process, and equipment cause varying degrees of carcass damage [2]. According to industry surveys, the main carcass defects on production lines are congestion, broken bones, and skin damage, with broken bones occurring most often in the wing regions. Although most broiler slaughterhouses have largely mechanized each production step, carcass quality inspection still relies on experienced workers judging by eye [3]. The manual criterion for a broken wing is the absence of visually obvious wing deformation (outward/oblique turning, fracture, drooping), exposed wing bone, or partial loss of the wing region. Such judgments depend heavily on experience, vary considerably between inspectors, and are strongly affected by subjective factors, so missed and false detections, and the resulting economic losses, are unavoidable. Undetected broken-wing products reaching the market lower overall product quality and damage the company's image. A technology that can replace manual inspection with efficient, objective detection is therefore urgently needed.

Machine vision mimics the function of human sight and can analyze and judge the object being identified [4]. A machine vision system comprises hardware and software: the hardware typically includes the light source, camera, industrial controller, and computer, while the software includes the image-processing programs [5-8]. With the deepening of intelligent manufacturing in China, machine vision has been widely applied across fields thanks to its efficiency, accuracy, and non-destructive nature [9]. In agriculture it is used for defect detection and grading of fruits, vegetables, nuts, and other products, and for monitoring crop growth and controlling pests and diseases [10-13]. It is also used for appearance-quality and freshness inspection of other foods and raw materials, such as beef freshness measurement [14], double-yolk egg identification [15], and the detection and sorting of malformed biscuits [16]. In poultry slaughter and processing, machine vision has been studied for weight prediction of chicken carcasses and their cut-up parts [7,17-21], quality inspection [22-24], automatic evisceration [25-27], and automatic cutting [28]. However, most existing quality-inspection studies target already-separated parts such as wings or breasts, with little work on the whole carcass, and the defect types studied concentrate on congestion, surface contaminants, freshness, and wooden breast; broken-wing identification on whole carcasses remains an open gap.

In this study, front views of chicken carcasses were collected with a machine vision system, characteristic values were extracted after image preprocessing, and models for the rapid identification of broken wings were established. The work provides a technical reference for chicken carcass quality inspection and contributes to the automation and intelligentization of broiler slaughter, reduced factory labor, and improved production efficiency.

1 Materials and methods

1.1 Experimental materials

The test objects were broiler carcasses, photographed on a large broiler slaughter line in Jiangsu Province. Broken-winged and qualified carcasses that had passed manual quality inspection were collected from the line and imaged uniformly from the front, yielding 1 053 sample images: 553 broken-winged carcasses and 500 qualified carcasses. Representative broken-wing and normal samples are shown in Fig.1.

Fig.1 Examples of broken-winged and qualified products

1.2 Image acquisition device

The components of the image acquisition device were installed and connected following Zhao et al. [29]; the layout is shown in Fig.2.

Fig.2 Schematic of the image acquisition device

1.3 Broken-wing detection method for chicken carcasses

1.3.1 Image preprocessing

Image noise is useless information that interferes with an image. By source it divides into internal and external noise: external noise arises from changes in the environment, such as illumination intensity and shooting background, while internal noise arises from factors inside the machine vision system, such as vibration and data transmission [30-31]. Preprocessing reduces noise and strengthens image features, benefiting subsequent feature extraction [32-33]. Common preprocessing methods include graying, image enhancement, image segmentation, and morphological processing. This study used the weighted-average method for graying. For image enhancement, the denoising effects of low-pass filtering, high-pass filtering, homomorphic filtering, linear spatial filtering, and two-dimensional median filtering were compared, and a suitable method was chosen according to the change in image clarity before and after processing. As shown in Fig.3, low-pass and high-pass filtering reduced image clarity, and homomorphic filtering brightened the image and lost some information, while linear spatial filtering and two-dimensional median filtering both improved clarity. To choose between the latter two, salt-and-pepper noise was added to the gray image; their removal effects are shown in Fig.4. Two-dimensional median filtering visibly outperformed linear spatial filtering and was therefore selected as the image enhancement method for this study.
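The graying and denoising steps above were implemented in MATLAB in the original work; the following is a minimal NumPy sketch of the same two operations. The luminance weights 0.299/0.587/0.114 are a common choice and an assumption here, since the paper does not list its coefficients.

```python
import numpy as np

def to_gray(rgb):
    # Weighted-average graying (standard luminance weights; assumed, the
    # paper's exact coefficients are not given).
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def median_filter(img, k=3):
    # 2-D median filter with a k x k window (edges padded by reflection);
    # removes salt-and-pepper noise while preserving edges.
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Tiny demo: a flat patch with one salt-noise pixel.
patch = np.full((5, 5), 10.0)
patch[2, 2] = 255.0          # salt noise
clean = median_filter(patch)
```

The median filter replaces the isolated bright pixel with the local median while leaving the flat region untouched, which is exactly the behavior that favored it over linear spatial filtering above.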

This study compared the segmentation effects of maximum-entropy thresholding, Otsu thresholding, iterative thresholding, and K-means. As shown in Fig.5, iterative thresholding performed best among the four methods and was therefore chosen as the image segmentation method for this study.
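The iterative (isodata-style) threshold selection chosen above can be sketched as follows; this is a NumPy stand-in for the paper's MATLAB implementation:

```python
import numpy as np

def iterative_threshold(gray, eps=0.5):
    # Start from the mean grey level, split pixels into two groups, and
    # move the threshold to the midpoint of the two group means until it
    # changes by less than eps.
    t = gray.mean()
    while True:
        low = gray[gray <= t]
        high = gray[gray > t]
        if low.size == 0 or high.size == 0:
            return t
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Demo: dark background (~20) vs bright carcass region (~200).
img = np.array([[20, 22, 200], [21, 198, 202], [19, 20, 201]], dtype=float)
t = iterative_threshold(img)
binary = img > t   # foreground (carcass) mask
```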

Fig.4 Effects of different image enhancement methods on a gray image with added salt-and-pepper noise

Fig.5 Effects of different image segmentation methods

As shown in Fig.5c, the binary image produced by iterative thresholding still contains holes and white noise spots. Hole filling was therefore applied to remove holes inside the carcass region, and the largest connected component was kept to remove white spots outside the carcass, as shown in Fig.6a. Multiplying Fig.6a element-wise with the original RGB image yields an RGB image free of background interference, Fig.6b, which serves as the basis for feature extraction and model building.

Fig.6 Final effect of image preprocessing
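A minimal sketch of the hole-filling and largest-connected-component cleanup, using pure-NumPy BFS stand-ins for the MATLAB morphological operations; 4-connectivity is an assumption here:

```python
import numpy as np
from collections import deque

def _flood(mask, seeds):
    # BFS flood fill over True pixels of `mask`, 4-connectivity.
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    q = deque(s for s in seeds if mask[s])
    for s in q:
        seen[s] = True
    while q:
        i, j = q.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if 0 <= ni < h and 0 <= nj < w and mask[ni, nj] and not seen[ni, nj]:
                seen[ni, nj] = True
                q.append((ni, nj))
    return seen

def fill_holes(fg):
    # Background pixels not reachable from the image border are holes.
    border = [(i, j) for i in range(fg.shape[0]) for j in (0, fg.shape[1]-1)]
    border += [(i, j) for i in (0, fg.shape[0]-1) for j in range(fg.shape[1])]
    outside = _flood(~fg, border)
    return fg | ~outside

def largest_component(fg):
    # Keep only the biggest 4-connected foreground blob (the carcass).
    remaining = fg.copy()
    best = np.zeros_like(fg)
    while remaining.any():
        seed = tuple(np.argwhere(remaining)[0])
        comp = _flood(remaining, [seed])
        if comp.sum() > best.sum():
            best = comp
        remaining &= ~comp
    return best

# Demo: a 3 x 3 carcass blob with a 1-pixel hole, plus a stray noise pixel.
mask = np.zeros((5, 7), dtype=bool)
mask[1:4, 1:4] = True
mask[2, 2] = False               # hole inside the blob
mask[0, 6] = True                # isolated noise pixel
clean = largest_component(fill_holes(mask))
```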

1.3.2 Feature extraction

Comparing broken-wing and qualified products shows that when the wing bone of a carcass is broken, the muscles lack support and the broken wing spreads outward under gravity. Consequently, the distance from the leftmost/rightmost end of the carcass to its centroid is larger than in a qualified carcass, the height of the lowest point of the wing region on the broken side is greater, and the projected area of the wing region on the broken side is larger. Based on these shape and geometric characteristics, the relevant feature values were extracted for all samples.

As shown in Fig.7, the distances from the left and right ends of the carcass to the centroid and their difference were extracted. The centroid of the connected component was computed with MATLAB's regionprops() function and recorded as O1(x0, y0); the leftmost and rightmost points of the carcass region were then located and recorded as L1(x1, y1) and R1(x2, y2).

Note: O1 is the carcass centroid; L1 and R1 are the leftmost and rightmost endpoints of the carcass; d1 and d2 are the distances from the leftmost and rightmost ends to the centroid.

The distance from the leftmost end of the image to the centroid is d1 = x0 − x1, the distance from the rightmost end to the centroid is d2 = x2 − x0, and the absolute difference of the two is dc = |d1 − d2|.
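In the original, the centroid comes from MATLAB's regionprops(); an equivalent NumPy sketch of the three distance features d1, d2, dc is:

```python
import numpy as np

def centroid_distance_features(fg):
    # fg: binary mask of the carcass. The x-coordinate mean of the
    # foreground pixels plays the role of the regionprops() centroid;
    # d1 = x0 - x1, d2 = x2 - x0, dc = |d1 - d2| as in the text.
    ys, xs = np.nonzero(fg)
    x0 = xs.mean()
    x1, x2 = xs.min(), xs.max()
    d1 = x0 - x1
    d2 = x2 - x0
    return d1, d2, abs(d1 - d2)

# Demo: a horizontally symmetric blob gives dc = 0.
mask = np.zeros((3, 7), dtype=bool)
mask[1, 1:6] = True                 # columns 1..5, centroid at x = 3
d1, d2, dc = centroid_distance_features(mask)
```

A broken wing spreading to one side enlarges d1 or d2 and hence dc, which is what makes these features discriminative.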

To compute the areas of the two wings in the image, the wing regions must first be separated from the carcass. Following Qi et al. [18-19], who fitted an ellipse to the carcass trunk to obtain breast length and width, an improved method for segmenting the wing regions from the carcass image was developed. It consists of two steps, ellipse fitting and line splitting; the workflow is shown in Fig.8. As Fig.9a shows, the fitted ellipse removes the trunk pixels and separates the wings from the carcass to a degree, but ellipse fitting alone does not suit carcasses at every angle: as shown in Fig.9b-9d, the wing regions may still adhere to the neck, the legs, and the region between the legs to varying extents, so the adhering pixels must additionally be split with straight lines.

Fig.8 Wing segmentation workflow

Fig.9 Segmentation effect of ellipse fitting

As shown in Fig.10, line splitting proceeds as follows: take the center of the fitted ellipse, recorded as O2(x', y'); translate it downward along the major axis by two-thirds of the major-axis length, then along the minor axis to the left and to the right by two-thirds of the minor-axis length, giving points X1 and X2; the lines O2X1 and O2X2 split the adhering wing and neck pixels.

Translate O2 upward along the major axis by two-fifths of the major-axis length to obtain point X3; the line through X3 parallel to the minor axis splits the adhering wing and leg pixels.

The line connecting O2 and X3 splits the adhering pixels between the two legs.
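The three translation steps above reduce to coordinate arithmetic. The sketch below assumes the fitted ellipse's major axis is vertical in image coordinates (y increasing downward); both this axis-aligned simplification and the function name are assumptions, not the paper's code:

```python
def wing_split_points(cx, cy, a, b):
    # (cx, cy): ellipse centre O2; a: major-axis length; b: minor-axis
    # length. Returns the three split points X1, X2, X3 described in the
    # text, for a vertically oriented ellipse.
    down = cy + 2.0 * a / 3.0            # shift 2/3 of the major axis down
    x1 = (cx - 2.0 * b / 3.0, down)      # X1: left wing/neck split point
    x2 = (cx + 2.0 * b / 3.0, down)      # X2: right wing/neck split point
    x3 = (cx, cy - 2.0 * a / 5.0)        # X3: 2/5 of the major axis up
    return x1, x2, x3

x1, x2, x3 = wing_split_points(cx=10.0, cy=20.0, a=12.0, b=6.0)
```

The lines O2X1 and O2X2 then cut the wing/neck adhesions, the horizontal line through X3 cuts the wing/leg adhesions, and the segment O2X3 separates the two legs.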

According to the positions of the two wings in the image, the left and right wings were selected with the regionprops() function, and their pixel areas were computed with the bwarea() function and recorded as S1 and S2; their ratio is Sr = S1/S2.

Note: In Fig.10a, O2 is the center of the fitted ellipse; X1 and X2 are the intersections of the left-wing and right-wing splitting lines with the ellipse; X3 is the intersection of the leg-splitting line with the ellipse.

As shown in Fig.11, the heights of the lowest points of the two wings and their difference were extracted. For an image of width W and height H, the coordinates of the lowest points of the left and right wings are recorded as P1(x3, y3) and P2(x4, y4); the wing heights are h1 = H − y3 and h2 = H − y4, and their difference is hc = |h1 − h2|.

Note: In Fig.11, the horizontal and vertical axes are the image coordinate axes; H and W are the image height and width; P1 and P2 are the lowest points of the left and right wings; y3 and y4 are their vertical coordinates.

Fig.11 Heights of the lowest points of the two wings and their difference
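A sketch of the height features, assuming (as in Fig.11) that heights are measured from the bottom edge of the image and that y grows downward:

```python
import numpy as np

def wing_height_features(left_wing, right_wing):
    # left_wing / right_wing: boolean masks of the two wing regions in an
    # image of height H. The lowest point of a wing is its largest-y
    # pixel; h1 = H - y3, h2 = H - y4, hc = |h1 - h2|.
    H = left_wing.shape[0]
    y3 = np.nonzero(left_wing)[0].max()
    y4 = np.nonzero(right_wing)[0].max()
    h1, h2 = H - y3, H - y4
    return h1, h2, abs(h1 - h2)

# Demo: in a 10-row image the left wing reaches row 7, the right row 5,
# so the left wing hangs lower (smaller height, broken-wing symptom).
left = np.zeros((10, 4), dtype=bool);  left[3:8, 1] = True
right = np.zeros((10, 4), dtype=bool); right[3:6, 2] = True
h1, h2, hc = wing_height_features(left, right)
```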

As shown in Fig.12, the rectangularity and width-length ratio of the carcass image were extracted. Rectangularity is a shape feature defined as the ratio of the object's area to the area of its minimum enclosing rectangle, describing how fully the object fills that rectangle [34]. The width-length ratio is the ratio of the width to the length of the minimum enclosing rectangle, describing how close the object is to a square or circle; its value lies between 0 and 1 [35-36]. In this paper the rectangularity is denoted R and the width-length ratio is denoted rate.
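The two shape features can be sketched as below. Note the paper uses the minimum enclosing rectangle; this sketch substitutes the axis-aligned bounding box as a simplifying assumption:

```python
import numpy as np

def shape_features(fg):
    # Rectangularity R = object area / bounding-rectangle area;
    # width-length ratio rate = min(side) / max(side) of that rectangle,
    # so rate lies in (0, 1].
    ys, xs = np.nonzero(fg)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    R = fg.sum() / (h * w)
    rate = min(h, w) / max(h, w)
    return R, rate

# Demo: a solid 2 x 4 rectangle fills its box (R = 1) and has rate = 0.5.
rect = np.zeros((6, 6), dtype=bool)
rect[1:3, 1:5] = True
R, rate = shape_features(rect)
```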

1.3.3 Model building and testing

By how features are obtained, machine learning divides into shallow machine learning and deep learning. Shallow machine learning requires hand-crafted features, i.e., the researcher selects suitable feature values by experience and intuition and feeds them to the algorithm, whereas deep learning extracts features automatically and with higher accuracy [37-38]. The shallow machine learning algorithms used in this study were linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), random forest (RF), support vector machine (SVM), and the error back-propagation (BP) neural network; the deep learning algorithm was the 16-layer Visual Geometry Group network (VGG16).

Fig.12 Minimum enclosing rectangle of the chicken carcass

For training the shallow machine learning models, principal component analysis was applied to the 11 feature values above, and the principal components with a cumulative variance contribution above 95% were taken as model inputs. The 11 feature values and the principal components were separately fed into the LDA, QDA, RF, SVM, and BP models. Of the 1 053 images, 700 were randomly selected as the training set (350 broken-wing, 350 qualified) and 353 as the test set (203 broken-wing, 150 qualified).
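A NumPy-only sketch of the dimension-reduction step (eigendecomposition of the feature covariance matrix; the paper's PCA tooling is not specified, so this is an assumed equivalent):

```python
import numpy as np

def pca_95(X, target=0.95):
    # Project the feature matrix X (samples x features) onto the fewest
    # principal components whose cumulative variance contribution reaches
    # `target` (95% in the paper).
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    ratio = vals / vals.sum()               # variance contribution rates
    k = int(np.searchsorted(np.cumsum(ratio), target) + 1)
    return Xc @ vecs[:, :k], ratio

rng = np.random.default_rng(0)
# Toy data: 3 informative directions plus faint noise spread over 11
# features, loosely mimicking the paper's 11 hand-crafted features.
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 11))
X += rng.normal(scale=0.01, size=X.shape)
scores, ratio = pca_95(X)
```

On the real data this step retained 8 of 11 components at the 95% level (Section 2.1).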

For training the deep learning model VGG16, the input was the background-free RGB image of the carcass. The 1 053 images were split into a training set of 800 images (400 broken-wing, 400 qualified), of which 30% served as the validation set during training, and a test set of 253 images (153 broken-wing, 100 qualified). The training parameters of all models are listed in Table 1.

Table 1 Training parameters of different models

Note: LDA is the linear discriminant model; QDA is the quadratic discriminant model; RF is the random forest model; SVM is the support vector machine model; BP is the error back-propagation model; VGG16 is the 16-layer Visual Geometry Group network.

Model classification was evaluated by recall (Rec), precision (Pre), F1-score, and total accuracy (Acc), computed from the confusion-matrix counts in Table 2 as Rec = TP/(TP + FN), Pre = TP/(TP + FP), F1 = 2 × Pre × Rec/(Pre + Rec), and Acc = (TP + TN)/(TP + TN + FP + FN).

Table 2 Confusion matrix of classification results

Note: TP is the number of samples correctly predicted as broken-winged; TN is the number of samples correctly predicted as normal; FP is the number of samples incorrectly predicted as broken-winged; FN is the number of samples incorrectly predicted as normal.
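The four evaluation formulas, applied to the VGG16 confusion matrix reported in Section 2.2, reproduce the paper's figures:

```python
def classification_metrics(tp, tn, fp, fn):
    # Recall, precision, F1-score and overall accuracy from the
    # confusion-matrix counts (broken wing = positive class).
    rec = tp / (tp + fn)
    pre = tp / (tp + fp)
    f1 = 2 * pre * rec / (pre + rec)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return rec, pre, f1, acc

# VGG16 confusion matrix from Section 2.2: 142 of 153 broken-wing samples
# and 94 of 100 qualified samples classified correctly.
rec, pre, f1, acc = classification_metrics(tp=142, tn=94, fp=6, fn=11)
```

This yields Rec ≈ 92.81%, Pre ≈ 95.95%, F1 ≈ 94.35%, Acc ≈ 93.28%, matching the values reported for the VGG16 model.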

2 Results and analysis

2.1 Principal component analysis

As shown in Table 3, the cumulative variance contribution of the first 8 principal components is 95.20%, representing most of the information in the 11 feature values, and the decline in eigenvalues levels off from the 8th component onward; the first 8 principal components were therefore taken as model inputs.

Table 3 Variance contribution of the principal components

2.2 Model classification results

As shown in Table 4, among the shallow learning models the RF model with feature values as inputs had the highest recall, 91.13%; the SVM model with feature values as inputs exceeded the other models in precision and F1-score, at 96.28% and 92.58% respectively; and the highest total accuracy, 91.78%, was shared by the quadratic discriminant and SVM models with feature values as inputs. Reducing the feature values by principal component analysis before input lowered the total accuracy of every model, because the dimension-reduced data are only an approximate expression of the original data: the reduction loses part of the original data structure and features, which lowers the accuracy of the classification algorithms [38-40].

Table 4 Classification effect of the shallow learning models

Note: In the table, -1 indicates broken-wing; 1 indicates normal.

The classification results of the deep learning model VGG16 are shown in Fig.13: of the 153 broken-wing samples, 142 were classified correctly and 11 incorrectly; of the 100 qualified samples, 94 were correct and 6 incorrect. The model's recall is 92.81%, precision 95.95%, F1-score 94.35%, and total accuracy 93.28%. Among all models, the shortest prediction time belonged to the SVM model with principal components as inputs, which judged 353 sample images in 0.000 9 s, an average speed of 3.92×10^5 images/s; the longest belonged to the VGG16 model, which needed 24.46 s for 253 sample images, an average speed of 10.34 images/s.

On balance, the F1-score and total accuracy of VGG16 exceed those of all other models, but its average prediction speed of 10.34 images/s is far slower. The reason is that VGG16 is a deep learning model whose complex, many-layered structure raises accuracy and fault tolerance at the cost of long runtimes [41-42]. Follow-up work could speed up the code by simplifying it and releasing variables promptly, or compress and accelerate the deep learning model through parameter pruning, parameter quantization, compact network design, and parameter sharing, thereby optimizing the model and improving prediction speed [43-45].

Fig.14 shows some of the carcass samples missed or falsely detected by the VGG16 model. As shown in Fig.14a, in a few broken-wing products the bone between wing and trunk is not completely broken, the wing spreads only slightly, and the broken-wing features are weakened. As shown in Fig.14b, wing size also affects accuracy: on a qualified product with plump wings, the weight increases the outward spread of the two wings and creates the illusion of a broken wing.

Note: In Fig.13, the first row gives the number of samples correctly predicted as broken-winged, the number incorrectly predicted as normal, and the precision for broken-wing products; the second row gives the number incorrectly predicted as broken-winged, the number correctly predicted as normal, and the precision for qualified products; the third row gives the recall for broken-wing products, the recall for qualified products, and the model's total accuracy.

Fig.14 Missed and falsely detected samples

3 Conclusions

This study used a machine vision system to obtain 1 053 front-view images of manually inspected broken-wing and qualified chicken carcasses. Preprocessing with the weighted-average method (graying), two-dimensional median filtering (denoising), and the iterative method (threshold segmentation) yielded background-free carcass images, from which 11 feature values were extracted. The feature values and their dimension-reduced principal components were fed into the LDA, QDA, RF, SVM, and BP models, and the background-free RGB carcass images into the deep learning model VGG16. Comparing F1-scores and total accuracy, VGG16 classified broken-wing and qualified carcasses best among all models, with an F1-score of 94.35%, a total accuracy of 93.28%, and an average prediction speed of 10.34 images/s, and can provide a technical reference for the rapid identification and classification of broken wings. The prediction speed still needs improvement, as does the recognition accuracy for slightly broken wings and for qualified products with plump wings. In addition, this study detected broken-winged carcasses statically from manually captured front views and has not yet achieved fully automatic detection; follow-up work should use a signal-trigger device to realize automatic camera exposure and strobed lighting, enabling real-time detection of broken-winged carcasses.

[1] He Wenxia, Xiong Tao, Shang Yan. The impacts of major animal diseases on the prices of China's meat and poultry markets: Evidence from the African swine fever[J]. Research of Agricultural Modernization, 2022, 43(2): 318-327. (in Chinese with English abstract)

[2] Li Jizhong, Qu Weiyu, Hua Yuanhui, et al. Effect of slaughtering technology and equipment on automatic segmentation of chicken carcass[J]. Meat Industry, 2021(4): 36-41. (in Chinese with English abstract)

[3] Chowdhury E U, Morey A. Application of optical technologies in the US poultry slaughter facilities for the detection of poultry carcass condemnation[J]. British Poultry Science, 2020, 61(6): 646-652.

[4] Wang Chengjun, Wei Zhiwen, Yan Chen. Review on sorting robot based on machine vision technology[J]. Science Technology and Engineering, 2022, 22(3): 893-902. (in Chinese with English abstract)

[5] Brosnan T, Sun D W. Improving quality inspection of food products by computer vision: A review[J]. Journal of Food Engineering, 2004, 61(1): 3-16.

[6] Taheri-Garavand A, Fatahi S, Omid M, et al. Meat quality evaluation based on computer vision technique: A review[J]. Meat Science, 2019, 156: 183-195.

[7] Qi Chao, Xu Jiaqi, Liu Chao, et al. Automatic classification of chicken carcass weight based on machine vision and machine learning technology[J]. Journal of Nanjing Agricultural University, 2019, 42(3): 551-558. (in Chinese with English abstract)

[8] Huynh T T M, Tonthat L, Dao S V T. A vision-based method to estimate volume and mass of fruit/vegetable: Case study of sweet potato[J]. International Journal of Food Properties, 2022, 25(1): 717-732.

[9] Zhou Baocang, Lü Jinlong, Xiao Tiezhong, et al. Research status and development trend of machine vision technology[J]. Henan Science and Technology, 2021, 40(31): 18-20. (in Chinese with English abstract)

[10] Wen Yanlan, Chen Youpeng, Wang Keqiang, et al. An overview of plant diseases and insect pests detection based on machine vision[EB/OL]. Journal of the Chinese Cereals and Oils Association, (2022-01-11) [2022-08-11]. http://kns.cnki.net/kcms/detail/11.2864.TS.20220302.1806.014.html. (in Chinese with English abstract)

[11] Liu Ping, Liu Lipeng, Wang Chunying, et al. Determination method of field wheat flowering period based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2022, 53(3): 251-258. (in Chinese with English abstract)

[12] Li J B, Rao X Q, Wang F J, et al. Automatic detection of common surface defects on oranges using combined lighting transform and image ratio methods[J]. Postharvest Biology and Technology, 2013, 82: 59-69.

[13] Yan Bin, Yang Fuzeng, Guo Wenchuan. Detection of maize seeds with cracks based on machine vision technology[J]. Journal of Agricultural Mechanization Research, 2020, 42(5): 181-185, 235. (in Chinese with English abstract)

[14] Jiang Peihong, Zhang Yuhua, Chen Dongjie, et al. Measurement of beef freshness grading based on multi-sensor information fusion technology[J]. Food Science, 2016, 37(6): 161-165. (in Chinese with English abstract)

[15] Chen W, Du N F, Dong Z Q, et al. Double yolk nondestructive identification system based on Raspberry Pi and computer vision[J]. Journal of Food Measurement and Characterization, 2022, 16(2): 1605-1612.

[16] Cheng Zihua. The development of incomplete biscuit sorting system based on machine vision[J]. Modern Food Science and Technology, 2022, 38(2): 313-318, 325. (in Chinese with English abstract)

[17] Guo Feng, Liu Lifeng, Zhang Kuibiao, et al. New progress in research on image grading technology of poultry carcass[J]. Meat Industry, 2019(11): 31-40. (in Chinese with English abstract)

[18] Qi Chao. On-line Grading System of Chicken Carcass Quality Based on Deep Camera and Machine Vision Technology[D]. Nanjing: Nanjing Agricultural University, 2019. (in Chinese with English abstract)

[19] Chen Kunjie, Li Hang, Yu Zhenwei, et al. Grading of chicken carcass weight based on machine vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(6): 290-295, 372. (in Chinese with English abstract)

[20] Wu Yuhong. Research on Chicken Wings Quality and Mass Detection Based on Machine Vision[D]. Taian: Shandong Agricultural University, 2016. (in Chinese with English abstract)

[21] Xu Jingjing. Design of the Chicken Wings Quality Inspection and Weight Classification of Intelligent Equipment[D]. Taian: Shandong Agricultural University, 2016. (in Chinese with English abstract)

[22] Asmara R, Rahutomo F, Hasanah Q, et al. Chicken meat freshness identification using the histogram color feature[C]//IEEE. International Conference on Sustainable Information Engineering and Technology. Batu, Indonesia, 2017: 57-61.

[23] Carvalho L, Perez-Palacios T, Caballero D, et al. Computer vision techniques on magnetic resonance images for the non-destructive classification and quality prediction of chicken breasts affected by the White-Striping myopathy[J]. Journal of Food Engineering, 2021, 306: 110633.

[24] Yang Kai. Design and Development of Online Detection and Processing Equipment Control System for Contaminants on Chicken Carcass Surface[D]. Nanjing: Nanjing Agricultural University, 2015. (in Chinese with English abstract)

[25] Chen Yan. Research on the Technology of Poultry Manipulator Eviscerating and Edible Viscera Sorting Based on Machine Vision[D]. Wuhan: Huazhong Agricultural University, 2018. (in Chinese with English abstract)

[26] Wang Shucai, Tao Kai, Li Hang. Design and experiment of poultry eviscerator system based on machine vision positioning[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(1): 335-343. (in Chinese with English abstract)

[27] Chen Y, Wang S C. Poultry carcass visceral contour recognition method using image processing[J]. Journal of Applied Poultry Research, 2018, 27(3): 316-324.

[28] Teimouri N, Omid M, Mollazade K, et al. On-line separation and sorting of chicken portions using a robust vision-based intelligent modelling approach[J]. Biosystems Engineering, 2018, 167: 8-20.

[29] Zhao Zhengdong, Wang Huhu, Xu Xinglian. Broiler carcass congestion detection technology based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(16): 330-338. (in Chinese with English abstract)

[30] Gao Ju, Li Yongxiang, Xu Xuemeng. Recognition and detection of stitching defects of woven bags based on machine vision[J]. Packaging and Food Machinery, 2022, 40(3): 51-56. (in Chinese with English abstract)

[31] Wang Kun. Investigation on Automatic Sorting Method of Curved Edge Needle Based on Machine Vision[D]. Shanghai: Donghua University, 2022. (in Chinese with English abstract)

[32] Wei Chongchong. Research on Workpiece Recognition Technology Based on Convolutional Neural Networks[D]. Harbin: Harbin University of Commerce, 2022. (in Chinese with English abstract)

[33] Liu Dezhi, Zeng Yong, Yuan Yuxing, et al. Research on automatic recognition algorithm of axle end mark of train wheelset based on machine vision[J]. Modern Manufacturing Engineering, 2022(7): 113-120. (in Chinese with English abstract)

[34] Zhang Kai, Li Zhenhua, Yu Bao, et al. Peanut quality sorting method based on machine vision[J]. Food Science and Technology, 2019, 44(5): 297-302. (in Chinese with English abstract)

[35] Dai Jianmin, Cao Zhu, Kong Linghua, et al. Tobacco quality grading algorithm based on multi-feature fuzzy recognition[J]. Jiangsu Agricultural Science, 2020, 48(20): 241-247. (in Chinese with English abstract)

[36] Wang Huihui, Sun Yonghai, Zhang Tingting, et al. Appearance quality grading for fresh corn ear using computer vision[J]. Transactions of the Chinese Society for Agricultural Machinery, 2010, 41(8): 156-159, 165. (in Chinese with English abstract)

[37] Zhu Boyang, Wu Ruilong, Yu Xi. Artificial intelligence for contemporary chemistry research[J]. Acta Chimica Sinica, 2020, 78(12): 1366-1382. (in Chinese with English abstract)

[38] Wang Min, Zhou Shudao, Yang Zhong, et al. Brief analysis of deep learning technology[J]. Techniques of Automation and Applications, 2019, 38(5): 51-57. (in Chinese with English abstract)

[39] Liu Guanghao. Identification Methods of Citrus Leaf Varieties Based on Digital Image and Hyperspectral[D]. Chongqing: Southwest University, 2020. (in Chinese with English abstract)

[40] Liang Peisheng, Sun Hui, Zhang Guozheng, et al. A classification method for silkworm pupae based on principal component analysis and BP neural network[J]. Jiangsu Agricultural Science, 2016, 44(10): 428-430, 582. (in Chinese with English abstract)

[41] Liu Yaqi. Research on Positioning Technology of Fish Head and Tail Based on Machine Vision[D]. Wuhan: Wuhan Polytechnic University, 2021. (in Chinese with English abstract)

[42] Shi Tiantian. Research on Fabric Defect Detection Based on Deep Learning[D]. Hangzhou: Zhejiang Sci-Tech University, 2019. (in Chinese with English abstract)

[43] Gao Han, Tian Yulong, Xu Fengyuan, et al. Survey of deep learning model compression and acceleration[J]. Journal of Software, 2021, 32(1): 68-92. (in Chinese with English abstract)

[44] Bao Chun. Deep Learning Model Compression and Acceleration for Image Processing Based on FPGA[D]. Beijing: Beijing Technology and Business University, 2020. (in Chinese with English abstract)

[45] Han R, Liu C H, Li S, et al. Accelerating deep learning systems via critical set identification and model compression[J]. IEEE Transactions on Computers, 2020, 69(7): 1059-1070.

        Rapid detection technology for broken-winged broiler carcass based on machine vision

        Wu Jiangchun, Wang Huhu※, Xu Xinglian

(Key Laboratory of Meat Processing and Quality Control, Ministry of Education, Nanjing Agricultural University, Nanjing 210095, China)

Broken wings are among the most common defects in broiler slaughter plants. Manual detection cannot fully meet the demands of large-scale production because of its high labor intensity and low efficiency and accuracy, so broken wings on chicken carcasses need to be detected rapidly and accurately. This study aimed to realize rapid inspection of broken-winged chicken carcasses during broiler slaughter, in order to improve production efficiency on the slaughter line. A total of 1 053 broiler carcass images were collected from a broiler slaughter line using a machine vision system, and a rapid identification method was constructed for broken-wing defects. Specifically, the front view of each chicken carcass was obtained with the machine vision system. Preprocessing, including the weighted-average method (graying), two-dimensional median filtering (denoising), and the iterative method (threshold segmentation), was then applied to obtain carcass images without background; the code was written on the MATLAB platform. After that, a total of 11 characteristic values were calculated: the distances from the left and right ends of the carcass image to the centroid and their difference (d1, d2, and dc), the heights of the lowest points of the two wings and their difference (h1, h2, and hc), the areas of the two wings and their ratio (S1, S2, and Sr), rectangularity (R), and width-length ratio (rate). Eight principal components were then obtained by principal component analysis for dimension reduction. The principal components and characteristic values were separately imported into linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), random forest (RF), support vector machine (SVM), and BP neural network models.
The input to the VGG16 model was the RGB image of the chicken carcass with the background removed. Finally, the F1-scores and total accuracy of the models were compared. The highest recall among the shallow learning models, 91.13%, was achieved by the RF model with the characteristic values as input parameters. The highest precision and F1-score among the shallow models, 96.28% and 92.58% respectively, were achieved by the SVM model with the characteristic values as input parameters. The highest total accuracy among the shallow models, 91.78%, was shared by the quadratic discriminant and SVM models with the characteristic values as input parameters. Moreover, the F1-score and total accuracy of the VGG16 model, 94.35% and 93.28% respectively, were the highest among all model combinations. In terms of prediction time, the shortest belonged to the SVM model with the principal components as input parameters, which judged 353 sample images in 0.000 9 s, an average speed of 3.92×10^5 images per second; the longest belonged to the VGG16 model, which took 24.46 s to judge 253 sample images, an average speed of 10.34 images per second. In conclusion, the VGG16 model can be expected to serve as the best classifier of broken wings on chicken carcasses.

Keywords: machine vision; machine learning; broiler carcass; detection technology of broken wing

doi: 10.11975/j.issn.1002-6819.2022.22.027

CLC number: TS251.7          Document code: A          Article ID: 1002-6819(2022)-22-0253-09

Citation: Wu Jiangchun, Wang Huhu, Xu Xinglian. Rapid detection technology for broken-winged broiler carcass based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2022, 38(22): 253-261. (in Chinese with English abstract) doi: 10.11975/j.issn.1002-6819.2022.22.027 http://www.tcsae.org

Received: 2022-08-26          Revised: 2022-11-05

Supported by the China Agriculture Research System (CARS-41)

Wu Jiangchun, research interests: meat processing and quality & safety control. Email: 2021808112@stu.njau.edu.cn

※Corresponding author: Wang Huhu, Ph.D., professor, research interests: meat processing and quality & safety control. Email: huuwang@njau.edu.cn
