

Saliency Detection Network Based on Edge Detection and Skeleton Extraction


Yang Aiping1, Cheng Simeng1, Wang Jinbin1, Song Shangyang1, Ding Xuewen2

(1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China; 2. School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China)

Some recent methods perform saliency detection through joint multi-task learning, which improves detection accuracy to some extent, but false and missed detections persist. The reason is that the tasks have different optimization objectives and markedly different feature domains, so the network cannot adequately discriminate features such as saliency and object boundaries. To address this, a multi-task assisted saliency detection network based on edge detection and skeleton extraction is proposed, comprising a feature extraction subnetwork, an edge detection subnetwork, a skeleton extraction subnetwork, and a saliency filling subnetwork. The feature extraction subnetwork uses a pretrained ResNet101 model to extract multi-scale image features. The edge detection subnetwork fuses the features of the first three layers, fully preserving the boundary information of the salient object; the skeleton extraction subnetwork fuses the features of the last two layers, accurately locating the center of the salient object. The two subnetworks are trained separately on an edge detection dataset and a skeleton extraction dataset, and the best edge detection and skeleton extraction models are retained as pretrained models to assist the saliency detection task. To reduce the discrepancy between the network's optimization objectives and feature domains, the saliency filling subnetwork is designed to fuse and nonlinearly map the extracted edge and skeleton features. Experimental results on four datasets show that the proposed method effectively recovers missing salient regions and outperforms other salient object detection methods.

edge detection; skeleton extraction; multi-task; saliency detection network

Saliency detection uses a computer to simulate the human visual system, rapidly analyzing an input image and retaining its most attention-grabbing regions. It is widely applied in computer vision tasks such as image retrieval [1], object detection [2], action recognition [3], image segmentation [4], and object recognition [5].

Existing saliency detection methods mainly follow the idea of "object detection first, boundary refinement second," detecting the salient object region and the salient object boundary through separate tasks. According to how the boundary is detected, they can be divided into single-dataset multi-task detection networks and multi-dataset multi-task detection networks.

Single-dataset multi-task detection networks generally design two parallel networks that detect the salient object region and the salient object boundary respectively, both supervised on the DUTS dataset [6]. Wei et al. [7] decomposed the input image into a boundary map and an object region map through mathematical operations and designed two subnetworks to learn them separately. Song et al. [8] proposed a saliency detection network with multi-level boundary refinement, which first obtains a coarse saliency prediction map and then refines the saliency boundary. Because these methods rely on boundary detection operators whose computation introduces errors, the extracted boundaries are incomplete. Therefore, some researchers supervise the network with multiple datasets to improve its saliency discrimination and boundary extraction. Wu et al. [9] jointly trained edge detection and saliency detection in a multi-task manner to improve the network's feature extraction ability; however, this method does not consider cooperation among the detection tasks, so the extracted object region and boundary features are incomplete. Building on this, Liu et al. [10] added a skeleton detection task and jointly trained the three tasks to improve the network's boundary detection and center localization abilities. That method exchanges multi-task information through a weight-sharing strategy, ignoring the differences among the detection tasks, which leads to incomplete prediction maps.

The analysis above shows that current multi-task detection methods mostly exchange information through feature stacking or weight sharing, without considering the feature-domain differences between tasks, which leads to incomplete feature extraction. Unlike existing methods, this paper adopts a divide-and-conquer strategy and proposes a multi-dataset, multi-task assisted saliency detection network. Specifically, the edge detection task and the skeleton extraction task are trained independently, the best edge detection and skeleton extraction models are retained, and they serve as pretrained models to assist the salient object detection task, extracting edge features and skeleton features to accurately locate the boundary and the center of the salient object. This alleviates the incomplete feature extraction caused by feature-domain differences between task objectives. Finally, the extracted edge and skeleton features are fused and nonlinearly mapped to obtain a complete saliency map.

1 Proposed method

This paper proposes a multi-task assisted saliency detection network whose overall structure is shown in Fig. 1. The network consists of a feature extraction subnetwork, an edge detection subnetwork, a skeleton extraction subnetwork, and a saliency filling subnetwork. The feature extraction subnetwork extracts multi-scale features of the input image and is a cascade of five residual convolution blocks, denoted RB1-RB5. The edge detection subnetwork extracts the contour of the salient object from the first three blocks, RB1, RB2, and RB3, to obtain boundary information; the skeleton extraction subnetwork extracts the skeleton of the salient object from the last two blocks, RB4 and RB5, to locate its center. To improve the network's discriminative ability, a pyramid convolution module enlarges the receptive field of the features, and a feature enhancement module adaptively weights them. Finally, the saliency filling subnetwork fills the salient object region according to the boundary information and the center location to produce the saliency prediction map.

Fig. 1 Overall network structure
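To make the data flow concrete, the following PyTorch sketch outlines the four-subnetwork pipeline under stated assumptions: the split of ResNet101 into RB1-RB5 follows the description above, while the head designs, channel widths, and upsample-and-concatenate fusion are illustrative choices of ours, not the authors' released code (the PCM and FEM modules are omitted here and sketched in the next subsections).

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SaliencyNet(nn.Module):
    # Sketch: ResNet101 backbone split into five blocks RB1-RB5; the edge
    # branch fuses RB1-RB3, the skeleton branch fuses RB4-RB5, and the
    # filling subnetwork maps the fused features to a saliency map.
    def __init__(self):
        super().__init__()
        r = torchvision.models.resnet101(weights="IMAGENET1K_V1")
        self.rb1 = nn.Sequential(r.conv1, r.bn1, r.relu)   # 1/2 res., 64 ch
        self.rb2 = nn.Sequential(r.maxpool, r.layer1)      # 1/4 res., 256 ch
        self.rb3 = r.layer2                                # 1/8 res., 512 ch
        self.rb4 = r.layer3                                # 1/16 res., 1024 ch
        self.rb5 = r.layer4                                # 1/32 res., 2048 ch
        self.edge_head = nn.Conv2d(64 + 256 + 512, 1, kernel_size=1)
        self.skel_head = nn.Conv2d(1024 + 2048, 1, kernel_size=1)
        self.fill_head = nn.Sequential(                    # fusion + nonlinear mapping
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        f1 = self.rb1(x)
        f2 = self.rb2(f1)
        f3 = self.rb3(f2)
        f4 = self.rb4(f3)
        f5 = self.rb5(f4)
        size = x.shape[2:]
        up = lambda t: F.interpolate(t, size=size, mode="bilinear", align_corners=False)
        edge = self.edge_head(torch.cat([up(f1), up(f2), up(f3)], dim=1))
        skel = self.skel_head(torch.cat([up(f4), up(f5)], dim=1))
        sal = self.fill_head(torch.cat([edge, skel], dim=1))
        return torch.sigmoid(edge), torch.sigmoid(skel), torch.sigmoid(sal)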

1.1 Feature extraction subnetwork

1.1.1 Pyramid convolution module

To strengthen the network's global perception, inspired by pyramid network structures [12-13], this paper designs a pyramid convolution module that fuses features obtained under multiple receptive fields. Its structure is shown in Fig. 2.

Fig. 2 Structure of the pyramid convolution module
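A minimal sketch of one plausible form of the pyramid convolution module, assuming parallel 3x3 convolutions with growing dilation rates as the mechanism for enlarging the receptive field; the branch count and the rates (1, 2, 4, 8) are illustrative assumptions of ours rather than the configuration of Fig. 2.

import torch
import torch.nn as nn

class PCM(nn.Module):
    # Pyramid convolution module (sketch): parallel 3x3 convolutions with
    # increasing dilation rates see progressively larger receptive fields;
    # their outputs are concatenated and fused back to the input width.
    def __init__(self, channels, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))
            for r in rates)
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))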

1.1.2 Feature enhancement module

To improve feature representation by selecting useful features and suppressing useless ones, a feature enhancement module is designed. It uses a channel attention mechanism [14] and a spatial attention mechanism [15] to screen and enhance features along the channel and spatial dimensions. Its structure is shown in Fig. 3.

Fig. 3 Feature enhancement module
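A sketch of the feature enhancement module under stated assumptions: squeeze-and-excitation style channel attention [14] followed by a single-layer spatial attention map; the serial channel-then-spatial order and the 7x7 spatial kernel are our choices, not confirmed details of Fig. 3.

import torch
import torch.nn as nn

class FEM(nn.Module):
    # Feature enhancement module (sketch): reweight channels with a
    # squeeze-and-excitation gate, then reweight spatial positions with
    # a single-channel attention map.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)     # channel-wise screening
        return x * self.spatial(x)  # spatial-wise enhancement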

1.2 Edge detection subnetwork

1.3 Skeleton extraction subnetwork

1.4 Saliency filling subnetwork

1.5 Loss function

Supervision is applied to the different tasks in stages. The first stage is the edge detection task, for which the binary cross-entropy function [16] is chosen as the loss; in its standard form,

$L_{\text{edge}} = -\sum_{i}\left[G_{i}\log P_{i} + (1-G_{i})\log(1-P_{i})\right]$

where $P_i$ denotes the predicted edge probability at pixel $i$ and $G_i$ the corresponding ground-truth label.

The second stage is the skeleton extraction task, which is supervised with its own loss function.

The third stage is the salient object detection task, which is likewise supervised with its own loss function.
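The three-stage supervision can be sketched as follows; binary cross-entropy is what the paper states for the edge stage [16], and reusing the same form for the skeleton and saliency stages is an assumption on our part, since those formulas are not reproduced above.

import torch
import torch.nn.functional as F

def train_stage(model, head, loader, optimizer, device="cuda"):
    # One supervision stage. head selects which output is supervised:
    # "edge" (stage 1), "skeleton" (stage 2), or "saliency" (stage 3).
    # BCE for the skeleton and saliency stages is an assumption.
    model.train().to(device)
    for image, target in loader:
        image, target = image.to(device), target.to(device)
        edge, skel, sal = model(image)
        pred = {"edge": edge, "skeleton": skel, "saliency": sal}[head]
        loss = F.binary_cross_entropy(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Stage 1 trains on the edge dataset, stage 2 on the skeleton dataset, and
# stage 3 on the saliency dataset, initialized from the best models kept
# from the first two stages, as described in Section 2.1.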

2 Experiments and analysis

2.1 Experimental settings

The BSDS500 dataset [17] is chosen as the training set for the edge detection task; it provides 200 training images, each with 3-4 ground-truth maps, one of which is selected at random for network training. The SK-LARGE dataset [18], containing 746 images, is the training set for the skeleton extraction task. The DUTS-TR dataset [6], containing 10553 images, is the training set for the saliency detection task. DUTS-TE [6], ECSSD [19], HKU-IS [20], and PASCAL-S [21] serve as test sets.
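The random ground-truth selection for BSDS500 amounts to something like the helper below; whether the draw happens once per image or per epoch is not specified above, so this is only a sketch.

import random

def pick_ground_truth(gt_maps):
    # BSDS500 supplies 3-4 annotator ground-truth maps per image;
    # one is drawn at random for training, as described above.
    return random.choice(gt_maps)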

2.2 Comparison experiments

To verify the effectiveness of the proposed method, it is compared with existing salient object detection methods in terms of both objective metrics and subjective quality. The compared methods include PAGR [22], PiCANet [23], MLM [9], ICNet [24], BASNet [25], AFNet [26], PAGE [27], CPD [28], ITSD [29], CANet [30], CAGNet [31], HERNet [8], and AMPNet [32].

Fig. 4 shows the subjective comparison. Representative scenes were selected: a salient object in a complex scene (row 1), a salient object similar to its background (row 2), a small salient object (row 3), a regular salient object (row 4), an occluded salient object (row 5), and multiple salient objects (row 6). The proposed method achieves satisfactory results. In the small-object image, most methods miss the distant duck (row 3); in the occluded-object and complex-scene images, most methods produce false detections, classifying the yellow letters above the dog (row 1) and the leaf covering the bird (row 5) as salient; in the elongated-object image, the proposed method obtains a more precise object boundary. As Fig. 4 shows, the proposed method is subjectively superior across multiple scenes to the other joint multi-task method (MLM [9]). This indicates that the proposed multi-task assisted method based on edge detection and skeleton extraction can effectively recover missing salient regions and resolve incomplete detection results.

Tab. 1 Objective metrics of different saliency detection methods

Fig. 4 Subjective comparison of the proposed method and other methods

2.3 Average speed comparison

Tab. 2 compares the average speed of the proposed method with that of other methods. The proposed method runs faster than most saliency detection methods, and it remains competitive with the two fast saliency detection networks ITSD [29] and CPD [28].
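Average speed comparisons of this kind are typically reported as frames per second; a rough measurement sketch follows (the protocol, warm-up count, and helper name are our own, since the paper's measurement procedure is not reproduced here).

import time
import torch

@torch.no_grad()
def average_fps(model, images, device="cuda", warmup=5):
    # Rough throughput measurement: warm up to exclude CUDA start-up cost,
    # then time the forward passes and report frames per second.
    model = model.eval().to(device)
    images = [im.to(device) for im in images]
    for im in images[:warmup]:
        model(im)
    torch.cuda.synchronize()
    start = time.time()
    for im in images:
        model(im)
    torch.cuda.synchronize()
    return len(images) / (time.time() - start)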

2.4 Ablation study

Tab. 2 Comparison of the proposed method with other methods in terms of average speed

To verify the effect of the pyramid convolution module (PCM), an ablation study comprising four experiments was conducted: in Experiment 1, neither shallow nor deep features use PCM (without PCM, WPCM); in Experiment 2, only shallow features use PCM (shallow PCM, SPCM); in Experiment 3, only deep features use PCM (deep PCM, DPCM); and in Experiment 4, both shallow and deep features use PCM (both PCM, BPCM).

To verify the effectiveness of the feature enhancement module (FEM), ablation experiments were conducted on the four datasets: neither shallow nor deep features use FEM (without FEM, WFEM) (Experiment 1); only shallow features use FEM (shallow FEM, SFEM) (Experiment 2); only deep features use FEM (deep FEM, DFEM) (Experiment 3); and both shallow and deep features use FEM (both FEM, BFEM) (Experiment 4). The resulting scores and MAE values of these configurations are shown in Tab. 5.

Tab. 3 Ablation results

Tab. 4 Ablation results of the pyramid convolution module

Tab. 5 Ablation results of the feature enhancement module

3 Conclusion

This paper proposes a saliency detection network based on edge detection and skeleton extraction. By training the two auxiliary tasks separately and using them to assist the saliency detection network in generating a complete saliency map, it effectively addresses missed and false detections of salient regions. Specifically, the input image is decomposed: the edge detection subnetwork and the skeleton extraction subnetwork obtain the boundary features and skeleton features of the salient object respectively, accurately locating its boundary and center. To reduce the discrepancy among the tasks, a saliency filling subnetwork is designed that fills the salient object region, taking the skeleton features as the center and the edge features as the boundary, to obtain a complete saliency map. In addition, a pyramid convolution module and a feature enhancement module are designed to screen and enhance the edge and skeleton features, improving the network's representational ability. Experimental results show that the proposed method reduces the difficulty of feature extraction while detecting salient objects completely and accurately.

        [1] Babenko A,Lempitsky V. Aggregating local deep features for image retrieval[C]//Proceedings of the IEEE International Conference on Computer Vision. Santiago,Chile,2015:1269-1277.

[2] Pang Yanwei,Yu Ke,Sun Hanqing,et al. Hierarchical information recovery network for real-time object detection[J]. Journal of Tianjin University(Science and Technology),2022,55(5):471-479(in Chinese).

        [3] Abdulmunem A,Lai Y K,Sun X. Saliency guided local and global descriptors for effective action recognition[J]. Computational Visual Media,2016,2(1):97-106.

        [4] Zhou S P,Wang J J,Zhang S,et al. Active contour model based on local and global intensity information for medical image segmentation[J]. Neurocomputing,2016,186:107-118.

        [5] Cao X C,Tao Z Q,Zhang B,et al. Self-adaptively weighted co-saliency detection via rank constraint[J]. IEEE Transactions on Image Processing,2014,23(9):4175-4186.

        [6] Wang L J,Lu H C,Wang Y F,et al. Learning to detect salient objects with image-level supervision[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu,USA,2017:136-145.

        [7] Wei J,Wang S H,Wu Z,et al. Label decoupling framework for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle,USA,2020:13025-13034.

        [8] Song D W,Dong Y S,Li X L. Hierarchical edge refinement network for saliency detection[J]. IEEE Transactions on Image Processing,2021,30:7567-7577.

[9] Wu R M,Feng M Y,Guan W L,et al. A mutual learning method for salient object detection with intertwined multi-supervision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:8150-8159.

        [10] Liu J J,Hou Q B,Cheng M M. Dynamic feature integration for simultaneous detection of salient object,edge,and skeleton[J]. IEEE Transactions on Image Processing,2020,29:8652-8667.

        [11] He K M,Zhang X Y,Ren S Q,et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,USA,2016:770-778.

[12] Chen L C,Papandreou G,Kokkinos I,et al. DeepLab:Semantic image segmentation with deep convolutional nets,atrous convolution,and fully connected CRFs[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2017,40(4):834-848.

        [13] He K M,Zhang X Y,Ren S Q,et al. Spatial pyramid pooling in deep convolutional networks for visual recognition[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2015,37(9):1904-1916.

        [14] Hu J,Shen L,Sun G. Squeeze-and-excitation networks [C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:7132-7141.

        [15] Peng C,Zhang X Y,Yu G,et al. Large kernel matters—Improve semantic segmentation by global convolutional network[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu,USA,2017:4353-4361.

        [16] De Boer P T,Kroese D P,Mannor S,et al. A tutorial on the cross-entropy method[J]. Annals of Operations Research,2005,134(1):19-67.

        [17] Arbelaez P,Maire M,F(xiàn)owlkes C,et al. Contour detection and hierarchical image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence,2011,33(5):898-916.

        [18] Shen W,Zhao K,Jiang Y,et al. Object skeleton extraction in natural images by fusing scale-associated deep side outputs[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas,USA,2016:222-230.

        [19] Yan Q,Xu L,Shi J D,et al. Hierarchical saliency detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Portland,USA,2013:1155-1162.

        [20] Li G,Yu Y. Visual saliency based on multiscale deep features[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston,USA,2015:5455-5463.

        [21] Li Y,Hou X D,Koch C,et al. The secrets of salient object segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Columbus,USA,2014:280-287.

        [22] Zhang X W,Wang T T,Qi J Q,et al. Progressive attention guided recurrent network for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:714-722.

        [23] Liu N,Han J W,Yang M H. Picanet:Learning pixel-wise contextual attention for saliency detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Salt Lake City,USA,2018:3089-3098.

        [24] Wang W G,Shen J B,Cheng M M,et al. An iterative and cooperative top-down and bottom-up inference network for salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:5968-5977.

[25] Qin X B,Zhang Z C,Huang C Y,et al. BASNet:Boundary-aware salient object detection[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:7479-7489.

        [26] Feng M Y,Lu H C,Ding E. Attentive feedback network for boundary-aware salient object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:1623-1632.

        [27] Wang W G,Zhao S Y,Shen J B,et al. Salient object detection with pyramid attention and salient edges[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:1448-1457.

        [28] Wu Z,Su L,Huang Q M. Cascaded partial decoder for fast and accurate salient object detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Long Beach,USA,2019:3907-3916.

        [29] Zhou H J,Xie X H,Lai J H,et al. Interactive two-stream decoder for accurate and fast saliency detection[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Seattle,USA,2020:9141-9150.

        [30] Li J X,Pan Z F,Liu Q S,et al. Complementarity-aware attention network for salient object detection[J]. IEEE Transactions on Cybernetics,2020,52(2):873-887.

        [31] Mohammadi S,Noori M,Bahri A,et al. CAGNet:Content-aware guidance for salient object detection[J]. Pattern Recognition,2020,103:107303.

        [32] Sun L N,Chen Z X,Wu Q M J,et al. AMPNet:Average- and max-pool networks for salient object detection[J]. IEEE Transactions on Circuits and Systems for Video Technology,2021,31(11):4321-4333.

        [33] Achanta R,Hemami S,Estrada F,et al. Frequency-tuned salient region detection[C]//2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami,USA,2009:1597-1604.

        [34] Fan D P,Cheng M M,Liu Y,et al. Structure-measure:A new way to evaluate foreground maps[C]// Proceedings of the IEEE International Conference on Computer Vision. Venice,Italy,2017:4548-4557.

[35] Li X,Zhao L,Wei L,et al. DeepSaliency:Multi-task deep neural network model for salient object detection[J]. IEEE Transactions on Image Processing,2016,25(8):3919-3930.


DOI: 10.11784/tdxbz202204052

CLC number: TP391; Document code: A; Article ID: 0493-2137(2023)08-0823-08

Received: 2022-04-29; Revised: 2022-12-16.

Yang Aiping (born 1977), female, PhD, associate professor. Email: m_bigm@tju.edu.cn

Corresponding author: Yang Aiping, yangaiping@tju.edu.cn.

Supported by the National Natural Science Foundation of China (No. 62071323, No. 61632018, No. 61771329) and the Tianjin Science and Technology Planning Project (No. 20YDTPJC01110).

(Executive editor: Sun Lihua)
