Abstract: To address the blurred edge contours and loss of texture detail that occur when conventional CT and MRI medical images are fused, this paper proposes an image fusion method in the NSCT domain that combines phase consistency rolling guidance filtering with an improved parameter-adaptive dual-channel PCNN. First, the CT source image is enhanced with phase consistency rolling guidance filtering to sharpen the bone contour structure. The enhanced CT image and the MRI source image are then decomposed by the NSCT into low-frequency and high-frequency sub-bands. The low-frequency coefficients are fused with an improved parameter-adaptive dual-channel pulse coupled neural network, which markedly reduces the blurring of soft-tissue texture detail; the high-frequency coefficients are fused with a weighted summation modified Laplacian algorithm, which enriches the detail and texture of the fused image. Finally, the fused image is reconstructed by the inverse NSCT. Five groups of comparison experiments show that the proposed method improves the objective metrics AG, CC, SF, MSE and CEN by 13.30%, 6.71%, 4.40%, 40.23% and 19.16% on average, indicating better performance on texture detail, edge contours, structural similarity and pixel fidelity.
Keywords: medical image fusion; non-subsampled contourlet transform; phase consistency; rolling guidance filtering; adaptive dual-channel pulse coupled neural network
CLC number: TP391.41   Document code: A
Article ID: 1001-3695(2023)08-044-2520-06
doi:10.19734/j.issn.1001-3695.2022.12.0643
Medical image fusion based on rolling guidance filter and adaptive PCNN in NSCT domain
Di Jing, Guo Wenqing, Liu Jizhao, Lian Jing, Ren Li
(1. School of Electronic & Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China; 2. School of Information Science & Engineering, Lanzhou University, Lanzhou 730000, China)
Abstract: Aiming at the blurred edge contours and loss of texture details after fusion of conventional CT and MRI medical images, this paper proposed an image fusion method in the non-subsampled contourlet transform (NSCT) domain combining phase consistent rolling guidance filtering (PCRGF) with an improved parameter-adaptive dual-channel pulse coupled neural network (PCNN). Firstly, it used PCRGF to enhance the CT source images and improve the definition of the bone contour structure. Then, it applied the NSCT to decompose the enhanced CT and the MRI source images into high- and low-frequency sub-bands. The low-frequency sub-bands were fused with the improved parameter-adaptive dual-channel PCNN, which significantly reduced the blurring of soft-tissue texture details. The high-frequency sub-bands were fused with a weighted summation modified Laplace (WSML) algorithm, which preserved more details, textures and other information from the source images. Finally, it used the inverse NSCT to reconstruct the fused image. Five groups of comparison experiments show that the objective evaluation indexes AG, CC, SF, MSE and CEN improve by 13.30%, 6.71%, 4.40%, 40.23% and 19.16% on average, indicating that the method performs better on the texture details, edge contours, structural similarity and pixel fidelity of the source images.
Key words: medical image fusion; non-subsampled contourlet transform; phase consistency; rolling guidance filtering; adaptive dual-channel pulse coupled neural network
0 Introduction
Modern medical imaging devices provide multiple views of lesions throughout the human body, such as computed tomography (CT) and magnetic resonance imaging (MRI). CT offers sub-millimetre resolution for bone, calcification and contrast-enhanced vessels, while MRI images articular cartilage and soft tissue more clearly, which helps delimit the lesion area. To obtain richer information than any single-modality image provides, multi-modal medical image fusion is widely used in clinical analysis to describe the lesion area accurately and comprehensively [1].
In recent years, multi-scale transforms have been widely applied to medical, infrared and multi-focus image fusion [2,3]. The earliest multi-scale model was the pyramid transform, followed by the wavelet transform [4]; the most popular today are the non-subsampled contourlet transform (NSCT) [5], the non-subsampled shearlet transform (NSST) [6,7], the discrete wavelet transform [8] and the shearlet transform [9]. Ref. [10] proposed a fusion algorithm for medical, infrared and multi-focus images that uses the NSCT as the decomposition tool, but its results contain artifacts. Ref. [11] proposed an algorithm based on sparse representation in the NSCT domain; in that framework the low-frequency sub-bands are processed by sparse representation, which causes severe loss of detail in the fused image. Ref. [12] proposed an algorithm based on a pulse coupled neural network (PCNN) in the NSST domain, combining spatial frequency with a parameter-adaptive PCNN for the high-frequency sub-bands; the fused images look good overall but preserve edges poorly. More recently, studies of filter models have shown that filters handle the edges and contour structures of fused images well, e.g. guided filtering [13], rolling guidance filtering, and gradient and guided filtering, which have achieved satisfactory results in image fusion. Ref. [14] proposed a fusion algorithm based on rolling guidance filtering that preserves edges well while still handling detail, outperforming traditional multi-scale decomposition methods. Ref. [15] proposed a decomposition method based on rolling guidance filtering and saliency detection that splits an image into base, saliency and detail layers and fuses their coefficients, so the fused result handles texture edges well. Ref. [16] proposed a fast guided filtering fusion framework, but it pays little attention to structural differences from the source images, so some edges appear blurred.
To address the missing texture information and blurred edge and contour features of medical images, this paper proposes a medical image fusion algorithm in the NSCT domain combining phase consistent rolling guidance filtering (PCRGF) with an improved dual-channel PCNN. The algorithm preprocesses the registered CT source image with PCRGF to sharpen the bone regions; the improved dual-channel PCNN redefines the linking input, the threshold setting and the weight matrix to fully extract the detail and contour features of the low-frequency sub-bands; and the high-frequency sub-bands are fused with an improved eight-neighbourhood weighted summation modified Laplace (WSML) algorithm, which strengthens the correlation between neighbouring pixels. The result is a better fused medical image.
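The WSML high-frequency rule itself is defined later in the paper; as a rough, non-authoritative illustration of the idea, the sketch below (the function names, the 1/√2 diagonal weighting and the 3×3 averaging window are assumptions for the sketch, not the paper's exact formulation) computes an eight-neighbourhood modified-Laplacian activity measure and keeps, per pixel, the high-frequency coefficient with the larger activity:

```python
import numpy as np
from scipy.ndimage import convolve

def wsml(coef):
    """Eight-neighbourhood weighted sum-modified-Laplacian (sketch).

    Activity measure for a high-frequency sub-band: the modified
    Laplacian over horizontal, vertical and diagonal neighbours,
    averaged with a 3x3 weighting window. Borders wrap around via
    np.roll, which is acceptable for an illustration.
    """
    c = coef.astype(np.float64)

    def ml_term(dy, dx):
        # |2*I(x,y) - I(x-dy,y-dx) - I(x+dy,y+dx)|
        return np.abs(2 * c - np.roll(c, (dy, dx), (0, 1))
                            - np.roll(c, (-dy, -dx), (0, 1)))

    ml = (ml_term(0, 1) + ml_term(1, 0)
          + (ml_term(1, 1) + ml_term(1, -1)) / np.sqrt(2))
    w = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    return convolve(ml, w, mode='nearest')

def fuse_highpass(h1, h2):
    # Keep, per pixel, the coefficient whose neighbourhood is more active.
    return np.where(wsml(h1) >= wsml(h2), h1, h2)
```

A sub-band carrying strong texture will dominate a flat one under this rule, which is the behaviour the max-activity selection is meant to capture.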
1 Mathematical models
1.1 Non-subsampled contourlet transform
The NSCT decomposition is shown in Fig. 1. At the first level, a non-subsampled pyramid filter bank (NSPFB) decomposes the registered image into a lowpass sub-band image (LSI) and a bandpass sub-band image (BSI); the second level decomposes the LSI from the first level again, although the features obtained from the first decomposition remain closest to the source image. Meanwhile, a non-subsampled directional filter bank (NSDFB) further decomposes the BSI produced by the NSPFB. The NSDFB consists of two single-channel filters and decomposes the image along multiple directions [17]; by the shift-invariance property, the sub-images at every scale keep the same size as the image decomposed at the previous level.
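A minimal sketch of the non-subsampled pyramid stage described above, with plain Gaussian filters standing in for the NSPFB (the real NSCT also applies the NSDFB to each bandpass sub-band; the function names and filter choice here are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nsp_decompose(img, levels=3):
    """Non-subsampled pyramid stage of an NSCT-style decomposition (sketch).

    Each level splits the current lowpass image into a coarser lowpass
    sub-band (LSI) and a bandpass residual (BSI); nothing is downsampled,
    so every sub-band keeps the source image size.
    """
    low = img.astype(np.float64)
    bandpass = []
    for k in range(levels):
        # Widen the filter with the scale (a-trous style) instead of decimating.
        smoothed = gaussian_filter(low, sigma=2.0 ** k)
        bandpass.append(low - smoothed)  # bandpass sub-band at scale k
        low = smoothed                   # lowpass sub-band fed to the next level
    return low, bandpass

def nsp_reconstruct(low, bandpass):
    # Perfect reconstruction: the residuals telescope back to the input.
    return low + np.sum(bandpass, axis=0)
```

Because nothing is downsampled, reconstruction is just the sum of the lowpass and all bandpass sub-bands, which is what makes the inverse transform in a fusion pipeline straightforward.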
1.2 Dual-channel pulse coupled neural network
The pulse coupled neural network was proposed by Eckhorn et al. [18,19]; it is a feedback network modelled on the visual cortex of small mammals. Compared with other neural networks, the PCNN has a notable advantage: it needs no training or test data and can find similar feature information by itself. The PCNN also matches the characteristics of human vision better than conventional artificial neural network models, and it has good biological plausibility and interpretability [20]. The dual-channel PCNN model is shown in Fig. 2.
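As a hedged illustration of how such a model iterates (the parameter values, linking weights and winner-take-all fusion rule below are assumptions for the sketch, not the improved parameter-adaptive model proposed later in this paper):

```python
import numpy as np
from scipy.ndimage import convolve

def dual_channel_pcnn(s1, s2, beta1=0.5, beta2=0.5, n_iter=110,
                      alpha_e=0.2, v_e=20.0):
    """Simplified dual-channel PCNN fusion (sketch, illustrative parameters).

    Each pixel is a neuron fed by two stimuli (one per source sub-band);
    the linking input L couples it to its 3x3 neighbourhood's previous
    firings, and the channel giving the larger internal activity wins.
    """
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # linking weight matrix
    y = np.zeros_like(s1)                     # previous firing map Y
    e = np.ones_like(s1)                      # dynamic threshold E
    fused = np.zeros_like(s1)
    for _ in range(n_iter):
        link = convolve(y, w, mode='constant')   # linking input L
        u1 = s1 * (1.0 + beta1 * link)           # channel-1 internal activity
        u2 = s2 * (1.0 + beta2 * link)           # channel-2 internal activity
        u = np.maximum(u1, u2)                   # combined internal activity U
        y = (u > e).astype(s1.dtype)             # neuron fires when U > E
        # Where a neuron fires, take the coefficient of the winning channel.
        fused = np.where(y > 0, np.where(u1 >= u2, s1, s2), fused)
        e = np.exp(-alpha_e) * e + v_e * y       # threshold decays, jumps on firing
    return fused
```

Firing raises the threshold sharply (the `v_e * y` term), so a neuron pulses, rests while the threshold decays, then pulses again; the fusion decision is refreshed at each firing.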
1.3 Rolling guidance filtering
Preprocessing an image with rolling guidance filtering gives full control over detail smoothing at a chosen scale. Unlike other edge-preserving filters, rolling guidance filtering works iteratively, converges quickly, and automatically preserves large structures. Its two main steps are filtering out small structures and recovering the image's edge contours [21]. The procedure is shown in Fig. 3.
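The two steps can be sketched as follows, assuming a Gaussian filter for the small-structure removal and a hand-written joint bilateral filter for the iterative edge-recovery step; all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def joint_bilateral(img, guide, sigma_s=3.0, sigma_r=0.1, radius=4):
    """Joint bilateral filter: spatial weights on img, range weights on guide."""
    h, w = img.shape
    pad_i = np.pad(img, radius, mode='reflect')
    pad_g = np.pad(guide, radius, mode='reflect')
    acc = np.zeros_like(img, dtype=np.float64)
    norm = np.zeros_like(img, dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            sh_i = pad_i[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            sh_g = pad_g[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            w_r = np.exp(-(sh_g - guide) ** 2 / (2 * sigma_r ** 2))
            acc += w_s * w_r * sh_i
            norm += w_s * w_r
    return acc / norm

def rolling_guidance_filter(img, sigma_s=3.0, sigma_r=0.1, n_iter=4):
    # Step 1: remove small structures with a plain Gaussian blur.
    guide = gaussian_filter(img.astype(np.float64), sigma=sigma_s)
    # Step 2: iteratively recover large-scale edges by filtering the
    # input with the previous result as the guidance image.
    for _ in range(n_iter):
        guide = joint_bilateral(img.astype(np.float64), guide, sigma_s, sigma_r)
    return guide
```

The "rolling" comes from step 2: each pass sharpens the large edges that survived the initial blur, while the range weights keep small-scale detail from re-entering.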
2 Improved fusion algorithm
2.1 Fusion procedure
2.2 Improved rolling guidance filtering
2.3 Fusion of low-frequency sub-bands
2.4 Fusion of high-frequency sub-bands
3 Experimental results
3.1 Experimental setup
3.2 Evaluation metrics and comparison of results
3.2.1 Evaluation metrics
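Among the metrics reported in the abstract, the average gradient (AG) and spatial frequency (SF) have widely used textbook definitions; the sketch below follows those standard definitions, which may differ from this paper's exact normalisation:

```python
import numpy as np

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change.

    Larger AG indicates richer detail and sharper texture.
    """
    f = img.astype(np.float64)
    gx = np.diff(f, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(f, axis=0)[:, :-1]   # vertical differences
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(img):
    """Spatial frequency (SF): combined row and column frequency.

    Larger SF indicates more active (higher-contrast) image content.
    """
    f = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

A perfectly flat image scores zero on both metrics, and both grow with local contrast, which is why they are used to quantify texture and detail preservation after fusion.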
3.2.2 Objective and subjective evaluation
4 Conclusion
This paper enhances the CT image with phase consistency rolling guidance filtering, which corrects the overall darkness of the image and the insufficient clarity of its contour structure. The low-frequency sub-bands of the two source images are fused with an improved parameter-adaptive dual-channel PCNN, which matches human visual characteristics closely and helps physicians locate the lesion area quickly. The high-frequency sub-bands are fused with an eight-neighbourhood weighted summation modified Laplacian algorithm, which makes better use of the global information in the high-frequency sub-band images. Multiple groups of comparison experiments show that the algorithm preserves the data of the single-modality medical images well and is stronger at handling image texture, contour clarity and pixel error. Future work will focus on the algorithm's complexity and running speed.
References:
[1]Liu Risheng,Liu Jinyuan,Jiang Zhiying,et al.A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion[J].IEEE Trans on Image Processing,2020,30:1261-1274.
[2]Zhang Jiong,Wang Lifang,Lin Suzhen,et al.Medical image fusion with local-global feature coupling and cross-scale attention[J].Computer Engineering,2023,49(3):238-247.(in Chinese)
[3]Yang Feiyan,Wang Meng.Infrared and visible image fusion based on ST decomposition and VGG deep networks[J].Laser and Optoelectronics Progress,2023,60(2):127-137.(in Chinese)
[4]Lewis J J,O’Callaghan R J,Nikolov S G,et al.Pixel-and region-based image fusion with complex wavelets[J].Information Fusion,2007,8(2):119-130.
[5]Kumar G A E S,Devanna H.Computational effective multimodal medical image fusion in NSCT domain[C]//Proc of IOP Conference Series:Materials Science and Engineering.2021:012003.
[6]Diwakar M,Shankar A,Chakraborty C,et al.Multi-modal medical image fusion in NSST domain for Internet of medical things[J].Multimedia Tools and Applications,2022,81(26):37477-37497.
[7]Diwakar M,Singh P,Shankar A.Multi-modal medical image fusion framework using co-occurrence filter and local extrema in NSST domain[J].Biomedical Signal Processing and Control,2021,68:102788.
[8]Koteswararao K,Swamy K V.Multimodal medical image fusion using NSCT and DWT fusion frame work[J].International Journal of Innovative Technology and Exploring Engineering,2019,9(2):3643-3648.
[9]Zhao Mingju,Peng Yuping.A multi-module medical image fusion method based on non-subsampled shear wave transformation and convolutional neural network[J].Sensing and Imaging,2021,22(1):1-16.
[10]Liu Yu,Liu Shuping,Wang Zengfu.A general framework for image fusion based on multi-scale transform and sparse representation[J].Information Fusion,2015,24:147-164.
[11]Shabanzade F,Ghassemian H.Multimodal image fusion via sparse representation and clustering-based dictionary learning algorithm in nonsubsampled contourlet domain[C]//Proc of the 8th International Symposium on Telecommunications.Piscataway,NJ:IEEE Press,2016:472-477.
[12]Vanitha K,Satyanarayana D,Prasad M N G.Multi-modal medical image fusion algorithm based on spatial frequency motivated PA-PCNN in the NSST domain[J].Current Medical Imaging,2021,17(5):634-643.
[13]Li Shutao,Kang Xudong,Hu Jianwen.Image fusion with guided filtering[J].IEEE Trans on Image Processing,2013,22(7):2864-2875.
[14]Zhang Hui,Han Xinning,Han Huili.Infrared and visible image fusion based on a rolling guidance filter[J].Infrared Technology,2022,44(6):598-603.(in Chinese)
[15]Lin Yingcheng,Cao Dingxin.Adaptive infrared and visible image fusion method by using rolling guidance filter and saliency detection[J].Optik,2022,262:169218.
[16]Zhang Yongxin,Li Deguang,Zhang Ruiling,et al.Sparse features with fast guided filtering for medical image fusion[J].Journal of Medical Imaging and Health Informatics,2020,10(5):1195-1204.
[17]Li Liangliang,Ma Hongbing.Pulse coupled neural network-based multimodal medical image fusion via guided filtering and WSEML in NSCT domain[J].Entropy,2021,23(5):591.
[18]Eckhorn R,Reitbock H J,Arndt M,et al.A neural network for feature linking via synchronous activity[J].Canadian Journal of Microbiology,1989,46:759-763.
[19]Eckhorn R,Reitboeck H J,Arndt M T,et al.Feature linking via synchronization among distributed assemblies:simulations of results from cat visual cortex[J].Neural Computation,1990,2(3):293-307.
[20]Yang Zhen,Lian Jing,Guo Yanan,et al.An overview of PCNN model's development and its application in image processing[J].Archives of Computational Methods in Engineering,2019,26(2):491-505.
[21]Zhang Qi,Shen Xiaoyong,Xu Li,et al.Rolling guidance filter[C]//Proc of European Conference on Computer Vision.Cham:Springer,2014:815-830.
[22]Ganasala P,Kumar V.CT and MR image fusion scheme in nonsubsampled contourlet transform domain[J].Journal of Digital Imaging,2014,27(3):407-418.
[23]Li Bo,Peng Hong,Luo Xiaohui,et al.Medical image fusion method based on coupled neural P systems in nonsubsampled shearlet transform domain[J].International Journal of Neural Systems,2021,31(1):2050050.
[24]Yin Ming,Liu Xiaoning,Liu Yu,et al.Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain[J].IEEE Trans on Instrumentation and Measurement,2019,68(1):49-64.
[25]Li Bo,Peng Hong,Wang Jun.A novel fusion method based on dynamic threshold neural P systems and nonsubsampled contourlet transform for multi-modality medical images[J].Signal Processing,2021,178:107793.
[26]Zhu Rui,Li Xiongfei,Zhang Xiaoli,et al.MRI and CT medical image fusion based on synchronized-anisotropic diffusion model[J].IEEE Access,2020,8:91336-91350.
[27]Li Xiaosong,Zhou Fuqiang,Tan Haishu,et al.Multimodal medical image fusion based on joint bilateral filter and local gradient energy[J].Information Sciences,2021,569:302-325.