
A Tensor-based Enhancement Algorithm for Depth Video

科技视界 (Science & Technology Vision), 2018, Issue 5

YAO Meng-qi, ZHANG Wei-zhong

【Abstract】In order to repair the dark holes in Kinect depth video, we propose a tensor-based depth hole-filling method. First, we process the original depth video with a weighted moving average system. Then, we reconstruct the low-rank and sparse tensors of the video using the tensor recovery method, through which the rough motion saliency is initially separated from the background. Finally, we construct a fourth-order tensor for the moving target by grouping similar patches, so that video denoising and hole filling can be formulated as a low-rank completion problem. In the proposed algorithm, the tensor model preserves the spatial structure of the video, and the patch-based processing overcomes the information loss of traditional frame-by-frame video processing. Experimental results show that our method significantly improves the quality of depth video and is strongly robust.

【Key words】Depth video; Tensor; Tensor recovery; Kinect

CLC number: TN919.81    Document code: A    Article ID: 2095-2457(2018)05-0079-003

        1 Introduction

With the development of depth sensing techniques, depth data is increasingly used in computer vision, image processing, stereo vision, 3D reconstruction, object recognition, etc. As a carrier of human activities, video contains a wealth of information and has become an important means of obtaining real-time information from the outside world. However, due to limitations of the device itself, the capture conditions, lighting and other factors, depth video always contains noise and dark holes, so the quality of the video is far from satisfactory.

For two-dimensional videos, traditional denoising and repairing measures adopt frame-by-frame filtering methods[1]. But consecutive frames contain a large amount of redundant information that frame-based processing cannot exploit. In contrast, representing the whole sequence as a tensor preserves the completeness of the video's inherent structure.

        2 Tensor-based Enhancement Algorithm for Depth Video

        2.1 A weighted moving average system[2]

When Kinect captures video, the measured depth values fluctuate constantly, even at the same pixel position in the same scene. This is called the flickering effect, and it is caused by random noise. In order to suppress this effect, we take the following measures:

1) Use a queue, which represents a discrete set of data and saves the previous N frames of the depth video.

2) Assign weights to the N frames along the time axis; the closer a frame is in time, the smaller its weight.

3) Calculate the weighted average of the depth frames in the queue as the new depth frame.

        In this process, we can adjust the weights and the value of N to achieve the best results.
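For concreteness, the following Python sketch shows one possible implementation of this smoothing step. The queue length N = 5 and the linearly increasing weight scheme are illustrative assumptions, not the values used in the paper, and can be adjusted as described above.

```python
# A minimal sketch of the weighted moving average step, assuming a linear
# weighting scheme and N = 5; both are tunable, as noted in the text.
from collections import deque

import numpy as np


class WeightedMovingAverage:
    """Temporal smoothing of depth frames to suppress the flickering effect."""

    def __init__(self, n_frames=5):
        self.queue = deque(maxlen=n_frames)            # previous N depth frames
        # Illustrative linearly increasing weights along the time axis.
        self.weights = np.arange(1, n_frames + 1, dtype=np.float64)

    def update(self, depth_frame):
        """Push a new depth frame and return the weighted average of the queue."""
        self.queue.append(depth_frame.astype(np.float64))
        k = len(self.queue)                             # queue may not be full yet
        w = self.weights[:k] / self.weights[:k].sum()   # normalize active weights
        stacked = np.stack(self.queue, axis=0)          # shape: (k, H, W)
        return np.tensordot(w, stacked, axes=1)         # weighted average frame


# Usage: feed depth frames one by one.
# smoother = WeightedMovingAverage(n_frames=5)
# for frame in depth_video:
#     smoothed = smoother.update(frame)
```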

        2.2 Low-rank tensor recovery model

Low-rank tensor recovery[3] is also known as high-order robust principal component analysis (high-order RPCA). The model can automatically identify damaged elements in the data and restore the original values. The details are as follows: the original data tensor $\mathcal{D}$ is decomposed into the sum of a low-rank tensor $\mathcal{L}$ and a sparse tensor $\mathcal{S}$:

$$\mathcal{D} = \mathcal{L} + \mathcal{S}.$$

The tensor recovery can be represented as the following optimization problem:

$$\min_{\mathcal{L},\mathcal{S}} \; \operatorname{Trank}(\mathcal{L}) + \lambda \|\mathcal{S}\|_0 \quad \text{s.t.} \quad \mathcal{D} = \mathcal{L} + \mathcal{S}, \qquad (1)$$

where $\mathcal{D},\mathcal{L},\mathcal{S} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N}$, $\operatorname{Trank}(\mathcal{L})$ is the Tucker rank of tensor $\mathcal{L}$, $\|\mathcal{S}\|_0$ counts the nonzero entries of $\mathcal{S}$, and $\lambda > 0$ balances the two terms.

The above tensor recovery problem can be relaxed into the following convex optimization problem:

$$\min_{\mathcal{L},\mathcal{S}} \; \sum_{i=1}^{N} \|\mathcal{L}_{(i)}\|_* + \lambda \|\mathcal{S}\|_1 \quad \text{s.t.} \quad \mathcal{D} = \mathcal{L} + \mathcal{S}, \qquad (2)$$

where $\mathcal{L}_{(i)}$ denotes the mode-$i$ unfolding of $\mathcal{L}$, $\|\cdot\|_*$ is the matrix nuclear norm and $\|\cdot\|_1$ is the entry-wise $\ell_1$ norm.

Typical solutions[4] to the optimization problem in (2) include the Accelerated Proximal Gradient (APG) algorithm and the Augmented Lagrange Multiplier (ALM) algorithm. Considering the accuracy and fast convergence of the ALM algorithm, we use it to solve this optimization problem and generalize it to tensors. According to (2), we formulate the augmented Lagrange function

$$L_\mu(\mathcal{L},\mathcal{S},\mathcal{Y}) = \sum_{i=1}^{N} \|\mathcal{L}_{(i)}\|_* + \lambda \|\mathcal{S}\|_1 + \langle \mathcal{Y}, \mathcal{D} - \mathcal{L} - \mathcal{S} \rangle + \frac{\mu}{2}\|\mathcal{D} - \mathcal{L} - \mathcal{S}\|_F^2,$$

where $\mathcal{Y}$ is the Lagrange multiplier tensor and $\mu > 0$ is the penalty parameter.
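The sketch below illustrates an ALM/ADMM-style iteration for problem (2) in Python. It is a simplified illustration rather than the exact solver of [4]: the low-rank update averages singular value thresholding over all mode unfoldings, and the parameters lam, mu and rho are assumed heuristic choices.

```python
# Simplified sketch of low-rank tensor recovery (high-order RPCA) via an
# augmented-Lagrange-style iteration; parameter choices are assumptions.
import numpy as np


def unfold(T, mode):
    """Mode-`mode` unfolding of tensor T into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)


def fold(M, mode, shape):
    """Inverse of `unfold`: rebuild a tensor of `shape` from its unfolding."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)


def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt


def soft(M, tau):
    """Entry-wise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)


def tensor_rpca(D, lam=None, mu=1e-2, rho=1.1, n_iter=180, tol=1e-7):
    """Split the data tensor D into a low-rank part L and a sparse part S."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))            # common heuristic choice
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                             # Lagrange multiplier tensor
    for _ in range(n_iter):
        # L-update: average SVT over all mode unfoldings of (D - S + Y/mu).
        T = D - S + Y / mu
        L = np.mean(
            [fold(svt(unfold(T, m), 1.0 / mu), m, D.shape) for m in range(D.ndim)],
            axis=0,
        )
        # S-update: soft-threshold the residual.
        S = soft(D - L + Y / mu, lam / mu)
        # Multiplier and penalty updates.
        R = D - L - S
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R) <= tol * np.linalg.norm(D):
            break
    return L, S
```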

        2.3 Similar patches matching

There is great similarity between adjacent frames of a video, so the tensor constructed from the video has a strong low-rank property[5]. For a moving object in the current frame, if the scene is not switched, similar parts should exist in the preceding and following frames. For each frame, set an image patch $b_{i,j}$ of size $a \times a$ as the reference patch. Then set a search window $B(i,j)$ of size $l \cdot (a \times a) \times f$ centered on the reference patch, where $l$ is a positive integer and $f$ is the number of original video frames. The similarity between patches is measured by the MSE, defined as

$$\mathrm{MSE} = \frac{1}{N}\sum_{i,j}\left(C_{i,j} - R_{i,j}\right)^2,$$

where $N = a \times a$ denotes the size of patch $b_{i,j}$, $C_{i,j}$ is the pixel value of the patch being tested, and $R_{i,j}$ is the pixel value of the reference patch. The smaller the MSE, the better the two patches match. We search for image patches $b_{x,y}$ similar to the reference patch within $B(i,j)$ and put their coordinates in the set $\Omega_{i,j}$:

$$\Omega_{i,j} = \{(x, y) : \mathrm{MSE}(b_{i,j}, b_{x,y}) \le t\},$$

where $t$ is a threshold that can be tuned according to the experimental environment. When the MSE is less than or equal to this value, we conclude that the test patch and the reference patch are similar and add its coordinates to $\Omega_{i,j}$. The first $n$ similar patches are then stacked to define a tensor for the subsequent low-rank completion step.
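The following Python sketch illustrates the MSE-based patch search. The patch size, search range, threshold t and number of retained patches n are illustrative assumptions; stacking the selected patches into the higher-order tensor is only indicated in a comment, not implemented.

```python
# Minimal sketch of MSE-based similar-patch matching; parameters are assumptions.
import numpy as np


def patch_mse(p, q):
    """MSE between two a-by-a patches (smaller means more similar)."""
    return np.mean((p.astype(np.float64) - q.astype(np.float64)) ** 2)


def match_similar_patches(video, frame_idx, i, j, a=6, search=8, t=50.0, n=30):
    """Collect up to n patches most similar to the reference patch b_{i,j}.

    video: array of shape (F, H, W); the reference patch sits at (i, j)
    in frame `frame_idx`.  Returns a list of (frame, row, col) coordinates.
    """
    F, H, W = video.shape
    ref = video[frame_idx, i:i + a, j:j + a]
    candidates = []
    for f in range(F):                                # search every frame
        for x in range(max(0, i - search), min(H - a, i + search) + 1):
            for y in range(max(0, j - search), min(W - a, j + search) + 1):
                err = patch_mse(ref, video[f, x:x + a, y:y + a])
                if err <= t:                          # similarity threshold
                    candidates.append((err, f, x, y))
    candidates.sort(key=lambda c: c[0])               # most similar first
    return [(f, x, y) for _, f, x, y in candidates[:n]]


# The selected patches can then be stacked, e.g. into an (a, a, n) group per
# frame, as the input to the low-rank completion step.
```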

        3 Experiment

        3.1 Experiment setting

The experiment uses three videos for testing. Some color image frames of the test videos are shown in Figure 1.

Fig.1. Test videos captured from the Kinect sensor. (a) The background is simple and the moving target is one man. (b) The background is complex, the moving targets are two men, and they are far from the camera. (c) The background is cluttered, and the moving target is the man in the red T-shirt, who is near the camera.

        3.2 Parameter setting

In the same experimental environment, we compare our method with VBM3D[6] and RPCA. For the VBM3D and RPCA algorithms, the source code provided in the literature is used to obtain the best results. For our algorithm, the parameters are set empirically so that the algorithm achieves the best results. In all tests, the parameters are as follows: the number of test frames is 120; the number of similar patches is 30; the patch size is 6×6; the maximum number of iterations is 180; the tolerance thresholds are $\varepsilon_1 = 10^{-5}$ and $\varepsilon_2 = 5 \times 10^{-8}$. We use the Peak Signal-to-Noise Ratio (PSNR)[7] to quantitatively measure the quality of the denoised video images, and the visual effect of the video enhancement can be observed directly.
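For reference, a minimal PSNR computation might look as follows; the peak value of 255 assumes 8-bit data and is an assumption here.

```python
# Minimal PSNR sketch; `peak` = 255 assumes 8-bit pixel values.
import numpy as np


def psnr(reference, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; larger values mean better quality."""
    mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```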

        3.3 Experiment results

In order to measure the quality of the processed images, we refer to the PSNR value, whose unit is dB; the larger the value, the better the quality. As can be seen from Table 1, in the same experimental environment, the proposed method performs better than the other methods on the three groups of test videos. Fig.2 shows the enhancement result for the moving object after removing the background with our method.

As can be seen from Figure 3, the proposed method removes noise very well and basically restores the texture structure of the video. The video enhancement effect is satisfactory.

Fig.2. The enhancement result of the moving object after removing the background with our method. (a)(b)(c) Depth video frame screenshots from original depth videos a, b and c. (d)(e)(f) The enhancement results of the moving object in videos a, b and c respectively.

Fig.3. Depth video enhancement results. (a)(b)(c) Depth video frame screenshots from original depth videos a, b and c respectively. (d)(e)(f) The enhancement results for videos a, b and c respectively.

Fig.4. Comparison results (partial enlarged views) of our method and the other methods (VBM3D and RPCA). (a)(b)(c) The enhancement results (partial enlarged views) of depth videos a, b and c respectively with our method. (d)(e)(f) The corresponding results with VBM3D. (g)(h)(i) The corresponding results with RPCA.

We compare the results of our method with those of the VBM3D and RPCA methods. In order to make the experimental results clearer, we show partial enlarged views. By comparison, we can see that our method is superior to the other methods in denoising, repairing holes and preserving edges.

        4 Conclusion

In this paper, we propose a tensor-based enhancement algorithm for depth video that combines a tensor recovery model with patch-based video repair. Experimental results show that the proposed method can effectively remove interference noise while maintaining edge information, and it is superior to traditional methods in depth video processing.

        References

        [1]Liu J, Gong X. Guided inpainting and filtering for Kinect depth maps[C]. IEEE International Conference on Pattern Recognition, 2012:2055-2058.

        [2]Zhang X, Wu R. Fast depth image denoising and enhancement using a deep convolutional network[C]//Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016: 2499-2503.

        [3]Xie J, Feris R S, Sun M T. Edge-guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438.

[4]Wright J, Ganesh A, Min K, et al. Compressive principal component pursuit[C]. IEEE International Symposium on Information Theory (ISIT), 2012.

        [5]Chang Y J, Chen S F, Huang J D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities.[J]. Research in Developmental Disabilities, 2011, 32(6):2566-2570.

        [6]Bang J Y, Ayaz S M, Danish K, et al. 3D Registration Using Inertial Navigation System And Kinect For Image-Guided Surgery[J]. 2015, 977(8):1512-1515.

[7]Wang Z, Hu J, Wang S, et al. Trilateral constrained sparse representation for Kinect depth hole filling[J]. Pattern Recognition Letters, 2015, 65: 95-102.

        久久91精品国产91久久麻豆| 国产精品ⅴ无码大片在线看| 精品熟女少妇av免费观看| 亚洲中文字幕av天堂| 亚洲一区二区岛国高清| 伊人久久综合无码成人网| 特级做a爰片毛片免费看108| 国产欧美日韩综合一区二区三区| 乱人伦视频69| 伊人久久综合狼伊人久久| 一本久久a久久免费综合| 国产成人一区二区三区影院动漫| 男女肉粗暴进来120秒动态图| 一级午夜视频| 精选二区在线观看视频| 久久伊人精品中文字幕有尤物| 一本久久综合亚洲鲁鲁五月天| 国产精品欧美一区二区三区| 亚洲特黄视频| 色婷婷精品国产一区二区三区 | 国产又色又爽又黄的| 波多野结衣aⅴ在线| 激情内射亚洲一区二区| 我要看免费久久99片黄色| 久久国内精品自在自线图片| 久久av高潮av喷水av无码| 亚洲激情视频在线观看a五月| 亚洲成人免费av影院| 另类老妇奶性生bbwbbw| 高清国产一级毛片国语| 人妻少妇中文字幕专区| 亚洲av日韩av女同同性| 亚洲另类自拍丝袜第五页| 亚洲一区二区三区中文视频| 国产精品久久久久久久久免费观看| 成年毛片18成年毛片| 人妻少妇69久久中文字幕| 99久久精品日本一区二区免费| 国产精在线| 在线精品亚洲一区二区三区| 免费观看全黄做爰大片|