
A Tensor-based Enhancement Algorithm for Depth Video

2018-05-07 07:05:28
        科技視界 (Science & Technology Vision), 2018, Issue 5

YAO Meng-qi  ZHANG Wei-zhong

【Abstract】In order to repair the dark holes in Kinect depth video, we propose a depth hole-filling method based on tensors. First, we process the original depth video with a weighted moving average system. Then we reconstruct the low-rank tensor and the sparse tensor of the video using the tensor recovery method, through which the rough motion saliency can be initially separated from the background. Finally, we construct a fourth-order tensor for the moving-target part by grouping similar patches, so that the video denoising and hole-filling problem can be formulated as a low-rank completion problem. In the proposed algorithm, the tensor model is used to preserve the spatial structure of the video, and the patch-based processing overcomes the information loss of traditional frame-based video processing. Experimental results show that our method significantly improves the quality of depth video and has strong robustness.

【Key words】Depth video; Tensor; Tensor recovery; Kinect

CLC number: TN919.81    Document code: A    Article ID: 2095-2457(2018)05-0079-003

        1 Introduction

With the development of depth sensing techniques, depth data is increasingly used in computer vision, image processing, stereo vision, 3D reconstruction, object recognition, etc. As a carrier of human activities, video contains a wealth of information and has become an important way to obtain real-time information from the outside world. However, due to the limitations of the device itself, the capture conditions, lighting, and other factors, depth video always contains noise and dark holes, so the quality of the video is far from satisfactory.

For two-dimensional video, traditional denoising and repair measures adopt frame-based filtering methods[1]. However, consecutive frames contain a great deal of redundant information that frame-by-frame processing cannot exploit, which causes considerable trouble. In contrast, representing the whole video as a tensor preserves the completeness of the video's inherent structure, which motivates the method proposed in this paper.

        2 Tensor-based Enhancement Algorithm for Depth Video

        2.1 A weighted moving average system[2]

When Kinect captures video, the measured depth values fluctuate constantly, even at the same pixel position of the same scene. This is called the flickering effect, and it is caused by random noise. To avoid this effect, we take the following measures:

1) Use a queue, representing a discrete set of data, which stores the previous N frames of the depth video.

2) Assign weights to the N frames along the time axis: the farther a frame is from the current frame, the smaller its weight.

3) Compute the weighted average of the depth frames in the queue and use it as the new depth frame.

In this process, the weights and the value of N can be adjusted to achieve the best results; a sketch of this step is given below.
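As an illustration, here is a minimal NumPy sketch of such a weighted moving average, assuming 2-D depth frames. The window length, the linear weight ramp, and the class name WeightedMovingAverage are illustrative choices, not values or names from the paper.

```python
import collections
import numpy as np

class WeightedMovingAverage:
    """Weighted moving average over the last N depth frames (sketch)."""

    def __init__(self, n_frames=5):
        # Queue that keeps only the most recent n_frames depth frames.
        self.frames = collections.deque(maxlen=n_frames)
        # Linear ramp: older frames (front of the queue) get smaller weights.
        # Any weighting scheme that decreases with age would fit the
        # description in the text equally well.
        self.weights = np.arange(1, n_frames + 1, dtype=np.float64)

    def update(self, depth_frame):
        """Push the latest depth frame and return the weighted-average frame."""
        self.frames.append(depth_frame.astype(np.float64))
        w = self.weights[-len(self.frames):]      # weights for the frames present
        stack = np.stack(self.frames, axis=0)     # shape: (k, H, W)
        return (w[:, None, None] * stack).sum(axis=0) / w.sum()
```

In use, each incoming Kinect frame would be passed to update(), and the returned averaged frame replaces it in the subsequent processing.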

        2.2 Low-rank tensor recovery model

Low-rank tensor recovery[3] is also known as high-order robust principal component analysis (High-order RPCA). The model can automatically identify the damaged elements in the data and restore the original data. The details are as follows: the original data tensor D is decomposed into the sum of a low-rank tensor L and a sparse tensor S, i.e. D = L + S.

The tensor recovery can be represented as the following optimization problem:

        min_{L,S} Trank(L) + λ‖S‖₀   s.t.   L + S = D,                (1)

        where D, L, S ∈ R^(I1×I2×…×IN), Trank(L) is the Tucker rank of the tensor L, ‖S‖₀ counts the nonzero entries of S, and λ > 0 balances the two terms.

The above tensor recovery problem can be transformed into the following convex optimization problem, in which the Tucker rank is replaced by the sum of the nuclear norms of the mode-n unfoldings L_(n) and the ℓ0 norm by the ℓ1 norm:

        min_{L,S} Σ_{n=1}^{N} ‖L_(n)‖_* + λ‖S‖₁   s.t.   L + S = D.                (2)

For the optimization problem in (2), typical solvers[4] include the Accelerated Proximal Gradient (APG) algorithm and the Augmented Lagrange Multiplier (ALM) algorithm. In view of the accuracy and fast convergence of the ALM algorithm, we use it to solve this optimization problem and generalize it to tensors. From (2), we formulate the augmented Lagrange function

        L_μ(L, S, Y) = Σ_{n=1}^{N} ‖L_(n)‖_* + λ‖S‖₁ + ⟨Y, D − L − S⟩ + (μ/2)‖D − L − S‖_F²,

        where Y is the Lagrange multiplier tensor and μ > 0 is the penalty parameter.
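As an illustration, the following is a minimal NumPy sketch of this low-rank plus sparse decomposition. It is not the paper's exact solver: the proximal step for the sum of unfolding nuclear norms is approximated by averaging mode-wise singular value thresholding results, and the parameter choices (λ = 1/√(max dimension), the initial μ, and the growth factor ρ) are common heuristics rather than the paper's settings.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: rebuild the tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def svt(M, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Entrywise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def tensor_rpca(D, lam=None, mu=1e-2, rho=1.2, max_iter=180, tol=1e-7):
    """Split D into a low-rank tensor L and a sparse tensor S with D = L + S."""
    shape, n_modes = D.shape, D.ndim
    if lam is None:
        lam = 1.0 / np.sqrt(max(shape))           # matrix-RPCA heuristic
    L, S, Y = np.zeros(shape), np.zeros(shape), np.zeros(shape)
    for _ in range(max_iter):
        # L-update: average of mode-wise SVT, a simple surrogate for the
        # proximal step of the sum of unfolding nuclear norms.
        L = np.mean([fold(svt(unfold(D - S + Y / mu, m), 1.0 / mu), m, shape)
                     for m in range(n_modes)], axis=0)
        # S-update: entrywise soft thresholding.
        S = shrink(D - L + Y / mu, lam / mu)
        # Dual ascent on the constraint D = L + S.
        residual = D - L - S
        Y = Y + mu * residual
        mu *= rho
        if np.linalg.norm(residual) / max(np.linalg.norm(D), 1.0) < tol:
            break
    return L, S
```

Applied to a depth video stacked as a 3rd-order tensor (height × width × frames), L captures the static background and S the rough motion saliency, which is the separation used in the next step.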

        2.3 Similar patches matching

There is great similarity between consecutive frames of a video, so the tensor constructed from the video has a strong low-rank property[5]. For a moving object in the current frame, if the scene is not switched, similar parts should appear in the preceding and following frames. For each frame, set an image patch b_{i,j} of size a×a as the reference patch. Then set a window B(i,j) = l·(a×a)×f centered on the reference patch, where l is a positive integer and f is the number of original video frames. The similarity criterion between patches is the MSE, defined as

        MSE = (1/N) Σ (C_ij − R_ij)²,

        where N = a×a denotes the size of patch b_{i,j}, C_ij is the pixel value of the patch currently being tested, and R_ij is the pixel value of the reference patch. The smaller the MSE, the better the two patches match. Search for patches b_{x,y} similar to the reference patch within B(i,j), and put their coordinates into the set Ω_{i,j}:

        Ω_{i,j} = {(x, y) : MSE(b_{x,y}, b_{i,j}) ≤ t},

        where t is a threshold that can be tuned according to the experimental environment. When the MSE is less than or equal to t, the tested patch and the reference patch are judged to be similar, and its coordinates are added to Ω_{i,j}. The first n similar patches are then stacked to form a tensor, which is the grouped-patch tensor used in the low-rank completion step.
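A minimal sketch of this patch grouping is given below, assuming grayscale depth frames stored as 2-D arrays. The search-window radius, the threshold t, the padding rule for groups with fewer than n matches, and the function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def patch_mse(candidate, reference):
    """MSE similarity criterion between two a-by-a patches."""
    diff = candidate.astype(np.float64) - reference.astype(np.float64)
    return np.mean(diff ** 2)

def group_similar_patches(frames, f_ref, i, j, a=6, search=12, n=30, t=50.0):
    """Group the n patches most similar to the reference patch at (i, j).

    frames : list of 2-D depth frames; the search covers every frame.
    Returns an (a, a, n) array of grouped patches; stacking such groups over
    consecutive reference frames yields the 4th-order tensor used later.
    """
    ref = frames[f_ref][i:i + a, j:j + a]          # reference patch b_{i,j}
    candidates = []
    for frame in frames:
        h, w = frame.shape
        for x in range(max(0, i - search), min(h - a, i + search) + 1):
            for y in range(max(0, j - search), min(w - a, j + search) + 1):
                patch = frame[x:x + a, y:y + a]
                mse = patch_mse(patch, ref)
                if mse <= t:                       # similarity threshold t
                    candidates.append((mse, patch))
    candidates.sort(key=lambda item: item[0])      # best matches first
    group = [p for _, p in candidates[:n]]
    while len(group) < n:                          # pad if too few matches
        group.append(ref)
    return np.stack(group, axis=-1)
```

The grouped-patch tensor built this way is then completed with the low-rank model of Section 2.2, which performs the actual denoising and hole filling.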

        3 Experiment

        3.1 Experiment setting

The experiment uses three videos for testing. Some color image frames of the test videos are shown in Figure 1.

Fig.1. Test videos captured from the Kinect sensor. (a) The background is simple; the moving target is a man. (b) The background is complex; the moving targets are two men, far from the camera. (c) The background is cluttered; the moving target is the man in a red T-shirt, near the camera.

        3.2 Parameter setting

In the same experimental environment, we compare our method with VBM3D[6] and RPCA. For the VBM3D and RPCA algorithms, the source code provided by the respective authors is used to obtain the best results. For our algorithm, the parameters are all set empirically so that it achieves its best results. In all tests, the parameters are set as follows: the number of test frames is 120; the number of similar patches is 30; the patch size is 6×6; the maximum number of iterations is 180; and the tolerance thresholds are ε1 = 10^-5 and ε2 = 5×10^-8. We use the Peak Signal-to-Noise Ratio (PSNR)[7] to quantitatively measure the quality of the denoised video frames, and the visual effect of the enhancement can also be observed directly.
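For reference, PSNR is computed from the mean squared error between the processed frame and the reference frame. A minimal sketch follows, assuming 8-bit depth maps (peak value 255), which is an assumption about the data range rather than a stated setting of the paper.

```python
import numpy as np

def psnr(reference, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two frames of equal size."""
    mse = np.mean((reference.astype(np.float64) -
                   processed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```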

        3.3 Experiment results

To measure the quality of the processed images, we refer to the PSNR value, whose unit is dB; the larger the value, the better the quality. As can be seen from Table 1, in the same experimental environment, the proposed method performs better than the other methods on all three test videos. Fig.2 shows the enhancement results for the moving objects after removing the background with our method.

As can be seen from Figure 3, the proposed method removes noise very well and basically restores the texture structure of the video. The visual effect of the video enhancement is satisfactory.

Fig.2. The enhancement results for the moving objects after removing the background with our method. (a)(b)(c) Depth frame screenshots from original depth videos a, b, and c. (d)(e)(f) The enhancement results for the moving objects in videos a, b, and c, respectively.

Fig.3. Depth video enhancement results. (a)(b)(c) Depth frame screenshots from original depth videos a, b, and c, respectively. (d)(e)(f) The enhancement results for videos a, b, and c, respectively.

Fig.4. Comparison results (partial enlarged views) of our method and the other methods (VBM3D and RPCA). (a)(b)(c) The enhancement results (partial enlarged views) of depth videos a, b, and c with our method. (d)(e)(f) The enhancement results (partial enlarged views) of depth videos a, b, and c with VBM3D. (g)(h)(i) The enhancement results (partial enlarged views) of depth videos a, b, and c with RPCA.

We compare the results of our method with those of VBM3D and RPCA. To make the comparison clearer, we show partial enlarged views in Fig.4. By comparison, our method is superior to the other methods in denoising, hole repair, and edge preservation.

        4 Conclusion

In this paper, we propose a tensor-based enhancement algorithm for depth video that combines a tensor recovery model with patch grouping. Experimental results show that the proposed method effectively removes interference noise while maintaining edge information, and it is superior to traditional methods for depth video processing.

        References

        [1]Liu J, Gong X. Guided inpainting and filtering for Kinect depth maps[C]. IEEE International Conference on Pattern Recognition, 2012:2055-2058.

        [2]Zhang X, Wu R. Fast depth image denoising and enhancement using a deep convolutional network[C]//Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on. IEEE, 2016: 2499-2503.

        [3]Xie J, Feris R S, Sun M T. Edge-guided single depth image super resolution[J]. IEEE Transactions on Image Processing, 2016, 25(1): 428-438.

[4]Wright J, Ganesh A, Min K, Ma Y. Compressive principal component pursuit[C]. IEEE International Symposium on Information Theory (ISIT), 2012; submitted to Information and Inference, 2012.

        [5]Chang Y J, Chen S F, Huang J D. A Kinect-based system for physical rehabilitation: a pilot study for young adults with motor disabilities.[J]. Research in Developmental Disabilities, 2011, 32(6):2566-2570.

        [6]Bang J Y, Ayaz S M, Danish K, et al. 3D Registration Using Inertial Navigation System And Kinect For Image-Guided Surgery[J]. 2015, 977(8):1512-1515.

[7]Wang Z, Hu J, Wang S, Lu T. Trilateral constrained sparse representation for Kinect depth hole filling[J]. Pattern Recognition Letters, 2015, 65: 95-102.
