
        A Sensor-Service Collaboration Approach for Target Tracking in Wireless Camera Networks

        China Communications, 2017, Issue 7

        Shuai Zhao , Le Yu

        1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications,Beijing, 100876, China

        2 China Mobile Information Security Center, Beijing 100033, China

        * The corresponding author, email: zhaoshuaiby@bupt.edu.cn

        I. INTRODUCTION

        Target detection and tracking is one of the most important computer vision problems, involving the task of inferring the movement of objects from a series of images. Researchers have proposed many tracking approaches tailored to their intended application areas. Surveillance and security control is a typical application: targets are first detected with radar and detection sensors and then tracked with camera surveillance systems. There are many other application areas, such as virtual reality, smart homes, intelligent rehabilitation and lane tracking for driver assistance. Wireless Camera Networks (WCNs) use cameras to capture visual information from the monitored area and process the information locally with built-in processing components. WCNs can provide richer information than other wireless sensor networks. Meanwhile, camera tracking is an effective location-tracking method that can reuse existing, ubiquitous video monitoring systems. However, the process of camera target tracking is usually resource-intensive, underutilized and high-cost [1].

        In order to improve the pervasiveness of these emerging applications, low-cost, versatile and accurate target tracking mechanisms are in demand. In this paper, we propose an approach that employs cameras to track targets in WCNs. Most researchers have proposed collaborative mechanisms that use multi-view information to track targets [1, 2]. Cerpa et al. [3] employed multiple cameras around the target to collect information, which can make the tracking task more complete, accurate and reliable. The cluster structure is usually used to analyze sensor data collaboratively: a cluster head (CH) and many neighboring sensors acting as cluster members compose the cluster structure.

        This paper proposes a camera tracking approach along with a perspective projection model as the sensing model and sequential Monte Carlo (SMC) as the tracking approach.

        After the cameras are deployed, traditional clusters are formed statically: the area each cluster covers and the members each cluster possesses are fixed. The static collaborative mechanism suffers from the following challenges:

        Information fusion: Sensors in different clusters cannot work cooperatively or share information.

        Fault tolerance: If sensors fail, a cluster may not have enough sensors to perform the target tracking tasks.

        Poor feasibility: A static mechanism is not suitable for highly dynamic scenarios. As the target moves, cameras near the movement trajectory should become active to capture view information.

        To meet these challenges, researchers have proposed dynamic collaborative mechanisms. For example, an optimized, reconfiguration-based, tree-structured cluster was employed for moving target tracking. The work of [4] proposed an information-driven sensor querying method in which a leader sensor estimates and selects the best neighboring sensors to carry out tracking and serve as the next leader. The above works usually employ the omnidirectional sensing model common in sensor networks, but a camera sensor network needs a specialized sensing model. Therefore, these works are unfit for camera sensor networks. A collaborative target tracking scheme must address two important problems:

        1) Target Detection and Tracking: designing a location and tracking approach that predicts the target location accurately under the low-complexity requirements of wireless camera sensor networks.

        2) Collaborative Mechanisms: selecting clusters dynamically for target tracking in order to balance the tradeoff between network energy consumption and the quality of target tracking.

        In order to perform the target tracking task in WCNs, this paper proposes a target sensing model for the camera sensors and a dynamic community-based camera collaboration (D3C) framework that deploys a dynamic cluster structure. It combines the perspective projection model and color histograms into a sensing model that is nonlinear and takes observation noise into account. Then, sequential Monte Carlo (SMC) technology is employed to predict the target location. In WCNs, camera sensors are deployed under some coverage strategy, so a camera collaboration graph can be constructed from the camera sensor locations. The cluster structure can then be obtained by clustering nodes on this graph. The cluster structure is similar to the community structure in social network analysis, where a community is a group of nodes connected to each other more densely than to other nodes. The main contributions of this paper include the following:

        1) A camera collaboration graph is formed based on a distance metric after the cameras are deployed. Then, a community detection method is used to discover the overlapping community structure of the camera collaboration graph;

        2) In the target tracking procedure, a probability estimation algorithm is proposed to select the CH;

        3) An optimization-based method is proposed that selects a group of camera sensors as cluster members (CMs) from the neighbors belonging to the CH's community. Fig.1 shows the process of D3C.

        The remainder of the paper is organized as follows: Section 2 briefly discusses related work. Section 3 presents assumptions and preliminaries. Section 4 presents the dynamic node collaboration scheme and the SMC-based tracking model. Section 5 discusses experimental results obtained with the model on camera networks and in simulation, and conclusions are given in Section 6.

        II. RELATED WORK

        In target detection and tracking applications, low power, low bandwidth, security concerns, and the difficulty of fusing multi-view information centrally at a single station promote the development of distributed camera network frameworks. Meanwhile, using a single camera to locate and track a target may encounter object occlusion and appearance variation across views. Multi-camera systems overcome some limitations of single-camera systems and enlarge the observation area. Reference [1] is concerned with the relations between two or more linear images of spatial objects, and the reconstruction of these spatial objects. A multi-camera collaboration mechanism may incur extra energy cost, so effective camera subset selection methods are needed. There are many survey papers related to multi-camera surveillance [2, 5, 7, 8]. Existing works focus on applications such as object localization, tracking, person re-identification, security and privacy protection, and activity analysis in multi-camera surveillance. These applications are part of contemporary research addressed by computer vision scientists and researchers across the world. The survey [8] aims to provide researchers with a state-of-the-art overview of techniques for multi-camera coordination and control that have been adopted in surveillance systems. Researchers employ camera coordination mechanisms for cooperative sensing and tracking in surveillance systems; many formal approaches have been proposed for cooperative tracking, such as game theory, decision theory, control theory and other heuristic approaches. The survey [7] reviews image enhancement, object localization and tracking, and moving-target behavior analysis; it also covers the fusion of multiple camera sensors, camera calibration and cooperative camera systems in wide-area surveillance. Researchers have also focused on target tracking in camera sensor networks.
The work of [6] considered object detection, tracking, recognition, and pose estimation in a distributed camera network. In [24], the cooperative cameras are divided into a subgroup of active cameras and a subgroup of static cameras. The static cameras capture and integrate images to detect and track objects, while the active cameras capture high-resolution images to track objects; the two kinds of cameras work together to improve the effectiveness of object tracking. The work [25] developed a multi-camera tracking system for indoor environments. Considering object occlusions, the camera with the best visibility of an object is assigned to track it, and each camera employs a Kalman filter to perform human tracking. The system has a defect: an object that reappears in the field of view of overlapping cameras is treated as a new object. The work of [26] proposes an unsupervised probabilistic learning method that fuses a large number of observations from different cameras; the method probabilistically estimates not only the location but also the time at which a target may reappear. The work [27] proposes a method for multi-camera image tracking in the context of image surveillance; the authors take the tracking problem a stage further by merging the 2D observations from each camera view into global 3D world-coordinate object tracks. The work [28] proposed a distributed algorithm in which each camera independently estimates local paths in its neighborhood from the observations sent by its neighbors and a local probabilistic transition model.

        The paper [11] presents a method for creating camera coalitions in resource-constrained networks and demonstrates collaborative target tracking. Coalition formation is treated as a decentralized resource allocation process in which the best cameras among those viewing the target are selected as a coalition using marginal utility theory. In the paper [9], nineteen trackers were selected from a wide variety of popular algorithms and evaluated on datasets covering many typical conditions, such as occlusion, camera motion, low contrast and illumination changes, with the F-score and the object tracking accuracy (OTA) as evaluation metrics. The results showed that the Kalman filter is efficient and reliable for most target tracking scenarios, but it is confined to the restricted class of linear Gaussian problems. To go beyond this restricted class, the particle filter was shown to be a reliable method for stochastic dynamic estimation [10]; particle filter methods are employed to solve nonlinear and non-Gaussian problems.
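As a concrete illustration of the particle-filter (SMC) idea referenced above, the following is a minimal bootstrap filter sketch for a one-dimensional state. The random-walk motion model and Gaussian measurement likelihood are illustrative assumptions for this sketch, not the paper's actual models:

```python
import random
import math

def particle_filter_step(particles, weights, measurement,
                         motion_std=0.5, meas_std=1.0):
    """One bootstrap-filter iteration: predict, weight, resample."""
    # Predict: propagate each particle through a random-walk motion model.
    particles = [p + random.gauss(0.0, motion_std) for p in particles]
    # Update: weight each particle by the Gaussian measurement likelihood.
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

def estimate(particles):
    """Posterior mean estimate of the state."""
    return sum(particles) / len(particles)
```

Because prediction, weighting and resampling make no linearity or Gaussianity assumptions about the models, this structure handles the nonlinear camera sensing model where a Kalman filter cannot.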

        III. PRELIMINARY AND PROBLEM DEFINITION

        The following sections introduce the object tracking scheme in a camera sensor network. The target motion model and the camera sensing model are described by a state function and a measurement function, respectively. Finally, an efficient dynamic node-collaboration method is proposed to track the moving object by integrating images from the nodes.

        IV. PROPOSED TRACKING ALGORITHM

        4.1 Overview of the proposed tracking algorithm

        As shown in Fig.2, a set of camera sensors is deployed in a surveillance region, and circles represent the community structure. The target traverses the region. To save power, each camera node can be in active mode or sleep mode. In sleep mode, the camera node records lower-quality video and serves a surveillance function only; in active mode, each camera can sense the target and send and receive data.

        After the camera sensors are deployed, the camera sensor collaboration graph can be formed based on their locations. Considering the distances between cameras, the overlapping communities of the camera sensor network can be discovered. The overlapping community structure helps to find dynamic clusters of cameras during the target tracking process, which can be described as follows:

        1) Initialization:

        All calculations are performed on a centralized server, which stores the deployment graph and the overlapping communities of the camera sensor network. In the process of target location and tracking, the participating cameras send their surveillance videos to the server.

        2) Cluster Head Selection

        When an object appears in the monitored area, the server randomly selects one camera as cluster head from among the cameras that can detect the target. As the target moves, the probability estimation algorithm estimates the next possible position of the object, and the camera nearest the estimated position is selected as the next cluster head.
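The paper's probability estimation algorithm is not reproduced in this excerpt; the sketch below substitutes a simple constant-velocity prediction and a nearest-camera rule to illustrate the head-selection step. The camera layout, dictionary structure and function names are hypothetical:

```python
import math

def predict_next(track):
    """Constant-velocity prediction from the last two position estimates
    (a stand-in for the paper's probability estimation algorithm)."""
    (x1, y1), (x2, y2) = track[-2], track[-1]
    return (2 * x2 - x1, 2 * y2 - y1)

def select_cluster_head(cameras, track):
    """Return the id of the camera nearest the predicted target position.

    cameras: dict mapping camera id -> (x, y) position.
    track:   list of past (x, y) target position estimates.
    """
    px, py = predict_next(track)
    return min(cameras, key=lambda cid: math.hypot(cameras[cid][0] - px,
                                                   cameras[cid][1] - py))
```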

        3) Cluster Member Activation

        Then, cameras within the same community as the cluster head are switched into active mode and transmit the collected video to the server.

        4) Collaborative Mechanisms

        The centralized server fuses all videos from the different perspectives to locate and track the target. For cameras with overlapping views, the video information obtained by the multiple cameras is transmitted to the server for fusion analysis, which matches the centroid coordinates of the moving target across views to obtain the target position. For cameras whose views do not overlap, only that camera's video information is used for localization, i.e., single-camera positioning.

        These steps are repeated until the target leaves the region, and the server records the trajectory of the target.

        4.2 Sensing model of camera sensor network

        Camera networks are useful in a variety of multimedia sensor network applications, including area surveillance, object tracking and environmental monitoring. As shown in Figure 3, some cameras are randomly deployed in a workspace. The model integrates multi-view images captured by the cameras to locate and track objects collaboratively. In fact, all tracking applications share a common procedure that selects the measurements to be incorporated into the state estimator from among several measurement candidates [14-16]. The target locating strategy and the localization-oriented sensing model for cameras are discussed in detail below.

        Background subtraction is one of the most widely used methods for detecting moving targets in video sequences from static cameras [17]. Moving targets are detected from the difference between the current frame and a reference frame, often called the “background model” [18]. Since there are multiple objects in our environment, color histograms are used to distinguish and identify the moving targets.
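A minimal sketch of these two steps, assuming NumPy arrays for frames and a fixed background model; the paper's actual detector (e.g. ViBe [17]) is considerably more sophisticated:

```python
import numpy as np

def detect_foreground(frame, background, thresh=30):
    """Mark as foreground every pixel whose absolute difference from the
    background model exceeds a threshold (basic background subtraction)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=-1) > thresh  # boolean (H, W) mask

def color_histogram(frame, mask, bins=8):
    """Normalized per-channel histogram of the foreground pixels, used as a
    simple appearance signature to tell moving targets apart."""
    pixels = frame[mask]  # (N, 3) array of masked RGB values
    hists = [np.histogram(pixels[:, c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(3)]
    return np.concatenate(hists)
```

Targets detected in successive frames can then be matched by comparing their histogram signatures, e.g. with a Bhattacharyya distance.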

        In this projection, the first matrix is the transposed matrix that describes the transformation from the camera coordinate system to the image coordinate system.

        Fig. 3 An example of camera network

        Fig. 4 The central perspective projection model

        Therefore, the target location is:

        Furthermore, many issues may cause errors, such as the accuracy of the background subtraction method and the identification of the target; both the background subtraction and the color histogram steps contribute to these errors.

        4.3 Multi-dimensional Vision Location

        As shown in Fig.6, target A(X,Y,Z) lies in the world coordinate system. Two cameras, Ol and Or, are employed to locate and track target A; they acquire images Cl and Cr respectively. al(ul,vl) and ar(ur,vr) are the positions of target A in Cl and Cr. Connecting each camera's optical center with the target point forms two straight lines, AOl and AOr, and their intersection is the location of target A. To locate A(X,Y,Z), it is necessary to locate the two cameras and determine the mapping relationship between al(ul,vl), ar(ur,vr) and A(X,Y,Z).

        Observed from the overhead perspective, points (x,y), (x1,y1) and (x2,y2) represent the coordinates of target A and of cameras Ol and Or respectively. The intersection of the two straight lines ((x1,y1)-(x,y) and (x2,y2)-(x,y)) is the position of the target, which can be obtained by calculating the angles a1 and a2. The angle between the line from (x1,y1) to point A and the line from (x1,y1) to point B is the field of view (FOV), and the line from (x1,y1) to point C is the FOV axis. The FOV angle θ, which equals 2*(a1+β), is known; thus if β is calculated, a1 can be obtained. β can be calculated as follows:

        Fig. 5 Perspective projection model

        Fig. 6 Multi-dimensional vision location

        Fig. 7 Projection of multi-dimensional vision location

        Then the position of the target can be determined. If multiple cameras observe the same target, an exact solution satisfying all the equations may not exist; in that case the least squares method is used to find an approximate solution that minimizes the mean square error over all equations.
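Both the two-camera intersection and its multi-camera least-squares generalization can be sketched as follows, under the simplifying assumption that each camera reports a world-frame bearing angle to the target (derived from the FOV geometry above); function names are illustrative:

```python
import math
import numpy as np

def triangulate(cam1, a1, cam2, a2):
    """Intersect two bearing rays (world-frame angles, radians) from two
    camera positions to recover the 2D target position."""
    (x1, y1), (x2, y2) = cam1, cam2
    # Each ray: (x, y) = cam + t * (cos a, sin a). Solve the 2x2 system.
    d1 = (math.cos(a1), math.sin(a1))
    d2 = (math.cos(a2), math.sin(a2))
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t = ((x2 - x1) * (-d2[1]) + (y2 - y1) * d2[0]) / det
    return (x1 + t * d1[0], y1 + t * d1[1])

def locate_least_squares(cameras, angles):
    """Stack one bearing-line equation per camera and solve the
    overdetermined system in the least-squares sense."""
    A, b = [], []
    for (cx, cy), a in zip(cameras, angles):
        # Line through (cx, cy) with direction angle a:
        #   sin(a)*x - cos(a)*y = sin(a)*cx - cos(a)*cy
        A.append([np.sin(a), -np.cos(a)])
        b.append(np.sin(a) * cx - np.cos(a) * cy)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(sol)
```

With noisy bearings the stacked system is inconsistent, and `lstsq` returns exactly the minimum-mean-square-error estimate described in the text.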

        4.4 Target Tracking in WCNs

        4.5 Dynamic Collaboration Scheme

        Fig. 8 (a) illustrates the camera collaboration network. Community detection can be applied to find connected groups, and the active region can be divided into many overlapping grids. This paper uses the Clique Percolation Method (CPM) [22], an overlapping community detection algorithm, to discover the community structure of the camera collaboration network. Fig.8 (b) illustrates the communities detected by CPM. CPM builds the community structure from k-cliques, i.e., complete subgraphs of k nodes; a community is defined as the largest union of k-cliques that can reach each other through a series of adjacent k-cliques. Using this community detection algorithm, the collaboration communities among cameras can be obtained, and the following dynamic cluster selection is based on this community structure.
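A brute-force sketch in the spirit of CPM [22], where two k-cliques are adjacent if they share k-1 nodes; the clique enumeration here is exhaustive and suitable only for small camera graphs (the original algorithm is more efficient):

```python
from itertools import combinations

def k_clique_communities(edges, k=3):
    """Find k-cliques, join those sharing k-1 nodes, return node unions."""
    nodes = sorted({n for e in edges for n in e})
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Enumerate all k-cliques by brute force.
    cliques = [set(c) for c in combinations(nodes, k)
               if all(v in adj[u] for u, v in combinations(c, 2))]
    # Union-find over cliques: adjacent iff they share k-1 nodes.
    parent = list(range(len(cliques)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(cliques)), 2):
        if len(cliques[i] & cliques[j]) >= k - 1:
            parent[find(i)] = find(j)
    # Each connected component of cliques forms one community.
    groups = {}
    for i, c in enumerate(cliques):
        groups.setdefault(find(i), set()).update(c)
    return list(groups.values())
```

Note that a node may belong to several returned communities, which is exactly the overlapping structure the D3C framework relies on.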

        Fig. 8 An example of WCNs

        V. EXPERIMENTS

        5.1 Experiment on real-world application

        In this section, the D3C scheme is implemented in the laboratory for a practical application. The laboratory is approximately 18 meters long and 20 meters wide, and fifteen camera sensors are deployed in the room. Fig.9 illustrates the experiment environment, in which many office facilities are arranged. The target moves with uniform motion, and the red line records the motion trail. The experiment machine has a quad-core 2.5 GHz CPU and 24 GB RAM.

        Fig. 9 Top view of the experiment environment

        We compare the algorithm with existing methods to validate its effectiveness. In this paper, the Extended Kalman Filter (EKF) is employed as the baseline. The EKF is a nonlinear version of the Kalman filter that linearizes about the current mean and covariance.
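For a linear constant-velocity motion model with position measurements, the EKF baseline reduces to the standard Kalman filter. The following one-dimensional sketch illustrates the predict/update cycle; the noise parameters are illustrative, not the experiment's actual settings:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=0.01, r=0.5):
    """Constant-velocity Kalman filter over 1D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    estimates = []
    for z in measurements:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)   # update state
        P = (np.eye(2) - K @ H) @ P             # update covariance
        estimates.append(float(x[0, 0]))
    return estimates
```

In the EKF proper, F and H are replaced by Jacobians of the nonlinear motion and measurement functions evaluated at the current estimate; the predict/update structure is unchanged.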

        Fig. 11 shows the experiment result of the EKF. The blue points denote the true states of the moving target and the green lines denote the target trail; the red regions denote the confidence region. The results of Fig.10 and Fig.11 show that D3C outperforms the EKF in accuracy.

        Table I Target trajectories

        5.2 Experiments on Large Scale Camera Networks

        As shown in Fig.12 (a), 500 cameras are deployed in region S. We generate 22 points (see “*” in Fig.12 (b)) on the target trail. D3C selects the cluster cameras (head and members) to estimate the target location at each time step. In Fig.12 (b), “+” denotes the estimated target location. The results show that, for each k, the estimated positions are close to the actual target positions in the simulation scenario.

        Fig. 10 Experiment result of D3C

        VI. CONCLUSION

        This paper proposes a camera tracking approach along with a perspective projection model as the sensing model and SMC as the tracking approach. To implement SMC, this paper employs local community detection from social network analysis to form dynamic clusters of candidate nodes that track the target collaboratively.

        Fig. 11 Experiment result of EKF

        Fig. 12 The experiment result (“.” denotes the camera sensor, “*” denotes the true state location and “+” denotes the estimated target location)

        Experimental evaluations on both real-world and synthetic datasets show that the proposed approach is effective in tracking mobile targets. Moreover, D3C meets the versatility, real-time and fault-tolerance requirements of target tracking applications.

        There are many potential future directions for this work. It would be interesting to combine the camera deployment strategy with the community detection method when selecting dynamic clusters, and to study how to identify multiple heterogeneous objects in application environments.

        ACKNOWLEDGEMENTS

        This work is supported by the National Natural Science Foundation of China (Grant No. 61501048); the National High-tech R&D Program of China (863 Program) (Grant No. 2013AA102301); the Fundamental Research Funds for the Central Universities (Grant No. 2017RC12); and the China Postdoctoral Science Foundation (Grant No. 2016T90067, 2015M570060).

        [1] R. Hartley and A. Zisserman, Multiple view geometry in computer vision: Cambridge university press, 2003.

        [2] Joshi K A, Thakore D G. A Survey on Moving Object Detection and Tracking in Video Surveillance System[J]. International Journal of Soft Computing & Engineering, 2012, 2(3).

        [3] A. Cerpa, J. Elson, D. Estrin, L. Girod, M. Hamilton, and J. Zhao, “Habitat monitoring: Application driver for wireless communications technology,” ACM SIGCOMM Computer Communication Review, vol. 31, pp. 20-41, 2001.

        [4] F. Zhao, J. Shin, and J. Reich, “Information-driven dynamic sensor collaboration,” Signal Processing Magazine, IEEE, vol. 19, pp. 61-72, 2002.

        [5] Aghajan H, Cavallaro A. Multi-Camera Networks: Principles and Applications[M]. Academic Press, 2009.

        [6] A. C. Sankaranarayanan, A. Veeraraghavan, and R. Chellappa, “Object detection, tracking and recognition for multiple smart cameras,” Proceedings of the IEEE, vol. 96, pp. 1606-1624,2008.

        [7] Kim I S, Hong S C, Yi K M, et al. Intelligent visual surveillance — A survey[J]. International Journal of Control, Automation and Systems, 2010,8(5):926-939.

        [8] Natarajan P, Atrey P K, Kankanhalli M. Multi-Camera Coordination and Control in Surveillance Systems: A Survey[J]. ACM Transactions on Multimedia Computing, Communications & Applications, 2015, 11(4):1-30.

        [9] A. W. Smeulders, D. M. Chu, R. Cucchiara, S. Calderara, A. Dehghan, and M. Shah, “Visual tracking: An experimental survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence,vol. 36, pp. 1442-1468, 2014.

        [10] B. Ristic, S. Arulampalam, and N. Gordon, “Beyond the Kalman filter,” IEEE Aerospace and Electronic Systems Magazine, vol. 19, pp. 37-38,2004.

        [11] J. C. SanMiguel and A. Cavallaro, “Cost-aware coalitions for collaborative tracking in resource-constrained camera networks,” IEEE Sensors Journal, vol. 15, pp. 2657-2668, 2015.

        [12] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” Signal Processing, IEEE Transactions on, vol. 50, pp.174-188, 2002.

        [13] P. Chavali and A. Nehorai, “Scheduling and Power Allocation in a Cognitive Radar Network for Multiple-Target Tracking,” Signal Processing,IEEE Transactions on, vol. 60, pp. 715-729, 2012.

        [14] J. Berclaz, F. Fleuret, E. Türetken, and P. Fua,“Multiple object tracking using k-shortest paths optimization,” Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 33, pp.1806-1819, 2011.

        [15] C. Hue, J.-P. Le Cadre, and P. Pérez, “Sequential Monte Carlo methods for multiple target tracking and data fusion,” Signal Processing, IEEE Transactions on, vol. 50, pp. 309-325, 2002.

        [16] C. Aeschliman, J. Park, and A. C. Kak, “A probabilistic framework for joint segmentation and tracking,” in Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on,2010, pp. 1371-1378.

        [17] O. Barnich and M. Van Droogenbroeck, “ViBe: A universal background subtraction algorithm for video sequences,” Image Processing, IEEE Transactions on, vol. 20, pp. 1709-1724, 2011.

        [18] M. Fiala, “ARTag, a fiducial marker system using digital techniques,” in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, 2005, pp. 590-596.

        [19] V. A. Petrushin, G. Wei, and A. V. Gershman,“Multiple-camera people localization in an indoor environment,” Knowledge and information systems, vol. 10, pp. 229-241, 2006.

        [20] L. Liu, X. Zhang, and H. Ma, “Dynamic node collaboration for mobile target tracking in wireless camera sensor networks,” in INFOCOM 2009,IEEE, 2009, pp. 1188-1196.

        [21] A. O. Ercan, D. B. Yang, A. El Gamal, and L. J.Guibas, “Optimal placement and selection of camera network nodes for target localization,”in Distributed computing in sensor systems, ed:Springer, 2006, pp. 389-404.

        [22] G. Palla, I. Derényi, I. Farkas, and T. Vicsek, “Uncovering the overlapping community structure of complex networks in nature and society,”Nature, vol. 435, pp. 814-818, 2005.

        [23] Q. Ye, B. Wu, L. Suo, T. Zhu, C. Han, and B. Wang,“Telecomvis: Exploring temporal communities in telecom networks,” in Machine Learning and Knowledge Discovery in Databases, ed: Springer, 2009, pp. 755-758.

        [24] Micheloni C, Foresti G L, Snidaro L. A cooperative multicamera system for video-surveillance of parking lots[J]. IEE Symposium on Intelligent Distributed Surveillance Systems, 2003:5/1-5/5.

        [25] Nam T. Multiple Camera Coordination in a Surveillance System[J]. 2003, 29(3):408-422.

        [26] Makris D, Ellis T, Black J. Bridging the gaps between cameras[C]// Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on. IEEE, 2004:II-205-II-210 Vol.2.

        [27] Black J, Ellis T. Multi camera image tracking[J].Image & Vision Computing, 2006, 24(11):1256-1267.

        [28] Kim H, Wolf M. Distributed tracking in a largescale network of smart cameras[C]// ACM/IEEE International Conference on Distributed Smart Cameras. ACM, 2010:8-16.
