Jiawei Wu , Xiuquan Qiao, Junliang Chen
Beijing University of Posts and Telecommunications, Beijing 100876, China
Abstract: The concurrent presence of different types of traffic in multimedia applications can aggravate the burden on the underlying data network, which inevitably affects the transmission quality of the specified traffic. Recently, several proposals for fulfilling quality of service (QoS) guarantees have been presented. However, they can only support coarse-grained QoS, with no guarantee of throughput, jitter, delay or loss rate for different applications. To address these more challenging problems, this paper proposes an adaptive scheduling algorithm for Parallel data Processing with Multiple Feedback (PPMF) queues based on software defined networks (SDN), which can guarantee the quality of service of high-priority traffic in multimedia applications. PPMF combines a queue bandwidth feedback mechanism to adjust the queue bandwidth automatically according to the packet priority and network conditions, which effectively solves the problem of some queues experiencing network congestion for long periods. Experimental results show that PPMF significantly outperforms other existing scheduling approaches, achieving a 35--80% improvement in average time delay by adjusting the bandwidth adaptively, thus ensuring the transmission quality of the specified traffic and effectively avoiding network congestion.
Keywords: multimedia streams; software defined networks; quality of service; priority-based adaptive feedback queues
To achieve a higher transmission rate and lower packet loss in multimedia applications, a flexible resource scheduling mechanism (FRSM) for the data packet forwarding rate and reliability is necessary [1]. Although various QoS schemes and mechanisms (e.g., combinations of QoS parameters) have been proposed for FRSM in multimedia applications, there is no one-size-fits-all algorithm [2]. Multi-queue management and scheduling (MQMS), as the core technology of FRSM, has been widely studied. However, compared with the QoS-APIs routing algorithm proposed by W. Kim [3], the existing MQMS methods (e.g., first-in-first-out (FIFO) queues or stochastic fairness queueing (SFQ)) still have many issues. For instance, when a queue is full, incoming packets are dropped even while other queues may be idle. In particular, malicious flows may consume the majority of the queue bandwidth and block subsequent flows from entering the queue. This is because these methods lack congestion avoidance and feedback adjustment mechanisms. In addition, guaranteeing the QoS of the designated traffic largely requires manual per-device configuration by network administrators [3]. These operations are not only prone to human errors, leading to critical service disruptions, but can also only support coarse-grained QoS for different applications [3].
This happens because current QoS architectures cannot continually adapt to changes in the network state and lack a broader picture of the overall network resources [4]. Hence, to better cope with the aforementioned problems, this paper proposes an intelligent feedback adjustment mechanism based on SDN that can guarantee reliable transmission of the designated traffic with low latency and a low packet loss rate.
SDN can control the behavior of the entire network by separating the control layer from the forwarding layer, making the network switches in the data plane simple, application-aware packet forwarding devices [4]. Meanwhile, the control layer of the SDN architecture can perceive any change in network topology and traffic, which provides a new solution for topology management and may improve the reliability of data forwarding. In addition, SDN's global network view allows bandwidth provisioning schemes to be adjusted adaptively and network resources to be utilized more efficiently, which is infeasible in traditional network architectures [4]. Consequently, the SDN concept quickly gained significant attention from the research community after the introduction of OpenFlow in 2008 [5].
Recently, several proposals for fulfilling QoS guarantees in SDN environments have been presented; they are reviewed in Section 2. They implement flow scheduling using various scheduling models, such as HiQoS [7] and QVR (QoS-aware Virtualization-enabled Routing) [8]. Specifically, HiQoS provides QoS guarantees by combining queue mechanisms with multipath technology. However, HiQoS cannot guarantee the QoS of the specified traffic because it does not consider queue priority; moreover, when bursty traffic occurs, it may lose data packets because it lacks the ability to handle network congestion among the wide variety of data flows that multimedia applications spread across multiple queues. QVR combines virtualization technology to slice network resources among multi-tenant applications and can decide optimal flow routes to fulfill the QoS guarantees for applications. However, the existing QoS-guarantee solutions in SDN [1], [7], [8], [9], [13] neither consider the concurrent presence of multiple traffic types with a multiple-feedback-queue scheme nor detect transient congestion caused by link variations or bursty traffic. The main contribution of this paper is an optimisation framework with multiple feedback queues and a congestion avoidance mechanism for reliable data forwarding over SDN.
In this paper, we propose a feedback mechanism for the queue bandwidth of SDN networks. We aim to solve a crucial problem that arises when deploying such a feedback mechanism.
As shown above, all these QoS-guarantee solutions are based on approaches completely different from their counterparts in traditional networks, mainly due to the decoupled architecture of SDN. However, as SDN becomes more widely accepted, one critical challenge to its applicability is whether a controller running a network operating system (OS), together with integrated applications for routing and resource allocation, can provide an intuitive and promising solution for multimedia applications. To this end, we propose an adaptive feedback control algorithm for Parallel data Processing with Multiple Feedback (PPMF) queues to improve the transmission quality of the specified traffic and effectively avoid network congestion under the concurrent presence of different types of traffic in multimedia applications.
To the best of our knowledge, this is the first proposal to address multiple feedback queues under the queue delay constraint in OpenFlow networks. Our experimental results verify the effectiveness of the proposed algorithm.
The contributions of this paper are summarized as follows:
1) We design a new QoS architecture with a feedback mechanism for the queue bandwidth that also incorporates a congestion avoidance mechanism based on SDN. This paper differs from existing QoS architectures in offering a new adaptive scheduling scheme for parallel processing that adjusts the queue bandwidth based on the level of queue congestion and the queue priority.
2) PPMF has queue anti-interference characteristics, which guarantee the transmission quality of the specified traffic.
3) PPMF enables end-to-end QoS guarantees by using the adaptive feedback mechanism for the queue bandwidth under network congestion, and supports traffic-aware adaptive queue bandwidth adjustment that manages dynamic flows quickly with low computational complexity (i.e., O(1)).
4) PPMF adopts a new adaptive feedback control algorithm, which can effectively prevent malicious flows from occupying bandwidth for long periods.
The remainder of the paper is organized as follows. We discuss related work in Section 2. Section 3 proposes the PPMF provisioning algorithm, including its architecture, its scheduling model, and its algorithm implementation.
In Section 4, we present the simulation results and an analysis of the feasibility of this proposed algorithm. Finally, we conclude this work and present future work in Section 5.
Inspired by the promising prospects of SDN, previous researchers have tried to carry real-time multimedia traffic over SDN networks. For example, Yan et al. [7] propose a QoS-guarantee solution named HiQoS for SDN that guarantees QoS for different types of traffic through queuing mechanisms. However, as the bandwidths of all the queues are fixed, the proposed system may have difficulty guaranteeing the transmission quality of the specified traffic during congestion. Furthermore, Yang [9] and M. F. Ramdhani et al. [10] offer admission control algorithms that can avoid network congestion. However, their approaches require appropriate modification and implementation of cross-layer protocols, which makes them very complex to implement. Note that S.-C. Lin et al. [8] propose an adaptive feedback management solution for SDN that guarantees end-to-end QoS by combining network virtualization with flow allocation. They present different QoS routing problems, their challenges, and the corresponding QoS routing schemes. However, the authors only focus on flow isolation and prioritization as well as dynamic flow allocation, without considering congestion avoidance mechanisms. Furthermore, their schemes are not conceived for multimedia applications. By contrast, Egilmez et al. [12] propose a new prioritisation scheme that uses manually configured rules rather than a priority queuing mechanism to provide QoS guarantees for multimedia flows over SDN networks. Yet, Egilmez et al. [12] only consider the impact on best-effort traffic and ignore the benefits of guaranteeing its bandwidth, which could then be utilised by the designated traffic in times of insufficient bandwidth. Similar to our work, [13] targets the concurrent presence of different types of traffic in multimedia applications and proposes a hybrid scheduling model that combines priority queueing with packet general-processor sharing to provide diverse QoS guarantees. However, [13] focuses only on the hybrid packet scheduling scheme (i.e., the combination of priority queueing and packet general-processor sharing) without considering a congestion avoidance mechanism for bursty traffic.
In this section, we present the design of PPMF, including the PPMF architecture and the PPMF scheduling model.
Fig. 1 shows the system framework of the proposed scheme based on SDN. This optimisation framework is an extension of the standard OpenFlow controller and provides QoS guarantees for high-priority traffic in multimedia applications. It includes two key functional modules and the interactions between them. One is a data forwarding optimisation module, which is mainly used to optimise the current network environment and to adjust the feedback of the traffic flows based on the different priority queues and the congestion level. The other is a routing module, which is responsible for collecting the up-to-date network state and managing the synchronisation and centralised configuration of the entire network topology information. These two modules reside in the application layer of the SDN architecture and communicate with the controller via northbound open APIs. Similarly, the data plane, which communicates with the controller via southbound open APIs such as the OpenFlow protocol [14], consists of OpenFlow switches that perform packet lookups and forward packets among the network of switches. Our solution sets three separate priority queues on each switch port in order to control the port traffic of the physical node directly. The corresponding traffic is then forwarded to the queues with different priorities by setting the maximum and minimum queue bandwidths. Thus, the efficiency of data forwarding with different priorities can be guaranteed.
The proposed controller offers various interfaces and functions; it is responsible for maintaining the topology, classifying traffic, generating and sending flow tables, and controlling data forwarding. The controller also periodically detects failures in the network based on feedback from the OpenFlow (OF) switches and instantly modifies the switches' flow tables.
When a client requests a video packet service, it sends a connection request to the nearest OpenFlow switch. The connection request is matched against the flow table according to the predefined flow definitions in the switch. In case of a successful match, the action(s) specified in the rule are executed; hence the data matched by the flow tables is the enqueued data. If there is no matching rule in the flow tables, the packet is either dropped or a Packet_in message containing the requested information is sent to the controller for processing. The controller parses the Packet_in message and determines the traffic type using the Type of Service (ToS) field in the IPv4 header or the source IP address. It then forwards the corresponding traffic to the queue with the appropriate priority. During network operation, the controller can also adjust the queue bandwidth adaptively according to the bandwidth adjustment strategy when the network status changes.
In SDN networks, multimedia applications generate a large number of data flows that are treated differently by data plane devices in order to meet their different QoS requirements. To guarantee the QoS requirements of the different types of traffic, all flows are categorized into three classes with different priorities depending on their delay and bandwidth demands.
These three traffic classes are described as follows.
1) Traffic class A: bandwidth-sensitive and delay-sensitive traffic for real-time applications;
2) Traffic class B: delay-sensitive traffic;
3) Traffic class C: best-effort traffic without any specific requirement (e.g., FTP traffic).
According to the real-time traffic requirements, the highest priority is assigned to class A and the lowest priority to class C. When the feedback adjustment mechanism starts, the different traffic classes are scheduled to different priority queues. These queues are then served by a scheduling policy (i.e., the bandwidth adjustment strategy for the three priority queues) that controls the order in which the different types of traffic are forwarded.
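The ToS-based classification described above can be sketched as follows. This is a minimal illustration, not the paper's exact configuration: the DSCP cut-off values and the queue identifiers q0--q2 are assumptions chosen only to show the class-to-queue mapping.

```python
# Sketch: map a packet's IPv4 ToS byte to one of the three traffic
# classes and to its priority queue (q0 highest .. q2 lowest).
# The DSCP cut-offs below are illustrative assumptions.

def classify(tos: int) -> str:
    """Return the traffic class (A/B/C) for an IPv4 ToS byte."""
    dscp = tos >> 2          # DSCP is the upper six bits of the ToS byte
    if dscp >= 40:           # e.g. EF/CS5 and above: real-time audio/video
        return "A"
    if dscp >= 16:           # e.g. AF2x/CS2 and above: delay-sensitive
        return "B"
    return "C"               # best-effort (FTP etc.)

QUEUE_OF = {"A": "q0", "B": "q1", "C": "q2"}

def enqueue_target(tos: int) -> str:
    """Return the priority queue a packet with this ToS byte joins."""
    return QUEUE_OF[classify(tos)]
```

For example, a packet marked with DSCP 46 (Expedited Forwarding) would be classified as class A and placed in the highest-priority queue q0.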
The proposed algorithm provides QoS guarantees to designated streams; that is, it optimises routing dynamically to ensure delivery of the higher-priority traffic within specified constraints. Our bandwidth adaptive adjustment algorithm is based on three separate priority queues and incorporates a feedback mechanism for the queue bandwidth. In addition, PPMF makes full use of SDN's centralised control function and solves the problem that traditional networks cannot directly manage the equipment and the network topology.
3.3.1 Detection of queue congestion
The OpenFlow-based controller does not directly provide standards-based REST APIs for obtaining the current queue length; it only provides the data traffic transmitted by the current queue (i.e., the traffic dequeued from the queue). To obtain the current queue length, the traffic entering the queue must also be known. In the SDN centralized control architecture, all data forwarding is carried out by matching flow tables, so the traffic entering a queue can be derived from the match counters of the flow entries that direct packets into that queue. However, we cannot guarantee consistent timestamps between the traffic entering the queue and the traffic dequeuing it, and there is no way to avoid data bursts in a real network environment. For example, if a queue is initially empty, then quickly filled and quickly emptied, this alone cannot be taken to mean that the queue is congested [15].
To reflect the current network congestion status accurately, the random early detection (RED) congestion control mechanism [13] is applied to each traffic-class queue in the proposed algorithm. Next, we calculate the average queue length and queue delay as shown in the following equations. The average queue size accurately reflects the average delay at the router [15].
The average queue length is updated from each sample of the real-time queue length q as

    avg = (1 - w_q) * avg + w_q * q        (1)

where avg and w_q represent the average queue length and the queue weight, respectively. The queue weight w_q is determined by the size and duration of bursts in queue size and acts as the time constant of this averaging filter; it must satisfy

    L + 1 + ((1 - w_q)^(L+1) - 1) / w_q < min_th        (2)

where L and min_th represent a burst of L packets arriving at the router and a given minimum threshold on the average queue length, respectively. Here, we use w_q = 0.002, following [15]. For large w_q, the proposed algorithm cannot filter out transient congestion caused by link variations. In contrast, for small w_q, it cannot reflect the current network congestion status exactly, so the router cannot effectively detect early congestion. Thus, it is necessary to set an appropriate weight to balance the average queue length, which avoids fluctuations in the average queue length caused by data bursts or transient congestion. In addition, the real-time queue length q in Eq. (3) is obtained from the difference between the traffic rate entering the queue, in_rate, and the traffic rate dequeuing the queue, out_rate, accumulated over a sampling interval T:

    q = (in_rate - out_rate) * T        (3)
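The three estimates above can be sketched in a few lines. This is an assumed reading of Eqs. (1), (3) and (4), not the authors' implementation; in particular, the sampling-interval argument and the unit conventions are assumptions.

```python
# Sketch of the RED-style estimates: exponentially weighted average
# queue length (Eq. 1), real-time queue length from the enqueue and
# dequeue rates (Eq. 3), and queue delay at the current queue
# bandwidth (Eq. 4). Units are assumed consistent (e.g. packets,
# packets/s, seconds).

W_Q = 0.002  # queue weight, per [15]

def avg_queue_len(avg: float, q: float, w_q: float = W_Q) -> float:
    """Eq. (1): avg <- (1 - w_q) * avg + w_q * q."""
    return (1.0 - w_q) * avg + w_q * q

def queue_len(in_rate: float, out_rate: float, interval: float) -> float:
    """Eq. (3): backlog accumulated over one sampling interval."""
    return max(0.0, (in_rate - out_rate) * interval)

def queue_delay(avg: float, now_width: float) -> float:
    """Eq. (4): time to drain the average backlog at the current rate."""
    return avg / now_width
```

For instance, a queue that receives 10 units/s and drains 4 units/s over a 2 s interval accumulates a backlog of 12 units; at a queue rate of 4 units/s that backlog corresponds to a 3 s delay.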
The queuing delay can then be obtained as shown in equation (4):

    delay = avg / now_width        (4)

where now_width is the current queue rate (i.e., the current bandwidth of the queue).
3.3.2 Feedback adjustment based on traffic flow
Currently, packets are forwarded via the queues with different priorities created on each switch port in the data plane. Although this method can guarantee the forwarding efficiency of some data, it lacks the ability to monitor the traffic flows. In SDN networks, data satisfying the matched rules enters the queues directly, which may leave some queues full or congested. Data waiting to enter such a queue is then delayed or even lost, while other queues remain idle. In this scenario, some queues starve (i.e., carry no data) while others suffer packet loss, because the traffic flows are not balanced across the port's queues as a whole.
To deal with the problem of bandwidth utilisation across all of a port's queues, we design a more effective feedback adjustment mechanism that adaptively balances the queue bandwidth according to the current traffic state of the network in an SDN environment. The queue delay is the main parameter used to evaluate the current traffic state of the network, as described in Section 3.3.1. Our feedback adjustment mechanism distinguishes three levels of network congestion based on the relation between the queue delay and a given delay threshold: when the queue delay is less than 70% of the threshold, the queue status is healthy; when it exceeds 90% of the threshold, the queue status is severe congestion; and when it lies between the two, the queue status is moderate congestion [17]. The queue bandwidth is then adjusted adaptively according to this level.
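The three-level classification above can be expressed directly. A minimal sketch, assuming the 70%/90% cut-offs are applied to the ratio of the measured queue delay to the per-queue delay threshold:

```python
# Sketch: map a queue's measured delay to one of the three congestion
# levels, given its delay threshold (the 70%/90% cut-offs follow the
# text; the numeric level encoding follows Section 3.3.3).

HEALTHY, MODERATE, SEVERE = 0, 1, 2

def congestion_level(delay: float, threshold: float) -> int:
    """Classify a queue by the ratio of its delay to its threshold."""
    ratio = delay / threshold
    if ratio < 0.70:
        return HEALTHY
    if ratio <= 0.90:
        return MODERATE
    return SEVERE
```

A queue at 50% of its delay budget is healthy, one at 80% is moderately congested, and one at 95% is severely congested.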
In the process of adjusting the three queues' bandwidths, the higher a queue's priority, the stronger its ability to seize bandwidth from the other queues. Conversely, the lower a queue's priority, the greater the possibility that its bandwidth is preempted and the weaker its QoS guarantee. For instance, when a higher-priority queue seizes bandwidth from a lower-priority queue, it must be guaranteed that the donor queue is not seriously congested. In contrast, a low-priority queue may preempt bandwidth from a higher-priority queue only while that higher-priority queue remains healthy at all times.
Figure 2 depicts the implementation process for the bandwidth adjustment. It shows that the preemption capacities of the three priority queues differ: the higher the priority, the stronger the ability to avoid congestion. If a large amount of packet loss persists after the three queues' bandwidths have been adjusted, all three queues are in a state of congestion. To solve this global congestion problem, the feedback information from the data plane should be processed and sent to the controller.
3.3.3 Feedback adjustment provisioning algorithm
Fig. 2. The implementation process for the bandwidth adjustment.
Algorithm 1 shows the detailed procedure by which the different priority queues adjust their bandwidth. Lines 1--2 are the two statements of the main function, where the queue parameters and the congestion level of the current network are obtained, and Lines 6, 12, 27, 32, and 42 calculate the queue bandwidth borrowed from the other queues according to the congestion level using Eq. (3). Specifically, length(id=q2), length(id=q1) and length(id=q0) represent the average queue lengths of queues q2, q1 and q0, respectively. Tmpwidth is the variable that stores the bandwidth currently borrowed from the corresponding queue according to the real-time network congestion status of the three queues. Threshold2, Threshold1 and Threshold0 represent the delay thresholds of queues q2, q1 and q0, respectively. In this algorithm, the numbers 0, 1 and 2 represent the healthy, moderate congestion and severe congestion states of a queue, respectively. Several if-else-endif statements covering Lines 3--48 optimise a queue's bandwidth by preempting the other queues' bandwidth according to the congestion level and the queue priority: preempting another queue's bandwidth increases the existing queue's bandwidth and reduces the donor queue's bandwidth accordingly.
Note that Lines 3--20, Lines 26--35, and Lines 40--48 show how to adjust the bandwidths of queues q0, q1 and q2, respectively, according to the bandwidth adjustment strategy for the three priority queues. Specifically, when network congestion occurs for the low-priority queue q2, q2 can only seize the bandwidth of queue q1 and must ensure the healthy status of the higher-priority queues q0 and q1 at all times while preempting bandwidth (as shown in Lines 40--48). Importantly, if a large amount of packet loss persists after the three queues' bandwidths have been adjusted (i.e., all three queues are congested), the bandwidth adjustment strategy has fully utilized the port bandwidth (as shown in Lines 15--17 and Lines 46--48). To address this, the feedback (i.e., congestion) information from the data plane should be sent to the controller. The controller parses the message, checks the current network status, and recalculates a new route, replacing old flow entries with new ones. When links in the network change, all the queue bandwidths are readjusted, which demonstrates that our algorithm is adaptive. In this algorithm, nowWidth0, nowWidth1, and nowWidth2 represent the current bandwidths of queues q0, q1, and q2, respectively. The implementation process of the algorithm is shown in figure 2.
In this algorithm, Lines 3--21, Lines 24--34, and Lines 39--47 represent the order of flow forwarding for class A, class B and class C traffic, respectively. Also, by setting maximum and minimum bandwidths for the three queues, our proposed PPMF algorithm bounds each queue's bandwidth and thereby guarantees the QoS of the designated traffic.
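The preemption rules of Algorithm 1 can be condensed into a short sketch. This is one possible reading under stated assumptions: the fixed borrow step size and the exact guard conditions are illustrative, not the paper's pseudocode, but the ordering follows the text, namely that a congested queue may borrow only from a donor that is not severely congested, and the lowest-priority queue q2 may borrow from q1 only while both q0 and q1 stay healthy.

```python
# Condensed sketch of the Algorithm 1 preemption rules (assumed
# reading): priorities are q0 > q1 > q2; a congested queue borrows a
# fixed step of bandwidth from a donor permitted by the guards.

HEALTHY, MODERATE, SEVERE = 0, 1, 2

def adjust(width, level, step=5.0):
    """width/level: dicts keyed 'q0','q1','q2' (Mbps / congestion
    level). Returns the adjusted bandwidths as a new dict."""
    w = dict(width)
    # q0 (highest priority): may borrow from q2 or q1 unless the
    # donor is already severely congested.
    if level["q0"] != HEALTHY:
        for donor in ("q2", "q1"):
            if level[donor] != SEVERE and w[donor] > step:
                w[donor] -= step
                w["q0"] += step
                break
    # q1: may borrow from q2 under the same donor guard.
    elif level["q1"] != HEALTHY and level["q2"] != SEVERE and w["q2"] > step:
        w["q2"] -= step
        w["q1"] += step
    # q2 (lowest): may borrow from q1 only while q0 and q1 stay healthy.
    elif (level["q2"] != HEALTHY and level["q0"] == HEALTHY
          and level["q1"] == HEALTHY and w["q1"] > step):
        w["q1"] -= step
        w["q2"] += step
    return w
```

Each call touches a bounded number of queues and comparisons, which is consistent with the O(1) complexity claimed for the algorithm.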
The time complexity of this algorithm mainly depends on the method for ensuring the data forwarding efficiency of the higher-priority traffic by adjusting the three queues' bandwidths based on the three queue statuses. Hence, the time complexity of this algorithm is O(1).
In this section, we present numerical results evaluating the performance of the PPMF algorithm proposed in Section 3. Specifically, we describe our simulation environment and compare the proposed PPMF algorithm with other existing scheduling algorithms to showcase its merits.
We evaluate the performance of the proposed traffic-flow-based feedback adjustment algorithm through simulations, using the Mininet emulator to create our network topology [16], as shown in figure 3. In figure 3, the hosts are connected via the SDN switch, and an additional host acts as the Floodlight controller, which is linked to the SDN switch. We customised the iperf network testing tool and generated test flows (e.g., flows 0--2) for the three priority queues, as shown in table 1. For further reference, the hardware and software versions used are collected in table 2. In the following experiments, flow 0 is assigned to the highest-priority queue q0, while flow 1 and flow 2 are assigned to queues q1 and q2, respectively; q2 is the lowest-priority queue of the three.
Figure 4 depicts the impact of an increase in one queue's data on the other queues. No matter how many packets are sent by Host1, even if packet loss occurs, the time delay and packet loss rate observed when Host4 receives the packets sent by Host2 do not increase. This is because the PPMF algorithm uses queuing mechanisms to guarantee network bandwidth for the different types of traffic, so there is no interference between traffic classes. This verifies that the proposed algorithm has the queue anti-interference property.
Table I. Network experiment flows.
Table II. Con figuration hardware and software.
Fig .3. Network topology.
Fig. 4. The anti-interference queues test.
Table III. Average Time delay (ms)
Figure 5 shows the average time delay achieved by PPMF, HiQoS [7] and LiQoS [7]. Here, we define the average time delay as the ratio of the total time taken to receive the packets to the total number of received packets. We test the proposed PPMF algorithm against bursty traffic. As the number of transmitted packets increases, PPMF achieves a significant reduction in average delay with stable performance, outperforming HiQoS and LiQoS significantly. This is because PPMF can detect transient congestion and adaptively adjust the queue bandwidth according to the current network status via the feedback adjustment mechanism. PPMF also greatly improves resource utilization owing to its use of multiple feedback queues. Compared with PPMF, the flows allocated by LiQoS suffer from large fluctuations due to its use of a shortest-path algorithm, whereas HiQoS mitigates the wide fluctuations in time delay by combining multipath routing with the queue mechanism, as it can provide available bandwidth. Table 3 shows the average time delay for different traffic loads (i.e., the amount of data transferred by the system) under the three algorithms. When the traffic load is 30,000 packets, the results from PPMF do not differ significantly from those of HiQoS and LiQoS. As the traffic load increases, PPMF achieves a 35--80% improvement in average time delay.
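The delay metric and the reported improvement can be restated as simple formulas; this is only a restatement of the definitions above, not the authors' measurement code:

```python
# Sketch of the evaluation metrics: average time delay (total
# receiving time over packets received) and the relative improvement
# of PPMF over a baseline scheduler.

def average_delay(receive_times_ms):
    """Average time delay: total receiving time / received packets."""
    return sum(receive_times_ms) / len(receive_times_ms)

def improvement(baseline_ms, ppmf_ms):
    """Relative reduction in average delay versus a baseline."""
    return (baseline_ms - ppmf_ms) / baseline_ms
```

Under this definition, reducing an average delay of 100 ms to 40 ms corresponds to a 60% improvement, within the 35--80% range reported above.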
Figure 6 shows the sensitivity of the different priority queues to the same data transfer. As the number of transmitted packets increases, the average queue delay and packet loss rate of the higher-priority queues remain lower than those of the lower-priority queues. This is because a higher-priority queue preempts more queue bandwidth, which increases its transmission opportunities and improves its data forwarding ability. In other words, the higher the priority of the queue, the stronger its data forwarding ability and the lower its sensitivity to the data load.
Figure 7 and figure 8 show how the delay variation (jitter) and queue bandwidth change under network congestion when the queue bandwidth feedback mechanism operates across the different priority queues. In figure 7, queue q1 becomes congested to some extent as the number of data packets increases, because the sensitivity of the lower-priority queue data increases with the level of queue congestion; thus queue q1 borrows bandwidth from queue q2, which accordingly increases q1's bandwidth and decreases q2's bandwidth. As a result, the delay of queue q1 decreases and the delay of queue q2 increases. However, owing to the increase in q2's delay after lending bandwidth, queue q2 enters a state of severe congestion. Thus, according to the principle of the queue bandwidth feedback scheduling algorithm, only the available bandwidths of queues q0 and q1 can be considered in the final 30 seconds, as shown in figure 8. In figure 8, the bandwidth of queue q0 undergoes two adjustments: first from 60 Mbps to 10 Mbps, as the healthy queue q0 cedes bandwidth to the moderately congested queue q1; then from 10 Mbps to 88.089 Mbps, as the higher-priority queue q0 preempts the bandwidth of queue q1. After preempting the bandwidth of queue q1, the delay of queue q0 is significantly reduced. This demonstrates that the queue bandwidth feedback mechanism across the different priority queues is very effective when different levels of network congestion occur. Hence, we conclude that the proposed method can effectively prevent malicious flows from occupying bandwidth for long periods.
In this paper, we proposed a feedback mechanism for the queue bandwidth of SDN networks. We aimed to solve a crucial problem that arises when deploying such a feedback mechanism: automatically adjusting the queue bandwidth according to the packet priority and network conditions.
In addition, by making full use of SDN's centralised control with a global perspective, we monitored all of the queues' bandwidths and adjusted them according to the network congestion situation to optimise the process of data forwarding. The results demonstrate that PPMF can accommodate changing video traffic and provide a better QoS guarantee than other existing scheduling approaches. The results also show that the algorithm can prevent malicious flows from occupying bandwidth for long periods.
Fig. 5. Measuring the average time delay using LiQoS algorithm, PPMF algorithm, and HiQoS algorithm.
Fig. 6. Data sensitive test based on different priority queues.
For future work, we hope to improve the performance of our proposed algorithm in large-scale networks.
Fig. 7. Bandwidth adjustment test based on bandwidth feedback mechanism of different priority queue.
Fig. 8. Bandwidth adjustment test based on bandwidth feedback mechanism of different priority queue.
ACKNOWLEDGMENT
This work was supported by the National Key Basic Research Program of China (973 Program) under grant no. 2012CB315802, the National Natural Science Foundation of China under grants no. 61671081 and no. 61132001, and the Prospective Research on Future Networks project of the Jiangsu Future Networks Innovation Institute under grant no. BY2013095-4-01. The authors would like to express their sincere gratitude for the anonymous reviewers' helpful comments.