

Modeling and game strategy analysis of suppressing IADS for multiple fighters' cooperation


        LI Qiuni,YANG Rennong,LI Haoliang,ZHANG Huan,and FENG Chao

        Aeronautics and Astronautics Engineering College,Air Force Engineering University,Xi’an 710038,China

1. Introduction

Modern warfare is full of uncertainty, and the battlefield situation changes rapidly. For a single fighter in a campaign, it is hard both to improve its own safety and to weaken the fighting capability of the enemy as much as possible, since it always faces the problems of limited weapon and information resources. In contrast, a campaign by multiple fighters offers many advantages, such as information sharing, resource optimization, coordinated actions and complementary capabilities, which are beneficial for improving operational efficiency.

In the research on coordination missions for multiple fighters, theoretical breakthroughs and great progress in the engineering field have been obtained. The adaptive control system of multi-agents was proposed by Honeywell in [1], which supports the real-time control and coordination of unmanned combat air vehicles (UCAVs). A hierarchical distributed architecture for coordinated control was proposed by the U.S. Air Force Research Laboratory (AFRL) and the Air Force Institute of Technology in [2] and [3]. The problem of real-time coordination for heterogeneous platforms with many kinds of unmanned air vehicles (UAVs) was studied in [4]. The problem of target tracking and identification for multiple fighters was researched by Beard et al. [5] using the consistency dynamics method. In recent years, the study of coordination missions has become a hot topic, and both traditional methods and intelligent optimization methods are mostly used in the theoretical research of this area. A path planning system structure with a collaborative management layer, a path planning layer and a trajectory control layer was presented in [6], based on the strategy of hierarchical decomposition. Collaborative functions were introduced into real-time path planning for combat in [7] by using the Voronoi graph method. The problem of cooperative reconnaissance was researched in [8] by employing evolutionary algorithms. In addition, the solution to the multiple traveling salesman problem with time windows and the simulated annealing method have also been used in researching cooperative reconnaissance for UAVs [9,10].

At present, research on suppression of enemy air defenses (SEAD) missions for multiple fighters [11–17] mostly employs network flow optimization and intelligent optimization methods such as particle swarm optimization. The problem of operational control for UAVs is researched based on the theories of multi-agents and complex systems in [14]. The UAV's offensive and defensive problems under the condition of interval-number information were considered in [17]. SEAD missions are typical dynamic game problems. Thus far, there has been no study under the condition that the number of nodes changes during the battle evolution process. Therefore, by modeling the combat resources as multi-agent network nodes [18], this paper investigates a complicated operational process that integrates different kinds of combat resources and combines detecting, jamming and attacking, and overcomes the problem of the dynamically changing numbers and positions of the nodes in the operational process. A profit model is developed for both the offensive and defensive sides under the confrontation game, and a distributed virtual learning game strategy is proposed for solving the mixed strategy Nash equilibrium (MSNE) of the n-person, n-strategy system countermeasures by using this model.

2. Modeling process of the multi-agent game

        The integrated air defense system(IADS)is an integrated operational process based on the network,which consists of three communication subnets, namely, the early warning subnet, the command and control subnet, and the intercepting operation subnet. Therefore, it has the characteristics of interconnection and interoperability.

Aiming at solving the complicated problem of suppressing the IADS, we investigate this issue by developing a multi-agent countermeasure network system model.

In a game, there are three essential factors: participators, actions (or strategies) and payoffs. Based on these three factors, the game is established as follows.

        2.1 Participators and strategies of the game

We consider two multi-agent countermeasure network systems which participate in the dynamic game and consist of n nodes and m nodes, respectively.

As shown in Fig. 1, these two systems are denoted as N and M, which are the attacking-side fighters and the defending-side IADS, respectively. The countermeasure purpose of this game is summarized as follows: the attacking individuals will always try to suppress the defending side, for example by weakening its counteraction scope or damage capacity and striking the defending individuals so that they lose their confrontation capacity thoroughly, while ensuring their own safety at the same time. In order to protect their own safety and minimize their losses, the defending individuals will try to counter-attack, jam or detect the attacking side.

Fig. 1 Two multi-agent countermeasure network systems

In this game, the two sides have n and m agent participators, respectively. Denote the agent entity nodes X = {X1, X2, ..., Xn} as the attacking side, and R = {R1, R2, ..., Rr}, S = {S1, S2, ..., Ss}, T = {T1, T2, ..., Tz} as the early warning nodes, the command and control nodes, and the intercepting operation nodes of the defending side, respectively, where r, s and z are the numbers of early warning nodes, command and control nodes, and intercepting operation nodes, respectively, and r + s + z = m. Let the attributes of an entity node be: the detecting scope with radius A, the offensive scope with radius B, the damage capacity C, the communication capacity D, the speed V, the value of the entity node P, the detecting probability P_dj, and the attacking probability P_kj. Assume that the information about the nodes of both sides in this game is completely known to the opposition, which means the numbers and properties of all the nodes are known to the opposition.

Let G = {O, E, F} be the strategy action space of the attacking side and G' = {O', E', F'} be the strategy action space of the defending side. E, F and O represent three kinds of strategic actions: jamming, attacking and no action, respectively. Every multi-agent node chooses one strategic action g_i ∈ G or g'_j ∈ G' in each round of the game, so as to increase the threat to the opposition and preserve the value of the actor, as well as maximize the expected payoff of the actor. In addition, the probabilities of the strategies satisfy the constraints $g_{G,O}+g_{G,E}+g_{G,F}=1$ and $g'_{G',O'}+g'_{G',E'}+g'_{G',F'}=1$.
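To make the strategy constraint concrete, the following minimal Python sketch (illustrative only; the names ACTIONS, random_mixed_strategy and sample_action are ours, not from the paper) represents a node's mixed strategy as a probability vector over {O, E, F} that sums to 1:

import numpy as np

# Strategy action space shared by both sides: no action (O),
# jamming (E), attacking (F), following Section 2.1.
ACTIONS = ("O", "E", "F")

def random_mixed_strategy(rng):
    """Draw a valid mixed strategy: a non-negative probability
    vector over (O, E, F) whose entries sum to 1."""
    p = rng.random(len(ACTIONS))
    return p / p.sum()

def sample_action(mixed, rng):
    """Sample one pure strategy g_i from a node's mixed strategy."""
    return rng.choice(ACTIONS, p=mixed)

rng = np.random.default_rng(0)
pi_i = random_mixed_strategy(rng)   # e.g. array([0.41, 0.22, 0.37])
print(sample_action(pi_i, rng))     # one of "O", "E", "F"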

        2.2 Payoffs of the game

Referring to [13], the nodes of the multi-agent countermeasure systems take three performance indicators into account when choosing a strategy: (i) the estimated value of the comprehensive threat to the opposition; (ii) the estimated value of the nodes; (iii) the estimated influence of the mission's cost on the whole system.

(i) Considering the estimated value of the comprehensive threat to the opposition

We estimate the comprehensive threat to the opposition according to four aspects: the detecting scope A_ji, the offensive scope of node j denoted as B_j, the damage capacity C_j, and the communication ability D_j. The value of each node's comprehensive threat is calculated one by one.

Because the units are different, we normalize A_ji, B_j, C_j and D_j to the interval [0, 1]; the normalized values are denoted $\bar A_{ji}$, $\bar B_j$, $\bar C_j$ and $\bar D_j$, respectively.
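The normalization formula itself is not reproduced here; a common choice consistent with values in [0, 1] is min-max scaling, sketched below under that assumption (minmax_normalize is a hypothetical helper):

import numpy as np

def minmax_normalize(x, x_min, x_max):
    """Map an attribute into [0, 1]; a degenerate range is mapped to 1.0
    (an assumption, so that a constant attribute still contributes)."""
    if x_max == x_min:
        return 1.0
    return (x - x_min) / (x_max - x_min)

# e.g. normalizing the damage capacities C_j of the defending nodes
C = np.array([0.3, 0.9, 0.6])
C_bar = (C - C.min()) / (C.max() - C.min())   # all values in [0, 1]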

Suppose that the detecting domain and the offensive domain are sectorial or circular areas with radii A_j and B_j. For the sake of brevity, suppose that each multi-agent is capable of omni-directional and multi-frequency jamming. When the node does not encounter jamming, the detect threat A_ji is as shown in Fig. 2 and is computed from the radius A_j and the detection angle α_j. Otherwise, let d_ij denote the distance between two nodes, calculated from the coordinates of node i and node j. The detect threat decreases as the jamming effect increases. Notice that, according to experience, the jamming effect and d_ij are in inverse proportion; as a result, the detect threat decreases as d_ij is reduced. Therefore, A_ji can be modeled as a function of d_ij, as shown in Fig. 3. If d_ij > A_i + A_j, the node will not be jammed, and the detecting threat achieves its maximum. As the distance d_ij between two nodes decreases, the jamming effect increases, and the detecting threat decreases.

Fig. 2 Detect threat A_ji without encountering jamming

Fig. 3 The relationship curve between A_ji and d_ij

Similarly, if d_ij > B_i or d_ij > B_j, the node will not be attacked. The communication ability is defined over the adjacency matrix: the element d_mj represents the link status between nodes, and w_mj represents the weight of the connection between two neighboring nodes. The larger w_mj is, the more important the communication link is. P_dj and P_kj are the detecting probability and the attacking probability, respectively. In a round of the game, a node may be attacked by one or more nodes, and the total attacking probability is $1-\prod_k(1-P_{kj})$. If this total probability exceeds $C_{threshold}$, the node is regarded as destroyed. If a node has been destroyed, it loses its opposed function: A_j = B_j = C_j = D_j = 0, f_ji = 0. λ1, λ2, λ3, λ4 are the weights of the threat components and satisfy λ1 + λ2 + λ3 + λ4 = 1.
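The accumulation of attacking probabilities and the destruction test can be sketched as follows (C_THRESHOLD = 0.7 anticipates the value used in Section 4; the dictionary layout of a node is our illustrative choice):

import numpy as np

C_THRESHOLD = 0.7   # destruction threshold, cf. Section 4

def total_attack_probability(p_k):
    """Probability of being hit by at least one of several independent
    attackers with per-attacker kill probabilities p_k:
    1 - prod(1 - P_kj)."""
    return 1.0 - np.prod(1.0 - np.asarray(p_k))

def apply_destruction(node, p_k):
    """If the accumulated attack probability exceeds the threshold,
    the node loses its opposed function: A = B = C = D = 0."""
    if total_attack_probability(p_k) > C_THRESHOLD:
        node["A"] = node["B"] = node["C"] = node["D"] = 0.0
    return node

node_j = {"A": 0.8, "B": 0.5, "C": 0.9, "D": 0.6}
print(apply_destruction(node_j, [0.5, 0.6]))  # 1-(0.5*0.4) = 0.8 > 0.7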

If an attacking node is in motion, then the faster it moves, the smaller the threat it suffers. Set V_i ∈ [V_min, V_max]; the direction of V_i is determined by the strategy target. The threat coefficient with respect to V_i is taken as $\lambda_{v_i}=(V_{max}-V_i)/(V_{max}-V_{min})$ for V_max ≠ V_min and λ_vi = 1 for V_max = V_min. Altogether, the total threat encountered by the whole attacking-side system is F_N, the λ_vi-weighted sum of the individual node threats F_i. Since V has little effect on the defending side, the total threat encountered by the whole defending-side system is F_M, the sum of the individual node threats F_j.

(ii) Considering the value of the nodes

Suppose that the economic and strategic value of node i is estimated as P_i, and that of node j as P_j. This type of value is determined by the node's own economic value and strategic position. Then the total value of the whole system is $P_N=\sum_{i\in N}P_i$ or $P_M=\sum_{j\in M}P_j$, where P_i ∈ [0, 1] and P_j ∈ [0, 1] are the economic and strategic values of node i and node j, respectively.

(iii) Considering the influence of the mission's cost on the whole system

Here, we mainly consider the influence of a destroyed node on the mission's cost for the whole system. Equation (2) calculates the probability of damage from the opposition node j as follows [13].

It is worth mentioning that the probability of damage from the opposition node j will be reduced if some opposition nodes are destroyed. The reduction of this probability caused by the destroyed nodes is viewed as the total influence on the mission's cost for the whole system, as shown later.

Suppose that node l is destroyed; then P_dl(X) = P_kl(X) = 0. According to [13], the above probability is then recalculated with these values.

To sum up, because node l is destroyed, let Δ represent the variation value; the total influence on the whole system is regarded as the resulting reduction Δ of the damage probability.
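Since (2)–(4) are given in [13] rather than reproduced here, the sketch below only illustrates the idea, under the assumption that node j damages a target when it both detects and hits it, independently across nodes; damage_probability and influence_of_loss are hypothetical names:

import numpy as np

def damage_probability(P_d, P_k):
    """Assumed stand-in for (2): probability that at least one opposition
    node both detects (P_d) and successfully attacks (P_k) the target."""
    P_d, P_k = np.asarray(P_d, float), np.asarray(P_k, float)
    return 1.0 - np.prod(1.0 - P_d * P_k)

def influence_of_loss(P_d, P_k, l):
    """Total influence of destroying node l: set P_dl = P_kl = 0 and
    take the resulting drop in the damage probability."""
    P_d2, P_k2 = np.array(P_d, float), np.array(P_k, float)
    P_d2[l] = P_k2[l] = 0.0
    return damage_probability(P_d, P_k) - damage_probability(P_d2, P_k2)

print(influence_of_loss([0.9, 0.7], [0.8, 0.6], l=0))  # drop from losing node 0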

(iv) Considering the payoff function

When a player chooses no action or the jamming strategy, we suppose the cost is small enough that it can be ignored in this game.

Define the cost of performing an attacking action as c_j, and denote $\bar c_j$ as the normalized value of c_j, where c_j is the value by which the damage capacity declines in each attack; in practice, c_j reflects the weapons and firepower used in performing the attacking action. k_j is the number of attacks that have occurred. If an attack happens, the declined damage capacity is calculated as $p_j=k_j\bar c_j$.

Thus, the reduction of the total damage capacity is $p_M$, the sum of the per-node reductions.

The mixed strategy vectors are described as follows. Consider that node i ∈ N takes strategy g_i against the opposition with probability $\pi_{i,g_i}$; let $\Pi_i=\{\pi_{i,g_i}\,|\,g_i\in G\}$ be the mixed strategy vector over all possible strategies, with $\sum_{g_i\in G}\pi_{i,g_i}=1$, and denote $\Pi_{-i}=\{\pi_{i'},\,i'\in N\setminus\{i\}\}$. Similarly, node j ∈ M takes strategy g'_j with probability $\varphi_{j,g'_j}$; let $\Phi_j=\{\varphi_{j,g'_j}\,|\,g'_j\in G'\}$ be the mixed strategy vector over all possible strategies, with $\sum_{g'_j\in G'}\varphi_{j,g'_j}=1$, and denote $\Phi_{-j}=\{\varphi_{j'},\,j'\in M\setminus\{j\}\}$.

The scenario analysis and assumptions for this game are summarized as follows (a sketch of the feasibility conditions (i)–(iii) follows the list):

(i) The jamming strategy can be selected only under the condition d_ij ≤ A_i + A_j;

(ii) The attacking strategy of the attacking side can be selected only when the target is detected, i.e., $\exists\, n\in N$ with $d_{nj}<A_n$, and d_ij ≤ B_i, since the target node must be detected by some node in N and lie within the offensive scope of node i;

(iii) The attacking strategy of the defending side can be selected only under the analogous conditions, i.e., $\exists\, m\in M$ with $d_{im}<A_m$, and d_ij ≤ B_j;

(iv) When the strategy action is F, some nodes of the opposition may be destroyed as long as the total attacking probability they suffer is larger than C_threshold. Accordingly, the total influence of the destroyed nodes on the whole system should be considered, and the cost of conducting the attack action and the declined damage capacity should also be considered;

(v) The variation of the comprehensive threat ΔF_j and the variation of the value of the opposition nodes ΔP_j should be considered under any strategy action. When the strategy action is O, both ΔF_j and ΔP_j are 0. When the strategy action is E, ΔP_j is 0;

(vi) If the distance between two nodes is smaller than the offensive scope of some node of the opposition, the node will be attacked by the opposition, and the total attacking probability it suffers is $1-\prod_j(1-P_{kj})$. Thus the node's comprehensive threat and value will be reduced, and the reduction is $(1-\prod_j(1-P_{kj}))(P_i+F_i)$.
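As referenced above, the following sketch encodes conditions (i)–(iii) as a feasible-action filter (the node dictionaries and the detected_by_friendly flag are our illustrative representation):

import math

def dist(a, b):
    """Euclidean distance between two node coordinates (x, y)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feasible_actions(i, j, detected_by_friendly):
    """Subset of {O, E, F} that node i may select against node j.
    detected_by_friendly flags whether some friendly node n satisfies
    d_nj < A_n, i.e. the target is detected by the node's own side."""
    actions = ["O"]                            # no action is always available
    d = dist(i["pos"], j["pos"])
    if d <= i["A"] + j["A"]:                   # condition (i): jamming
        actions.append("E")
    if detected_by_friendly and d <= i["B"]:   # conditions (ii)/(iii): attacking
        actions.append("F")
    return actions

node_i = {"pos": (0.0, 0.0), "A": 150.0, "B": 80.0}
node_j = {"pos": (60.0, 40.0), "A": 120.0}
print(feasible_actions(node_i, node_j, detected_by_friendly=True))  # ['O', 'E', 'F']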

Define $V_i(g_i,\pi_{-i},\Phi)$ as the payoff of node i in N. Let Δ represent the variation value; thus ΔF_j denotes the variation of the estimated comprehensive threat for node j, and so on. Then, based on the above scenario analysis and assumptions, there are four cases for the payoff of each node of N. Take node i of N as an example.

Case 1 When the strategy action of node i is F and the distance between nodes i and j is larger than the offensive scope of any node of M, the payoff of node i is $V_i(g_i,\pi_{-i},\Phi)=q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})+q_2\Delta P_j-q_3 p_i$, which consists of three parts: the variation of the estimated comprehensive threat $q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})$, the variation of the node value $q_2\Delta P_j$, and the declined damage capacity $q_3 p_i$, with q1, q2, q3 being weight coefficients.

Case 2 When the strategy action of node i is not F and the distance between nodes i and j is larger than the offensive scope of any node of M, the payoff of node i is $V_i(g_i,\pi_{-i},\Phi)=q_1\Delta F_j+q_2\Delta P_j$, which consists of two parts: the variation of the estimated comprehensive threat $q_1\Delta F_j$ and the variation of the node value $q_2\Delta P_j$.

Case 3 When the strategy action of node i is F and the distance between nodes i and j is smaller than the offensive scope of node j in M, the payoff of node i is

$V_i(g_i,\pi_{-i},\Phi)=q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})+q_2\Delta P_j-q_3 p_i-q_4(1-\prod_{j\in M}(1-P_{kj}))(P_i+F_i)$,

which consists of the following parts: the variation of the estimated comprehensive threat $q_1(\Delta F_j+P_{aj}F_{M\setminus\{j\}})$, the variation of the node value $q_2\Delta P_j$, the declined damage capacity $q_3 p_i$, and the reduction of the comprehensive threat and value of node i given the total attacking probability it suffers, $q_4(1-\prod_{j\in M}(1-P_{kj}))(P_i+F_i)$.

Case 4 When the strategy action of node i is not F and the distance between nodes i and j is smaller than the offensive scope of node j in M, the payoff of node i is

$V_i(g_i,\pi_{-i},\Phi)=q_1\Delta F_j+q_2\Delta P_j-q_4(1-\prod_{j\in M}(1-P_{kj}))(P_i+F_i)$,

which consists of three parts: the variation of the estimated comprehensive threat $q_1\Delta F_j$, the variation of the node value $q_2\Delta P_j$, and the reduction of the comprehensive threat and value of node i given the total attacking probability it suffers, $q_4(1-\prod_{j\in M}(1-P_{kj}))(P_i+F_i)$.

In conclusion, we construct the payoff function of node i in N as (6), and each node i tries to optimize this payoff function.
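Since (6) itself is not reproduced in the text, the following sketch assembles the four cases exactly as described above; the Case 3 and Case 4 penalty term follows the textual description, and payoff_attacker with its argument list is a hypothetical signature:

import numpy as np

def payoff_attacker(action, in_enemy_range, dF, dP, p_i, P_kj, P_i, F_i,
                    P_aj=0.0, F_rest=0.0, q=(0.35, 0.4, 0.1, 0.15)):
    """Hedged sketch of the four payoff cases for node i in N.
    dF, dP: variations of threat and node value; p_i: declined damage
    capacity; P_kj: kill probabilities of the enemy nodes covering i."""
    q1, q2, q3, q4 = q
    loss = q4 * (1.0 - np.prod(1.0 - np.asarray(P_kj))) * (P_i + F_i)
    if action == "F" and not in_enemy_range:       # Case 1
        return q1 * (dF + P_aj * F_rest) + q2 * dP - q3 * p_i
    if action != "F" and not in_enemy_range:       # Case 2
        return q1 * dF + q2 * dP
    if action == "F":                              # Case 3
        return q1 * (dF + P_aj * F_rest) + q2 * dP - q3 * p_i - loss
    return q1 * dF + q2 * dP - loss                # Case 4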

Define $U_j(g'_j,\varphi_{-j},\Pi)$ as the payoff of node j in M. There are likewise four cases for the payoff of each node of M. Similar to the analysis of the payoff of node i in N, we construct the payoff function of node j in M as (7), and each node j tries to optimize this utility function,

where q1, q2, q3, q4 (q1 + q2 + q3 + q4 = 1) are the weight coefficients.

3. Analyzing the evolution of the game

As is known from game theory, this game model is a finite non-zero-sum mixed game. According to the MSNE existence theorem, an MSNE exists for every finite game [19]. In order to reach the MSNE, a distributed virtual policy learning algorithm is proposed. At the MSNE of the game, the attacking side and the defending side each choose their own strategies with certain probabilities.

3.1 MSNE

In this game, the strategy is selected from the action strategy space, and it is a selection from multiple choices rather than the usual selection from two choices. Thus, in this paper, we give an extended definition of the MSNE as follows.

Definition (MSNE for the n-person, n-strategy game): Suppose the mixed game has N + M players and {G, G'} is the strategy space. Then the strategy profile $\{\Pi^*,\Phi^*\}$ is an MSNE if and only if, for every node i ∈ N and every node j ∈ M, $V_i(\pi_i^*,\pi_{-i}^*,\Phi^*)\ge V_i(\pi_i,\pi_{-i}^*,\Phi^*)$ for all mixed strategies $\pi_i$, and $U_j(\varphi_j^*,\varphi_{-j}^*,\Pi^*)\ge U_j(\varphi_j,\varphi_{-j}^*,\Pi^*)$ for all mixed strategies $\varphi_j$.

When a game reaches the MSNE, no player can obtain extra benefit by unilaterally changing strategy. In other words, no one has any motivation to change their strategy actions.

Consider node i ∈ N; its expected payoff is calculated by (8). Analogously, for node j ∈ M, the expected payoff is calculated by (9), where $E_{\pi,\varphi}$ denotes the expectation over the probability distribution {Π, Φ}.
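Equations (8) and (9) take expectations over all nodes' mixed strategies; for illustration, the sketch below collapses the opposition to a single aggregated opponent, so the expectation reduces to a bilinear form (the payoff table values are made up):

import numpy as np

def expected_payoff(payoff_table, own_mixed, opp_mixed):
    """E[V_i] for a node facing one aggregated opponent: payoff_table
    is a 3x3 array indexed by (own action, opponent action) over
    (O, E, F); own_mixed and opp_mixed are probability vectors."""
    return float(own_mixed @ payoff_table @ opp_mixed)

V = np.array([[ 0.0, -0.1, -0.4],
              [ 0.2,  0.1, -0.2],
              [ 0.5,  0.3, -0.1]])     # illustrative values only
pi_i  = np.array([0.33, 0.33, 0.34])
phi_j = np.array([0.5, 0.0, 0.5])
print(expected_payoff(V, pi_i, phi_j))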

3.2 Distributed virtual policy learning algorithm

To optimize the expected payoffs, at each learning time t the attacking node i and the defending node j choose a pure strategy $g_i^t\in G$ or $g_j'^t\in G'$, respectively, as the optimal response to the other players' mixed strategies. Let $\Pi_{-i}^{t-1}$ denote the mixed strategies of nodes i' ∈ N\{i} at time t−1, and $\Phi^{t-1}$ denote the mixed strategies of nodes j ∈ M at time t−1. Similarly, let $\Phi_{-j}^{t-1}$ and $\Pi^{t-1}$ denote the mixed strategies of nodes j' ∈ M\{j} and of nodes i ∈ N at time t−1, respectively. As a consequence, the best responses $g_i^t$ and $g_j'^t$ can be expressed as (10) and (11).

In addition, a three-dimensional vector is maintained for each node of N or M to store the decision-making vector at a given time. When the multi-agent node chooses jamming, attacking or no action, the decision-making vector is [0,1,0], [0,0,1] or [1,0,0], respectively.

According to [20], the strategies are updated by (12) and (13).

Each step's update is thus a linear combination of the last mixed strategy and the cumulative (empirical) mixed strategy. The learning process keeps iterating until the change falls below the convergence precision. Then the MSNE is found, and both sides take actions with stable probabilities.
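Equations (12) and (13) are not reproduced here; the sketch below uses the standard fictitious-play form that matches the description above, mixing the previous mixed strategy with the one-hot decision vector of the current best response (the stand-in payoffs are random placeholders):

import numpy as np

ACTIONS = ("O", "E", "F")

def fictitious_play_update(mixed, pure_idx, t):
    """Linear combination of the last mixed strategy and the one-hot
    decision vector ([1,0,0], [0,1,0] or [0,0,1]) of the chosen action."""
    e = np.zeros(len(ACTIONS))
    e[pure_idx] = 1.0
    return (1.0 - 1.0 / t) * mixed + (1.0 / t) * e

rng = np.random.default_rng(0)
mixed = np.array([0.33, 0.33, 0.34])    # initial strategy vector, cf. Section 4
for t in range(2, 100_000):
    g = int(np.argmax(rng.random(3)))   # stand-in for the best response (10)/(11)
    new = fictitious_play_update(mixed, g, t)
    if np.max(np.abs(new - mixed)) < 1e-4:  # convergence precision, cf. Section 4
        break
    mixed = new
print(t, mixed)   # the empirical frequencies stabilize as the step size 1/t shrinks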

The distributed virtual policy learning algorithm for the n-person, n-strategy game is described in Fig. 4.

Fig. 4 The flow diagram of the algorithm

4. Experiment and results

In this simulation, the nodes N and M of the two multi-agent countermeasure systems are distributed randomly in a space of 1 000 km × 1 000 km. Assume that some nodes are marked as important command and control nodes and are protected by other nodes. The initial strategy probability vectors are set to [0.33, 0.33, 0.34] and [0.5, 0, 0.5]; the weight coefficients are q1 = 0.35, q2 = 0.4, q3 = 0.1, q4 = 0.15 for nodes N and q1 = 0.4, q2 = 0.35, q3 = 0.1, q4 = 0.15 for nodes M; the convergence precision is set to 0.000 1; the threshold value C_threshold is set to 0.7. α_j is randomly set to 45°, 60°, 90°, 180° or 360°. d_mj is set to the all-ones matrix, i.e., the initial communication network is fully connected. The connection weights w_mj are generated randomly between 0 and 1.
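For reproducibility, these Section 4 settings can be collected into one configuration sketch (the dictionary layout and key names are ours):

simulation_params = dict(
    arena_km=(1000, 1000),                     # random node placement area
    init_mixed_N=[0.33, 0.33, 0.34],           # initial strategy probabilities
    init_mixed_M=[0.50, 0.00, 0.50],
    q_N=(0.35, 0.40, 0.10, 0.15),              # weight coefficients q1..q4, side N
    q_M=(0.40, 0.35, 0.10, 0.15),              # weight coefficients q1..q4, side M
    eps=1e-4,                                  # convergence precision
    C_threshold=0.7,                           # destruction threshold
    alpha_deg_choices=(45, 60, 90, 180, 360),  # detection angles for alpha_j
)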

To demonstrate the effectiveness of the proposed method and algorithm, two groups of experimental data are considered as follows.

The first group of experimental data for the attack-defense game consists of three suppressing-fighter nodes and ten IADS nodes, namely, two command and control nodes, three intercepting operation nodes and five early warning nodes.

The second group of experimental data for the attack-defense game consists of five suppressing-fighter nodes and fifteen IADS nodes, namely, three command and control nodes, five intercepting operation nodes and seven early warning nodes.

The other parameters used in the simulation are generated randomly from the ranges shown in Table 1. With the same parameters, we compare the distributed virtual policy learning algorithm designed in this paper with the traditional adjacent algorithm, and analyze the evolution process of the offensive and defensive game (the experimental data are sampled every ten iterations to draw the results).

        Table 1 Stochastic assignment of related parameters

Fig. 5 and Fig. 6 show that, in this complex interaction of the attack-defense countermeasure game, the nodes adaptively adjust their strategies and optimize the payoff function by using the distributed virtual policy learning algorithm. Many repeated experiments show that the presented algorithm has a clear advantage for the problem of suppressing the IADS with multiple fighters as the number of nodes and the number of iterations increase, even though it does not show an obvious improvement over the traditional adjacent algorithm when the numbers of nodes and iterations are small. In Fig. 5(c), there is a jump in the simulated average expected payoffs because a suppressing fighter successfully destroys an important command and control node of the IADS between t = 18 and t = 19. Moreover, the numbers of iterations for the two game algorithms to reach the MSNE are not necessarily the same. Table 2 shows the average of the nodes' average expected payoffs and of the nodes' total expected payoffs at the MSNE point over 30 experiments; in each experiment, the parameter values are produced randomly for both groups as shown in Table 1. Taking the attacking side as an example, it can be seen from Fig. 5 and Table 2 that the presented algorithm outperforms the traditional adjacent algorithm in terms of the nodes' average expected payoffs and total expected payoffs for both 13 and 20 total nodes, since the target node is always selected according to the maximum expected payoff. For 13 and 20 total nodes, the average of the nodes' average expected payoffs increases by 21.94% and 35.84%, respectively, and the average of the nodes' total expected payoffs increases by 20.68% and 27.13%, respectively. The simulation results indicate that the proposed distributed virtual policy learning algorithm can significantly improve battle effectiveness when suppressing an IADS with multiple fighters.

Table 2 Average of the nodes' average expected payoffs and of the nodes' total expected payoffs at the MSNE point over 30 experiments

Fig. 5 Mean expected payoff and total expected payoff of the nodes of the suppressing side

Fig. 6 Track of the third node of the suppressing side when n = 3, m = 10

Furthermore, employing the presented algorithm, the fighter fleet has adaptive and self-optimizing ability in the dynamic battlefield: it can automatically carry out target assignment, route planning and strategy selection, which exploits the collaborative function in executing the task.

5. Conclusions

By modeling the combat resources as multi-agent network nodes, a complicated operational process integrating different kinds of combat resources and combining detecting, jamming and attacking is researched. The dimension of the payoff matrix of every node is n × 3 or m × 3 at a certain time slot t, the dimension of the payoff matrix space of all nodes is n × m × 3, and the dimension of the payoff matrix space of all nodes over all time slots is 3t × n × m. Employing the distributed virtual policy learning algorithm to simulate the evolution of this game, an appropriate strategy is successfully chosen from this large payoff matrix space for playing the game. The experimental results show that the designed distributed virtual policy learning algorithm solves the problem of suppressing the IADS by multiple fighters' cooperation very well, and the fighter fleet can plan missions dynamically according to the battlefield situation. For actual combat, the designed algorithm is more effective than the traditional adjacent algorithm, and hence it can considerably decrease the damage to combat fighters in offensive operations. To some extent, designing the payoff functions appropriately from reconnaissance information, together with the stability of the equilibrium solution, makes it possible to predict the strategies of the enemy and obtain the optimal combination strategy.

[1] Honeywell Technology Center. Multi-agent self-adaptive CIRCA. [2016-10-10]. http://www.htc.honeywell.com/projects/ants/6-00-quadcharts.ppt.

[2] CHANDLER P R, PACHTER M. Research issues in autonomous control of tactical UAVs. Proc. of the American Control Conference, 1998: 394–398.

[3] JOHNSON C L. Inverting the control ratio: human control of large autonomous teams. Proc. of the International Conference on Autonomous Agents and Multi-Agent Systems, 2003: 458–465.

[4] COMETS project official web page. [2016-10-10]. http://www.comets-uavs.org.

[5] BEARD R W, MCLAIN T W, NELSON D B, et al. Decentralized cooperative aerial surveillance using fixed-wing miniature UAVs. Proceedings of the IEEE, 2006, 94(7): 1306–1324.

[6] WANG G, GUO L, DUAN H. A hybrid metaheuristic DE/CS algorithm for UCAV three-dimension path planning. The Scientific World Journal, 2012: 583973.

[7] ZHANG L, SUN Z J, WANG D B. An improved Voronoi diagram for suppression of enemy air defense. Journal of National University of Defense Technology, 2010, 32(3): 121–125.

[8] RASMUSSEN S, CHANDLER P R. Optimal vs. heuristic assignment of cooperative autonomous unmanned air vehicles. Proc. of the AIAA Guidance, Navigation, and Control Conference, 2003: 5586–5597.

[9] CHEN J, ZHA W Z, PENG Z H, et al. Cooperative area reconnaissance for multi-UAV in dynamic environment. Proc. of the 9th IEEE Asian Control Conference, 2013: 1299–1304.

[10] WU Q P, ZHOU S L, YAN S. A cooperative region surveillance strategy for multiple UAVs. Proc. of the IEEE Chinese Guidance, Navigation and Control Conference, 2014: 1744–1748.

[11] HAQUE M, EGERSTEDT M. Multilevel coalition formation strategy for suppression of enemy air defenses missions. Journal of Aerospace Information Systems, 2013, 10(6): 287–296.

[12] POLAT C, IBRAHIM K, ABDULLAH S, et al. The small and silent force multiplier: a swarm UAV—electronic attack. Journal of Intelligent & Robotic Systems, 2013, 70(12): 595–608.

[13] SU F. Research on distributed online cooperative mission planning for multiple unmanned combat aerial vehicles in dynamic environment. Changsha, China: National University of Defense Technology, 2013. (in Chinese)

[14] DAS S K. Modeling intelligent decision-making command and control agents: an application to air defense. IEEE Intelligent Systems, 2014, 29(5).

[15] ERNEST N, COHEN K. Fuzzy logic based intelligent agents for unmanned combat aerial vehicle control. Journal of Defense Management, 2015, 6(1): 1–3.

[16] YOO D W, LEE C H, TAHK M J, et al. Optimal resource management algorithm for unmanned aerial vehicle missions in hostile territories. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 2013, 228(12): 2157–2167.

[17] CHEN X, LIU M, HU Y X. Study on UAV offensive/defensive game strategy based on uncertain information. Acta Armamentarii, 2012, 33(12): 1510–1515.

[18] JIN Y, LIU J Y, LI H W, et al. The research on the autonomous power balance framework for distribution network based on multi-agent modeling. Proc. of the International Conference on Power System Technology, 2014: 20–22.

[19] MYERSON R B. Game theory: analysis of conflict. Cambridge, MA: Harvard University Press, 1991.

[20] SUN Y Q. The research on key technologies of jamming attacks in wireless sensor networks. Changsha, China: National University of Defense Technology, 2012. (in Chinese)
