

Data-Based Feedback Relearning Algorithm for Robust Control of SGCMG Gimbal Servo System with Multi-source Disturbance


1. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, P. R. China; 2. Beijing Institute of Control Engineering, Beijing 100190, P. R. China

Abstract: Single gimbal control moment gyroscope (SGCMG) with high precision and fast response is an important attitude control system for high-precision docking, rapid maneuvering, navigation and guidance in the aerospace field. In this paper, considering the influence of multi-source disturbance, a data-based feedback relearning (FR) algorithm is designed for the robust control of the SGCMG gimbal servo system. Based on adaptive dynamic programming and the least-square principle, the FR algorithm obtains the servo control strategy by collecting the online operation data of the SGCMG system. This is a model-free learning strategy in which no prior knowledge of the SGCMG model is required. Then, combining the reinforcement learning mechanism, the servo control strategy interacts with the system dynamics of SGCMG, realizing adaptive evaluation and improvement of the strategy against the multi-source disturbance. Meanwhile, a data redistribution method based on experience replay is designed to reduce data correlation, improving algorithm stability and data utilization efficiency. Finally, by comparison with other methods on a simulation model of SGCMG, the effectiveness of the proposed servo control strategy is verified.

Key words: control moment gyroscope; feedback relearning algorithm; servo control; reinforcement learning; multi-source disturbance; adaptive dynamic programming

        0 Introduction

In the field of aerospace, the control moment gyroscope (CMG) gimbal servo system is often used as an actuator for attitude control of aerospace equipment. Fig.1 shows the typical structure of a single gimbal CMG (SGCMG) system. SGCMG has one rotor system which supports a constant angular momentum, one gimbal system that changes the angular momentum, and one structural base[1]. SGCMG changes the speed and rotation angle of the rotor system by controlling the permanent magnet synchronous motor (PMSM) in the gimbal system, and then the rotor system is used as the actuator to output the appropriate torque. Compared with the traditional direct control of a motor drive system, SGCMG can stably provide a larger torque, an ability given by the law of conservation of angular momentum.

Under normal working conditions, the output torque of the SGCMG system is proportional to the angular velocity of the rotor system. However, in the complex space environment, the angular velocity of the rotor system will be disturbed by various disturbances, which affects the quality of the output torque. There are multiple sources of disturbance in the SGCMG system, including the coupled gyro torque, the unbalance torque of the high-speed rotor, friction torque, the precision error of the grating sensor, the fluctuation of the torque coefficient of the driving motor, and the calculation accuracy of the system circuit design[2-4]. It is worth noting that these disturbances include multi-source high-frequency, low-frequency and slope torque disturbances. Therefore, to improve the robustness and anti-interference ability of the SGCMG system, some works have been conducted based on fuzzy control, sliding mode control, disturbance observer compensation, repetitive control, etc.[5-9]

Many of the existing control strategies are designed based on determined system models to deal with high-frequency or low-frequency torque disturbances. From this perspective, when the SGCMG system is faced with model uncertainty, such as torque coefficient fluctuations, the reliance on an accurate model will hinder the effectiveness of the model-based strategy and thus fail to achieve the expected control effect.

The off-policy algorithm is a kind of reinforcement learning (RL) algorithm structure that extracts model information based on system operation data, and finally obtains the control strategy without using the system model[10-14]. Based on the adaptive dynamic programming (ADP) method, off-policy algorithms were developed for the robust control of some linear and nonlinear systems, and the requirement for prior knowledge of the system dynamics has been relaxed[10-11]. In Ref.[12], the off-policy algorithm was extended to the H∞ control problem, where the idea of the integral RL (IRL) method was applied. Considering input constraints, a two-player game problem was studied based on the off-policy algorithm in Ref.[13]. Therefore, developed from the off-policy algorithm, this paper designs a feedback relearning (FR) algorithm to obtain the servo control strategy without relying on the SGCMG system model.

Considering the variability and complexity of multi-source disturbance in the SGCMG system, the designed servo control strategy should have certain adaptability. In this regard, the on-policy algorithm can solve this online learning problem to improve the algorithm adaptability[15-22]. In the on-policy algorithm, the obtained control strategy is rewarded or punished by a designed incentive mechanism, and then the new strategy is used to interact with the system. The control strategy is continuously strengthened to optimize the objective function, thus realizing online update and adaptive control. Therefore, the designed FR algorithm combines the idea of the on-policy algorithm to realize the online update of the servo control strategy.

In the off-policy algorithm based on the least-square principle, the collected data episodes need to satisfy certain rank conditions to ensure the validity of the matrix inverse operation. However, the correlation between adjacent data episodes is a serious problem, especially in continuous-time robust control. Experience replay technology can be used to achieve faster learning by reusing the collected data[23]. The application of experience replay not only reduces the data correlation of the current data set, but also improves data utilization efficiency[24-25]. Meanwhile, when applying experience replay to actor-critic RL algorithms, the convergence properties can also be guaranteed[26]. Therefore, borrowing the idea of experience replay, a data redistribution method is designed to reduce data correlation and thereby improve algorithm stability and data utilization efficiency.

In this paper, due to the complex mechanical structure of SGCMG, the influence of gimbal installation and the flexible support, it is difficult to obtain an accurate mathematical model in practice. The speed control problem of SGCMG is a complex servo control problem, which is also a motivation for the development of the data-based RL algorithm in this paper. The data-based RL algorithm relies on the collected servo data, and the control strategy of SGCMG can be realized through iterative learning. For the convenience of problem formulation and the description of multi-source disturbance, the PMSM model is given in section 1. In practical servo control, the controlled system is the overall SGCMG system, which is a complex nonlinear system.

The main contributions are as follows: First, inspired by on-policy and off-policy algorithms, a data-based FR algorithm is designed for the robust control problem of nonlinear systems, which has adaptability to uncertain problems and high data efficiency. Second, based on the FR algorithm, the servo control strategy of the SGCMG system can be obtained by collecting servo data episodes of the gimbal system; no prior knowledge of the SGCMG model is required. Third, a data redistribution method based on experience replay is designed to reduce data correlation and further improve algorithm stability and data efficiency. Considering the multi-source disturbance, comparison experiments with PID and SMC are given to verify the effectiveness of the proposed strategy.

The organization of this paper is as follows: Section 1 investigates the background of the SGCMG gimbal servo system with multi-source disturbance. In section 2, the prior knowledge and mathematical principle of the FR algorithm are described in detail. Section 3 introduces the structure of the FR algorithm, its application to the SGCMG system, and the data redistribution method based on experience replay. In section 4, the comparative simulation with other methods is analyzed. Finally, section 5 concludes this paper.

        1 Problem Formulation

SGCMG consists of one rotor system, one gimbal system and one structural base. The gimbal system is used to change the angular momentum of the rotor system to output the torque. However, it is difficult to accurately express the model of the SGCMG system by mathematical principles. In the existing work, we usually analyze the gimbal system, which is the driving control system, to reduce the difficulty of controller design. SGCMG is driven by controlling the PMSM, which is studied on the d-q axes. Further, the state space model of the PMSM is defined as[27]
$$
\begin{aligned}
\dot{I}_d &= \frac{1}{L_d}\left(u_d - R I_d + p w L_q I_q\right)\\
\dot{I}_q &= \frac{1}{L_q}\left(u_q - R I_q - p w L_d I_d - p w \varphi_f\right)\\
\dot{w} &= \frac{1}{J}\left(\frac{3}{2} p \left[\varphi_f I_q + (L_d - L_q) I_d I_q\right] - f w - T_l\right)
\end{aligned}
\tag{1}
$$

where the physical meanings of the model parameters are as follows: $I_d$ and $I_q$ are the stator currents on the d-q axes; $u_d$ and $u_q$ the d-q axis voltages; $L_d$ and $L_q$ the stator inductances on the d-q axes; $w$ the gimbal rotation speed; $R$ the stator resistance; $p$ the number of pole pairs; $\varphi_f$ the flux linkage; $f$ the viscous friction coefficient; $J$ the moment of inertia; and $T_l$ the multi-source torque disturbance. As investigated in Ref.[4], different kinds of torque disturbances are included in $T_l$, including high-frequency, low-frequency and slope torque disturbances.
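To make the model concrete, the following minimal Python sketch integrates a standard d-q PMSM model of the form recovered above. All parameter values are assumptions for illustration only, not the SGCMG values of Table 1.

```python
# Illustrative integration of the d-q PMSM model of Eq.(1) with assumed
# parameters (placeholders, not the Table 1 values).
import numpy as np
from scipy.integrate import solve_ivp

R, Ld, Lq, p, phi_f, J, f = 2.5, 4e-3, 4e-3, 4, 0.08, 1e-3, 1e-4

def pmsm(t, s, ud, uq, Tl):
    Id, Iq, w = s
    dId = (ud - R * Id + p * w * Lq * Iq) / Ld
    dIq = (uq - R * Iq - p * w * Ld * Id - p * w * phi_f) / Lq
    dw = (1.5 * p * (phi_f * Iq + (Ld - Lq) * Id * Iq) - f * w - Tl) / J
    return [dId, dIq, dw]

# constant q-axis voltage, no disturbance; the speed settles to a steady value
sol = solve_ivp(pmsm, (0.0, 0.5), [0.0, 0.0, 0.0],
                args=(0.0, 5.0, 0.0), max_step=1e-4)
print("final gimbal speed w =", sol.y[2, -1])
```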

The multi-source disturbance can be mathematically expressed as
$$
T_l = T_G + T_g + T_f + T_d + T_m \tag{2}
$$

where $T_G$ represents the gyro torque caused by the gyroscopic effect; $T_g$ the disturbance torque caused by static unbalance, which disappears when SGCMG works in aerospace; $T_f$ the low-frequency torque disturbance caused by the nonlinear friction of gimbal transmission parts such as the bearing and the conducting ring; $T_d$ the high-frequency torque disturbance related to the rotor unbalance vibration; $T_m$ the high-frequency torque disturbance related to the motor torque fluctuation; $w_s$ the satellite speed; $w_h$ the rotor speed; and $\theta$ the gimbal angle position. The detailed analysis of the multi-source disturbances can be found in Ref.[4] and is not repeated here. It should be noted that this paper mainly focuses on the multi-source disturbances that act on the servo torque in the SGCMG system.
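As an illustration of Eq.(2), the sketch below synthesizes a multi-source disturbance signal as the sum of low-frequency, high-frequency and slope components. All amplitudes and frequencies are assumed placeholders for demonstration, not the values analyzed in Ref.[4].

```python
# Illustrative synthesis of a multi-source disturbance in the spirit of Eq.(2).
import numpy as np

t = np.arange(0.0, 2.0, 0.005)              # 5 ms sampling, as in section 4
T_G = 0.02 * np.cos(0.5 * t)                # slowly varying gyro torque
T_f = 0.01 * np.sign(np.sin(np.pi * t))     # low-frequency friction-like torque
T_d = 0.005 * np.sin(2 * np.pi * 50 * t)    # rotor unbalance (high-frequency)
T_m = 0.003 * np.sin(2 * np.pi * 120 * t)   # motor torque ripple (high-frequency)
T_slope = 0.01 * t                          # slope (ramp) disturbance
T_l = T_G + T_f + T_d + T_m + T_slope       # total disturbance on the servo torque
```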

Remark 1  From the above description, we know that multi-source disturbances exist in the SGCMG gimbal servo system, and their influence on high-precision servo control cannot be ignored[2-3]. However, accurately describing the impact of these disturbances on the system model is still a technical bottleneck in this field, which hinders the construction of a complete mathematical model of the SGCMG gimbal servo system. Due to differences in installation or mechanical parts, even two devices of the same type will have different model parameters. In some scenarios, these negative effects may lead to performance degradation of model-based strategies.

Therefore, considering the difficulty of SGCMG system modeling and the influence of multi-source disturbance, a data-based FR algorithm is designed to circumvent the difficulty of accurate modeling of SGCMG. In section 2, the prior knowledge and mathematical principle of the FR algorithm are described in detail.

2 Data-Based Feedback Relearning Algorithm

2.1 Prior knowledge: On-policy and off-policy algorithms

In RL methods, on-policy and off-policy algorithms are two common algorithm structures. The core of both algorithms includes policy evaluation and policy improvement. The control strategy is evaluated based on the target indicators, and then the current strategy is improved to optimize the target function. Through continuous interaction and update between the control strategy and the system dynamics, the interactive improvement of the overall strategy will eventually be achieved. Both on-policy and off-policy algorithm structures can complete the evaluation and improvement of strategies based on the collected system data[11].

The off-policy algorithm has better data utilization efficiency and convergence ability. At first, the off-policy algorithm collects system operation data of finite dimensions and processes them into data episodes. Then, the evaluation and improvement of the control strategy can be completed through iterative learning. The collected data episodes are iterated under off-line conditions, so the off-policy algorithm does not bring much burden to the storage system. At the same time, the off-line iteration over finite data episodes based on the least-square principle makes the algorithm converge better, and the number of iteration steps is relatively small[10-13]. However, due to the characteristics of off-line iteration, the collected finite data are generated by the original system dynamics, and the obtained strategy matches the original dynamics. Therefore, when the off-policy algorithm is used to deal with system uncertainties, the control performance may decrease.

Fortunately, the on-policy algorithm has better adaptive capability for uncertain systems. Based on the collected data, the control strategy can be obtained through policy evaluation and policy improvement. Different from the off-policy case, the control strategy based on the on-policy algorithm is applied to the system dynamics in real time, and new data are generated for the next iteration. As a result, the newly collected data episodes will contain changes in the dynamic information, thereby achieving the dynamic improvement of the control strategy[11,15-17]. Since the algorithm needs to constantly interact with the system, the collected data are used only once in each iteration. Therefore, the on-policy algorithm performs worse than the off-policy algorithm in terms of data efficiency and convergence speed.

Remark 2  On-policy and off-policy algorithms have their own advantages and restrictions. The on-policy algorithm has advantages in adaptability, and the off-policy algorithm has advantages in convergence and data utilization. However, the on-policy algorithm has low data utilization, and the off-policy algorithm has insufficient adaptability to system uncertainty. Accordingly, the characteristics of the FR algorithm are as follows: For the problem of multi-source disturbances, the online optimization and adaptive update of the control strategy can be realized; the data redistribution method makes full use of empirical servo data; the correlation between adjacent data is reduced, and the algorithm stability and convergence are improved.

In this paper, a new algorithm structure named the FR algorithm is proposed, which has advantages in data efficiency, algorithm convergence, and adaptability. The specific mathematical principles will be introduced in detail in section 2.2.

2.2 Mathematical principle of feedback relearning algorithm

To facilitate the introduction of the mathematical principles, the unknown uncertain SGCMG system can be expressed as
$$
\dot{x}(t) = F\left(x(t)\right) + G\left(x(t)\right)\left(u(t) + D(t)\right) \tag{3}
$$

where $x(t) \in \mathbb{R}^n$ is the state vector, which corresponds to the speed error $e_w(t)$ and the stator current $I_q$ of SGCMG. Define the setting speed of SGCMG as $w_0$, with $e_w(t) = w_0 - w(t)$ and $x(t) = [e_w(t), I_q(t)]^{\mathrm{T}}$. $u(t) \in \mathbb{R}^m$ represents the servo control strategy, which is related to the q-axis voltage; $D(t)$ is the multi-source disturbance of the SGCMG system; $F(x(t))$ the unknown system dynamics of SGCMG with $F(0) = 0$; and $G(x(t))$ the unknown control matrix.

Based on the nominal system $\dot{x}(t) = F(x(t)) + G(x(t))u(t)$, the cost function can be defined as
$$
V\left(x(t)\right) = \int_{t}^{\infty} U\left(x(\tau), u(\tau)\right) \mathrm{d}\tau \tag{4}
$$

where $U(x(t), u(t)) = x^{\mathrm{T}}(t) N x(t) + u^{\mathrm{T}}(t) M u(t)$ is the utility function with $U(0,0) = 0$; $N$ and $M$ are positive definite symmetric matrices with proper dimensions $n$ and $m$. The optimal cost function can be expressed as
$$
V^{*}\left(x(t)\right) = \min_{u \in \Omega_u} \int_{t}^{\infty} U\left(x(\tau), u(\tau)\right) \mathrm{d}\tau \tag{5}
$$

where $\Omega_u$ is the set of admissible controls for system (3). Further, the Hamiltonian function is obtained as
$$
H\left(x, u, \nabla V\right) = \nabla V^{\mathrm{T}}\left(x(t)\right)\left(F(x(t)) + G(x(t))u(t)\right) + U\left(x(t), u(t)\right) \tag{6}
$$

where $\nabla V(x(t)) = \partial V(x(t)) / \partial x(t)$ with $V(0) = 0$. Based on the Bellman optimality principle, the following Hamilton-Jacobi-Bellman (HJB) equation can be defined as
$$
\min_{u^{*} \in \Omega_u} H\left(x, u^{*}, \nabla V^{*}\right) = 0 \tag{7}
$$

where $u^{*}(t) \in \Omega_u$ represents the optimal solution of the HJB equation. The optimal servo control strategy is formulated as
$$
u^{*}(t) = -\frac{1}{2} M^{-1} G^{\mathrm{T}}\left(x(t)\right) \nabla V^{*}\left(x(t)\right) \tag{8}
$$

Then, substituting Eq.(8) into Eq.(7), the HJB equation becomes
$$
\nabla V^{*\mathrm{T}}(x) F(x) - \frac{1}{4} \nabla V^{*\mathrm{T}}(x) G(x) M^{-1} G^{\mathrm{T}}(x) \nabla V^{*}(x) + x^{\mathrm{T}} N x = 0 \tag{9}
$$

However, Eq.(9) is a partial differential equation, and its analytical solution is generally difficult to obtain directly. Based on the policy iteration (PI) algorithm, ADP was proposed to solve Eq.(9) and finally obtain an approximate solution of $u^{*}$[28]. Initialization: $V_0(x(0)) = 0$, iteration step $i = 0$, and an initial admissible control $u_1(t)$.

Policy evaluation: Substitute $V_i(x(t))$ into Eq.(10) to get the solution $\nabla V_{i+1}(x(t))$ by
$$
U\left(x(t), u_i(t)\right) + \nabla V_{i+1}^{\mathrm{T}}\left(x(t)\right)\left(F(x(t)) + G(x(t))u_i(t)\right) = 0 \tag{10}
$$

Policy improvement: Update the control strategy by
$$
u_{i+1}(t) = -\frac{1}{2} M^{-1} G^{\mathrm{T}}\left(x(t)\right) \nabla V_{i+1}\left(x(t)\right) \tag{11}
$$

Repeat these two steps until the algorithm meets the accuracy requirement, and then the corresponding servo control strategy can be obtained. During the above iteration, the model information is still needed[29]. Further, the requirement for the model dynamics $F$ and $G$ of the SGCMG system can be relaxed based on the integral reinforcement learning (IRL) method[13,16,18].
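For intuition, when the dynamics are linear ($F(x) = Ax$, $G(x) = B$) and the cost is quadratic, the policy evaluation/improvement cycle of Eqs.(10, 11) reduces to Kleinman's iteration: a Lyapunov solve followed by a gain update. The sketch below uses toy $A$ and $B$ matrices as assumptions, since the SGCMG dynamics are deliberately treated as unknown in this paper.

```python
# Kleinman's iteration: the linear-quadratic special case of Eqs.(10, 11).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # assumed (stable) toy dynamics
B = np.array([[0.0], [1.0]])
N, M = 2 * np.eye(2), np.eye(1)

K = np.zeros((1, 2))                       # initial admissible (stabilizing) gain
for i in range(20):
    Ac = A - B @ K                         # closed loop under u_i = -K x
    # policy evaluation: solve Ac^T P + P Ac + N + K^T M K = 0
    P = solve_continuous_lyapunov(Ac.T, -(N + K.T @ M @ K))
    # policy improvement: u_{i+1} = -M^{-1} B^T P x
    K = np.linalg.solve(M, B.T @ P)
print("converged feedback gain:", K)
```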

In the iteration process, the time derivative of the cost function $V_{i+1}(x)$ can be formulated as $\mathrm{d}V_{i+1}/\mathrm{d}t = \nabla V_{i+1}^{\mathrm{T}}(x)\left(F(x) + G(x)(u_1(t) + D(t))\right)$, where $u_1(t)$ is the admissible control. Under the influence of the multi-source disturbance $D(t)$, the system will not diverge during the first data collection stage. Then, define $u_0(t) = u_1(t) + D(t)$.

Based on Eqs.(10, 11), we can obtain
$$
\frac{\mathrm{d}V_{i+1}}{\mathrm{d}t} = -U\left(x, u_i\right) - 2 u_{i+1}^{\mathrm{T}} M \left(u_0 - u_i\right) \tag{12}
$$

Integrating Eq.(12) on the time interval $[t, t+\Delta t]$ yields
$$
V_{i+1}\left(x(t+\Delta t)\right) - V_{i+1}\left(x(t)\right) = -\int_{t}^{t+\Delta t} \left[U\left(x, u_i\right) + 2 u_{i+1}^{\mathrm{T}} M \left(u_0 - u_i\right)\right] \mathrm{d}\tau \tag{13}
$$

Therefore, the PI algorithm based on Eqs.(10, 11) has been replaced by Eq.(13), which is mathematically equivalent to Newton's method[17]. Based on the collected data episodes, the cost function $V_{i+1}(x)$ and the control strategy $u_{i+1}(t)$ can be solved without using the system dynamics of SGCMG.

2.3 Neural network implementation based on least-square principle

In the FR algorithm, an actor-critic neural network structure is used to approximate the cost function and servo control strategy of the SGCMG system, and the least-square principle is used in the iteration over collected data episodes. The critic network and action network can be expressed as
$$
V(x) = \nu_c^{\mathrm{T}} \lambda_c(x) + \varepsilon_c, \qquad u(x) = \nu_a^{\mathrm{T}} \lambda_a(x) + \varepsilon_a \tag{14}
$$

where $\lambda_c \in \mathbb{R}^{l_c}$ and $\lambda_a \in \mathbb{R}^{l_a}$ represent the activation functions; $\nu_c \in \mathbb{R}^{l_c \times m_c}$ and $\nu_a \in \mathbb{R}^{l_a \times m_a}$ the ideal weights of the critic and action networks; $l_c$ and $l_a$ the numbers of neurons in the hidden layers. The reconstruction errors $\varepsilon_c$ and $\varepsilon_a$ can be omitted when the number of iteration steps is large enough[30-31]. Therefore, define the estimated form of Eq.(14) as
$$
\hat{V}_{i,j+1}(x) = \hat{\nu}_c^{\mathrm{T}} \lambda_c(x), \qquad \hat{u}_{i,j+1}(x) = \hat{\nu}_a^{\mathrm{T}} \lambda_a(x) \tag{15}
$$

where $i$ and $j$ represent the iterative steps in the outer loop and the inner loop, respectively. For example, $\hat{u}_{i,j+1}(x)$ represents the $(j+1)$th iteration solution of the inner loop in the $i$th outer loop. The structure of the algorithm iteration will be introduced in section 3. Define a large time sequence $\{t_k, k \in (0, \dots, q)\}$, where $q$ is the dimension requirement in data collection, which satisfies $q \geq l_c + l_a m_a$ to meet the full-rank condition in the matrix inverse operation[13].
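The extracted text does not preserve the paper's exact activation functions, so the sketch below assumes a quadratic polynomial basis over the two-dimensional state $x = [e_w, I_q]$, a common choice for this kind of actor-critic iteration; its gradient is what enters the $\nabla V$ terms of the iteration.

```python
# Assumed quadratic basis for the critic activation lambda_c(x), l_c = 3.
import numpy as np

def lambda_c(x):
    e, iq = x
    return np.array([e * e, e * iq, iq * iq])

def grad_lambda_c(x):
    e, iq = x
    return np.array([[2 * e, 0.0],    # d(e^2)/dx
                     [iq, e],         # d(e*iq)/dx
                     [0.0, 2 * iq]])  # d(iq^2)/dx, shape (3, 2)
```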

Based on Eqs.(13, 15), the residual error $\epsilon$ is formulated as

where $\epsilon$ is introduced in the process of neural network approximation. The purpose of the iteration is to obtain the optimal weights of the neural networks, so that the residual error converges to its minimum value. Then, Eq.(16) can be expressed as

        where

        and

where $I_{\lambda_a}$ is the identity matrix with appropriate dimension; $\mathrm{vec}(A)$ the column vector representation of matrix $A$, in which all the column vectors are stacked into one column; and $\otimes$ the Kronecker product operation.

In the $i$th iteration, the collected data set can be defined as

        and

Based on the least-square principle, the weight parameters can be calculated by
$$
\hat{W}_{i,j+1} = \left(\Xi_{i,j}^{\mathrm{T}} \Xi_{i,j}\right)^{-1} \Xi_{i,j}^{\mathrm{T}} \Theta_{i,j} \tag{22}
$$

Therefore, based on the neural network implementation and the least-square principle, the system model is not needed in the proposed servo control strategy, which circumvents the difficulty of SGCMG modeling.
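A minimal sketch of this least-square step: assuming the data matrix $\Xi$ and target vector $\Theta$ of Eqs.(20, 21) have been assembled with one row per sampled interval $[t_k, t_k+\Delta t]$, the stacked critic and action weights follow from the normal equations; `np.linalg.lstsq` is used instead of an explicit inverse for numerical robustness.

```python
# Least-square solve of the weight update (cf. Eq.(22)).
import numpy as np

def solve_weights(Xi, Theta, lc, la, m):
    q = Xi.shape[0]
    assert q >= lc + la * m, "rank condition q >= l_c + l_a*m_a must hold"
    W, *_ = np.linalg.lstsq(Xi, Theta, rcond=None)
    # first l_c entries: critic weights; the rest: vec(action weights),
    # undone with column-major ordering to match the vec(.) convention
    return W[:lc], W[lc:].reshape(la, m, order="F")
```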

        3 Algorithm Structure and Data Redistribution

To solve the problem of multi-source disturbance, the servo control strategy obtained by the FR algorithm interacts with the SGCMG system and realizes adaptive adjustment based on RL. The basic structure of the FR algorithm is shown in Fig.2, including the outer loop and the inner loop iterations.

Fig.2 Basic structure of FR algorithm

The first step is algorithm initialization, which involves the parameter initialization and system operation of SGCMG. Then, the algorithm collects the servo states of the gimbal system in real time, performs calculations according to Eqs.(18, 19), and stores them in the memory pool until the algorithm dimension requirement $q$ is satisfied. Next, $q_d$-dimensional data episodes are randomly deleted by the data redistribution method. Based on the least-square principle, the inner loop iteration is performed according to Eqs.(20—22) until the calculation accuracy $\rho_j$ is satisfied or the maximum number of iterations $N_j$ is reached; then the outer loop criterion is checked. When the accuracy $\rho_i$ is satisfied, the corresponding servo control strategy is obtained. The pseudo code of the FR algorithm is given in Algorithm 1.

Algorithm 1  FR Algorithm

        1:Start

        2:Initialization:

        3:Data collection:

4:If q is satisfied

5:Collect speed error states of the gimbal system;

        6:Calculate data episodes;

        7:Store data episodes in the memory pool;

        8:End if

        9:Data redistribution:

10:Randomly remove qd-dimensional data episodes from the q episodes;

        11:Policy evaluation and improvement:

12:Do least-square iteration based on Eqs.(20—22);

13:While ρj or Nj is satisfied

14:If ρi is satisfied

        15:Obtain the servo control strategy;

        16:Else

        17:Return to data collection step;

        18:End if

        19:End
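The control flow of Algorithm 1 can be summarized in the hedged Python skeleton below; `collect_episode`, `build_rows`, `lstsq_iterate` and `redistribute` are hypothetical callables standing in for the data collection of Eqs.(18, 19), the inner loop of Eqs.(20—22), and the redistribution step described next.

```python
# Hedged skeleton of Algorithm 1 (control flow only).
def fr_algorithm(collect_episode, build_rows, lstsq_iterate, redistribute,
                 q=100, qd=40, rho_i=1e-6, Nj=100):
    pool = []                                   # memory pool of data episodes
    while True:
        while len(pool) < q:                    # data collection until q is met
            pool.append(build_rows(collect_episode()))
        pool = redistribute(pool, qd)           # randomly drop qd, disorder rest
        weights, residual = lstsq_iterate(pool, Nj)  # inner loop iteration
        if residual < rho_i:                    # outer loop criterion
            return weights                      # the servo control strategy
        # otherwise return to data collection; the retained episodes stay in
        # the pool, so only the qd eliminated ones must be supplemented
```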

It is worth noting that for the data-based RL method, uncertain data episodes will affect the algorithm convergence. In this paper, $D(t)$, related to the multi-source disturbances, directly affects the accuracy of the collected data episodes and the dynamic performance. In this scenario, the high correlation of the collected data is an important factor, which may promote the singularity of the matrix operation.

Based on experience replay technology, a data redistribution method is designed to effectively reduce the correlation of the collected data, and then improve the convergence performance and data utilization of the FR algorithm. In the iteration of the FR algorithm, the collected data episodes are preprocessed before each inner loop iteration. In order to reduce the correlation between data episodes, $q_d$-dimensional data episodes in the data set are randomly eliminated, and the sequence of the remaining episodes is disordered.

In the face of an uncertain system, this processing is beneficial to the convergence of the data-based algorithm and thus improves its stability. In the next outer loop iteration, the last data set can still be retained, and only the episodes eliminated in advance need to be supplemented to meet the iteration requirement $q$. This greatly improves the efficiency of data utilization, which is also the advantage of the proposed data redistribution method; a minimal sketch of this step is given below.
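The redistribution step used in the skeleton above can be sketched as follows: shuffle the stored episodes and discard $q_d$ of them, so that rows entering the least-square iteration are no longer adjacent in time. The fixed random seed is an assumption for reproducibility.

```python
# Hedged sketch of the data redistribution step of section 3.
import random

def redistribute(pool, qd, rng=random.Random(0)):
    pool = list(pool)       # copy, so the caller's memory pool is untouched
    rng.shuffle(pool)       # disorder the sequence of episodes
    return pool[qd:]        # the first qd shuffled episodes are eliminated
```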

        4 Simulation Analysis

In this paper, the multi-source disturbance is considered in a simulation model of the SGCMG gimbal servo system. The parameters of SGCMG are given in Table 1.

        Table 1 SGCMG gimbal servo system parameters

In the simulation, the PID and SMC methods are used for comparison with the servo control strategy based on the FR algorithm, which is called FR control. The PID controller parameters are listed in Table 2, and the SMC method can be found in Ref.[4].

        Table 2 Parameters of PID controller

For the training process of FR control, the parameters are set as: $N = 2 \times I_{2\times2}$, $M = I$ ($I$ is the identity matrix), $\rho_i = \rho_j = 1\times10^{-6}$, $N_j = 100$, $q = 100$ and $q_d = 40$. The activation functions of the action and critic networks are $\lambda_a(x) = \lambda_c(x) = $ …. Then, the weight training processes of the two networks are shown in Figs.3, 4. If the on-policy algorithm were used, the weight parameters after one iteration would be applied to SGCMG. However, the current parameters have not converged to the optimal solution, and cannot ensure the stability of SGCMG under the multi-source disturbance.

        Fig.3 Weight training process of action network

        Fig.4 Weight training process of critic network

Based on the definition of the state variable $x(t) = [e_w(t), I_q(t)]^{\mathrm{T}}$, the collected data of the SGCMG system are given in Fig.5. More importantly, the weights of the neural networks are iterated from zero initial values; the selection of the initial weights in the iterative algorithm is thus relaxed, which is more convenient for engineering.

        Fig.5 Collected data of SGCMG system

Fig.6 shows the training process of the FR control strategy, including the data collection process under admissible control (before 0.5 s), the algorithm iteration (at 0.5 s), and the control process (after 0.5 s). The sampling time of the SGCMG system is set as 0.005 s. Combined with the requirement of $q = 100$, the data collection process lasts 0.5 s. The algorithm iterates for a short time at 0.5 s, and then outputs the servo control strategy.

        Fig.6 Training process of FR controller

The multi-source disturbance, including high-frequency and low-frequency sinusoidal disturbances and a slope disturbance, is shown in Fig.7. It is used to simulate the precision errors introduced by position sensors, the unbalance torque of the high-speed rotor, the torque coupled with the satellite speed, etc.

        Fig.7 Multi-source disturbance

Then, Fig.8 gives the tracking control of the SGCMG system under the multi-source disturbance. The control signal of the FR controller is given in Fig.9. In the simulation, the setting speed is $w_0 = 0.5\,^{\circ}/\mathrm{s}$. It can be observed that SMC and PID control are affected to some extent under this complex disturbance. In contrast, the FR controller shows better control performance in stability and rapidity. Based on the proposed FR algorithm, there is still a small fluctuation in the speed output. However, the robustness of the SGCMG system is obviously improved, and the speed converges to the expected value faster. At the same time, the strategy proposed in this paper is a model-free method based on data collection, which also improves the generalization of the control strategy.

        Fig.8 Servo speed control under multi-source disturbance

The correlation of adjacent data is shown in Fig.10. Based on Eqs.(20, 21), "Data 1" represents the data set collected in the first iteration with $i = 1$, i.e., $\Xi_{1,1}$ and $\Theta_{1,1}$, where the data redistribution method has not been used. Accordingly, "Data 2" is the data set collected in the second iteration with $i = 2$, where the data redistribution method has been used. Fig.10 shows that the correlation of adjacent data is significantly reduced by data redistribution. For the data-based RL algorithm, a high correlation of adjacent data may lead to poor convergence or even divergence. Therefore, the data redistribution method reduces the data correlation and improves the convergence performance of the FR algorithm.
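The exact correlation metric of Fig.10 is not specified in the extracted text; one plausible reading, sketched below, is the Pearson correlation between consecutive rows of the stacked data matrix $\Xi$, computed before and after redistribution.

```python
# Assumed metric: Pearson correlation between consecutive data-matrix rows.
import numpy as np

def adjacent_correlation(Xi):
    rows = np.asarray(Xi, dtype=float)
    return np.array([np.corrcoef(rows[k], rows[k + 1])[0, 1]
                     for k in range(len(rows) - 1)])
```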

        Fig.9 FR control signal under multi-source disturbance

        Fig.10 Correlation of collected data set

        5 Conclusions

A data-based FR algorithm is proposed for the robust control of the SGCMG gimbal servo system, where a data redistribution method is designed to improve the data utilization and algorithm convergence. Under the influence of multi-source disturbance, the control strategy can be obtained by using the collected data of SGCMG. This method avoids the difficulty of mathematical modeling of SGCMG and has better adaptability for uncertain problems. Through the comparative analysis on the simulation platform, the proposed method is shown to better suppress the multi-source disturbance in terms of rapidity and stability.
