

Task offloading method based on deep reinforcement learning for vehicular edge computing


Guo Xiaodong  Hao Sida  Wang Lifang

Abstract: Vehicular edge computing allows vehicles to offload computational tasks to edge servers, thereby meeting the explosive growth in vehicles' demand for computing resources. However, how to make offloading decisions and allocate computing resources remains a key open problem. Moreover, task offloading by moving vehicles over continuous time is rarely addressed, and in particular the randomness of task arrivals at vehicles receives insufficient consideration. To address these issues, this paper builds a dynamic vehicular edge computing model, describes it as a Markov decision process with a seven-dimensional state space and a two-dimensional action space, and constructs a distributed deep reinforcement learning model to solve the problem. In addition, to counter the poor performance caused by the discrete-continuous hybrid decision problem, the input layer is nested with the first-stage decision network and a staged-decision deep reinforcement learning algorithm is proposed. Simulation results show that, compared with the baseline algorithms, the proposed algorithm keeps energy consumption at a low level and has clear advantages in task completion rate, delay, and reward, which provides an effective solution to the offloading decision and computing resource allocation problem in vehicular edge computing.

Key words: vehicular edge computing; task offloading; resource allocation; deep reinforcement learning

CLC number: TP393   Document code: A

Article ID: 1001-3695(2023)09-038-2803-05

        doi:10.19734/j.issn.1001-3695.2023.02.0027

        Task offloading method based on deep reinforcement learning for vehicular edge computing

Guo Xiaodong (a), Hao Sida (b), Wang Lifang (b)

        (a.College of Electronic Information Engineering,b.College of Computer Science & Technology,Taiyuan University of Science & Technology,Taiyuan 030024,China)

Abstract: To meet the explosive growth in vehicles' demand for computing resources, vehicular edge computing allows vehicles to offload computational tasks to edge servers. However, how to make offloading decisions and allocate computational resources remains a critical open issue. Moreover, task offloading by moving vehicles over continuous time is rarely addressed, and the randomness of task arrivals at vehicles in particular receives insufficient attention. To address these problems, this paper established a dynamic vehicular edge computing model, described it as a Markov decision process with a seven-dimensional state space and a two-dimensional action space, and built a distributed deep reinforcement learning model to solve the problem. Furthermore, because the discrete-continuous hybrid decision problem leads to poor results, this paper proposed a staged-decision deep reinforcement learning algorithm that nests the input layer with the first-stage decision network. Simulation results show that, while maintaining a low level of energy consumption, the proposed algorithm has significant advantages over the comparison algorithms in task completion rate, delay, and reward. This paper provides an effective solution to the offloading decision and computational resource allocation problem in vehicular edge computing.

Key words: vehicular edge computing (VEC); task offloading; resource allocation; deep reinforcement learning

0 Introduction

In recent years, with the rapid development of intelligent connected vehicles, the degree of vehicle informatization and intelligence has kept rising. At the same time, in-vehicle applications and services such as autonomous driving [1], vehicular augmented reality [2], and in-vehicle gaming keep emerging and place stringent demands on vehicles' computing capability; insufficient computing capability has become a key problem constraining their development.

Vehicular edge computing (VEC) [3] is considered a promising solution. VEC places computing and storage resources at road side units (RSU) closer to users and allows vehicles to offload computational tasks to edge servers, thereby achieving task offloading with low delay and low energy consumption. In a VEC environment, vehicles equipped with communication facilities can exchange information with edge servers over wireless links to the RSUs, forming a pattern known as vehicle-to-infrastructure (V2I) [4]. The architecture model and offloading strategy of VEC are key issues that have attracted extensive attention. Liu et al. [3] surveyed the latest research on VEC, including its overview, architecture, advantages, and challenges. On the modeling side, Tian et al. [5] modeled moving vehicles and proposed a KMM algorithm, for the case where task information is known, to reduce task delay. Huang et al. [6] divided tasks by priority into critical, high-priority, and low-priority applications and studied a task offloading problem that minimizes energy consumption across these priority classes. Offloading strategies can be divided into centralized and distributed ones. A centralized strategy is scheduled and managed uniformly by a central node and achieves better global performance, but high-speed vehicle movement causes the network topology to change rapidly [7], forcing the centralized network to be repeatedly reconstructed and increasing delay. Hou et al. [8] designed a fault-tolerant particle swarm optimization heuristic for the non-convex, NP-hard offloading optimization problem to maximize offloading reliability.

In contrast, a distributed offloading strategy is made by each individual node according to its own environmental information, which avoids repeated network reconstruction. Deep reinforcement learning (DRL) is a commonly used distributed offloading approach and has been applied widely. Shi et al. [9] proposed a DRL-based multi-aircraft cooperative air combat decision method to improve coordination in multi-aircraft confrontation scenarios; Chen et al. [10] surveyed important applications of DRL algorithms in robot manipulation; Chen et al. [11] studied the joint optimization of offloading decisions and resource allocation and proposed a reinforcement learning based task offloading and resource allocation method to reduce delay and energy consumption.

Although the above methods solve part of the task offloading problem in VEC environments, some shortcomings remain. First, the models lack a treatment of moving vehicles over continuous time and give insufficient consideration to the randomness of task arrivals at vehicles. Second, the high mobility of vehicles and the resulting rapid changes in network topology [7] are not fully taken into account. To address these issues, this paper builds a dynamic vehicular edge computing model and constructs a distributed deep reinforcement learning model to solve the problem. The main work of this paper is as follows:

a) A dynamic multi-slot model of task offloading and resource allocation for vehicular edge computing is constructed. For the task offloading and resource allocation problem in a dynamic VEC environment, continuous time is abstracted into a multi-slot model, and the motion states, computing resources, and computational tasks of the vehicles are dynamically pushed into a slot queue, from which continuous vehicle motion, task, and computation models are built. A minimal sketch of such a slot queue is given after this list.

b) A distributed task offloading and resource allocation algorithm based on deep reinforcement learning is designed. The joint influence of seven state variables on the offloading decision is considered, in particular the interaction between task complexity and transmission distance. The problem is formulated as a Markov decision process with a seven-dimensional state space and a two-dimensional action space, and a distributed deep reinforcement learning model that makes immediate decisions is built to describe the problem. Agents are deployed on multiple computing nodes, and parameter sharing and parallelized computation are used to improve training efficiency and performance. A sketch of one possible state and action layout is given after this list.

c) A staged-decision deep reinforcement learning algorithm is proposed. To counter the poor performance caused by the discrete-continuous hybrid decision problem, the input layer is nested with the first-stage decision network, yielding a staged-decision deep reinforcement learning algorithm. Experiments verify that the algorithm has clear advantages in delay, energy consumption, and task completion rate. A sketch of the staged network structure is also given below.
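For contribution a), the paper's slot-queue construction is defined in its system model, which is not included in this excerpt. The following is a minimal Python sketch of the idea, assuming hypothetical field names such as position, cpu_free, and tasks for the per-slot vehicle snapshot:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class SlotRecord:
    """Snapshot of one vehicle in one time slot (field names are illustrative)."""
    t: int                    # slot index
    position: float           # position along the road segment (m)
    cpu_free: float           # remaining local CPU cycles available in this slot
    tasks: list = field(default_factory=list)  # tasks arrived in this slot: (bits, cycles/bit)

class SlotQueue:
    """FIFO buffer of per-slot records, one entry pushed per time slot."""
    def __init__(self, horizon: int):
        self.buffer = deque(maxlen=horizon)

    def push(self, record: SlotRecord) -> None:
        self.buffer.append(record)

    def current(self) -> SlotRecord:
        return self.buffer[-1]

# One record would be pushed per slot as the vehicle moves and tasks arrive randomly.
queue = SlotQueue(horizon=100)
queue.push(SlotRecord(t=0, position=0.0, cpu_free=2e9, tasks=[(4e5, 300)]))
```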
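For contribution b), the exact composition of the seven-dimensional state is given in the paper's problem formulation, not in this excerpt. The sketch below assumes the seven components are the current task size and complexity, the local remaining resources, the number of pending tasks, the distance to the RSU, the server's remaining resources, and the vehicle speed (the first five are named in Section 5), with the two-dimensional action consisting of a discrete offload flag and a continuous resource-allocation ratio:

```python
import numpy as np

def build_state(task_bits, task_cycles_per_bit, local_cpu_free,
                pending_tasks, distance_to_rsu, server_cpu_free, speed):
    """Pack the seven assumed state components into one observation vector."""
    return np.array([task_bits, task_cycles_per_bit, local_cpu_free,
                     pending_tasks, distance_to_rsu, server_cpu_free, speed],
                    dtype=np.float32)

def split_action(raw_action):
    """Decode the two-dimensional action: a discrete offload flag and a
    continuous resource-allocation ratio in [0, 1]."""
    offload = int(raw_action[0] > 0.5)             # 0: compute locally, 1: offload to the RSU
    alloc_ratio = float(np.clip(raw_action[1], 0.0, 1.0))
    return offload, alloc_ratio

state = build_state(4e5, 600, 1.5e9, 3, 120.0, 8e9, 16.7)
offload, alloc_ratio = split_action(np.array([0.8, 0.35]))
```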
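For contribution c), the layer sizes and training details of the staged-decision network are not reproduced here; the sketch below only illustrates the nesting idea in TensorFlow 2.x (the stack used in Section 4): a first stage maps the state to a discrete offloading probability, and a second stage receives the state concatenated with that first-stage output and produces the continuous resource-allocation ratio. All layer widths are placeholders.

```python
import tensorflow as tf

STATE_DIM = 7  # seven-dimensional state, as in the MDP formulation

def build_staged_actor():
    state_in = tf.keras.Input(shape=(STATE_DIM,), name="state")

    # First-stage (discrete) decision: probability of offloading to the edge server.
    h1 = tf.keras.layers.Dense(64, activation="relu")(state_in)
    offload_prob = tf.keras.layers.Dense(1, activation="sigmoid", name="offload")(h1)

    # Second stage: the input layer is nested (concatenated) with the first-stage
    # output, so the continuous allocation is conditioned on the discrete choice.
    nested_in = tf.keras.layers.Concatenate()([state_in, offload_prob])
    h2 = tf.keras.layers.Dense(64, activation="relu")(nested_in)
    alloc_ratio = tf.keras.layers.Dense(1, activation="sigmoid", name="alloc")(h2)

    return tf.keras.Model(inputs=state_in, outputs=[offload_prob, alloc_ratio])

actor = build_staged_actor()
actor.summary()
```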

4 Experiments and analysis

The simulation is based on Python 3.7.10, NumPy 1.18.5, pyglet 1.5.21, and TensorFlow 2.3.0. The time-slot, communication, and task related parameters are set with reference to [14-16]. According to their different energy constraints, the vehicle-side parameters are set with reference to Intel Core series CPUs and the server-side parameters with reference to Intel Xeon series CPUs. The task complexity is kept within [50, 1250] cycles/bit, covering both complex and simple computational tasks. The average data throughput between a single vehicle and the RSU is 38.5 Mbps, and the average data throughput of the VEC server is 770 Mbps. The main parameter settings are listed in Table 2.
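Table 2 is not reproduced in this excerpt; the short sketch below only collects the parameter values stated in the text above (task complexity range and average throughputs), with the task-complexity distribution assumed uniform for illustration:

```python
import numpy as np

# Only the values stated in the text are filled in; the other entries of Table 2 are omitted.
SIM_PARAMS = {
    "task_complexity_cycles_per_bit": (50, 1250),  # covers simple to complex tasks
    "vehicle_to_rsu_throughput_mbps": 38.5,        # average per-vehicle data rate
    "vec_server_throughput_mbps": 770.0,           # average VEC server data rate
}

def sample_task_complexity(rng):
    """Draw a task complexity from the stated range (uniform distribution assumed)."""
    low, high = SIM_PARAMS["task_complexity_cycles_per_bit"]
    return rng.uniform(low, high)

rng = np.random.default_rng(0)
complexity = sample_task_complexity(rng)  # cycles/bit for one generated task
```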

To verify the effectiveness of the proposed algorithm, comparison experiments are designed with reference to the experimental designs of [12, 17, 18], in which all-local computing and greedy or random offloading are common baselines; with reference to [19-22], DQN and DDPG are widely used in reinforcement learning solutions for VEC task offloading. The comparison algorithms are therefore: all-local computing, greedy offloading, offloading with DQN, offloading with DDPG, and the proposed staged-decision distributed dynamic offloading algorithm. Multiple runs are performed and the results of all vehicles are combined as a weighted sum; the results below were obtained by reproducing the above methods. From Figs. 3-6, analyzing the total delay, total execution delay, total transmission delay, and total waiting delay shows that the proposed algorithm achieves more than a 15% advantage in total delay, which stems from its significant reduction of execution delay and waiting delay.
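The weights used to combine per-vehicle results are not specified in this excerpt; the following sketch shows the kind of weighted-sum aggregation assumed for the delay, energy, and reward curves in Figs. 3-10, with uniform weights as a placeholder:

```python
import numpy as np

def aggregate_metrics(per_vehicle_metrics, weights=None):
    """Weighted sum of per-vehicle results.
    per_vehicle_metrics: array of shape (num_vehicles, num_metrics),
    e.g. columns = (total delay, total energy, total reward)."""
    m = np.asarray(per_vehicle_metrics, dtype=float)
    if weights is None:                    # uniform weights assumed when none are given
        weights = np.full(m.shape[0], 1.0 / m.shape[0])
    return np.asarray(weights) @ m

totals = aggregate_metrics([[1.2, 0.8, 10.0],
                            [0.9, 1.1, 12.5],
                            [1.5, 0.7,  9.0]])
```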

From Figs. 7-10, analyzing the total energy consumption, total reward, remaining vehicle computing resources, and remaining VEC computing resources shows the following. In terms of energy consumption, the proposed algorithm is on the same level as DQN and greedy offloading and clearly below DDPG, while all-local computing fails to complete the tasks. In terms of reward, the proposed algorithm shows more than a 20% advantage. In terms of remaining computing resources, the proposed algorithm, greedy offloading, and DQN make full use of the VEC computing resources while the vehicles keep some computing resources for upcoming tasks, reflecting a better resource allocation strategy. In contrast, DDPG underutilizes the VEC computing resources and relies excessively on local computing resources.

From Fig. 11, analyzing the number of failed offloaded tasks shows that no offloading failures occur with the proposed algorithm, DQN, or greedy offloading, whereas DDPG exhibits a small number of failures and all-local computing a large number.


5 Conclusion

This paper studies the offloading decision and computing resource allocation problem in vehicular edge computing, in particular the case of tasks arriving randomly at moving vehicles over continuous time, in which offloading decisions and computing resource allocation must be made quickly and accurately. To solve this problem, this paper proposes a deep reinforcement learning based task offloading method for vehicular edge computing. First, the problem is described as a Markov decision process with a seven-dimensional state space and a two-dimensional action space, and a distributed deep reinforcement learning model is built. Then, to address the poor decision performance caused by the discrete-continuous hybrid decision problem, the input layer is nested with the first-stage decision network and a staged-decision deep reinforcement learning algorithm is proposed. Simulation experiments show that, after training, the proposed algorithm can combine the information of the current task, the remaining computing resources, the number of remaining uncomputed tasks, the distance to the edge server, and the edge server's remaining computing resources to make good immediate decisions, with the advantages of low delay, low energy consumption, and a high task completion rate.

This paper provides an effective solution for vehicular edge computing and for meeting the explosive growth in vehicles' demand for computing resources. Future work will focus on vehicular edge computing networks with multiple edge servers and will explore task offloading and resource allocation strategies there, with the aim of achieving better collaborative computing and load balancing.

References:

        [1]Narayanan S,Chaniotakis E,Antoniou C.Shared autonomous vehicle services:a comprehensive review[J].Transportation Research Part C:Emerging Technologies,2020,111:255-293.

        [2]Pratticò F G,Lamberti F,Cannavò A,et al.Comparing state-of-the-art and emerging augmented reality interfaces for autonomous vehicle-to-pedestrian communication[J].IEEE Trans on Vehicular Technology,2021,70(2):1157-1168.

        [3]Liu Lei,Chen Chen,Pei Qingqi,et al.Vehicular edge computing and networking:a survey[J].Mobile Networks and Applications,2021,26(3):1145-1168.

[4]Li Zhiyong,Wang Qi,Chen Yifan,et al.A survey on task offloading research in vehicular edge computing[J].Chinese Journal of Computers,2021,44(5):963-982.(in Chinese)

        [5]Tian Shujuan,Deng Xianghong,Chen Pengpeng,et al.A dynamic task offloading algorithm based on greedy matching in vehicle network[J].Ad hoc Networks,2021,123:102639.

        [6]Huang Xinyu,He Lijun,Zhang Wanyue.Vehicle speed aware computing task offloading and resource allocation based on multi-agent reinforcement learning in a vehicular edge computing network[C]//Proc of IEEE International Conference on Edge Computing.Piscataway,NJ:IEEE Press,2020:1-8.

        [7]Zhang Yan.Mobile edge computing[M].Cham:Springer,2022.

        [8]Hou Xiangwang,Ren Zhiyuan,Wang Jingjing,et al.Reliable computation offloading for edge-computing-enabled software-defined IoV[J].IEEE Internet of Things Journal,2020,7(8):7097-7111.

[9]Shi Wei,Feng Yanghe,Cheng Guangquan,et al.Research on multi-aircraft cooperative air combat method based on deep reinforcement learning[J].Acta Automatica Sinica,2021,47(7):1610-1623.(in Chinese)

[10]Chen Jiapan,Zheng Minhua.A survey of robot manipulation behavior research based on deep reinforcement learning[J].Robot,2022,44(2):236-256.(in Chinese)

        [11]Chen Xing,Liu Guizhong.Joint optimization of task offloading and resource allocation via deep reinforcement learning for augmented reality in mobile edge network[C]//Proc of IEEE International Conference on Edge Computing.Piscataway,NJ:IEEE Press,2020:76-82.

[12]Zhang Qiuping,Sun Sheng,Liu Min,et al.Online joint optimization mechanism of task offloading and service caching for multi-edge device collaboration[J].Journal of Computer Research and Development,2021,58(6):1318-1339.(in Chinese)

[13]Guo Songtao,Liu Jiadi,Yang Yuanyuan,et al.Energy-efficient dynamic computation offloading and cooperative task scheduling in mobile cloud computing[J].IEEE Trans on Mobile Computing,2018,18(2):319-333.

        [14]Gu Xiaohui,Zhang Guoan.Energy-efficient computation offloading for vehicular edge computing networks[J].Computer Communications,2021,166:244-253.

[15]Tian Xianzhong,Xu Ting,Zhu Juan.Research on offloading balance strategy of multiple edge nodes to minimize delay[J].Journal of Chinese Computer Systems,2022,43(6):1162-1169.(in Chinese)

        [16]Zhu Hongbiao,Wu Qiong,Wu X J,et al.Decentralized power allocation for MIMO-NOMA vehicular edge computing based on deep reinforcement learning[J].IEEE Internet of Things Journal,2021,9(14):12770-12782.

[17]Xu Xiaolong,Fang Zijie,Qi Lianyong,et al.A deep reinforcement learning-based distributed service offloading method for edge computing empowered Internet of Vehicles[J].Chinese Journal of Computers,2021,44(12):2382-2405.(in Chinese)

        [18]Sun Jianan,Gu Qing,Zheng Tao,et al.Joint communication and computing resource allocation in vehicular edge computing[J/OL].International Journal of Distributed Sensor Networks,2019,15(3).https://doi.org/10.1177/1550147719837859.

[19]Lu Haifeng,Gu Chunhua,Luo Fei,et al.Research on task offloading based on deep reinforcement learning in mobile edge computing[J].Journal of Computer Research and Development,2020,57(7):1539-1554.(in Chinese)

[20]Kuang Zhufang,Chen Qinglin,Li Linfeng,et al.Multi-user edge computing task offloading scheduling and resource allocation based on deep reinforcement learning[J].Chinese Journal of Computers,2022,45(4):812-824.(in Chinese)

        [21]Qi Qi,Wang Jingyu,Ma Zhanyu,et al.Knowledge-driven service offloading decision for vehicular edge computing:a deep reinforcement learning approach[J].IEEE Trans on Vehicular Technology,2019,68(5):4192-4203.

        [22]Qin Zhuoxing,Leng Supeng,Zhou Jihu,et al.Collaborative edge computing and caching in vehicular networks[C]//Proc of IEEE Wireless Communications and Networking Conference.Piscataway,NJ:IEEE Press,2020:1-6.

Received: 2023-02-03; Revised: 2023-03-15. Supported by the National Natural Science Foundation of China (61876123), the Shanxi Postgraduate Education Reform Project (2021YJJG238, 2021Y697), the Doctoral Startup Fund of Taiyuan University of Science and Technology (20212021), and the College Students' Innovation and Entrepreneurship Project (20210499).

About the authors: Guo Xiaodong (1977-), male, from Xiangfen, Shanxi, Ph.D., master's supervisor; his main research interests include intelligent computing, edge intelligence, and collaborative computing. Hao Sida (1997-), male, from Jinzhou, Hebei, master's degree; his main research interests include intelligent computing, the Internet of Vehicles, edge intelligence, and collaborative computing. Wang Lifang (1975-), female (corresponding author), from Heshun, Shanxi, associate professor, master's supervisor, Ph.D.; her main research interests include intelligent computing and intelligent optimization control (wanglifang@tyust.edu.cn).
