

        User space transformation in deep learning based recommendation


WU Caihua, MA Jianchao, ZHANG Xiuwei, and XIE Dang

Radar Non-Commissioned Officer School, Air Force Early Warning Academy, Wuhan 430345, China

Abstract: Deep learning based recommendation methods, such as the recurrent neural network based recommendation method (RNNRec) and the gated recurrent unit (GRU) based recommendation method (GRURec), have been proposed to solve the problem of time heterogeneous feedback recommendation. These methods outperform several state-of-the-art methods. However, in RNNRec and GRURec, action vectors and item vectors are shared among users. The different meanings of the same action for different users are not considered. Similarly, different users' preferences for the same item are also ignored. To address this problem, the models of RNNRec and GRURec are modified in this paper. In the proposed methods, action vectors and item vectors are first transformed into the user space for each user, and then the transformed vectors are fed into the original neural networks of RNNRec and GRURec. The transformed action vectors and item vectors represent the user specific meaning of actions and the user preference for items, which allows the proposed methods to obtain more accurate recommendation results. The experimental results on two real-life datasets indicate that the proposed methods outperform RNNRec and GRURec as well as other state-of-the-art approaches in most cases.

Key words: recommender system, collaborative filtering, time heterogeneous feedback, recurrent neural network, gated recurrent unit (GRU), user space transformation.

1. Introduction

Recently, several recurrent neural network based recommendation methods, such as the recurrent neural network based recommendation method (RNNRec) [1] and the gated recurrent unit (GRU) based neural network recommendation method (GRURec) [2], have been proposed to address the problem of time heterogeneous feedback recommendation [1]. In this scenario, different kinds of user feedback with time stamps, such as rating, transaction, browsing, reviewing, sharing and so on, are used to generate personalized recommendation results. It is reported that RNNRec and GRURec generate more accurate recommendation results than several traditional recommendation methods.

In RNNRec and GRURec, actions and items are represented by vectors. Recommendation results are generated according to action vector sequences and the corresponding item vector sequences, which are the input of a recurrent neural network. Action vectors and item vectors are shared among users. That is, the vector of one kind of action or one item remains the same when it is used to generate recommendations for different users. In practice, however, the meaning of the same action differs between users. For example, some users give a 3-point rating to an item because they like it, while for other users a 3-point rating means "not very good". Another example is adding a product to the cart: some users add a product to the cart because they will buy it later, while others add a product to the cart only to record it. RNNRec and GRURec do not consider the different meanings of the same action for different users. Similarly, these methods also ignore the differences in user preference by sharing item vectors among users. To address these problems, in this paper we modify the models of RNNRec and GRURec. In the proposed models, item vectors and action vectors are first transformed into the user space for each user. Thus, the transformed action vectors represent the specific meaning for the user, and the transformed item vectors reflect the preference of the user. Then, the transformed action vectors and item vectors are fed into the neural networks of RNNRec and GRURec to generate personalized recommendation results.

The main contributions of this paper are twofold. First, we modify the models of RNNRec and GRURec by adding a user space transformation part to the original models; the modified models are called TransRNNRec and TransGRURec, respectively. In the proposed models, the user specific meaning of actions and the user preference for items are considered by transforming the action vectors and item vectors into the user space in the user space transformation part. This allows the proposed models to obtain more accurate results than RNNRec and GRURec. Second, we verify the proposed models on two real-life datasets. The experimental results indicate that the proposed models outperform other state-of-the-art recommendation methods.

The rest of this paper is organized as follows. In the next section, we briefly review traditional recommendation methods and recently proposed deep learning based recommendation methods. Section 3 presents the preliminaries of this paper, including an introduction to the time heterogeneous feedback recommendation problem, RNNRec and GRURec. Then, the proposed TransRNNRec and TransGRURec are introduced in detail in Section 4. In Section 5, the proposed methods are compared with some state-of-the-art methods on two large-scale real-life datasets, and the convergence of the proposed methods is also analyzed. Finally, conclusions and future work are provided in Section 6.

2. Related work

Recently, with the rise of deep learning, several deep learning based recommendation approaches have been proposed. In these methods, deep learning models, such as the restricted Boltzmann machine (RBM), convolutional neural network (CNN), stacked denoising autoencoders (SDAE), deep structured semantic model (DSSM), recurrent neural network and so on, are used individually or in combination. Here, we briefly introduce representative deep learning based recommendation methods. For more details, please see the surveys [3,4].

Salakhutdinov et al. [5] reported that RBM slightly outperforms singular value decomposition (SVD) on the Netflix dataset for rating prediction. Oord et al. [6] proposed to extract music feature vectors from audio data with a CNN for recommendation. Wang et al. [7] combined SDAE [8] and collaborative topic regression (CTR) [9] for recommendation. Elkahky et al. [10] proposed a DSSM based model for cross domain recommendation. Wang et al. [11] proposed a generative adversarial network (GAN) based information retrieval method, which can be used for web search, recommendation and question answering. The recurrent neural network is the most widely used model in deep learning based recommendation methods [4]. For example, Zhang et al. [12] designed a recurrent neural network model for rating prediction. Hidasi et al. [13] proposed a recurrent neural network based approach to predict the next items that the user may click on. Wu et al. [1] proposed RNNRec to solve the problem of time heterogeneous feedback recommendation. Liu et al. [2] added a GRU layer to the model of RNNRec [1] and proposed a GRU based neural network, GRURec, to perform multiple time scale analysis and avoid gradient vanishing during training. Ebesu et al. [14] proposed a memory network based recommendation model, called collaborative memory networks (CMN), in which a neural attention mechanism is also integrated. Li and She [15] proposed to use a variational autoencoder (VAE) to learn deep latent representations from content data, where the implicit relationships between items and users are learned from both content and ratings. Liang et al. [16] extended VAE by introducing a different regularization parameter in the objective function for implicit feedback. The parameters of the model are inferred through the Bayesian method.

RNNRec and GRURec are the methods most related to this work. The main difference between these methods and the proposed methods is that item vectors and action vectors are transformed into the user space for each user in the proposed models. The transformed action vectors and item vectors represent the user specific meaning of actions and the user preference for items.

3. Preliminaries

In this section, we first introduce the problem of time heterogeneous feedback recommendation. Then, we briefly introduce two recurrent neural network based recommendation methods, RNNRec [1] and GRURec [2]. The time heterogeneous feedback recommendation problem was first introduced in [1]. In this scenario, the recommender system tries to predict which items the users may prefer in the future according to the user historical feedback with time stamps. For more details, please see [1]. Two recurrent neural network based recommendation models, RNNRec and GRURec, are designed to solve the time heterogeneous feedback recommendation problem. The structures of these models are shown in Fig. 1 and Fig. 2. In these methods, the time heterogeneous feedback recommendation problem is transformed into estimating the probability that a user will prefer an item in the future given the historical feedback with time stamps, P(j|A_i), where A_i = (a_{i,1}, a_{i,2}, ...) is the feedback sequence of user i sorted by time stamps. The recommender system then recommends the items that the user is most likely to access.

In Fig. 1 and Fig. 2, the current user vector u, the item vector v(t) and the feedback vector a(t) are the inputs of the RNNRec model. o(t) is the output of the model. The hidden layer s(t) remembers the state of the user's historical activities, and this part of the model is recurrent. The hidden layer h represents the relatively stable user preference, and this part of the model is non-recurrent. RNNRec outperforms some state-of-the-art recommendation methods on large-scale real-life datasets.

Fig. 1 Structure of RNNRec

Fig. 2 Structure of GRURec

However, because of the recurrent structure in the model of RNNRec, gradient vanishing may occur during training. In addition, RNNRec is unable to analyze the feedback sequence on multiple time scales, which may lead to recommendation errors. To overcome these two drawbacks, a GRU layer, the grey block in Fig. 2, is added to the model of RNNRec. GRUs are able to prevent gradient vanishing [17]. Furthermore, both long-term and short-term dependencies in sequences can be learned through the reset gates and the update gates in the GRU layer [18]. Thus, the users' historical feedback sequences can be analyzed on multiple time scales. For these reasons, GRURec obtains more accurate results than RNNRec. For more details about these two methods, please see [1] and [2].

Although RNNRec and GRURec outperform some traditional recommendation methods, the embedded item vectors and action vectors are shared unchanged among users. As a result, the models of RNNRec and GRURec cannot reflect the different meanings of one kind of feedback activity for different users or the different preferences of users for the same item. Thus, RNNRec and GRURec may produce some wrong recommendation results.

4. The proposed method

In this paper, in order to further improve the accuracy of RNNRec and GRURec, we modify their models and propose TransRNNRec and TransGRURec, respectively. In the proposed models, action vectors and item vectors are first transformed into the user space for each user. The transformed action vectors and item vectors represent the user specific meaning of actions and the user preference for items, respectively. Then, the transformed action vectors and item vectors are fed into the original models. The space transformation allows the proposed models to obtain more accurate results. Next, we introduce TransRNNRec and TransGRURec in detail.

4.1 RNNRec and GRURec with user space transformation

The structures of TransRNNRec and TransGRURec are shown in Fig. 3. The main difference between the proposed models and the original models in Fig. 1 and Fig. 2 is that the item vector v(t) and the action vector a(t) in the input layer are transformed into the user space in the proposed models. The user space transformation part of the proposed models is marked by a dashed box in Fig. 3.

Fig. 3 Structure of TransRNNRec and TransGRURec

Similar to RNNRec and GRURec, in the proposed models, users, items and actions are represented by one-hot vectors, which are called the user, item and action vectors, respectively. The current user vector u, the item vector v(t) and the action vector a(t) are the inputs of the proposed models. In the original models in Fig. 1, the weight matrices V and W can also be viewed as the item embedding matrix and the action embedding matrix, respectively. Each column of matrix V or W represents the latent feature of an item or an action. The item embedding matrix V and the action embedding matrix W are shared among users in RNNRec and GRURec, which prevents the original models from reflecting the different meanings of actions for different users and the user preference for items. Thus, in the proposed models, a user space transformation matrix U_u is added for user u, through which the item embedding vector and the action embedding vector are transformed into the user space. Then, the transformed item embedding vector and action embedding vector are fed into the hidden layer. For example, in TransRNNRec, the recurrent hidden layer is calculated as follows.
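In a form consistent with the original RNNRec formulation [1], with f denoting the hidden layer activation function, this can be written as

s(t) = f( U_u ( V v(t) + W a(t) ) + X s(t-1) )    (1)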

where v(t) and a(t) are the current item vector and the current action vector, s(t-1) is the last hidden layer state, and V, W, X and Q are the weight matrices between the input layer and the hidden layer. Matrices V and W are also called the item embedding matrix and the action embedding matrix, respectively. In (1), U_u(Vv(t) + Wa(t)) is the sum of the transformed item vector and action vector.
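As a minimal illustrative sketch of this step (using numpy with hypothetical variable names; the actual models are implemented in Theano, see Section 5.2), the per-user transformation and hidden state update could look like:

```python
import numpy as np

def trans_rnnrec_step(v_t, a_t, s_prev, U_u, V, W, X, f=np.tanh):
    """One step of the TransRNNRec recurrent hidden layer, as in (1).

    v_t, a_t : one-hot item and action vectors at time t
    s_prev   : previous hidden state s(t-1)
    U_u      : user space transformation matrix of the current user u
    V, W, X  : item embedding, action embedding and recurrent weight matrices
    f        : hidden layer activation (tanh is an assumption for illustration)
    """
    # Embed the item and the action, then transform the sum into the user space.
    transformed = U_u @ (V @ v_t + W @ a_t)
    # Combine with the previous hidden state, as in the original RNNRec.
    return f(transformed + X @ s_prev)
```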

In TransGRURec, the reset gate r, the update gate z and the candidate hidden layer g in the GRU layer are calculated after the user space transformation, as follows.
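In the standard GRU form, with sigmoid gates and a tanh candidate, these can be written as

r(t) = sigmoid( U_u ( W_r a(t) + V_r v(t) ) + X_r s(t-1) )

z(t) = sigmoid( U_u ( W_z a(t) + V_z v(t) ) + X_z s(t-1) )

g(t) = tanh( U_u ( W a(t) + V v(t) ) + X ( r(t) ⊙ s(t-1) ) )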

where W_r, V_r, X_r, W_z, V_z, X_z, W, V and X are the weight matrices, and ⊙ is the element-wise multiplication. In the formulas above, U_u(W_r a(t) + V_r v(t)), U_u(W_z a(t) + V_z v(t)) and U_u(W a(t) + V v(t)) are the transformations of the action vector and the item vector into the user space.

The calculation of the hidden layer s(t) in TransGRURec is the same as that in GRURec, as follows.
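Following the standard GRU update rule, this is

s(t) = (1 - z(t)) ⊙ s(t-1) + z(t) ⊙ g(t)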

The hidden layer h and the output o(t) of TransRNNRec and TransGRURec are calculated in the same way as in RNNRec and GRURec, as follows.
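Consistent with the structure in Fig. 1, with g denoting the output activation (typically a softmax over the items), these can be written as

h = f( Q u )

o(t) = g( Y s(t) + Z h )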

where Q, Y and Z are the weight matrices.

4.2 Model learning

The proposed models can be learned by the back propagation (BP) algorithm or the BP through time (BPTT) algorithm. In this paper, the models of TransRNNRec and TransGRURec are learned through BPTT combined with the root mean square propagation (RMSProp) [19] algorithm, which is a variant of stochastic gradient descent (SGD). In RMSProp, different parameters are updated with different learning rates: parameters with larger gradients in previous training steps receive smaller learning rates, while parameters with smaller gradients receive larger ones. For more details, please see [19].
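As a minimal sketch of this update rule (the learning rate and decay rate shown here correspond to the settings used in Section 5.2; the function itself is only an illustration, not the paper's Theano implementation):

```python
import numpy as np

def rmsprop_update(param, grad, cache, lr=0.001, decay=0.95, eps=1e-8):
    """One RMSProp update for a single parameter array.

    cache keeps a moving average of squared gradients, so a parameter with
    consistently large gradients is divided by a larger factor and therefore
    receives an effectively smaller learning rate, and vice versa.
    """
    cache = decay * cache + (1.0 - decay) * grad ** 2
    param = param - lr * grad / (np.sqrt(cache) + eps)
    return param, cache
```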

4.3 Recommendation generation

The recommendation results are generated in the same way as in RNNRec and GRURec. After the models of TransRNNRec and TransGRURec are learned, we calculate the output of the neural network at the last time stamp for each user, and then pick the K largest elements of the output. The indexes of these elements are the IDs of the items recommended to this user.
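A minimal sketch of this step (assuming the output at the last time stamp is available as a score vector with one entry per item):

```python
import numpy as np

def recommend_top_k(output, k=10):
    """Return the indexes (item IDs) of the K largest output elements.

    output : model output o(t) at the user's last time stamp, one score per item
    k      : number of items to recommend
    """
    # argpartition selects the K largest scores without a full sort;
    # the selected indexes are then ordered by descending score.
    top_k = np.argpartition(-output, k)[:k]
    return top_k[np.argsort(-output[top_k])]
```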

4.4 Computational complexity

In TransRNNRec, the time complexity of computing the model output is about O((D+n)D), where D is the dimension of the hidden layer and n is the number of items. The complexity of updating the weight matrices between the output layer and the hidden layer, Y and Z, once is about O(nD). The complexity of updating matrices X and U_u once is about O(D^2). Only one column of matrix W, V or Q is updated for a single training sample, so the complexity of updating these matrices once is about O(D). If these matrices are updated by BPTT, the complexities are about O(TD^2) and O(TD), where T is the number of unfolded time steps. In recommender systems, n ≫ D, so the complexity of processing a single training sample through the BP algorithm is about O(nD). If the BPTT algorithm is used to train the model, the complexity of processing a single training sample is slightly higher than O(nD). If there are n_s training samples, the complexity of training the model in one iteration through the BPTT algorithm is slightly higher than O(n_s nD).

Similarly, in TransGRURec, the complexity of computing the output is about O((D+n)D). The complexity of training the TransGRURec model in one iteration through the BPTT algorithm is slightly higher than O(n_s nD).

Because there are more weight matrices in TransGRURec, the complexity of TransGRURec is higher than that of TransRNNRec. Because of the user space transformation in the models of TransRNNRec and TransGRURec, their complexities are slightly higher than those of RNNRec and GRURec, respectively.

5. Experimental results and discussions

In this section, the proposed methods are verified on two large-scale real-life datasets. First, the datasets and the quality metrics are introduced. Then, the experimental settings are presented. Next, the proposed methods are compared with several state-of-the-art approaches. Finally, a convergence analysis of the proposed methods is provided.

5.1 Datasets and metrics

We evaluate the proposed methods, TransRNNRec and TransGRURec, on the Taobao 2014 dataset and the Lastfm dataset [20]. Table 1 summarizes the statistics of these datasets. The Taobao 2014 dataset contains a total of 182 880 activities of 884 users on 9 531 items collected from Taobao (www.taobao.com). There are four kinds of activities in this dataset: clicking, adding to the favorites list, adding to the cart, and buying.

Table 1 Statistics of the datasets

There are 1 892 users and 17 632 artists in the Lastfm (http://www.lastfm.com) dataset [20]. A total of 186 479 tags assigned by users are also recorded. All of these tag activities are treated as a single kind of feedback in the experiments in order to verify the proposed methods in the circumstance of implicit feedback.

The recommendation results are evaluated by the F1 score for each user, which is often used in information retrieval and classification to measure performance. The F1 score is the harmonic mean of precision and recall.
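With precision and recall defined in the standard way over the recommended and relevant item sets, this is

precision = |prediction set ∩ reference set| / |prediction set|    (9)

recall = |prediction set ∩ reference set| / |reference set|    (10)

F1 = 2 × precision × recall / (precision + recall)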

where the prediction set is the set of items recommended to the user by the recommendation method, the reference set is the relevant item set containing all of the items in the testing data, and |·| is the number of elements in a set.

        5.2 Experiment settings

In the time heterogeneous feedback recommendation problem, recommendation results are generated according to the historical feedback activities and verified by comparing them with the feedback activities in the future. Therefore, the datasets are first divided into training data and testing data according to the time stamps of the feedback. For the Lastfm dataset, we pick the last 10 or 20 tag activities of each user as the testing data, and the rest are treated as the training data. For the Taobao 2014 dataset, we select the activities in the last 10 or 20 days as the testing data, and the rest are treated as the training data. In the experiments, the items in the testing data are set as the relevant items (the reference set in (9) and (10)). Then, we train TransRNNRec, TransGRURec and the compared methods. After that, we generate the recommendation item lists (the prediction set in (9) and (10)) using the trained methods. Finally, the F1 scores of the recommendation results obtained by each method are calculated.
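As a minimal sketch of the Lastfm-style split (holding out the last N activities of each user; the record layout below is hypothetical, and the Taobao 2014 split by days would filter on time stamps instead):

```python
from collections import defaultdict

def split_by_time(feedback, n_test=10):
    """Split each user's feedback into training and testing parts by time.

    feedback : list of (user, item, action, timestamp) tuples (hypothetical layout)
    n_test   : number of most recent activities per user held out for testing
    """
    per_user = defaultdict(list)
    for record in feedback:
        per_user[record[0]].append(record)

    train, test = [], []
    for user, records in per_user.items():
        records.sort(key=lambda r: r[3])   # sort each user's feedback by time stamp
        train.extend(records[:-n_test])    # older activities form the training data
        test.extend(records[-n_test:])     # the most recent activities form the testing data
    return train, test
```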

The proposed methods are verified in the cases of D = 16 and D = 32 in the experiments, where D is the dimension of the hidden layer.

The learning rate of the proposed methods is selected according to the experimental results in Section 5.4. The best learning rate of TransRNNRec and TransGRURec on the Taobao 2014 dataset is 0.001. The best learning rates of TransRNNRec and TransGRURec on the Lastfm dataset are 0.01 and 0.001, respectively. For convenience, the learning rate of the proposed methods is set to 0.001 on the Taobao 2014 dataset and 0.01 on the Lastfm dataset.

According to [19], the decay rate of RMSProp is typically set to 0.9 or 0.95. Thus, the decay rate is set to 0.95 on both datasets.

        The proposed methods are implemented in Python using Theano and accelerated by GPUs.

5.3 Comparison

The proposed methods are compared with four state-of-the-art recommendation methods, weighted regularized matrix factorization (WRMF) [21], Bayesian personalized ranking for matrix factorization (BPRMF) [22], weighted BPRMF [23] and SoftMarginRankingMF [24], on the Taobao 2014 dataset and the Lastfm dataset [20]. Three classical collaborative filtering methods, the user-based k-nearest neighbors recommendation method (KNN(user)), the item-based k-nearest neighbors recommendation method (KNN(item)) and the most popular item recommendation method (Most Popular), are taken as baselines in the experiments. The proposed methods are also compared with RNNRec and GRURec to illustrate the improvement brought by the user space transformation. The compared methods are briefly introduced as follows:

(i) KNN(user), the user-based k-nearest neighbors recommendation method;

(ii) KNN(item), the item-based k-nearest neighbors recommendation method;

(iii) Most Popular, which recommends the most popular items to the users;

(iv) WRMF [21], where the weighted sum of squares of the matrix factorization errors is minimized;

(v) BPRMF [22], where the objective function is the pairwise comparison error;

(vi) Weighted BPRMF [23], an extension of BPRMF, where a weight is given to each comparison pair;

(vii) SoftMarginRankingMF [24], an extension of maximum margin matrix factorization (MMMF) [25], where the objective function is the ordinal regression score;

(viii) RNNRec, a recurrent neural network based recommendation method without user space transformation;

(ix) GRURec, a GRU based recommendation method without user space transformation.

All of the compared methods except RNNRec and GRURec are implemented in MyMediaLite [26]. In the experiments, the parameters of the compared methods are set to the optimal values stated in the original literature. The parameters of RNNRec and GRURec are the same as those of TransRNNRec and TransGRURec. We train RNNRec, GRURec and the proposed methods through the BPTT algorithm with RMSProp. The number of unfolded steps of BPTT is set to 15 for these methods. The experiment is performed five times for each case. Means and standard deviations (in brackets) of F1@10 and F1@20 are calculated and listed in Tables 2–5 to make the comparison clear. The best results with statistical significance are shown in bold. In the tables, Test=10 means that the last 10 feedback activities of each user in the Lastfm dataset are selected as the testing data, or that the feedback activities in the last 10 days of each user in the Taobao 2014 dataset are selected as the testing data. The meaning of Test=20 is similar to that of Test=10.

Table 2 Comparison results on the Taobao 2014 dataset (Test=10)

On the Taobao 2014 dataset, as shown in Table 2 and Table 3, the proposed methods outperform the original methods in all cases. Compared with RNNRec and GRURec, TransRNNRec and TransGRURec improve the F1 scores by about 20% on average, respectively. The main reason is that there is no user space transformation part in the models of RNNRec and GRURec, while the user action and item information are transformed into the user space in the proposed methods, which improves the recommendation accuracy.

Table 3 Comparison results on the Taobao 2014 dataset (Test=20)

Table 4 Comparison results on the Lastfm dataset (Test=10)

On the Lastfm dataset, TransRNNRec obtains more accurate results than RNNRec in all cases. The F1 scores obtained by TransRNNRec are about 30% higher than those obtained by RNNRec on average. The reason is the same as that on the Taobao 2014 dataset: the user action and item information are transformed into the user space in the user space transformation part of TransRNNRec, and this improves the recommendation accuracy. However, TransGRURec does not outperform GRURec on the Lastfm dataset. The possible reason is that there is only one kind of feedback in the Lastfm dataset, so the advantage of the user space transformation may not be obvious in this situation. Furthermore, the Lastfm dataset is sparser than the Taobao 2014 dataset, and the model of TransGRURec is more complex than that of GRURec, so more training data are needed for TransGRURec to obtain better results.

Table 5 Comparison results on the Lastfm dataset (Test=20)

In all cases, not only RNNRec and GRURec but also the proposed methods outperform the other compared methods. This indicates that recurrent neural network based models are more suitable for the time heterogeneous feedback recommendation problem than the traditional methods, including the matrix factorization based methods such as WRMF, BPRMF, weighted BPRMF and SoftMarginRankingMF.

The results obtained by the proposed methods in the cases of D = 32 are better than those in the cases of D = 16. This indicates that a higher hidden layer dimension can improve the accuracy of the recommendation results of the proposed methods. However, training the models with a higher hidden layer dimension needs more computing time. A suitable hidden layer dimension should be selected by trading off accuracy against training time according to the application scenario.

5.4 Convergence analysis

To demonstrate the convergence of the proposed methods, we plot the curves of the testing F1@10 and F1@20 during the training process at different learning rates in Fig. 4 and Fig. 5. The learning rates are set to 0.01, 0.005 and 0.001 on the Taobao 2014 dataset and the Lastfm dataset.

Fig. 4 Testing F1 scores on the Taobao 2014 dataset during the training of TransRNNRec and TransGRURec

Fig. 5 Testing F1 scores on the Lastfm dataset during the training of TransRNNRec and TransGRURec

From Fig. 4(a)–Fig. 4(d) and Fig. 5(a)–Fig. 5(d), it can be observed that the convergence speed of TransRNNRec at a lower learning rate is slower than that at a higher learning rate. On the Taobao 2014 dataset, the training of TransRNNRec converges within about 20 epochs, and the best result is obtained when the learning rate is set to 0.001. When the learning rate is high (0.01 and 0.005), the curves of the F1 score fluctuate up and down in a narrow range after several epochs, while in the case of lower learning rates, the curves of the F1 score are relatively smooth after convergence. This indicates that a lower learning rate can avoid fluctuations during training. On the Lastfm dataset, the best learning rate of TransRNNRec is 0.01, and the training of TransRNNRec converges within 40 epochs at this learning rate.

According to the curves in Fig. 4(e)–Fig. 4(h) and Fig. 5(e)–Fig. 5(h), the best learning rate of TransGRURec on the Taobao 2014 dataset is 0.001. TransGRURec converges within 20 epochs on the Taobao 2014 dataset, and its convergence process on this dataset is very similar to that of TransRNNRec. The best learning rate of TransGRURec on the Lastfm dataset is 0.001. The convergence of TransGRURec at lower learning rates (0.005 and 0.001) on this dataset is very slow; in particular, at the learning rate of 0.001, more than 80 epochs are needed for convergence. The reason is that the Lastfm dataset is sparser than the Taobao 2014 dataset, and using more training samples can speed up the convergence.

6. Conclusions and future work

In this paper, we modify the models of RNNRec and GRURec and propose TransRNNRec and TransGRURec, in which the action vectors and the item vectors are transformed into the user space before entering the recurrent neural network. As a result, the transformed action vectors and item vectors represent the user specific meaning of actions and the user preference for items. This advantage makes the proposed methods outperform the original methods and other state-of-the-art recommendation methods.

It has been reported that external knowledge can help to improve recommendation accuracy. For example, Oramas et al. [27] linked tags and textual descriptions in a music recommender system to an external knowledge graph and obtained graph based item features for recommendation. One future research direction is to combine the information in the recommender system with external knowledge to generate more reasonable recommendations. External knowledge can also be used to generate recommendation reasons, which is another valuable research direction.

Another possible research direction is to improve the training sample utilization efficiency of recurrent neural network based recommendation. Recurrent neural network based recommendation models are more complex than traditional recommendation models, and more training samples are needed for these models to obtain satisfactory results. Improving the training sample utilization efficiency of these models can make them more widely applicable.
