
CLC number: O29   Document code: A   2024.011004

Improving the HMC sampling method by the Velocity Verlet integrator

        LI Wan-Ying, TANG Ya-Yong

        (School of Mathematics, Sichuan University, Chengdu 610064, China)

The Hamilton Monte Carlo (HMC) method is a fast sampling method. To sample from a Hamiltonian system, the HMC method uses the Leapfrog integrator. As a result, the iterates of the position and momentum variables may be unsynchronized in time, and the resulting estimation error can seriously damage the sampling efficiency and the stability of the sampling result. To solve this problem, we propose an improved HMC (IHMC) method that replaces the Leapfrog integrator with the Velocity Verlet integrator, so that both variables are computed at the same time point in each iteration. Two numerical examples are implemented to check the performance of the IHMC method: one estimates the parameters of the realized and stochastic volatility (RASV) model with asymmetric effects, and the other samples the variance Gamma distribution. The results show that the IHMC method achieves higher sampling efficiency and a more stable sampling result than the HMC method.

        Hamiltonian Monte Carlo method; Velocity Verlet integrator; RASV model; Variance Gamma distribution

        1 Introduction

The Hamilton Monte Carlo (HMC) method is a fast sampling method combining the Markov chain Monte Carlo (MCMC) method with Hamiltonian dynamics; to sample from a Hamiltonian system it uses the Leapfrog integrator. The method was first introduced in the late 1980s by Duane et al. [1] as the hybrid Monte Carlo method. The HMC method was mainly applied in physics until Neal [2] realized that it could also be applied to Bayesian neural networks. In 2011, Neal [3] provided a detailed introduction to the method, after which it became widely noticed and applied. Nowadays the HMC method has more and more applications in simulations of various chemical and physical systems and in statistical inference.


Compared with the MCMC method, the HMC method has higher efficiency and stability, but there is still room for improvement. Recently, new modified Hamiltonian Monte Carlo (MHMC) methods combined with several important sampling techniques have been proposed [4-6]. Notably, Radivojević et al. [4] used a splitting integrator instead of the Leapfrog integrator to obtain better conservation properties. However, with the Leapfrog integrator, unsynchronized values of the position and momentum variables of the Hamiltonian equation are possibly generated during the iterative process, and the resulting estimation error can seriously damage the sampling efficiency and the stability of the sampling result. In this paper, we use the Velocity Verlet integrator to improve the HMC method, since it calculates the position and momentum variables at the same time points with accuracy similar to that of the Leapfrog integrator.

        The rest of the paper is organized as follows. Section 2 and Section 3 describe the HMC method and the IHMC method. In Section 4, two numerical examples are used to show the performance of the IHMC method. Finally, we summarize this paper in Section 5.

        2 The HMC method

The HMC method combines the MCMC method with Hamiltonian dynamics. For computer simulation, the Hamiltonian system is first discretized using a small time step ε; the Leapfrog integrator then carries out the iterative sampling.

Hamiltonian dynamics operates on a d-dimensional position vector q and a d-dimensional momentum vector p, so the full state space is 2d-dimensional. The system is described by a function of q and p known as the Hamiltonian, H(q,p), written as

$$H(q,p) = U(q) + K(p),$$

where U(q) is called the potential energy, K(p) is called the kinetic energy, and the two functions are defined as

$$U(q) = -\ln f(q), \qquad K(p) = \frac{1}{2}\|p\|^2,$$

where f(q) is the probability density of q. The position and momentum variables evolve according to Hamilton's equations

$$\frac{\mathrm{d}q_t}{\mathrm{d}t} = \frac{\partial H}{\partial p_t} = p_t, \qquad \frac{\mathrm{d}p_t}{\mathrm{d}t} = -\frac{\partial H}{\partial q_t} = -\nabla U(q_t), \tag{1}$$

where t represents time; q_t and p_t can also be written as q(t) and p(t). The Leapfrog integrator has a local error of order ε³ and a global error of order ε². It guarantees the symplectic and volume-preservation properties by performing the following steps in each iteration:

$$p\!\left(t+\tfrac{\varepsilon}{2}\right) = p(t) - \frac{\varepsilon}{2}\nabla U(q(t)),$$
$$q(t+\varepsilon) = q(t) + \varepsilon\, p\!\left(t+\tfrac{\varepsilon}{2}\right),$$
$$p(t+\varepsilon) = p\!\left(t+\tfrac{\varepsilon}{2}\right) - \frac{\varepsilon}{2}\nabla U(q(t+\varepsilon)). \tag{2}$$

Step 1  Input the step size ε, the number of iterations L, and the initial value q_0 for q.

Step 2  Draw p from N(0, I).

Step 3  Simulate the discretization of the Hamiltonian dynamics by (2). Then we do

(i) a half-step update of p: p ← p − (ε/2)∇U(q);

(ii) alternating updates of q and p for L times: q ← q + εp, p ← p − ε∇U(q);

(iii) a half-step update of p: p ← p − (ε/2)∇U(q).

Step 4  Let (q, p) be the value taken before the update and (q̂, p̂) the value taken after the update. Calculate the acceptance probability

$$P = \exp\{-H(\hat q, \hat p) + H(q, p)\}.$$

Step 5  Draw u ~ Uniform[0, 1]; then set

$$q = \begin{cases} q, & \text{if } u \ge P,\\ \hat q, & \text{if } u < P.\end{cases}$$

When the HMC method runs, the momentum vector is first updated with a half step, then the position and momentum vectors are alternately updated a specified number of times with full steps, and finally the momentum vector is updated once more with a half step. The obtained vectors are therefore p(ε/2), q(ε), p(3ε/2), q(2ε), p(5ε/2), q(3ε), … in order. A noteworthy drawback of the HMC method is that the position and momentum vectors are not synchronized in time, which may introduce errors during iteration and affect the sampling efficiency and the stability of the sampling result.
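As a concrete illustration of Steps 1-5, the following is a minimal Python sketch of HMC with the Leapfrog integrator, applied to a standard normal target f(q), for which U(q) = q²/2 and ∇U(q) = q. The paper's own experiments are implemented in R; all function and variable names below are illustrative only.

```python
import numpy as np

def hmc_sample(grad_U, U, q0, eps=0.1, L=20, n_samples=1000, rng=None):
    """One chain of HMC using the Leapfrog updates of Eq. (2)."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.atleast_1d(np.asarray(q0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)            # Step 2: p ~ N(0, I)
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * eps * grad_U(q_new)          # half-step for p
        for i in range(L):
            q_new += eps * p_new                    # full step for q
            if i < L - 1:
                p_new -= eps * grad_U(q_new)        # full step for p
        p_new -= 0.5 * eps * grad_U(q_new)          # closing half-step for p
        # Step 4: acceptance probability from the Hamiltonian difference
        H_old = U(q) + 0.5 * p @ p
        H_new = U(q_new) + 0.5 * p_new @ p_new
        if rng.uniform() < np.exp(H_old - H_new):   # Step 5: accept/reject
            q = q_new
        samples.append(q.copy())
    return np.array(samples)

# Standard normal target: U(q) = q^2 / 2, grad U(q) = q
U = lambda q: 0.5 * float(q @ q)
grad_U = lambda q: q
draws = hmc_sample(grad_U, U, q0=[2.0], eps=0.2, L=15, n_samples=2000,
                   rng=np.random.default_rng(0))
```

For a stable step size the accepted draws should reproduce the N(0, 1) target closely, since the Metropolis correction in Step 4 removes the discretization bias.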

        3 The IHMC Method

        3.1 The Velocity Verlet integrator

To avoid these errors, we introduce the Velocity Verlet integrator [5]. Like the Leapfrog integrator, the Velocity Verlet integrator has a local error of order ε³ and a global error of order ε², and it preserves the symplectic structure and volume [6]; in addition, the momentum and position vectors are calculated at the same time points.

The target variable (q_t, p_t) discretized by the Velocity Verlet integrator with step size ε is the solution of the following differential system:

$$\frac{\mathrm{d}q_t}{\mathrm{d}t} = p_{\lfloor t\rfloor} - \frac{\varepsilon}{2}\nabla U(q_{\lfloor t\rfloor}), \qquad \frac{\mathrm{d}p_t}{\mathrm{d}t} = -\frac{1}{2}\big(\nabla U(q_{\lfloor t\rfloor}) + \nabla U(q_{\lceil t\rceil})\big), \tag{3}$$

where ⌊t⌋ = max{s ∈ εZ : s ≤ t} and ⌈t⌉ = min{s ∈ εZ : s ≥ t}. Compared with (1), the averaged gradient term in (3) reduces the error of the numerical integration. The Velocity Verlet integrator updates the position and momentum variables as follows:

$$q(t+2\varepsilon) = q(t) + 2\varepsilon p(t) - 2\varepsilon^2 \nabla U(q(t)),$$
$$p(t+\varepsilon) = p(t) - \frac{\varepsilon}{2}\big(\nabla U(q(t)) + \nabla U(q(t+2\varepsilon))\big),$$
$$q(t+\varepsilon) = q(t) + \varepsilon p(t+\varepsilon) - \varepsilon^2 \nabla U(q(t)). \tag{4}$$

Then, using (4) in place of (2), Step 3 in the HMC method is replaced by

Step 3*  Simulate the discretization of the Hamiltonian dynamics by (4). Then alternately update p and q for L times:

$$q_v \leftarrow q + 2\varepsilon p - 2\varepsilon^2 \nabla U(q),$$
$$p \leftarrow p - \frac{\varepsilon}{2}\big(\nabla U(q) + \nabla U(q_v)\big),$$
$$q \leftarrow q + \varepsilon p - \varepsilon^2 \nabla U(q).$$

Compared with (2), (4) requires more computation to obtain p(t+ε) and q(t+ε) for the same ε. However, (4) reduces the estimation errors and solves the time-synchronization problem, yielding higher stability. That is to say, good results can still be expected even for larger step sizes. We confirm this by simulations in the next section.
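A minimal sketch of the IHMC loop, i.e., the HMC algorithm with Step 3 replaced by synchronized integrator updates. For brevity the sketch uses the textbook Velocity Verlet update of Swope et al. [5], which already delivers q and p at the same time point t + ε; the three-line scheme (4), with its look-ahead position q_v, follows the same pattern. All names are illustrative; the paper's own implementation is in R.

```python
import numpy as np

def velocity_verlet_step(grad_U, q, p, eps):
    """Textbook Velocity Verlet: position and momentum are both advanced
    to time t + eps, so the two variables stay synchronized."""
    g = grad_U(q)
    q_new = q + eps * p - 0.5 * eps**2 * g
    p_new = p - 0.5 * eps * (g + grad_U(q_new))
    return q_new, p_new

def ihmc_sample(grad_U, U, q0, eps=0.1, L=20, n_samples=1000, rng=None):
    """HMC with Step 3 replaced by synchronized Velocity Verlet updates."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.atleast_1d(np.asarray(q0, dtype=float))
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(q.shape)
        q_new, p_new = q.copy(), p.copy()
        for _ in range(L):                   # Step 3*: L synchronized updates
            q_new, p_new = velocity_verlet_step(grad_U, q_new, p_new, eps)
        dH = (U(q) + 0.5 * p @ p) - (U(q_new) + 0.5 * p_new @ p_new)
        if rng.uniform() < np.exp(dH):       # Metropolis accept/reject
            q = q_new
        samples.append(q.copy())
    return np.array(samples)

# Standard normal target: U(q) = q^2 / 2, grad U(q) = q
draws = ihmc_sample(lambda q: q, lambda q: 0.5 * float(q @ q), q0=[2.0],
                    eps=0.2, L=15, n_samples=2000,
                    rng=np.random.default_rng(1))
```

Because each step reports q and p at the same time point, the accept/reject decision uses a Hamiltonian evaluated on a synchronized state, which is the property the IHMC method exploits.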

        3.2 Convergence of the IHMC method

To investigate the convergence of the IHMC method, suppose that U is a twice-differentiable function satisfying ‖∇U(x)‖ ≤ L‖x‖, which is easily satisfied by most functions. Consider T ∈ (0, ∞) and ε ∈ [0, 1] satisfying T/ε ∈ Z (ε ≠ 0), and assume that L(T² + εT) ≤ 1, which can be satisfied by choosing a suitable T for a given L. We have the following two theorems.

Theorem 3.1  For given initial values q_0, p_0 ∈ R^d, the following inequalities hold:

$$\max_{m\le T}\|q_m\| \le \frac{1}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0+p_0T\|\},$$
$$\max_{m\le T}\|p_m\| \le \|p_0\| + \frac{2}{T+\varepsilon}\max\{\|q_0\|, \|q_0+p_0T\|\}.$$

Proof  From (3), for all s ∈ [0, T] we have

$$\int_0^s \frac{\mathrm{d}}{\mathrm{d}t}q_t\,\mathrm{d}t = q_s - q_0 = \int_0^s p_{\lfloor r\rfloor}\,\mathrm{d}r - \frac{\varepsilon}{2}\int_0^s \nabla U(q_{\lfloor r\rfloor})\,\mathrm{d}r,$$

where

$$p_{\lfloor r\rfloor} = p_0 - \frac{1}{2}\int_0^{\lfloor r\rfloor}\big(\nabla U(q_{\lfloor u\rfloor}) + \nabla U(q_{\lceil u\rceil})\big)\,\mathrm{d}u,$$

with ⌊r⌋ = max{t ∈ εZ : t ≤ r} and ⌈r⌉ = min{t ∈ εZ : t ≥ r}. Combining the two equations gives

$$q_s = q_0 + p_0 s - \frac{1}{2}\int_0^s\!\!\int_0^{\lfloor r\rfloor}\big(\nabla U(q_{\lfloor u\rfloor}) + \nabla U(q_{\lceil u\rceil})\big)\,\mathrm{d}u\,\mathrm{d}r - \frac{\varepsilon}{2}\int_0^s \nabla U(q_{\lfloor r\rfloor})\,\mathrm{d}r. \tag{5}$$

Then we have

$$\begin{aligned}
\|q_s - q_0 - p_0 s\| &= \Big\|\frac{1}{2}\int_0^s\!\!\int_0^{\lfloor r\rfloor}\big(\nabla U(q_{\lfloor u\rfloor}) + \nabla U(q_{\lceil u\rceil})\big)\,\mathrm{d}u\,\mathrm{d}r + \frac{\varepsilon}{2}\int_0^s \nabla U(q_{\lfloor r\rfloor})\,\mathrm{d}r\Big\| \\
&\le \frac{1}{2}\int_0^s\!\!\int_0^{\lfloor r\rfloor}\big(\|\nabla U(q_{\lfloor u\rfloor})\| + \|\nabla U(q_{\lceil u\rceil})\|\big)\,\mathrm{d}u\,\mathrm{d}r + \frac{\varepsilon}{2}\int_0^s \|\nabla U(q_{\lfloor r\rfloor})\|\,\mathrm{d}r \\
&\le \frac{L}{2}\int_0^s\!\!\int_0^{\lfloor r\rfloor}\big(\|q_{\lfloor u\rfloor}\| + \|q_{\lceil u\rceil}\|\big)\,\mathrm{d}u\,\mathrm{d}r + \frac{\varepsilon L}{2}\int_0^s \|q_{\lfloor r\rfloor}\|\,\mathrm{d}r \\
&\le L\int_0^s\!\!\int_0^{r}\max_{m\in[0,T]}\|q_m\|\,\mathrm{d}u\,\mathrm{d}r + \frac{\varepsilon L}{2}\int_0^s \max_{m\in[0,T]}\|q_m\|\,\mathrm{d}r \\
&\le \frac{L}{2}(s^2 + \varepsilon s)\max_{m\in[0,T]}\|q_m\| \le \frac{L}{2}(T^2 + \varepsilon T)\max_{m\in[0,T]}\|q_m\|.
\end{aligned}$$

The term max_{m∈[0,T]}‖q_m‖ above can be bounded as

$$\max_{m\in[0,T]}\|q_m\| \le \max_{m\in[0,T]}\|q_m - q_0 - p_0 m\| + \max\{\|q_0\|, \|q_0 + p_0 T\|\}. \tag{6}$$

It further follows that

$$\max_{m\in[0,T]}\|q_m - q_0 - p_0 m\| \le \frac{\frac{L}{2}(T^2+\varepsilon T)}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0 + p_0 T\|\}. \tag{7}$$

Substituting (7) into (6) gives

$$\max_{m\in[0,T]}\|q_m\| \le \frac{1}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0 + p_0 T\|\}. \tag{8}$$

From (3), we have

$$p_s = p_0 - \frac{1}{2}\int_0^s \big(\nabla U(q_{\lfloor r\rfloor}) + \nabla U(q_{\lceil r\rceil})\big)\,\mathrm{d}r. \tag{9}$$

From (8) and (9), we get

$$\max_{m\in[0,T]}\|p_m\| \le \|p_0\| + \frac{LT}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0 + p_0 T\|\}.$$

Since L(T² + εT) ≤ 1, this reduces to

$$\max_{m\in[0,T]}\|p_m\| \le \|p_0\| + \frac{2}{T+\varepsilon}\max\{\|q_0\|, \|q_0 + p_0 T\|\}. \tag{10}$$

The proof is completed.

Theorem 3.2  For given initial values q_0, p_0 ∈ R^d, the following inequalities hold:

$$\max_{s,t\le T}\|q_t - q_s\| \le \|p_0\|T + \max\{\|q_0\|, \|q_0 + p_0 T\|\},$$
$$\max_{s,t\le T}\|p_t - p_s\| \le \frac{2}{T+\varepsilon}\max\{\|q_0\|, \|q_0 + p_0 T\|\}.$$

Proof  Without loss of generality, assume that 0 ≤ s < t ≤ T. From (5),

$$\begin{aligned}
\|q_t - q_s\| &\le \|p_0\|\,|t-s| + \frac{1}{2}\int_s^t\!\!\int_s^{\lfloor r\rfloor}\big(\|\nabla U(q_{\lfloor u\rfloor})\| + \|\nabla U(q_{\lceil u\rceil})\|\big)\,\mathrm{d}u\,\mathrm{d}r + \frac{\varepsilon}{2}\int_s^t \|\nabla U(q_{\lfloor r\rfloor})\|\,\mathrm{d}r \\
&\le \|p_0\|T + \frac{L}{2}\big((t-s)^2 + \varepsilon(t-s)\big)\max_{m\in[0,T]}\|q_m\| \\
&\le \|p_0\|T + \frac{L}{2}(T^2 + \varepsilon T)\cdot\frac{1}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0 + p_0 T\|\} \\
&\le \|p_0\|T + \max\{\|q_0\|, \|q_0 + p_0 T\|\},
\end{aligned}$$

i.e.,

$$\max_{s,t\le T}\|q_t - q_s\| \le \|p_0\|T + \max\{\|q_0\|, \|q_0 + p_0 T\|\}. \tag{11}$$

From (9) we have

$$\|p_t - p_s\| \le \frac{1}{2}\int_s^t \big(\|\nabla U(q_{\lfloor r\rfloor})\| + \|\nabla U(q_{\lceil r\rceil})\|\big)\,\mathrm{d}r \le L(t-s)\max_{m\in[0,T]}\|q_m\| \le \frac{LT}{1-\frac{L}{2}(T^2+\varepsilon T)}\max\{\|q_0\|, \|q_0 + p_0 T\|\} \le \frac{2}{T+\varepsilon}\max\{\|q_0\|, \|q_0 + p_0 T\|\},$$

i.e.,

$$\max_{s,t\le T}\|p_t - p_s\| \le \frac{2}{T+\varepsilon}\max\{\|q_0\|, \|q_0 + p_0 T\|\}, \tag{12}$$

as required. The proof is completed.

Remark 1  Theorem 3.1 tells us that over a time interval T, the values of the position variable q and the momentum variable p generated by the IHMC method are controlled by the corresponding constant-velocity motion (initial position q_0, initial velocity p_0). Furthermore, Theorem 3.2 tells us that in a chain generated by the IHMC method, the deviations of the variable values at all moments are restricted to a certain range (determined by the initial conditions q_0, p_0 and the upper time limit T). Thus the convergence of the IHMC method is established.

        4 Numerical examples

Example 4.1  Parameter estimation for the RASV model.

The stochastic volatility (SV) model is an essential tool for modeling the volatility of financial time series. It identifies the time-varying return volatility in financial markets by describing volatility as a random function. Nevertheless, the application of SV models is limited by the difficulty of evaluating their likelihood functions. To handle this problem, Bayesian approaches using the HMC method have been applied to infer the parameters of SV models accurately [7,8]. To compare the performance of the HMC and IHMC methods, we establish an SV model with realized volatility (RV) and an asymmetric effect, abbreviated as the RASV model [9], utilizing daily returns together with RV computed from intraday high-frequency returns.

Suppose that we have m intraday returns during day t, $\{r_{t,i}\}_{i=1}^m$, so that a simple RV estimator [10] is defined as the sum of squared returns, i.e., $\mathrm{RV}_t = \sum_{i=1}^m r_{t,i}^2$. The RASV model is described as

$$R_t = \sigma_\varepsilon e^{\frac{1}{2}\alpha_t}\varepsilon_t, \quad t = 1, \ldots, T,$$
$$Y_t = c + \alpha_t + \sigma_u u_t, \quad t = 1, \ldots, T,$$
$$\alpha_{t+1} = \varphi\alpha_t + \sigma_\eta\eta_t, \quad t = 1, \ldots, T-1,$$
$$\alpha_1 \sim N\!\left(0, \frac{\sigma_\eta^2}{1-\varphi^2}\right),$$
$$\begin{pmatrix}\varepsilon_t\\ u_t\\ \eta_t\end{pmatrix} \sim N\!\left(\begin{pmatrix}0\\0\\0\end{pmatrix}, \begin{pmatrix}1 & 0 & \rho\\ 0 & 1 & 0\\ \rho & 0 & 1\end{pmatrix}\right),$$

where R = {R_t}_{t=1}^T, Y = {Y_t}_{t=1}^T and α = {α_t}_{t=1}^T. Here R and Y are two observation vectors: R_t is the rate of return over a unit time period and Y_t is the logarithm of the RV estimator; α is the latent vector; N(·,·) represents a normal distribution. Let θ = (c, σ_u, σ_ε, φ, σ_η, ρ). The joint posterior density of the model can be written as

$$p(\theta, \alpha \mid R, Y) \propto \prod_{t=1}^{T} p(R_t \mid \sigma_\varepsilon, \alpha_t) \times \prod_{t=1}^{T} p(Y_t \mid c, \sigma_u) \times p(\theta) \times p(\alpha_1 \mid \varphi, \sigma_\eta) \times \prod_{t=2}^{T} p(\alpha_t \mid \alpha_{t-1}, \sigma_\varepsilon, \varphi, \sigma_\eta, \rho).$$

When estimating the RASV model, both the HMC method and the IHMC method are applied to the parameters α, φ, σ_ε, σ_η and ρ, while the parameters σ_u and c are sampled directly from their conditional posterior densities.
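The RV estimator defined above is straightforward to compute; a small sketch with hypothetical intraday returns (the numbers are illustrative, not taken from the Shanghai Composite data used below):

```python
import numpy as np

# Hypothetical intraday log-returns for one trading day (m = 5)
r_t = np.array([0.001, -0.002, 0.0015, -0.0005, 0.002])

rv_t = np.sum(r_t ** 2)   # RV_t: sum of squared intraday returns
y_t = np.log(rv_t)        # the RASV observation Y_t is built from log(RV_t)
```

In the application, this computation is repeated for every trading day to build the observation series Y.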

The data used in this example are 1-minute high-frequency data of the Shanghai Composite Index from January 4, 2021 to February 21, 2022. We take the opening and closing prices of each day to calculate the daily rate of return and obtain the series R = {R_t}_{t=1}^T. The RV estimator is calculated from the daily 1-minute high-frequency data, and its logarithm is taken to obtain the series Y = {Y_t}_{t=1}^T. We estimate the RASV model by running 25 000 iterations of the HMC and IHMC methods and discarding the first 5000 iterations as the burn-in period. The prior distributions required in the joint prior density are set as

$$\sigma_\varepsilon^2 \sim IG(5, 4), \quad \sigma_u^2 \sim IG(2.5, 0.025), \quad \sigma_\eta^2 \sim IG(2.5, 0.025),$$
$$\frac{\varphi+1}{2} \sim B(20, 1.5), \quad \frac{\rho+1}{2} \sim B(1.5, 5), \quad c \sim N(0, 10),$$

where B(·,·) and IG(·,·) represent the Beta and inverse Gamma distributions, respectively. These prior distributions ensure that all parameters make sense. In particular, the Beta priors for φ and ρ ensure that −1 < φ, ρ < 1. For the initial parameter values, we set

$$\sigma_\varepsilon^{(0)} = 1, \quad c^{(0)} = 0, \quad \sigma_u^{(0)} = 0.1, \quad \varphi^{(0)} = 0.95,$$
$$\sigma_\eta^{(0)} = 0.1, \quad \rho^{(0)} = -0.3, \quad \alpha^{(0)} = \{\alpha_t^{(0)}\}_{t=1}^{T},$$

where

$$\alpha_1^{(0)} \sim N\!\left(0, \frac{\sigma_\eta^2}{1-\varphi^2}\right), \qquad \alpha_i^{(0)} \sim N(\varphi\alpha_{i-1}^{(0)}, \sigma_\eta^2).$$

Tuning the parameters of the HMC method has been reported to be more difficult than tuning those of other MCMC methods, because the performance of HMC-based methods is highly sensitive to two user-specified parameters: the step size and the number of iterations. To compensate for the larger per-iteration computation of the IHMC method, we use a larger step size. In the following simulation, we set the iteration number to 100 for both the HMC and IHMC methods. For the parameter α, we set the step size to 0.005 for the HMC method and 0.01 for the IHMC method; for the parameters φ, σ_ε, σ_η and ρ, we set the step size to 0.006 for the HMC method and 0.012 for the IHMC method. The estimation is implemented in R.

Tab.1 presents the results obtained with the HMC method and the IHMC method. The inefficiency factor (IF) is defined as IF = 1 + 2∑_{j=1}^∞ ρ_j, where ρ_j is the autocorrelation at lag j; it illustrates the efficiency of the method and reflects the convergence rate. This quantity can be interpreted as the number of correlated samples with the same variance-reducing power as one independent sample. An IF value of 1 indicates that the draws are uncorrelated, while large values indicate slow convergence and bad mixing. Note that Nugroho and Morimoto [11] have shown that IFs are relatively insensitive to the initial values. It can be seen that the resulting parameter estimates are quite close. By comparison, the SD values and standard errors of the IHMC method are smaller, indicating that the variance of the sampling results obtained by the IHMC method fluctuates less and that the sampling results are more reliable. Meanwhile, the IFs of the IHMC method are all lower than those of the HMC method, indicating a faster convergence rate and a better mixing effect. The acceptance rate of the IHMC method is clearly higher than that of the HMC method and is very close to 1, which shows that the IHMC method has higher sampling efficiency.
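The infinite sum in the IF definition is truncated in practice; the sketch below uses the common heuristic of stopping at the first non-positive sample autocorrelation (not necessarily the truncation rule used for Tab.1):

```python
import numpy as np

def inefficiency_factor(x, max_lag=None):
    """Estimate IF = 1 + 2 * sum_j rho_j from a chain, truncating the sum
    at the first non-positive sample autocorrelation."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acov = np.correlate(x, x, mode="full")[n - 1:] / n   # autocovariances
    rho = acov / acov[0]                                  # autocorrelations
    max_lag = n - 1 if max_lag is None else max_lag
    total = 0.0
    for j in range(1, max_lag + 1):
        if rho[j] <= 0:
            break
        total += rho[j]
    return 1.0 + 2.0 * total

rng = np.random.default_rng(0)
iid = rng.standard_normal(5000)     # uncorrelated draws: IF near 1
ar = np.zeros(5000)                 # AR(1) chain: strongly autocorrelated
for t in range(1, 5000):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
```

For the AR(1) chain with coefficient 0.9 the theoretical IF is (1 + 0.9)/(1 − 0.9) = 19, so the estimator should report a far larger value for `ar` than for `iid`.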

The trace plots and autocorrelation plots of the posterior samples for the two methods are displayed in Figs.1 and 2, respectively. The trace plots show that the sampling result of the IHMC method is more stable. The autocorrelation plots exhibit a quick decay of the autocorrelation as the time lag between samples increases, indicating that the process is stationary. As observed in the autocorrelation plots, the autocorrelations of the two methods decay at a good pace and perform similarly. For both methods, the autocorrelation decay and acceptance rate of σ_ε are slower than those of the other parameters; this may be because the model is highly sensitive to σ_ε, which needs further study.

Example 4.2  Sampling the variance Gamma distribution.

Financial time series noise often exhibits spiky and heavy-tailed properties. We use a normal scale-mixture distribution, the variance Gamma distribution (VGD), to verify the effectiveness of the IHMC method for sampling distributions with these properties. The VGD was proposed by Seneta et al. [12,13] to describe stock market returns. The density of the VGD can be written as

$$\int_0^\infty N\!\left(x \mid \mu, \sigma^2\theta\right) IG\!\left(\theta \mid \frac{\nu}{2}, \frac{\nu}{2}\right)\mathrm{d}\theta.$$

That is to say, the VGD is equivalent to the following hierarchical form:

$$X \mid \mu, \sigma^2, \theta \sim N\!\left(x \mid \mu, \sigma^2\theta\right) \quad \text{and} \quad \theta \sim IG\!\left(\frac{\nu}{2}, \frac{\nu}{2}\right).$$

Set the parameters (μ, σ, ν) = (6, 2, 5) and the initial value x_0 = 2 to sample the distribution using the HMC method and the IHMC method. As in Example 4.1, we set the iteration number to 100 for both methods, with step size 0.25 for the HMC method and 0.5 for the IHMC method. The sampling results are shown in Tab.2 and Figs.3 and 4. It can be seen that the IHMC method is more stable and has a higher acceptance rate. Comparing the IF values and the autocorrelation plots in Fig.4 reveals that the IHMC method has a better convergence speed and mixing effect.
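For reference, draws from this hierarchical form can also be generated directly by compounding the two stages, which gives a ground-truth sample to compare the HMC/IHMC output against. The sketch below writes the mixing shape parameter as ν = 5, per the example's parameter setting, and obtains inverse-Gamma draws by inverting NumPy's Gamma sampler:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, nu = 6.0, 2.0, 5.0   # (mu, sigma, nu) = (6, 2, 5)

# Stage 1: theta ~ IG(nu/2, nu/2). If G ~ Gamma(shape=nu/2, rate=nu/2)
# then 1/G ~ IG(nu/2, nu/2); NumPy's scale is 1/rate, so scale = 2/nu.
theta = 1.0 / rng.gamma(shape=nu / 2, scale=2.0 / nu, size=100_000)

# Stage 2: X | theta ~ N(mu, sigma^2 * theta)
x = rng.normal(loc=mu, scale=sigma * np.sqrt(theta))
```

The mixture mean is μ = 6 and, since E[θ] = (ν/2)/(ν/2 − 1) = 5/3, the mixture variance is σ²·5/3 ≈ 6.67, which the empirical moments of x should approximate.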

        5 Conclusions

In this study, we have improved the HMC method by incorporating the Velocity Verlet integrator and proposed the IHMC method. Two numerical examples show that the sampling results of the IHMC method are more stable, with better SD, standard error, acceptance rate and IF values. We therefore conclude that the IHMC method has higher sampling efficiency and stability than the HMC method.

It would also be interesting to combine the Velocity Verlet integrator discussed in this paper with the existing MHMC methods, which is a major topic of our future work.

        References:

[1] Duane S, Kennedy A D, Pendleton B J, et al. Hybrid Monte Carlo [J]. Phys Lett B, 1987, 195: 216.

[2] Neal R M. Bayesian learning for neural networks [M]. New York: Springer, 1996.

[3] Neal R M. MCMC using Hamiltonian dynamics [C]// Handbook of Markov chain Monte Carlo. Boca Raton: Chapman & Hall/CRC Press, 2011.

[4] Radivojević T, Fernández-Pendás M, Sanz-Serna J M, et al. Multi-stage splitting integrators for sampling with modified Hamiltonian Monte Carlo methods [J]. J Comput Phys, 2018, 373: 900.

[5] Swope W C, Andersen H C, Berens P H, et al. A computer simulation method for the calculation of equilibrium constants for the formation of physical clusters of molecules: application to small water clusters [J]. J Chem Phys, 1982, 76: 637.

[6] Bou-Rabee N, Sanz-Serna J M. Geometric integrators and the Hamiltonian Monte Carlo method [J]. Acta Numer, 2018, 27: 113.

[7] Shephard N. Fitting nonlinear time-series models with applications to stochastic variance models [J]. J Appl Econ, 1993, 8: S135.

[8] Jacquier E, Polson N G, Rossi P E. Bayesian analysis of stochastic volatility models [J]. J Bus Econ Stat, 2002, 20: 69.

[9] Takahashi M, Omori Y, Watanabe T. Estimating stochastic volatility models using daily returns and realized volatility simultaneously [J]. Comput Stat Data An, 2009, 53: 2404.

[10] Takahashi M, Watanabe T, Omori Y. Volatility and quantile forecasts by realized stochastic volatility models with generalized hyperbolic distribution [J]. Int J Forecast, 2016, 32: 437.

[11] Nugroho D B, Morimoto T. Estimation of realized stochastic volatility models using Hamiltonian Monte Carlo-based methods [J]. Comput Stat, 2015, 30: 491.

[12] Madan D B, Seneta E. The variance gamma (VG) model for share market returns [J]. J Bus, 1990: 511.

[13] Seneta E. Fitting the variance-Gamma model to financial data [J]. J Appl Probab, 2004, 41: 177.

Received: 2022-10-17

About the author: LI Wan-Ying (1999-), from Linfen, Shanxi, is a Ph.D. candidate whose main research interest is financial mathematics.

Corresponding author: TANG Ya-Yong. E-mail: yayongtang@scu.edu.cn
