Yuwan Ding, Hongwei Liu and Xiaojun Ma
(School of Mathematics and Statistics, Xidian University, Xi’an 710126, China)
Abstract: In order to solve variational inequality problems with pseudomonotone and Lipschitz continuous mappings in Hilbert spaces, an inertial subgradient extragradient algorithm with non-monotone stepsizes is proposed. Moreover, weak convergence and R-linear convergence analyses of the algorithm are established under appropriate assumptions. Finally, the efficiency of the proposed algorithm is demonstrated through numerical implementations.
Keywords: variational inequality; extragradient method; pseudomonotonicity; Lipschitz continuity; weak and linear convergence
Let H be a real Hilbert space with inner product 〈·,·〉 and norm ‖·‖, and let C ⊆ H be a nonempty closed convex set. In this work, the discussion is mainly focused on the variational inequality problem (VIP) represented in the form:
Seek s*∈C such that
〈F(s*), ρ-s*〉 ≥ 0, ∀ρ∈C
(1)
where F: H→H is a mapping. The symbol Ω denotes the solution set of VIP (1). Here, the following presumptions hold:
(C1) The solution set Ω is nonempty, i.e., Ω≠∅.
(C2) The mapping F is pseudomonotone, i.e.,
〈F(c), ρ-c〉 ≥ 0 ⇒ 〈F(ρ), ρ-c〉 ≥ 0, ∀c,ρ∈H
(a simple example is given after (C4));
(C2') The mapping F is β-strongly pseudomonotone, i.e., there exists β>0 satisfying
〈F(c), ρ-c〉 ≥ 0 ⇒ 〈F(ρ), ρ-c〉 ≥ β‖ρ-c‖², ∀c,ρ∈H
(C3) The mapping F is Lipschitz continuous, i.e.,
‖F(c)-F(ρ)‖ ≤ L‖c-ρ‖, ∀c,ρ∈H
where L>0 is a Lipschitz constant.
(C4) The feasible set C is closed, convex and nonempty.
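For instance, any mapping F: R→R that is positive everywhere is pseudomonotone on C = R, since 〈F(c), ρ-c〉 ≥ 0 forces ρ ≥ c and then 〈F(ρ), ρ-c〉 ≥ 0 as well; taking F(x) = 1/(1+x²) gives a pseudomonotone mapping that is not monotone, because (F(1)-F(0))(1-0) = -1/2 < 0. This illustrative example, not drawn from the cited references, shows that presumption (C2) is strictly weaker than monotonicity.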
The symbols ⇀ and → express weak convergence and strong convergence, respectively. PC: H→C denotes the metric projection (see the definition in Section 1). It is well known that the fixed point problem below and VIP (1) are equivalent:
Seek s*∈C such that
s* = PC(s*-τF(s*))
(2)
where τ>0.
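Based on equivalence (2), the quantity ‖s - PC(s-τF(s))‖ is a natural residual that vanishes exactly at solutions of VIP (1); it is also the quantity behind the stopping test dn = yn used later in Algorithm 1. A minimal Python sketch, with a hypothetical feasible set C = R^m_+ and a toy mapping F chosen only for illustration, is given below.

import numpy as np

def natural_residual(F, proj_C, s, tau=1.0):
    # ||s - P_C(s - tau*F(s))||: zero exactly when s solves VIP (1), by Eq.(2)
    return np.linalg.norm(s - proj_C(s - tau * F(s)))

# illustration only: C = R^m_+ (nonnegative orthant) and F(x) = x - 1
proj_C = lambda x: np.maximum(x, 0.0)
F = lambda x: x - 1.0
print(natural_residual(F, proj_C, np.array([1.0, 1.0])))  # prints 0.0: s = (1, 1) solves this VIP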
VIP (1), developed by Fichera[1-2] and Stampacchia[3], is of great importance in the field of applied mathematics, and it serves as a useful tool for the study of complementarity problems, transportation, network equilibrium problems and many more[4-6]. Because of this role, scholars have concentrated their attention on exploring and computing its approximate solutions, and numerous projection-like methods have been suggested to deal with VIP (1) and its associated optimization problems (refer to Refs.[7-20]).
To be specific, the original approach to solving VIP (1) is the projected gradient method, whose numerical advantage is that only one projection onto C is performed per iteration. For the convergence proof, however, F must be strongly (or inverse strongly) monotone. To weaken this strong condition, Korpelevich[21] and Antipin[22] presented the extragradient method, which requires two evaluations of F and two projections onto C per iteration. However, the complex form of C in concrete applications leads to a slow convergence rate. To improve its numerical efficiency, Censor et al.[23] proposed the subgradient extragradient method by improving Korpelevich's extragradient method to solve VIP (1) in Hilbert space. The second projection of the method is a specific subgradient projection which can be readily computed. They also established its weak convergence under the monotone and Lipschitz continuous assumptions on F. The subgradient extragradient method, due to its simplicity and feasibility, has been extensively researched and extended by many scholars (refer to Refs.[24-31] and the references therein). Inspired by the work of Censor[23], Dong et al.[24] embedded the projection and contraction method[32-33] into the subgradient extragradient method and proposed a modified subgradient extragradient method to solve VIP (1) via the following formula
yn = PC(cn-τnF(cn)),
cn+1 = PTn(cn-ΛnτnF(yn))
(3)
where
Tn := {ξ∈H | 〈yn-ξ, cn-τnF(cn)-yn〉 ≥ 0},
Λn = 〈cn-yn, Θn〉/‖Θn‖², Θn = cn-yn-τn(F(cn)-F(yn))
(4)
and the stepsize τn is selected as the largest τ∈{σ, σl, σl², …} fulfilling
τ‖F(cn)-F(PC(cn-τF(cn)))‖ ≤ ν‖cn-PC(cn-τF(cn))‖, ν∈(0,1)
(5)
They proved that algorithm (3) is weakly convergent when F is Lipschitz continuous and monotone. Note that algorithm (3) runs an Armijo-like line search rule to find a proper stepsize in each iteration, which leads to additional computational costs.
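To make this extra cost concrete, the following Python sketch implements an Armijo-like backtracking of the type described above; the acceptance parameter ν∈(0,1), the default values and the exact acceptance test are illustrative assumptions. Each rejected trial stepsize requires one more projection onto C and one more evaluation of F.

import numpy as np

def armijo_stepsize(F, proj_C, c, sigma=1.0, l=0.5, nu=0.9, max_backtracks=50):
    # Largest tau in {sigma, sigma*l, sigma*l^2, ...} with
    # tau*||F(c) - F(y)|| <= nu*||c - y||, where y = P_C(c - tau*F(c)).
    Fc = F(c)
    tau = sigma
    for _ in range(max_backtracks):
        y = proj_C(c - tau * Fc)                      # one projection per trial
        if tau * np.linalg.norm(Fc - F(y)) <= nu * np.linalg.norm(c - y):
            return tau, y
        tau *= l                                      # shrink and try again
    return tau, proj_C(c - tau * Fc)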
The inertial extrapolation technique was adopted by Nesterov[34] to accelerate the convergence rate of the heavy ball method[35]. Motivated by the inertial idea, Alvarez and Attouch[36] offered an inertial proximal point algorithm for maximal monotone operators. Recently, it has been employed by many researchers to speed up the extragradient method for VIP (1)[25-31, 37-39]. However, the weak convergence of the algorithm[24] with inertial techniques has yet to be considered. So, a natural question emerges as follows.
Is it possible to obtain a new modification of the subgradient extragradient method[24] such that a weak convergence theorem and numerical improvement can be gained under much weaker conditions than monotonicity and sequential weak continuity of the mapping F?
In response to the above question, the concrete contributions of this work are the following:
Add an inertial effect to the modified subgradient extragradient method[24] to accelerate the convergence, inspired by some excellent works[20, 26, 34, 36, 39];
Introduce a new stepsize different from that in Refs.[14, 17, 40] and overcome the drawback of additional projections onto C per iteration, which lowers the computational costs of the algorithm;
Present an inertial subgradient extragradient method whose weak convergence does not require monotonicity and sequential weak continuity of the cost mapping F, compared with the work by Thong et al.[18-19, 29] and by Cai et al.[20];
Ultimately, some numerical computations are presented to demonstrate the effectiveness of the newly proposed algorithm.
This article is organized as follows. Several fundamental lemmas and concepts applied in the subsequent sections are introduced in Section 1. The weak convergence theorem of the newly proposed algorithm is established in Section 2, and the R-linear convergence rate is obtained in Section 3. Numerical implementations and the corresponding results are presented in Section 4, and a brief summary is given in Section 5.
Suppose that H is a real Hilbert space and C ⊆ H is a nonempty closed convex set. PC: H→C is called the metric projection: for every point c∈H,
PC(c) := argmin{‖c-ρ‖ | ρ∈C}
The projection of ζ onto a half-space T = {u∈H: 〈v, u-c〉 ≤ 0} is computed by
PT(ζ) = ζ - max{0, 〈v, ζ-c〉/‖v‖²}v
(6)
where c∈H, v∈H and v≠0[41].
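Since the second projection in the subgradient extragradient framework is exactly a projection onto such a half-space, Eq.(6) can be coded in closed form; a minimal Python sketch (the function name is chosen here only for illustration) is:

import numpy as np

def proj_halfspace(zeta, v, c):
    # Projection onto T = {u : <v, u - c> <= 0} with v != 0, following Eq.(6):
    # P_T(zeta) = zeta - max{0, <v, zeta - c>/||v||^2} * v
    shift = np.dot(v, zeta - c)
    return zeta - max(0.0, shift / np.dot(v, v)) * v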
Lemma 1[11,42] For each ξ, ρ, η∈H,
(i) 〈ξ-ρ, ξ-η〉 = 0.5‖ξ-ρ‖² + 0.5‖ξ-η‖² - 0.5‖ρ-η‖²
(ii) ‖φξ+(1-φ)ρ‖² = φ‖ξ‖² + (1-φ)‖ρ‖² - φ(1-φ)‖ξ-ρ‖², ∀φ∈R
Lemma 2[43] Let ξ∈H, then
(i) 〈ξ-PC(ξ), ρ-PC(ξ)〉 ≤ 0, ∀ρ∈C
(ii) ‖PC(ξ)-PC(ρ)‖² ≤ 〈PC(ξ)-PC(ρ), ξ-ρ〉, ∀ρ∈H
Lemma 3[44] Presume that C ⊆ H is nonempty, closed and convex. Let F: C→H be pseudomonotone and continuous. Then the following equivalence holds:
s*∈Ω ⟺ 〈F(t), t-s*〉 ≥ 0, ∀t∈C
Lemma 4[45] Assume that H1 and H2 are two real Hilbert spaces. If the mapping F: H1→H2 is uniformly continuous on a bounded set D ⊆ H1, then F(D) is bounded.
Lemma 5[46-47] Presume that {ξn} is a sequence of nonnegative numbers satisfying ξn+1 ≤ qnξn + wn, ∀n≥1, where {qn} ⊂ [1,+∞) with Σ(qn-1) < +∞ and {wn} ⊂ [0,+∞) with Σwn < +∞. Then lim ξn exists.
Lemma 6[48] Presume that {bn} ⊂ [0,+∞) and {wn} ⊂ [0,+∞) are sequences fulfilling:
(i) bn+1 ≤ bn + Δn(bn-bn-1) + wn, ∀n≥1;
(ii) Σwn < +∞;
(iii) {Δn} ⊂ [0,ϑ], where ϑ∈[0,1).
Then {bn} is convergent and Σ[bn+1-bn]+ < +∞, where [t]+ = max{t,0}.
Lemma 7[42,48] Presume that C ⊆ H is a nonempty set and {cn} is a sequence in H fulfilling:
(i) for every c∈C, lim‖cn-c‖ exists;
(ii) any sequential weak cluster point of {cn} belongs to C.
Then {cn} converges weakly to a point in C.
In this section, inertial effects and new stepsizes are added to the subgradient extragradient algorithm to solve VIP (1), and its weak convergence is established under the weaker conditions that F is pseudomonotone and uniformly continuous, compared with the work in Ref.[24]. Next, some useful conditions are stated as follows.
(C5) The operator F: H→H fulfills the following property[25]: whenever {cn} ⊂ C and cn ⇀ c, one has ‖F(c)‖ ≤ liminf‖F(cn)‖.
Lemma 8 Suppose that the sequence {cn} is created by Algorithm 1 and that Σψn‖cn-cn-1‖² < +∞. Then the sequence {cn} is bounded.
Algorithm 1
Step 0 Take τ1 > 0, δ∈(0,χ) ⊂ (0,1) and {ψn} ⊂ [0,ϑ) ⊂ [0,1). Adopt the sequence {qn} satisfying Lemma 5. Let c0, c1∈H be initial points.
Step 1 Given cn-1, cn (n≥1), compute
dn = ψn(cn-cn-1) + cn, yn = PC(dn-τnF(dn))
(7)
and update
τn+1 = min{δ‖dn-yn‖/‖F(dn)-F(yn)‖, qnτn} if F(dn)≠F(yn); τn+1 = qnτn + ω̄n otherwise
(8)
If dn = yn, stop; dn is a solution of VIP (1). If not, go to Step 2.
Step 2 Compute
cn+1 = PTn(dn-ΛnτnF(yn))
(9)
where
Tn := {ξ∈H | 〈yn-ξ, dn-τnF(dn)-yn〉 ≥ 0},
Λn = 〈dn-yn, Θn〉/‖Θn‖², Θn = dn-yn-τn(F(dn)-F(yn))
(10)
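The following Python sketch mirrors the order of the computations in Eqs.(7)-(10) under simplifying illustrative choices (a constant inertial parameter ψ, qn ≡ 1, ω̄n ≡ 0 and a user-supplied projection onto C); it is only a sketch of Algorithm 1, not a definitive implementation.

import numpy as np

def inertial_subgradient_extragradient(F, proj_C, c0, c1, tau1=1.0, delta=0.4,
                                        psi=0.3, q=1.0, omega_bar=0.0,
                                        max_iter=1000, tol=1e-10):
    c_prev, c = np.asarray(c0, dtype=float), np.asarray(c1, dtype=float)
    tau = tau1
    for n in range(1, max_iter + 1):
        # Step 1: inertial point and projection step, Eq.(7)
        d = c + psi * (c - c_prev)
        Fd = F(d)
        y = proj_C(d - tau * Fd)
        if np.linalg.norm(d - y) <= tol:       # d_n = y_n: d_n solves VIP (1)
            return d
        Fy = F(y)
        # non-monotone stepsize update, Eq.(8)
        diff = np.linalg.norm(Fd - Fy)
        tau_next = min(delta * np.linalg.norm(d - y) / diff, q * tau) if diff > 0 \
            else q * tau + omega_bar
        # Step 2: Theta_n, Lambda_n and projection onto the half-space T_n, Eqs.(9)-(10)
        Theta = d - y - tau * (Fd - Fy)
        denom = np.dot(Theta, Theta)
        Lam = np.dot(d - y, Theta) / denom if denom > 0 else 0.0
        z = d - Lam * tau * Fy
        # T_n = {xi : <v, xi - y> <= 0} with v = d - tau*F(d) - y, projected in closed form
        v = d - tau * Fd - y
        vv = np.dot(v, v)
        c_next = z - max(0.0, np.dot(v, z - y) / vv) * v if vv > 0 else z
        c_prev, c, tau = c, c_next, tau_next
    return c

Here proj_C must be supplied in closed form (e.g., the projection onto a box, a ball or a half-space); all parameter defaults above are placeholders rather than tuned values.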
(11)
Remark 2 With relation (7), it can be easily found that if the proposed method terminates after finitely many iterations, i.e., dn = yn for some n, then dn∈Ω. Therefore, unless otherwise stated, it is assumed that Algorithm 1 iterates infinitely and generates an infinite sequence.
Proof In the situation of F(dn)≠F(yn), as F is L-Lipschitz continuous, it can be derived that
δ‖dn-yn‖/‖F(dn)-F(yn)‖ ≥ δ‖dn-yn‖/(L‖dn-yn‖) = δ/L
This further discloses that
τn+1 ≥ min{δ/L, qnτn} ≥ min{δ/L, τn}
Lemma 10 Suppose that presumptions (C1)-(C4) hold and the sequence {dn} is produced by Algorithm 1. Then, for any s∈Ω and all sufficiently large n, there is
‖cn+1-s‖² ≤ ‖dn-s‖² - ‖dn-cn+1-ΛnΘn‖² - ((1-χ)/(1+χ))²‖dn-yn‖²
Proof Since s∈Ω ⊂ C ⊂ Tn and cn+1 = PTn(dn-τnΛnF(yn)), Lemma 2 (i) yields ‖cn+1-s‖² ≤ 〈cn+1-s, dn-τnΛnF(yn)-s〉, that is,
2‖cn+1-s‖² ≤ 2〈cn+1-s, dn-τnΛnF(yn)-s〉 =
‖cn+1-s‖² + ‖dn-τnΛnF(yn)-s‖² - ‖cn+1-dn+τnΛnF(yn)‖² =
‖cn+1-s‖² + ‖dn-s‖² - ‖cn+1-dn‖² - 2〈cn+1-s, τnΛnF(yn)〉
After arrangement,
‖cn+1-s‖² ≤ ‖dn-s‖² - ‖cn+1-dn‖² - 2τnΛn〈cn+1-s, F(yn)〉
(12)
Using the pseudomonotonicity of F, yn∈C and s∈Ω, and applying Lemma 3 again, it can be seen that 〈F(yn), yn-s〉 ≥ 0, which further shows that 〈cn+1-s, F(yn)〉 ≥ 〈cn+1-yn, F(yn)〉.
Equivalently,
-2τnΛn〈cn+1-s, F(yn)〉 ≤ -2τnΛn〈cn+1-yn, F(yn)〉
(13)
Meanwhile, both the definition of Tn and cn+1∈Tn indicate that
〈cn+1-yn, dn-τnF(dn)-yn〉 ≤ 0
After rearrangement, there is
〈cn+1-yn, Θn〉 ≤ τn〈cn+1-yn, F(yn)〉
(14)
With relations (10), (13) and (14), it can be derived that
-2τnΛn〈cn+1-s, F(yn)〉 ≤ -2Λn〈cn+1-yn, Θn〉 =
-2Λn〈dn-yn, Θn〉 + 2Λn〈dn-cn+1, Θn〉
(15)
In response to the term 2Λn〈dn-cn+1, Θn〉 in Eq.(15), it can be estimated that
2Λn〈dn-cn+1, Θn〉 = ‖dn-cn+1‖² + Λn²‖Θn‖² - ‖dn-cn+1-ΛnΘn‖²
(16)
which comes from the usual relation 2cd = c² + d² - (c-d)². Combining Eqs.(12), (15) and (16), it shows that
‖cn+1-s‖² ≤ ‖dn-s‖² - ‖dn-cn+1-ΛnΘn‖² - Λn〈dn-yn, Θn〉
(17)
Moreover, by the definition of Θn, the Cauchy-Schwarz inequality and the stepsize rule (8), there is
〈dn-yn, Θn〉 = ‖dn-yn‖² - τn〈F(dn)-F(yn), dn-yn〉 ≥
‖dn-yn‖² - τn‖F(dn)-F(yn)‖·‖dn-yn‖ ≥
(1 - δτn/τn+1)‖dn-yn‖²
and
‖Θn‖ = ‖dn-yn-τn(F(dn)-F(yn))‖ ≥
‖dn-yn‖ - τn‖F(dn)-F(yn)‖ ≥
(1 - δτn/τn+1)‖dn-yn‖
Since lim τn exists and is positive, lim (1 - δτn/τn+1) = 1 - δ > 1 - χ, where δ∈(0,χ) ⊂ (0,1). Therefore, there exists N'≥1 such that 1 - δτn/τn+1 ≥ 1 - χ, ∀n≥N'. So, ∀n≥N',
〈dn-yn, Θn〉 ≥ (1-χ)‖dn-yn‖²
(18)
and
‖Θn‖ ≥ (1-χ)‖dn-yn‖
(19)
On the other hand,
‖Θn‖ = ‖dn-yn-τn(F(dn)-F(yn))‖ ≤
‖dn-yn‖ + τn‖F(dn)-F(yn)‖ ≤
(1 + δτn/τn+1)‖dn-yn‖
Since lim (1 + δτn/τn+1) = 1 + δ < 1 + χ, where δ∈(0,χ) ⊂ (0,1), there exists N''≥1 such that 1 + δτn/τn+1 ≤ 1 + χ, ∀n≥N''. Therefore, ∀n≥N'',
‖Θn‖ ≤ (1+χ)‖dn-yn‖
(20)
With Eqs.(18) and (20), it can be derived that ∀n≥N, where N = max{N', N''},
Λn = 〈dn-yn, Θn〉/‖Θn‖² ≥ (1-χ)/(1+χ)²
(21)
It verifies from Eq.(19) that ∀n≥N,
Λn ≤ ‖dn-yn‖/‖Θn‖ ≤ 1/(1-χ)
(22)
Also, by Eqs.(17), (18) and (21), it deduces that ∀n≥N,
‖cn+1-s‖² ≤ ‖dn-s‖² - ‖dn-cn+1-ΛnΘn‖² - ((1-χ)/(1+χ))²‖dn-yn‖²
(23)
Theorem 1 Assume that presumptions (C1)-(C5) are satisfied and Σψn‖cn-cn-1‖² < +∞. Then the sequence {cn} created by Algorithm 1 converges weakly to a point of Ω.
Proof Let s∈Ω. Employing the definition of dn, it implies that
‖dn-s‖² = ‖ψn(cn-cn-1)+cn-s‖² =
‖cn-s‖² + 2ψn〈cn-cn-1, cn-s〉 + ψn²‖cn-cn-1‖²
(24)
Applying Lemma 1 (i), there is
2〈cn-cn-1, cn-s〉 = ‖cn-cn-1‖² + ‖cn-s‖² - ‖cn-1-s‖²
This, along with Eq.(24) and ψn < 1, shows that
‖dn-s‖² ≤ ‖cn-s‖² + ψn(‖cn-s‖² - ‖cn-1-s‖²) + 2ψn‖cn-cn-1‖²
(25)
The combination of Eqs.(23) and (25) signifies that ∀n≥N,
‖cn+1-s‖² ≤ ‖cn-s‖² + ψn(‖cn-s‖² - ‖cn-1-s‖²) + 2ψn‖cn-cn-1‖² -
‖dn-cn+1-ΛnΘn‖² - ((1-χ)/(1+χ))²‖dn-yn‖² ≤
‖cn-s‖² + ψn(‖cn-s‖² - ‖cn-1-s‖²) + 2ψn‖cn-cn-1‖²
(26)
Use Lemma 6 with
bn = ‖cn-s‖², Δn = ψn
and
wn = 2ψn‖cn-cn-1‖²
where [t]+ = max{t,0}. Therefore, lim‖cn-s‖² exists and Σ[‖cn+1-s‖² - ‖cn-s‖²]+ < +∞.
Applying Eq.(26), it verifies that ∀n≥N,
‖dn-cn+1-ΛnΘn‖² + ((1-χ)/(1+χ))²‖dn-yn‖² ≤
‖cn-s‖² - ‖cn+1-s‖² + ψn(‖cn-s‖² - ‖cn-1-s‖²) + 2ψn‖cn-cn-1‖² ≤
‖cn-s‖² - ‖cn+1-s‖² + ψn[‖cn-s‖² - ‖cn-1-s‖²]+ + 2ψn‖cn-cn-1‖²
(27)
From Eqs.(7) and (11), it can be derived that ∀n≥N,
‖dn-cn‖ = ψn‖cn-cn-1‖
which corresponds to
lim‖dn-cn‖ = 0
(28)
This and Eq.(27) verify that
lim‖dn-cn+1-ΛnΘn‖ = 0 and lim‖dn-yn‖ = 0
(29)
With Eqs.(20), (22) and (27), it can be deduced that ∀n≥N,
‖dn-cn+1‖ ≤ ‖dn-cn+1-ΛnΘn‖ + ‖ΛnΘn‖ ≤
‖dn-cn+1-ΛnΘn‖ + ((1+χ)/(1-χ))‖dn-yn‖ → 0, n→∞
(30)
Invoking Eqs.(28) and (30), one can derive
‖cn+1-cn‖ ≤ ‖dn-cn+1‖ + ‖dn-cn‖ → 0, n→∞
(31)
Since {cn} is bounded by Lemma 8 and ‖dn-cn‖→0, the sequence {dn} is also bounded. Let z be a sequential weak cluster point of {cn} and let {dni} be a subsequence of {dn} such that dni ⇀ z. From yni = PC(dni-τniF(dni)) and Lemma 2 (i), there is
〈dni-τniF(dni)-yni, t-yni〉 ≤ 0, ∀t∈C
After arrangement, its version is that
(1/τni)〈dni-yni, t-yni〉 ≤ 〈F(dni), t-yni〉, ∀t∈C
Further, there is
(1/τni)〈dni-yni, t-yni〉 + 〈F(dni), yni-dni〉 ≤ 〈F(dni), t-dni〉, ∀t∈C
(32)
Letting i→∞ in Eq.(32), and using ‖dni-yni‖→0, the boundedness of {dni}, {yni} and {F(dni)}, and liminf τni > 0, it follows that
liminf〈F(dni), t-dni〉 ≥ 0, ∀t∈C
(33)
Also,
〈F(yni), t-yni〉 = 〈F(yni)-F(dni), t-dni〉 + 〈F(dni), t-dni〉 + 〈F(yni), dni-yni〉
(34)
From ‖dni-yni‖→0, the resulting sequence {yni} is bounded. Since F fulfills the Lipschitz condition on H, {F(yni)} is bounded. Thus, it can be observed that lim〈F(yni)-F(dni), t-dni〉 = 0 and lim〈F(yni), dni-yni〉 = 0. Along with Eqs.(33) and (34), it means that liminf〈F(yni), t-yni〉 ≥ 0. Hence, choosing a decreasing sequence {θi} of positive numbers with θi→0, for each i there exists the smallest Ni such that
〈F(ynk), t-ynk〉 + θi ≥ 0, ∀k≥Ni
(35)
As the sequence {θi} is decreasing, the sequence {Ni} is increasing. Moreover, as {yNi} ⊂ C, it may be assumed that F(yNi) ≠ 0 for each i≥0 (otherwise, yNi is a solution) and thus, setting
vNi = F(yNi)/‖F(yNi)‖²
〈F(yNi), vNi〉 = 1 can be deduced for each i≥0. Below, it can be inferred from Eq.(35) that, for each i≥0,
〈F(yNi), t+θivNi-yNi〉 ≥ 0
Since F is pseudomonotone, it means that
〈F(t+θivNi), t+θivNi-yNi〉 ≥ 0
This shows that
〈F(t), t-yNi〉 ≥ 〈F(t)-F(t+θivNi), t+θivNi-yNi〉 - θi〈F(t), vNi〉
(36)
Thus, for any t∈C, letting i→∞ in Eq.(36) and using θi→0, presumption (C5) and the uniform continuity of F, it can be deduced that 〈F(t), t-z〉 ≥ 0, where z∈C denotes the weak limit of {yni}. By Lemma 3, z∈Ω. Finally, since lim‖cn-s‖ exists for every s∈Ω and every sequential weak cluster point of {cn} belongs to Ω, Lemma 7 guarantees that {cn} converges weakly to a point of Ω. The proof is completed.
Remark 3
(1) It is remarkable that the weak convergence of Algorithm 1 is obtained under presumptions (C2) and (C5), which are much weaker than the hypotheses of monotonicity and sequential weak continuity of F employed in existing works[20, 24, 29, 49];
(2) The results obtained in this paper improve Theorem 3.1 in Ref.[18], Theorem 3.1 in Ref.[24], Theorem 3.1 in Ref.[29], Theorem 3.1 in Ref.[37] and Theorem 4.1 in Ref.[38], because the convergence is obtained without monotonicity and sequential weak continuity.
In this section, the linear convergence rate of the sequence {cn} created by Algorithm 1 is discussed.
Theorem 2 Suppose that presumptions (C1), (C2') and (C3) are satisfied. Let the inertial parameters fulfill ψ2n = 0 for all n and ψn ≤ (1-r)/(1+r), ∀n≥1, where r∈(0,1) is the constant in Eq.(39). Then {cn} converges at least R-linearly to the unique solution of VIP (1).
Proof Since the operator F is strongly pseudomonotone, VIP (1) has a unique solution, denoted by s. From the computation rule of yn and Lemma 2 (i), it signifies that
〈dn-yn, s-yn〉 ≤ τn〈F(dn), s-yn〉 =
τn〈F(dn)-F(yn), s-yn〉 + τn〈F(yn), s-yn〉 ≤
τn‖F(dn)-F(yn)‖·‖s-yn‖ - βτn‖s-yn‖² ≤
τnL‖dn-yn‖·‖s-yn‖ - βτn‖s-yn‖²
(37)
After its rearrangement, it can be derived that
βτn‖s-yn‖² ≤ 〈dn-yn, yn-s〉 + τnL‖dn-yn‖·‖s-yn‖ ≤ (1+τnL)‖dn-yn‖·‖s-yn‖
Hence
βτn‖s-yn‖ ≤ (1+τnL)‖dn-yn‖
This shows that
‖s-yn‖ ≤ ((1+τnL)/(βτn))‖dn-yn‖
(38)
Combining Lemma 10 and Eq.(38), there is, ∀n≥N,
‖cn+1-s‖² ≤ r²‖dn-s‖²
(39)
where r∈(0,1).
From the definition of dn and Lemma 1 (ii), it can be concluded that
‖d2n+1-s‖² = ‖ψ2n+1(c2n+1-c2n)+c2n+1-s‖² =
‖(1+ψ2n+1)(c2n+1-s)+(-ψ2n+1)(c2n-s)‖² =
(1+ψ2n+1)‖c2n+1-s‖² - ψ2n+1‖c2n-s‖² + ψ2n+1(1+ψ2n+1)‖c2n+1-c2n‖²
(40)
Combining Eq.(39) and the definition of dn (which gives d2n = c2n), it is obtained that ∀n≥N/2,
‖c2n+1-s‖² ≤ r²‖d2n-s‖² = r²‖c2n-s‖²
(41)
Let n := 2n+1 in Eq.(39); then, from Eqs.(40) and (41), there is, ∀n≥N/2,
‖c2n+2-s‖² ≤ r²‖d2n+1-s‖² =
r²[(1+ψ2n+1)‖c2n+1-s‖² - ψ2n+1‖c2n-s‖² + ψ2n+1(1+ψ2n+1)‖c2n+1-c2n‖²] ≤
r²[(1+ψ2n+1)r²‖c2n-s‖² - ψ2n+1‖c2n-s‖² + ψ2n+1(1+ψ2n+1)(‖c2n+1-s‖+‖c2n-s‖)²] ≤
r²[(1+ψ2n+1)r² - ψ2n+1 + ψ2n+1(1+ψ2n+1)(1+r)²]·‖c2n-s‖² ≤
r²‖c2n-s‖²
(42)
where the last inequality is due to ψn ≤ (1-r)/(1+r), ∀n≥1. By Eq.(42), there is, ∀n≥N,
‖c2n+2-s‖ ≤ r‖c2n-s‖ ≤ … ≤ r^(n-N+1)‖c2N-s‖
(43)
With Eqs.(41) and (43), it can be implied that ∀n≥N,
‖c2n+1-s‖ ≤ r‖c2n-s‖ ≤ … ≤ r^(n-N+1)‖c2N-s‖
(44)
Consequently, from Eqs.(43) and (44), it can be concluded that {cn} converges R-linearly to s.
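In other words, making the R-linear rate explicit, Eqs.(43) and (44) yield ‖ck-s‖ ≤ M(√r)^k for every k ≥ 2N+1 with M := r^(-N)‖c2N-s‖; that is, {cn} converges to s R-linearly with rate √r∈(0,1).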
In this section, several numerical implementations relative to the pseudomonotone VIP (1) are presented. The proposed Algorithm 1 (Alg. 1) is compared with some recent self-adaptive algorithms, such as Yao's Algorithm 1 (YISAlg. 1)[25], Reich's Algorithm 3 (SDPLAlg. 3)[26] and Thong's Algorithm 3.1 (TVRAlg. 3.1)[29]. The tests are performed in MATLAB R2020b on an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz with 4.00 GB RAM.
Example 1 The first classical example is taken from Refs.[26, 50-51]. The feasible set is C = R^m and F(x) = Ax, where A is an m×m square matrix whose entries are represented by
Table 1 Results in Example 1
Remark 4 According to the results in Table 1, it is easy to observe that Alg. 1 enjoys a faster convergence speed than YISAlg. 1 and SDPLAlg. 3 in terms of iteration number and CPU time. So, the proposed method in this paper is feasible and efficient.
For Example 2, the corresponding experimental results (execution time in seconds and number of iterations) are exhibited for different dimensions m. Table 2 records the experimental results.
Remark 5 As shown in Table 2, Alg. 1 works better than YISAlg. 1 and TVRAlg. 3.1. To be specific, the proposed algorithm in this paper needs less time and fewer iterations than the compared ones.
Table 2 Results in Example 2
In this article, a modified subgradient extragradient algorithm with inertial effects and non-monotone stepsizes is proposed and analyzed to solve variational inequality problems with pseudomonotonicity. Furthermore, its weak convergence under weaker presumptions is proved and the R-linear convergence rate is obtained. Finally, numerical experiments verify the correctness of the theoretical results.