WANG Chun-feng, JIANG Yan, SHEN Pei-ping
(1. College of Mathematics and Information, Henan Normal University, Xinxiang 453007, China)
(2. Foundation Department, Zhengzhou Tourism College, Zhengzhou 450009, China)
This paper considers the following minimax linear fractional programming problem (MLFP):

$$\mathrm{(MLFP)}:\quad \min_{x \in D}\ \max_{1 \le i \le p}\ \frac{n_i(x)}{d_i(x)},$$

where p ≥ 2, A ∈ R^{m×n} and b ∈ R^m are arbitrary, n_i(x) and d_i(x) are affine functions, D = {x ∈ R^n | Ax ≤ b} is bounded with int D ≠ ∅, and for all x ∈ D, n_i(x) ≥ 0 and d_i(x) > 0, i = 1, ..., p. In fact, if the method of [1] is used to preprocess problem MLFP, we only need to require d_i(x) ≠ 0, i = 1, ..., p.
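For concreteness, the MLFP objective is simply the pointwise maximum of p linear fractional terms. A minimal sketch in Python (the data layout, with each affine function stored as a (coefficients, constant) pair, is an illustrative assumption rather than the paper's notation):

```python
def affine(f, x):
    # evaluate an affine function given as (coefficient list, constant)
    coeffs, const = f
    return sum(c * xi for c, xi in zip(coeffs, x)) + const

def mlfp_objective(x, numerators, denominators):
    # max_i n_i(x) / d_i(x); every d_i(x) is assumed positive on D
    return max(affine(n, x) / affine(d, x)
               for n, d in zip(numerators, denominators))
```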
As an important branch of nonlinear optimization, fractional programming has attracted the interest of practitioners and researchers since the 1970s, for two main reasons. One reason is that it frequently appears in various disciplines, including transportation planning [2, 3], government planning [4], finance and investment [5–7], and so on. The other reason is that fractional programming is NP-hard [8, 9]: it generally possesses multiple locally optimal solutions that are not globally optimal, so solving it is a great challenge, and effective methods are needed.
The problem MLFP is a special class of fractional programming, which has also attracted the interest of practitioners and researchers over the past years [10–16]. Many algorithms were proposed to solve problem MLFP, including a partial linearization algorithm [17], an interior point algorithm [18], a parametric programming method [19], a cutting plane algorithm [20], a monotonic optimization approach [21], branch and bound algorithms [1, 22, 23], and so on. In addition, some theoretical results about problem MLFP were obtained; readers can refer to [1].
The aim of this paper is to present a new branch and bound algorithm for solving problem MLFP. Compared with the other three branch and bound algorithms [1, 22, 23], our algorithm is easier to implement: to obtain a linear relaxation programming problem for MLFP, their procedures need two linear relaxation steps, whereas our method needs only one. Comparison results show that the performance of our algorithm is superior to the other three methods in most cases.
This paper is organized as follows. In Section 2, the new linear relaxation technique is presented, which is used to obtain the linear relaxation programming problem LRP for problem MLFP. In Section 3, the global optimization algorithm is described, and its convergence is established. In Section 4, numerical results are reported to show the feasibility and efficiency of our algorithm.
To solve problem MLFP, we first convert it into an equivalent problem (EP). After that, to generate the linear relaxation problem of EP, we present a new linear relaxation technique.
In order to derive the equivalent problem EP of MLFP, we first compute, for each j = 1, ..., n, the bounds l_j^0 = min_{x∈D} x_j and u_j^0 = max_{x∈D} x_j, and construct the initial rectangle X^0 = {x ∈ R^n | l_j^0 ≤ x_j ≤ u_j^0, j = 1, ..., n}. Then, by introducing a new variable t, we obtain the EP of problem MLFP as follows:

$$\mathrm{(EP)}:\quad \min\ t \quad \text{s.t.}\quad n_i(x) - t\,d_i(x) \le 0,\ i = 1, \cdots, p,\quad x \in D \cap X^0.$$
Theorem 1 If (x*, t*) is a global optimal solution of EP, then x* is also a global optimal solution of problem MLFP, and t* is the optimal value of both EP and MLFP.
Proof Readers can refer to [1].
By Theorem 1, in order to globally solve problem MLFP, we may instead globally solve EP. So, in the following, we only consider how to solve EP.
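The intuition behind Theorem 1 can be seen pointwise. Assuming the equivalent problem takes the standard form of minimizing t subject to n_i(x) − t·d_i(x) ≤ 0 for i = 1, ..., p, then for any fixed x with all d_i(x) > 0, the smallest feasible t is exactly max_i n_i(x)/d_i(x), so minimizing t over (x, t) recovers the minimax value. A quick numeric check of that pointwise fact:

```python
def min_feasible_t(n_vals, d_vals):
    # smallest t satisfying n_i - t * d_i <= 0 for all i, assuming
    # every d_i > 0; this equals the minimax ratio at the given point
    return max(n / d for n, d in zip(n_vals, d_vals))
```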
To solve EP, we present a branch and bound algorithm. A principal step in this algorithm is to construct a linear relaxation programming problem for EP, which provides a lower bound for the optimal value of EP over X^k ⊆ X^0.
Let X^k = {x | l ≤ x ≤ u} be the initial box X^0, or a modified box as defined for some partitioned subproblem in the branch and bound scheme. We will show how to construct the problem LRP for EP over X^k.
For convenience of expression, let
To derive the problem LRP of EP over X^k, we first consider the term
Let Φ_i(x, t) be as defined above; from (2), we have the following relation
By (3), the linear relaxation programming problem LRP can be established as follows
Let v(LRP) and v(EP) be the optimal values of problems LRP and EP, respectively. From the above discussion, we obviously have v(LRP) ≤ v(EP) over X^k.
Theorem 2 For all x ∈ X^k = [l, u], let Δx = u − l, and consider the function Φ_i(x, t) and its linear underestimator constructed above. Then we have
Proof From the definitions of Φ_i(x, t) and its linear underestimator, we have
From Theorem 2, it follows that the linear underestimator will approximate the function Φ_i(x, t) as Δx → 0.
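As one standard way to construct such a linear underestimator (a classical technique offered here for illustration, not necessarily the paper's formula (2)–(3)), the bilinear term t·d_i(x) can be bounded below by its McCormick envelope, given bounds t ∈ [t_l, t_u] and d_i(x) ∈ [d_l, d_u] over X^k:

```python
def mccormick_under(t, d, t_l, t_u, d_l, d_u):
    # the two McCormick affine underestimators of the product t * d
    # on the box [t_l, t_u] x [d_l, d_u]; their max never exceeds t * d
    # and is exact at the corners of the box
    return max(t_l * d + d_l * t - t_l * d_l,
               t_u * d + d_u * t - t_u * d_u)
```

Since t·d − (t_l·d + d_l·t − t_l·d_l) = (t − t_l)(d − d_l) ≤ (t_u − t_l)(d − d_l), the underestimation gap vanishes as the range of d_i(x) shrinks, i.e. as Δx → 0, which matches the approximation behavior stated in Theorem 2.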
In this section, based on the foregoing results, we present the branch and bound algorithm for solving EP.
During each iteration of the algorithm, the branching process generates a more refined partition that cannot yet be excluded from further consideration in the search for a global optimal solution of EP; this is a critical element in guaranteeing convergence. This paper chooses a simple and standard bisection rule, which is sufficient to ensure convergence, since it drives the intervals to shrink to a singleton for all variables along any infinite branch of the branch and bound tree.
Consider a rectangle X = {x ∈ R^n | l_j ≤ x_j ≤ u_j, j = 1, ..., n} ⊆ X^0 associated with a node subproblem. The branching rule is described as follows:
(i) let k = argmax{u_j − l_j | j = 1, ..., n};
(ii) let τ = (l_k + u_k)/2;
(iii) let X1 = {x ∈ R^n | l_j ≤ x_j ≤ u_j, j ≠ k, l_k ≤ x_k ≤ τ} and X2 = {x ∈ R^n | l_j ≤ x_j ≤ u_j, j ≠ k, τ ≤ x_k ≤ u_k}.
Through this branching rule, the rectangle X is partitioned into two subrectangles X1 and X2.
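Steps (i)–(iii) translate directly into code. A minimal sketch, representing a rectangle by its bound vectors l and u:

```python
def bisect(l, u):
    # (i) pick the coordinate with the longest edge
    k = max(range(len(l)), key=lambda j: u[j] - l[j])
    # (ii) split that edge at its midpoint
    tau = (l[k] + u[k]) / 2.0
    # (iii) form the two subrectangles X1 and X2
    u1 = u[:]
    u1[k] = tau
    l2 = l[:]
    l2[k] = tau
    return (l[:], u1), (l2, u[:])
```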
Based upon the results and operations given above,this subsection summarizes the basic steps of the proposed algorithm.
Let LB(X^k) and (x(X^k), t(X^k)) be the optimal value of problem LRP over the subrectangle X^k and an element of the corresponding argmin, respectively.
Algorithm statement
Step 1 Set the convergence tolerance ε ≥ 0, the feasible error ε_1 ≥ 0, the upper bound UB_0 = +∞, and the set of feasible points F = ∅.
Find LB_0 = LB(X^0) and (x(X^0), t(X^0)) by solving problem LRP over X^0. With the feasible error ε_1, if (x(X^0), t(X^0)) is feasible to EP, set (x^0, t^0) = (x(X^0), t(X^0)), F = F ∪ {(x^0, t^0)}, and update UB_0. If UB_0 − LB_0 ≤ ε, then stop: (x^0, t^0) is an ε-optimal solution of EP. Otherwise, set Q_0 = {X^0}, k = 1, and go to Step 2.
Step 2 Set UB_k = UB_{k−1}. Subdivide X^{k−1} into two subrectangles X^{k,1}, X^{k,2} via the branching rule described above.
Step 3 For each t = 1, 2, find LB(X^{k,t}) and (x(X^{k,t}), t(X^{k,t})) by solving problem LRP over X^{k,t}. With the feasible error ε_1, if (x(X^{k,t}), t(X^{k,t})) is feasible to EP, set F = F ∪ {(x(X^{k,t}), t(X^{k,t}))} and update UB_k accordingly. If UB_k = t(X^{k,t}), set (x^k, t^k) = (x(X^{k,t}), t(X^{k,t})).
Step 4 Set Q_k = (Q_{k−1} \ {X^{k−1}}) ∪ {X^{k,1}, X^{k,2}}, and delete from Q_k all subrectangles X with UB_k − LB(X) ≤ ε.
Step 5 Set LB_k = min{LB(X) | X ∈ Q_k}, and let X^k be the subrectangle satisfying LB_k = LB(X^k). If UB_k − LB_k ≤ ε, then stop: (x^k, t^k) is a global ε-optimal solution of problem EP. Otherwise, set k = k + 1 and go to Step 2.
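To illustrate how Steps 1–5 interact, here is a self-contained one-dimensional toy: minimizing max((x+1)/(x+3), (2−x)/(x+2)) over [0, 2] by branch and bound. The lower bound below exploits the endpoint monotonicity of a univariate linear fractional function rather than the paper's LRP, so this is only a structural sketch of the scheme, not the proposed algorithm:

```python
import heapq

def ratios(x):
    # two toy linear fractional terms, both with positive denominators
    return [(x + 1.0) / (x + 3.0), (2.0 - x) / (x + 2.0)]

def objective(x):
    return max(ratios(x))

def lower_bound(l, u):
    # a 1-D linear fractional term is monotone, so its minimum over
    # [l, u] sits at an endpoint; the max of those minima bounds the
    # minimax objective from below over the box (stand-in for LB(X))
    return max(min(rl, ru) for rl, ru in zip(ratios(l), ratios(u)))

def branch_and_bound(l0, u0, eps=1e-6):
    mid = (l0 + u0) / 2.0
    ub, best_x = objective(mid), mid            # Step 1: initial bounds
    heap = [(lower_bound(l0, u0), l0, u0)]      # active node list Q
    while heap:
        lb, l, u = heapq.heappop(heap)          # Step 5: smallest LB
        if ub - lb <= eps:                      # eps-optimality test
            break
        tau = (l + u) / 2.0                     # Step 2: bisect the box
        for a, b in ((l, tau), (tau, u)):
            m = (a + b) / 2.0
            fm = objective(m)                   # Step 3: update UB
            if fm < ub:
                ub, best_x = fm, m
            blb = lower_bound(a, b)
            if blb < ub - eps:                  # Step 4: prune nodes
                heapq.heappush(heap, (blb, a, b))
    return best_x, ub
```

On this toy instance the two ratios cross at x = √3 − 1 ≈ 0.732, where both equal about 0.464, and the procedure closes the gap to that value.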
The following theorem gives the global convergence properties of the above algorithm.
Theorem 3 If the algorithm terminates finitely, then upon termination x^k is a global ε-optimal solution of problem MLFP; otherwise, it generates an infinite sequence {x^k} of iterates such that, along any infinite branch of the branch and bound tree, any accumulation point is a global optimal solution of problem MLFP.
Proof When the algorithm terminates finitely, the conclusion is obvious. When the algorithm does not terminate finitely, as stated in [25], a sufficient condition for the algorithm to converge to a global optimum is that the bounding operation is consistent and the selection operation is bound improving.
A bounding operation is called consistent if at every step any unfathomed partition element can be further refined, and if any infinitely decreasing sequence of successively refined partition elements satisfies

$$\lim_{k \to \infty} (UB_k - LB_k) = 0, \qquad (5)$$

where UB_k is a computed upper bound at stage k and LB_k is the best lower bound at iteration k, not necessarily occurring inside the same subrectangle as UB_k. In the following, we will show that (5) holds.
Since the employed subdivision process is bisection, the process is exhaustive. Consequently, from Theorem 2 and the relationship v(LRP) ≤ v(EP), formula (5) holds, which implies that the employed bounding operation is consistent.
A selection operation is called bound improving if, after a finite number of refinements, at least one partition element on which the actual upper bound is attained is selected for further partition. Clearly, the employed selection operation is bound improving, because the partition element on which the actual upper bound is attained is selected for further partition in the immediately following iteration.
Based on the above discussion, the bounding operation is consistent and the selection operation is bound improving. Therefore, according to [25], the algorithm converges to the global optimum of MLFP.
In this section, to verify the performance of the proposed algorithm, some numerical experiments are carried out and compared with the three most recent algorithms [1, 22, 23]. The algorithm is implemented in Matlab 7.1, and all test problems are run on a Pentium IV (3.06 GHz) microcomputer. The simplex method is applied to solve the linear relaxation programming problems.
For test problems 1–8, the convergence tolerance ε is set to 5 × 10^{-8}, and the feasible errors ε_1 are set to 0.005, 0.001, 0.001, 0.005, 0.001, 0.001, 0.001, 0.001, respectively, which agree with the feasible errors used in [1].
The results for problems 1–8 are summarized in Table 1, where the following notations are used in the row headers: Iter: the number of algorithm iterations; L_max: the maximal number of active nodes needed by the algorithm.
Example 9 is a random test problem. Table 2 summarizes our computational results for Example 9. For this test problem, the convergence tolerance ε = 5 × 10^{-8} and the feasible error ε_1 = 0.001. In Table 2, Ave.Iter denotes the average number of iterations, and Ave.Time represents the average CPU time of the algorithm in seconds; both are obtained by solving 10 different random instances of each size.
Example 1 [1, 21, 22]
Example 2 [1, 21, 22]
Example 3 [1, 22]
Example 4 [1, 22]
Example 5 [1, 23]
Example 6 [1, 23]
Example 7 [1, 23]
Example 8 [1, 23]
Example 9
where all elements of c_ij, d_ij, i = 1, ..., p, j = 1, ..., n, are randomly generated in [0, 1]; all elements of d_i, f_i are randomly generated in [0, p]; and all elements of A and b are randomly generated in [0, 1].
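The random instances above can be generated along the following lines (a sketch; the names c, d for the ratio coefficients and d0, f0 for the constant terms drawn from [0, p] are illustrative labels, not the paper's symbols):

```python
import random

def random_mlfp_instance(p, n, m, seed=None):
    rng = random.Random(seed)
    def unif(lo, hi, rows, cols):
        return [[rng.uniform(lo, hi) for _ in range(cols)]
                for _ in range(rows)]
    c = unif(0.0, 1.0, p, n)                      # numerator coefficients
    d = unif(0.0, 1.0, p, n)                      # denominator coefficients
    d0 = [rng.uniform(0.0, p) for _ in range(p)]  # numerator constants
    f0 = [rng.uniform(0.0, p) for _ in range(p)]  # denominator constants
    A = unif(0.0, 1.0, m, n)                      # constraint matrix
    b = [rng.uniform(0.0, 1.0) for _ in range(m)] # right-hand side
    return c, d, d0, f0, A, b
```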
From Table 1, it can be seen that, except for Examples 2 and 8, the performance of our algorithm is superior to that of the other three methods. From Table 2, we can see that the CPU time and the number of iterations of our algorithm are not sensitive to the problem sizes p and m.
Table 1:Computational results of Examples 1–8
Table 2:Computational results of Example 9
Test results show that our algorithm is competitive and can be used to solve the problem MLFP.
[1] Jiao H W, Liu S Y. A new linearization technique for minimax linear fractional programming [J]. Intern. J. Comp. Math., 2014, 91(8): 1730–1743.
[2] Almogy Y, Levin O. Parametric analysis of a multistage stochastic shipping problem (edited by Lawrence J) [M]. Oper. Res. 69, London, England: Tavistock Publications, 1970.
[3] Falk J E, Palocsay S W. Optimizing the sum of linear fractional functions, Recent Advances in Global Optimization (edited by Floudas C A and Pardalos P M) [M]. Princeton, New Jersey: Princeton University Press, 1992.
[4] Colantoni C S, Manes R P, Whinston A. Programming, profit rates and pricing decisions [J]. Account. Rev., 1969, 44: 467–481.
[5] Maranas C D, Androulakis I P, Floudas C A, et al. Solving long-term financial planning problems via global optimization [J]. J. Econ. Dyn. Contr., 1997, 21: 1405–1425.
[6] Konno H, Watanabe H. Bond portfolio optimization problems and their applications to index tracking: a partial optimization approach [J]. J. Oper. Res. Soc. Japan, 1996, 39: 295–306.
[7] Schaible S. Fractional programming [M]. In: Horst R, Pardalos P M (eds.), Handbook of Global Optimization. Dordrecht: Kluwer Academic, 1995.
[8] Freund R W, Jarre F. Solving the sum-of-ratios problem by an interior point method [J]. J. Glob. Optim., 2001, 19: 83–102.
[9] Matsui T. NP-hardness of linear multiplicative programming and related problems [J]. J. Glob. Optim., 1996, 9: 113–119.
[10] Ahmad I, Husain Z. Duality in nondifferentiable minimax fractional programming with generalized convexity [J]. Appl. Math. Comput., 2006, 176: 545–551.
[11] Bajona-Xandri C, Martinez-Legaz J E. Lower subdifferentiability in minimax fractional programming [J]. Optim., 1999, 45: 1–12.
[12] Balasubramaniam P, Lakshmanan S. Delay-interval-dependent robust-stability criteria for neutral stochastic neural networks with polytopic and linear fractional uncertainties [J]. Intern. J. Comput. Math., 2011, 88(10): 2001–2015.
[13] Ding F. Decomposition based fast least squares algorithm for output error systems [J]. Sig. Proc., 2013, 93: 1235–1242.
[14] Ding F. Two-stage least squares based iterative estimation algorithm for CARARMA system modeling [J]. Appl. Math. Model., 2013, 37: 4798–4808.
[15] Zhu J, Liu H, Hao B. A new semismooth Newton method for NCPs based on the penalized KK function [J]. Intern. J. Comput. Math., 2012, 89(4): 543–560.
[16] Liu Y J, Ding R. Consistency of the extended gradient identification algorithm for multi-input multi-output systems with moving average noises [J]. Intern. J. Comput. Math., 2013, 90(9): 1840–1852.
[17] Benadada Y, Ferland J A. Partial linearization for generalized fractional programming [J]. Zeitschrift für Oper. Res., 1988, 32: 101–106.
[18] Freund R W, Jarre F. An interior-point method for fractional programs with convex constraints [J]. Math. Prog., 1994, 67: 407–440.
[19] Crouzeix J P, Ferland J A, Schaible S. An algorithm for generalized fractional programs [J]. J. Optim. The. Appl., 1985, 47: 135–149.
[20] Barros A I, Frenk J B G. Generalized fractional programming and cutting plane algorithms [J]. J. Optim. The. Appl., 1995, 87: 103–120.
[21] Phuong N T H, Tuy H. A unified monotonic approach to generalized linear fractional programming [J]. J. Glob. Optim., 2003, 26: 229–259.
[22] Feng Q, Jiao H, Mao H, Chen Y. A deterministic algorithm for min-max and max-min linear fractional programming problems [J]. Intern. J. Comput. Intel. Sys., 2011, 4: 134–141.
[23] Feng Q, Mao H, Jiao H. A feasible method for a class of mathematical problems in manufacturing system [J]. Key Engin. Mater., 2011, 460–461: 806–809.
[24] Li X A, Gu M N. Global optimization method for a class of new fractional programming problems [J]. J. Math., 2012, 36(2): 1011–1020.
[25] Horst R, Tuy H. Global Optimization: Deterministic Approaches [M]. Berlin: Springer-Verlag, 1990.