Bin Shi, Xu Yang, Liexiang Yan*
Department of Chemical Engineering, Wuhan University of Technology, Wuhan 430070, China
Distillation of crude oil is regarded as one of the most fundamental processes in the petroleum refining and petrochemical industries, in which the crude oil is separated into different products, each with a specific boiling range. In response to the highly competitive market and stringent environmental laws, improving the operation level of the crude oil distillation unit (CDU) is essential. In fact, optimization of the CDU system helps to simultaneously achieve a well-controlled and stable system, high production rate and product quality, as well as low operating cost. Therefore, the engineering design, control strategy and process optimization of CDUs have received increasing attention in recent years as means to improve production efficiency and quality assurance in the petroleum industry [1].
As one of the most complicated operations in the field of chemical separation processes, the input and output variables of a CDU are highly interacting, which undoubtedly increases the difficulty of obtaining and maintaining the optimal operating condition. Moreover, the optimal manipulated parameters of a CDU have to be adjusted frequently owing to variations in the properties of the crude oil supplied, and the production mode may also be changed strategically from season to season. Besides, the oil supply can give rise to severe problems in plant management, or even lead to shutdown of the CDU, if the specifications of the oil products cannot be reached or the CDU operation is not stabilized. In brief, it is quite necessary to improve the operation level of a complex CDU system.
In recent years, research on CDU operation has focused on process control and optimization. Inamdar et al. [2] proposed a steady-state model based on (C+3) iteration variables to simulate an industrial CDU. The model was first tuned using industrial data; then the elitist non-dominated sorting genetic algorithm (NSGA-II) was employed to solve several meaningful multi-objective optimization problems. Their case study showed that the proposed approach found optimal operating conditions where the profit could be increased while keeping the product properties within acceptable limits. More et al. [3] presented the optimization of a crude distillation unit using the commercial Aspen Plus software. The optimization model consisted of a rigorous simulation model supplemented with suitable objective functions, with and without product flow rate constraints. Their simulation study indicated that the product flow rate constraints sensitively affect the atmospheric distillation column diameter and crude feed flow rate calculations. Based on all simulation studies, a generalized inference confirmed that it was difficult to judge the quality of the solutions obtained as far as their global optimality was concerned. Seo et al. [4] proposed the design optimization of a CDU using a mixed integer nonlinear programming (MINLP) method and realized a reduction in energy costs for an existing CDU system. As meta-models, artificial neural networks (ANN) trained on historical data have also been applied to optimization. With a design of experiment (DOE) method, Chen et al. [5] proposed an approach using ANN models and information analysis for design of experiment (AIDOE), which carried out the experimental or optimization process batch by batch. To maximize a CDU's valuable product yield under required product qualities, Liau et al. [6] developed an expert system in which ANNs trained with production data served as the knowledge base and the optimal conditions were found with an optimization procedure. Motlaghi et al. [7] also designed an expert system for optimizing a crude oil distillation column using neural networks and a genetic algorithm (GA). Combining the data generated by a rigorous model with a meta-model has now become a popular way to carry out CDU optimization. Yao and Chu [8] employed support vector regression (SVR) to optimize CDU models constructed in Aspen Plus with a revised DOE optimization procedure. Ochoa-Estopier et al. [9] simulated the distillation column using an ANN model and then solved the formulated optimization problem using a simulated annealing (SA) algorithm.
In terms of theoretical basis, rigorous models are more accurate than simplified and statistical models. Nevertheless, it is difficult to combine an optimization algorithm with rigorous models, because a large number of variables and non-linear equations need to be solved simultaneously [10,11], which gives rise to a great computational burden. Considering the industrial application of petrochemical process optimization, a meta-model with good fitting accuracy and generalization is more suitable for optimization calculations. As an emerging tool combining the strengths of the discrete wavelet transform with neural network processing, wavelet neural network (WNN) models achieve strong nonlinear approximation ability and have thus been successfully applied to forecasting [12], modeling and function approximation [13]. Therefore, a WNN is proposed to model the CDU in this study, which is expected to simulate it accurately and efficiently. Once the modeling of the CDU is finished, operation optimization of the CDU becomes the core problem. In general, for complex non-linear optimization problems, evolutionary algorithms outperform DOE and SQP in finding the global optimal solution. Among various evolutionary algorithms, the line-up competition algorithm (LCA) is a simple and effective stochastic global optimization technique, primarily owing to its attractive properties such as a parallel evolutionary strategy and asexual reproduction of individuals [14]. Based on these advantages, LCA is employed to find the best operating conditions of the CDU in this work.
The aim of this study is to investigate the feasibility of combining the WNN methodology with LCA for optimizing the operation of a CDU.
The procedure for constructing the data-driven WNN model, as presented below, consists of three main steps: first, construction of the samples used as the knowledge database for the WNN model; second, selection of the WNN structure and parameters; and third, training of the WNN model.
The wavelet neural network, inspired by both feed-forward neural networks and wavelet decompositions, has received considerable attention and become a powerful tool for function approximation [15]. The main characteristic of a WNN is that wavelet functions are used as the nonlinear activation function in the hidden layer in place of the usual sigmoid function. Incorporating the time–frequency localization properties of wavelets and the learning ability of general neural networks, the WNN has shown advantages over other methods such as the BPNN for complex nonlinear system modeling [16].
The basic wavelet theory is as follows.
Any function Ψ(t) ∈ L2(R) can serve as a mother wavelet if it satisfies the admissibility condition [17]:
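In standard notation, with Ψ̂(ω) denoting the Fourier transform of Ψ(t) (a symbol used only here), the condition reads

C_Ψ = ∫_R |Ψ̂(ω)|² / |ω| dω < ∞.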
where Ψ(t) is the mother wavelet. A double-parameter family of wavelets is created by translating and dilating this mother wavelet:
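Up to the choice of normalization factor, this family takes the standard form

Ψ_{a,b}(t) = |a|^(−1/2) Ψ((t − b)/a), a ≠ 0,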
where a is the dilation parameter and b is the translation parameter; they control the magnitude and position of Ψ(t).
In this work, a three-layer feed-forward wavelet neural network is designed; it has one input layer, one hidden layer and a linear output layer, as shown in Fig. 1.
Fig. 1. WNN structure.
The hidden layer output is given by
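For concreteness, one common wavelet-layer formulation consistent with the notation defined below (stated here as an illustration, since formulations differ in where the threshold enters) gives the components of Hq as

H_{jq} = Ψ( ( (W1 Xq)_j + t1_j − b_j ) / a_j ), j = 1, …, h,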
where Xq = [x1q, x2q, …, xmq]T is the qth vector of input samples, and q = 1, 2, …, Q. The numbers of input and output nodes are set in accordance with the training data, and the hidden layer has h neurons. The connection weights from the input layer to the hidden layer form the h × m matrix W1, and the threshold of the hidden layer is the h × 1 array t1. The WNN output is given by
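Since the output layer is linear (Fig. 1), the network output can be written as

Yq = W2 Hq + t2,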
where Yq = [y1q, y2q, …, ynq]T is the qth vector of network output, and q = 1, 2, …, Q. The connection weights from the hidden layer to the output layer form the n × h matrix W2, while the n × 1 array t2 is the threshold of the output layer. It should be noted that the superscripts in W1, W2, t1 and t2 denote the layers of the WNN. The above WNN is a basic neural network in the sense that the wavelets constitute its basis functions; therefore, the scale (dilation) parameter and the translation parameter are determined by a training algorithm.
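For illustration, a minimal NumPy sketch of this forward pass is given below; the function names and the exact placement of the threshold, translation and dilation terms follow the assumed formulation above and are not code from the original work.

import numpy as np

def morlet(x):
    # Real Morlet mother wavelet commonly used in WNNs (see Remark 1)
    return np.cos(1.75 * x) * np.exp(-0.5 * x ** 2)

def wnn_forward(X_q, W1, t1, a, b, W2, t2):
    # X_q: (m,) input sample; W1: (h, m); t1, a, b: (h,); W2: (n, h); t2: (n,)
    z = W1 @ X_q + t1            # net input of the hidden layer
    H_q = morlet((z - b) / a)    # wavelet activation with dilation a and translation b
    Y_q = W2 @ H_q + t2          # linear output layer
    return Y_q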
In order to take full advantage of the local capability of the wavelet basis functions, the performance of the WNN, which has one hidden layer of neurons, is measured by the total error function, described as follows:
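In batch form, the total error is typically taken as the sum of squared errors over all outputs and samples,

E = (1/2) Σ_{q=1}^{Q} Σ_{k=1}^{n} e_{kq}²,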
where ekq = Ykq − Dkq; Ykq is the kth component of the qth network output and Dkq is the kth component of the qth expected output.
The training process aims to find a set of optimal network parameters. In previous work, training of the WNN was achieved by the ordinary back-propagation technique. According to the gradient method, the parameters are tuned by
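In generic form, writing p for any of the trainable parameters W1, W2, t1, t2, a and b (the symbol p is introduced here only for compactness),

p(l+1) = p(l) − η ∂E/∂p |_{p(l)},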
where η is the learning rate and l is the current iteration number.
Remark 1. The Morlet wavelet function ΨM(x) is often adopted as the "mother wavelet" in the hidden nodes of the WNN:
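In its commonly used real-valued form,

ΨM(x) = cos(1.75x) · exp(−x²/2).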
For comparison, we also consider other activation functions that are often selected in the hidden nodes of neural network (NN) frameworks, e.g. the Sigmoid function shown below:
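In its standard logistic form,

f(x) = 1 / (1 + exp(−x)).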
The profiles of the Morlet, Sigmoid and Gauss activation functions are depicted in Fig. 2(a), (b) and (c), respectively.
The WNNis trained comparatively slowly while calculating the samples one by one without any numerical optimization method.Recently,some work was devoted by introducing evolutionary algorithm,such as particle swarm optimization(PSO)[18,19],to initialize the parameters of WNN for accelerating the training process of WNN.However,the number of parameters in a practical WNN is up to dozens or even hundreds,which is difficult for evolutionary algorithmto carry out the optimization.In our approach,the numerical optimization algorithm,namely Levenberg–Marquardt(LM)algorithm,is introduced into the training process to accelerate convergence of WNN parameters.At the same time the training mode is changed into batch mode,which adjusts the parameters of WNN by calculating all the samples.Referring to theLevenberg–Marquardtalgorithm,the updating law for the matrix W1and W2,the arrayst1,t2,a,andbare shown as follows:
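For reference, the standard Levenberg–Marquardt form of this batch update, with all trainable parameters reshaped into the vector s, J(s) the Jacobian of the sample errors with respect to s, and e the stacked error vector, is

s(l+1) = s(l) − [JT(s(l)) J(s(l)) + μI]^(−1) JT(s(l)) e(s(l)),

with μ typically decreased (μ ← μ/θ) when the step reduces the total error E and increased (μ ← μ·θ) otherwise.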
Fig. 2. Profiles of (a) Morlet, (b) Sigmoid, and (c) Gauss functions.
Note that s is solved from Eq. (15); the elements of s are then allocated back to W1, W2, t1, t2, a and b, which are used to recalculate the total error E. The parameter μ is updated by Eqs. (16) and (17).
Remark 2. Compared with the back-propagation technique for training the network, the parametric updating law of Eqs. (15)–(17) enhances the convergence rate of the iteration, while the parameter μ is updated by adjusting θ. In contrast, the BPNN model using the Sigmoid function and the RBFNN using the Gauss function usually rely on the gradient method to update their weights.
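A minimal sketch of one batch-mode LM iteration is given below, assuming the residuals over all samples are stacked into a single vector and the parameters are flattened into s; the function names and the μ-adaptation rule are illustrative assumptions, not the paper's exact Eqs. (15)–(17).

import numpy as np

def lm_step(s, residual_fn, jacobian_fn, mu, theta=10.0):
    # residual_fn(s): stacked error vector e over all Q samples (batch mode)
    # jacobian_fn(s): Jacobian J of the residuals with respect to s
    e = residual_fn(s)
    J = jacobian_fn(s)
    E_old = 0.5 * e @ e
    # Solve (J^T J + mu*I) * step = J^T e and move against the step
    step = np.linalg.solve(J.T @ J + mu * np.eye(s.size), J.T @ e)
    s_new = s - step
    e_new = residual_fn(s_new)
    if 0.5 * e_new @ e_new < E_old:
        return s_new, mu / theta   # accept the step and relax the damping
    return s, mu * theta           # reject the step and increase the damping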
If the proposed WNN model can precisely predict the operational behavior of the real CDU, it can help to simplify the complicated process model and improve the solvability of the constrained optimization problem.
Referring to the optimization issue for the CDU, the constrained optimization model of the CDU process is formulated as
where the objective function J represents the net revenue. Cprod,j and Fprod,j are the price and flow rate of product j, respectively. The amount of steam used in the process operation, FS, multiplied by its price, CS, represents the energy cost. Φ is the WNN model of the CDU obtained by our approach; it is a combination of nonlinear algebraic equations whose parameter vector s is estimated by the updating law of Eqs. (15)–(20). mvlb and mvub are the lower and upper bounds of the process inputs x, and pslb and psub are the lower and upper bounds of the process outputs y. The constraints of Eq. (23) are specified according to the real process operating conditions.
Regarding the above WNN-based optimization model, the LCA algorithm [20,21] is adopted to solve this constrained optimization problem. In the LCA, independent and parallel evolutionary families are always kept during evolution, each family producing offspring only by asexual reproduction. There are two levels of competition in the algorithm. One is the survival competition inside a family: the best member of each family survives in each generation. The other is the competition between families: according to their objective function values, the families are ranked to form a line-up, with the best family in the first position and the worst family in the final position. Families at different positions have different driving forces of competition, which may be understood as the power impelling family mutation. Through these two levels of competition, the first family in the line-up is continually replaced by other families, and accordingly the value of its objective function is continually updated. As a result, the optimal solution can be approached rapidly.
These two levels of competition are illustrated in Fig. 3, where a two-dimensional search space is occupied by four families, each consisting of five members. All the members in each family compete with one another, and the member having the best objective value is chosen as the candidate of its family to strive for a better position in the next line-up.
Fig. 3. Mapping diagram of LCA.
The LCA mainly includes four operating processes: reproduction, ordering, allocation of the search space and contraction of the search space. The calculation steps are detailed as follows (a compact sketch of the complete loop is given after the discussion of the control parameters below):
Step 1. Assign the numbers of evolutionary generations, individuals and families, Ng, Ni and Nf, respectively. Initialize the evolutionary generation counter g to 1.
Step 2. Uniformly and dispersedly generate Nf individuals, the so-called families, to form the initial population.
The fth individual in the gth generation consists of Nc decision variables as follows:
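A typical construction, consistent with the Nomenclature (λ a uniform random number in [0,1], L0 and U0 the variable bounds in LCA) and stated here for concreteness, is

X_f^g = [x_1^{g,f}, x_2^{g,f}, …, x_{Nc}^{g,f}],  with  x_c^{g,f} = L_c^0 + λ (U_c^0 − L_c^0), c = 1, …, Nc.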
Step 4. According to their fitness values, the individuals are ranked to form a line-up. For a global minimization problem, the individuals are sorted in ascending order; conversely, for a global maximization problem, they are sorted in descending order. The sorted individuals are expressed below:
Step 5. Allocate an associated sub-space to each individual in proportion to its position in the line-up. The first individual in the line-up is allocated the smallest sub-space, while the last is allocated the largest. The lower bound Lcg,f and upper bound Ucg,f of the cth decision variable in the sub-space are calculated by
Step 6. Through asexual reproduction based on maximal diversity, each individual, the so-called father, reproduces Ni offspring within its search space. The offspring are produced in the same manner as in Step 3.
Step 7. For the fth individual, the Ni offspring compete with their father, and the best one survives as the father in the next generation.
where β is the contraction factor, which can be set between 0 and 1. If g < Ng, go back to Step 6; otherwise stop the iteration.
Table 1 Input specifications of the CDU process
It is very important to choose a set of appropriate control parameters to decrease the computing time and increase the quality of the solution. The LCA includes three parameters in all: the population size (Nf), the number of offspring (Ni) reproduced by each family in each generation, and the contraction factor (β).
Larger Nf and Ni generally provide higher-quality solutions, but may result in a longer computing time. Smaller values speed up convergence, but may result in trapping in a local minimum. A trade-off must therefore be made between computing time and solution quality.
The contraction factor strongly influences solution quality and computation time. Based on our computing experience, for a difficult problem the global optimal solution can be obtained only when 0.9 < β < 0.99.
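The following Python sketch summarizes the LCA loop described in the steps above. The proportional sub-space allocation and the interval contraction used here are one plausible reading of Steps 5 and 8 (the corresponding equations are not reproduced), so those rules, together with all function names, should be treated as assumptions rather than the original formulation.

import numpy as np

def lca_minimize(obj, lb, ub, Nf=10, Ni=15, Ng=200, beta=0.95, seed=None):
    # Line-up competition algorithm (sketch): minimizes obj over the box [lb, ub].
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    nc = lb.size
    delta = ub - lb                                    # current scale of the search interval
    fathers = lb + rng.random((Nf, nc)) * (ub - lb)    # Step 2: initial families
    fitness = np.array([obj(x) for x in fathers])
    for g in range(Ng):
        order = np.argsort(fitness)                    # Step 4: line-up (ascending for minimization)
        fathers, fitness = fathers[order], fitness[order]
        for rank in range(Nf):
            father = fathers[rank]
            # Step 5 (assumed rule): better-ranked families get smaller sub-spaces
            half_width = 0.5 * delta * (rank + 1) / Nf
            lo = np.clip(father - half_width, lb, ub)
            hi = np.clip(father + half_width, lb, ub)
            # Step 6: asexual reproduction of Ni offspring within the sub-space
            offspring = lo + rng.random((Ni, nc)) * (hi - lo)
            values = np.array([obj(x) for x in offspring])
            k = int(values.argmin())
            if values[k] < fitness[rank]:              # Step 7: the best of the family survives
                fathers[rank], fitness[rank] = offspring[k], values[k]
        delta = beta * delta                           # contraction of the search interval by beta
    best = int(fitness.argmin())
    return fathers[best], fitness[best]

For the revenue-maximization problem of this work, one would minimize the negated WNN-based objective, e.g. obj = lambda x: -J_wnn(x), where J_wnn denotes a hypothetical wrapper around the trained WNN model and the price data.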
Fig. 5. Identification of CDU using BPNN, modified WNN and RBFNN: (a) training errors, (b) validation errors.
Referring to the specifications of a CDU system in a real refinery in Wuhan, China, crude oil at 40 °C and 300 kPa with a flow rate of 569.6 t·h⁻¹ (702.2 m³·h⁻¹) is fed into the CDU, which consists of the preheat train, the main tower, one condenser, three pumparounds (PA1, PA2, PA3) and three side strippers. Steam at 300 kPa and 400 °C is used as the stripping agent in the main column and the strippers. Five products, namely naphtha (NAP), diesel (DIE), kerosene (KER), atmospheric gas oil (AGO) and residue (RES), are drawn off at different stages. Fig. 4 shows the CDU system, which is simulated in the Aspen Plus environment. The process inputs (x) include the flow rate and temperature of the stripping steam at the bottom of the column and the flow rates of DIE, KER, AGO, PA1 (first pumparound), PA2 (second pumparound) and PA3 (third pumparound). The process outputs (y) are the ASTM D86 100% point of DIE, the ASTM D86 95% point of AGO, the RES of the CDU, the furnace duty, and the duties of PA1, PA2 and PA3.
The independent variables are varied randomly between their upper and lower bounds to ensure full exploration of the search space. Table 1 shows the upper and lower bounds of the independent variables; the bounds of each variable are specified according to the real process operating conditions of the CDU. A total of 500 feasible scenarios, in the sense of leading to converged simulations, were generated to build the WNN distillation column model. The purpose of this case study is mainly to enhance the profitability of the CDU process by optimizing its operation. A WNN structure is created to carry out the modeling and identification of the CDU. To validate the approximation ability of the WNN, a BPNN and an RBFNN are also constructed to model the same CDU system for comparison. Each network comprises 30 neurons in its hidden layer. Of the 500 converged simulation scenarios, 350 are used to train the three networks, and the remaining 150 are used to validate the trained networks. A comparison of the identification performance (training and validation) of the modified WNN using the Morlet function (our work), the BPNN using the Sigmoid function and the RBFNN using the Gauss function is depicted in Fig. 5(a) and (b), respectively. Apparently, the training and validation errors of the modified WNN are smaller than those of the other approaches. The modified WNN, BPNN and RBFNN use similar network structures, each containing one input layer, one hidden layer and one output layer. This verifies that it is the activation function embedded in the hidden layer that strongly affects the ability to approximate the complex nonlinear system and to extract its nonlinear characteristics. In this case, the Sigmoid function used in the BPNN is not orthogonal, which may lead to a slow convergence rate, whereas the Morlet function in the WNN is orthogonal, which reduces the redundant part. Moreover, Fig. 6 shows that the identified WNN model predicts the outputs of the CDU process with high accuracy as compared with the rigorous model in Aspen Plus.
Table 2 Output specifications of the CDU process
Table 3 Feed, product and utility prices
Fig. 6. Validation of CDU using the modified WNN.
Based on the identified WNN model, the input/output specifications of the CDU process in Tables 1 and 2 and the prices of the products and the operating cost listed in Table 3 are taken into account in the constrained optimization problem. A comparison of LCA, GA and PSO for solving the same problem is also made here. For a fair comparison, the number of function evaluations in each iteration of the three algorithms is set to 150; the other detailed parameter settings of the algorithms are given in Table 4. Fig. 7 shows the results of the three algorithms. It is clear that the values of the objective (J) obtained by LCA, GA and PSO increase very fast in the first few generations. At the same generation, the profit predicted by LCA is higher than those by GA and PSO. Moreover, Table 5 indicates that all input/output differences between the WNN model and the model in Aspen Plus are less than 0.54%, which verifies that the optimal operating conditions obtained by the WNN-based optimization approach are reliable. The input and output patterns of the CDU under the base and optimal conditions are shown in Fig. 8(a) and (b), respectively. Compared with the base conditions, the optimal operation increases the production of diesel, kerosene and atmospheric gas oil by 22%, 25% and 10%, respectively, while the corresponding duties of the furnace, PA1, PA2 and PA3 increase by 10%, 17%, 8% and 3%, respectively. Apparently, the performance of the CDU is improved by increasing only a few cooler duties. Consequently, the proposed approach based on WNN and LCA can reduce the energy consumption relative to the increments of the oil products. In addition, by introducing different operating and property constraints into the optimization model, new operational schemes with different product distributions can be obtained easily.
Table 4 Detailed parameter settings for LCA, GA and PSO
Fig. 7. Profit predictions of CDU using LCA, GA and PSO.
Fig. 8. Radar plots for comparisons of base and optimal conditions of CDU: (a) output flow rates of coolers and products, (b) duties of coolers and furnace.
Table 5 Comparisons of the WNN model and the process model at the optimal operating condition
This study proposed a methodology combining a WNN-based optimization model and LCA to model and optimize the operation of a crude distillation unit. The main results of this article are summarized as follows:
(1) A WNN model of the CDU is constructed, where the Levenberg–Marquardt algorithm is introduced into the WNN training to speed up the training procedure.
(2) Based on the WNN model of the CDU, an economic optimization model for the crude oil distillation process is built under prescribed constraints.
(3) A practical framework combining the WNN-based optimization model and LCA is presented for optimizing the complex operation of the non-linear CDU.
The case study results show that the optimal operating condition obtained by the proposed approach can increase the yield of high-value products and reduce the energy consumption as compared with the base operating conditions, thereby increasing the total profit of the CDU.
Nomenclature
a    dilation parameter
b    translation parameter
Cprod,j    prices of products, CNY·t⁻¹
CS    price of stripping steam, CNY·t⁻¹
Dkq    kth component of the qth expected network output
Dq    qth vector of expected output
E    error function
eq    qth sample error vector
Fprod,j    flow rates of products, t·h⁻¹
FS    flow rate of stripping steam, t·h⁻¹
Hq    hidden layer output
h    number of neurons in the hidden layer
I    unit matrix
J(s)    Jacobian matrix of the network
L0    lower bounds of variables in LCA
L2(R)    space of Lebesgue square-integrable functions
m    number of inputs of the neural network
mvlb    lower bounds of manipulated parameters
mvub    upper bounds of manipulated parameters
Ng    number of evolutionary generations
Ni    number of individuals in each family
Nf    number of families in each evolutionary generation
n    number of outputs of the neural network
obj    constrained objective function, 1 × 10⁴ CNY
P    population in LCA
P    newly generated population
pslb    lower bounds of product specifications, °C
psub    upper bounds of product specifications, °C
Q    number of input samples
RBFNN    radial basis function neural network
s    reshaped vector of network parameters
t    threshold of neurons
U0    upper bounds of variables in LCA
W    weights of connected neurons
Xq    qth vector of input samples
x    manipulated parameters
Y    fitness value of an individual
Yq    qth vector of network output
Ykq    kth component of the qth network output
Y    newly calculated individual fitness
y    ASTM D86 point of specified products, °C
β    contraction factor in LCA
Δ    scale of the search interval
η    learning rate
θ    factor in the Levenberg–Marquardt algorithm
λ    random number ranging from 0 to 1
μ    parameter in the Levenberg–Marquardt algorithm
Φ    formulation of the WNN model
Ψ(t)    wavelet basis function
Subscripts
g    evolutionary generation counter
l    current iteration number
lb    lower bound
prod    product
S    steam
ub upper bound
[1] A. Mizoguchi, T.E. Marlin, A.N. Hrymak, Operations optimization and control design for a petroleum distillation process, Can. J. Chem. Eng. 73 (1995) 896–907.
[2] S.V. Inamdar, S.K. Gupta, D.N. Saraf, Multi-objective optimization of an industrial crude distillation unit using the elitist non-dominated sorting genetic algorithm, Chem. Eng. Res. Des. 82 (2004) 611–623.
[3] R.K. More, V.K. Bulasara, R. Uppaluri, V.R. Banjara, Optimization of crude distillation system using Aspen Plus: Effect of binary feed selection on grass-root design, Chem. Eng. Res. Des. 88 (2010) 121–134.
[4] J.W. Seo, M. Oh, T.H. Lee, Design optimization of a crude oil distillation process, Chem. Eng. Technol. 23 (2000) 157–164.
[5] J. Chen, D.S.H. Wong, S.S. Jang, S.L. Yang, Product and process development using artificial neural-network model and information analysis, AIChE J. 44 (1998) 876–887.
[6] L.C.K. Liau, T.C.K. Yang, M.T. Tsai, Expert system of a crude oil distillation unit for process optimization using neural networks, Expert Syst. Appl. 26 (2004) 247–255.
[7] S. Motlaghi, F. Jalali, M.N. Ahmadabadi, An expert system design for a crude oil distillation column with the neural networks model and the process optimization using genetic algorithm framework, Expert Syst. Appl. 35 (2008) 1540–1545.
[8] H. Yao, J. Chu, Operational optimization of a simulated atmospheric distillation column using support vector regression models and information analysis, Chem. Eng. Res. Des. 90 (2012) 2247–2261.
[9] L.M. Ochoa-Estopier, M. Jobson, R. Smith, Operational optimization of crude oil distillation systems using artificial neural networks, Comput. Chem. Eng. 59 (2013) 178–185.
[10] K. Basak, K.S. Abhilash, S. Ganguly, D.N. Saraf, On-line optimization of a crude distillation unit with constraints on product properties, Ind. Eng. Chem. Res. 41 (2002) 1557–1568.
[11] J.C.M. Hartmann, Determine the optimum crude intake level: A case history, Hydrocarb. Process. 80 (2001) 77–84.
[12] H. Chitsaz, N. Amjady, H. Zareipour, Wind power forecast using wavelet neural network trained by improved clonal selection algorithm, Energy Convers. Manag. 89 (2015) 588–598.
[13] Q. Zhang, A. Benveniste, Wavelet networks, IEEE Trans. Neural Netw. 3 (1992) 889–898.
[14] L.X. Yan, D.X. Ma, Global optimization of non-convex nonlinear programs using line-up competition algorithm, Comput. Chem. Eng. 25 (2001) 1601–1610.
[15] J. Zhang, G.G. Walter, Y. Miao, W.N.W. Lee, Wavelet neural networks for function learning, IEEE Trans. Signal Process. 43 (1995) 1485–1497.
[16] S. Billings, H.L. Wei, A new class of wavelet networks for nonlinear system identification, IEEE Trans. Neural Netw. 16 (2005) 862–874.
[17] I. Daubechies, Ten Lectures on Wavelets, Society for Industrial and Applied Mathematics, Philadelphia, 1992.
[18] S. Chi, Character recognition based on wavelet neural network optimized with PSO algorithm, Appl. Mech. Mater. 602 (2014) 1834–1837.
[19] Y. Lu, N. Zeng, Y. Liu, N. Zhang, A hybrid wavelet neural network and switching particle swarm optimization algorithm for face direction recognition, Neurocomputing 155 (2015) 219–224.
[20] L.X. Yan, Solving combinatorial optimization problems with line-up competition algorithm, Comput. Chem. Eng. 27 (2003) 251–258.
[21] L.X. Yan, K. Shen, S. Hu, Solving mixed integer nonlinear programming problems with line-up competition algorithm, Comput. Chem. Eng. 28 (2004) 2647–2657.