

        DIP-MOEA: a double-grid interactive preference based multi-objective evolutionary algorithm for formalizing preferences of decision makers

        2022-11-23 09:00:26

        Luda ZHAO, Bin WANG†, Xiaoping JIANG, Yicheng LU, Yihua HU

        1College of Electronic Engineering, National University of Defense Technology, Hefei 230037, China

        2Third Interdisciplinary Center, National University of Defense Technology, Hefei 230037, China

        3Unit 78092 of People’s Liberation Army of China, Chengdu 610000, China

        Abstract: The final solution set given by almost all existing preference-based multi-objective evolutionary algorithms (MOEAs) lies a certain distance away from the decision makers' preference information region. Therefore, we propose a multi-objective optimization algorithm, referred to as the double-grid interactive preference based MOEA (DIP-MOEA), which explicitly takes the preferences of decision makers (DMs) into account. First, according to the optimization objectives of the practical multi-objective optimization problem and the preferences of DMs, membership functions are mapped to generate a decision preference grid and a preference error grid. Then, we put forward two population dominance modes, preference degree dominance and preference error dominance, and use them to update the population in these two grids. Finally, the populations in the two grids are combined with the DMs' preference interaction information, and the preference multi-objective optimization interaction is performed. To verify the performance of DIP-MOEA, we test it on two kinds of problems, i.e., the basic DTLZ series functions and the multi-objective knapsack problems, and compare it with several popular preference-based MOEAs. Experimental results show that DIP-MOEA expresses the preference information of DMs well, provides a solution set that meets the DMs' preferences, quickly delivers the test results, and performs better in the distribution of the Pareto front solution set.

        Key words: Multi-objective evolutionary algorithm (MOEA); Formalizing preference of decision makers; Population renewal strategy; Preference interaction

        1 Introduction

        Multi-objective optimization problems (MOPs) are characterized by the simultaneous optimization of multiple relevant, yet competing objectives, as opposed to optimization models that have a single objective function (Dong et al., 2020). There are several different methods for solving MOPs. In early works, MOPs were largely addressed by reducing the multi-objective problem to a single-objective problem and solving the resulting problem with mathematical programming approaches, for example, the weighted sum method (Marler and Arora, 2010), constraint method (Pirouz and Khorram, 2016), interactive programming method (Lai et al., 2021), and goal programming method (Deb, 2001). Louis and McDonnell (2004) applied evolutionary algorithms (EAs) to MOPs, and an explosion in the field of MOPs has since occurred. To date, the multi-objective evolutionary algorithm (MOEA) has become one of the most popular approaches to MOPs. MOEAs can be classified into three types according to their evolutionary mechanisms, namely decomposition-, domination-, and indicator-based MOEAs. Examples of the decomposition-based MOEA include MOEA/D proposed by Zhang QF and Li (2007) and MOIBA/AD proposed by Zhao et al. (2021). NSGA-II (Deb et al., 2002a) and SPEA-II (Wahid et al., 2015) are typical examples of domination-based MOEAs. Zitzler and Künzli (2004) proposed IBEA, a general framework for indicator-based MOEAs, and based on this framework, a number of improved MOEAs were developed, such as the IGD indicator based EA (Sun et al., 2019), the hypervolume indicator based EA (Jiang et al., 2015), and IBEA-SVM (Li HR et al., 2019).

        As more and more attention is paid to MOPs in the real world, decision makers (DMs) usually care only about a part of the MOP solution set, which is called the region of interest (ROI). To obtain this part of the solution set, DM preferences are added to the MOEA; this kind of problem is called a preference-based MOP. When solving these problems with a traditional MOEA, the proportion of non-dominated solutions in the population increases sharply as the dimension of the problem objective grows, which leads to a serious decline in algorithm performance. In contrast, a preference-based MOEA needs only to search for the DM's ROI, which not only helps the DM choose its preferred solution, but also improves the efficiency of solving MOPs and the convergence speed of the algorithm (Li LM et al., 2018). At present, there are mainly four setting methods for expressing DM preferences: dominance relation, angle relation, weight vector, and preference set (these four types are discussed in detail in Section 2).

        However, the above algorithms do not consider DM preference setting from the actual preference point of view, and the DM may provide preference information in different forms. At present, no MOEA can effectively deal with formal DM preferences, which makes it impossible to solve practical problems flexibly. To address this problem, the contributions of this paper can be summarized as follows:

        1. The formal DM preference in practical MOPs is fuzzily processed, the processed objective preference is mapped to the optimization objective in the optimization model,and the corresponding relationship between the DM’s formal preference and model preference is established.

        2. To avoid the performance degradation caused by traditional MOEAs in solving preference-based MOPs, we reset the individual dominance relationship in the objective space and design an individual updating strategy in the grid, to maintain the distribution of the population and the accuracy of the preference area in the final solution set.

        3. In practical problems, DM preference may change, which requires that the DM’s preference interaction be considered when solving MOPs. Therefore,based on setting the initial grid,DM preference can be adjusted in real time by adjusting the grid.

        2 Related works

        According to the mode of introducing DM preference information, preference based MOEAs can be divided into three categories: prior, interactive, and posterior preference based MOEAs. In the following, according to this classification, we analyze and summarize the literature on these algorithms.

        2.1 Evolutionary multi-objective optimization(EMO)with prior preference

        The static method of determining preference information in advance to guide the MOEA search is called the prior preference based MOEA. In this kind of algorithm, the DM's preference information is input in advance, and the algorithm builds a preference model according to this information and then guides the population to search for the preference solution set. When the individual dominance relationship is used as the preference setting method, the g-dominance (Luo et al., 2019) and r-dominance (Liu RC et al., 2017) methods are the two most classic algorithms. In addition, the AD-NSGA-II algorithm proposed by Zheng and Xie (2014) redefines the dominance relationship and aggregation distance among individuals, giving priority to keeping individuals close to the preference point to search for the preference area. AP-ε-MOEA (Zheng et al., 2014) adds a reference point, which can effectively control the distribution range of the preference solution set and support the search for multiple preference areas. When the preference information is set as the objective weight vector, R-MEAD (short for reference point based MOEA through decomposition) proposed by Sudeng and Wattanapongsakorn (2015) introduces a decomposition mechanism to convert the DM's preference information into a set of weight vectors carrying the preference information. R-MEAD2, proposed by Wang F et al. (2019), enhances the performance of R-MEAD to some extent. The reference vector guided evolutionary algorithm (RVEA) (Cheng et al., 2016) uses a number of uniformly distributed weight vectors. Liu QQ et al. (2020) introduced the mechanism of growing neural gas into RVEA (RVEA-iGNG), which further improves the bootstrap performance of the algorithm and solves large-scale MOPs with better results.
In the co-evolution method based on the preference set and objective solution set, the basic idea follows the preference-inspired co-evolutionary algorithm (PICEA) for many-objective optimization (Wang R et al., 2013), for example, PICEA-g (Paknejad et al., 2021) and PICEA-ω (Wang R et al., 2015b). EMO with prior preference has been widely used in practical problems (Miguel et al., 2019; Huang et al., 2021; Zhang ZX et al., 2021).

        2.2 EMO with interactive preference

        Methods in which timely input of DM information during the search process guides the search mode and direction of the MOEA are called interactive preference based MOEAs. Using the individual dominance relationship as the preference setting method, López-Jaimes and Coello (2014) defined a new preference dominance relationship, which divides the entire objective space into two subspaces. When the angle relationship among individuals is used for setting the preference, the improved angle-based method with a specific bias parameter pruning algorithm with NSGA-II (i-ASA-NSGA-II) proposed by Sudeng and Wattanapongsakorn (2013) determines the DM preference area based on the reference point and the extension angle. When the objective weight vector is used as the preference setting method, Yu et al. (2016) proposed a preference and decomposition based MOEA (called MOEA/D-PRE). In the co-evolution method based on the preference set and objective solution set, Wang R et al. (2015a) proposed a new hybrid evolutionary multi-criterion decision-making approach using the brushing technique (iPICEA-g). EMO with interactive preference has been widely used in practical MOPs (Champasak et al., 2020; Cui et al., 2020; Chen et al., 2021).

        2.3 EMO with posterior preference

        The method of determining preference information based on the MOEA output information is called the posterior preference based MOEA. When the angle relationship among individuals is used for setting the preference, Sudeng and Wattanapongsakorn (2013) proposed an adaptive angle based pruning algorithm (ADA). This algorithm obtains the Pareto optimal solution with an MOEA, and the preference information of ADA is provided by the maximum angle threshold. ASA (Sudeng and Wattanapongsakorn, 2015) improves ADA by introducing the offset strength parameter to calculate the angle threshold, to determine the preferred solution in the preferred area. Posterior preference based MOEAs have been widely used in practical engineering (Jakubovski Filho et al., 2019; George and Amudha, 2020; Lin et al., 2021).

        In terms of improving algorithm performance, the above methods have achieved good results in solving MOPs. In practical applications, however, adding decision preferences to MOEAs has not yet been studied in depth. There is still almost no algorithm that can be widely applied to most practical MOPs with preferences, which is a problem that must be solved.

        3 DIP-MOEA

        A general formulation for MOPs is

        max F(x) = (f1(x), f2(x), ···, fM(x)), subject to x ∈ X.

        There are M objective functions fj(x) (j = 1, 2, ···, M) to be optimized, the set of which is denoted as F(x). x is the decision vector consisting of decision variables, and X denotes the decision space. These objective functions often conflict; i.e., the best solution for one optimization objective may be the worst for another. Therefore, unlike the case of single-objective optimization, the goodness of a solution in MOPs is determined by dominance. A solution x1 dominates x2 if x1 is no worse than x2 in all of the objective functions and is strictly better in at least one objective function. The set of all the solutions that are not dominated by any other feasible solutions is referred to as the Pareto set (PS). Solutions in the PS reflect the essential tradeoffs among conflicting objectives. Geometrically, the boundary in the objective space defined by the set of points mapped from the PS is termed the Pareto front (PF). Next, we add the decision preference to MOPs. To make the preference in preference-based MOPs easy to understand and express, and to construct a general formal preference expression method and MOEA framework for MOPs with a decision preference in real-world situations, a new double-grid interactive preference based MOEA (DIP-MOEA) is proposed.
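The dominance relation above can be sketched in a few lines. This Python fragment is illustrative only (the paper's experiments were run in MATLAB), and it assumes all objectives are to be maximized, matching the convention adopted in Section 3.1:

```python
def dominates(a, b):
    """Pareto dominance for maximization: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_set(points):
    """Return the non-dominated solutions (the Pareto set) of a finite
    list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]
```

Mapping the returned set through the objective functions yields the corresponding Pareto front.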

        In DIP-MOEA, there are mainly three innovative technologies: (1) the representation of DM preferences and preference errors, and the generation of preference degree grids and preference error grids based on them; (2) the population updating strategy based on individual domination in different grids; (3) the dynamic interaction method of DM preferences. First, taking a two-dimensional (2D) optimization problem as an example, we introduce a method for determining a decision preference and a decision preference error, and then propose a two-grid population renewal strategy. Because the solution updating or search process in traditional multi-objective optimization algorithms may degrade the uniformity and diversity of the final PF solution set, the updating strategy we propose not only screens individuals against the decision preference, but also alleviates the uniformity problem of the PF solution set to a certain extent. Finally, through the scope of the decision preference hypercube in the solution space, we obtain the final preference-based individuals.

        3.1 Preference degree and preference error

        3.1.1 DM preference and preference transformation in MOPs

        The decision preference information in practical problems is determined according to the preference aggregation of each optimization objective. The degree of DM preference for different objectives can actually be determined by setting the optimization range for each optimization objective. To facilitate unified representation, we set all optimization objectives as maximization objectives; each minimization objective is converted by taking its inverse. According to this idea, the conversion steps between the actual DM preference and the preference in MOPs are as follows:

        Step 1: Determine the corresponding types of fuzzy membership functions μj = φ(fj), fj ∈ [lbj, ubj] (ubj and lbj denote the upper and lower bounds of fj, respectively), according to the different optimization objectives in the optimization problem. Common fuzzy membership functions include rectangular, trapezoidal, parabolic, normal distribution type, Cauchy distribution type, and ridge distribution type (George and Amudha, 2020).

        Step 2: The DM determines the number of grades Nd preferred by each objective, where each grade is Uj; the interval [lbj, ubj] of each optimization objective is divided into Nd units, and the preference error ε is specified. For example, in a maximizing MOP where the data change of an optimization objective conforms to a normal distribution, the DM prior preference has five grades, the interval [lbj, ubj] is equally divided into five units, and the DM preference error ε is regarded as the standard deviation of the normal distribution. When the DM determines the preference for this problem, the preferred interval range of the optimization objective can be obtained from the membership function formula after the preference levels are determined. This process is illustrated in Fig. 1a.

        Step 3: The DM subjectively gives the objective preference level Uj of the jth objective of the MOP, Uj = μ̄j − μ̲j, where μ̄j and μ̲j are the upper and lower bounds of the preference membership degree, respectively, determined to agree with the DM preference level and the membership function, i.e., the endpoints of the coordinate region on the optimization objective fj of the j-dimensional prior preference hypercube (PP-HC) of the MOP determined by the DM.

        Take a 2D optimization problem as an example, as shown in Fig. 1b. If the DM determines that the number of preference levels for each objective is 3, then the 1st to 9th PP-HCs are determined, and if the number of preference levels for each objective is 4, then the 1st to 16th PP-HCs can be determined, with regions ① to ⑥ representing different PP-HCs.
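The grade-to-interval mapping of Steps 1–3 can be sketched as follows. This fragment is a simplification under stated assumptions: function names are ours, and a uniform rectangular split of [lbj, ubj] stands in for an arbitrary fuzzy membership function:

```python
def grade_interval(lb, ub, n_grades, grade):
    """Objective-space interval for one preference grade, assuming the
    range [lb, ub] is split into n_grades equal units (grade is 1-based)."""
    width = (ub - lb) / n_grades
    return (lb + (grade - 1) * width, lb + grade * width)

def pp_hypercube(bounds, n_grades, grades):
    """Prior-preference hypercube (PP-HC): the Cartesian product of the
    grade intervals chosen by the DM, one per objective."""
    return [grade_interval(lb, ub, n_grades, g)
            for (lb, ub), g in zip(bounds, grades)]

def in_pp_hc(obj_vector, hc):
    """Membership test for the PP-HC (half-open intervals assumed)."""
    return all(lo <= f < hi for f, (lo, hi) in zip(obj_vector, hc))
```

For instance, with 4 grades per objective, choosing grade 2 on f1 over [0, 10] and grade 4 on f2 over [0, 8] selects the PP-HC [2.5, 5.0) × [6.0, 8.0).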

        Fig. 1 An illustration of the transition between the actual DM preference and the preference in MOPs, where (a) shows the corresponding relationship between the decision preference of the jth goal and the fuzzy membership function, and (b) indicates that when the DM preference has three or four levels in the two-dimensional problem, the preference is converted into a prior preference two-dimensional hypercube

        3.1.2 Generation of the preference degree grid and preference error grid

        On the basis of determining the numbers of decision preferences and PP-HCs in Section 3.1.1, preference grids can be partitioned according to the MOP preference levels determined by the DM. As shown in Fig. 2a, in the objective space of an MOP with two optimization objectives f1 and f2, the decision preferences on the two objectives have four levels (denoted as I1, II1, III1, IV1, and I2, II2, III2, IV2). Then the 2D PP-HC (red rectangular region in Fig. 2a) is determined by the preference conversion method in Section 3.1.1, and its endpoint coordinates on the two optimization objectives are f̄1, f̲1, f̄2, and f̲2, to further determine the decision preference grid in the whole objective region. The algorithm for determining the decision preference grid is as follows:

        The condition for judging whether an individual x is in a PP-HC is

        f̲j ≤ fj(x) < f̄j, j = 1, 2, ···, M.

        Because DMs are subjective in making preference decisions, errors are inevitable, so preference errors should be fully considered in the process of population renewal. The errors produced by the DM when making the preference decision are expressed in the objective space, and a sub-grid group is produced, where εj is the width of the grid and represents the subjective deviation of the tolerance degree of preference on the jth-dimension objective. Take the 2D objective space as an example, as shown in Fig. 2b. After determining the preference errors ε1 and ε2, a preference error grid can be formed. Then the position of an individual in the objective space needs to be associated with the DM preference error grid, and the determination of the vertex can be obtained from the following equation:

        Fig. 2 An illustration used to determine the preference degree grid and preference error grid in a two-dimensional optimization problem, where (a) represents the DM preference degree grid and the PP-HC when the decision preference has four levels, and (b) represents the error grid determined by a single x (pink point) and the error grid vertex (blue point) (References to color refer to the online version of this figure)
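The association between an individual and its preference-error grid cell can be sketched as a floor-snapping rule. This is our illustrative reading of the vertex determination described above; the function name and the choice of grid origin are assumptions:

```python
import math

def error_grid_vertex(obj_vector, origin, eps):
    """Snap an objective vector onto the vertex of the preference-error
    grid cell that contains it; eps[j] is the error-grid width εj on the
    jth objective and origin[j] the grid origin on that axis."""
    return tuple(o + math.floor((f - o) / e) * e
                 for f, o, e in zip(obj_vector, origin, eps))
```

For example, with ε1 = ε2 = 0.5 and origin (0, 0), the individual (3.7, 1.2) is associated with the cell vertex (3.5, 1.0).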

        3.2 Population renewal strategy

        In this subsection, the dominance relationship between individuals is defined based on the determination of two grids, and the population is updated iteratively through the dominance relationship, to obtain high-quality offspring individuals that meet DM preference requirements.

        3.2.1 Preference degree dominance and preference error dominance

        To screen the populations in the preference degree grid and the preference error grid and to obtain individuals more in line with the preferences of DMs, in this subsection we propose the preference degree dominance strategy (Ishibuchi et al., 2020) and the preference error dominance strategy (Menchaca-Méndez et al., 2019).

        Definition 1 (Preference degree dominance, PD dominance) Individual x1 PD dominates x2 (i.e., x1 ≻PD x2) if and only if one of the following conditions is met:

        To facilitate the understanding of the PE dominance strategy, we compare it with the traditional Pareto dominance strategy. In Fig. 3b, for an MOP with two objectives, the dominated space of individual x in the objective space is the blue rectangle under the traditional Pareto dominance strategy, and its dominated space under the PE dominance strategy is the pink rectangle.

        Fig.3 Illustration of the preference degree(PD)dominance strategy (a) and preference error (PE) dominance strategy (b) in two-dimensional optimization problems (References to color refer to the online version of this figure)

        3.2.2 Population updating in the grid

        Based on the setting of the preference degree grid and preference error grid and on the PD and PE dominance strategies, individuals in the two grid spaces can be updated. Algorithm 1 gives the individual updating strategy we design for the two grid spaces.

        The population updating process in the grid is shown in Fig. 4.

        First, based on the establishment of the two grid objective spaces, the population P(0) is randomly initialized, the individuals in P(0) that do not satisfy the PE are copied into Q(0), and t = 0. When selecting an individual in P(t), the PD dominance relationship is used to compare two individuals in the selection process. If individual p1 PD dominates p2, then p1 is selected; if neither p1 nor p2 is PD dominant, one individual is randomly selected and is named p (lines 7–12).

        Then, when selecting an individual in Q(t), the PE dominance relationship is used to compare the two individuals in the selection. If individual q1 PE dominates q2, then q1 is selected; if neither q1 nor q2 is PE dominant, one individual is randomly selected and the selected individual is named q (lines 13–17).

        Whether P(t) accepts individual c is determined as follows: Individuals p and q generate two offspring individuals using a simulated binary crossover (SBX) operator (Li H et al., 2019), and one individual c is randomly selected (lines 18 and 19). The specific flow is shown in the supplementary materials.
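The SBX step (lines 18–19 of Algorithm 1) can be sketched in its standard textbook form. This is not the authors' exact implementation; the deterministic rng argument is only for reproducibility:

```python
import random

def sbx(p1, p2, eta, rng=random.random):
    """Simulated binary crossover: produce two real-coded offspring whose
    spread around the parents is controlled by the distribution index eta."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = rng()
        if u <= 0.5:
            beta = (2.0 * u) ** (1.0 / (eta + 1.0))
        else:
            beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0))
        # each offspring is a beta-weighted blend of the two parents
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2
```

Note that SBX always preserves the parents' midpoint (c1 + c2 = p1 + p2 componentwise), mimicking single-point binary crossover on real-valued variables.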

        Algorithm 1 update(NP, NQ, η, T): updating for the PE Pareto set and PD Pareto set
        Input: population size in preference space NP, population size in preference error space NQ, SBX operator parameter η, and number of population renewals T
        Output: preference space population P(T) and preference error space population Q(T)
        1: t = 0
        2: while t ≤ T do
        3:   initialize P(t) at random
        4:   for k = 1 to NP do
        5:     for n = 1 to 0.5(NP + NQ) do
        6:       for m = 1 to NQ do
        7:         for pk at P(t) do
        8:           if pk ≻PE pk+1 then
        9:             qk ← pk
        10:            Q(t) ← qk
        11:          end if
        12:        end for
        13:        for qm at Q(t) do
        14:          if (qm ≻PE qm+1) ∨ [(qm ⊁PE qm+1) ∧ (qm+1 ⊁PE qm)] then
        15:            Q(t) ← qm
        16:          end if
        17:        end for
        18:        c′ ← SBX(qk, qm, η)
        19:        C(t) ← c′(randperm(2)) /* randomly select one of the two individuals obtained by SBX into the set C(t) */
        20:        for cn at C(t) do
        21:          if ∃pk ∈ P(t), cn ≻PD pk then
        22:            pk ← cn
        23:          else if ∀pk ∈ P(t), (cn ⊁PD pk) ∧ (pk ⊁PD cn) then
        24:            pk ← cn
        25:          end if
        26:        end for
        27:        if ∃qm ∈ Q(t), cn ≻PE qm then
        28:          qm ← cn
        29:        else if ∀qm ∈ Q(t), (cn ⊁PE qm) ∧ (qm ⊁PE cn) ∧ (∃j ∈ {1, 2, ···, M}, (qm, cn ≥ f̲j) ∧ (qm, cn < f̄j)) then
        30:          if cn ≻ qm then
        31:            qm ← cn
        32:          else if (cn ⊁ qm) ∧ (qm ⊁ cn) then
        33:            if Σ_{j=1}^{M} (cn,j − boxj)² > Σ_{j=1}^{M} (qm,j − boxj)² then /* boxj denotes the coordinates of the origin of the jth grid */
        34:              qm ← cn
        35:            end if
        36:          end if
        37:        else if ∀qm ∈ Q(t), (cn ⊁PE qm) ∧ (qm ⊁PE cn) ∧ (∃j ∈ {1, 2, ···, M}, (qm, cn < f̲j) ∨ (qm, cn ≥ f̄j)) then
        38:          Q(t) ← cn
        39:        end if
        40:      end for
        41:    end for
        42:  end for
        43:  t = t + 1
        44: end while

        Individual c is compared with all individuals in P(t): If individual c PD dominates one or more individuals in P(t), an individual dominated by c is randomly replaced by c. If all of the individuals in P(t) PD dominate individual c, then P(t) does not accept c. If individual c is not PD dominant compared with all individuals in P(t), one individual in P(t) is randomly replaced with c (lines 21–26).

        Finally, whether Q(t) accepts individual c is determined (lines 27–39): In the PE dominance relationship, individual c is compared with all the individuals in Q(t). Through the analysis of the relative position relationship among individuals, there are four scenarios:

        (1) If ∃a ∈ Q(t), a ≻PE c, then Q(t) does not accept individual c.

        (2) If ∃a ∈ Q(t), c ≻PE a, add individual c to Q(t) and delete all individuals that are PE dominated by c.

        (3) If ∀a ∈ Q(t), neither individual c nor a is PE dominant, and c and a are in the same enhanced preference hypercube (EP-HC), continue to determine the Pareto dominance relationship between them: (a) If individual c Pareto dominates a, then c is added to Q(t) and individual a is deleted; (b) If neither individual c nor a is Pareto dominant, the Euclidean distances of individuals c and a from the grid origin are determined, and the individual with the greater Euclidean distance is reserved.

        (4) If ∀a ∈ Q(t), neither individual c nor a is PE dominant, and c and a are not in the same EP-HC, then Q(t) accepts c.
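The four scenarios above can be sketched as one archive-update routine. The dominance tests are passed in as predicates, since their exact definitions depend on the two grids; the helper names and the treatment of the case where a Pareto-dominates c (c is discarded) are our assumptions:

```python
import math

def accept_into_Q(c, Q, pe_dominates, same_ep_hc, pareto_dominates, origin):
    """Return the updated archive after offering candidate c to Q,
    following scenarios (1)-(4) of Section 3.2.2."""
    if any(pe_dominates(a, c) for a in Q):       # (1) c is PE dominated: reject
        return list(Q)
    if any(pe_dominates(c, a) for a in Q):       # (2) c PE dominates: replace
        return [a for a in Q if not pe_dominates(c, a)] + [c]
    new_Q = list(Q)
    for a in Q:
        if same_ep_hc(a, c):                     # (3) same EP-HC: tie-break
            if pareto_dominates(c, a):
                new_Q.remove(a)
            elif pareto_dominates(a, c):
                return new_Q                     # assumed: c is discarded
            elif math.dist(c, origin) > math.dist(a, origin):
                new_Q.remove(a)                  # keep the farther individual
            else:
                return new_Q
    return new_Q + [c]                           # (4) different EP-HC: accept
```

Here scenario (3b) keeps whichever of c and a lies farther from the grid origin, matching line 33 of Algorithm 1.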

        3.3 Dynamic interaction of DM preference

        When t = T, the iteration of the offspring population updating stops. At this time, the DM needs to consider two factors:

        (1) judging whether the individuals of the generated offspring population converge uniformly to the preference region;

        (2) judging whether the initial preferences of the actual MOPs have changed.

        In the above two cases, the MOEA needs to adjust the DM preference. Therefore, we introduce the dynamic change of DM preference interactively: we compare the DM's current population preference information with the preference information of the practical problem, adjust the parameters of the preference degree grid and the preference error grid, and further select individuals in the objective space using the enhanced hypercube formed by the combination of the two grids, yielding the DIP-MOEA proposed in this study (Algorithm 2).

        Algorithm 2 DIP-MOEA
        Input: Uj, μj = φ(fj), Nd, εj, NP, NQ, η, T
        Output: preference Pareto set S(t)
        1: for j = 1, 2, ···, M do
        2:   gridpre_j, griderr_j ← grid(Uj, μj, Nd) /* determine the two grid spaces */
        3: end for
        4: P(t), Q(t) ← update(NP, NQ, η, T)
        5: if Uj = U′j, μj = μ′j then
        6:   return gridpre(Uj, μj) and griderr(Uj, μj)
        7: else if Uj and μj remain unchanged then
        8:   R(t) ← P(t) ∪ Q(t)
        9: end if
        10: for o = 1 to NP + NQ do
        11:   for ro at R(t) do
        12:     if (ro,j ≥ (f̲j + εj)) ∧ (ro,j < (f̄j + εj)) then
        13:       R(t) ← ro
        14:     end if
        15:   end for
        16: end for

        Fig. 4 An illustration of the population updating process in two grids (SBX: simulated binary crossover)

        4 Simulation study

        Our newly proposed preference-based MOEA was numerically simulated and a case study was conducted. All simulations presented here were performed on an Intel Core i7-8850H @ 2.60 GHz CPU with 16.0 GB RAM under Windows 10, and all algorithms were implemented in MATLAB 2020a. We performed two kinds of simulations. One was designed to solve traditional multi-objective test functions, comparing the performance of DIP-MOEA with those of several popular MOEAs on several performance metrics. The other was the multi-objective knapsack problem (MOKP), which is fundamental for solving many practical MOPs. On the basis of the preference settings, the simulation results of DIP-MOEA were compared with those of several preference-based MOEAs.

        In particular, there is limited research on preference-based MOEAs in real-world problems, and most algorithms introduce DM preferences to enhance the solution speed and make the PS converge more quickly. In this study, we propose DIP-MOEA for both the convergence speed of the algorithm and the quantification of preferences. In this section, we set the simulation parameters for the models constructed for the test functions and the real-world problems, with the following two main considerations: First, for solving several common MOEA performance test functions, the preference parameters set here have no practical significance; the ultimate purpose is to illustrate the advantages and disadvantages of DIP-MOEA in terms of solution performance. Second, when setting the preference parameters for the real-world MOKP problems, the practical significance of the preference settings for each preference-based MOEA is considered (Section 4.1). Ultimately, the advantages and disadvantages of DIP-MOEA in solving this problem are determined using different preference setting frameworks.

        4.1 Compared algorithms and parameter settings

        To comprehensively compare and analyze the performance of preference-based MOEAs, we selected representative algorithms from the four different types of DM preference setting methods and compared the MOEAs in simulations. Seven different preference-based MOEAs (g-NSGA-II based on the dominance relationship, AD-NSGA-II and AP-ε-MOEA based on the angle preference relationship, MOEA/D-PRE and RVEA-iGNG based on the weight vector, and PICEA-g and iPICEA-g based on the preference set) were compared with DIP-MOEA in terms of solution performance for practical problems. An SBX operator was used to update the population in NSGA-II combined with all preference-based MOEA frameworks, and a polynomial mutation (PLM) operator was used for population mutation.

        Table 1 Parameter settings for eight preference-based MOEAs

        4.2 Test functions and practical test problems

        The first simulation was on the DTLZ 1–7 test functions (Deb et al., 2002b) with 3-, 5-, and 7-dimensional objectives. We also used the DDMOP 1–3 test functions (containing four decision variables, three optimization objectives, and collective behavioral decisions for multiple unmanned aerial vehicles flying in complex environments), which were used in the latest Congress on Evolutionary Computation (CEC) competition (Premkumar et al., 2021). The second simulation was at three different MOKP scales (Bazgan et al., 2009), comparing the performances of the eight preference-based MOEAs when solving MOPs. The parameter settings for these two types of simulations are shown in Tables 2 and 3, respectively.

        In Table 3, we set MOKPs in three different scenarios (2D, 3D, and 4D) with different population sizes. The expression of MOKP is

        max fi(x) = Σ_{j=1}^{N} pij xj, i = 1, 2, ···, M,
        s.t. Σ_{j=1}^{N} wij xj ≤ ci, i = 1, 2, ···, M, xj ∈ {0, 1},

        where M is the number of backpacks, N is the number of items, pij and wij represent the value and weight of the jth item in the ith backpack, respectively, and ci represents the maximum capacity of the ith backpack. The values and weights of the items in each backpack and the capacity limit of each backpack are set following Bazgan et al. (2009).
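The model can be evaluated in a few lines. This sketch (the function name is ours) returns the M profit objectives and a feasibility flag for a 0/1 item-selection vector:

```python
def mokp_eval(x, profits, weights, capacities):
    """Evaluate selection vector x (x[j] in {0, 1}) for an M-knapsack MOKP:
    objective i is the total profit of the selected items in knapsack i,
    and the solution is feasible only if no capacity c_i is exceeded."""
    objectives = [sum(p * xj for p, xj in zip(row, x)) for row in profits]
    feasible = all(sum(w * xj for w, xj in zip(row, x)) <= c
                   for row, c in zip(weights, capacities))
    return objectives, feasible
```

In a preference-based MOEA, infeasible vectors are typically repaired or penalized before the dominance comparisons are applied.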

        Table 2 Parameter settings for 10 multi-objective test functions

        Table 3 Parameter settings for MOKPs at three different scales

        4.3 Performance metrics

        In the process of preference-based MOEA optimization, the convergence speed of the algorithm can be effectively improved by incorporating the DM preference information of a specific preference region. For general MOEAs, there are many specialized or comprehensive evaluation methods, such as the generational distance (GD), inverted generational distance (IGD), and hypervolume (HV) (Mohammadi et al., 2013). However, the GD metric can measure only the distance from the solution set obtained by the algorithm to the real PF, and cannot reflect the distance between the obtained solution set and the solution preferred by the DM. The IGD and HV metrics in their plain forms are not applicable to preference-based MOEAs.

        Thus, the IGD based on composite frontier (IGD-CF) put forward by Cai et al. (2021) was selected to analyze the performance of the algorithms when the DM preference was set. The IGD-CF does not need the real PF; it extracts the non-dominated solutions from the combined solution sets of all algorithms as a composite frontier for comparison. Then a preference area is defined on the composite frontier by the preference information provided by the DM. Finally, the preference area is measured using the reference point determined by the DM combined with IGD.

        In summary, we selected three performance metrics to evaluate the solutions to the two types of problems: IGD, HV, and IGD-CF. We compared DIP-MOEA with the first six local preference based MOEAs using IGD-CF. The preference points of the preference-based MOEAs (g-NSGA-II, AD-NSGA-II, and AP-ε-MOEA) were consistent with the point settings in the IGD-CF indicator, and the same IGD-CF settings were used when evaluating the other algorithms. The latter two global preference based MOEAs were compared using IGD and HV. The calculation methods for the three metrics are shown in Eqs. (5)–(7):

$$\mathrm{IGD}(P,Q)=\frac{1}{|P|}\sum_{v\in P}d(v,Q),\tag{5}$$

$$\mathrm{HV}(Q)=\mathrm{volume}\Big(\bigcup_{\boldsymbol{q}\in Q}[q_1,r_1]\times[q_2,r_2]\times\cdots\times[q_M,r_M]\Big),\tag{6}$$

$$\mathrm{IGD\text{-}CF}(P^{*},Q)=\frac{1}{|P^{*}|}\sum_{v\in P^{*}}d(v,Q),\tag{7}$$

        where P represents the set of reference points distributed evenly on the PF, Q represents the optimal Pareto solution set obtained by the algorithm, d(v, Q) is the minimum Euclidean distance from an individual v in P to the solution set Q, volume(·) represents the Lebesgue measure, r = (r1, r2, ..., rM)^T is a reference point dominated by all objective vectors of Q in the objective space, P* is the composite frontier in the preferred area, and |P*| denotes the number of elements in the solution set of the composite frontier in the preferred area. The smaller the IGD value, the faster the convergence of the algorithm and the better the distribution of the solution set. The larger the HV value, the faster the convergence and the better the diversity of the solution set. The smaller the IGD-CF value, the faster the convergence of the algorithm and the better the distribution of the solution set in the preferred region.
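As a concrete check of these definitions, the sketch below computes IGD and, for the two-objective case, an exact HV via a staircase sweep; IGD-CF reuses the IGD routine with the composite-frontier points P* in place of the PF samples P. Both objectives are assumed to be minimized, and the function names are illustrative.

```python
def igd(reference_set, solution_set):
    """Mean Euclidean distance from each reference point to its nearest
    solution. With true-PF samples P this is IGD; with the preferred-region
    composite-frontier points P* it is IGD-CF."""
    def d(v, Q):
        return min(sum((a - b) ** 2 for a, b in zip(v, q)) ** 0.5
                   for q in Q)
    return sum(d(v, solution_set) for v in reference_set) / len(reference_set)

def hypervolume_2d(points, ref):
    """Exact HV for two minimized objectives: sweep the points in
    increasing f1 order and sum the staircase rectangles bounded by the
    reference point ref (which must be dominated by every point)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(points, key=lambda p: p[0]):
        if f2 < prev_f2:                      # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

Higher-dimensional HV needs a dedicated algorithm (e.g., WFG or Monte Carlo approximation); the 2D sweep is only meant to make Eq. (6) tangible.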

        4.4 Performance comparison on test functions

        Using the parameter setting of each algorithm (including the reference point, preference angle, and preference radius) in Table 1 and the parameter settings of the test functions in Table 2, eight preference-based MOEAs were tested and the results were compared.

        First, DIP-MOEA was compared with seven preference-based MOEAs: g-NSGA-II, AD-NSGA-II, AP-ε-MOEA, MOEA/D-PRE, PICEA-g, iPICEA-g, and RVEA-iGNG. The preference points in the preference-based MOEAs were consistent with the preference points set by the IGD-CF indicator, and the parameter settings of the simulations are as illustrated in Table 1. There would have been 416 PS pictures if all the simulation results were presented. Therefore, we selected the DTLZ 1–7 and DDMOP 1–3 test functions with the highest function complexity for presentation, with a total of 80 final PS pictures obtained from the simulations. Fig. 5 shows the preference solution sets obtained by DIP-MOEA on the 7-dimensional DTLZ 1–7 and DDMOP 1–3 test functions. The results for the other algorithms on the DTLZ 1–5 test functions are presented in the supplementary materials (Fig. S1).

        As can be seen from Figs. 5 and S1, DIP-MOEA can solve the DTLZ 1–7 and DDMOP 1–3 test functions well, obtaining the preference solution set needed by the DM. Moreover, the set obtained by DIP-MOEA converged to the DM's preference area, showing a good population distribution in the final results. For g-NSGA-II, when the reference point was located in the setting region, the obtained solution set converged to the reference point, but it was widely distributed and could not meet the DM's preference requirement. This shows that g-NSGA-II was seriously affected by the position of the reference point and had poor stability. The other comparison algorithms also visibly produced final preferred solution sets inferior to that of DIP-MOEA. In Figs. 5a–5g and S1, it can be seen that DIP-MOEA had a better population distribution near the DM's preference area on the 7-dimensional DTLZ 1–7 functions than the seven comparison algorithms. The results on the high-dimensional DTLZ 4–7 test functions in Figs. 5 and S1 show that the stability of the DIP-MOEA solution needs to be further improved. Overall, these results show that the preference transformation strategy, population updating strategy, and distributivity-preserving strategy of DIP-MOEA are effective in solving preference-based MOEA test functions.

        When the reference points were in the setting region, the IGD-CF statistics obtained after running each algorithm 50 times independently on the test functions are shown in Table 4.

        Table 4 Mean and variance of the IGD-CF of function evaluations of 3-, 5-, and 7-dimensional DTLZ 1–7 and DDMOP 1–3 test functions with reference points for six algorithms

        As shown in Table 4, when the reference points were located in the setting region, DIP-MOEA achieved the best results on the vast majority of test functions, especially on the complex real-world problems DDMOP 1–3. However, DIP-MOEA did not reach the optimum in two dimensions on DTLZ 1 and DTLZ 2, and its performance on DTLZ 7 was not good, indicating that the robustness of DIP-MOEA needs to be improved. In addition, the IGD-CF values of the six algorithms all reached a low level, which shows that the algorithms achieved convergence to different degrees. However, with an increase in the dimension, the IGD-CF averages of the six algorithms on the DTLZ series test functions showed a general increasing trend, indicating that the overall performances of the six algorithms decreased.

        Table 5 shows the IGD and HV index values of PICEA-g, iPICEA-g, and DIP-MOEA on the 3-, 5-, and 7-dimensional DTLZ 1–7 and DDMOP 1–3 test functions, obtained after 50 independent repetitions. It can be seen that, with an increase in the objective dimension and the complexity of the test functions, the IGD values of the three algorithms showed an overall increasing trend, indicating that the performances of the algorithms gradually decreased. iPICEA-g outperformed PICEA-g and DIP-MOEA in terms of IGD on some test functions, indicating that its overall performance was superior on some DTLZ series test functions. However, when the objective dimension was greater than or equal to 5, the overall performance of iPICEA-g decreased significantly, which was especially obvious on the DTLZ 1 and DTLZ 4 test functions, whereas DIP-MOEA performed well on the high-dimensional DTLZ test functions.

        4.5 Performance comparison in testing MOKP problems

        Simulation results of six preference-based MOEAs on the MOKP problems are shown in Fig. 6. Table 6 shows the HV index and the runtime of the eight algorithms in testing the MOKP problems.

        Fig. 6 shows the distributivity and uniformity of the solution sets of each compared algorithm in solving the MOKP problems in different dimensions. It is obvious that DIP-MOEA obtained a better distribution of the solution set in all dimensions except for the 4-dimensional MOKP. However, the figure is not enough to illustrate other aspects of performance, such as the convergence speed of the algorithms. Therefore, we examine them further in conjunction with the performance indicator comparisons in Table 6.

        It can be seen from the above results that DIP-MOEA, RVEA-iGNG, AP-ε-MOEA, and MOEA/D-PRE performed well in solving partial preference based MOKP problems in different dimensions. They obtained well-distributed results in the preferred regions, and DIP-MOEA was slightly superior to the other comparison algorithms in terms of the HV index. DIP-MOEA required less runtime than MOEA/D-PRE, but slightly more than AD-NSGA-II, indicating that the computational overhead of DIP-MOEA needs to be reduced. Among all the tested preference-based MOEAs, DIP-MOEA was superior in the solution set distribution index results. In summary, DIP-MOEA had good comprehensive performance.

        In addition, to fully reflect the strengths and weaknesses of the algorithm, we counted the runtime of various algorithms on the test functions; the results can be found in the supplementary materials.

        For most test problems, our proposed algorithm DIP-MOEA was superior to most preference-based MOEAs and could quickly obtain a preferred optimal solution that is more suitable for practical applications.

        In solving actual problems, DIP-MOEA has better comprehensive performance, but beyond a certain problem scale, the time cost increases significantly. It is therefore necessary to further study how to reduce the complexity of the algorithm.

        5 Conclusions and future work

        Preference-based MOEAs are designed to obtain the DM's preferred solution set for MOPs, which has important value in practical engineering. However, existing algorithms of this type cannot solve most current preference-based MOPs: various obstacles in modeling DM preferences have made research progress in this field slow. To address this problem, we establish the correspondence between the formal preference and the model preference of the DM by fuzzifying the preference, and reset the individual dominance relationship and updating strategy in the objective space. Finally, we consider the preference interaction problem when solving MOPs, and adjust the DM's preferences in real time by adjusting the grid, retaining the accuracy of the population distribution and the preference area in the final solution set. To test the performance of DIP-MOEA, simulations on the DTLZ functions and MOKP problems were carried out. The results show that DIP-MOEA can quickly solve the test problems and has good performance concerning the distribution of the PF and the uniformity of the final solution set.
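As background to the fuzzification step summarized above, a common way to map a DM's stated preference onto a degree in [0, 1] is a triangular membership function peaking at the most preferred objective value. This is only a generic illustration of such a mapping, not the exact membership functions used by DIP-MOEA.

```python
def triangular_membership(value, low, peak, high):
    """Hypothetical triangular membership function: returns the degree
    (0..1) to which an objective value matches the DM's preference,
    which is maximal at `peak` and vanishes outside (low, high)."""
    if value <= low or value >= high:
        return 0.0
    if value <= peak:
        return (value - low) / (peak - low)   # rising edge
    return (high - value) / (high - peak)     # falling edge
```

A grid over such degrees (one axis per objective) is one way a decision preference grid could be discretized.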

        Table 5 Mean and variance of the IGD and HV metrics of the 3-, 5-, and 7-dimensional DTLZ 1–7 and DDMOP 1–3 test functions of the three test algorithms

        Fig. 6 An illustration of the simulation results of the preference-based MOEAs (partial preference and all preferences) on MOKP: (a) 2D MOKP with six kinds of preference-based MOEAs (partial preference); (b) 3D MOKP with six kinds of preference-based MOEAs (partial preference); (c) 2D MOKP with three kinds of preference-based MOEAs (all preferences); (d) 3D MOKP with three kinds of preference-based MOEAs (all preferences); (e) 4D MOKP with MOEA/D-PRE (all preferences); (f) 4D MOKP with RVEA-iGNG (all preferences); (g) 4D MOKP with DIP-MOEA (all preferences)

        Table 6 Values of the HV metric and runtime

        To solve practical MOPs, three aspects need further research. First, future research should consider whether the preference parameters for a real-world optimization problem can be obtained through the decision-making process of the DM. For example, after a comprehensive evaluation and preference analysis of the MOPs by a group decision-making process, or a multi-participant decision-making process of multiple DMs (Altuzarra et al., 2007; Chiu et al., 2020; Akram et al., 2021), the preference value for each optimization objective of the problem is obtained, and the problem is then solved to obtain a preference PS that meets the DM's preference requirements. Second, the preferences of DMs in practical engineering problems should be further studied and summarized, for example, by finding formal preference characteristics for different kinds of practical MOPs and obtaining the corresponding membership functions and preference error ranges for DMs in different scenarios. Third, the adaptability of DIP-MOEA to large-scale problems, the time complexity in the process of solving MOPs, and the overall performance of the algorithm should all be improved.

        Contributors

        Luda ZHAO designed the research and drafted the paper. Bin WANG and Yihua HU guided the research. Yicheng LU provided suggestions for the example background and processed the data. Luda ZHAO, Bin WANG, and Xiaoping JIANG revised and finalized the paper.

        Compliance with ethics guidelines

        Luda ZHAO, Bin WANG, Xiaoping JIANG, Yicheng LU, and Yihua HU declare that they have no conflict of interest.
