

        Robust global route planning for an autonomous underwater vehicle in a stochastic environment*

        2022-11-23 09:00:18

        Jiaxin ZHANG, Meiqin LIU, Senlin ZHANG, Ronghao ZHENG

        1State Key Laboratory of Industrial Control Technology, Zhejiang University, Hangzhou 310027, China

        2College of Electrical Engineering, Zhejiang University, Hangzhou 310027, China

        3Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China

        Abstract: This paper describes a route planner that enables an autonomous underwater vehicle to selectively complete part of the predetermined tasks in the operating ocean area when the local path cost is stochastic. The problem is formulated as a variant of the orienteering problem. Based on the genetic algorithm (GA), we propose the greedy strategy based GA (GGA), which includes a novel rebirth operator that maps infeasible individuals into the feasible solution space during evolution to improve the efficiency of the optimization, and use a differential evolution planner to provide the deterministic local path cost. The uncertainty of the local path cost comes from unpredictable obstacles, measurement error, and trajectory tracking error. To improve the robustness of the planner in an uncertain environment, a sampling strategy for path evaluation is designed, and the cost of a given route is obtained by sampling multiple times from the probability density functions of its local paths. Monte Carlo simulations are used to verify the superiority and effectiveness of the planner. The simulation results show that the proposed GGA outperforms its counterparts by 4.7%–24.6% in terms of total profit, and that the sampling-based GGA route planner (S-GGARP) improves the average profit by 5.5% compared to the GGA route planner (GGARP).

        Key words: Autonomous underwater vehicle; Route planning; Genetic algorithm; Orienteering problem; Stochastic path cost

        1 Introduction

        Monitoring the environment in an all-round way is often necessary in marine-related industries, such as submarine pipelines, oil exploration, the harbor industry, and aquaculture (Cheng et al., 2021). Autonomous underwater vehicles (AUVs), which are maneuverable and can be equipped with multiple sensors, have attracted great attention from researchers. An AUV is a kind of underwater robot that can reach areas inaccessible to human beings and finish complex tasks automatically. An AUV can adjust its actions in a timely manner based on environmental variations; hence, the route-planning strategy significantly affects the reliability and efficiency of the operation. An AUV is expected to carry out as many tasks as possible with limited battery energy (Han et al., 2021). Due to the complexity of the ocean, a comprehensive route-planning strategy is required to address the route-planning problem when the path cost is stochastic.

        Many noteworthy works have been reported to deal with the autonomous vehicle route-planning problem. The AUV route-planning problem has been represented as a combination of the traveling salesman problem (TSP) and the knapsack problem (KP), where the vehicle is required to autonomously maximize its efficiency, i.e., to use the limited battery capacity to obtain as much profit as possible (Mahmoud Zadeh et al., 2018). The problem has also been introduced as an orienteering problem (OP) (Chou et al., 2021), and it has been discussed in many fields, such as unmanned aerial vehicle (UAV) mission planning (Royset and Reber, 2009; Dorling et al., 2017) and the tourist trip design problem (Vansteenwegen and van Oudheusden, 2007; Schilde et al., 2009). However, the methods proposed in the above studies do not perform well when facing the complex ocean environment. Given that the OP is a non-deterministic polynomial (NP) hard problem (Bagagiolo et al., 2021), heuristic methods are promising for dealing with large and variant instances. A bilevel task planning strategy has been proposed that uses metaheuristics to address the team OP of autonomous surface vehicles (Sun et al., 2022). To solve the OP with heterogeneous task characteristics, a two-phase heuristic approach has been proposed (Ji et al., 2021). Besides, the evolutionary algorithm has been combined with a greedy randomized adaptive search procedure to find the optimal route of the OP (Marinakis et al., 2015). Different from UAVs (Lan et al., 2021), an AUV often faces a highly random ocean environment. Dealing with the randomness of the local path cost in the AUV route-planning problem is full of challenges, and swarm intelligence based evolutionary algorithms are powerful for solving this problem.
        Despite the excellent performance of existing evolutionary algorithms (Mahmoud Zadeh et al., 2018; Abbasi et al., 2020; Sun et al., 2022), the individuals in these methods are often of low efficiency due to the strict constraint on the total cost. The optimization is often seriously hindered because there is usually no clear boundary between the feasible and infeasible domains in the space of definition.

        Fruitful achievements based on various theories exist in AUV path planning. Traditional algorithms, such as the Dijkstra algorithm (Kirsanov et al., 2013), the A* algorithm (Duchoň et al., 2014), the rapidly exploring random tree (RRT) algorithm (Xue et al., 2019), and the fast marching (FM) algorithm (Yu and Wang, 2014), are all effective in handling the AUV path-planning problem. To improve universality and efficiency, heuristic algorithms, such as the particle swarm optimization (PSO) algorithm (Zhuang et al., 2016), the ant colony algorithm (ACA) (Yan, 2021), and the differential evolution (DE) algorithm (Zhang JX et al., 2022), have been introduced and shown to be reliable. However, the uncertainty of the local path cost, which comes from the inconsistency between the plan and the actual operation, is often not considered. The replanning strategy is effective in dealing with cost fluctuation in path and route planning (Zeng et al., 2015; Mahmoud Zadeh et al., 2019), but it is only a remedial measure. Ocean model based prediction has been introduced to enhance the performance of planning (Zeng et al., 2020), but it is rarely useful when the problem is small-scale in space and time. To settle the problem with stochastic path cost, a recourse model has been designed that describes the constraint as a soft one, with a penalty proportional to the timeout (Teng et al., 2004). Regrettably, this method may bring risks to the AUV. Linearization has also been introduced to model the total profit based on the two-stage recourse model (Evers et al., 2014), but it is unsatisfactory when dealing with large instances because of the high computational cost. Inspired by these works, in this paper we propose a sampling strategy to settle the existing challenges.

        To improve the reliability and efficiency of route planning when the task set is large and the local path cost is stochastic, an improved genetic algorithm (GA) based route planner equipped with a sampling-based route-cost estimator is designed in this paper. The GA is modified with the greedy strategy to be more suitable for solving the AUV route-planning problem than traditional methods. The DE local path planner is used to obtain the deterministic path cost, and the total deterministic route cost is replaced by the average of multiple samples drawn from the local paths' probability density functions (PDFs). The structure of the proposed route planner is illustrated in Fig. 1. The contributions of this study can be summarized as follows:

        1. An exclusive GA with efficient evolutionary operators is devised to settle the AUV route-planning problem. To address the individual feasibility problem in heuristic methods, neglected by reported works (Mahmoud Zadeh et al., 2015, 2018), a novel greedy strategy based rebirth operator is proposed. It effectively solves the problem that individuals in the infeasible domain contribute little under the total time constraint, thereby tremendously improving the efficiency of the optimization.

        2. Most existing works regarding AUV route planning consider planning with deterministic local path cost (Mahmoud Zadeh et al., 2019). Taking the ocean complexity into consideration, we model the stochastic local path cost as the superposition of normal and Poisson distributions, based on the fact that the randomness of the path cost comes mainly from the path cost estimation error and the maneuvers caused by dynamic obstacles.

        3. The route cost is obtained by sampling from the PDFs of the local paths. The sampling-based route-cost estimator is integrated into the route planner to evaluate the fitness of each feasible route by sampling local paths in the optimization.

        Fig. 1 Structure of the introduced route planner

        2 Problem formulation

        An AUV is expected to perform as many tasks as possible with limited energy. The tasks to be performed include water quality measurement, underwater photography, terrain detection, data loading and unloading with sensor nodes, etc. Each of the tasks can be reckoned as a task spot with a certain profit according to its importance. The mission of the planner is to find a feasible route that maximizes the total profit of one AUV execution under the circumstance that the battery capacity is limited and the cost of each sub-path is stochastic. A route refers to the sequence of a series of tasks to be completed, while a path is the physical path to be followed when the AUV operates between two task spots. A typical scenario is shown in Fig. 2, where a feasible route is marked. In this section, the AUV route-planning problem with stochastic path cost is formulated mathematically.

        Fig. 2 A sample of an autonomous underwater vehicle (AUV) following the planned path in the mission region

        2.1 AUV global route-planning problem

        Assume that N is the task set. An AUV departs from Ns (launch position) and finally arrives at Nd (destination), where the AUV is supposed to be recovered by the ship. For convenience, we define the set of all the spots as N′ = N ∪ {Ns, Nd}.

        The profit p, which represents the degree of priority of the task, is endowed to each of the tasks in N. Define (i, j) as the arc connecting Ni ∈ N′ and Nj ∈ N′, and tij as the deterministic cost from task Ni to task Nj. Assume that all the arcs (i, j) ∈ A, where A is the set of arcs connecting spots in N′, are accessible. Thus, we can formulate the AUV global route-planning problem on the complete graph G = {N′, A}.

        Let sij ∈ {0, 1} be the binary selection variable. If arc (i, j) is chosen to be part of the route, sij = 1; otherwise, sij = 0. Thus, the total profit of one optional route is given by

        J = Σ_{(i,j)∈A} pj sij,    (2)

        where pj is the profit of task Nj. The starting and destination spots must be included in the route:

        Σ_j s_{sj} = Σ_i s_{id} = 1,    (3)

        each task is visited at most once:

        Σ_i s_{ik} = Σ_j s_{kj} ≤ 1, ∀Nk ∈ N,    (4)

        and the total cost must not exceed the battery duration:

        Σ_{(i,j)∈A} tij sij ≤ Tmax,    (5)

        where tij is the local cost of path Pi,j, while Tmax is the time of battery duration. In this study, the goal of the optimization is to maximize Eq. (2) subject to conditions (3)–(5).
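        To make the objective and the budget constraint concrete, the bookkeeping can be sketched in a few lines of Python. The start/destination labels "s" and "d", the function names, and all profit and cost values here are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of expressions (2)-(5): a route is a task sequence from the
# launch spot "s" to the destination "d"; profits/costs are made-up numbers.
def route_profit(route, profit):
    """Total profit J (Eq. 2): sum of the profits of the visited tasks."""
    return sum(profit.get(n, 0) for n in route)

def route_time(route, t):
    """Total cost: sum of the local path costs t_ij along the route."""
    return sum(t[(route[i], route[i + 1])] for i in range(len(route) - 1))

def feasible(route, t, t_max):
    """Constraints (3) and (5): correct endpoints, within the battery budget."""
    return route[0] == "s" and route[-1] == "d" and route_time(route, t) <= t_max
```

For example, with t = {("s", 1): 2.0, (1, 2): 4.0, (2, "d"): 3.0} and profit = {1: 5, 2: 3}, the route ["s", 1, 2, "d"] earns profit 8 at cost 9, so it is feasible only when Tmax ≥ 9.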

        2.2 Local path with stochastic cost

        The total cost of route R = [Ns, ..., Ni, Nj, ..., Nd] is the sum of all the local path costs. The local cost tij is defined as the sum of the traveling time from Ni to Nj and the task execution time tj:

        tij = dij / vG + tj,    (6)

        where vG is the AUV's ground-referenced speed, dij is the length of the most time-saving path between Ni and Nj, and tj is the time to be spent carrying out task Nj. The beeline path is the shortest, but it is often not chosen because the AUV needs to avoid collision with obstacles and make full use of the ocean current. The most energy-saving path is given by the path planner embedded in the planning system.
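        Eq. (6) maps directly to code; a minimal transcription, with illustrative units (metres, m/s, seconds) and an example value that is ours, not the paper's:

```python
def local_cost(d_ij, v_g, t_j):
    """Deterministic local cost (Eq. 6): travel time d_ij / v_G along the
    planned path plus the execution time t_j of the target task."""
    return d_ij / v_g + t_j

# e.g. a 2060 m path at 2.06 m/s plus a 300 s task costs about 1300 s
```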

        Eq. (6) gives the deterministic form of the local path cost. However, the AUV's future motion is often uncertain, which results in uncertainty of the time cost. The uncertainty comes from the measurement error, trajectory tracking error, and unplanned maneuvers performed to avoid collision with unpredictable obstacles.

        2.2.1 Measurement and trajectory tracking errors

        The working area is covered by the ocean current field, which is often complex and subject to change. The AUV uses on-board sensors to measure its speed, location, and other information in real time, and then controls the actuator to track the planned trajectory. The measurement and trajectory tracking errors are hard to estimate or eliminate under the limitations of the battery, sensors, and computing resources. Consequently, it is rational to assume that the impact of the measurement and tracking errors is statistically uniform; the additional time can be described as

        Δte_ij ~ N(0, σ²),    (7)

        where σ ∝ tij indicates that the longer the path Pi,j is, the larger the additional time Δte_ij may be.

        2.2.2 Unpredictable obstacles

        The on-board collision avoidance system protects the AUV from collisions. However, the detection range of the obstacle avoidance sonar is usually tens to hundreds of meters, which is not always enough to cover the entire mission area. Besides, the positions of some moving obstacles are hard to ascertain in advance. These facts lead to uncertainty of the motion: unplanned maneuvers have to be performed to avoid collisions. We assume that the AUV adopts a consistent maneuvering strategy during the movement, and that each maneuver introduces the same additional traveling time Δtm. The number of unplanned maneuvers nij on path Pi,j is modeled as a Poisson random variable:

        nij ~ P(λ), λ ∝ tij.    (8)

        Thus, the additional traveling time of the travel from Ni to Nj caused by unplanned maneuvers is given by

        Δtm_ij = nij Δtm.    (9)

        The above inference holds in the scenario defined by Eqs. (7)–(9). Note that the formulation of the stochastic path cost is decoupled from the planner. In other words, the model described can be replaced by others that coincide better with specific environments.
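        One realization of the stochastic local cost can be sketched as below. This is our illustration, not the authors' code: the ratios σ = 0.2 tij, λ = 2×10⁻³ tij and Δtm = 60 s are the values used later in Section 5.2, and the Poisson draw uses Knuth's multiplication method.

```python
import math
import random

def sample_poisson(lam, rng):
    """Draw from Poisson(lam) by Knuth's multiplication method."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_local_cost(t_ij, rng, sigma_ratio=0.2, lam_ratio=2e-3, dt_m=60.0):
    """One realization of the stochastic cost: the deterministic t_ij plus a
    Gaussian measurement/tracking error (Eq. 7) and Poisson-many unplanned
    maneuvers (Eq. 8), each adding a fixed delay dt_m (Eq. 9)."""
    error = rng.gauss(0.0, sigma_ratio * t_ij)
    maneuvers = sample_poisson(lam_ratio * t_ij, rng)
    return t_ij + error + maneuvers * dt_m
```

Averaged over many draws, a 1000 s leg under these assumed parameters costs about 1000 + 2 × 60 = 1120 s in expectation.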

        3 Greedy strategy based genetic algorithm (GGA): solver for the orienteering problem

        The large-scale OP described by expressions (2)–(5) is nonlinear due to the stochastic path cost (Evers et al., 2014). The GA is a powerful heuristic tool to settle such a problem. It uses chromosomes to represent possible solutions and uses a variety of evolutionary operators to iteratively find the optimal solution of the objective function. Generally, these operators include selection, crossover, and mutation. Based on the improved operators of the GA, a rebirth operator is proposed and thus a new route-planning method is generated, namely the greedy strategy based genetic algorithm (GGA).

        3.1 Solution space and encoding

        Every Ni ∈ N may be visited in any order during the AUV's motion. If we put aside the prohibition of task repetition, the solution space of the optimization problem will be an l-dimensional discrete space:

        where l = |N| is the cardinal number of the task set. The population space of the GA is hence defined as

        where K is the population size and Xi is the ith individual solution in population X. The optimization goal is to maximize the total profit J and find the corresponding task sequence X* ∈ S. Notice that because of the restrictions defined by conditions (3) and (4), not all the points in S are feasible to be chosen as possible solutions.

        In the chromosome of one solution, each task is represented by its sequence number, which is an integer, and the sequence numbers of the tasks are arranged in order. Notice that the number of tasks to be carried out is unknown before the departure, so the optimization process should be carried out simultaneously in the l-dimensional space and all of its arbitrary-dimensional subspaces. Therefore, the chromosome length is fixed to l. If the number of planned tasks is fewer than l, the vacancies will be filled with zeros. In Fig. 3, an example of one feasible route in a certain task set and the corresponding chromosome are presented.

        Fig. 3 An example route and its corresponding chromosome

        3.2 Selection,crossover, and mutation

        3.2.1 Selection operator

        The selection operator Ts: S^K → S is a random mapping, which selects one individual from the population.

        The total profit is used as the fitness to assess each individual in the population, and the roulette wheel strategy is adopted by the selection operator. Specifically, the probability of each individual being selected as a parent is related to its fitness and the population's fitness, satisfying

        P(X°j) = J(X°j)^α / Σ_{k=1}^K J(X°k)^α,

        where J is the fitness function defined in Eq. (2), X°j is one individual in the parent population X°, and 0 < α < ∞ (typically, α = 1). The whole X° is generated by repeating the selection operation.

        Additionally, we adopt the elite retention policy (Zhang H et al., 2020), which allows the most adaptable individuals in each generation to be inherited directly by the next generation. In each generation, a certain proportion of the best-performing individuals are copied directly into the next generation.
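        The selection step with elite retention can be sketched as follows. The fitness values, α = 1, and the elite fraction below are assumptions for illustration, not the paper's settings.

```python
import random

def roulette_select(population, fitness, alpha=1.0, rng=random):
    """Pick one individual with probability proportional to fitness^alpha."""
    weights = [fitness(x) ** alpha for x in population]
    r = rng.random() * sum(weights)
    acc = 0.0
    for x, w in zip(population, weights):
        acc += w
        if acc >= r:
            return x
    return population[-1]   # guard against floating-point round-off

def next_generation(population, fitness, elite_frac=0.1, rng=random):
    """Elite retention plus roulette selection for the remaining slots."""
    elites = sorted(population, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * len(population)))
    selected = [roulette_select(population, fitness, rng=rng)
                for _ in range(len(population) - n_elite)]
    return elites[:n_elite] + selected
```

In the GGA the selected parents would then pass through crossover, mutation, and rebirth before entering the next generation.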

        3.2.2 Crossover operator

        Crossover is a fundamental aspect of evolution, which enables individuals in the population to exchange information by exchanging chromosome segments. The crossover operator Tc: S² → S² is also a random mapping.

        Because a radical evolution strategy may frequently lead individuals into the infeasible space, single-point crossover is adopted. Assume that the parents to be operated on are Xp1 and Xp2. Each is divided into two segments at a crossover point generated arbitrarily, as follows:

        Xp1 = [Xp1,1, Xp1,2], Xp2 = [Xp2,1, Xp2,2].

        The offspring Xo1 = [Xp1,1, Xp2,2] and Xo2 = [Xp2,1, Xp1,2] are generated by exchanging chromosome segments. The optimization has to be executed in subspaces with different dimensions; hence, the two crossover points in Xp1 and Xp2 are not required to be the same, which differs from the situation in traditional methods. Because task repetition may exist in Xo1 and Xo2, task deletion is included in the crossover operator to satisfy inequality (4). Thereafter, zeros will be used to fill the vacancies if any offspring's length is shorter than l. The crossover operator is illustrated in Fig. 4.

        Fig. 4 An example of the crossover operation
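        A sketch of this single-point crossover with independent cut points, duplicate-task deletion, and zero padding (helper names and chromosomes are illustrative):

```python
import random

def crossover(p1, p2, l, rng=random):
    """Single-point crossover with independent cut points in each parent."""
    c1 = rng.randrange(1, l)   # cut point in parent 1
    c2 = rng.randrange(1, l)   # cut point in parent 2 (may differ)
    o1 = _repair(p1[:c1] + p2[c2:], l)
    o2 = _repair(p2[:c2] + p1[c1:], l)
    return o1, o2

def _repair(genes, l):
    """Drop padding zeros and repeated tasks, then refill to fixed length l."""
    seen, route = set(), []
    for g in genes:
        if g != 0 and g not in seen:
            seen.add(g)
            route.append(g)
    return route + [0] * (l - len(route))
```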

        3.2.3 Mutation operator

        Mutation is a vital mechanism to ensure that the GA converges to the optimal solution set in probability. The mutation operator Tm: S → S, carried out on a single individual, is a random mapping.

        Traditional mutation operators can hardly cope with such a complex optimization problem. Therefore, we design four optional suboperators based on the existing methods. For X, the individual to be operated on, these suboperators are defined as follows:

        1. Replacement: Replace task Ni ∈ X by task Nj ∉ X. Both Ni and Nj are selected with equal probability from all the candidates. This operator is designed to perform a small-range optimization near the previous solution.

        2. Insertion: Select a new task Nj ∉ X and generate the insertion position i < l randomly; then, insert Nj between xi and xi+1, where xi, xi+1 ∈ X are two adjacent genes in X. Since gene xi corresponds to a certain task, in what follows we use task xi in X for simplicity. By this operation, a new task is added to the route without changing the existing ones.

        3. Swap: Switch the positions of tasks xi, xj ∈ X, whose indexes are selected randomly. This operation explores whether the fitness can be improved by changing the order of two tasks.

        4. Inversion: The randomly selected task sequence [xi, xi+1, ..., xj] ∈ X is inverted to replace itself. This operator adjusts the order of tasks more efficiently.

        These suboperators are illustrated in Fig. 5. One of the suboperators is chosen randomly each time the mutation is performed.

        Fig. 5 Optional suboperators in the mutation operation (the dotted and solid lines represent the routes before and after being operated on, respectively)
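        The four suboperators, acting on the nonzero (task) part of a zero-padded chromosome, can be sketched in one function; task ids and the uniform choice among suboperators are illustrative assumptions.

```python
import random

def mutate(chrom, all_tasks, l, rng=random):
    """Apply one randomly chosen suboperator of Section 3.2.3 and re-pad."""
    route = [g for g in chrom if g != 0]
    unused = [t for t in all_tasks if t not in route]
    op = rng.choice(["replace", "insert", "swap", "invert"])
    if op == "replace" and route and unused:
        route[rng.randrange(len(route))] = rng.choice(unused)
    elif op == "insert" and unused and len(route) < l:
        route.insert(rng.randrange(len(route) + 1), rng.choice(unused))
    elif op == "swap" and len(route) >= 2:
        i, j = rng.sample(range(len(route)), 2)
        route[i], route[j] = route[j], route[i]
    elif op == "invert" and len(route) >= 2:
        i, j = sorted(rng.sample(range(len(route)), 2))
        route[i:j + 1] = reversed(route[i:j + 1])
    return route + [0] * (l - len(route))
```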

        3.3 Rebirth operator with greedy strategy

        Due to the limitation of the AUV's endurance, some newly generated individuals do not satisfy inequality (5). To settle this problem, discarding the infeasible individuals or giving up the final tasks (Evers et al., 2014) is a practical method. However, these processes are nearly equal to discarding part of the information acquired during the evolution. To improve the efficiency of the optimization using infeasible individuals, a novel rebirth operator is designed. The rebirth operation maps an individual from the infeasible solution space into the feasible solution space:

        Tr: ∁SΩ → Ω,

        where Ω = {X | X ∈ S, TX ≤ Tmax} is the feasible solution space and ∁SΩ is the complement space of Ω in S.

        This operator aims at minimizing the profit loss when some tasks have to be abandoned to satisfy the total cost restriction. Therefore, the problem can be formulated as follows:

        min Σ_{i=1}^n yi pi,
        s.t. TX − Σ_{i=1}^n yi ΔTi ≤ Tmax, yi ∈ {0, 1},    (18)

        where X = [x1, x2, ..., xn] is the individual to be optimized, yi determines whether task xi will be deleted, and ΔTi is the running time that can be saved by deleting task xi. This problem can be regarded as a variant of the KP, which is NP-complete. Problem (18) has to be solved several times in each iteration during the evolution; hence, a huge amount of computation would be required to find the global optimal solution. The greedy algorithm is an effective tool to find a local optimal solution of the KP. The proposed rebirth operator is inspired by the greedy strategy, which makes full use of the domain knowledge. The basic idea of the greedy strategy is to first get rid of the tasks with low profits and high costs.

        Following the crossover and mutation operations, the rebirth operation will be performed if TX > Tmax. The cost effectiveness of xi is defined as ρi, satisfying

        ρi = pi / ΔTi.

        The task with the lowest cost effectiveness is deleted, and the new individual X′ is formed by the remaining tasks. To ensure that the route defined by X′ can be completed within the battery life, the operation is repeated until

        TX − Σ_{q=1}^k ΔTq ≤ Tmax,

        where k is the number of times that the deletion is executed.

        Based on the greedy strategy, the newly designed rebirth operator can save those individuals that were destined to be weeded out. As a result, the average fitness of the population is improved, giving the evolution a boost. Meanwhile, because the new information does not come from the existing feasible individuals, the diversity of the population is ensured, which allows the optimization to continue without premature convergence.
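        The rebirth operation can be sketched as follows. This is a simplified illustration: straight-line distances between assumed 2D positions stand in for the DE-planned path costs, and the endpoint labels, profits, and budget are made-up.

```python
# Greedy rebirth sketch (Section 3.3): while the route exceeds the budget,
# delete the interior task with the lowest cost effectiveness p_i / dT_i.
def rebirth(route, profit, pos, t_max):
    def dist(a, b):
        return ((pos[a][0] - pos[b][0]) ** 2 + (pos[a][1] - pos[b][1]) ** 2) ** 0.5

    def total_time(r):
        return sum(dist(r[i], r[i + 1]) for i in range(len(r) - 1))

    def time_saved(r, i):
        """dT_i: detour through r[i] minus the shortcut that skips it."""
        return dist(r[i - 1], r[i]) + dist(r[i], r[i + 1]) - dist(r[i - 1], r[i + 1])

    route = list(route)
    while total_time(route) > t_max and len(route) > 2:
        # interior tasks only; the start and destination spots must stay
        i_worst = min(range(1, len(route) - 1),
                      key=lambda i: profit[route[i]] / (time_saved(route, i) + 1e-9))
        del route[i_worst]
    return route
```

Note that dT values change after every deletion, which is why the list of ΔT is renewed inside the loop (as in Algorithm 1).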

        The algorithm of the whole GGA is given in Algorithm 1.

        4 Global route planning with stochastic local path cost

        The stochastic path cost leads to the following situation: the AUV may have to give up subsequent tasks because it has already spent too much time completing the early tasks, and the remaining energy is not enough to finish the plan completely. If the stochastic feature of the path cost is not considered, the optimization result may be less robust, which causes a high probability of obtaining an unsatisfactory result in practice.

        Algorithm 1 Greedy strategy based genetic algorithm (GGA) for route planning
        1: Input the geographical map, ocean current map, and information of task set N′
        2: Set the maximum number of iterations imax and the proper population size K
        3: Generate the initial population X0
        4: for i = 0 to imax do
        5:     Xi+1 = ∅
        6:     Calculate the fitness of each X ∈ Xi
        7:     for j = 1 to K do
        8:         Select the parents Xp1 and Xp2 from Xi by roulette, and generate the crossover points randomly. X′ = Tc(X) = [Xp1,1, Xp2,2] is the first intermediate individual
        9:         if there exist repetitive nodes {Nr} then
        10:            Delete repetitive tasks in X′
        11:        end if
        12:        Select one mutation suboperator to operate on X′. X′′ = Tm(X′) is noted as the second intermediate individual
        13:        while TX′′ > Tmax do
        14:            Renew the list of ΔT
        15:            for k = 1 to |X′′| do
        16:                ρk = pk ΔTk⁻¹
        17:            end for
        18:            Nd = arg min_{xk∈X′′} ρk; X′′ = X′′ − Nd
        19:        end while
        20:        Xnew = X′′
        21:        Add Xnew into Xi+1
        22:    end for
        23: end for
        24: Xoptimal = arg max_{X∈Ximax} J(X)
        25: return Xoptimal

        4.1 Local path planning

        The optimal local path is the physical path Pi,j from Ni to Nj. By following it, the AUV departing from Ni and targeting Nj can accomplish its journey in the minimum time. The AUV's water-referenced velocity vA is constant if we assume that its thrust power is constant (Zeng et al., 2020). The AUV's ground-referenced velocity vG is the resultant of vA and the water velocity vC. The energy consumption is consequently proportional to the time consumption. Therefore, the optimal path from Ni to Nj satisfies the following expression:

        where T is the forbidden region occupied by land and other obstacles, and MAUV is the kinematic model of the AUV.

        The map of the operation water area and the detectable ocean current and obstacles are sent to the local path planner before the departure. The planner outputs the optimal path, by following which the AUV can avoid any collision and finally arrive at Nj. Moreover, the ocean current can be used effectively and thus energy is saved.

        The DE-based path planner is used as the local path planner. DE is an improved version of the GA, and it is usually used for multi-dimensional real-valued functions. A path in DE is defined by a series of way points (WPs), and the chromosome that represents this path is formed by the coordinates arranged in order. For example, P = [N1, WP1, WP2, N2] forms a unique path from N1 to N2. However, such a path composed of several straight lines is difficult for the AUV to follow, because sharp maneuvers are tough and uneconomical. The B-spline method is an efficient tool for path smoothing (Cai and Yao, 2020); therefore, it is adopted to smooth the local path to satisfy the constraint of the AUV's kinematic model. This process is shown in Fig. 6.

        The standard evolutionary operations (Mahmoud Zadeh et al., 2019) are included in the planner. Chromosomes are composed of genes, which are the coordinates of the control points arranged in sequence. During the optimization, the evaluator assesses each path, and the result is deemed the basis of the evolution. In the global route-planning problem, the task set N contains l elements. Navigation is possible between any two of the tasks in the absence of strong prior knowledge (which is common). Consequently, the l × l triangular path cost matrix C is formed and sent to the route planner to form the basis of the subsequent global optimization.

        Fig. 6 A local path from N1 to N2

        4.2 Sampling-based route evaluator

        Taking the uncertainty of the path cost into consideration, the AUV may have to give up some tasks for the sake of its safety. Different from the planning stage, when the AUV finds that it has spent too much time on the initial tasks, which have already been completed, it has to discard the tasks at the end of the route. Therefore, the total profit in one test is represented as

        J′ = J − V,

        where V is the profit shortage caused by the deletion of the k tasks at the end of the sequence.

        Assume that X = [x1, x2, ..., xn] is one potential solution of the instance. The total profit loss brought about by the task deletion is naturally the superposition of the deleted tasks' profits. For simplicity, we define the Euclidean distance ‖xi − xi+1‖ as the cost from xi to xi+1 (we use the normal form to represent a point (e.g., xi) and the bold form to represent the corresponding position for calculation (e.g., xi)). The time saved by deleting xj (2 < j < n) will be

        ΔTj = ‖xj−1 − xj‖ + ‖xj − xj+1‖ − ‖xj−1 − xj+1‖.

        If xj and xj−1 are deleted simultaneously, the time saved will be

        ΔT{j,j−1} = ‖xj−2 − xj−1‖ + ‖xj−1 − xj‖ + ‖xj − xj+1‖ − ‖xj−2 − xj+1‖.

        It follows that ΔT{j,j−1} ≠ ΔTj + ΔTj−1, which indicates that the system does not satisfy the superposition principle (i.e., the problem is nonlinear).
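        The failure of superposition is easy to verify numerically; the coordinates below are arbitrary illustrative points, not data from the paper.

```python
# Deleting two adjacent tasks together does not save the sum of the times
# saved by deleting each alone, because the shortcut edges differ.
def d(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

x = [(0, 0), (1, 2), (2, -1), (3, 0)]   # x0..x3; x1 and x2 are interior tasks

dT1 = d(x[0], x[1]) + d(x[1], x[2]) - d(x[0], x[2])    # delete x1 alone
dT2 = d(x[1], x[2]) + d(x[2], x[3]) - d(x[1], x[3])    # delete x2 alone
dT12 = d(x[0], x[1]) + d(x[1], x[2]) + d(x[2], x[3]) - d(x[0], x[3])  # both

assert abs(dT12 - (dT1 + dT2)) > 1e-6   # superposition fails
```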

        Consequently, despite the mathematical formulation of the stochastic local path cost presented in Eqs. (9)–(12), the nonlinearity of the problem makes it difficult to describe the time consumption of one route analytically in a large instance, because a slight difference in one local path cost may dramatically affect the whole route. Therefore, the sampling-based method is adopted to estimate the cost of possible routes.

        The expectation of the time cost of a local path is given by Eq. (12). However, the total time expectation of a route is not the superposition of those of its local paths. Estimation is implemented to replace the deterministic local path cost with the average value of the sample distribution (SD), which is obtained from m Monte Carlo simulations. Specifically, for route X, we have

        T̂X = (1/m) Σ_{q=1}^m T_X^(q),

        where T_X^(q) is the total time cost of route X in the qth sample.

        Fig. 7 Probability density function (PDF) and sample distribution (SD) of the local path Pi,j
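        The sampling-based estimator can be sketched as below, assuming the normal-plus-Poisson cost model of Section 2.2 with the parameter choices given later in Section 5.2; the function name and defaults are ours.

```python
import math
import random

def estimate_route_cost(leg_costs, m=200, sigma_ratio=0.2,
                        lam_ratio=2e-3, dt_m=60.0, rng=None):
    """Estimate the total time of a route as the mean of m Monte Carlo
    rollouts, each drawing every local path cost from its PDF (Gaussian
    error plus Poisson-many fixed-length maneuver delays)."""
    rng = rng or random.Random()
    total = 0.0
    for _ in range(m):
        rollout = 0.0
        for t in leg_costs:
            n, p, limit = 0, 1.0, math.exp(-lam_ratio * t)
            while True:                       # Knuth's Poisson sampler
                p *= rng.random()
                if p <= limit:
                    break
                n += 1
            rollout += t + rng.gauss(0.0, sigma_ratio * t) + n * dt_m
        total += rollout
    return total / m
```

During the evolution, this estimate replaces the deterministic route time when checking the budget constraint and ranking individuals, which is what makes the resulting plan robust to cost fluctuations.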

        5 Simulation results and discussion

        In this section, the GGA and the sampling-based route evaluator are tested to clarify their superiority. The GGA containing the newly designed operators is included in the planning system, where a DE-based path planner is used as the local path planner. The site of the tests is set in the sea around Luxi Island, Zhejiang Province, China, covering an area of 5 km × 7 km. The kinematic parameters of the AUV are set according to the AUV TH-B050R produced by Tianhe Maritime, Xi'an, China. A test instance of the route-planning problem with deterministic path cost is solved using the GGA planner proposed in Section 3. Furthermore, the sampling-based route evaluator described in Section 4 is tested when the stochastic path cost is taken into account. The simulations are implemented in MATLAB R2017b on a computer with an Intel i7-8700 CPU and 16 GB RAM.

        5.1 Route planning with deterministic path cost

        The AUV works off the coast of Luxi Island. Forty tasks whose importance is quantified as different dimensionless integers are scattered in this area, making up the task set. The AUV launched at the starting spot is urged to complete some of these tasks and arrive at the recovery station to be picked up. In this subsection, the local path cost is considered deterministic, so the local paths and their costs are determined by the DE-based path planner.

        The AUV travels at a constant ground-referenced speed of 4 kn (about 2.06 m/s), and the battery duration is set as 10 800 s. The GGA is tested and the result is compared with those from three GAs that use traditional operators. The differences between the GGA and the others are described as follows:

        1. GA1: Traditionally, there is only one form of mutation in the GA. For a chromosome to be operated on, the mutation operator selects one of the tasks and turns it into another. In other words, only replacement is adopted in the mutation process. By contrast, the GGA adopts the four kinds of mutation suboperators designed in Section 3.2.

        2. GA2: Uniform crossover is deemed advanced in many pieces of research. We compare the GA where uniform crossover is used with our algorithm, which applies the single-point crossover strategy.

        3. GA3: The greedy strategy based rebirth operator is newly proposed in this research. Following the traditional method, GA3 deletes those individuals that are in the infeasible space and generates new ones to fill the gaps. We investigate the improvement due to the rebirth operator.

        As for the items not mentioned, the test algorithm (GGA) is consistent with the others. In other words,there is only one difference between the test algorithm and each control algorithm. The parameters of these algorithms are listed in Table 1.

        The planning result of the GGA in one test is presented in Fig. 8: the AUV departs from the starting station (marked by the triangle), passes through the task spots marked by circles (task profits are marked next to these circles), and finally reaches the destination (marked by the square). The route given by the planner is represented by a set of dashed lines, and the task spots to be visited are marked by crosses. Note that the dashed lines indicate only routes, not physical paths, which explains why they intersect the terrestrial area painted in black.

        To draw a convincing conclusion, we perform 60 Monte Carlo simulations and list the results in Table 2, wherein the values of the residual time, total profit, and CPU time are the averages over the Monte Carlo simulations. In Fig. 9, the variation of the total profit of the algorithms during the evolution is illustrated. Finally, the GGA earns a profit of 1051.2, which is 4.7% higher than that of the algorithm with the single mutation operator (GA1) (Ferreira et al., 2014) and 19.9% higher than that of the algorithm with the uniform crossover strategy (GA2) (Mahmoud Zadeh et al., 2018). The operations of insertion, inversion, and swap are essentially equivalent to repeatedly performing the replacement, which is the most classical mutation operation. However, they are more purposeful, which enhances the local optimization capability of the algorithm. The uniform crossover is efficient in many cases, but the sparsity of the feasible solutions of the route-planning problem undermines its validity severely. In other words, too many individuals in the new generation will be infeasible if uniform crossover is adopted. Furthermore, the GGA outperforms GA3, which does not adopt the rebirth operator, by 24.6% in terms of total profit. The stability of the algorithms is shown in Fig. 10. In addition to the more satisfactory median, the deviation of the GGA results is significantly lower than that of its competitors.

        Table 1 Parameter setting of GAs

        5.2 Route planning with stochastic path cost

        Table 2 Results of 60 Monte Carlo simulations

        Fig. 10 Comparison of the stability of GGA, GA1, GA2, and GA3 (the central mark and the bottom and top edge marks on each box indicate the 50th, 25th, and 75th percentiles, respectively; the whiskers extend to the most extreme data points not considered outliers, while outliers are plotted individually by “+”)

        In this subsection, the performance of the sampling-based route evaluator is discussed when the local path cost is stochastic. The GGA route planner (GGARP) and the sampling-based GGARP (S-GGARP) are tested. The AUV carries out the same tasks as in the previous tests in the same ocean region, and its speed follows the previous setting. The battery duration is set as 7200 s, and the stochastic local path cost is set as described in Section 2.2. We set σ = 0.2tij and λ = 2×10−3tij, where tij is calculated by the DE-based local path planner beforehand. Besides, we assume that each unplanned maneuver takes an additional 60 s to avoid a collision, i.e., Δtm = 60 s.
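        Under these settings, one plausible reading of the sampling-based evaluation is sketched below: each local path cost is drawn as the deterministic time tij plus Gaussian moving noise with standard deviation σ = 0.2tij, plus a Poisson-distributed number of unplanned avoidance maneuvers with mean λ = 2×10−3 tij, each costing Δtm = 60 s. The exact distribution forms are defined in Section 2.2 and not reproduced here, so the Gaussian/Poisson decomposition and the function names are assumptions for illustration only.

```python
import math
import random

DT_M = 60.0  # extra time per unplanned avoidance maneuver (s), assumed

def sample_path_cost(t_ij, sigma_ratio=0.2, lam_ratio=2e-3, rng=random):
    """Draw one realization of the stochastic local path cost."""
    sigma = sigma_ratio * t_ij   # moving-uncertainty standard deviation
    lam = lam_ratio * t_ij       # mean number of unplanned maneuvers
    noise = rng.gauss(0.0, sigma)
    # Poisson sample via Knuth's inversion method (no numpy needed)
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            break
        k += 1
    return t_ij + noise + k * DT_M

def estimate_route_cost(leg_times, n_samples=1000, rng=random):
    """Monte Carlo estimate of a route's expected total travel time."""
    total = 0.0
    for _ in range(n_samples):
        total += sum(sample_path_cost(t, rng=rng) for t in leg_times)
    return total / n_samples
```

For a 1000 s leg, the expected sampled cost is roughly 1000 + 2×60 = 1120 s, since λ = 2 maneuvers are expected on average under these assumed parameters.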

        One typical planning result is presented in Fig. 11. Most of the tasks in the routes given by the two planners are the same, while the differences lie in the order of these tasks and the selection of certain tasks in the initial and final moving stages. S-GGARP tends to arrange the tasks with low profits at the end of the route. By doing so, when the running time exceeds the plan so much that the AUV has to give up the last tasks, the profit shortage can be minimized. For example, S-GGARP gives up the task with profit 9 at the beginning of the voyage and includes the task with profit 19 at the end of the route. GGARP does not take the stochastic path cost into consideration; even so, the expected returns given by GGARP and S-GGARP are close. It is obvious from Fig. 12 that GGARP maintains its advantage during the evolution, which suggests that the expected profit of GGARP is steadily higher than that of S-GGARP.

        Fig. 11 Routes given by GGARP (a) and S-GGARP(b)

        However, the actual profit that the AUV can achieve by following the GGARP route is often lower than expected. The reason is that S-GGARP is more discreet, meaning that it inclines to ensure that the AUV can complete the planned tasks as fully as possible when unplanned events occur. By contrast, GGARP tries to use every second available to improve the total profit and ignores the future uncertainty. If the situation deviates from the plan, the AUV may have to give up some tasks with high profits. For example, the deterministic running time values of the routes in Fig. 11 are 7140.5 s (GGARP) and 6721.5 s (S-GGARP). If an obstacle appears and it takes the AUV 60 s to avoid the collision, the total time consumption becomes 7200.5 s (GGARP) and 6781.5 s (S-GGARP). The AUV following the GGARP route has to give up the task with the profit of 59 to avoid running out of energy on its way, while the AUV following the S-GGARP route has to give up nothing. Finally, the obtained profit of the former (746) is lower than that of the latter (815). To verify the performance of the planners, each planning result of the 60 Monte Carlo simulations is tested in 100 different environments, where the ocean current and moving obstacles are generated randomly according to the preset parameters. The results show that the tested profit of the GGARP route is obviously lower than the expected one, while that of the S-GGARP route is almost consistent with the expected profit (Fig. 13).
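        The abandonment mechanism in the example above can be sketched with a simplified model: the AUV collects task profits in route order and drops all trailing tasks once the realized running time would exceed the battery budget. The paper's actual feasibility check (which must also reserve time to reach the recovery station) is more involved, so this function and its sample numbers are purely illustrative.

```python
def realized_profit(leg_times, profits, budget):
    """Profit actually collected when trailing tasks are dropped as soon
    as the realized running time would exceed the battery budget.

    leg_times[i] is the realized travel time to reach task i, and
    profits[i] is that task's reward; the final leg to the recovery
    station is assumed folded into the last entry.
    """
    elapsed, total = 0.0, 0
    for t, p in zip(leg_times, profits):
        if elapsed + t > budget:
            break  # abandon this task and everything after it
        elapsed += t
        total += p
    return total

# Illustrative numbers: a 60 s avoidance maneuver pushes the total
# past the 7200 s budget, so the trailing high-profit task is lost.
print(realized_profit([3000.0, 3000.0, 1200.5], [30, 40, 59], 7200.0))  # -> 70
```

This captures why a greedy plan that consumes nearly the whole budget (as GGARP does) is fragile: a single unplanned maneuver can forfeit the most valuable trailing task.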

        Fig. 12 Expected profits of GGARP and S-GGARP (average of 100 Monte Carlo simulations)

        Fig. 13 Expected and tested profits of GGARP and S-GGARP in Monte Carlo simulations

        Despite the lower expected profit, S-GGARP dramatically outperforms its counterpart in the tests. Table 3 lists the performances of the two algorithms over 100 tests. It is difficult for the AUV to complete the GGARP route because the measurement and trajectory tracking errors and unpredictable obstacles are not taken into account. This leads to the abandonment of some high-profit tasks when the vehicle takes too much time in the early moving stage. Consequently, the average profit of the GGARP route in the tests is 6.4% lower than the expected one. Meanwhile, S-GGARP takes the localization and trajectory tracking uncertainty into consideration. Therefore, the total profits in the plan are consistent with those in the test, and S-GGARP is 5.5% more profitable than GGARP. Besides, the lower standard deviation reflects that S-GGARP is more robust and that its result is more predictable. S-GGARP spends more computing time than GGARP, because the sampling method consumes considerable computing resources. However, the limited additional computing is acceptable, because the computing speed can be dramatically improved using the evolutionary algorithm’s parallel version with a system-on-a-programmable-chip (SoPC) (Tsai et al., 2011), and the route planning does not need to be completed online in most cases.

        Table 3 Performance of GGARP and S-GGARP when the path cost is stochastic

        To further clarify the superiority of the proposed planner, planners using the three GAs with unimproved operators (GA1RP, GA2RP, and GA3RP) are also tested in the scenario in this subsection. The results of 40 Monte Carlo tests are listed in Table 4. The results indicate that the proposed S-GGARP significantly improves the optimization results of the existing methods.

        Table 4 Expected and tested profits of route planners when the path cost is stochastic

        6 Conclusions

        In this study, a GA-based AUV route planner has been proposed using the novel rebirth operator and the sampling-based route evaluator. The developed planner leads the AUV to carry out a series of tasks selectively within the limited battery life and maximize the total profit when the measurement and trajectory tracking errors are non-negligible and dynamic obstacles are unpredictable. The deterministic local path cost between any two tasks has been calculated by the DE-based planner and then delivered to the global route planner. In the proposed GGA, traditional evolutionary operators have been improved to enhance the algorithm’s capability for the AUV route-planning problem. Besides, the novel greedy strategy based rebirth operator has been integrated into the evolution process to improve the performance of the algorithm when dealing with large instances. Based on the deterministic local path cost, the stochastic local path cost has been modeled as a random variable influenced by the moving uncertainty and undiscovered dynamic obstacles. The evaluation of each possible route has been executed by sampling from the PDFs of the local path costs, and the sampling-based route-cost estimator has been embedded in the route planner. The simulation results demonstrated that the planner performs effectively in the uncertain ocean environment, and the superiority of the proposed planner over traditional GA-based route planners has been verified.

        Future work will focus on extending the proposed algorithms to multi-AUV collaborative operation, including task assignment, online coordination, and low-communication information sharing.

        Contributors

        Jiaxin ZHANG designed the research. Jiaxin ZHANG and Meiqin LIU processed the data. Jiaxin ZHANG drafted the paper. Senlin ZHANG helped organize the paper. Ronghao ZHENG revised and finalized the paper.

        Compliance with ethics guidelines

        Jiaxin ZHANG,Meiqin LIU,Senlin ZHANG,and Ronghao ZHENG declare that they have no conflict of interest.
