

        Research on the Mechanism of Multi-Sensor Fusion Configuration Based on the Optimal Principle of the Vehicle

        Zhao Binggen, Zeng Dong, Lin Haoyu, Qiu Xubo, Hu Pijie

        (Automotive Research Institute, BYD Auto Industry Company Limited, Shenzhen 518118)

【Abstract】To address the issue of sensor configuration redundancy in intelligent driving, this paper constructs a multi-objective optimization model that considers cost, coverage capability, and perceiving performance. Then, with a specific set of parameters, the NSGA-II algorithm is used to solve the multi-objective model, and a Pareto front containing 24 typical configuration schemes is extracted after empirical constraints are applied. Finally, using the decision preference method proposed in this paper, which combines subjective and objective factors, decision scores are calculated and ranked for the configuration schemes under both cost and performance preferences. The results indicate that the proposed multi-objective optimization model can screen and optimize configuration schemes according to the optimal principle of the vehicle, and that the optimized schemes can be quantitatively ranked to obtain decision results for the vehicle under different preference tendencies.

        Key words: Multi-sensor fusion, Intelligent driving, Multi-objective optimization, Vehicle optimization



CLC Number: U463.1; Document Code: A; DOI: 10.19620/j.cnki.1000-3703.20240707

【Citation Format】ZHAO B G, ZENG D, LIN H Y, et al. Research on the Mechanism of Multi-Sensor Fusion Configuration Based on the Optimal Principle of the Vehicle[J]. Automobile Technology, 2024(10): 28-37.

        1 Introduction

The perception technology of intelligent driving is a key means for vehicles to obtain information about the surrounding environment, and it is also the primary link of intelligent driving[1]. The mainstream perception routes in the industry include the multi-sensor fusion route (hereinafter referred to as the fusion route) and the pure visual route. Narrowly speaking, the pure visual route relies on cameras only, so it has a higher technical threshold in terms of algorithm capability, hardware facilities, data, and other aspects. Currently, only Tesla has truly achieved a pure visual perception solution; the fusion route remains the perception solution adopted by the vast majority of carmakers. Although current fusion algorithms are still dominated by the visual processing of cameras, the presence of radar enables the fusion scheme to handle more long-tail scenes, thereby mitigating camera failure (night, rainstorm, and other scenes) and misjudgment (scenes with complex lighting changes)[2]. At present, the common practice is to pre-embed hardware configurations and later push software updates and functional upgrades through Over-the-Air (OTA) technology. However, a major problem with this approach is that it can easily trigger an arms race. The hardware-first strategy significantly increases the Bill of Materials (BOM) cost, but cannot maximize the advantages of the hardware at the software and algorithm levels. Taking LiDAR as an example, as the most expensive sensor in high-level intelligent driving systems, its point cloud information is currently not used in most perception algorithms deployed on production vehicles.

From the perspective of vehicle-level optimization, cost, coverage capability, and perceiving performance are three important aspects for measuring the intelligent driving system of a vehicle. In fact, different vehicle types affect the cost of configuration schemes, and indirectly affect coverage capability and perceiving performance. From the optimization perspective, the optimization directions of cost, coverage capability, and perceiving performance conflict with each other. Therefore, establishing a methodology for multi-sensor fusion configuration based on the optimal principle of the vehicle is of great significance for the selection of, and decision-making on, configuration schemes. Academic research on this problem is insufficient. Some academic teams have studied the optimal configuration of individual sensors, such as LiDAR[3-4] and ultrasonic radar[5]. Meadows et al. proposed a placement optimization method for multi-LiDAR systems and demonstrated its effectiveness through data generation, training, and evaluation[4]. Kim et al. proposed a genetic-algorithm-based layout optimization (position and orientation) method to improve the point cloud resolution and reduce the blind spot size of LiDAR[3]. To address lane-changing collisions caused by blind spots, Jamaluddin et al. analyzed the impact of ultrasonic radar placement on the driver reminder function[5]. Some scholars have also studied the joint optimization problem between different sensors, but with certain limitations. Zhou et al. established an integer programming model that considers cost, coverage, and redundancy to determine the optimal number and location of multiple sensors, and solved it using the IBM ILOG CPLEX solver[6]. However, this model transforms the multi-objective problem into a single-objective one, simplifying the problem while sacrificing the model's generality. In addition, the literature [7-10] has also studied the configuration optimization of different sensors, but most of these studies still take sensor layout position and orientation as the optimization goal and fail to establish systematic constraints and decisions at the vehicle level.

        Thus, a mechanism model for multi-sensor fusion configuration based on the optimal principle of the vehicle is proposed in this paper, and a multi-objective optimization problem with cost, coverage capability, and perceiving performance is established and solved. In addition, a decision preference method combining subjective and objective factors is proposed, which can assist decision-makers in selecting and making decisions on multi-sensor fusion configuration schemes.

        2 Basic Principles

At present, cameras, radar, and LiDAR are commonly used in intelligent driving perception schemes, and the advantages and disadvantages of each kind of sensor are distinct. The camera can recognize the geometric features and shapes of objects; visual algorithms are mature and the cost is low. However, it is greatly affected by illumination changes and risks failure in harsh environments[11]. Radar has all-day, all-weather detection capability and can achieve accurate ranging and speed measurement; radar algorithms are mature and the cost is low, but it cannot measure height or recognize stationary objects[12]. LiDAR has high precision and a wide detection range, and can directly obtain high-density 3D environment information, but its cost is high and it is easily affected by weather[11]. In recent years, 4D millimeter wave radar (hereinafter referred to as 4D radar) has begun to emerge as a new type of sensor. 4D radar overcomes the inherent defects of traditional radar, such as the inability to measure height and recognize stationary objects, and it can also output high-density point cloud information for subsequent processing[12]. It can be expected that 4D radar, as a cost-effective sensor positioned between LiDAR and traditional radar, will play an important role in L2+ and L3 level autonomous driving in the future. A comparison of the characteristics, advantages, disadvantages, and application scenarios of the above four sensors is given in Figure 1, where each item is measured on a ten-point scale.

        3 Modeling Methods

A model for multi-sensor fusion configuration based on the optimal principle of the vehicle is proposed in this section. Further, a decision preference method combining subjective and objective factors is proposed to evaluate the optimization results of the multi-objective model, in order to obtain the optimal ranking of configuration schemes at the vehicle level.

        3.1 Scene Model

In order to describe the multi-sensor fusion configuration problem in this paper, the vehicle and its surrounding coverage range are first modeled, as shown in Figure 2. The Region of Interest (RoI) area $P_z$ is divided into ten areas, namely $P_z = P_1 \cup P_2 \cup \cdots \cup P_{10}$, and each RoI area can be assigned a different perception weight. For each sensor type, the possible layout areas are constrained according to its characteristics: $c \in \{1, 2, \dots, C\}$, $r \in \{1, 2, \dots, R\}$, and $l \in \{1, 2, \dots, L\}$, where $C$, $R$, and $L$ represent the sizes of the potential placement spaces of the camera, radar/4D radar, and LiDAR, respectively, represented by boxes of different colors in Figure 2. To simplify modeling, the orientation angle of the sensor arrangement is not considered in this paper; each sensor is assumed to be mounted perpendicular to the horizontal surface of the vehicle.

Figure 2. Division of the RoI areas covered by perception, with the layout areas constrained per sensor type: the green, blue, and pink boxes represent the candidate layout areas of the camera, radar/4D radar, and LiDAR, respectively.
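To make the scene model concrete, the following minimal Python sketch encodes the ten RoI areas with their perception weights and the candidate placement slots per sensor type. The weight values and slot counts are illustrative assumptions, not parameters taken from the paper.

```python
# Minimal sketch of the Section 3.1 scene model: ten RoI areas P1..P10 with
# perception weights sigma_i, and candidate placement slots per sensor type.
# All numbers are illustrative assumptions, not values from the paper.

# Hypothetical perception weights; front-facing areas weighted more heavily.
ROI_WEIGHTS = {f"P{i}": w for i, w in enumerate(
    [0.20, 0.15, 0.15, 0.10, 0.10, 0.08, 0.08, 0.06, 0.04, 0.04], start=1)}

# Candidate placement slots (the colored boxes in Figure 2):
# c in {1..C} for cameras, r in {1..R} for radar/4D radar, l in {1..L} for LiDAR.
CANDIDATE_SLOTS = {
    "camera": list(range(1, 12)),  # assume C = 11
    "radar": list(range(1, 8)),    # assume R = 7
    "lidar": list(range(1, 4)),    # assume L = 3
}

assert abs(sum(ROI_WEIGHTS.values()) - 1.0) < 1e-9  # weights form a distribution
```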

        3.2 Multi-Objective Optimization Model

        The multi-objective optimization function established in this paper can be expressed as:

\[
\min F(n, \omega, \kappa) = \left[ J_1(n),\ J_2(n, \omega),\ J_3(\omega, \kappa) \right] \quad \text{s.t. } n \in N,\ \omega \in \Omega
\] (1)

where $J_1(n)$, $J_2(n, \omega)$, and $J_3(\omega, \kappa)$ represent the three sub-objective functions of cost, coverage capability, and perceiving performance, respectively. $n$ represents the quantity vector, $\omega$ represents the arrangement vector, and $\kappa$ represents the influence of the perception algorithm. $N$ and $\Omega$ represent the quantity space and layout space of the configuration scheme, respectively, which can be expressed as:

\[
N = \begin{bmatrix} n_{11} & n_{12} & \cdots & n_{1k} \\ n_{21} & n_{22} & \cdots & n_{2k} \\ \vdots & \vdots & & \vdots \\ n_{m1} & n_{m2} & \cdots & n_{mk} \end{bmatrix}, \quad
\Omega = \begin{bmatrix} \omega_{11} & \omega_{12} & \cdots & \omega_{1k} \\ \omega_{21} & \omega_{22} & \cdots & \omega_{2k} \\ \vdots & \vdots & & \vdots \\ \omega_{m1} & \omega_{m2} & \cdots & \omega_{mk} \end{bmatrix}
\] (2)

where $n_i = (n_{i1}, n_{i2}, \dots, n_{ik})$ represents one of the quantity vectors, $\omega_i = (\omega_{i1}, \omega_{i2}, \dots, \omega_{ik})$ represents one of the arrangement vectors, and $i \in [1, m]$.

        The cost is mainly determined by the number and price of various sensors in the configuration scheme, and its objective function can be expressed as:

\[
J_1(n) = \sum_{j=1}^{k} p_j \cdot n_{ij}
\] (3)

where $i \in [1, m]$ and $p = (p_1, p_2, \dots, p_k)$ is the price vector, whose elements represent the prices of the individual sensor types.

        The coverage capability is mainly determined by the sensing area and resolution of various types of sensors, and its objective function can be expressed as:

\[
J_2(n, \omega) = \sum_{j=1}^{k} n_{ij} \cdot \sigma(\omega_{ij}) \cdot \left( \frac{S_j^{hor}}{\theta_j^{hor}} + \alpha(j) \cdot \frac{S_j^{ver}}{\theta_j^{ver}} \right)
\] (4)

where $S_j^{hor}$ and $S_j^{ver}$ represent the horizontal and vertical perceiving areas of the $j$th sensor, respectively; the perceiving area mainly depends on the sensor's Field of View (FOV). $\theta_j^{hor}$ and $\theta_j^{ver}$ represent the horizontal and vertical perceiving resolution of the $j$th sensor. $\sigma(\omega_{ij})$ is the perceiving coverage weight determined by the placement. $\alpha(j)$ is a binary decision variable used to distinguish cameras from radars:

\[
\alpha(j) = \begin{cases} 1, & \text{the } j\text{th sensor is a radar} \\ 0, & \text{the } j\text{th sensor is a camera} \end{cases}
\] (5)

The variable $\alpha(j)$ is defined because the image processing of a camera takes place in a two-dimensional coordinate system, so only the horizontal perceiving area is considered, whereas the point cloud processing of a radar takes place in a three-dimensional coordinate system, so both the horizontal and vertical perceiving areas are considered.
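As an illustration of how Equations (3)-(5) could be evaluated in practice, the sketch below computes the cost objective $J_1$ and the coverage objective $J_2$ for one candidate quantity vector. The prices, FOV areas, resolutions, and placement weights are placeholder assumptions, not the parameters of Table 3.

```python
import numpy as np

# k = 4 sensor types in a fixed order: [camera, radar, 4D radar, LiDAR].
# All parameter values below are placeholder assumptions.
price = np.array([30.0, 50.0, 150.0, 800.0])    # p_j: unit price per sensor type
s_hor = np.array([120.0, 90.0, 110.0, 360.0])   # S_j^hor: horizontal perceiving area
s_ver = np.array([60.0, 30.0, 40.0, 40.0])      # S_j^ver: vertical perceiving area
theta_hor = np.array([0.1, 1.0, 0.5, 0.2])      # theta_j^hor: horizontal resolution
theta_ver = np.array([0.1, 2.0, 1.0, 0.2])      # theta_j^ver: vertical resolution
# Eq. (5): alpha(j) = 1 for point-cloud sensors (radar-like), 0 for the camera.
alpha = np.array([0.0, 1.0, 1.0, 1.0])

def j1_cost(n):
    """Eq. (3): total sensor cost of the quantity vector n."""
    return float(price @ n)

def j2_coverage(n, sigma_w):
    """Eq. (4): coverage capability; sigma_w plays the role of sigma(omega_ij)."""
    per_sensor = s_hor / theta_hor + alpha * (s_ver / theta_ver)
    return float(np.sum(n * sigma_w * per_sensor))

# Example: a 5R1V-like scheme (1 camera, 5 radars, no 4D radar, no LiDAR).
n = np.array([1, 5, 0, 0])
print(j1_cost(n), j2_coverage(n, sigma_w=np.ones(4)))
```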

The perceiving performance depends on the types of sensors and algorithms. At present, visual algorithms still dominate perception processing, but radars are also used in the fusion route; fusing radar and vision can improve the robustness of perceiving performance and enhance the coverage capability of the perception scheme. In order to effectively measure the perceiving performance of different configuration schemes, this paper introduces the mean Average Precision (mAP) of various typical algorithms. In addition to visual perception, the mAP of fusion algorithms, such as radar-vision fusion, LiDAR-vision fusion, and 4D radar-vision fusion, is also considered, and the sensors of different configuration schemes are combined to finally obtain quantified evaluation results for perceiving performance. The perceiving performance can be expressed as:

\[
J_3(\omega, \kappa) = \frac{1}{2} \left( \sum_{i=1}^{o} \beta(i) \cdot \kappa_{3D}^{i}(\omega) \cdot \sigma_i + \sum_{i=1}^{o} \beta(i) \cdot \kappa_{BEV}^{i}(\omega) \cdot \sigma_i \right)
\] (6)

where $o$ is the number of RoI areas, $\sigma_i$ is a weight representing the importance of each RoI, and $\beta(i)$ is a binary decision variable indicating whether the corresponding RoI lies within the current perceiving coverage:

\[
\beta(i) = \begin{cases} 1, & \text{the } i\text{th RoI is covered} \\ 0, & \text{the } i\text{th RoI is not covered} \end{cases}
\] (7)

where $\kappa_{3D}^{i}$ and $\kappa_{BEV}^{i}$ represent the mAP of the algorithms under the two detection paradigms, 3D and BEV object detection.

Taking $\kappa_{3D}^{i}$ as an example, the perceiving performance of a given RoI region depends on the algorithm, and also on whether the region is covered by multiple sensors:

\[
\kappa_{3D}^{i}(\omega) = \left[ \kappa_{3D}^{i1}(\omega), \dots, \kappa_{3D}^{ij}(\omega), \dots, \kappa_{3D}^{io}(\omega) \right]
\] (8)

where $\kappa_{3D}^{ij}(\omega)$ can be expressed as:

\[
\kappa_{3D}^{ij}(\omega) = \begin{cases}
mAP_C^j \cdot \upsilon(\omega) + \left[ mAP_C^j,\ mAP_{Multi}^j \right] \cdot \xi \cdot \left( 1 - \upsilon(\omega) \right), & \text{camera and radar coverage} \\
mAP_C^j \cdot \xi_1, & \text{camera-only coverage} \\
mAP_R^j, & \text{radar-only coverage}
\end{cases}
\] (9)

where $mAP_C^j$ and $mAP_{Multi}^j$ represent the mAP of the visual algorithm and of the fusion algorithm, respectively. $mAP_{Multi}^j$ can represent the mAP of vision-radar fusion, vision-4D radar fusion, vision-LiDAR fusion, or vision-4D radar-LiDAR fusion algorithms. $\xi$ represents the weight distribution between vision algorithms and fusion algorithms in the perception task. $\upsilon(\omega)$ represents the coverage ratio of visual perception within this RoI, which is determined by the placement vector of the configuration scheme.
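To show how Equation (6) turns the per-RoI mAP values into a single performance score, the sketch below evaluates $J_3$ for hypothetical $\kappa$ values; in practice these would come from blending visual and fusion mAPs via Equations (8)-(9). The coverage flags, RoI weights, and mAP numbers are illustrative assumptions.

```python
import numpy as np

o = 10                                             # number of RoI areas
sigma = np.full(o, 0.1)                            # sigma_i: RoI weights (assumed uniform)
beta = np.array([1, 1, 1, 1, 1, 1, 0, 0, 1, 1])    # Eq. (7): whether each RoI is covered

# Assumed per-RoI mAP under the 3D and BEV detection paradigms, e.g. already
# blended from visual and fusion algorithm mAPs as in Eq. (9).
kappa_3d = np.full(o, 0.45)
kappa_bev = np.full(o, 0.55)

def j3_performance(beta, kappa_3d, kappa_bev, sigma):
    """Eq. (6): mean of the weighted 3D and BEV scores over the covered RoIs."""
    score_3d = np.sum(beta * kappa_3d * sigma)
    score_bev = np.sum(beta * kappa_bev * sigma)
    return float((score_3d + score_bev) / 2)

print(j3_performance(beta, kappa_3d, kappa_bev, sigma))
```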

        3.3 Decision Preference Method Combining Subjective and Objective Factors

The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is a common comprehensive evaluation method that can be applied to scheme selection problems involving multiple indicators[13]. Therefore, this paper uses the TOPSIS algorithm to score the optimized data and obtain a ranking result. Different vehicle types often imply different preferences for configuration schemes: for example, medium-low-end models tend to prefer cost, while medium-high-end models tend to prefer performance. Therefore, this paper introduces the entropy weight method on top of the TOPSIS algorithm, calculating scores by subjectively introducing preference weights and objectively computing information entropy. Finally, a ranking of configuration schemes is obtained that accounts for the influence of preferences.

First, the data are positively transformed. The TOPSIS model is generally computed on the assumption that all indicators are benefit-type (the larger, the better), whereas in the multi-objective model of this paper the cost is a cost-type indicator (the smaller, the better), so the cost data in the optimization results must first be positively transformed. Generally, either of the following methods can be used:

\[
\tilde{x}_i = \left( \frac{1}{x_{i1}}, \frac{1}{x_{i2}}, \dots, \frac{1}{x_{im}} \right) \quad \text{or} \quad \tilde{x}_i = \max(x_i) - \left( x_{i1}, x_{i2}, \dots, x_{im} \right)
\] (10)

where $i$ is the index of the indicator to be positively transformed, $x_i$ is the vector to be transformed, $x_{ij}$ is the corresponding data with $j \in [1, m]$, and $\tilde{x}_i$ is the data after the positive transformation.

Secondly, in order to eliminate the dimensional influence among different indicators, the positively transformed matrix is normalized:

\[
z_{ij} = \frac{x_{ij}}{\sqrt{\sum_{j=1}^{m} x_{ij}^2}}
\] (11)

        Then, the entropy weight method is introduced to calculate the final score. The probability matrix is calculated using the forward normalized matrix:

\[
p_{ij} = \frac{z_{ij}}{\sum_{j=1}^{m} z_{ij}}
\] (12)

Calculate the information entropy $e_i$ corresponding to each indicator:

\[
e_i = -\frac{1}{\ln m} \sum_{j=1}^{m} p_{ij} \ln p_{ij}
\] (13)

Further, the utility value $d_i$ of each indicator is calculated:

\[
d_i = 1 - e_i
\] (14)

The entropy weight $\omega_i$ of each indicator can then be calculated by the following formula:

\[
\omega_i = \frac{d_i}{\sum_{i=1}^{n} d_i} \cdot \upsilon_i
\] (15)

where $n$ is the number of indicators and $\upsilon_i$ represents the subjective preference weight; the weights of the indicators sum to 1.

        Furthermore, the ideal best and worst values are determined for each indicator:

\[
\begin{aligned}
Z^+ &= \left( \max\{z_{11}, \dots, z_{1m}\},\ \max\{z_{21}, \dots, z_{2m}\},\ \dots,\ \max\{z_{n1}, \dots, z_{nm}\} \right) \\
Z^- &= \left( \min\{z_{11}, \dots, z_{1m}\},\ \min\{z_{21}, \dots, z_{2m}\},\ \dots,\ \min\{z_{n1}, \dots, z_{nm}\} \right)
\end{aligned}
\] (16)

Calculate the Euclidean distance between each configuration scheme and the ideal best and worst values, taking the entropy weights into account. The distances of the $j$th scheme from the ideal best and worst values are defined as:

\[
D_j^+ = \sqrt{ \sum_{i=1}^{n} \omega_i \left( Z_i^+ - z_{ij} \right)^2 }, \quad
D_j^- = \sqrt{ \sum_{i=1}^{n} \omega_i \left( Z_i^- - z_{ij} \right)^2 }
\] (17)

Finally, the relative proximity $S_j$ based on the ideal best and worst distances is calculated and normalized to obtain the final score $\tilde{S}_j$ of each configuration scheme:

\[
S_j = \frac{D_j^-}{D_j^- + D_j^+}
\] (18)

\[
\tilde{S}_j = \frac{S_j}{\sum_{j=1}^{m} S_j}
\] (19)
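The scoring pipeline of Equations (10)-(19) can be condensed into a single function. The sketch below is a minimal implementation under stated assumptions: the max-minus-x variant of Equation (10) is used for the cost indicator, and the preference-scaled utility values of Equation (15) are renormalized so that the final weights sum to 1. The sample decision matrix and preference vector are hypothetical.

```python
import numpy as np

def entropy_topsis(X, cost_rows, preference):
    """Entropy-weighted TOPSIS: rows of X are indicators, columns are schemes."""
    X = X.astype(float).copy()
    # Eq. (10): turn cost-type indicators into benefit-type via max - x.
    for i in cost_rows:
        X[i] = X[i].max() - X[i]
    # Eq. (11): normalize each indicator across schemes.
    Z = X / np.sqrt((X ** 2).sum(axis=1, keepdims=True))
    # Eq. (12): probability matrix per indicator.
    P = Z / Z.sum(axis=1, keepdims=True)
    # Eq. (13): information entropy (0 * log 0 treated as 0).
    m = X.shape[1]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    e = -(P * logs).sum(axis=1) / np.log(m)
    # Eqs. (14)-(15): utilities scaled by subjective preference, renormalized.
    d = (1.0 - e) * np.asarray(preference)
    w = d / d.sum()
    # Eq. (16): ideal best and worst values per indicator.
    z_best, z_worst = Z.max(axis=1), Z.min(axis=1)
    # Eq. (17): weighted Euclidean distances of each scheme.
    d_plus = np.sqrt((w[:, None] * (z_best[:, None] - Z) ** 2).sum(axis=0))
    d_minus = np.sqrt((w[:, None] * (z_worst[:, None] - Z) ** 2).sum(axis=0))
    # Eqs. (18)-(19): relative proximity, normalized to the final score.
    s = d_minus / (d_minus + d_plus)
    return s / s.sum()

# Four hypothetical schemes scored under a cost-leaning preference.
X = np.array([[100, 180, 260, 400],        # cost (smaller is better)
              [2.0, 3.1, 3.8, 4.5],        # coverage capability
              [0.40, 0.48, 0.55, 0.60]])   # perceiving performance
scores = entropy_topsis(X, cost_rows=[0], preference=[0.5, 0.25, 0.25])
print(np.argsort(-scores), scores.round(3))
```

Sorting the returned scores in descending order then yields the preference-dependent ranking of schemes of the kind reported in Tables 5 and 6.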

        4 Model Solving

For the multi-objective optimization problem constructed in this paper, the optimization directions of the objectives conflict: improving the cost comes at the expense of the coverage capability or the perceiving performance, and vice versa. Therefore, there is no single optimal configuration scheme, and different vehicle positioning inevitably leads to cost differences, which in turn lead to differences in coverage capability and perceiving performance.

Therefore, the NSGA-II algorithm is adopted in this paper to solve the above multi-objective optimization problem and obtain the Pareto frontier[14]. The Pareto frontier here is a set of optimal solutions that jointly consider the three objectives of cost, coverage capability, and perceiving performance; no solution in the set can be improved in one objective without degrading another. The steps of the algorithm are shown in Table 1, and the framework of the NSGA-II algorithm is shown in Figure 3.

        Table 1 Steps of NSGA-II algorithm

Algorithm: NSGA-II
Step 1 Initialization: A random algorithm is used to generate the initial population P0.
Step 2 Non-dominated sorting: Rationality judgment and non-dominated sorting are carried out for all individuals in the current population: (a) determine whether each individual represents a feasible scheme; (b) assign each individual a Pareto rank; (c) calculate the crowding distance of individuals within the same Pareto rank and sort them in descending order.
Step 3 Selection: Select the best individuals from the current population Pt, and perform crossover and mutation operations to generate the offspring population Qt.
Step 4 Merging: Merge populations Pt and Qt to produce the combined population Rt.
Step 5 Replacement: Perform rationality judgment and non-dominated sorting on the combined population (as in Step 2), select the best individuals, and produce the next-generation population Pt+1.
Step 6 Termination check: If the stopping condition is not met, return to Step 3; otherwise, exit the loop and output the optimal solution set.
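As a minimal sketch of how this procedure might be run with an off-the-shelf NSGA-II implementation, the example below uses the pymoo library (assumed available), with the population size and generation count later used in Section 5. The decision encoding and the three objective stubs are simplified placeholders for Equations (3), (4), and (6), not the paper's full model.

```python
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class SensorConfigProblem(ElementwiseProblem):
    """Toy stand-in for Eq. (1): x encodes sensor counts
    [cameras, radars, 4D radars, LiDARs]; all objectives are minimized,
    so coverage and performance are negated."""

    PRICE = np.array([30.0, 50.0, 150.0, 800.0])   # placeholder unit prices
    GAIN = np.array([1.0, 0.6, 0.9, 1.5])          # placeholder coverage gains
    PERF = np.array([0.50, 0.05, 0.10, 0.12])      # placeholder mAP contributions

    def __init__(self):
        super().__init__(n_var=4, n_obj=3,
                         xl=np.zeros(4), xu=np.array([12, 6, 6, 3]))

    def _evaluate(self, x, out, *args, **kwargs):
        n = np.round(x)                              # integer sensor counts
        j1 = float(self.PRICE @ n)                   # cost, cf. Eq. (3)
        j2 = -float(self.GAIN @ n)                   # negated coverage, cf. Eq. (4)
        j3 = -float(min(self.PERF @ n, 1.0))         # negated performance, cf. Eq. (6)
        out["F"] = [j1, j2, j3]

# Settings from Section 5: population size 100, 200 generations.
res = minimize(SensorConfigProblem(), NSGA2(pop_size=100), ("n_gen", 200),
               seed=1, verbose=False)
print(len(res.F), "Pareto-optimal schemes found")
```

Since pymoo minimizes all objectives, coverage capability and perceiving performance are negated; `res.F` then holds the nondominated objective vectors that form the Pareto front.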

In order to obtain objective and quantified optimization results for perceiving performance, this paper surveys various typical 3D and BEV object detection algorithms and applies the mAP of the corresponding algorithms to the perceiving performance objective function shown in Equation (6). The quantized values of perceiving performance are shown in Table 2.

        5 Result Analysis

In this paper, a multi-objective optimization model is constructed to study the configuration mechanism of the fusion route. The three objective functions of cost, coverage capability, and perceiving performance are considered, and the NSGA-II algorithm is used to generate the optimal configuration schemes of the vehicle. In addition, this paper proposes a preference decision method combining subjective and objective factors, and uses the TOPSIS algorithm together with the entropy weight method to determine the optimal solution under different preferences.

Table 3 shows the sensor parameters and algorithm parameters used for the simulation analysis in this section. These parameters are only an example and can be changed according to the actual situation.

Figure 4a shows the Pareto frontier obtained by the NSGA-II algorithm, with a population size of 100, 200 iterations, and crossover and mutation probabilities of 0.8 and 0.05, respectively. The Pareto frontier includes 72 optimized results. Further, after considering empirical constraints such as the layout habits and quantity upper limits of current mainstream configuration schemes, 24 major configuration schemes are extracted from the optimization results, as shown in Figure 4b. Each circle in Figure 4 represents a configuration scheme, and the X, Y, and Z axes represent cost, coverage capability, and perceiving performance, respectively. In Figure 4b, with the 5R11V scheme as the boundary, the 24 optimized schemes are divided into medium-low configurations and medium-high configurations.

        The two-dimensional view is shown in Figure 5. The size of the circle in the figure is positively correlated with the relative cost in each category.

The perception schematics of the 24 typical configuration schemes and the numerical results of the three optimization objectives are given in Figure 6 and Table 4. Further, based on the 24 typical configuration schemes obtained by the optimization algorithm, the proposed decision preference method combining subjective and objective factors is used for decision calculation and ranking. Two decision preferences are analyzed in this paper: cost preference and performance preference. The analysis results are shown in Figure 7, and the specific scores and ranking results are given in Table 5 and Table 6. It can be seen that under the two decision preferences, the final ranking of configuration schemes shows a clear tendency. When cost preference is given priority, the top five options are 1R1V (scheme 1), 2R1V (scheme 3), 3R1V (scheme 4), 4R1V (scheme 6), and 5R1V (scheme 7), indicating that for medium-low-end models where cost comes first, these schemes are the best for the vehicle. When performance preference is given priority, the top five options are 5R12V (scheme 19), 4R1R412V (scheme 20), 4R1R411V (scheme 16), 5R11V (scheme 15), and 5R11V1L (scheme 17), indicating that these schemes are the best for the vehicle in medium-high-end models where performance comes first.

Another important conclusion about the LiDAR configuration can be drawn from Figure 7. Under the performance preference decision, the multi-LiDAR configuration schemes (schemes 21-24) do not rank high. This indicates that, after comprehensively considering the three important factors of cost, coverage capability, and perceiving performance, the multi-LiDAR configuration scheme is still not optimal at the vehicle level, even under the performance preference decision. The main reason is that the point cloud information of LiDAR is not fully utilized in most current fusion algorithms; at present, the improvement LiDAR brings to perceiving performance is not enough to offset the cost it adds to the vehicle's BOM.

        6 Conclusions

Multi-sensor fusion perception technology, composed of various types of sensors, is the most widely used scheme for intelligent driving. From the perspective of the vehicle, how to select an appropriate configuration to obtain the optimal scheme at the vehicle level is a problem worthy of in-depth study.

To solve this problem, this paper establishes a multi-objective optimization model considering three important objectives: cost, coverage capability, and perceiving performance. Considering the influence of different vehicle types on decision-makers' preferences, a subjective-objective decision preference method based on the TOPSIS algorithm and the entropy weight method is proposed. Further, the NSGA-II algorithm is used to solve the multi-objective optimization model, and a Pareto frontier containing 24 typical configuration schemes is extracted after considering the empirical constraints. Finally, the schemes are scored and ranked, taking cost preference and performance preference as examples to calculate decisions for the 24 typical configuration schemes.

The proposed method can be used to screen and optimize all kinds of configuration schemes, and the optimized schemes can be quantitatively ranked to obtain decision results under different preference tendencies. The research results of this paper are of great significance for decision-makers in the selection of multi-sensor fusion configuration schemes.

        References

[1] ZHAO J Y, ZHAO W Y, DENG B, et al. Autonomous Driving System: A Comprehensive Survey[J]. Expert Systems with Applications, 2024, 242.

        [2] QIAN R, LAI X, LI X R, et al. 3D Object Detection for Autonomous Driving: A Survey[J]. Pattern Recognition, 2022, 138(8): 1-24.

        [3] KIM T H, PARK T H. Placement Optimization of Multiple Lidar Sensors for Autonomous Vehicles[J]. IEEE Transactions on Intelligent Transportation Systems, 2019, 21(5): 2139-2145.

        [4] MEADOWS W, HUDSON C R, GOODIN C, et al. Multi-Lidar Placement, Calibration, Co-Registration, and Processing on A Subaru Forester for Offroad Autonomous Vehicles Operations[J]. Autonomous Systems: Sensors, Processing, and Security for Vehicles and Infrastructure, 2019, 11009: 99-116.

        [5] JAMALUDDIN M H, SHUKOR A Z, MISKON M F, et al. An Analysis of Sensor Placement for Vehicle’s Blind Spot Detection and Warning System[J]. Journal of Telecommunication, Electronic and Computer Engineering, 2016, 8(7): 101-106.

        [6] ZHOU M, CHEN Q F, CAO Y G, et al. Optimal Configuration Scheme of Multi-Sensor Perception for Autonomous Vehicles Based on Solid-State Lidar[C]// 35th Chinese Control and Decision Conference (CCDC). 2023: 188-193.

[7] CHEN L, LI Q, LI M, et al. Design of A Multi-Sensor Cooperation Travel Environment Perception System for Autonomous Vehicle[J]. Sensors, 2012, 12(9): 12386-12404.

        [8] DEY J, TAYLOR W, PASRICHA S. VESPA: A Framework for Optimizing Heterogeneous Sensor Placement and Orientation for Autonomous Vehicles[J]. IEEE Consumer Electronics Magazine, 2020, 10(2): 16-26.

[9] HARTSTERN M, RACK V, STORK W. Conceptual Design of Automotive Sensor Systems: Analyzing the Impact of Different Sensor Positions on Surround-View Coverage[C]// IEEE SENSORS. 2020: 1-4.

[10] PRAMANIK S, VAIDYA V, MALVIYA G, et al. Optimization of Sensor-Placement on Vehicles Using Quantum-Classical Hybrid Methods[C]// 2022 IEEE International Conference on Quantum Computing and Engineering (QCE). 2022: 820-823.

        [11] CUI Y D, CHEN R, CHU W B, et al. Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review[J]. IEEE Transactions on Intelligent Transportation Systems, 2022, 23(2): 722-739.

        [12] HAN Z Y, WANG J H, XU Z, et al. 4D Millimeter-Wave Radar in Autonomous Driving: A Survey[EB/OL]. (2023-06-07)[2024-08-12]. https://arxiv.org/abs/2306.04242.

        [13] MITRA M, HAMED T. A Comprehensive Guide to the TOPSIS Method for Multi-Criteria Decision Making[J]. Sustainable Social Development, 2023, 1(1): 1-6.

[14] DEB K, PRATAP A, AGARWAL S, et al. A Fast and Elitist Multi-Objective Genetic Algorithm: NSGA-II[J]. IEEE Transactions on Evolutionary Computation, 2002, 6(2): 182-197.

        [15] CHEN X, ZHANG T, WANG Y, et al. Futr3D: A Unified Sensor Fusion Framework For 3D Detection[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Vancouver: IEEE, 2023.

        [16] PARK D, AMBRUS R, GUIZILINI V, et al. Is Pseudo-Lidar Needed for Monocular 3D Object Detection?[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision, Nashville: IEEE, 2021.

[17] SHI S, WANG X, LI H. PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019.

        [18] YAN Y, MAO Y, LI B. SECOND: Sparsely Embedded Convolutional Detection[J]. Sensors, 2018, 18(10): 3337.

        [19] SHI S S, GUO C X, JIANG L, et al. PV-RCNN: Point-Voxel Feature Set Abstraction for 3D Object Detection[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020.

[20] LANG A H, VORA S, CAESAR H, et al. PointPillars: Fast Encoders for Object Detection from Point Clouds[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Long Beach: IEEE, 2019.

[21] SHI S S, WANG Z, SHI J P, et al. From Points to Parts: 3D Object Detection from Point Cloud with Part-Aware and Part-Aggregation Network[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021, 43(8): 2647-2664.

        [22] DENG J J, SHI S S, LI P W, et al. Voxel R-CNN: Towards High Performance Voxel-Based 3D Object Detection[C]// Proceedings of the AAAI Conference on Artificial Intelligence. 2021: 1201-1209.

[23] SINDAGI V A, ZHOU Y, TUZEL O. MVX-Net: Multimodal VoxelNet for 3D Object Detection[C]// IEEE International Conference on Robotics and Automation (ICRA). 2019.

        [24] LIU J N, ZHAO Q C, XIONG W Y, et al. SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar[J]. IEEE Transactions on Intelligent Vehicles. 2023, 9(1): 1-14.

        [25] WANG L, ZHANG X Y, LI J, et al. Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving[J]. IEEE Transactions on Vehicular Technology. 2023, 72(5): 5628-5641.

        (責(zé)任編輯 王 一)

The revised manuscript was received on August 12, 2024.
