Abstract
Critical issues with current detection systems are their susceptibility to adverse weather conditions and the constrained vertical field of view of the radars, which limits the ability of such systems to accurately detect the height of targets. In this paper, a novel multi-range radar (MRR) arrangement (i.e. a triple: long-range, medium-range, and short-range radars) based on the sensor fusion technique is investigated that can detect objects of different sizes in a level 2 advanced driver-assistance system. To improve the accuracy of the detection system, the resilience of the MRR approach is investigated using the Monte Carlo (MC) method for the first time. By adopting the MC framework, this study shows that only a handful of fine-scaled computations are required to accurately predict the statistics of radar detection failure, compared to many expensive trials. The results show substantial computational gains for such a complex problem. The MRR approach improved detection reliability, with an increased mean detection distance (4.9% over medium-range and 13% over long-range radar) and a reduced standard deviation over existing methods (30% over medium-range and 15% over long-range radar). This will help establish a new path toward faster and cheaper development of modern vehicle detection systems.
1 Introduction
Developments in the automotive industry have promoted competition between manufacturers to bring increasingly safe and affordable road vehicles to market, where vehicle safety has become a primary selling point [1]. As vehicle technology has advanced, there has also been a significant shift to the current situation in which road vehicles carry a wide range of driver aids, including cruise control, vehicle warnings, lane occupancy warning, vehicle spacing, and collision detection. The result is increased complexity in the design and analysis phase of advanced driver-assistance systems (ADAS) [2]. The problem is not a new one: significant progress has been made since the launch of the first semi-automated vehicle by Houdina Radio Control in 1926, with major advances in recent years driven by the increasingly sophisticated technology available to automotive manufacturers [3]. The lengthy track record of development can be seen in General Motors' Motorama car expo of 1960, which unveiled concept AVs [4]. In the development of AV technology, notable advances were achieved in EUREKA's Prometheus Project and the Stanley AV [5], with a timeline of ADAS technology developments clearly laid out in [6]. A taxonomy of ground-vehicle driving automation is defined in SAE document J3016_202104, clearly specifying all six levels (level 0 to level 5) of AV. The clear conclusion of this report is that adopting at least level 2 ADAS features in vehicles offers a significant increase in safety and a reduction in emissions.
Moving to a fully autonomous vehicle requires real-time information from the environment to be available to a system that makes decisions on the optimal path [7]. Sensors are widely used to transfer real-time data to AVs [8]. Exteroceptive sensors (such as cameras, radar, and lidar) collecting environmental data play a key role in the measurement system [9, 10]. Cameras provide excellent visual results from targets; however, they suffer from weak detections and low reliability in poor weather conditions. Techniques such as 3-level refinement [11] of the data collected by cameras at 55 frames per second can be used to improve detection quality.
Despite these improvements, motion algorithms are still being studied to improve the likelihood of target detection, especially given that the computational resources for implementing these algorithms may be limited. A robust AdaBoost detection algorithm capable of full- and half-body detections is utilized in [12] to enhance vision detections using cameras. This method requires a high computation time, as it applies multiple training samples with information about moving vehicles, pedestrians, and motorcyclists. In [13], the challenges in building a multi-camera dataset for a driver monitoring system were evaluated, together with the handling of technical issues during the design and implementation of the monitoring system.
Radars are a significantly more cost-effective technology than cameras. High-accuracy millimeter-wave radars have a detection angular resolution in the range of 1\(^\circ\). This resolution does not suffice for AV requirements; hence, in [14] the authors suggested a wide-aperture radar with a resolution of around 0.1\(^\circ\), leading to more precise long-range detection. Machine learning methods based on deep neural network techniques were used in [15], leading to an approach utilizing a high-resolution 4D sensing radar system for AV applications. The results validated the detection of targets with a low radar cross section (RCS) in the low sidelobes. Additionally, a novel low-RCS radar with a diffraction mitigation technique was designed in [16]. The studied radar focused on highly vulnerable pedestrians and cyclists at junctions. Further algorithmic procedures to enhance the accuracy of detection results obtained by radars were proposed in [17], where the researchers compared radar detections with ground-truth data and developed an accuracy model for a radar system. Radar latency was obtained relative to the velocity of the target; however, such a paradigm requires numerous tests on a specific detection system.
ADAS system efficiency can be enhanced further using reinforcement-learning-based control strategies [18]. A decision-making strategy utilizing deep reinforcement learning (DRL) was developed in [19]. The results demonstrated that in scenarios with a high speed or a small gap between the ego car and the object, the suggested approach failed to prevent the collision. A lane keeping assist (LKA) system was proposed in [20] with the capability of switching between lane departure prevention and lane-keep co-pilot states. This approach applied a model predictive control paradigm with an extended Kalman filter to learn the dynamics. The proposed method was able to switch modes with 90% accuracy without violating the vehicle dynamics; however, the approach required a time-consuming algorithm to be implemented in embedded processors. A Takagi–Sugeno (T–S) fuzzy-based robust H∞ algorithm was implemented in the development of the LKA system in [21]. This approach considered uncertainties, disturbances, and longitudinal velocities for robust control of the desired steering input. To overcome varying shadows and occlusions, an approach based on a model of lane boundaries in 3D space was presented in [22]. This method also applies color information for effective clustering to remove outliers and estimate the curvature. Despite acceptable results, the processing system required high computational resources.
The role of sensors, measurement, and algorithmic improvements is vital across engineering disciplines. For example, the researchers in [23,24,25,26] presented measurement and algorithmic improvements in other applications which advanced the performance of their systems.
1.1 Problem Statement
The AV’s intended functionality requires robust risk and hazard assessment and decision making to take place. The performance of such systems is the main concern when only vision-based sensors are employed. Based on ISO 26262-1, malfunctioning behaviors associated with vision-based detection sensors have been reported. Various approaches can be considered with the goal of enhancing the AV’s detection systems, including the following methods:

i. Algorithm improvement for the sensors.
ii. Suitable sensor technology selection.
iii. Sensor location optimization.
iv. Use of diverse sensory technologies.
v. Algorithmic advancements (e.g. recognition).
vi. Increased actuator performance (e.g. accuracy).
For AVs operating in urban areas, the detection system relies fundamentally on the capabilities of the deployed sensors. It has been observed that vision-based systems can produce erroneous results under different operating conditions, leading to potentially hazardous events. To reduce the risk of detection errors, vision systems can be assisted with multiple-range radars, in line with items (ii), (iii), and (iv).
1.2 Research Contribution
Developing the different layers of the ADAS system, including perception, autonomous emergency braking (AEB), and lane keeping assist (LKA), can potentially lead to higher safety. Perception, as the first layer for collecting environmental information, has the highest effect on the system outputs. This work aims to develop level 2 driving automation with a reliable and robust object detection system. An innovative triple radar setup based on long-range, medium-range, and short-range radars was developed to detect objects with different heights, sizes, distances, and velocities. Long-range radar is widely used in highway driving to assist the AEB and LKA systems. To further improve the performance of the system, medium- and short-range radars were also added to enhance the capability of the proposed model in urban driving. The wide field of view of the short-range radar helps detect situations such as a pedestrian attempting to cross the street, wildlife or animals crossing the road, occluded targets behind parked vehicles, etc. This approach can provide a more reliable output despite false target identification from any one radar in isolation. As level 2 ADAS requires longitudinal and lateral control assistance for the driver, the AEB and LKA systems were developed and integrated into the model. To test the modeled subsystems and the proposed level 2 ADAS system, different simulations were conducted using a virtual test model implemented in the MATLAB and Simulink environments. The subsystems were initially validated for optimal operation. In this work, a novel approach using the fine-scaled Monte Carlo (MC) method is applied to validate the operation of the proposed detection system in level 2 ADAS testing under a range of realistic test conditions. In comparison to the cost and time required for practical tests in AVs, the proposed MC-based approach was designed as a capable resource for cost-effective performance analysis and optimization.
To achieve this, the most impactful parameters, i.e. time to collision and relative distance to the target, are studied. The impact of these factors on the safety of actors on the road was analyzed using the MC paradigm. The performance of the proposed MRR was evaluated using graphical and numerical results. Finally, the results were compared with those of conventional single-radar detection approaches. The results demonstrate substantial computational gains and improvements in radar detection accuracy using the proposed MRR technique. The proposed approach generates accurate and reliable detections of moving targets in a wide range of model-based simulation scenarios.
2 Radar Detection Framework
The main goal of an autonomous vehicle (AV) is to provide safe and efficient transportation between initial and final states without any human intervention. In this work, perception and decision-making systems are applied as the major functions in the AV’s framework. The perception system utilizes measurements from exteroceptive sensors to calculate the AV’s dynamic parameters and detect other objects in the environment. The measurement setup includes radars and cameras. Dynamic parameters are mostly internal states of the vehicle; therefore, estimation paradigms are used to estimate the desired states. The decision-making function acts on the LKA and AEB algorithms to fulfill the requirements of the multi-stage collision avoidance system. The developed framework for the AV studies in this work is shown in Fig. 1. In this framework, the proposed triple radar sensor with short, medium, and long ranges (SR, MR, and LR) offered more reliability and robustness in the detection system than other methods. The multi-range radar system observes and generates a forward collision warning (FCW) followed by multi-stage collision avoidance (MSCA) braking to safely apply the brakes considering the lead object (LO), which can be a moving or stationary obstacle such as a pet, cyclist, vehicle, etc. Based on the ISO 26262 series and ISO/PAS 21448, the sensor’s functionality and limitations are studied under different environmental conditions.
3 Simulation Methodology
In this study, the MATLAB Automated Driving Toolbox was utilized to develop the autonomous vehicle model used to test the triple radar detection system. The model architecture has four main subsystems that communicate in a closed loop, with the dynamics implemented as presented in Fig. 2. The initialization and MC assessment algorithms were written in MATLAB.
The ego car applies the perception block, containing sensors and sensor fusion algorithms, to understand the surrounding environment. A scenario reader also resides in this block to run the desired scenario. The AEB controller subsystem uses the relative distance and velocity as its inputs. The AEB logic was written using Stateflow charts to generate the deceleration forces. The lane information and longitudinal velocity are imported into the LKA controller to find the road curvature, lateral deviation, and relative yaw angle. The LKA controller produces a suitable steering angle to ensure the vehicle remains in the optimal lane. The ego car dynamics are regulated by the output signals of the AEB and LKA blocks, which determine the vehicle orientation. The coordinates of the vehicle at each time step are determined by the vehicle dynamics block and are sent as input to the perception block. Further details of the sub-blocks are explained in the following subsections.
3.1 Perception System
The operating functions of the perception subsystems are described in the following sections.
3.1.1 Scenario Builder
Scenarios are synthetically created in the Driving Scenario Designer application based on the type of autonomous driving test and its requirements, and are then imported into the scenario reader block. This system is capable of modeling different actors and trajectories and gives the flexibility of configuring the sensor parameters. The sensor detections then provide the corresponding information in the dedicated scenario to feed the AV controllers (the AEB and LKA controllers).
3.1.2 Radar System
Radars use the actor information from the scenario builder block and quantify the range \(R\) and velocity \(\vartheta\) of obstacles relative to the ego vehicle. For a frequency-modulated continuous-wave (FMCW) radar with sweep bandwidth \(B\) and sweep time \({T}_{s}\), the fundamental equations governing the radars are:

$$R=\frac{c\,{f}_{b}\,{T}_{s}}{2B},\qquad \vartheta =\frac{\lambda \,\Delta f}{2}$$
(1)

$${D}_{max}=\frac{c\,{f}_{b,max}\,{T}_{s}}{2B},\qquad {r}_{res}=\frac{c}{2B}$$
(2)

where \(c\) is the speed of light (\(3\times {10}^{8}\,\mathrm{m/s}\)), \({f}_{b}\) is the beat frequency, i.e. the frequency difference between the transmitted and received signals, and \({D}_{max}\) and \({r}_{res}\) are the maximum range and range resolution of the radar, respectively; \(\lambda\) is the wavelength and \(\Delta f\) is the Doppler frequency shift. In this work, three radars with long, medium, and short ranges are utilized to detect frontal obstacles in a level 2 ADAS system. Such an arrangement can be used in both highway and urban driving scenarios to detect objects of different sizes and velocities. The main characteristics of the system radars are reported in Table 1.
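As a numerical sketch of the standard FMCW relations behind these quantities (range from the beat frequency, radial velocity from the Doppler shift, resolution from the sweep bandwidth), the snippet below may help; the bandwidth and sweep-time values used here are illustrative assumptions, not the Table 1 parameters.

```python
# Standard FMCW radar relations (sketch); B and T_s below are illustrative.
C = 3e8  # speed of light, m/s


def fmcw_range(f_b, bandwidth, sweep_time):
    """Target range from the beat frequency f_b of a linear FMCW chirp."""
    slope = bandwidth / sweep_time      # chirp slope, Hz/s
    return C * f_b / (2.0 * slope)      # R = c * f_b / (2 * slope)


def doppler_velocity(delta_f, wavelength):
    """Radial velocity from the Doppler frequency shift delta_f."""
    return wavelength * delta_f / 2.0   # v = lambda * delta_f / 2


def range_resolution(bandwidth):
    """Smallest resolvable range separation for sweep bandwidth B."""
    return C / (2.0 * bandwidth)        # r_res = c / (2B)
```

For example, a 150 MHz sweep gives a 1 m range resolution, and a 1 MHz beat frequency over a 1 ms sweep maps to a 1 km target.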
3.1.3 Camera System
Cameras are cost-effective sensors capable of detecting both moving and motionless targets. Nevertheless, they have limitations in providing high-resolution images of moving objects in bad weather conditions. Hence, the camera module is applied only to detect stationary lanes in this study. Road information from the scenario builder block is imported into the camera module to generate the road lanes, which are employed in the LKA controller. The main characteristics of the camera used are shown in Table 2.
3.1.4 Sensor Fusion
The sensor fusion block applies environmental data from the radars and camera. The collected radar information is brought to the front-axle center as a reference point for calibration [27,28,29]. The calibrated information is then combined to generate a concatenation signal which is used for clustering purposes. The multi-object tracker (MOT) block utilizes the clustered information to create, modify, and delete moving object tracks. Track creation is performed using estimation paradigms which are mostly based on well-known Kalman filtering algorithms [29,30,31,32]. The filtering is performed using an extended Kalman filter (EKF), with the nonlinear state-space model defined in (3) and (4):

$${x}_{k+1}=f\left({x}_{k},{u}_{k}\right)+{w}_{k}$$
(3)

$${z}_{k}=h\left({x}_{k}\right)+{v}_{k}$$
(4)
where \({w}_{k}\) is the process noise, \({v}_{k}\) is the measurement noise, \({u}_{k}\) is the input data, \(f(\cdot)\) is the nonlinear state-transition function, and \(h\left({x}_{k}\right)\) is the nonlinear measurement function of the state \({x}_{k}\). Using (3) and (4), the target nonlinear dynamic model and measurement equations are defined in (5) and (6):

$${x}_{i}\left(k+1\right)={F}_{i}\left(k\right){x}_{i}\left(k\right)+{G}_{i}\left(k\right){w}_{i}\left(k\right)$$
(5)

$${z}_{i}\left(k\right)={H}_{i}\left(k\right){x}_{i}\left(k\right)+{v}_{i}\left(k\right)$$
(6)
where \({x}_{i}(k)\) is the state vector of the ith target, \({z}_{i}\left(k\right)\) is the observation produced from the ith target state, \({F}_{i}\left(k\right)\) and \({G}_{i}\left(k\right)\) are the target dynamic matrices, and \({H}_{i}\left(k\right)\) is the sensor measurement matrix.
In the EKF, the Jacobian is employed in the update and estimation equations, as shown in (7):

$${F}_{k}={\left.\frac{\partial f}{\partial x}\right|}_{{\widehat{x}}_{k},{u}_{k}},\qquad {H}_{k}={\left.\frac{\partial h}{\partial x}\right|}_{{\widehat{x}}_{k}}$$
(7)
The Global Nearest-Neighbor (GNN) criterion is applied to allocate a detection to a track. If inappropriate detections are found to be allocated to a specific track, they are deleted. Based on the extracted tracks, the lead actor is identified, whereby the relative distance and velocity can be calculated. The modified GNN feature distance is defined in (8):

$$d\left(O{b}^{i},{T}^{j}\right)={\Vert O{b}^{i}-{T}^{j}\Vert }_{2}$$
(8)
where \(O{b}^{i}\) is the feature vector for the observation i, and \({T}^{j}\) indicates the feature vector for target j.
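The tracking pipeline described in this subsection can be sketched as below. This is a deliberately simplified stand-in, not the paper's implementation: a scalar constant-velocity filter with a fixed gain replaces the full EKF covariance update, and a greedy nearest-neighbour assignment stands in for the GNN criterion; all names are illustrative.

```python
# Simplified track predict/update plus nearest-neighbour assignment (sketch).


def kf_predict(x, v, dt=0.1):
    """Predict position/velocity one step ahead (F = [[1, dt], [0, 1]])."""
    return x + dt * v, v


def kf_update(pred_pos, v, z, gain=0.5):
    """Blend the predicted position with measurement z using a fixed gain
    (a full EKF would compute this gain from the covariance)."""
    residual = z - pred_pos
    return pred_pos + gain * residual, v


def gnn_assign(tracks, detections):
    """Assign each detection to the closest track (greedy stand-in for GNN)."""
    assignment = {}
    for j, z in enumerate(detections):
        i = min(range(len(tracks)), key=lambda k: abs(tracks[k] - z))
        assignment[j] = i
    return assignment
```

In the real MOT block the assignment is solved globally over all track-detection pairs and unassigned detections spawn new tracks; the sketch only shows the per-step data flow.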
In this study, the camera, with 1024 × 592 pixels, was used only for road surface and lane detection. The lane information is fed to the vision detection block; the lanes are then detected and imported into the LKA controller block for lane-keeping purposes.
3.2 AEB System
The AEB, as a basic and efficient function in the ADAS system, acts automatically when the gap between the ego vehicle and the target is critically reduced. In this work, the relative distance and velocity, as well as the longitudinal velocity of the ego vehicle, are imported from the perception block into the AEB controller. Additionally, the time to collision (TTC) and stopping time are calculated based on the received data. The logic for multi-stage braking with different decelerations is generated corresponding to these parameters. The TTC as a function of the relative distance \({D}_{relative}\) and relative velocity \({V}_{relative}\) is derived as:

$$TTC=\frac{{D}_{relative}-{H}_{offset}}{{V}_{relative}}$$
(9)
where \({H}_{offset}\) is a predefined constant headway offset. When TTC < \({T}_{FCW}\) (the forward collision warning time), the AEB is activated. \({T}_{FCW}\) depends on the stopping time (the period from when the ego vehicle initially applies the brakes until it reaches a full stop) and the reaction time of the driver \({T}_{react}\). The stopping time \({T}_{stop}\) is calculated from the velocity of the ego vehicle \({v}_{ego}\) and the brake deceleration \({a}_{b}\), as given in (10) and (11):

$${T}_{stop}=\frac{{v}_{ego}}{{a}_{b}}$$
(10)

$${T}_{FCW}={T}_{react}+{T}_{stop}$$
(11)
Based on the parameters calculated in (9) to (11), the AEB logic applies a brake deceleration level using the following state flow process:

1. If \(TTC\le 1.2\times {T}_{FCW}\), a warning is activated.
2. If \(TTC\le {T}_{FCW}@{a}_{b}=3.8\), the brake deceleration is set to 3.8 m/s^{2}.
3. If \(TTC\le {T}_{FCW}@{a}_{b}=5.3\), the brake deceleration is set to 5.3 m/s^{2}.
4. If \(TTC\le {T}_{FCW}@{a}_{b}=9.8\), the brake deceleration is set to 9.8 m/s^{2}.
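The four-stage logic above can be sketched as follows, assuming the TTC and stopping-time relations described in the text; the headway offset and driver reaction time values are illustrative placeholders, not the paper's calibration.

```python
# Sketch of the multi-stage AEB logic; h_offset and t_react are illustrative.


def time_to_collision(d_rel, v_rel, h_offset=1.0):
    """TTC from relative distance and closing velocity (v_rel > 0)."""
    return (d_rel - h_offset) / v_rel


def t_fcw(v_ego, a_b, t_react=1.2):
    """Forward-collision-warning time: driver reaction + stopping time."""
    return t_react + v_ego / a_b


def aeb_stage(ttc, v_ego):
    """Return (warning_active, brake_deceleration) per the four-stage logic.

    T_FCW shrinks as the assumed deceleration grows, so checking the
    strongest stage first selects the tightest threshold the TTC satisfies.
    """
    for a_b in (9.8, 5.3, 3.8):
        if ttc <= t_fcw(v_ego, a_b):
            return True, a_b
    if ttc <= 1.2 * t_fcw(v_ego, 3.8):
        return True, 0.0  # stage 1: warning only, no braking yet
    return False, 0.0
```

For an ego velocity of 20 m/s this yields the warning at TTC ≈ 7.8 s and full 9.8 m/s^{2} braking below TTC ≈ 3.2 s.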
3.3 LKA System
The LKA system acts as a preventive ADAS function to avoid collisions arising from unintended departures from the lane. Lane information from the perception block is fed to the LKA. The lane center estimator sub-block translates this information into the predicted curvature, lateral deviation, and relative yaw angle. Finally, the steering angle is generated by a dedicated controller to keep the ego vehicle on the safe path [33, 34]. Many studies using different controllers, such as dynamic state feedback, layered PID, and nonlinear MPC, have been conducted to enhance lane-keeping performance [35]. In this study, the ego vehicle estimates the curvature three seconds ahead. This look-ahead period allows the controller to use previewed data to compute the car's steering angle, which improves the efficiency of the applied MPC controller.
3.4 Vehicle Dynamics
The vehicle dynamics model uses a system of 3-DoF equations to obtain the position, velocity, and direction of the ego vehicle. In this block, the rigid two-axle vehicle body model takes the acceleration (throttle input) and deceleration (brake input) data from the AEB block and combines them with the steering angle from the LKA block in order to compute the dynamic parameters at each time step. Finally, the output information is sent to the perception block to feed the scenario builder sub-block.
4 Simulation Results
The performance of the algorithms, alongside the whole system, is evaluated in this section. The corresponding results are presented utilizing model-based simulations in MATLAB (version R2021a). Initially, a set of simulations was performed to assess the performance of the modeled subsystems (i.e. algorithms). The results, including analytical and numerical graphs, are explained in further detail in the following subsections. The final part of the simulation focuses on level 2 ADAS testing, with the results discussed in terms of MC simulations. The most critical parameters affecting the proposed detection system were determined using the MC method. The safety of the whole system was evaluated, and the results were then compared with those of conventional single-radar detection approaches.
4.1 Radar Detection Testing
To ensure the satisfactory operation of the proposed detection system, a scenario with a runner (i.e. a pedestrian) was used. In this scenario, the pedestrian, with a height of 1.6 m, crosses the road with a velocity of 4 m/s. The velocity of the ego vehicle was also set to 4 m/s. The resulting detection plots from the radars are shown in Fig. 3. The short-range radar, with the widest field of view, generates the fastest detection. The medium- and long-range radars output data once the pedestrian enters their fields of view. As observed, the system has double and triple detections after 3.8 s and 4.6 s, respectively. The concatenation signal of the detection system is also illustrated in Fig. 3.
4.2 AEB Performance Test
The performance of the AEB system is tested using the data provided by the proposed triple radar. The applied traffic scenario simulates the Euro NCAP testing scene, in which a cyclist travels across the path of an ego vehicle with a velocity of 4.17 m/s. The ego vehicle velocity is set to 6.94 m/s and the initial distance is 31 m. If the ego vehicle velocity did not change, the cyclist would collide with the middle of the front bumper. In the scenario with AEB, logic 1 is activated at 4.7 s. Based on the standard scenario, the cyclist's velocity increases after 0.8 s, and the warning logic is then deactivated. Nevertheless, the distance between the ego vehicle and the cyclist keeps decreasing, and at 6.4 s the AEB block enters the second logic. The TTC forces logic 3 to activate only 0.1 s later, and the ego vehicle deceleration rises to 5.3 m/s^{2}. The scenario and the corresponding control parameters are presented in Fig. 4.
4.3 LKA Performance Test
To test the LKA system, a robust model is applied to detect road lanes of different marking styles and colors using a camera module installed at the front of the vehicle. As the radar system is responsible for object detection, the camera is deactivated for actor detection, which in turn improves the simulation time. In this test, the camera is used only for road surface and lane detection. The simulation results are presented for a road with no marking on the left side and a solid red line on the right side. Figure 5a depicts the lane detection results of the camera module. Moreover, the three-second-ahead prediction of the lane boundary curvature is considered in the scenario. Based on single-boundary information, the curvature, lateral deviation, offset, relative yaw angle, and longitudinal velocity in the center lane are estimated. The MPC controller utilizes these parameters to calculate suitable steering angles, which must lie within the bounds of [−0.5, 0.5] rad. Figure 5b presents the quality of the predicted parameters and the controller output, showing how the lateral deviation, right lateral offset, relative yaw angle, and steering angle behave during the test. In this scenario, the MPC controller plays a significant role in obtaining the optimal steering angles as a function of time.
4.4 Level 2 ADAS Testing
The Monte Carlo (MC) approach encompasses a wide range of problem-solving methods that use the statistics of random samples of target parameters [36,37,38]. MC methods have wide application in the physical sciences and engineering. The best-known classic MC methods draw samples from a statistical distribution to determine a deterministic value for the detection system output parameters. These types of MC methods are commonly applied in the automotive industry [37, 39]. In this work, an MC paradigm is applied to test the performance of the proposed detection system in the level 2 ADAS system.
The triple radar is combined in an ego vehicle model including both the AEB and LKA controllers. All subsystems were developed and simulated in the MATLAB and Simulink environments. A highway scenario with two lanes is utilized as the test bench to run a wide range of MC simulations. Multiple actors, as reported in Table 3, are used in this scenario. Actors 1 to 3 move in the same direction as the ego vehicle, while actor 4 approaches the other vehicles from the opposite direction. The arrangement of the vehicles is shown in the bird's-eye scope in Fig. 6. As presented in this figure, the selected highway has a circular shape with a radius of 500 m. A positive steering angle is applied to the ego vehicle to keep it in the left lane. The triple radar setup is employed to detect the objects (i.e. actors) on the highway.
For the iterative MC simulations, a distribution is first selected as the input generator; in this case, a normal distribution is applied. The minimum required number of runs is determined, as shown in Fig. 8. The statistical method for selecting the minimum number of runs is described in detail in Sect. 4.4.1. Next, the dominant parameters affecting the final performance of the proposed triple radar detection system are randomly drawn from the boundaries defined in Table 4 using the selected normal distribution. These are the most important parameters affecting the performance of the ego vehicle. To produce the random variables, a Gaussian distribution is selected as the generator in the MC algorithm. The probability of selecting a value is given by (12):

$$P\left(x\right)=\frac{1}{\sigma \sqrt{2\pi }}\,{e}^{-\frac{{\left(x-\mu \right)}^{2}}{2{\sigma }^{2}}}$$
(12)
where \(P\) is the selection probability, \(x\) is the desired parameter, and \(\mu\) and \(\sigma\) are the mean and standard deviation of the dedicated distribution. Using the random variables drawn from the probability generator in Eq. (12), the highway scenario was run and the dominant output of each run was saved. The critical outputs are selected based on the sensitivity analysis conducted in Sect. 4.4.3. The dispersion of the MC outputs was then analyzed to extract the statistical indices governing the outputs. These indices provide deeper insight into the capability of the detection system in detecting actors in different road scenarios.
Each MC run requires 15 s for the dedicated highway scenario. The flowchart of the MC simulation is shown in Fig. 7. During this scenario, actor 2 changes its lane once it closes in on the position of actor 3. To assess the capability of the proposed detection system, the MC campaigns are performed in several steps in this section.
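The MC campaign just described can be outlined as follows. The `run_scenario` function is a stand-in for the 15 s Simulink simulation, and the means, standard deviations, and bounds are illustrative values, not those of Table 4.

```python
import random
import statistics

# Skeleton of the MC campaign: draw each input from a Gaussian, clip it to
# its allowed range, run the scenario, and collect output statistics.


def draw_input(mu, sigma, lo, hi, rng):
    """Sample from N(mu, sigma) and clip to the allowed parameter range."""
    return min(hi, max(lo, rng.gauss(mu, sigma)))


def run_scenario(v_ego, x_ego):
    """Placeholder for the highway run; returns (mean TTC, mean distance)."""
    return 60.0 / v_ego, 80.0 - 0.5 * x_ego


def mc_campaign(n_runs=200, seed=1):
    rng = random.Random(seed)
    ttc_means = []
    for _ in range(n_runs):
        v_ego = draw_input(mu=15.0, sigma=3.0, lo=5.0, hi=25.0, rng=rng)
        x_ego = draw_input(mu=20.0, sigma=5.0, lo=0.0, hi=40.0, rng=rng)
        ttc, _dist = run_scenario(v_ego, x_ego)
        ttc_means.append(ttc)
    return statistics.mean(ttc_means), statistics.stdev(ttc_means)
```

The real campaign stores every output trace per run; the sketch keeps only the TTC statistics to show the loop structure.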
4.4.1 Minimum Number of Runs Requirement
A minimum number of MC runs is required to analyze the performance of the proposed detection system effectively [27]. The fine-scaled Monte Carlo (MC) analysis was performed with large computational savings of 40% (an average reduction of 4 h and 10 min) compared to classic MC methods. To perform the simulations and evaluate the effect of the proposed detection system on the critical outputs, a computer with an Intel Core i7 CPU running at 2.6 GHz and 12 GB of RAM was used.
In iterative MC methods there is no upper limit to the number of iterations that can be simulated, and different populations and confidence levels are available. However, the minimum number of populations (or runs) should be studied for real-time systems (such as AVs) in order to improve the speed of decision making. The minimum reliable number of runs is computed based on a confidence interval for a precise confidence limit of 98% for the developed radar detection system. In this study, the sample mean and variance given in (13) are used to find how many iterations are necessary to achieve a quantified maximum percentage error at a specified confidence level:

$$\overline{x }=\frac{1}{n}\sum_{i=1}^{n}{x}_{i},\qquad {s}^{2}=\frac{1}{n-1}\sum_{i=1}^{n}{\left({x}_{i}-\overline{x }\right)}^{2}$$
(13)
where \({z}_{c}\) is the confidence coefficient corresponding to the chosen confidence level, which is 98% (\({z}_{c}=2.33\)) in this study, \({x}_{i}\) is the ith sample, \(\overline{x }\) is the sample mean, and \({E}_{max}\) is the maximum error (in percent). The minimum number of runs is then:

$${n}_{pop}={\left(\frac{100\,{z}_{c}\,s}{{E}_{max}\,\overline{x }}\right)}^{2}$$
(14)
It is observed that the minimum number of iterations converges rapidly, indicating that the calculated \({n}_{pop}\) is quite stable after 50 iterations.
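This minimum-run estimate can be sketched as below, assuming the standard confidence-interval bound \(n\ge {\left(100\,{z}_{c}\,s/\left({E}_{max}\,\overline{x }\right)\right)}^{2}\) with the error expressed as a percentage of the sample mean; the pilot samples are illustrative.

```python
import math

# Minimum number of MC runs from a pilot sample, for confidence coefficient
# z_c and a maximum percentage error e_max_pct on the estimated mean.


def min_mc_runs(samples, z_c=2.33, e_max_pct=2.0):
    n = len(samples)
    x_bar = sum(samples) / n
    # Unbiased sample standard deviation (divide by n - 1).
    s = math.sqrt(sum((x - x_bar) ** 2 for x in samples) / (n - 1))
    bound = (100.0 * z_c * s) / (e_max_pct * x_bar)
    return math.ceil(bound ** 2)
```

A more dispersed pilot sample demands more runs: doubling the pilot standard deviation quadruples the required population.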
In this study, a statistical analysis of the concatenation signal, which combines the radars' outputs, is evaluated to determine the minimum required number of runs. In each MC run, the concatenation signal is recorded for 15 s of simulation runtime, and the mean value of the signal is calculated. Finally, the mean value and standard deviation of the mean concatenations over the MC runs are calculated, with the results shown in Fig. 8. The asymptotic trend of the statistics shows that the expected changes are small once more than 140 MC runs are performed. The main reason is that the statistical characteristics approach their true values as the number of MC runs increases, until convergence is achieved. Furthermore, the histogram of the mean concatenations demonstrates that the binning of the values does not change once the MC runs exceed 140 simulations. The statistics show that the MC outcome is sufficiently stable once the number of MC runs rises above 140 iterations. In this study, all simulations were performed with 200 runs, significantly more than the calculated minimum number of runs.
4.4.2 MC Analysis
The results of the MC simulation using the proposed model and the dedicated scenario are presented in this subsection. A range of simulation parameters is selected from Tables 3 and 4. The main target of the MC campaign is to assess the system's performance in the presence of stochastic conditions. The statistics of some dominant variables (i.e. outputs) are tracked to evaluate the capability of the proposed detection system during the simulation. In this study, the range of the concatenation signal is the most important observable for examining the detection quality of the system. The envelope of the concatenation signals obtained over 15 s of simulation in 200 MC runs is demonstrated in Fig. 9. As observed, the ego vehicle has multiple detections of the actors in all simulations. The transparency of the curves shows the relative density of the MC output, and more than 50% of the results fall into the highest-density area of the curves (indicated by the high level of opacity). For further investigation, the highly correlated TTC and relative distance are studied stochastically. These parameters give an indication of the whole system's performance. The system safety can be qualitatively assessed using the simple metric that the higher these parameters are, the safer the system is. The 2D probability density function of the mean TTC and relative distance over the MC runs is depicted in Fig. 10. The probability analysis verifies that the triple radar detection system yields acceptable positive TTC and relative distance values. The critical outputs are normally distributed. The probability of the occurrence of a pair of TTC and mean relative distance values is obtained using Eqs. (15) and (16):

$$P\left(T,M\right)=\frac{1}{2\pi \sqrt{\mathrm{det}\left(\Sigma \right)}}\,\mathrm{exp}\left(-\frac{1}{2}{\left(\left[\begin{array}{c}T\\ M\end{array}\right]-\left[\begin{array}{c}{\mu }_{T}\\ {\mu }_{M}\end{array}\right]\right)}^{T}{\Sigma }^{-1}\left(\left[\begin{array}{c}T\\ M\end{array}\right]-\left[\begin{array}{c}{\mu }_{T}\\ {\mu }_{M}\end{array}\right]\right)\right)$$
(15)

$$\Sigma =\mathrm{diag}\left(\mathrm{cov}\left(T,M\right)\right)$$
(16)
$$P\left(T,M\right)=\frac{1}{2\pi \sqrt{\mathrm{det}\left(\Sigma \right)}}\,\mathrm{exp}\left(-\frac{1}{2}{\left[\begin{array}{c}T-{\mu }_{T}\\ M-{\mu }_{M}\end{array}\right]}^{\mathrm{T}}{\Sigma }^{-1}\left[\begin{array}{c}T-{\mu }_{T}\\ M-{\mu }_{M}\end{array}\right]\right) \quad (15)$$

$$\Sigma =\mathrm{cov}\left(T,M\right)=\mathrm{diag}\left({\sigma }_{T}^{2},{\sigma }_{M}^{2}\right) \quad (16)$$

where \(T\) and \(M\) are the TTC and mean relative distance data, \({\mu }_{T}\) and \({\mu }_{M}\) are the corresponding mean values, and the operators are defined as follows: \(\mathrm{det}\) computes the determinant, \(\mathrm{cov}\) the covariance matrix, and \(\mathrm{diag}\) a square diagonal matrix.
For this case study, the values of \({\mu }_{T}\) and \({\mu }_{M}\) are 12.3531 s and 49.7463 m, respectively, and the standard deviations of the TTC and mean relative distance are 3.3732 s and 7.4001 m. This indicates that, as a result of the proposed radar combination, the number and severity of worst-case scenarios (scenarios with low TTC and relative distance) are reduced considerably.
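Using the reported means and standard deviations, the bivariate normal density evaluation above can be sketched in Python. The diagonal covariance (i.e. treating TTC and relative distance as uncorrelated) is an assumption for illustration; with the raw MC samples one would compute the full covariance with `np.cov`:

```python
import numpy as np

# Reported MC statistics (Sect. 4.4.2)
mu = np.array([12.3531, 49.7463])   # [mean TTC (s), mean relative distance (m)]
sigma = np.array([3.3732, 7.4001])  # corresponding standard deviations

# Diagonal covariance matrix assumed for this sketch
cov = np.diag(sigma**2)

def bivariate_normal_pdf(x, mu, cov):
    """Evaluate the 2D Gaussian density of Eq. (15) at point x."""
    d = x - mu
    inv = np.linalg.inv(cov)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(cov)))
    return norm * np.exp(-0.5 * d @ inv @ d)

# Density at the distribution mode (the mean itself)
p_mode = bivariate_normal_pdf(mu, mu, cov)
```

Evaluating the density on a grid of (TTC, distance) pairs gives the kind of 2D probability surface shown in Fig. 10.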
4.4.3 Sensitivity Analysis
In this subsection, a sensitivity analysis is performed to identify the parameters with the greatest effect on the performance of the ego vehicle in level 2 ADAS testing. First, a correlation assessment is applied to identify the critical inputs. For this purpose, the system inputs were sampled randomly from the predetermined bounds given in Table 4. The effects of these parameters on the mean TTC and relative distance, as the key system outputs, are studied over 200 MC runs.
The ego vehicle's initial velocity and initial longitudinal position are identified as the most influential parameters for the mean TTC and relative distance, with noticeably high correlation coefficients. The correlation plots are shown in Fig. 11. The mean relative distance has a strong negative correlation with the ego vehicle's initial longitudinal position; a well-established detection paradigm can therefore increase system safety in scenarios where the ego vehicle starts from a critical initial position. The same interpretation applies to the strong correlation between the ego vehicle's velocity and the mean TTC.
Note that the geometrical parameters of the actors (length, width, and height) have little influence and are therefore not identified as safety-critical parameters. Likewise, dynamic parameters such as speed exert their influence through the relative distance, and so are not tracked as separate critical outputs. For further exploration, the dispersion of the key outputs against the most influential inputs is reported in Fig. 12. The high density of points in the graph confirms that the output parameters are strongly correlated with the ego vehicle's initial conditions.
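The correlation screening described above can be sketched as follows. The input ranges and the linear output models are illustrative stand-ins, not the paper's Table 4 values; the point is how a Pearson coefficient separates influential inputs (velocity, position) from weakly coupled ones (actor geometry):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_runs = 200

# Synthetic stand-ins for the randomly sampled inputs (ranges are assumed)
ego_velocity = rng.uniform(20.0, 30.0, n_runs)   # m/s
ego_position = rng.uniform(0.0, 50.0, n_runs)    # m
actor_length = rng.uniform(4.0, 12.0, n_runs)    # m (weakly coupled input)

# Illustrative output models: TTC driven by velocity, distance by position
mean_ttc = 30.0 - 0.7 * ego_velocity + rng.normal(0.0, 0.5, n_runs)
mean_rel_dist = 60.0 - 0.5 * ego_position + rng.normal(0.0, 2.0, n_runs)

def pearson(x, y):
    """Pearson correlation coefficient between an input and an output vector."""
    return np.corrcoef(x, y)[0, 1]

r_vel_ttc = pearson(ego_velocity, mean_ttc)        # strong negative
r_pos_dist = pearson(ego_position, mean_rel_dist)  # strong negative
r_len_dist = pearson(actor_length, mean_rel_dist)  # near zero
```

Ranking inputs by |r| against each output reproduces the screening step that singles out the ego vehicle's initial velocity and longitudinal position.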
4.4.4 Comparison
The MC method is a cost-effective paradigm for foreseeing probable failures in practice: the model's uncertainties are explored in terms of the critical parameters, and the prohibitive costs of physical tests are reduced. Moreover, the approach lends itself to systematic comparison between different methods in related engineering problems. In this study, the proposed detection system is compared to single-radar detection systems using MC simulations. For this purpose, 200 MC simulations are performed, in which the level 2 ADAS system is modeled with triple-radar, single long-range radar, and single medium-range radar systems. The dispersion plot of the mean relative distance, as a representative output for all of the illustrated models, is presented in Fig. 13.
The results show that there is a large number of occasions where the mean relative distance is low when a typical single-radar detection system is used. In contrast, the mean relative distances obtained by the proposed triple radar are significantly higher, which leads to better detection accuracy and system safety. The statistics of the mean relative distance obtained for all models are presented in Table 5. The following equations are applied to extract the required statistical parameters in Table 5:

$$\mu =\sum_{i=1}^{N}{x}_{i}P\left({x}_{i}\right)$$

$$\sigma =\sqrt{\sum_{i=1}^{N}{\left({x}_{i}-\mu \right)}^{2}P\left({x}_{i}\right)}$$

where \({x}_{i}\) is the mean relative distance extracted from each MC run, \(P({x}_{i})\) is the corresponding probability of the ith state, and \(N\) is the number of MC runs.
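A minimal sketch of these probability-weighted statistics, with synthetic stand-in data in place of the paper's MC outputs (for equally likely runs the formulas reduce to the ordinary sample mean and population standard deviation):

```python
import numpy as np

def weighted_mean_std(x, p):
    """Mean and standard deviation of MC outcomes x with probabilities p:
    mu = sum(x_i * P(x_i)),  sigma = sqrt(sum((x_i - mu)^2 * P(x_i)))."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                      # normalise to a valid distribution
    mu = np.sum(x * p)
    sigma = np.sqrt(np.sum((x - mu) ** 2 * p))
    return mu, sigma

# Equal-weight case: 200 MC runs, each outcome equally likely (illustrative data)
rng = np.random.default_rng(seed=2)
x = rng.normal(49.75, 7.4, size=200)     # stand-in mean relative distances (m)
p = np.full_like(x, 1.0 / len(x))
mu, sigma = weighted_mean_std(x, p)
```

Applying the same function to the outputs of each detection model yields the per-model rows compared in Table 5.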
Based on the results presented in Table 5, the triple radar produces the largest mean relative distance, which yields the highest level of safety among the compared detection systems. The triple-radar mean detection distance is improved by 13% over a single medium-range radar and 4.9% over a single long-range radar. The reduced standard deviation reflects the robustness of the system: a less severe worst case occurs when the triple-radar detection system is used. The standard deviation of the triple radar is more than 30% lower than that of a single medium-range radar and more than 15% lower than that of a single long-range radar. A detailed comparative MC analysis of an MRR system is conducted in this work for the first time. Owing to the lack of literature evaluating the whole AV system with statistical analysis techniques, this study provides results from typical single-radar detection systems for comparison purposes.
5 Conclusion
A reliable and innovative multi-range radar (MRR) detection system for a level 2 ADAS-equipped vehicle has been proposed in this paper. The multi-level radar system results in significantly higher detection reliability, with an increased mean detection distance (4.9% over medium-range and 13% over long-range radar) and a reduced standard deviation over existing methods (30% over medium-range and 15% over long-range radar). The approach also improved the quality of detection in severe weather conditions, and the proposed detection system operates independently of the target size and the initial condition of the ego vehicle. Different subsystems of the ego vehicle, including the AEB and LKA algorithms, were tested under different scenarios, and the system performance was validated using integrated system tests. In addition, the fine-scaled Monte Carlo (MC) analysis was performed with large computational savings of 40% (an average reduction of 4 h and 10 min) compared to classic MC methods, to evaluate the effect of the proposed detection system on the critical outputs. The results are also discussed in terms of a sensitivity analysis and a comparison with traditional single-radar detection systems. Despite the limited literature using statistical analysis for detection system evaluation, a detailed comparison is conducted in which a typical single-radar detection system is also considered.
From an engineering perspective, whilst the real-time MRR detections are presented to showcase the computational gains provided by the optimized MC algorithm, the physical insight and the engineering implications of parameter uncertainty are of independent interest in all problem cases. In this work, all problem cases are defined and modeled using a model-based systems approach. The results demonstrated that the proposed arrangement can detect targets of different sizes in different highway traffic scenarios, even in the presence of multiple objects, which leads to optimal operation of the AEB and LKA algorithms.
References
Fagnant DJ, Kockelman K (2015) Preparing a nation for autonomous vehicles: opportunities, barriers, and policy recommendations. Transp Res Part A Policy Pract 77:167–181
Bagloee SA, Tavana M, Asadi M, Oliver T (2016) Autonomous vehicles: challenges, opportunities, and future implications for transportation policies. J Mod Transp 24:284–303
Bimbraw K (2015) Autonomous cars: past, present and future: a review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology. In: Proceedings of 12th international conference on informatics in control, automation and robotics, Birmingham, UK
Divakarla KP, Emadi A, Razavi S (2019) A cognitive advanced driver assistance systems architecture for autonomouscapable electrified vehicles. IEEE Trans Transp Elect 5(1):48–58
Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet large scale visual recognition challenge. Int J Comput Vis 115(3):211–218
Bengler K, Dietmayer K, Farber B, Maurer M, Stiller C, Winner H (2014) Three decades of driver assistance systems: review and future perspectives. IEEE Intell Transp Syst Mag 6:6–22
Campbell S, Mahony N, Krpalcova L, Riordan D, Walsh J, Murphy A, Ryan C (2018) Sensor technology in autonomous vehicles: a review. In: 29th Irish signals and systems conference, pp 1–4
Yang G, Xue Y, Meng L, Wang P, Shi Y, Yang Q, Dong Q (2021) Survey on autonomous vehicle simulation platforms. In: 8th international conference on dependable systems and their applications, pp 692–699
Woo A, Fidan B, Melek WW (2019) Localization for autonomous driving. In: Handbook of position location, pp 1051–1087
Haris M, Glowacz A (2022) Navigating an automated driving vehicle via the early fusion of multimodality. Sensors 22:1425–1432
Xu Y, Xu D, Lin S, Han TX, Cao X, Li X (2011) Detection of sudden pedestrian crossing for driving assistance systems. IEEE Trans Syst Man Cybern 42:729–739
Ju TF, Lu WM, Chen KH, Guo JI (2014) Vision-based moving objects detection for intelligent automobiles and a robustness enhancing method. In: Proceedings of IEEE international conference on consumer electronics, Las Vegas, NV, USA
Ortega JD, Cañas PN, Nieto M, Otaegui O, Salgado L (2022) Challenges of large-scale multi-camera datasets for driver monitoring systems. Sensors 22:2554–268
Bialer O, Jonas A, Tirer T (2021) Super resolution wide automotive radar. IEEE Sens J 21:17846–17858
Sun S, Zhang YD (2021) 4D automotive radar sensing for autonomous vehicles: a sparsity-oriented approach. IEEE J Select Top Signal Process 15(4):879–891. https://doi.org/10.1109/JSTSP.2021.3079626
Chipengo U (2018) Full physics simulation study of guardrail radar returns for 77 GHz automotive radar systems. IEEE Access 6:70053–70060
Choi WY, Yang JH, Chung CC (2021) Data-driven object vehicle estimation by radar accuracy modeling with weighted interpolation. Sensors 21:2317–2335
Ding Zh, Sun Ch, Zhou M, Liu Zh, Wu C (2021) Intersection vehicle turning control for fully autonomous driving scenarios. Sensors 21:3995–4008
Fu Y, Li C, Yu FR, Luan TH, Zhang Y (2020) A decision-making strategy for vehicle autonomous braking in emergency via deep reinforcement learning. IEEE Trans Veh Tech 69:5876–5888
Bian Y, Ding J, Hu M, Xu Q, Wang J, Li K (2020) An advanced lane-keeping assistance system with switchable assistance modes. IEEE Trans Intel Transp Syst 21:385–396
Guo J, Wang J, Luo Y, Li K (2020) Takagi-Sugeno fuzzy-based robust H∞ integrated lane-keeping and direct yaw moment controller of unmanned electric vehicles. IEEE/ASME Trans Mechatron 26:2151–2162
Espinoza RT, Torriti MT (2013) Robust lane sensing and departure warning under shadows and occlusions. Sensors 13:3270–3279
Jayakumar T, Ramani G, Jamuna P, Ramraj B, Chandrasekaran G, Maheswari C, Ganji V (2023) Investigation and validation of PV fed reduced switch asymmetric multilevel inverter using optimization based selective harmonic elimination technique. Automatika 64(3):441–452
Kumar NS, Chandrasekaran G, Thangavel J, Priyadarshi N, Bhaskar MS, Hussien MG, Ali MM (2022) A novel design methodology and numerical simulation of BLDC motor for power loss reduction. Appl Sci 12(20):10596
Vanchinathan K, Valluvan KR, Gnanavel C, Gokul C (2022) Numerical simulation and experimental verification of fractional-order PIλ controller for solar PV fed sensorless brushless DC motor using whale optimization algorithm. Electr Power Comp Syst 50(1–2):64–80
Asef P, Denai M, Paulides JJH, Marques BR, Lapthorn A (2022) A novel multi-criteria local latin hypercube refinement system for commutation angle improvement in IPMSMs. IEEE Trans Ind Appl 59(2):1588–1602
Yeong DJ, Hernandez GV, Barry J, Walsh J (2021) Sensor and sensor fusion technology in autonomous vehicles: a review. Sensors 21:1–37
Kim J, Han DS, Senouci B (2018) Radar and vision sensor fusion for object detection in autonomous vehicle surroundings. In: International conference on ubiquitous and future networks
Enayati J, Asef P, Jonnalagadda Y (2022) A novel triple radar arrangement for level 2 ADAS detection system in autonomous vehicles. In: IEEE 10th conference on systems, process & control (ICSPC), https://doi.org/10.1109/ICSPC55597.2022.10001787, pp 1–6, Melaka, Malaysia
Li W, Wang J, Lu L (2013) A novel scheme for DVL-aided SINS in-motion alignment using UKF techniques. Sensors 13:1046–1060
Hsu LY, Chen TL (2012) Vehicle dynamic prediction systems with online identification of vehicle parameters and road conditions. Sensors 12(11):15778–15800
Enayati J, Rahimnejad A, Gadsden SA (2021) LED reliability assessment using a novel Monte Carlo-based algorithm. IEEE Trans Dev Mater Rel 21(3):338–347
Blades L, Douglas R, Early J, Lo CY, Best R (2020) Advanced driver-assistance systems for city bus applications. SAE Technical Papers. SAE International, pp 1–12
Utriainen R, Pollanen M, Liimatainen H (2020) The safety potential of lane keeping assistance and possible actions to improve the potential. IEEE Trans Intel Veh 5:556–564
Lee K, Li SE, Kum D (2019) Synthesis of robust lane keeping systems: Impact of controller and design parameters on system performance. IEEE Trans Intel Transp Syst 20:3129–3141
Enayati J, Moravej Z (2017) Real-time harmonic estimation using a novel technique for embedded system implementation. Int Trans Electr Energ Syst 27:1–13
Cao Zh, Yang D, Xu Sh, Peng H, Li B, Feng Sh, Zhao D (2021) Highway exiting planner for automated vehicles using reinforcement learning. IEEE Trans Intell Transp Syst 22(2):990–1000
Enayati J, Sarhadi P, Poyan M, Zarini M (2015) Monte Carlo simulation method for behavior analysis of an autonomous underwater vehicle. Proc Inst Mech Eng Part M J Eng Marit Environ 230(3):481–2015
Asef P, Lapthorn A (2021) Overview of sensitivity analysis methods capabilities for traction AC machines in electrified vehicles. IEEE Access 9:23454–23471
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Enayati, J., Asef, P. & Wilson, P. Resilient Multi-range Radar Detection System for Autonomous Vehicles: A New Statistical Method. J. Electr. Eng. Technol. 19, 695–708 (2024). https://doi.org/10.1007/s42835-023-01567-z