
1 Introduction

New driver assistance systems, as a preliminary step towards automated driving, require more complete knowledge of the environment. Perception sensors are used for this purpose, but positioning on digital maps is also a highly valuable tool that complements the situational awareness information. Consequently, digital maps must offer increasing precision and detail [1].

Although the techniques used to build digital maps allow road geometry data to be stored [2], the sight distance is not usually obtained directly, even though it is of vital importance in some scenarios [3]. Visibility on a road should be understood as the distance up to which an observer can distinguish objects such as the road itself, other vehicles, signs or obstacles. This visibility is highly influenced by the environment: the geography of the terrain, the vegetation, the geometry of the road, obstacles that obstruct the line of sight, the local weather and the ambient light at each hour of the day.

Standard 3.1-IC [4] defines road layout design conditions in terms of braking and overtaking distances and the corresponding visibilities. It also provides geometric considerations on visibility in circular and vertical curves and indicates that the horizontal and vertical alignments must be coordinated.

In [5], existing methods are classified into those that obtain 2D visibility models, in either the horizontal or the vertical plane of the road, and those that obtain 3D models, combining the 2D methods with the greater data processing capacity made possible by technological development. Studies such as [6, 7] present methods to estimate the sight distance from previously acquired road geometry data. Another approach is presented in [8], where visibility zones around a vehicle are built from its GPS position using data stored in a Geographic Information System (GIS).

In this paper, a geometric algorithm is proposed to estimate the sight distance from the map information. The estimate can be updated in real time when obstacles are detected by the on-board sensors while the vehicle moves, so that the possibility of overtaking can be determined at each point on the road. This flexibility widens its field of application compared to algorithms that only work offline on the digital road map and cannot adapt to changing situations involving dynamic elements or consider elements that are critical for the overtaking maneuver. The algorithm can thus support decision-making in driver assistance systems and in advanced levels of automated driving.

2 Method

The proposed method is based on knowledge of the horizontal and vertical road geometry. Methods for measuring this geometry by driving along the route with an instrumented vehicle have existed for decades [9]. In this paper, the digital map is obtained using a vehicle equipped with a GPS receiver, an inertial system and perception sensors such as cameras and LiDAR [10]. Fusing the first two systems reduces the inaccuracies and signal losses typical of satellite positioning as well as the cumulative errors of inertial systems. The cameras identify road markings and traffic signs, and the LiDAR provides variables such as the dimensions of the road cross-section, the location of road markings or the identification of elements on the roadsides [11]. In terms of geometry, the (x, y, z) coordinates of the centerline of one of the traffic lanes are obtained and, knowing the width of the lanes and of the whole roadway, the centerline of the road, the remaining lanes and the road edges can be reconstructed.
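As an illustration of this reconstruction step, the following sketch offsets the measured lane centerline laterally, using the local direction of travel, to obtain the road axis, the opposite lane and the road edges. It is a simplified interpretation of the procedure, not the implementation used in the paper; the function name and lane width are assumptions.

```python
import numpy as np

def offset_polyline(xy, offset):
    """Shift a 2D polyline laterally by `offset` metres.

    Positive offsets move the line towards the left of the direction of
    travel; `xy` is an (N, 2) array of x-y centerline coordinates.
    """
    d = np.diff(xy, axis=0)
    d = np.vstack([d, d[-1]])                        # repeat last tangent
    t = d / np.linalg.norm(d, axis=1, keepdims=True)
    n = np.column_stack([-t[:, 1], t[:, 0]])         # left-pointing unit normal
    return xy + offset * n

# Hypothetical example: reconstruct a two-lane road from the measured
# centerline of the right lane, assuming a 3.5 m lane width.
lane_w = 3.5
right_lane = np.array([[0.0, 0.0], [10.0, 0.5], [20.0, 1.5], [30.0, 3.0]])
road_axis = offset_polyline(right_lane, lane_w / 2)    # road centerline
left_lane = offset_polyline(right_lane, lane_w)        # opposite-lane centerline
right_edge = offset_polyline(right_lane, -lane_w / 2)  # right road edge
left_edge = offset_polyline(right_lane, 1.5 * lane_w)  # left road edge
```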

The sight distance calculation is divided into two parts: horizontal and vertical. After carrying out both calculations separately, the sight distance limited by the road geometry is the minimum of the two values, and it must then be compared with the distances imposed by other conditions, such as the obstacle in front of the ego-vehicle in the overtaking maneuver or the range of the on-board perception sensors. The method is presented in a generic way so that it can be adapted to variable viewpoints, both vertically (the driver's eye height or the position of the sensor on the roof or at the front of the vehicle) and transversally.
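The combination step amounts to taking the minimum of all limiting distances. A minimal sketch under that reading is shown below; the function and parameter names are illustrative and not taken from the paper.

```python
def effective_sight_distance(d_horizontal, d_vertical,
                             d_obstacle=None, sensor_range=None):
    """Combine the individual limits into a single sight distance.

    The road geometry limits the view to min(d_horizontal, d_vertical);
    an obstacle ahead and the perception-sensor range, when given, can
    only reduce that value further.
    """
    limits = [d_horizontal, d_vertical]
    if d_obstacle is not None:
        limits.append(d_obstacle)
    if sensor_range is not None:
        limits.append(sensor_range)
    return min(limits)

# Example: geometry allows 420 m (horizontal) and 310 m (vertical),
# but the on-board sensors only reach 250 m.
print(effective_sight_distance(420.0, 310.0, sensor_range=250.0))  # 250.0
```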

In the horizontal projection, the x-y coordinates of the central axis of the road are available. According to Standard 3.1-IC [4], the viewpoint is located 1.5 m from the road axis; this is a configurable parameter. In addition, adjustable road edges have been defined at a given distance from the axis, forming a corridor beyond which no vision is assumed because of obstacles located on the roadsides. These lateral limits are a strong simplification of reality, but the approach has two advantages: on the one hand, a conservative estimate can be obtained by using small values for the corridor width; on the other, the procedure can be generalized in real time to variable widths based on what the vehicle's on-board sensors detect. Vision is therefore limited by the right and left edges of the corridor, and its range extends up to the farthest point at which the centerline of the left lane, assumed to lie 1.5 m from the road axis, can be seen continuously, as shown in Fig. 1a.
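A minimal sketch of this horizontal check is given below. It assumes the corridor edges are available as polylines and declares a target point on the opposite-lane centerline visible when the straight sight line does not cross either edge; all names are illustrative and the actual implementation may differ.

```python
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """True if 2D segment p1-p2 strictly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def horizontal_sight_distance(viewpoint, targets, left_edge, right_edge):
    """Length of the opposite-lane centerline that is seen continuously.

    `targets` are successive centerline points of the opposite lane and
    `left_edge` / `right_edge` the corridor border polylines; the first
    occluded target ends the continuous visibility range.
    """
    edges = (left_edge, right_edge)
    distance, prev = 0.0, targets[0]
    for target in targets:
        blocked = any(
            segments_intersect(viewpoint, target, edge[i], edge[i + 1])
            for edge in edges for i in range(len(edge) - 1)
        )
        if blocked:
            break
        distance += float(np.linalg.norm(np.asarray(target) - np.asarray(prev)))
        prev = target
    return distance
```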

Regarding the calculation in the vertical projection, according to Standard 3.1-IC [4], the viewpoint is located at a height of 1.1 m above the ground, the same as the minimum height at which a vehicle moving in the opposite direction must be detectable; both values are configurable parameters in the calculations. Figure 1b illustrates the concept: the aim is to find the minimum distance at which an oncoming vehicle would stop being seen because the line of sight intersects the road surface itself.
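One way to implement the vertical check, sketched below under the assumption that the vertical profile is sampled at discrete stations, is to test whether the straight line between the observer's eye and the top of the oncoming vehicle stays above the road profile at every intermediate station; names and defaults are illustrative.

```python
def vertical_sight_distance(stations, elevations, h_eye=1.1, h_target=1.1):
    """Farthest distance at which the sight line still clears the road.

    `stations` are cumulative distances along the road (m) and
    `elevations` the corresponding road heights; the observer is at the
    first station with the eye `h_eye` above the road surface.
    """
    s0, z_eye = stations[0], elevations[0] + h_eye
    sight = 0.0
    for j in range(1, len(stations)):
        z_target = elevations[j] + h_target
        visible = True
        for k in range(1, j):
            # Height of the sight line at the intermediate station k.
            frac = (stations[k] - s0) / (stations[j] - s0)
            line_z = z_eye + frac * (z_target - z_eye)
            if line_z < elevations[k]:   # the road itself blocks the view
                visible = False
                break
        if not visible:
            break
        sight = stations[j] - s0
    return sight
```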

The algorithm also considers the visibility restrictions imposed by the obstacle that the ego-vehicle wishes to overtake, as shown in Fig. 1c. In this case, the closest points of the centerline of the adjacent lane that are no longer perceived from the viewpoint are identified.
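As a rough sketch of this occlusion step, the rear of the vehicle to be overtaken can be represented by a single transverse segment and every sight line that crosses it marked as blocked; this reuses the hypothetical `segments_intersect` helper from the horizontal-projection sketch above and is not the paper's exact formulation.

```python
def occluded_targets(viewpoint, targets, obstacle_rear):
    """Points of the adjacent-lane centerline hidden behind the obstacle.

    `obstacle_rear` is the (left corner, right corner) segment describing
    the rear of the vehicle to be overtaken, in the same x-y frame as
    `targets`; `segments_intersect` comes from the earlier sketch.
    """
    q1, q2 = obstacle_rear
    return [t for t in targets if segments_intersect(viewpoint, t, q1, q2)]
```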

Fig. 1. Diagram of the algorithm to determine the sight distance: a) horizontal projection, b) vertical projection, c) facing an obstacle in front of the ego-vehicle

3 Results

The algorithm has been applied to real roads. Specifically, the application to the M-104 road in Madrid (Spain) is shown. It is a single carriageway road, with one lane in each direction, notable for its great diversity of radii of curvature (including quite small values) and changes in elevation. Figure 2 shows the sight distance results for each of the profiles calculated according to road geometric criteria.

Fig. 2. Sight distance limited by horizontal alignment (green) and vertical alignment (red)

The influence of certain variable parameters on the results is analyzed. The width of the free corridor on both sides of the road affects the calculation of the sight distance limited by the horizontal alignment. An almost linear trend is observed between the average sight distance and the corridor width, so that, as the corridor narrows, the horizontal alignment becomes more limiting than the vertical one (Table 1).

Table 1. Influence of the corridor width

In real-time operation, the corridor width is considered variable along the road based on the LiDAR detections, so that obstacles detected on the roadsides can be characterized and positioned on the digital map.

Regarding the height of the oncoming vehicle, the sight distance is highly sensitive to variations around small values and becomes progressively insensitive at large values, around 3 m. An analogous behavior is observed for the vertical position of the sensor on the ego-vehicle: elevated positions are preferable, although practical implementation constraints limit their values.

On the other hand, the stretches in which the sight distance exceeds the measurement range of the on-board sensors used for assistance systems or partially automated driving are analyzed. Taking an average range of 250 m, it is found that on the analyzed road 14% of the total length would have visibility limited by the sensor range.

Finally, in an overtaking maneuver, the vehicle to be passed represents an obstacle to the field of view of the ego-vehicle. In this case, both the distance between the two vehicles and the dimensions of the leading one are relevant variables. Table 2 shows the influence of a car to be overtaken travelling along the centerline of the lane. As can be seen, the proximity of the obstacle greatly affects visibility, with the largest influence at separations below 50–60 m. Table 3 shows the influence of the obstacle size.

Table 2. Influence of the distance between the ego-vehicle and the vehicle to be overtaken.
Table 3. Influence of the obstacle size

It should be noted that the calculations are made for the entire route, but the method can also be applied to specific stretches, or the data can be updated in real time.

4 Conclusions

In this paper, an algorithm has been presented for the automatic and geometric calculation of road sight distance, which can be implemented in real time and integrated with perception systems to modify parameters that influence the calculation.

The study of the influence of various parameters on the results for a single carriageway road reveals the importance of detecting these variables at all times, such as the free cross-sectional corridor, which can be readily obtained with perception technologies such as LiDAR (for example, when there are obstacles on the roadsides, or when an obstacle in front of the vehicle itself hinders the view of the adjacent lane prior to an overtaking maneuver). This flexibility widens the field of application compared to algorithms that only work offline and cannot adapt to changing situations. Regarding the influence of the obstacle height, more or less conservative criteria can be adopted for decision making.

In this way, the algorithm can support decision-making for driver assistance systems or even for advanced levels of automated driving. It should be noted, however, that the maximum range of the sensors in many cases limits the sight distance allowed by the road geometry. This is a relevant limitation that could motivate resorting to other technologies, such as V2X communications in the framework of connected vehicles and cooperative driving, to carry out these overtaking maneuvers effectively [12].