Accelerating sampling-based optimal path planning via adaptive informed sampling

Abstract—This paper improves the performance of RRT*-like sampling-based path planners by combining admissible informed sampling and local sampling (i.e., sampling the neighborhood of the current solution). An adaptive strategy that accounts for the cost progression regulates the trade-off between exploration (admissible informed sampling) and exploitation (local sampling). The paper proves that the resulting algorithm is asymptotically optimal. Furthermore, its convergence rate is superior to that of state-of-the-art path planners, such as Informed-RRT*, both in simulations and in manufacturing case studies. An open-source ROS-compatible implementation is also released.


I. INTRODUCTION
Path planning is a fundamental problem in robotics, with a heavy impact on a broad variety of applications. For example, recent developments in humanoid robotics require fast planning tools to handle high-dimensional systems. Similarly, industrial and service robotics often deal with dynamic environments where the robot must plan its motion on the fly. An example is a robot arm that picks objects from a conveyor belt or cooperates with humans to assemble a piece of furniture. A common thread of these applications is the high dimensionality of the search space and the limited computing time available to find a solution.
Path-planning problems are solved mainly through graph-based or sampling-based approaches. Graph-based methods [1], [2] are used mainly for navigation problems, while sampling-based methods are the most widespread in robotic manipulation because they are more efficient with high-dimensional systems. Sampling-based methods explore the search space by randomly sampling the robot configuration space to find a sequence of feasible nodes from start to goal. Different strategies for sampling and connecting nodes have given birth to different algorithms, such as RRT [3], EST [4], and PRM [5].
Sampling-based methods are successful in robotics because they do not require discretizing the search space, do not explicitly require the construction of the obstacle space, and generalize well to different robot structures and specifications. These advantages come at the cost of weaker completeness and optimality guarantees. In particular, they can provide asymptotic optimality; that is, the probability of converging to the optimal solution approaches one as the number of samples goes to infinity [6]. The convergence rate of such algorithms is relatively slow, and actual implementations usually stop the search well before reaching the optimum. A meaningful improvement to optimal planners came with the introduction of informed sampling [7]. Informed sampling-based planners shrink the sampling space every time the solution cost decreases, making the convergence to the optimal solution faster. However, these planners show a slow convergence rate when the cost heuristic is poorly informative. In the case of path-length minimization, the Euclidean distance can be chosen as a heuristic of the cost between two points. However, with many obstacles, there is a large difference between the Euclidean distance and the actual minimum path length between the two points. In these cases, the convergence speed resembles that of uninformed planners (e.g., RRT* [6]). This paper tackles this issue by proposing a mixed strategy that alternates sampling the informed set and the neighborhood of the current solution. The rationale is that the cost of the solution improves by sampling its neighborhood (i.e., local sampling), with a consequent quick reduction of the measure of the informed set.
Alternating admissible and local informed sampling is an example of the classic exploration-versus-exploitation dilemma, which is hardly solvable with a fixed ratio between the two sampling strategies. To overcome this issue, we propose an adaptive technique to dynamically balance the choice of one sampling strategy over the other. As a result, the search algorithm prefers exploitation (i.e., local sampling) only as long as it is useful and switches to exploration (i.e., admissible informed sampling) to avoid stagnation.
The paper's contribution is twofold. First, it defines a mixed sampling strategy that combines global and local informed sampling for asymptotically optimal sampling-based path planners. Local informed sampling oversamples the neighborhood of the current solution to quickly reach a local optimum, while global informed sampling guarantees asymptotic optimality. Second, it proposes an asymptotically optimal algorithm that uses the mixed sampling strategy and dynamically adjusts the trade-off between global and local sampling, showing that this outperforms state-of-the-art planners, such as Informed-RRT*, on different classes of problems.
An open-source ROS-compatible version of the planner is publicly available [8].
The paper is organized as follows. Section II introduces the reader to optimal planning and informed sampling. Section III discusses previous works on the acceleration of informed sampling-based planners. Section IV discusses the motivation of this work through some illustrative examples. Section V describes the proposed method. Section VI compares it with other methods. Section VII concludes and discusses future works.

II. OPTIMAL PATH PLANNING
This section introduces the concepts of path planning, informed sets, and informed sampling used throughout the paper.
The path planning problem is formulated in the configuration space, X ⊆ R^n, which denotes all possible configurations x of the system (for robot manipulators, x is usually a vector of joint angles). Let X_obs be the space of all those configurations in collision with an obstacle, and X_free = cl(X \ X_obs) the obstacle-free configuration space, where cl(·) denotes the closure of the set.

Definition 1. (optimal path planning) [adapted from [7]] Given a starting point x_start and a set of desired goal points X_goal ⊂ X, optimal path planning is the problem of finding a curve σ*: [0, 1] → X_free such that:

σ* = arg min_{σ ∈ Σ} c(σ), subject to σ(0) = x_start and σ(1) ∈ X_goal,

where c: Σ → R_≥0 is a Lipschitz continuous cost function associating a cost c(σ) with a curve σ ∈ Σ, Σ is the set of solution paths, and R_≥0 is the set of non-negative real numbers.
Remark 1. Cost function c is often the length of the path so that the optimal motion plan is the shortest collision-free path from x start to X goal .
If an algorithm can find a solution to the optimal path planning problem, it is said to be an optimal path planner. Sampling-based path planners, such as RRT* [6], can only ensure probabilistic convergence to the optimal solution. This weaker form of optimality is referred to as (almost-sure) asymptotic optimality. The convergence rate of an asymptotically optimal planner is related to the probability of sampling points that can improve the current solution. This set of points is referred to as the omniscient set [7]. RRT* and similar algorithms, as proposed in [6], are very inefficient at sampling the omniscient set (the probability that RRT* samples a point that belongs to the omniscient set decreases factorially with the state dimension [7]). To increase the probability of sampling the omniscient set, Gammell et al. [7] coined the concept of informed sampling; that is, sampling an approximation of the omniscient set (the informed set) so that the probability of finding a point that improves the current solution is higher.
If the informed set is a superset of the omniscient set, it is referred to as an admissible informed set.
Definition 2. (admissible informed set) [adapted from [7]] An informed set X̂_f is a heuristic estimate of the omniscient set X_f. If X̂_f ⊇ X_f, the informed set is said to be admissible.
In minimum-length path planning, it is always possible to construct an admissible informed set by considering that the shortest path through a sample x ∈ X is lower bounded by the sum of the Euclidean distances from x_start to x and from x to x_goal ∈ X_goal. As a consequence, all possibly improving points lie in the so-called L2-informed set, X̂_f, given by:

X̂_f = { x ∈ X : ∥x_start − x∥_2 + ∥x − x_goal∥_2 ≤ c_k },   (2)

where c_k is the cost of the best solution at iteration k. Notice that such an informed set is equivalent to the intersection of the free space X_free and an n-dimensional hyper-ellipsoid symmetric about its transverse axis, with focal points at x_start and x_goal, transverse diameter equal to c_k, and conjugate diameters equal to √(c_k² − c_min²), where c_min = ∥x_goal − x_start∥_2. The volume of the hyper-ellipsoid decreases progressively as the solution cost c_k decreases, improving the convergence rate of the algorithm.
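As an illustration, the membership test of the L2-informed set and the direct sampling of its bounding hyper-ellipsoid can be sketched as follows. This is a minimal Python sketch following the construction of [7]; the function names and the use of NumPy are our own, and collision checking against X_free is omitted.

```python
import numpy as np

def in_informed_set(x, x_start, x_goal, c_k):
    # Admissibility test: x may improve the solution only if the sum of
    # Euclidean distances through x does not exceed the best cost c_k.
    return np.linalg.norm(x - x_start) + np.linalg.norm(x - x_goal) <= c_k

def sample_informed(x_start, x_goal, c_k, rng):
    # Direct, uniform sampling of the prolate hyper-ellipsoid with foci
    # x_start and x_goal and transverse diameter c_k (as in Informed-RRT*).
    n = x_start.size
    c_min = np.linalg.norm(x_goal - x_start)
    center = (x_start + x_goal) / 2.0
    # Rotation C aligning the first axis with the transverse axis (SVD trick).
    M = np.outer((x_goal - x_start) / c_min, np.eye(n)[0])
    U, _, Vt = np.linalg.svd(M)
    C = U @ np.diag([1.0] * (n - 1) + [np.linalg.det(U) * np.linalg.det(Vt)]) @ Vt
    # Semi-axes: c_k/2 (transverse) and sqrt(c_k^2 - c_min^2)/2 (conjugate).
    L = np.diag([c_k / 2.0] + [np.sqrt(c_k**2 - c_min**2) / 2.0] * (n - 1))
    # Uniform sample of the n-dimensional unit ball, then scale and rotate.
    b = rng.normal(size=n)
    b *= rng.uniform() ** (1.0 / n) / np.linalg.norm(b)
    return C @ (L @ b) + center
```

By construction, any point returned by `sample_informed` satisfies the membership test above, so no rejection against the informed set itself is needed.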
III. RELATED WORKS

The main issue with sampling bias is that, depending on the geometry of the problem, the heuristic may discard points of X_f [7]. This can be deleterious for the convergence speed, and it may even compromise the optimality of the algorithm. Compared to these works, admissible informed sampling never excludes any points possibly belonging to the omniscient set; thus, it retains asymptotic optimality regardless of the geometry of the problem. Nonetheless, convergence speed may be slow when the admissible heuristic is not informative.
Few works have attempted to speed up the convergence rate by combining informed planning and local techniques. In [18], [19], Kim and Song propose to run a deterministic path shortcutter every time the algorithm improves the solution. The shortcutting procedure acts as follows: i) it considers three consecutive nodes on the path at a time; ii) it discretizes the two corresponding edges; iii) it tries to connect the extreme nodes to the sampled segment until it finds a collision; iv) it moves the central node to the intersection of the two segments found in the previous step. Such an approach has two main drawbacks. First, the computational time due to the shortcutting is significant, as it requires an iterative edge evaluation (i.e., collision checking) every time it tries to refine a triple of nodes. Second, this approach is suitable only for minimum-path problems, as it relies on the triangular inequality applied to each triple of nodes. [20] proposes to refine the current solution by moving the nodes of the tree based on gradient descent. Yet, every time a node is moved, the refinement process requires an intensive edge evaluation.
The idea of combining global and local optimization was also explored by Choudhury et al. [21], who propose a hybrid use of BIT* [22], a lazy heuristic-driven informed planner, and CHOMP [23], a gradient-based local planner. Roughly speaking, the local planner is used to solve a two-point problem between a pair of nodes. One main drawback is that the local planner is called every time an edge is evaluated, which may be computationally counterproductive. Other variants of BIT* were proposed in [24] and [25], but they focus on how to improve the heuristics by experience. [17] uses online learning (clustering of previous edges and Multi-Armed Bandits) to oversample promising regions.
Finally, [26] and [27] propose approaches to focus the search on subsets of the informed set. [27] decomposes a planning problem into two sub-problems and applies informed sampling to them. The union of the informed sets of the sub-problems is strictly contained in the informed set of the initial problem; thus, the search focuses on a smaller region. However, the performance of such an approximation strongly depends on the problem geometry, and the authors do not discuss how to retain asymptotic optimality. [26] estimates the cost-to-come of the tree leaves to bias the search towards a subset of the informed set, called the relevant region. In this case, the trade-off between exploration and exploitation is fixed. Thus, the performance depends on the problem geometry, and it may be even worse than admissible informed sampling.
Our approach is similar to the works mentioned above in that it alternates informed sampling and local refinement of the path. Compared to [18], [19], and [20], our method refines the path by sampling the neighborhood of the current solution, which allows for gradient-free refinement, even with generic cost functions. Moreover, [18], [19], and [20] tend to favor exploitation (i.e., path refinement) over exploration, wasting time optimizing suboptimal solutions (see the numerical results in Section VI). Similarly, [21] and [26] use a fixed balance between exploration and exploitation; thus, their performance may vary widely across different problems. Our method adjusts the trade-off between exploration and exploitation according to the cost progression, adapting to different problems. In this sense, our adaptive scheme could be used in [26] to dynamically balance the trade-off between exploration and exploitation, and in [27] to retain asymptotic optimality.

IV. MOTIVATION OF THE MIXED SAMPLING STRATEGY
To understand the motivation behind this work, consider the minimum-path problem in Figure 1a. Because of the presence of a large obstacle between x_start and x_goal, the L2-informed set is large and poorly informative. Sampling the neighborhood of the current solution would be much more efficient than considering the whole L2-informed set, as the path would quickly converge to the global optimum. This situation is expected when the current and the optimal solutions are homotopic. We will refer to this sampling strategy as local sampling.
On the other hand, when the current solution is locally optimal, any effort spent on local optimization would be useless.
For example, in Figure 1b, the optimal solution passes through the narrow passage between the two obstacles; thus, sampling the neighborhood of the current solution would lead to a local optimum (yellow in Figure 1c). As the solution approaches the local optimum, the probability of improving the solution via local sampling tends to zero.
Notice that fast convergence to a local minimum quickly reduces the volume of the informed set. It is therefore crucial to understand when local sampling is beneficial, without losing asymptotic global optimality.
In this paper, we combine admissible informed sampling and local sampling in a mixed sampling strategy. On the one hand, sampling the admissible informed set guarantees that all points of the omniscient set are taken into account. On the other hand, local sampling has a twofold role. First, if the local and the global optima correspond, it quickly converges to the solution, as in Figure 1c. Second, it reduces the size of the admissible informed set. Indeed, the Lebesgue measure of the L2-informed set is directly related to the best cost to date, c_k, as follows:

λ(X̂_f) ≤ c_k (c_k² − c_min²)^((n−1)/2) ζ_n / 2^n,

where ζ_n is the Lebesgue measure of the unit ball (dependent only on n) [7]. Hence, improving the current solution (even in the neighborhood of a local minimum) enhances the convergence speed to the globally optimal solution.
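The shrinkage effect can be checked numerically. The following sketch (ours, not from the paper's released code) evaluates the hyper-ellipsoid bound above:

```python
import math

def unit_ball_measure(n):
    # Lebesgue measure zeta_n of the n-dimensional unit ball.
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def informed_measure_bound(c_k, c_min, n):
    # Measure of the prolate hyper-ellipsoid bounding the L2-informed set:
    # c_k * (c_k^2 - c_min^2)^((n-1)/2) * zeta_n / 2^n.
    return c_k * (c_k**2 - c_min**2) ** ((n - 1) / 2) * unit_ball_measure(n) / 2**n
```

For instance, with c_min = 1 and n = 6, improving the cost from c_k = 2 to c_k = 1.5 shrinks the bound by more than an order of magnitude, which is why even a local improvement of the solution pays off globally.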
V. PROPOSED APPROACH

This section describes the proposed mixed sampling strategy. First, it defines the local informed set. Second, it designs an algorithm to dynamically change the local sampling probability based on the cost evolution. Finally, it proves asymptotic optimality.

A. Mixed-strategy sampling
Consider an n-dimensional path planning problem solved by a sampling-based planner. Let σ_k be the current solution at iteration k, with σ_k ⊂ X_free and c_k = c(σ_k). An RRT*-like planner is asymptotically optimal if the algorithm that connects nodes satisfies conditions on the minimum rewire radius and the sampler draws nodes from a superset of the omniscient set. If we drop the second condition, such a relaxed planner would converge to a local optimum. To formulate this idea more formally, we introduce the notion of local informed set. Then, we combine it with admissible informed sampling to obtain the adaptive mixed-strategy sampler used in the proposed planner.

Definition 3. (local informed set) The local informed set of the current solution σ_k is the intersection of the admissible informed set and the set of points with distance smaller than R from σ_k:

X̂_{f,l} = X̂_f ∩ { x ∈ X : min_{s ∈ [0,1]} ∥x − σ_k(s)∥_2 ≤ R }.   (5)

Lemma 1. (local optimality of local sampling) Consider an asymptotically optimal path planner and let the sampling algorithm draw samples only from the local informed set. The planner converges to a local minimum with a probability equal to one.
Proof: If the current solution is not a local optimum, the intersection of the omniscient set and any neighborhood of σ_k is non-empty (c is Lipschitz):

X_f ∩ { x ∈ X : min_{s ∈ [0,1]} ∥x − σ_k(s)∥_2 ≤ R } ≠ ∅ for all R > 0.

It follows that local sampling improves the solution with a probability greater than zero whenever the solution is not (locally) optimal.
We define hereafter a mixed-strategy sampler to combine admissible and local informed sampling soundly.

Definition 4. (mixed-strategy sampler) A local sampler and a global sampler are algorithms that draw samples from X̂_{f,l} and X̂_f, respectively. A mixed-strategy sampler draws samples by using a local sampler with probability ϕ and a global sampler with probability 1 − ϕ.
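In code, the mixed-strategy sampler of Definition 4 reduces to a biased coin flip between the two samplers. The sketch below is ours; the sampler callables are placeholders for any local and global informed samplers.

```python
import random

def mixed_sample(phi, local_sampler, global_sampler, rng=random):
    # Definition 4: draw from the local informed set with probability phi,
    # from the admissible (global) informed set with probability 1 - phi.
    if rng.random() < phi:
        return local_sampler()
    return global_sampler()
```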

Lemma 2. (optimality of admissible mixed-strategy samplers)
A sampling-based path planner that is asymptotically optimal under uniform sampling distribution is asymptotically optimal also under admissible mixed-strategy sampling.
Proof: A mixed-strategy sampler samples X_free with a non-uniform probability density d. If such a sampler is admissible, d can be seen as a mixture of probability densities such that:

d = (1 − ϕ) d_1 + ϕ d_2,

where d_1 is a strictly positive uniform probability density over X_free and d_2 is the probability density induced by the local sampler. Based on this consideration, the asymptotic optimality of the path planner traces back to the proof of asymptotic optimality of [28] with non-uniform sampling. In particular, the planner is still asymptotically optimal by adjusting the rewire radius by a factor (1 − ϕ)^(−1/n), as proved in Appendix D of [28].
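The radius correction mentioned in the proof amounts to a one-line inflation of the uniform-sampling rewire radius. In this sketch (ours), `r_uniform` denotes the rewire radius that guarantees asymptotic optimality under uniform sampling:

```python
def adjusted_rewire_radius(r_uniform, phi, n):
    # Inflate the rewire radius by (1 - phi)^(-1/n) to compensate for the
    # non-uniform mixture density (cf. Appendix D of [28]).
    return r_uniform * (1.0 - phi) ** (-1.0 / n)
```

Note that the inflation grows with the local-sampling probability ϕ and vanishes as ϕ → 0, i.e., as the sampler becomes uniform.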
At each iteration, the mixed-strategy sampler should select an appropriate value of ϕ based on the likelihood of improving the current solution. This is important to exploit the advantages of both admissible and local informed sampling (respectively, global asymptotic optimality and fast convergence to local minima) and mitigate their flaws (slow convergence speed and stagnation in local minima). We denote by p_k ∈ [0, 1] the guess that σ_k is not a local optimum at iteration k. If c_k < c_{k−1}, we increase p_{k+1} proportionally to the relative improvement of the cost such that:

p_{k+1} = ν p_k + (1 − ν) (c_{k−1} − c_k) / (c_{k−1} − u),   (6)

where ν ∈ [0, 1) is a forgetting factor that smooths the evolution of p and u is an admissible estimate of the best cost c*. Note that the cost c_k is non-increasing (namely, c_k ≤ c_{k−1}); therefore, p_k is a strictly positive number (assuming p_0 > 0). Moreover, p_{k+1} ≤ 1 because c_k ≥ u and p_0 ≤ 1.

Algorithm 1: Mixed-strategy informed planner
It follows that a selector that uses ϕ = p k is admissible.
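The update of p_k can be sketched as follows. The exact expression of (6) should be taken from the paper; the form below is our reconstruction, consistent with the stated properties: growth proportional to the relative cost improvement, decay by the forgetting factor otherwise, and 0 < p_k ≤ 1 whenever 0 < p_0 ≤ 1 and u ≤ c_k.

```python
def update_guess(p_k, c_prev, c_k, u, nu=0.999):
    # Reconstruction of Eq. (6): when the cost improves (c_k < c_prev), p grows
    # with the relative improvement (c_prev - c_k) / (c_prev - u); when the cost
    # stagnates the improvement term is zero and p decays by the factor nu.
    improvement = (c_prev - c_k) / (c_prev - u)
    return nu * p_k + (1.0 - nu) * improvement
```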

B. Proposed algorithm
The proposed planner is the variant of Informed-RRT* in Algorithm 1. It uses the guess p_k as the probability of sampling the local informed set (lines 1-6). Sample x is used to extend the tree (line 7), and p_{k+1} is updated according to (6) (lines 8 and 9).
Procedures informedSampling and localSampling sample the admissible informed set (2) and the local informed set (5), respectively. The former follows the implementation of [7], and the latter uses Algorithm 2.
Algorithm 2 randomly samples a ball of radius R centered at a random point along the current solution path. First, it uniformly samples the n-dimensional unit ball and assigns the value to b (lines 2-4). Then, it picks a random point, σ(s), on the path σ; the final candidate sample is obtained by scaling b from the unit ball to the ball of radius R centered at σ(s) (lines 5-6). Finally, it uses rejection to ensure that x ∈ X̂_f. Note that the rejection of the candidate is unlikely if R is small. Algorithm 2 does not sample the local informed set uniformly: points closer to the path have a higher probability of being sampled than points near the boundary of the tube. Moreover, Algorithm 2 over-samples regions "inside" the corners of the path. Non-uniform local sampling does not affect the asymptotic optimality of the planner (Lemma 2). Moreover, in minimum-length problems, over-sampling regions inside the corners may be beneficial in reducing the path length.
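A minimal version of Algorithm 2 can be sketched as follows (our sketch; the path is represented as a polyline of waypoints, and the rejection against the informed set (2) is left to the caller):

```python
import numpy as np

def local_sample(path, R, rng):
    # Sample a ball of radius R centered at a random point of the current
    # solution path, represented as an (m x n) array of waypoints.
    n = path.shape[1]
    # Lines 2-4: uniform sample b of the n-dimensional unit ball.
    b = rng.normal(size=n)
    b *= rng.uniform() ** (1.0 / n) / np.linalg.norm(b)
    # Line 5: random point sigma(s) along the polyline (uniform in arc length).
    seg_len = np.linalg.norm(np.diff(path, axis=0), axis=1)
    cum = np.cumsum(seg_len)
    s = rng.uniform(0.0, cum[-1])
    i = int(np.searchsorted(cum, s))
    t = (s - (cum[i] - seg_len[i])) / seg_len[i]
    sigma_s = path[i] + t * (path[i + 1] - path[i])
    # Line 6: scale the unit-ball sample to radius R and translate to sigma(s).
    return sigma_s + R * b
```

Every returned sample lies within distance R of the polyline, which reproduces the tube-shaped, path-biased density described above.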

C. Algorithm tuning and convergence performance
Algorithm 1 has two more parameters than Informed-RRT*: the radius R_0 and the forgetting factor ν. Appendix A provides an illustrative example showing the effect of these parameters on the convergence. Summarizing the results, R_0 ∈ [0.01, 0.02] and ν ≈ 0.999 consistently provide the best results across problems of different dimensionality and geometry.

VI. EXPERIMENTS
We test our Mixed-strategy Informed planner (MI-RRT*) with robot manipulators (6, 12, and 18 degrees of freedom), navigation of mobile manipulators, and a real manufacturing case study. We demonstrate that MI-RRT* consistently outperforms the baselines.

A. Robot manipulators
We consider three robotic cells (Figure 2). Each cell has four rectangular obstacles and a serial manipulator (6, 12, and 18 degrees of freedom, respectively). The cell descriptions and usage examples are available at [29].
We compare our planner (MI-RRT*) with Informed-RRT* [7], which uses a pure admissible informed sampling method, and with wrapping-based Informed-RRT* (Wrap-RRT*) [19], which applies a shortcutting procedure whenever it improves the solution. The additional parameters of MI-RRT* are tuned according to Section V-C, namely R = 0.02 (c_k − u) and ν = 0.999.
First, we show an example query to illustrate the behavior of the algorithms. Figure 3a shows the cost trend for a random planning query with n = 6, repeated 30 times for each planner. MI-RRT* provides a faster convergence rate and a smaller variance. Moreover, the median cost of the proposed algorithm is closer to the 10th percentile than that of the other strategies, highlighting the capability of MI-RRT* to converge sooner to the global minimum. The same behavior is also clear for n = 12, as shown in Figure 3b. In this case, Informed-RRT* suffers more from the curse of dimensionality, while Wrap-RRT* gets stuck in a local minimum for several iterations.
For an exhaustive comparison, we set up a benchmark as follows. Thirty queries are generated randomly (queries for which a direct connection between start and goal exists are discarded). The queries are solved with different planning times, between 0.5 and 5 seconds. Bounding the maximum planning time instead of the maximum number of iterations was preferred because the algorithms perform different operations during their iterations. Moreover, planning time is more meaningful in practical applications.
Each planner solves each query 30 times for maximum planning times equal to 0.5, 1.0, 2.0, and 5.0 seconds. The final cost of each query is normalized by an estimate of the minimum cost, obtained by solving the query with a maximum planning time equal to 60 seconds.
The box plots of Figure 4 show that MI-RRT* has a faster convergence rate as well as a smaller variance compared to both Informed-RRT* and Wrap-RRT*. Therefore, the proposed approach finds better and more repeatable paths given the same amount of time. This result is emphasized for larger values of n, as shown in Figures 4b-4c.
We did not observe significant differences between Wrap-RRT* and Informed-RRT*, probably because the improvement in the convergence rate is counterbalanced by the computational overhead due to the wrapping procedures, as mentioned in Section III.

B. Mobile manipulators
We consider navigation scenarios with mobile manipulators. Each robot consists of a 6-degree-of-freedom manipulator mounted on an omnidirectional mobile platform (two linear and one rotational degrees of freedom). In the first case, the robot has to move from one side of a wall with a narrow opening to the other (see Figure 5a). The problem has at least three homotopy classes: two circumnavigate the wall, and one passes through the narrow passage, requiring the robot to re-configure itself to fit the passage. We compare our MI-RRT* with Informed-RRT* [7] and Wrap-RRT* [19] over 30 repetitions. Results are in Figure 5b: MI-RRT* has the best convergence rate, followed by Wrap-RRT* and Informed-RRT*.
We also consider a second scenario with two mobile manipulators (for a total of 18 degrees of freedom) required to move from one side of a wall with two openings to the other. Results are in Figure 5c: similarly to the single-robot case, MI-RRT* has the best convergence rate, followed by Wrap-RRT* and Informed-RRT*, despite the greater number of iterations required by all methods to solve the problem.

C. Real-world case study
We validated our algorithm in a manufacturing mock-up cell designed within the EU-funded project Sharework. The cell consists of a 6-degree-of-freedom collaborative robot, a Universal Robots UR10e, mounted upside down and working on a work table in front of it (Figure 6). The proposed motion planner is implemented in C++ within ROS/MoveIt! [30]. An open-source version of the code is available at [8]. ROS/MoveIt! runs on an external computer, from which it sends the planned trajectory to the robot controller.
The robot is tasked with a sequence of fifty pick-and-place operations. We consider two experiments. In the first one, a table-shaped obstacle is placed upon the placing goal (Figure 6b). In the second one, a barrier separates the picking and placing goals (Figure 6c). These scenarios simulate realistic machine-tending operations, in which the robot needs to access a confined space. From a planning perspective, they introduce narrow passages, complicating the planning problem. For example, in the barrier experiment, the shortest path passes through the narrow space below the barrier, close to the table surface.
Figure 7 compares the performance of MI-RRT*, Informed-RRT*, and Wrap-RRT* with different planning times for the table and the barrier experiments. Similarly to Sections VI-A and VI-B, MI-RRT* has a faster convergence speed in both experiments. In the table experiment, MI-RRT* reduces the planning time by up to 34% and 13% compared to Informed-RRT* and Wrap-RRT*, respectively. In the barrier experiment, MI-RRT* reduces the planning time by up to 37% and 18% compared to Informed-RRT* and Wrap-RRT*, respectively. Note that, contrary to the simulations, Wrap-RRT* showed a significant improvement compared to Informed-RRT*. This suggests that the advantages of Wrap-RRT* are problem-dependent.
Overall, MI-RRT* finds better solutions within the same maximum planning time. As an example, Figure 7c shows the continuous trend of the normalized costs for the barrier experiment. The key result is that MI-RRT* approaches the best cost faster in the initial phase. For example, after 10 seconds, MI-RRT* reaches 1.4 c*, while Informed-RRT* and Wrap-RRT* reach 1.6 c* and 1.9 c*, respectively; after 60 seconds, MI-RRT* reaches 1.2 c*, while Informed-RRT* and Wrap-RRT* reach 1.35 c* and 1.55 c*, respectively.

VII. CONCLUSIONS
Comparisons with state-of-the-art methods highlight the effectiveness of the proposed method in improving the convergence speed, especially in high-dimensional problems. The method was also applied to a manufacturing-oriented case study, where the robot is tasked with a sequence of pick-and-place operations. Results show that the proposed planner converges more quickly to the optimal solution, allowing for shorter planning latencies in online applications.
An open-source implementation of the algorithm is available at [8]. The algorithm is implemented in C++ and is fully compatible with ROS/MoveIt! [30]. Examples of usage and benchmarking are also available at [29].

APPENDIX A EFFECT OF THE TUNING PARAMETERS
We analyze the effect of the parameters R_0 and ν used in Algorithm 1. To do so, we use an illustrative example consisting of a narrow-passage problem with one local minimum c_local and one global minimum c_global. Different dimensionalities of the configuration space are tested. We run 200 queries for each parameter set; each time, the algorithm runs for 10^6 iterations with an early-stop condition if the cost c_k satisfies c_k < 1.01 c_global. Although this analysis is limited to an illustrative example, the results can serve as tuning guidelines for the parameters R_0 and ν, as demonstrated in Section VI.

A. Narrow-passage example
We consider an n-dimensional configuration space containing a hollow hyper-spherindrical obstacle with length l_c = 1, external radius r_c2 = 1, and cavity radius r_c1. The cavity radius is such that the ratio between the volume of the cavity and that of the external cylinder is equal to 0.5 for all values of n. The starting and goal points are placed on opposite sides of the obstacle, so that the planning problem has one global optimum and one local optimum (see Figure 8 for n = 2).
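Under the assumption that the hyper-spherinder is the Cartesian product of an (n−1)-ball and an interval of length l_c (so that its volume scales with the radius raised to the power n−1), the cavity radius follows directly from the 0.5 volume ratio:

```python
def cavity_radius(r_ext, n, ratio=0.5):
    # Volume of a hyper-spherinder (product of an (n-1)-ball of radius r and an
    # interval) scales as r**(n-1); the shared length cancels in the ratio, so
    # (r_c1 / r_c2)**(n-1) = ratio gives the cavity radius r_c1.
    return r_ext * ratio ** (1.0 / (n - 1))
```

For n = 2 this gives a cavity radius equal to half the external radius; as n grows, the cavity radius approaches r_ext, since a thinner shell suffices to keep the volume ratio fixed in high dimensions.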

C. Effect of forgetting factor ν
The forgetting factor smooths the switching between the two sampling strategies by averaging out the cost changes over multiple iterations. Figure 9b shows the relation between the forgetting factor ν and the number of iterations required to reach c_k = 1.01 c_global (90th percentile); the tube radius R_0 was set equal to 0.02 according to Section A-B. If ν > 0.999, results do not vary significantly; however, selecting values of ν too close to 1 could lead the solver to get stuck in local minima for many iterations. In our experience, ν = 0.999 is a reasonable value for most cases.


Fig. 1: Examples of planning situations where local sampling is useful (a: large obstacle between start and goal), deleterious (b: narrow passage, current solution stuck in a local minimum), or unable to find the global optimum but useful to reduce the measure of the informed set (c: narrow passage, current solution far from local minima).

Fig. 2: Robot manipulator benchmark. n is the robot's number of degrees of freedom.

Fig. 6: Experimental setup. A Universal Robots UR10e mounted upside down works on the panel in front of it. (a) Actual setup; (b) first experiment: table-shaped obstacle upon the placing position; (c) second experiment: barrier with a narrow passage between picking and placing positions.

Fig. 8: Narrow-passage example of Section A-A for n = 2. The planning problem has one global optimum (green line) and one local optimum (yellow line). For the sake of readability, axis scales are not equal.