In this section, we run simulations over a real-life case using the assignment procedure by Alonso-Mora et al. (2017a). We first explain this procedure (“The assignment procedure” section), and then describe the real-life case (“The real-life study case” section) to present detailed results and sensitivity analyses (“Global performance of the anticipatory methods”–“Sensitivity analysis” sections).
The assignment procedure
The anticipatory ideas we propose in this paper are tested on top of the assignment procedure studied by Alonso-Mora et al. (2017a), which assigns passengers to vehicles. This procedure decides, every \(\delta\) (here we use \(\delta = 1\ \hbox {minute}\)), how to assign the requests that have emerged during that lapse of time: how to group them, which vehicle (which might already be serving previous passengers) to assign to each group, and in which order, i.e., \(\Psi =\{k \cdot \delta : k=1,...,\lceil PO/\delta \rceil \}\). The set of constraints \({\mathcal {C}}\) requires that the capacity of the vehicles is never exceeded, and deals with the quality of service, imposing that no served request can face a waiting time larger than some upper bound \(\Omega _w\), nor a total delay larger than \(\Omega _d\). We now explain how the assignments are decided in each iteration at time \(\tau _i\).
1. First, for each request \(r \in {\mathcal {R}}_{e,\tau _i}\) and vehicle v, it is analyzed whether v can feasibly serve r without violating the constraints \({\mathcal {C}}\). A heuristic might be applied after this step to reduce the number of feasible pairs (and thus, the computational time): for each request r, consider the set of all vehicles that can feasibly serve r, and discard the most costly ones.
2. Assuming that the feasible assignments between vehicles and trips of size j are known (the previous step explains the case \(j=1\)), study the feasible matches between vehicles and trips of size \(j+1\), based on the following fact: if an assignment between a trip and a vehicle is feasible, then the assignments between the same vehicle and all subsets of that trip must be feasible as well. When a feasible link between a vehicle and a trip is found, it includes how to update the vehicle’s route, which is done by minimizing the total additional costs (see Eq. 8 below for details), either through an exhaustive search or some insertion heuristic.
3. Build an ILP that decides which of the feasible trip–vehicle assignments take place, ensuring that each request is either assigned to a single vehicle or rejected, and that each vehicle is assigned to no more than one trip. The objective function includes the costs of the chosen assignments, plus a penalty \(p_{KO}\) for each rejected request (a minimal solver sketch is shown after this list).
4. A rebalancing step moves the idle vehicles, i.e., those that had no passengers before deciding the assignment and received none in step 3. They are moved towards the origins of the rejected requests through another ILP, minimizing the total distance driven by these vehicles and without sharing (i.e., no more than one rejected request can be assigned to each idle vehicle). Note that these vehicles are not actually meant to serve those rejected requests, so the feasibility constraints regarding waiting times, delay, and vehicle capacity are not included in this ILP.
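To make step 3 concrete, the following minimal sketch builds and solves the assignment ILP with the open-source PuLP library. The data layout and function names are illustrative assumptions of ours, not the original implementation.

```python
# Minimal sketch of the step-3 ILP (assumed data layout, not the authors' code).
# pip install pulp
import pulp

def assign_trips(feasible, requests, p_KO):
    """feasible: list of (vehicle, trip, cost), with trip a frozenset of requests."""
    prob = pulp.LpProblem("trip_vehicle_assignment", pulp.LpMinimize)
    x = {i: pulp.LpVariable(f"x_{i}", cat=pulp.LpBinary) for i in range(len(feasible))}
    y = {r: pulp.LpVariable(f"rej_{r}", cat=pulp.LpBinary) for r in requests}  # 1 if rejected
    # Objective: costs of the chosen assignments plus a penalty p_KO per rejection.
    prob += pulp.lpSum(c * x[i] for i, (_, _, c) in enumerate(feasible)) \
          + p_KO * pulp.lpSum(y.values())
    # Each request is assigned to a single vehicle (via one chosen trip) or rejected.
    for r in requests:
        covering = [x[i] for i, (_, trip, _) in enumerate(feasible) if r in trip]
        prob += pulp.lpSum(covering) + y[r] == 1
    # Each vehicle receives at most one trip.
    for v in {veh for veh, _, _ in feasible}:
        prob += pulp.lpSum(x[i] for i, (u, _, _) in enumerate(feasible) if u == v) <= 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [feasible[i] for i in x if x[i].value() > 0.5]

# Toy usage: two vehicles, three requests; sharing r1-r2 beats rejecting r2.
chosen = assign_trips(
    feasible=[("v1", frozenset({"r1", "r2"}), 5.0),
              ("v1", frozenset({"r1"}), 3.0),
              ("v2", frozenset({"r3"}), 2.0)],
    requests=["r1", "r2", "r3"], p_KO=100.0)
print(chosen)  # [('v1', frozenset({'r1', 'r2'}), 5.0), ('v2', frozenset({'r3'}), 2.0)]
```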
Some comments are noteworthy:
- In Alonso-Mora et al. (2017a), when the assignment is decided, those requests that have not been picked up before \(\delta\) elapses are kept for the next iteration, which allows the system to reassign them. That is, these requests might be served by a different vehicle, or can even become rejected, if doing so increases the total efficiency of the system. We introduce a slight modification in this paper: the first request that will be picked up (i.e., the next pick-up in the vehicle’s list) is not reassigned in the next iteration (a minimal sketch of this rule follows this list). This change is needed because when routing is modified through the anticipatory techniques, the time required to arrive at the first pick-up might increase, because the vehicle does not necessarily follow shortest paths; therefore, the number of accumulated requests for reassignment might increase, which has an exponential impact on the computational burden. Moreover, experiments show that this change reduces the number of rejections in the system.
- If a reassignment of an individual request takes place, Alonso-Mora et al. (2017a) forbid an increase in the achieved waiting time (compared to the one of the previous assignment), while Fielbaum et al. (2021) forbid rejections for a subset of the requests (those that are required to walk, an option that is not considered here). In this paper, requests that are being reassigned face the same constraints as new requests and can be rejected.
- Once a request becomes rejected, it is removed from the system, instead of being kept for reassignment as done in Alonso-Mora et al. (2017a).
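As an illustration of the modified reassignment rule, the following sketch (with assumed data structures of our own) builds the pool of requests for the next iteration, locking each vehicle’s next pick-up.

```python
# Hedged sketch of the modified reassignment rule: assigned-but-not-picked-up
# requests re-enter the pool, except each vehicle's next pick-up, which stays
# locked to that vehicle. Data structures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Stop:
    request: str
    is_pickup: bool

@dataclass
class Vehicle:
    route: list  # ordered list of Stop

def build_request_pool(new_requests, vehicles):
    pool = list(new_requests)  # requests that emerged during the last delta
    for v in vehicles:
        pending = [s.request for s in v.route if s.is_pickup]  # not yet on board
        pool.extend(pending[1:])  # skip the next pick-up: it is never reassigned
    return pool

veh = Vehicle(route=[Stop("r1", True), Stop("r2", True), Stop("r1", False)])
print(build_request_pool(["r9"], [veh]))  # ['r9', 'r2'] -- r1 stays locked to veh
```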
We use the following cost function when assigning trip T to vehicle v, if the route of the vehicle is updated from \(\pi _{v,\tau _i}\) to \(\pi\) (prior to introducing rewards):
$$\begin{aligned} c(v,T,\pi )&=\sum _{r \in T}{\left[ p_w t_w(r,\pi )+p_v D(r,\pi )\right] }\nonumber \\&\quad +\sum _{r_0 \in Req_{v,\tau _i}}{\left[ p_w \Delta t_w(r_0,\pi , \pi _{v,\tau _i})+ p_v \Delta D(r_0,\pi ,\pi _{v,\tau _i})\right] }\nonumber \\&\quad +p_O \left( L(\pi )-L(\pi _{v,\tau _i}) \right) \end{aligned}$$
(8)
The first two sums in Eq. (8) represent users’ costs: the first refers to the waiting time \(t_w\) and the detour D faced by the requests r in the new trip, and the second to the extra waiting time \(\Delta t_w\) and extra detour \(\Delta D\) imposed on the requests \(r_0\) that were already being served by vehicle v, due to the changes induced in its route. Parameters \(p_w\) and \(p_v\) are the costs of one time unit of waiting and of riding in the vehicle, respectively. The last term in Eq. (8) deals with the operator’s costs, which are proportional to the increase in the length of the route.
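For concreteness, the following minimal sketch evaluates Eq. (8) for a candidate assignment. The data layout (routes summarized as per-request waiting times and detours) is an assumption of ours, not the paper’s implementation.

```python
def assignment_cost(T, Req_v, pi, pi_old, p_w, p_v, p_O):
    """Eq. (8): cost of assigning trip T to vehicle v when its route changes
    from pi_old to pi. Routes are dicts (our assumption) mapping each request
    to its waiting time 'wait[r]' and detour 'detour[r]', plus total 'length'."""
    users_new = sum(p_w * pi["wait"][r] + p_v * pi["detour"][r] for r in T)
    users_onboard = sum(p_w * (pi["wait"][r0] - pi_old["wait"][r0])
                        + p_v * (pi["detour"][r0] - pi_old["detour"][r0])
                        for r0 in Req_v)
    operator = p_O * (pi["length"] - pi_old["length"])
    return users_new + users_onboard + operator

pi_old = {"wait": {"r0": 2.0}, "detour": {"r0": 1.0}, "length": 10.0}
pi_new = {"wait": {"r0": 2.0, "r1": 3.0},
          "detour": {"r0": 2.5, "r1": 0.5}, "length": 13.0}
print(assignment_cost(T=["r1"], Req_v=["r0"], pi=pi_new, pi_old=pi_old,
                      p_w=1.0, p_v=1.0, p_O=0.5))  # 3.5 + 1.5 + 1.5 = 6.5
```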
A detailed explanation of the assignment method per iteration, already affected by the anticipatory techniques, is shown in the pseudo-code of Algorithm 3, which introduces rewards; there, we say that \(c(v,T,\pi )=\infty\) if the assignment is infeasible. The algorithm including future requests is the same, but considering c instead of \(c_A\), and adding the future requests to R at the beginning of the algorithm.
It is worth recalling that the anticipatory methods we propose here do not require this specific assignment procedure to work. Different assignment methods and different cost functions may also incorporate the anticipatory techniques explained above.
The real-life study case
The proposed methods are tested over a publicly available dataset of real trips performed by taxis in Manhattan, New York, that started between 1 and 2 p.m. on January 15th, 2013. The total number of requests is 7,748, while 4,091 nodes and 9,452 edges form the city network. The numeric values of the chosen parameters are shown in the Appendix. Some conditions are modified to analyze the robustness of the system in the “Sensitivity analysis” section.
A fleet of 1,000 vehicles of capacity 3 was considered. This is a small fleet, unable to serve all the demand. Having a significant rejection rate enables us to analyze the impact of the methods on rejections in a crisp way, as well as how anticipatory techniques affect the trade-off between the number of served requests and the quality of service for those who are served. In the “Sensitivity analysis” section, we show that the main analyses and conclusions we obtain for this fleet remain valid when a larger fleet is used.
Let us first analyze the results obtained when only the methods that provide rewards at the routing stage are included (“Assignment introducing rewards: final node or first idle node” and “Assignment introducing rewards: comparison of the different rates” sections). We then study the effects of the artificial requests (“Assignment inserting future artificial requests: comparison of the different rates” section), we compare the methods (“Coupling and comparison of the methods” section), we study in detail the operational effects of the anticipatory techniques (“Detailed analysis of the impact over the system” section), and we finally provide a sensitivity analysis (“Sensitivity analysis” section).
Global performance of the anticipatory methods
Assignment introducing rewards: final node or first idle node
Recall that, as explained in the “Assignment introducing rewards” section, rewards depend on the generation rate of one of the nodes in the route that is being analyzed. Figure 6 compares the results obtained when using the final node of the route (“Last node” in the images) versus using the first stop of the route from which there is always idle capacity (“Idle node” in Figure 6), when using the basic rates, and considering three indicators of the quality of the system: average users’ costs (a dimensionless value calculated as the weighted average of waiting times, total delays and number of rejections, according to the same parameters used in the optimization, including the rejection penalty), percentage of requests being rejected, and VHT.
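As an illustration of the two policies, the sketch below selects the reward node of a route and applies the reward to the assignment cost. We assume here the simple form \(c_A = c - \Theta \cdot rate(node)\); the exact definition of \(c_A\) is the one given in the “Assignment introducing rewards” section.

```python
# Hedged sketch of the two node-selection policies compared in Figure 6, with a
# rewarded cost of the assumed form c_A = c - Theta * rate(node).
def reward_node(route, capacity, policy):
    """route: ordered stops, each a (node, passengers_on_board_after_stop) pair."""
    if policy == "last":
        return route[-1][0]                    # final node of the route
    if policy == "idle":
        for i, (node, _) in enumerate(route):  # first stop with idle capacity onward
            if all(onboard < capacity for _, onboard in route[i:]):
                return node
    return None

def rewarded_cost(cost, route, capacity, rate, theta, policy="last"):
    return cost - theta * rate(reward_node(route, capacity, policy))

route = [("a", 3), ("b", 2), ("c", 1), ("d", 0)]
print(reward_node(route, capacity=3, policy="idle"))  # 'b': idle capacity from there on
print(rewarded_cost(10.0, route, 3, rate=lambda n: {"d": 0.5}.get(n, 0.0),
                    theta=6))                          # 10 - 6 * 0.5 = 7.0
```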
Results in Figure 6 are unequivocal. When the reward depends on the idle node of the route, the impact is mild, but when it depends on the last node, it can be quite meaningful: compared to the case with no rewards (\(\Theta =0\)), the number of rejections can be reduced by a bit more than 10%, at the cost of increasing VHT by about 10%. Users’ costs also drop, but less significantly than rejections, meaning that delay and waiting times might increase. These changes will be studied in more detail in the following subsections that compare the different rates and methods. From now on, all the results are calculated with the rewards depending on the final node.
Assignment introducing rewards: comparison of the different rates
In the “Different definitions for generation and rejection rates” section, four definitions for the rates were provided: basic (B), smooth (S), calculated through particle filters (PF), and through historical data (H). The first three can be applied to either the generation or the rejection rates of each node (which, as just explained, is the final node of the route), whereas historical data only provides generation rates. Altogether, this yields seven ways to define the rates.
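For illustration, the following sketch implements minimal versions of the basic and smoothed per-node rates; the one-iteration window and the smoothing constant are assumptions of ours, and the particle-filter and historical variants would replace the update step.

```python
# Illustrative per-node rate estimators: the basic rate (Gen_B / Rej_B) counts
# events over the last iteration, the smooth rate (Gen_S / Rej_S) applies
# exponential smoothing. The particle-filter (PF) and historical (H) variants
# would substitute update() with their own estimates.
class RateEstimator:
    def __init__(self, nodes, alpha=0.3):  # alpha: assumed smoothing constant
        self.alpha = alpha
        self.basic = {n: 0.0 for n in nodes}
        self.smooth = {n: 0.0 for n in nodes}

    def update(self, events):
        """events: nodes where a generation (or rejection) occurred this iteration."""
        counts = {n: 0 for n in self.basic}
        for n in events:
            counts[n] += 1
        for n in self.basic:
            self.basic[n] = float(counts[n])
            self.smooth[n] = self.alpha * counts[n] + (1 - self.alpha) * self.smooth[n]

est = RateEstimator(nodes=["a", "b"])
est.update(["a", "a", "b"])
est.update(["a"])
print(est.basic["a"], round(est.smooth["a"], 2))  # 1.0 0.72
```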
The results obtained with each of these seven rates are shown in Figure 7. The following conclusions emerge:
- Some of the rates are much more effective in reducing the number of rejections and users’ costs than others. Two rates stand out as the best ones: \(Rej_{PF}\) and \(Gen_B\). While the former achieves the minimum values for the rejection rates and the users’ costs, the latter achieves slightly worse results in those measures, requiring a lower increase in VHT. It is worth noting that if \(\Theta\) is further increased, \(Gen_B\) no longer improves its results.
- The results of the best methods show that these rewards can be very fruitful. For instance, the number of rejected requests with \(Rej_{PF}\) drops from 3,025 to 2,631. Of course, whether this is good news depends on how opposing objectives are evaluated, as increasing VHT is unavoidable.
- The tuning parameter \(\Theta\) emerges as a crucial issue for these models. The best \(\Theta\) depends on which rate is being used. The good news is that for all of them, the range of values of \(\Theta\) that yields good results is wide, meaning that these methods are robust even if the optimal \(\Theta\) is not known.
- Rates that are based on rejections achieve, in general, better results concerning both users’ costs and VHT. The only exception is \(Gen_B\), which increases VHT much less than \(Rej_B\), with similar (although worse) results regarding users’ costs.
- Smoothed rates can improve the system if they are based on rejections.
- The rates based on historical data perform worse than the ones based on recent information. That is to say, our results highlight the potential of utilizing the information that is directly generated by the system.
To better understand the trade-off between the different user-related quality measures, Figure 8 shows the average waiting time and average detour (i.e., the difference between each user’s in-vehicle time and the time-length of the shortest path he/she could have followed), considering only the two rates that achieved the best results, \(Gen_B\) and \(Rej_{PF}\). Indeed, both quality measures get worse when rewards are applied. However, recalling that \(Gen_B\) achieves its minimum rejection rate at \(\Theta =6\) and \(Rej_{PF}\) at \(\Theta =2\), we can also conclude that there is some synergy between these objectives, as waiting times and detours do not increase that much for these optimal values of \(\Theta\).
As a synthesis, the introduction of rewards reduces the percentage of rejections of the system if the right generation or rejection rates are selected, at the cost of providing worse service for the users that are transported.
Assignment inserting future artificial requests: comparison of the different rates
The same four generation rates \(Gen_B, Gen_S, Gen_{PF}\) and \(Gen_H\) were used to determine the origins of m artificial requests. We take \(\phi =\delta\) (1 minute). Inserting future requests makes the algorithm much slower, as they can be combined with most of the current requests without violating the constraints regarding maximum waiting times and delay, which increases the number of feasible groups. To overcome this issue, we tighten the heuristic explained in step 1 of the assignment model (“The assignment procedure” section), i.e., we discard a larger number of feasible-but-costly vehicles when analyzing trips of size one.
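A minimal sketch of the insertion mechanism is shown below: origins are sampled in proportion to the chosen generation rates, and the artificial requests are stamped \(\phi\) ahead of the current time. The destination handling and the exact value of the reduced rejection penalty are assumptions here, not taken from the paper.

```python
# Hedged sketch of inserting m artificial future requests. Origins are sampled
# proportionally to the generation rates; the penalty_scale value is an assumed
# placeholder for the lower rejection penalty given to artificial requests.
import random

def artificial_requests(m, gen_rate, now, phi, penalty_scale=0.5):
    nodes = list(gen_rate)
    weights = [gen_rate[n] for n in nodes]
    reqs = []
    for _ in range(m):
        origin = random.choices(nodes, weights=weights, k=1)[0]
        reqs.append({"origin": origin, "t": now + phi,
                     "artificial": True, "penalty_scale": penalty_scale})
    return reqs

# Two artificial requests, phi = delta = 60 s ahead of t = 600 s.
print(artificial_requests(2, {"a": 3.0, "b": 1.0}, now=600, phi=60))
```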
Figures 9 and 10 show the results when inserting the artificial trips with each of the generation rates. The baseline (in black) corresponds to the results shown in the previous subsection for \(\Theta =0\) (i.e., without tightening the heuristic that discards vehicles), whereas the results for \(\Gamma =0\) include the change in the heuristic and are equivalent to having no artificial requests. Comparing the results against \(\Gamma =0\) shows the direct effect of introducing these artificial requests. However, the most relevant comparison is against the baseline, because it is what is achieved if no artificial requests are added. Note that \(\Gamma =0\) (i.e., when the heuristic is tightened) yields more rejections than the baseline, but better results in the other indices (much lower waiting times and only slightly larger detours), which is a natural consequence of removing the most costly vehicles for each request: each request has fewer options to be served, but the options that remain are less costly, i.e., provide a better quality of service (recall that the cost is defined as the sum of users’ costs and operator’s costs).
In general, all the rates are able to reduce waiting times and detours significantly. The percentage of rejections, on the other hand, is always higher than in the baseline: when compared with \(\Gamma =0\), rejections are sometimes higher and sometimes lower, but the changes are always minor (these results are similar to the ones obtained by Alonso-Mora et al. (2017b)).
These results can be synthesized by stating that inserting artificial requests by itself improves the quality of service for those users that are transported, but the induced increase in the computational time requires using heuristics that might increase the number of rejections. The interpretation is as follows: artificial requests effectively push the vehicles towards the origins of future requests. However, when they are inserted, they compete with current requests for the same vehicles, so that the system will sometimes prioritize serving the artificial ones despite their lower rejection penalty. VHT always increases.
The method that achieves the best results is \(Gen_B\): it presents the largest reduction in the rejection rate with \(\Gamma =\frac{1}{80}\), and in detours for \(\Gamma =\frac{1}{60}\). Reductions in waiting times are similar for all the generation rates, except for \(Gen_H\) when \(\Gamma =\frac{1}{40}\). Results obtained by \(Gen_S\) and \(Gen_{PF}\) are almost identical. The fact that \(Gen_H\) might yield the worst results if \(\Gamma\) is not properly selected (i.e., it is a less robust method) reinforces the conclusion that the direct use of past information can be an unfruitful idea for these transportation systems.
In all, the insertion of future requests achieves reductions precisely where rewards fail: waiting times and detours. This happens because both methods move the vehicles towards high-demand zones, but inserting future requests might lead to rejecting some real current ones, so that the gains in efficiency translate into lower waiting times and delays for the requests that emerge afterward.
Coupling and comparison of the methods
The different results obtained by the two methods proposed in this paper, introducing rewards and inserting artificial requests, might suggest that using both simultaneously is promising, as they achieve good performance in complementary indices. However, our simulations show the opposite. As they both push the system in the same direction, the results are much worse when they are used together: rejection rates jump from less than 40% in the baseline scenario to more than 50%. This bad news can be explained: rewards move the vehicles towards the same zones in which the future requests are inserted. If a vehicle has to choose between waiting for a future artificial request or serving a current real one, it might prioritize the future one despite its lower rejection penalty, as being in a close position makes the cost of serving it very low.
Therefore, an operator should decide between using only one of the two methods. A detailed comparison between them is offered in Figures 11-12, in which we show the results when running the methods for the real requests from ten consecutive weekdays, always between 1 and 2 p.m. We compare the baseline (no anticipatory techniques) and the best versions of each of the two methods: \(Gen_B\) with \(\Theta =6\) when introducing rewards, and \(Gen_B\) with \(\Gamma =\frac{1}{80}\) when inserting future artificial requests.
Figures 11-12 synthesize the most relevant conclusions of the simulations. Both methods effectively impact the performance of the system, achieving different goals: whereas introducing rewards decreases the rejection rate of the system, including artificial future requests improves the quality of service for those requests that get served, and both methods require an increase in VHT. Moreover, these conclusions are robust, as they are shown to be valid for each of the ten days.
In general, introducing rewards is better for most situations, as serving more passengers is usually the most relevant purpose, all the more so recalling that waiting times and delay are always bounded. This method also has the virtue of not increasing the computational times at all. However, if the operator is most interested in improving the service for those passengers that are being served, inserting artificial requests is the best option.
Detailed analysis of the impact over the system
So far, we have analyzed the methods in terms of the most relevant indices of the system. However, when we introduced the need for this type of method, we justified it by analyzing the spatial heterogeneity of the results and the influence of deciding without knowing the demand beforehand. Therefore, we now turn to analyze how the operation of the system is modified. We focus on the method that introduces rewards, as it yields the best results, considering the rates \(Gen_B\) and \(Rej_{PF}\) that proved strongest.
Impact over the temporal evolution of the system
We first study how the number of rejections evolves in time, compared to the case with no anticipatory methods (\(\Theta =0\)). We know that \(Gen_B\) and \(Rej_{PF}\) yield fewer rejections in total, but when does this happen? Figure 13 shows how many rejections are saved thanks to introducing rewards. This is done by depicting the difference between the accumulated rejections with no rewards and the accumulated rejections with rewards, for both rates: as before, the solid blue curve represents \(Gen_B\), and the dotted green curve represents \(Rej_{PF}\).
Both curves begin quite flat and even take negative values, meaning that in the first iterations, rewards worsen the system’s quality. The \(Rej_{PF}\) curve rapidly starts to increase (i.e., to accumulate fewer rejections than the method with no anticipation), whereas \(Gen_B\) requires almost half an hour to do so, which verifies that the central motivation of these methods is achieved: modifying current decisions to be better prepared for future requests. Note that, after about 40 minutes of operation, both methods reach an almost-linear increase, i.e., they keep saving rejections at a rate that remains roughly constant.
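A curve like the one in Figure 13 can be computed from per-iteration rejection counts as follows (toy numbers of ours, not the actual simulation output):

```python
# saved(t) = cumulative rejections without rewards - cumulative rejections with rewards.
import numpy as np

rej_base = np.array([5, 6, 4, 7, 5, 6])    # no anticipation (toy data)
rej_reward = np.array([6, 6, 5, 5, 4, 4])  # with rewards (toy data)
saved = np.cumsum(rej_base) - np.cumsum(rej_reward)
print(saved)  # [-1 -1 -2  0  1  3]: negative at first, then increasingly positive
```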
Impact over the spatial mismatch between vehicles and requests
We now analyze how the operation changes in space. We have noticed before that the most demanded zones were receiving a worse quality of service, so that more vehicles seemed to be required there. To analyze the changes, Figure 14 shows, at the end of the hour that was modeled, the differences in the vehicles’ positions between having no anticipatory methods and \(Gen_B\) (left) or \(Rej_{PF}\) (right). We partition the whole map into the same zones used for the methods with particle filters and historical data, and each vehicle is assigned to the zone corresponding to its closest node. A red sector means that there were more vehicles assigned with rewards, whereas blue means the opposite: the more intense the color, the higher the difference.
Both figures show almost only blue zones in the north of the network. In the center, intense red zones clearly dominate for \(Rej_{PF}\), and less clearly for \(Gen_B\). That is to say, rewards are indeed moving some vehicles from the north of the network (a low-demand area) to the center. It is worth noting that \(Gen_B\) makes no noticeable difference in the southwest (at the bottom of the network), while \(Rej_{PF}\) increases the number of vehicles there.
A similar analysis can be done to see where the rejections are concentrated. In Figure 15, each zone is an origin, colored blue if the percentage of rejected requests emerging there was higher with no rewards than with \(Gen_B\) (left) or \(Rej_{PF}\) (right), and red in the opposite case. Again, the more intense the color, the higher the difference. The reduction of the rejection rates in the central zones is apparent. Notably, we see that the overall reduction is achieved by increasing the number of rejections in some zones, mostly in the south and north of the network.
When describing methods that insert rewards, a possible drawback was identified: as there is no global control of the effects introduced by the rewards, many vehicles could be sent towards the same places, which could yield a worse mismatch between vehicles and requests than with no anticipatory methods. To analyze whether this is the case, we create the following index per zone z, also evaluated after 60 minutes. For each zone z, define \(v_z\) as the proportion of the vehicles inside z, and \(r_z\) as the proportion of current requests departing from z. A perfectly balanced system would require that \(v_z \approx r_z\) for every z. Therefore, a measure for the level of mismatch at z is:
$$\begin{aligned} M_z=|v_z-r_z| \end{aligned}$$
(9)
We use these absolute values rather than normalized ones so that high-demand zones weigh more when calculating average values of \(M_z\).
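A minimal sketch of how Eq. (9) and its summary statistics can be computed, assuming vehicles and requests have already been mapped to their zones (the zone assignment itself uses each agent’s closest node):

```python
# Mismatch index of Eq. (9): M_z = |v_z - r_z| per zone, plus mean and median.
from collections import Counter
from statistics import mean, median

def mismatch(vehicle_zones, request_zones, zones):
    v_cnt, r_cnt = Counter(vehicle_zones), Counter(request_zones)
    nv, nr = len(vehicle_zones), len(request_zones)
    return {z: abs(v_cnt[z] / nv - r_cnt[z] / nr) for z in zones}

M = mismatch(vehicle_zones=["z1", "z1", "z2"],
             request_zones=["z1", "z2", "z2", "z3"],
             zones=["z1", "z2", "z3"])
print({z: round(m, 3) for z, m in M.items()})          # {'z1': 0.417, 'z2': 0.167, 'z3': 0.25}
print(round(mean(M.values()), 3), median(M.values()))  # 0.278 0.25
```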
We now study how these values \(M_z\) change when anticipatory methods are used. We use \(Rej_{PF}\) for the comparison because it presents the largest impact on the system. A first analysis can be done by looking at the mean and median values of \(M_z\), shown in Table 2. As is apparent, the level of mismatch is reduced when using the anticipatory scheme. Note that the absolute values of \(M_z\) depend on many different structural aspects of the problem (such as fleet size, number of shareable requests, and distances), so the relevant insight we obtain from Table 2 is how the numbers change when the anticipatory method is included. It is also noteworthy that \(M_z\) might not take other relevant features into account (for instance, how shareable the requests emerging from each zone are), so the analysis is meaningful at a global scale rather than for specific conclusions.
Table 2 Comparison of the mean and median of the level of mismatch per zone, with and without anticipatory methods

Conclusions from Table 2 are reinforced by looking at Figure 16, in which we show the level of mismatch at each zone, without anticipatory methods (left) and using the rewards technique with \(Rej_{PF}\) (center). Introducing the rewards produces a figure with more dark zones, i.e., more zones with a low level of mismatch. The difference is mostly achieved in the north of the network. Recalling that the rewards remove vehicles from that area, we conclude that an oversupply is now prevented there.
In all, this subsection allows us to synthesize the impact of the rewards over the ridepooling system: using anticipatory rewards improves the system only after some initial lapse of time; the improvement is achieved by moving vehicles towards the high-demand zones, which decreases the quality of service in the low-demand zones but achieves better overall results. Moving these vehicles decreases the mismatch between vehicles and requests.
Sensitivity analysis
So far, we have shown results for a particular network, a fixed demand, and a given fleet. We now show that the proposed techniques are robust, i.e., that they also work properly under different conditions, although results are not exactly equal. In particular, the optimal values for \(\Theta\) and \(\Gamma\) are highly sensitive to the specific conditions.
Two alternative scenarios will be used for the sensitivity analysis. In the first one, we serve the same demand over the same network, but using a fleet of 2,000 vehicles of capacity 4, to study whether the methods remain effective when the number of rejections is lower.
The second scenario uses data from Didi Udian, a real ridepooling company in Shenzhen, China. As on-demand ridepooling is an emerging mobility system, the scale is much smaller: 311 requests during 12 hours. The simulation considers a fleet of 10 vehicles of capacity 5. The network is formed by 5,502 arcs and 2,201 nodes, clustered into 96 zones. A heatmap exhibiting the location of the origins is shown in Figure 17, revealing that this case is of smaller scale and presents a different demand structure, with two separated high-demand zones.
Sensitivity of the method that introduces rewards
Let us begin by analyzing the method that introduces rewards. We focus on the two rates that presented the best results in the previous sections: \(Gen_B\) and \(Rej_{PF}\). Figure 18 shows the results in the same Manhattan case with a larger fleet of vehicles with a greater capacity. The most relevant conclusions are:
- Introducing rewards can reduce the number of rejections even when this number is already low. The relative decrease is similar (about 10% of the original rejections), which means that the absolute decrease is less significant.
- The best value for the parameter \(\Theta\), however, changes substantially. In the original scenario \(Gen_B\) should select \(\Theta =6\) and \(Rej_{PF}\) should select \(\Theta =2\), whereas now these numbers become \(\Theta =2\) and \(\Theta =2/3\), respectively. Using a smaller \(\Theta\) means that the rewards weigh less in the objective function, which is reasonable, as more vehicles can provide better results at the current time. A relevant question for future research emerges: how to determine the optimal \(\Theta\) according to the external and internal conditions of the system. Note that an erroneous selection of \(\Theta\) can lead to very poor results, as shown by the \(Rej_{PF}\) curve when \(\Theta \ge 2\).
- \(Gen_B\) presents much better results than \(Rej_{PF}\) in this context, as it increases VHT just mildly.
Shenzhen’s case presents several differences. The low number of requests and vehicles allows us to analyze whether the introducing-rewards method depends on scale effects to be effective. Moreover, as we are now simulating 12 hours of operation, the long-run effects of this technique can be studied.
The small scale permits describing the results in a very synthetic way, and they are identical for \(Gen_B\) and \(Rej_{PF}\): the rejection rate drops from 40.8% to 37.3% when \(\Theta\) reaches 6 (there is no effect for lower \(\Theta\)), with VHT increasing from 26.9 to 27.6. That is, the results show that the methodology is sound, reducing rejections in this scenario as well.
Sensitivity of the method that includes future artificial requests
To analyze the robustness of this method, we consider the same two scenarios. We use the basic generation rate because it has presented the best results so far. When applied over Manhattan with a greater fleet of larger vehicles, results are exhibited in Figures 19 and 20. Results are consistent with the base scenario: this method allows for a reduction in waiting times and detours at the cost of increasing the number of rejections and VHT. However, the relative losses regarding rejection rates are worse here than in the base scenario, while the gains are similar, i.e., this method is less effective when using a larger fleet.
However, when solving the Shenzhen case (Figures 21 and 22), results are inverted compared with the base scenario: the rejection rate drops from 40.8% to 27.7% in the most extreme case (\(\Gamma =\frac{1}{60}\)), while average waiting time increases from 3 to 3.29 minutes, average detours from 0.45 to 0.8 minutes, and VHT from 26.9 to 36.1.
This situation might look paradoxical but can be explained. Let us first note that in this case, we only insert one future request per iteration (\(m=1\)) to preserve a proportion between real and artificial requests that is somewhat similar to the one used in Manhattan (where total future requests represent between a third and a half of total real requests). Nevertheless, the low number of real requests in Shenzhen implies that several iterations have none, meaning that the artificial requests are not competing with the real ones on those occasions; recall that such competition was identified as the explanation of why the rejection rate could increase with this anticipatory technique. Moreover, no heuristic is needed in this case, which was the other crucial factor explaining the increase in the rejection rate for the basic scenario.
In conclusion, we can state that this method outperforms the one that introduces rewards when the scale of the problem is small. Moreover, in the small-scale case, the computational time (which was said to be a relevant drawback of this method) is not an issue at all.