1 Introduction

Over the years, the European Union (EU) has addressed the equality of access to technological benefits among its citizens, where social welfare has been emphasised in the sense that no EU citizen should be left behind from a communications perspective. Previous regulatory frameworks granted citizens in unprofitable areas (typically rural) access only to basic services in the framework of universal service obligations (e.g. narrowband Internet access and fixed telephone services). However, in 2016, the European Commission proposed a new regulatory framework for the telecom sector, the European Electronic Communications Code (EECC), which was adopted in late 2018 by the Council and the European Parliament and should have been transposed into national law by the member states by December 21, 2020, at the latest (by 2025, the Directive and universal service are to be evaluated and reviewed).Footnote 1 The objective is to guarantee that most citizens will have access to very fast Internet connections with a download capacity of 100 Mbps, regardless of where they live (European Parliament 2018). The EECC thus aims to reduce or eliminate the digital divideFootnote 2 by setting in motion the European Commission’s (EC) ambitious goal of providing 100 Mbps broadband mass coverage also in areas or regions of the Union in which operators do not have commercial incentives for deploying networks capable of such a throughput. Where the market does not deliver, governments will therefore be responsible for using public funds to support investment in these areas, while incentivising economic efficiency through the enforcement of a set of rules and policies. Governments can use financial instruments such as those available under the European Fund for Strategic Investments and the Connecting Europe Facility, as well as public funding from the European structural and investment funds (recital 229 of the EECC). Governments should use EU funding alongside national public budgets to fully fund the deployment of the infrastructure, even when the deployment begins before the end of 2025.

During this process, public decision-makers are expected to launch public tenders to select one network operator per region, which is then free to offer any technology as long as it is capable of delivering a 100 Mbps connection for that particular area or region. The subsidised monopolistic operator must operate under regulated tariffs, or under tariffs agreed with the service providers, with the tariffs being set by the government if no agreement is reached. In this case, governments or regulators should announce the regulatory rule to be used in the future to set the prices of access to the subsidised networks, cf. (Araújo et al. 2018b).Footnote 3

Note that, despite the network costs being publicly subsidised, the optimal network technology for the tender is not necessarily the least costly solution for the government. It also depends on the relevant concept of public interest that is formed during the decision-making process. For instance, if it is considered to be in the public interest that the technology should be as “future-proof” as possible, to avoid the hypothetical necessity of replacing recently built networks in the near future, the solution should have a quality criterion alongside a cost criterion. The credibility of the candidate might be assessed as well (this includes, for example, concerns regarding the quality and track record of the candidate). There is also a strong case for a democratic approach to the issues discussed. There are information asymmetries and problems of collective action, and the public decision-makers should follow a process of negotiation with local interest groups, making it clear how their contributions would influence the decision (e.g. the weight given to the position of a given industry might be proportional to its share of local employment). In any case, in the legal framework of EU telecom regulation, the participation of all stakeholders is required for most decisions involving electronic communications services. Any measures using state aid to promote broadband development in specific areas should be adequately publicised and the stakeholders invited to comment.Footnote 4 In general, national authorities should ensure that interested parties are given the opportunity to comment on draft measures in electronic communications markets within a reasonable period, considering the complexity of the issues.Footnote 5 Furthermore, a consultation mechanism involving consumers, manufacturers, and service providers should be implemented by national authorities to ensure that due consideration is given to consumer interests.Footnote 6 The objective is to ensure that the process of forming the public interest is reasonably transparent. That is why open decision models may contribute to the protection of the public interest and to transparency in decision-making.

Since the European framework has not yet been implemented, we only include a fictitious example to demonstrate the model. Three main criteria are used for the simulations presented in this paper: net costs of the solution, technical quality of the solution, and credibility of the candidate, where each of these criteria might then have various sub-criteria. We argue that these are natural to include in a basic document to be presented to the stakeholders, either early in the decision-making process or later in the design of draft decision documents subject to public consultations. Thereafter, having selected the public interest criteria in a process involving local interest groups, there should be negotiations with the potential service providers, which we formally account for by introducing a method from option theory in order to balance the need for rapid deployment while still allowing for technical advances and actual testing.

In the next section, we provide the research context followed by some basic real options terminology in Sect. 3. Section 4 discusses the components involved in the multi-criteria decision-analytic model for evaluating the candidates and their relationships and the results are illustrated in Sect. 5. Finally, Sect. 6 provides concluding remarks and suggests some directions for future research.

2 Research context

The problem in this paper is thus a multi-criteria problem involving stakeholders, potentially with conflicting objectives. Multi-criteria models have, not surprisingly, been applied to various aspects of telecommunications, cf. (Clímaco and Craveirinha 2019). More particularly, there are some previous applications aimed at the deployment of telecommunications infrastructure in rural areas. AHP (the Analytic Hierarchy Process), including its variant ANP (the Analytic Network Process), in which hierarchies are replaced by networks enabling the modelling of feedback loops, is a popular method for decision analysis of rural telecommunications deployment. (Andrew et al. 2005) proposed an AHP-based method (Saaty 1980) for the selection of communication technologies for rural areas, considering uncertainty to some extent. Likewise, AHP has been used to consider cost and network quality indicators (e.g. connection speed) by, e.g., (Sasidhar and Min 2005) and (Nepal 2005). Applications of ANP can be found in (Gasiea et al. 2009, 2010). The AHP method has well-known problems with rank reversal (Bana e Costa and Vansnick 2008; Dyer 1990; Belton and Gear 1983; Whitaker 2007). Theoretically, this can be avoided by including all possible technologies and criteria at the beginning of the AHP exercise, and not adding or removing technologies afterwards. However, for our practical application, this is not feasible since the public tenders can start any time up to 2025, and by then new technologies and alternatives may have appeared; the decision model must therefore be flexible enough to add or remove technologies as time moves forward. In any case, as demonstrated in (Danielson and Ekenberg 2016), the CAR method that we use here should generally be a preferred option. There are also other pairwise comparison methods, such as MACBETH (Bana e Costa and Vansnick 1994), that express preference strengths on a semantic scale for value increments.

Apart from the frequently used AHP family, there is a manifold of possible candidates for procurement situations in the wide and ever-expanding field of MCDA/MCDM, and we have benchmarked a number of them in, for instance, (Danielson and Ekenberg 2014, 2015, 2016, 2017a, 2017b). Methods based on Multi-attribute Value Theory (MAVT) and Multi-attribute Utility Theory (MAUT), cf., e.g., (Keeney and Raiffa 1976), are commonly used for a wide variety of applications. In these methods, the relative importance of each criterion is assessed as well as the value functions over the alternatives under the respective criteria, after which the overall values of the alternatives are calculated. There are also other, for our purpose less suitable, families of methods, including outranking methods, of which the most widely used are ELECTRE (Roy 1996) and PROMETHEE (Brans et al. 1986). There are further methods based on distances to ideal points, such as (Malczewski 1999), and so on.Footnote 7 In this paper, we need to combine quantitative and qualitative information in the evaluations under uncertainty, including interval estimates and relations within the same framework, integrated with an option theory component, rendering the more classical methods not entirely suitable for our purpose. Independent of the approach chosen, a problem is that complete information about the situation to be analysed is unavailable. Regarding the representation of this situation, there have been many suggestions on how to relax the strong requirements on decision-makers when it comes to the provision of precise information. Some approaches are based on capacities, sets of probability measures, upper and lower probabilities, interval probabilities (and utilities), evidence and possibility theories, as well as fuzzy measures (see, for example, (Danielson and Ekenberg 2007; Dubois 2010; Dutta 2018; Rohmer and Baudrit 2010; Shapiro and Koissi 2015)). Most of them are, however, more focused on representation than on the (potentially significant) computational aspects of the evaluations. In the model proposed here, for the evaluation parts, we will use a MAVT-type approach from (Danielson et al. 2021), in which we combine uncertain weights, probabilities, and values in an integrated framework. By doing so, we can take all relevant criteria and alternatives as well as quantitative data and qualitative preferences into consideration, while still being able to model and analyse the uncertainties involved. The theory has been implemented in the software tool DecideIT, which was specifically developed for analysing problems with underlying imprecise background information, cf. (Danielson et al. 2019). We will also augment this method with a module for integrating real options as a conflict resolution instrument in the dialogues between service providers and the government.

3 Real Options in Procurement

In many cases, large-scale procurements should include a negotiation component to make the process more flexible and efficient. In our case, we allow for this by including a possibility to delay the full-scale implementation. In particular, since the bidding Internet providers retain the possibility of abstaining, an option-theoretical analysis could be part of the input for governments when estimating the rationale behind a suggested solution as well as the plausibility that the suggested solution will finally be realised. From the government perspective, the commencement of commercial operations, as well as the possibility of realising the project, should be important criteria. A pure discounted cash flow analysis from a potential provider would have a serious flaw in that the investment projects would be assumed to be riskless. To avoid neglecting the project risks in the bids, we suggest requiring an account of the bidders’ underlying financial analysis.

To attract a larger pool of candidates in a public tender and increase the possibility of an efficient solution, the government could offer the bidders an option to defer the investment decision for a period of time for a fee, i.e. a possibility of waiting for up to one year to decide whether to start the project. This introduces a challenge-driven component in the procurement process, in which the bidders can purchase the right to look for another solution that fits the tender call even better (and thus delivers an overall more suitable solution). For the candidate, one reason for waiting might, for instance, be a belief that OPEX and CAPEX costs are going to decrease in the near future, which would allow higher returns in a shorter time if the start of the project is delayed.Footnote 8 Nevertheless, the government’s interest is still that the project starts as soon as possible, meaning that, of two competing candidates in the public tender, the one who starts commercial operations at the first possible instance has a better value under this sub-criterion. Another issue is stipulating the fee that should be paid for this option. Here, a theory of real options could be a component.

A real option is the right, but not the obligation, to take action concerning an investment project (e.g., deferring, expanding, contracting, or abandoning) at a predetermined cost, for a predetermined period of time (Copeland and Antikarov 2003). It can be seen as a generalisation of a financial call option, in which the option holder has a choice between making an investment upfront (at time t0) or deferring the investment until a time t1 in the future, when more information is available and thus some of the uncertainties connected to the investment have been reduced or eliminated. Since the call option (and all of options theory) originates from the financial options markets, the mapping of preconditions can be more or less complicated. While some researchers have mapped the value of a real option onto financial options theory (i.e. contingent claims analysis, see, e.g., (Trigeorgis 1993)), others have made a case for using decision analysis when the mapping is rough to the point of obfuscating the original preconditions for the solution to the option pricing problem. In our case of telecommunications investment projects, there are quite a few assumptions of the financial option that render an option analytic approach problematic, not least the fact that many of the parameters are hard to determine with accuracy and that the underlying assumption of a random process is hard to justify. Researchers such as (Smith and Nau 1995) argue for the use of probabilistic decision analysis, wherein a real option can be valued in cases where the generalisation and mapping become complicated.

The options analysis can be used either by the government to set a price for the call option, or by a bidding company as an instrument to assess whether they should buy the option. Whether the company buys the option is given a weight in the decision model. This will be further discussed in the next section. Since the option by itself can leave the government without a supplier if the option holder decides not to start the project, this factor must be offset. Thus, for each rural area contract, an option will only be issued if there is a second bidder (that placed slightly lower in the assessment procedure) that is willing to build the network (fulfil the contract) at that later time, should the situation occur. This second party will then, as an incentive to enter into this backup agreement, be awarded the fee paid by the first contractor for its option.Footnote 9

4 The Evaluation Process, Model, and Criteria Structure

We will now discuss the decision structure and suggest how to assign weights to some relevant criteria as well as how to evaluate potential candidate infrastructure providers under the criteria. Thereafter, we evaluate the entire decision problem.

Figure 1 shows the overall structure of the decision problem that we are considering, modelled in a criteria hierarchy including the main criteria and the sub-criteria of relevance. The service providers are evaluated under each of the sub-criteria, and thereafter the entire governmental decision problem is evaluated. The details regarding the criteria assessments and the valuation of the providers will be further explained in the forthcoming sections.

Fig. 1 The overall multi-criteria decision structure

Note that Fig. 1 – with only these three criteria – is merely illustrative. As stated in Sect. 1, an a priori negotiation process with local interest groups takes place with the purpose of i) defining additional criteria (and possibly sub-criteria) and ii) defining the importance rank of each criterion. As an example, one of these interest groups could be the local municipality, which might suggest adding a sub-criterion called Jobs to the Delivery criterion. This Jobs sub-criterion would have a certain value assigned to it that would vary according to the number of permanent jobs that the network solution from the candidate would bring to the local county. The rank of this sub-criterion (relative to the Time, Tech, and Finance sub-criteria) would be assigned through a group negotiation process.

4.1 Process

A multi-stakeholder decision is generally complicated to manage in a purely formal way, and eliciting stakeholder values is likewise filled with complications, not least when it comes to more technically elaborate issues. The overall procedure can be briefly described step by step as follows:

  1. Determine the adequate group of stakeholders in the decision process;

  2. Provide an overview of the situation, identifying options for relevant criteria and sub-criteria for the stakeholders;

  3. Discuss the criteria and sub-criteria with the stakeholders in various formats (questionnaires, interviews, workshops, etc.);

  4. Collect stakeholder feedback on the criteria and sub-criteria;

  5. Having defined the criteria and sub-criteria, discuss the ranking of each criterion and sub-criterion with the stakeholders;

  6. Collect stakeholder feedback on the rankings;

  7. Rank the criteria and sub-criteria;

  8. Value the providers under the respective criteria, i.e.,

     a. estimate the deployment costs for each applicable technology;

     b. obtain the key performance indicators of each technology, e.g., speed, latency, jitter, and packet loss, and define value functions;

  9. Calculate the overall values based on the criteria and sub-criteria weights and values.

Steps 1 to 7 correspond to the definition of the relevant concept of public interest to be pursued in a project. They set constraints on public decision makers’ discretion on the definition of the public interest.Footnote 10 The objective is to make sure that the public interest is captured from an aggregation of stakeholders’ preferences on criteria and weights. A well-known problem in this process is to define the aggregation method.

For instance, a democratic approach, meaning citizens voting on criteria and rankings, would probably have serious shortcomings related to the lack of voter information (public sessions to inform citizens about technology are not likely to attract widespread participation) and to the design of the voters’ options. Another idea would be to follow a utilitarian approach and select criteria and rankings based on the willingness to pay of residential and business consumers. This might be in the spirit of the EECC if we read it as implying that lack of universal access is a market failure. Calculating the willingness to pay might be feasible using discrete choice experiments (McFadden 2015). However, the outcome might again be jeopardised by residential and business users’ imperfect information on the services to be provided. Any need to adjust the estimated willingness to pay for the impact of the income distribution would add a further layer of public decision-makers’ discretion to the final decision.

A third approach might be to understand this as a problem of negotiation between different interest groups and undertake substantive consultations with these interest groups. The basic point here would be to ensure that the problems of collective action are solved for all interest groups and to engage them in the process. This is usually associated with “good” regulation (Baldwin et al. 2012). The focus of the public decision-makers is on the participation of all stakeholders and not on the final outcomes – these are endogenous to the regulatory process. An identification of all interest groups and of the weights to be given to their preferences in the final decision is required. This approach is consistent with public interest theories of regulation based on a procedural political approach. The objective is to ensure that a dialogue takes place between different stakeholders about the desirability of a given outcome (Prosser 1989; Morgan and Yeung 2007).

The method we suggest supports the various phases of this process, but the focus in this paper is on the negotiation process between the service providers and the authorities responsible for the selection, by introducing an option theory model in a multi-criteria framework. Conflicts of a fundamental nature may, in this case, appear between the providers and the authorities and, if so, the option instrument can be used to trade off some disagreements. A general process for the actual stakeholder consultation part in an interactive format has been employed in, e.g., (Komendantova et al. 2018), where it turned out to be quite useful for focusing the discussion, not least regarding contested issues. Furthermore, the use of deliberate imprecision in the form of intervals provides a possibility to model preference sets. The model suggested in this paper is complementary to the general process suggested there.

For qualitative assessments, we will use rankings and utilise the CAR method for providing surrogate weights and values. The latter is discussed and benchmarked against other candidates in (Danielson and Ekenberg 2016), where it was found to be more versatile than the alternatives. Assuming an ordering of N criteria, we use \(>_i\) to express the strength in the rankings between criteria and measures, where \(>_1\) is the usual ordinal ranking \(>\). For instance, in a criteria ranking, we get a user-defined ordering \(w_{1} >_{i_1} w_{2} >_{i_2} \dots >_{i_{n-1}} w_{n}\). This is transformed into an ordering containing the symbols = and > by introducing auxiliary variables \(x_{k(i)}\):

$$ w_{k} >_{0} w_{k+1} \;\text{ is }\; w_{k} = w_{k+1} $$
$$ w_{k} >_{1} w_{k+1} \;\text{ is }\; w_{k} > w_{k+1} $$
$$ w_{k} >_{2} w_{k+1} \;\text{ is }\; w_{k} > x_{k(1)} > w_{k+1} $$
$$ w_{k} >_{i} w_{k+1} \;\text{ is }\; w_{k} > x_{k(1)} > \ldots > x_{k(i-1)} > w_{k+1} $$

This defines a new Euclidean space consisting of the simplexes constrained by the new orderings, and we obtain a computationally meaningful representation of the strengths. The number transformation of the criteria ranking is now given by assigning a number to each position in the complete ordering, starting with the most important position as number 1. Each criterion i then gets the position p(i) ∈ {1,…,Q}, where Q is the total number of positions. For every two adjacent criteria \(c_{i}\) and \(c_{i+1}\), we then have \(c_{i} >_{s_i} c_{i+1}\), where s_i = |p(i + 1) – p(i)|. Position p(i) thus represents the relative criteria importance from the stakeholder consultation process. The weights are then obtained by

$$ w_{i}^{CSR} = \frac{\frac{1}{p(i)} + \frac{Q + 1 - p(i)}{Q}}{\sum\nolimits_{j = 1}^{N} \left( \frac{1}{p(j)} + \frac{Q + 1 - p(j)}{Q} \right)} $$
(1)
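To make (1) concrete, the following minimal Python sketch (our own illustration, not code from the DecideIT tool) computes the surrogate weights from ranking positions; the example call anticipates the Delivery sub-criteria of Sect. 4.3, where N = 3 criteria occupy positions 1, 3 and 4 on a scale of Q = 4 positions.

    def csr_weights(positions, Q):
        """Surrogate weights from Eq. (1); position 1 is the most important."""
        raw = [1.0 / p + (Q + 1 - p) / Q for p in positions]
        total = sum(raw)
        return [r / total for r in raw]

    # Delivery sub-criteria of Sect. 4.3: time delivery (p = 1),
    # technical capability (p = 3), financial situation (p = 4), with Q = 4:
    print(csr_weights([1, 3, 4], Q=4))   # -> [0.600, 0.250, 0.150]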

The transformation of the value orderings of the alternatives is analogous. In summary, the process is then simple:

  1. For each criterion in turn, rank the alternatives from the worst to the best outcome. The strength is expressed in the notation with ‘\(>_i\)’ symbols.

  2. Rank the importance of the criteria from the least to the most important. The strength is again expressed with ‘\(>_i\)’ symbols.

  3. The weighted overall value is calculated by multiplying the centroid of the weight simplex with the centroid of the alternative value simplex.

Thus, the transformation of the rankings does not introduce any computational difficulties. In this model, the winner of the tender is the candidate i who scores the highest value according to the formula

$$ V_{i} = \mathop \sum \limits_{j} w_{j} \mathop \sum \limits_{k} w_{jk} v_{ijk} $$
(2)

where \(w_{j}\) is the weight of criterion j, and \(w_{jk}\) and \(v_{ijk}\) are, respectively, the weight and the normalised value of sub-criterion k under criterion j for a particular candidate i.Footnote 11
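As a minimal sketch of the aggregation in (2), with illustrative data structures of our own (a criterion without sub-criteria, such as Costs, can be modelled with a single dummy sub-criterion of weight 1):

    def overall_value(main_weights, sub_weights, values):
        """Eq. (2): V_i = sum_j w_j * sum_k w_jk * v_ijk for one candidate i.

        main_weights -- {criterion: w_j}
        sub_weights  -- {criterion: {sub_criterion: w_jk}}
        values       -- {criterion: {sub_criterion: v_ijk}} for candidate i
        """
        return sum(
            w_j * sum(w_jk * values[j][k] for k, w_jk in sub_weights[j].items())
            for j, w_j in main_weights.items()
        )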

The actual evaluation in DecideIT is computationally demanding. We allow statements where the weights and values are represented using interval variables in order to consider the inherent uncertainties. The general expected value can then be expressed as (2) above, given the distributions over the variables, i.e. criteria weights and alternative values.

To evaluate this, we use methods from (Danielson et al. 2020), taking into account that there are only two evaluation operators of relevance, multiplication and addition. The addition case is covered by ordinary convolution, i.e. assume that h is the distribution over a sum z = x + y associated with the distributions f(x) and g(y). Then the resulting distribution h(z) is

$$ h(z) = \int_{0}^{z} f(x)\,g(z - x)\,dx $$
(3)

The multiplication case is handled quite similarly. With the same assumptions as above, the density h(z) of the product is derived by first defining the cumulative distribution

$$ H(z) = \iint_{\Gamma_{z}} f(x)\,g(y)\,dx\,dy = \int_{0}^{1} \int_{0}^{z/x} f(x)\,g(y)\,dy\,dx = \int_{z}^{1} f(x)\,G(z/x)\,dx $$
(4)

where G is a primitive function to g, Γz = {(x,y) | xy ≤ z}, and 0 ≤ z ≤ 1. Then let h(z) be the corresponding density function:

$$ h(z) = \frac{d}{dz} \int_{z}^{1} f(x)\,G(z/x)\,dx = \int_{z}^{1} \frac{f(x)\,g(z/x)}{x}\,dx $$
(5)

Thus, the addition of the products is the standard convolution of two densities and the multiplication part is handled by a slightly more computationally complicated operation. Combining these two operations, we straightforwardly obtain the distribution over the expected utility.
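As an illustration of (3)–(5), the two operations can be approximated on a grid for densities supported on [0, 1]. The sketch below is our own numerical illustration, not the DecideIT implementation; with uniform f and g it reproduces the known results (a triangular sum density and the product density −ln z).

    import numpy as np

    def sum_density(f, g, z, grid):
        """Density of x + y at z, Eq. (3), for densities f, g on [0, 1]."""
        x = grid[grid <= z]
        if x.size < 2:
            return 0.0
        fx = np.interp(x, grid, f)
        gy = np.interp(z - x, grid, g)
        return float(np.sum(fx * gy)) * (x[1] - x[0])

    def product_density(f, g, z, grid):
        """Density of x * y at z, Eq. (5): integral over [z, 1] of f(x) g(z/x) / x dx."""
        x = grid[grid >= max(z, 1e-6)]
        if x.size < 2:
            return 0.0
        fx = np.interp(x, grid, f)
        gzx = np.interp(z / x, grid, g)
        return float(np.sum(fx * gzx / x)) * (x[1] - x[0])

    grid = np.linspace(0.0, 1.0, 2001)
    f = np.ones_like(grid)   # uniform density on [0, 1]
    g = np.ones_like(grid)
    print(sum_density(f, g, 0.5, grid))      # ~0.5, the triangular density at 0.5
    print(product_density(f, g, 0.5, grid))  # ~0.693 = -ln(0.5)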

The results of the process will be a detailed analysis of each option’s performance compared with the others, and a sensitivity analysis to assess the robustness of the result. During the process, the entire range of alternatives across all criteria can be analysed as well as how plausible it is that a provider would outrank the remaining ones, and thus provide a robustness measure for the stability of the respective strategies. This will be demonstrated later and a detailed explanation of the method is provided in (Danielson et al. 2020).

4.2 Main Criteria

According to, e.g., (Handfield et al. 2009), three obvious criteria are commonly used to evaluate candidates in a public tender, viz. the cost (or price), the quality, and the delivery. (Choi and Hartley 1996) reached the same conclusion in their study. Chen (2011), Tahriri et al. (2008), and Min (2003) performed studies in which they broke down each criterion into smaller sub-criteria and ranked them. They also noted that quality criteria generally outrank delivery criteria. Following these findings, we will use the main criteria Costs, Quality, and Delivery below and make an overall assumption that w_costs > w_quality > w_delivery.Footnote 12 We will discuss the respective criteria and their sub-criteria in more detail below.

We thus have three ranked main criteria, Costs, Quality, and Delivery, where the Quality criterion has four sub-criteria and the Delivery criterion has three sub-criteria.

4.3 Delivery

The studies of Chen (2011), Tahriri et al. (2008), and Min (2003) identified some criteria (and sub-criteria) that compose the delivery criterion of a candidate. These can be very broad and include items such as “managerial organisation”, “discipline”, “communication system”, “warranty”, etc. The three studies have in common the criteria “time delivery”, “technical capability”, and “financial situation”, which they rank as follows:

  • Time delivery > Technical capability > Financial situation (Chen 2011)

  • Time delivery > Financial situation > Technical capability (Tahriri et al. 2008)

  • Time delivery > Technical capability > Financial situation (Min 2003)

Analysing these studies, “time delivery” is always the most important and “technical capability” ranks second most of the time (but not always). This results in the following ranking (1 is best, 4 is worst in terms of importance):

Using Eq. (1) it is possible to estimate their weights because, from the data present in Table 1, we know that N = 3, Q = 4, p(time delivery) = 1, p(technical capability) = 3, and p(financial situation) = 4. This yields the following weights:

Table 1 Ranking scale for the delivery criterion

4.3.1 Financial situation

We rate the candidates according to their S&P rating (S&P 2009) under the financial situation criterion.

If a candidate has chosen not to be rated by a rating agency, it should estimate and supply a synthetic rating when bidding in the public tender, cf., e.g., (DePamphilis 2019). It furthermore seems reasonable to exclude candidates that do not fall within the “investment grade” category (rating BBB– or higher) from the public tender. In terms of values, we know that (using S&P rating nomenclature): V_AAA > V_AA+ > … > V_BBB–

So, if we were to rank these ratings in order of importance (1 is best, 10 is worst):

The cardinal ranking (CAR method) is normalised to a proportional [0, 1] value scale according to the following equation (Fasth et al. 2018):

$$ v_{j} = \frac{Q - p\left( j \right)}{{Q - 1}} $$
(6)

where v_j is the value of criterion j associated with position p(j) ∈ {1,…,Q}, where Q is the total number of positions. For example, if the candidate has rating AA–, the value v is (10 – 4) / (10 – 1) = 2/3.
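A small sketch of (6) applied to the S&P scale (the ten investment-grade positions are those assumed in the text, from AAA down to BBB–):

    # The ten investment-grade positions assumed in the text (position 1 = best).
    RATINGS = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-", "BBB+", "BBB", "BBB-"]

    def rating_value(rating, ratings=RATINGS):
        """Eq. (6): v = (Q - p) / (Q - 1) on a proportional [0, 1] value scale."""
        Q = len(ratings)                 # total number of positions, here 10
        p = ratings.index(rating) + 1    # position of the candidate's rating
        return (Q - p) / (Q - 1)

    print(rating_value("AA-"))    # (10 - 4) / 9 = 0.667, as in the text
    print(rating_value("BBB-"))   # 0.0, the lowest admissible rating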

4.3.2 Time Delivery

Recall that the candidate has the possibility of purchasing an option, corresponding to the possibility of waiting up to one year before deciding to invest. Since the government’s best interest is that the candidate starts and finishes the project as soon as possible, it is reasonable to assume that the government will rank a candidate that chooses not to buy this option higher than one that does:

4.3.3 Technical Capability

The technical capability of the candidate refers to its competence to successfully execute the project from an engineering point of view. This can be assessed in terms of the candidate’s past experiences:

  A. The candidate has practical field experience in the deployment of the exact network technology it proposed in its public tender bid (e.g. the candidate is applying for 5G deployment and has practical experience in the deployment of 5G networks);

  B. The candidate has practical field experience in the deployment of a network technology similar to the one it proposed in its public tender bid (e.g. the candidate is applying for 5G deployment and has practical experience in the deployment of 4G networks, but not 5G);

  C. The candidate has practical commercial experience regarding service provisioning and operations in the exact network technology it proposed in its public tender;

  D. The candidate has practical commercial experience regarding service provisioning and operations in a network technology similar to the one it proposed in its public tender.

A and B are mutually exclusive. Likewise, C and D are also mutually exclusive. Ranking them, we obtain the scale shown in Table 6 (Tables 2, 3, 4, 5, 6).

Table 2 Weights of the delivery sub-criteria
Table 3 Rating scale
Table 4 Ranking scale of the financial situation sub-criterion
Table 5 Ranking scale of the time delivery sub-criterion
Table 6 Ranking scale of the technical capability sub-criterion

4.4 Quality

The quality criterion should act as an indicator of how “future proof” a technology is. In telecommunications, we can see this as an aggregation of several quality-of-service (QoS) indicators. The ITU-T G.1011 and ITU-T E.800 recommendations (ITU 2017) refer to four important KPIsFootnote 13 (key performance indicators) for the QoS – data transmission speed, latency, jitter, and packet loss – and their relative importance in the range from one (very relevant) to four (less relevant) regarding six application types, see Table 7.

Table 7 Application ranking per criteria type

Considering this table for, e.g., ‘web browsing’, we see that data transmission speed is as important as packet loss, but more important than latency, and latency in turn is more important than jitter.

Considering that our input data are already in the form of a ranking, we estimate the weights using the CAR – CArdinal Ranking – methodology from (1):

From Table 8, combined with the application type proportions in Table 9 from (Schulze and Mochalski 2009), we can derive the weight of each KPI.

Table 8 Sub-criteria weights per application type
Table 9 Application type proportion in Europe

Table 10 shows the overall weights of the sub-criteria, derived from Table 8 and Table 9 (e.g. for data tr. speed, 0.369 = 0.329 × 25.83% + … + 0.137 × 0.58%).

Table 10 Weight per sub-criterion within the quality criterion

Thus, we have identified our four sub-criteria and their respective weights. Now we need to estimate their values for technologies capable of delivering a steady 100 Mbps connection. The currently feasible technologies were officially identified by (EU Commission 2013) as being optical fibre directly to the end-user’s home (FTTH), optical fibre to a cabinet followed by DSL (copper telephone lines) from the cabinet to the end-user’s home (FTTC), and mobile 5G. From (EU Commission 2012) we can obtain the mean values of DSL/FTTC. From (Ofcom 2017) we obtain the mean values of FTTH, and from the 3GPP technical note TS22.261 the 5G requirements. These are shown in Table 11.

Table 11 Values before normalisation for the Quality criterion

The scale calibration is a significant problem, as discussed thoroughly in (Danielson et al. 2019), but going into the details regarding this and suitable elicitation methods is beyond the scope of this article. Keeping this in mind, we will use a simplified method-based normalisation, of which there exist several candidates, cf. (Jahan et al. 2016). More precisely, Eq. (7) below is used when the highest value is the best one (for example, for data transmission speed, the highest value is the best value) and Eq. (8) when the lowest value is the best one (for example, for latency, where the lowest value naturally is the best one).Footnote 14

$$ r_{ij} = \frac{{x_{ij} }}{{\sqrt {\mathop \sum \nolimits_{i = 1}^{m} x_{ij}^{2} } }} $$
(7)
$$ r_{ij} = 1 - \frac{{x_{ij} }}{{\sqrt {\mathop \sum \nolimits_{i = 1}^{m} x_{ij}^{2} } }} $$
(8)

The normalisation is thus split into two kinds: “the best value is the highest” is used for data transmission speed; and “the best value is the lowest” is used for latency, jitter, and packet loss. xij represents the value x (before normalisation) of criterion j of alternative i. For example, for j = jitter and i = 5G, we have from Table 11 that xij = 1 μs. Finally, rij represents the normalised value of xij. Table 12 illustrates the data from Table 11 after being normalised.

Table 12 Values after normalisation for quality criterion
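A sketch of the normalisation in (7) and (8); the first call shows why all three technologies receive the same normalised value of about 0.577 when each delivers exactly 100 Mbps (the latency figures in the second call are illustrative only, not the Table 11 data):

    import math

    def normalise(values, higher_is_better=True):
        """Vector normalisation: Eq. (7) if higher is better, Eq. (8) otherwise."""
        norm = math.sqrt(sum(x * x for x in values))
        if higher_is_better:
            return [x / norm for x in values]
        return [1 - x / norm for x in values]

    # Data transmission speed (Mbps) for FTTC, FTTH and 5G, all fixed at 100 Mbps:
    print(normalise([100, 100, 100]))                       # [0.577, 0.577, 0.577]
    # A lower-is-better KPI such as latency (illustrative numbers):
    print(normalise([20.0, 5.0, 1.0], higher_is_better=False))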

Note that data transmission speed has the same normalised value for all three technologies. The rationale for including this sub-criterion in the model is that expectations may change in the future. For example, a particular government could change the rules of the procurement process and postulate “a connection of at least 100 Mbps” instead of “a connection of 100 Mbps”. If so, a candidate could propose an FTTH solution of 200 Mbps instead, which would put it in an advantageous position against an FTTC candidate that can only supply 100 Mbps. So, by including this sub-criterion, and even though this part of the assessment is the same for all candidates, the model can easily be adjusted.Footnote 15

4.5 Cost

The cost for the government is how much it must subsidise (above the EU fund, which is an EU resource). According to the EU bureau of statistics,Footnote 16 we know that:

  • During the period 2014‒2020, the rural broadband funding was six billion euros (6 × 10^9).

  • Only 16% of the households in rural EU have access to a 100 Mbps connection.

  • The EU has 466 million inhabitants, of which 28% live in rural areas.

The funding is expected to be roughly the same amount for the next funding period, so given the above, this yields an average funding of 126€ per house from the EU fund.Footnote 17 Thus, the cost for the government is the difference between the actual infrastructure deployment price and this EU fund. For the sake of simplicity, we will assume that the funding is the same, no matter which technology is chosen.

As mentioned above, we will furthermore assume that, in order to attract a larger pool of candidates in the public tender, the government could, for a fee, provide a possibility of waiting for up to a year to decide on starting the project. Thus, the government may also receive an option fee, F, for something analogous to a call option, where we assume that the fee the government will charge equals the option valuation. One of the most well-known methods for estimating the value of an option is the binomial model (Cox et al. 1979), which is also the model that the DecideIT tool usesFootnote 18:

Take the three circles in Fig. 2 as an illustrative example of how the tool estimates the value of the option. The red circle shows the value that the project would have if it were to increase in value every two months for a total of one year (i.e. 12 months in total). The same goes for the brown circle, except only for 10 months. The purple circle shows the value that the project would have if it were to increase in value for the first ten consecutive months and then decrease in value in the last two months of the one-year period. As already mentioned, DecideIT is based on the binomial model of (Cox et al. 1979) in which u is a number greater than 1 reflecting a proportional increase in the project value given a certain investment risk σ, and d is a number smaller than 1 reflecting a proportional decrease, given by:

$$ u = e^{\sigma \sqrt{\Delta t}} $$
$$ d = 1/u $$
$$ \Delta t = \frac{M}{n} $$
(9)

where Δt is the time-step interval of the analysis, for a total of n time increments until the option maturity M. Take for example the red circle, where a project risk of σ = 6.43% and an NPV (Net Present Value) of 368€ (per house) are assumed (the risk can be estimated using e.g. a Monte Carlo simulation, cf. (Araújo et al. 2019a)). Its value is simply NPV × u^6 = 368 × exp(6.43% × √(1/6))^6 = 430.8€ (this is the value that DecideIT shows in dark blue inside the red circle). The light blue/green value inside the red circle (62.77€) is equal to max(0; 430.8 – NPV) = max(0; 430.8 – 368) = 62.77€.

Fig. 2 Real options valuation

The procedure to calculate the dark blue values in the various branches of the tree is always the same, but for the light blue/green values this procedure (taking max(0; value – NPV)) only works for the last column of the tree. Take, for example, the brown circle: NPV × u^5 = 419.6€, which is the value shown in dark blue inside the brown circle; but 419.6 – NPV = 51.6€, which differs from the light blue/green value of 53.31€ inside the brown circle.

To complete the tree, we first calculate all the dark blue values for the whole tree. In the second step, we calculate the light blue/green values for the last column only. Then, in the third and final step, we work backwards. Let p be the probability of an upward movement in the project value. From the binomial model of (Cox et al. 1979), we have

$$ p = \frac{{e^{r\Delta t} - d}}{u - d} $$
(10)

where r is the risk-free rate (for example, the yield of a 10-year German treasury bond). The option value of a tree node at position i is

$$ i = \frac{jp + k(1 - p)}{e^{r\Delta t}} $$
(11)

where j is the option value of the node in the next upward position and k is the option value of the node in the next downward position. Take the brown circle as an example: the next upward position is the red circle (whose light blue/green value is 62.77€) and the next downward position is the purple circle (whose light blue/green value is 40.74€). Thus:

$$ i = \frac{jp + k(1 - p)}{e^{r\Delta t}} = \frac{62.77p + 40.74(1 - p)}{e^{r\Delta t}} = 53.31 $$

Proceed to do this for every node backward until the light blue/green value in the NPV node (i.e., the first node) is obtained. This will be the value of the option. In Figure 2 we can see that the option is valued at 14.78€ (per house/installation).
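The backward-induction procedure can be condensed into a short sketch of the CRR binomial model of (9)–(11). This is our own illustration; the risk-free rate is not stated in the text and is assumed here. With σ = 6.43%, an NPV of 368€ per house, six two-month steps and an assumed r of about 2.8%, the sketch returns roughly 14.8€ per house, close to the 14.78€ shown in Fig. 2.

    import math

    def deferral_option_value(npv, sigma, maturity=1.0, steps=6, r=0.028):
        """Value of the option to defer, via the binomial tree of Eqs. (9)-(11).

        npv      -- current project value per house (also the exercise threshold)
        sigma    -- annual project risk
        maturity -- option lifetime M in years
        steps    -- number of time increments n
        r        -- annual risk-free rate (assumed; not given in the text)
        """
        dt = maturity / steps
        u = math.exp(sigma * math.sqrt(dt))       # Eq. (9): up factor
        d = 1.0 / u                               # down factor
        p = (math.exp(r * dt) - d) / (u - d)      # Eq. (10): probability of an up move

        # Terminal project values and their option payoffs max(0, value - NPV).
        option = [max(0.0, npv * u**j * d**(steps - j) - npv) for j in range(steps + 1)]

        # Eq. (11): roll the option values backwards through the tree.
        for _ in range(steps):
            option = [(p * option[j + 1] + (1 - p) * option[j]) / math.exp(r * dt)
                      for j in range(len(option) - 1)]
        return option[0]

    print(round(deferral_option_value(npv=368, sigma=0.0643), 2))   # ~14.8 per house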

4.5.1 5G Infrastructure Deployment Costs

In a 5G access network, a significant cost driver is the number of required base stations. Let the intended coverage region have a total of H houses dispersed within an area of A m². If each base station offers a connection with throughput equal to T Mbps, then the maximum distance d that the signal can travel, in metres, is the largest value of d that satisfiesFootnote 19:

$$ \frac{270\,A\,\log_{2}\!\left( 10^{\left[ 8.15 - 2.31\log_{10}(d) \right]} \right)}{2.6\,H\,T} \ge d^{2}, \qquad 0 < d < 1800 $$
(12)

Then, to provide 5G coverage for a region with an area of A m², a total of B base stations will be required:

$$ B = \frac{A}{{2.6d^{2} }} $$
(13)

The (EU Commission 2017) estimates that each 5G base station will cost around 40,000€ in rural areas.
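A numerical reading of (12) and (13) as a sketch of ours: a simple grid search finds the largest admissible cell radius d and hence the number of base stations. Interpreting T as the per-house throughput of 75 Mbps used in the fibre calculations below (an assumption on our part), and using A = 25 km², H = 10,000 houses and 40,000€ per base station, the sketch reproduces roughly the 1148€ per house quoted for 5G in the simulation later in this section.

    import math

    def max_cell_radius(A, H, T, d_max=1800.0, step=0.1):
        """Largest d (metres) satisfying the capacity constraint (12), by grid search."""
        best = None
        d = step
        while d < d_max:
            lhs = 270 * A * math.log2(10 ** (8.15 - 2.31 * math.log10(d))) / (2.6 * H * T)
            if lhs >= d ** 2:
                best = d
            d += step
        return best

    def five_g_cost_per_house(A, H, T, station_cost=40_000):
        """Eq. (13): B = A / (2.6 d^2) base stations, times the unit cost, per house."""
        d = max_cell_radius(A, H, T)
        stations = math.ceil(A / (2.6 * d ** 2))
        return stations * station_cost / H

    # 25 km^2, 10,000 houses, 75 Mbps per house, 40,000 EUR per rural base station:
    print(five_g_cost_per_house(A=25e6, H=10_000, T=75))   # ~1148 EUR per house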

4.5.2 Optical Fibre Infrastructure Deployment Costs

Two optical fibre-based solutions are available. FTTH (Fibre-to-the-Home) is a pure optical fibre-based solution. The second solution, FTTC (Fibre-to-the-Cabinet), has optical fibre links between the central office and the street cabinets, while the cables from the street cabinet towards the subscribers’ homes are of the old copper type.

Let the intended coverage region have a total of H houses dispersed within an area of A m². If each house link offers a connection with throughput equal to T Mbps, then the network’s infrastructure cost is given by (14) for the FTTH scenario:

$$ \frac{HT}{F}O + \frac{H}{64}S + \left( {\frac{4HT}{F} + 3} \right)\sqrt {\frac{FA}{{2.6HT}}} L $$
(14)

And by (15) for the FTTC scenario:

$$ \frac{HT}{{10000}}\left( {C + O} \right) + \sqrt {\frac{45AHT}{{130000}}} L $$
(15)

where F represents the fibre feeder capacity (a typical value would be e.g. 10 Gbps). The network equipment costs are provided by (Wang et al. 2017) and illustrated in Table 13:

Table 13 Network deployment costs for optical fibre solutions

For example, for FTTH, using (14) with H = 10 × 10^3, T = 75 (mean simultaneous broadband usage of 75% of total capacity), and A = 25 × 10^6 yields a deployment cost of 430€ per house (Tables 14, 15, 16).
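The two cost expressions can be written as small functions, sketched below. The unit-cost parameters O, S, C and L come from Table 13 (Wang et al. 2017), which is not reproduced here, so they are left as arguments rather than hard-coded; F is expressed in the same unit as T, i.e. 10 Gbps = 10,000 Mbps.

    import math

    def ftth_cost_per_house(H, T, A, F, O, S, L):
        """Eq. (14) divided by H: FTTH deployment cost per house."""
        total = ((H * T / F) * O
                 + (H / 64) * S
                 + (4 * H * T / F + 3) * math.sqrt(F * A / (2.6 * H * T)) * L)
        return total / H

    def fttc_cost_per_house(H, T, A, C, O, L):
        """Eq. (15) divided by H: FTTC deployment cost per house."""
        total = ((H * T / 10_000) * (C + O)
                 + math.sqrt(45 * A * H * T / 130_000) * L)
        return total / H

    # Parameters of the text: H = 10,000 houses, T = 75 Mbps, A = 25 km^2, F = 10 Gbps;
    # the unit costs O, S, C, L must be taken from Table 13:
    # ftth_cost_per_house(H=10_000, T=75, A=25e6, F=10_000, O=..., S=..., L=...)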

Table 14 Cost in euros per technology for the cost criteriona
Table 15 Normalised value per technology for the cost criterion
Table 16 Ranking scale and respective weights of the criteria

A simulation using these equations, for an area of 25 km² with 10,000 houses, yields the following deployment costs per house: 362€ for FTTC, 430€ for FTTH, and 1148€ for 5G. Thus, assuming the average funding of 126€ per house as established at the beginning of this section:

And after applying the normalisation formula (lower cost entails a higher value for the government):

4.6 Summary of Weights and Values

We can now evaluate the entire decision situation based on the sub-evaluations above. As mentioned earlier, we will assume that w_costs > w_quality > w_delivery. By (1) we then get:

And including all the sub-criteria:

Since the values of all alternatives under the sub-criterion Data transmission speed are equal in this example, that sub-criterion has its weight modified to zero and the other sub-criteria under the Quality criterion are modified accordingly. This leads to Table 17 being modified into Table 18 for this particular example.

Table 17 Weights of the criteria and their sub-criteria
Table 18 Weights of the criteria and their sub-criteria

The values under the costs and quality criteria are technology dependent (Table 19):

Table 19 Values of the technologies under the criteria/sub-criteria per technology for costs and quality

5 Results

Using the decision analysis tool DecideIT, we evaluated 24 synthetic contenders (bidders) in a tender with the following characteristics (Table 20) as an illustrative example of the methodology.Footnote 20 Note that (6) is applied to the sub-criteria time delivery, technical capability, and financial situation respectively.

Table 20 Characterisation of the contenders’ technologies and delivery capacities

Take, for example, candidate T, which has an overall value of

$$ \begin{aligned} V_{T} & = w_{costs}\, v_{costs}^{T} \\ & \quad + w_{quality} \left( w_{speed}\, v_{speed}^{T} + w_{latency}\, v_{latency}^{T} + w_{jitter}\, v_{jitter}^{T} + w_{p.loss}\, v_{p.loss}^{T} \right) \\ & \quad + w_{delivery} \left( w_{time}\, v_{time}^{T} + w_{tech}\, v_{tech}^{T} + w_{finance}\, v_{finance}^{T} \right) \end{aligned} $$

The weights are summarised in Table 18 and the values are summarised in Table 19, except for the delivery criterion and its sub-criteria. For these, we apply Eq. (6) to Tables 4, 5 and 6 to obtain the values. Thus, the value of candidate T is:

$$ \begin{aligned} V_{T} & = 0.522 \times 0.722 \\ & \quad + 0.304\left( 0.000 \times 0.577 + 0.282 \times 0.744 + 0.181 \times 0.184 + 0.537 \times 0.710 \right) \\ & \quad + 0.173\left( 0.600 \times 1 + 0.250 \times 0.667 + 0.150 \times 0.556 \right) = 0.7243 \end{aligned} $$

Taking all this together, the overall results are shown in Fig. 3. The higher the bar is for a candidate, the better it is from the government perspective (less government funding is needed), given the background information. The robustness of the results is colour-marked, where a green square means that there is a significant difference between the bidders (contenders) and that substantial input changes must occur before the ranking changes (Fig. 4).Footnote 21

Fig. 3 FTTC solution

Fig. 4 Evaluation results – Costs (light grey), Quality (blue), Delivery (dark grey)

In the presentation above, it is worth noting that, given the high weight on the cost criterion, the low normalised cost value of the 5G option (Table 15) practically eliminates candidates A–H. We have used a precise representation for demonstration purposes. In an actual situation, the scenario is far from that clear-cut. This is, however, not a limitation of the model or the tool, and it can straightforwardly be extended to a more realistic analysis.

For instance, if the CAR weights are still considered too precise for the weight representation, or if there is an actual conflict in the group whose significance we want to investigate, we can assert the weights in a more imprecise format and investigate the effects. Assume that some participants in the process still consider costs to be more important than quality, which in turn is more important than delivery, but that they consider the differences between the sub-criteria sometimes too small for differentiating between them, e.g. that speed and packet loss are equally important, but still significantly more important than latency and jitter, and that delivery time is more important than the technical and financial aspects. We can then incorporate this, re-evaluate the situation (now in more imprecise terms) and get the representation in Fig. 5 as well as the results in Fig. 6.Footnote 22

Fig. 5 A refined decision tree

Fig. 6 Evaluation results introducing weight uncertainty – Costs (light grey), Quality (blue), Delivery (dark grey)

A yellow square means that there is still a noticeable difference, but one that is more sensitive to the input data. A black square implies that there is no significant difference between the candidate providers. The result is now slightly different from Fig. 3, and some sub-rankings have even been reversed. We do not take any firm position here; this is only to demonstrate the various possibilities to further expand this analysis and to include a similar procedure for the actual valuations of the respective candidate providers under the different criteria, where all these uncertainties and possibly conflicting views are taken into account. Significant conflicts can be modelled separately and discussed from a more informed perspective when the effects are made visible in this way. A detailed account of systematic conflict resolution in such an extension is, however, outside the scope of this article, whose main purpose is to demonstrate the model and its possibilities.

6 Concluding Remarks and Future Research

The main idea in this article is to suggest an alternative to the prevailing procurement models in the telecommunications field, which have the major drawback of not embracing flexibility with respect to technology changes over time. A further advantage of our approach lies in its transparency and flexibility regarding current market trends as well as societal and technological needs. The framework that we propose is a multi-stakeholder, multi-criteria evaluation framework, allowing for a challenge-driven procurement process in which the bidding operators are also offered a shorter time period for providing better solutions, for instance to be able to use technological developments or run a pilot to test different solutions before making a full-scale implementation. This facilitates the implementation of a transparent group negotiation and decision process. The model is aimed at governmental agents and national regulatory authorities for the procurement of Internet infrastructure, with a focus on rural areas. An important component herein is an element of challenge-driven procurement that allows contract contenders (bidders) to acquire an option to defer the decision to build the network for a shorter period of time. Since such areas are not normally targeted by providers, the model includes an external funding mechanism that has already been decided by the EC and that will be distributed to the local governments. The model is based on an integrated decision analytical method, developed for situations where the background information is not easily quantifiable due to various uncertainties and the presence of qualitative data, such as decision-makers’ preferences. In such contexts, the decision-makers can normally neither assign adequate criteria weights nor values for the different service providers in such procurement situations. Despite this, the bidders can be ranked by the government in an efficient and effective way. Further, sensitivity analyses are easy to carry out in the framework.

Since this new telecommunications framework was made into law only in December 2020, a real-life application has hitherto not been feasible, but it will be the next natural phase. In particular, we will investigate how the criteria set should, in collaboration with the stakeholders, be formed to capture a wider span of perspectives. In this work, we have only included the most obvious ones that are usually discussed (at best), but such an extension is straightforwardly made in the suggested framework, provided that we are able to elicit information from relevant stakeholders in an interactive process.