1 Introduction

Micro- and nano-launchers are of great interest for strengthening access to space worldwide. In particular, micro-launchers, designed for a maximum payload of around 350 kg, are now under investigation. The development of these new launch systems answers the demand for dedicated and flexible access to space for small satellite platforms. Moreover, it provides a solution to a new regulatory framework that will reduce the piggyback missions that conventional launchers can perform [1, 2].

During the last decade, the near-Earth space paradigm has shifted from powerful, large, and sometimes overly expensive solutions to cost-effective, small-sized satellites. Technological advancements in electronics, avionics, and software reliability facilitate and support this tendency. Tasks once performed by massive satellites can now be achieved by a small satellite, a CubeSat, or a swarm of CubeSats. Both the scientific and the commercial markets are pointing towards these solutions, creating new needs for dedicated launchers.

Usually, because of their small dimensions, most of these satellites ride as piggyback alongside primary payloads, e.g., as secondary payloads of International Space Station resupply cargo. The main problems of such solutions lie in the lack of flexibility on the target orbit parameters and in the slot availability of conventional launchers. As the potential applications of small satellites grow, a new market opens up: new space launchers, many of them privately led [3], totally dedicated to small satellites. These companies aim at reduced launch prices, launch location flexibility, and a high launch rate per year.

A large number of micro-launcher projects are underway (more than 80 different concepts are estimated [3]). These innovative launch vehicles are proposed featuring quick turnaround times, several launches per month, and competitive prices. Some of them aim at green or innovative solutions, others at the application of off-the-shelf technologies to reduce the development risk. The variety of concepts raises the need for an evaluation methodology addressing these innovative space systems.

Politecnico di Torino, in collaboration with the European Space Research and Technology Centre (ESTEC), is performing a study to compare the performance versus cost of several micro- and nano-launchers.

This article focuses on a trade-off methodology for the assessment of innovative micro-launcher concepts. The work is supported by an ad-hoc conceptual design tool and by a database of current and prospective micro- and small launchers. This approach allows the estimation of data that are either not publicly available or not yet defined for concepts at an early development stage. The debates on the actual size of the small-sat market, the reliability of the business case, and the competitiveness of these launch services are still open. This work aims at providing a helpful tool to evaluate different concepts and to provide awareness of the criteria that drive the evaluation.

2 Methodology

Several trade-off methodologies have been considered for this work. No literature has been found on applications to micro-launchers. The proposed methodology addresses the need for the definition of new criteria and for the inclusion of qualitative assessments.

The criteria may be grouped into categories constituting the different levels of the so-called Figures of Merit (FoMs), i.e., quantifiable criteria reflecting the stakeholders' values for the system's attributes [4].

The stakeholder analysis is omitted in this article due to the variability of its output.

The set of criteria influencing the choice is hence defined and evaluated together with the related weights. The AHP (Analytic Hierarchy Process) method is used for this analysis [5].

The main drivers in evaluating the micro-launcher concepts need to be identified. These are implemented in a trade-off analysis that compares the concepts with respect to the criteria defined by the stakeholders. A sensitivity analysis then assesses the robustness of the results. Due to the very early stage of definition of the micro-launcher systems and to the confidentiality of some launcher characteristics, e.g., the aerodynamic characteristics, sizing tools are developed for the conceptual modelling.

2.1 Database

To define the main system attributes and to provide reference data for the estimation of missing information, a database is produced. It includes the characteristics of similar systems, i.e., launchers of small-to-medium size. The database also includes programmatic, mass, propulsion, dimension, and performance data when available. The entries are extracted from available user manuals, e.g., Electron [6], from the Federal Aviation Administration (FAA) annual compendiums of commercial space transportation, e.g., the 2018 edition [7], and from the literature, e.g., [8]. Figure 1 shows example pages from the database. The index is divided into the following sections:

  • General: launcher status (e.g., in development, operational, etc.), manufacturer, typical launch sites, and price per kg.

  • Dimension: dimensions in terms of length and diameter for each stage and the overall vehicle.

  • Mass: inert and propellant masses for each stage or, at least, the overall vehicle take-off mass.

  • Propulsion: motor performance data, namely thrust, throat area, and chamber pressure.

  • Fairing: fairing dimensions.

  • Environment: launcher allowable axial and lateral loads, dynamic pressure, and acoustic environment.

  • Performance: injection orbit inclinations and altitudes.

The trade-off analysis may hence rely on the database to identify the attributes that characterise the FoMs.
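For illustration, one entry of such a database could be represented by a structure like the following. This is only a sketch; all field names are hypothetical, and the actual spreadsheet layout is the one shown in Fig. 1.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LauncherEntry:
    """Illustrative structure of one database record; all field names are hypothetical."""
    # General
    name: str
    status: str                                   # e.g., "in development", "operational"
    manufacturer: str
    launch_sites: List[str] = field(default_factory=list)
    price_per_kg: Optional[float] = None          # published price, if available
    # Dimensions and masses
    length_m: Optional[float] = None
    diameter_m: Optional[float] = None
    glow_kg: Optional[float] = None               # gross lift-off mass
    stage_inert_kg: List[float] = field(default_factory=list)
    stage_propellant_kg: List[float] = field(default_factory=list)
    # Propulsion, fairing, environment, performance
    stage_thrust_kN: List[float] = field(default_factory=list)
    fairing_length_m: Optional[float] = None
    max_axial_load_g: Optional[float] = None
    payload_leo_kg: Optional[float] = None
    target_inclinations_deg: List[float] = field(default_factory=list)
```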

Fig. 1 Database spreadsheet extract

2.2 Criteria

In identifying the FoMs, the availability of data influences the quality of the evaluation and the usability of some system attributes. The conceptual design modelling tools presented in this article, Sect. 2.6, are proposed to cope with the lack of data for some of these attributes. Moreover, the stakeholders may also want to consider non-quantifiable attributes. For this reason, a qualitative score from 1 to 9 is given to such an attribute, or criterion, with 9 being the best score and 1 the worst. For each case, an unambiguous scale is defined. The identified FoMs, i.e., the most important attributes, are listed with their division into criteria and sub-criteria. The criteria categorisation may help the stakeholders maintain awareness of the link between the FoM and the system.

A brief definition of the FoMs follows; for each criterion, it is specified whether the data used are quantitative or qualitative.

  • Payload (PL) concerns the capability and performance of the system in carrying a payload.

    • Mass (PL-M) is the nominal payload mass (in LEO) as stated by the launcher's manufacturer (quantitative).

    • Volume (PL-V) is the maximum allowable payload volume. The mass and dimension sizing tool, discussed in Sect. 2.6.2 of this article, provides an estimate of the fairing dimensions, allowing the payload bay volume to be estimated (quantitative estimation).

    • Shareability (PL-S) is the capability to accommodate and release multiple payloads at different orbits, including the last-stage manoeuvrability and the maximum number of re-ignitions (qualitative).

    • Price (PL-P) is the price per kilogram. This criterion considers the price, not the cost, of the launcher. Indeed, the aim is to take into account the price predicted and advertised by the manufacturer. For this reason, this criterion is not given the high weight usually attributed to the launcher's cost. When the price value is not available or obtainable from the business plan, the price is approximated through its relation with the gross lift-off mass of the launcher, as discussed further in this section (quantitative).

    • Ownership (PL-O) identifies the launcher's manufacturer, builder, vendor, and owning company. This attribute allows the stakeholder to consider possible influences and/or preferences for collaborations among specific countries (qualitative).

  • Physical characteristics (PH) covers the main physical properties specific to the launcher.

    • Impulse (PH-I) is the vacuum specific impulse (\(I_{sp}\)) of the first stage. This attribute is chosen as a general indicator of the propulsive capabilities, indirectly covering considerations on the thrust value. Only the \(I_{sp}\) of the first stage is considered, since most of the alternatives under study maintain the same average \(I_{sp}\) across the different stages (quantitative).

    • Propellant (PH-P) is the type of propellant used by the proposed launcher. Depending on the stakeholders, the propellant may be a decision driver due to its features, e.g., low emissions (qualitative).

    • Dimensions (PH-D) are twofold: the overall length and the main launcher diameter (neglecting possible boattails). A less cumbersome launcher may be preferred for handling convenience and the consequent reduction in operation costs. This criterion is independent of the others, being the only one considering operation costs (quantitative).

  • Flight profile (FP) concerns the flight and mission-achievement capabilities.

    • Complexity (FP-C) concerns the number of stages and boosters and their composition (quantitative).

Fig. 2 Identified criteria for the micro-launchers' trade-off analysis, grouped in two levels

The gross lift-off weight was omitted as a criterion due to its interdependence with PL-P. The two criteria are not independent in this study only because of the early stage of concept development characterising the analysed systems. In further stages of development, this constraint can easily be relaxed.

All the alternatives may be launched from the same launch site and may target the same orbit, a scenario considered to enhance comparability. It is worth mentioning that the stakeholders' needs may also influence the launch site. For instance, the stakeholder may be interested in developing a local economy by establishing a launch site, this being unrelated to the space access capability of the system.

Additional criteria not reported in the list have been considered. As a risk parameter, the reliability of the launcher is defined through heritage, the number of failures, and the number of successful flights. Likewise, the availability is characterised in terms of flight rate and time-to-flight (TTF). Both parameters are of great interest; however, due to the current early design stage of the micro-launchers and the limited data availability, these criteria have been excluded.

Cost and price

The micro-launcher concepts are strictly linked to the concept of sustainable space access, aiming, among other goals, at a reduction of the launcher's cost.

The cost estimation for a launcher of this category may be even more critical than it already is for conventional launchers. A research group at Politecnico di Torino is already working on a cost estimation methodology for micro-launchers, and an ad-hoc tool is under development. The currently favoured approach relies on cost estimating relationships (CERs) for development, manufacturing, and operations costs, most of which are aggregated from considerations at the equipment level [9].

Similarly, the micro-launchers' service price will determine the magnitude of the market, shaping the industry and the number of new applications that will start accessing space. In this work, the choice is to highlight the price estimated by the launchers' manufacturers rather than the cost. This choice is also due to the limited availability of data at the equipment level to perform a robust cost analysis. The commercial price per kilogram, if not explicitly stated, can be derived through extrapolation from the market/business plan.

For the alternatives for which the price value is not available, it is roughly approximated in proportion to the Gross Lift-Off Weight (GLOW), as in Eq. (1). The assumption of a direct relationship between the service price and the GLOW is accepted only at the preliminary analysis stage, as in the case study presented in this work. However, the price obtained through this estimation is in several cases in line with the price extrapolated from the business plan, supporting the rough estimation

$$\begin{aligned} price_j=\frac{price_i}{GLOW_i}*GLOW_j. \end{aligned}$$
(1)
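As a minimal numerical sketch of Eq. (1), assuming only that a reference launcher with a published price is available; the figures below are purely illustrative and are not data of the analysed concepts.

```python
def estimate_price(glow_j: float, price_i: float, glow_i: float) -> float:
    """Rough price estimate for concept j from reference launcher i, Eq. (1).

    Assumes the service price scales linearly with the gross lift-off mass,
    an approximation accepted only at this preliminary analysis stage.
    """
    return price_i / glow_i * glow_j

# Purely illustrative numbers: a 25 t GLOW reference priced at 6.0 M EUR
# would suggest roughly 4.8 M EUR for a 20 t GLOW concept.
price_j = estimate_price(glow_j=20_000.0, price_i=6.0e6, glow_i=25_000.0)
```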

Qualitative criteria The trade-off analysis is an analytical method of comparison. However, the decision drivers may be of a qualitative nature, i.e., not directly linked to a quantifiable attribute of the system.

To cope with the need to take qualitative attributes into account as well, a scale is defined. Since this work focuses on the evaluation of innovative concepts, the scale parameters are the performance and the innovation potential, and the scale is referred to as the concept's attribute assessment.

Fig. 3 The concept's attribute qualitative assessment scale

In Fig. 3, a graphical depiction of the defined scale is reported. The horizontal axis shows the performance parameter, from low to high (left to right), defined as the performing capability of the system attribute to be assessed. The vertical axis shows the potential, from low to high (bottom to top), defined as the innovation potential of the system attribute to be assessed. Ideally, the highest value for both features is preferred. It is therefore possible to position the alternatives for a given system attribute in the grid and obtain a value from 1 to 9, where 1 is the lowest score and 9 is the highest. In the configuration presented in this work, the performance parameter is preferred over the potential parameter, which justifies the allocation of the values in the grid. However, this may be re-configured case by case.

The scale is inspired by the most common policies and representations in risk assessment and by the nine-box grid used during the talent review process by HR experts. Its application is similar to the scale of Table 1 in the next section.
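A minimal sketch of how such a scale can be applied in practice is given below. The value allocation in the dictionary is only one plausible choice consistent with the scores quoted in Sect. 3.2 (performance weighted above potential); the authoritative allocation is the one depicted in Fig. 3.

```python
# One plausible allocation of the 1-9 values over the (performance, potential) grid,
# consistent with the examples quoted in Sect. 3.2; the reference grid is Fig. 3.
NINE_BOX = {
    ("low", "low"): 1,      ("low", "moderate"): 2,      ("low", "high"): 4,
    ("moderate", "low"): 3, ("moderate", "moderate"): 6, ("moderate", "high"): 7,
    ("high", "low"): 5,     ("high", "moderate"): 8,     ("high", "high"): 9,
}

def qualitative_score(performance: str, potential: str) -> int:
    """Score a qualitative system attribute on the concept's attribute assessment scale."""
    return NINE_BOX[(performance, potential)]

# An attribute with high performance but low innovation potential scores 5,
# slightly above one with low performance and high potential (4), since this
# configuration prefers performance over potential.
assert qualitative_score("high", "low") == 5
assert qualitative_score("low", "high") == 4
```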

2.3 Prioritisation

The stakeholders may attribute different importance, i.e., priority, to the different criteria. Such importance is defined by the criteria weights, i.e., relative importance values indicating the influence of each criterion on the ranking, derived from pairwise comparison. Each criterion is compared with every other through values from 1 to 9 and their reciprocals (for inverse comparison), with 1 meaning "of equal importance" and 9 "of extreme importance" [10], as reported in Table 1. These values, scoring the criteria with respect to each other, populate the so-called prioritisation matrix.

Table 1 The absolute comparison values’ scale [10]

Hence, a priority vector may define the criteria priorities, which are proved to be best represented by the principal eigenvector (when the priorities are derived from a positive reciprocal pairwise comparison) [5]. The eigenvector and eigenvalue are calculated from the matrix resulting from the pairwise comparison, i.e., the prioritisation matrix, to define the priority vector and hence the criteria weights. The priority vector is obtained by normalising the principal eigenvector, i.e., the eigenvector associated with the maximum eigenvalue.

It must be underlined that the pairwise comparison, which leads to the criteria weights, is influenced by the stakeholders performing the analysis. This step of the trade-off analysis involves an inherently subjective judgement. However, the effects of the criteria weights are monitored through the sensitivity analysis.

For the reasons above, the values attributed by the stakeholders may result in inconsistent evaluations. Hence, the principal eigenvalue is also used to calculate the consistency index and the consistency ratio. The latter shall be equal to or less than 10%, the threshold below which the inconsistency in the pairwise comparison is considered acceptable [11].
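The following sketch shows how the priority vector and the consistency ratio can be computed with a standard eigenvalue routine; the pairwise matrix used here is hypothetical and only for illustration, while the random-index values are Saaty's published ones.

```python
import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
                6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # Saaty's RI values

def ahp_priorities(pairwise: np.ndarray):
    """Priority vector and consistency ratio from a positive reciprocal matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                        # principal eigenvalue index
    lam_max = eigvals.real[k]
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                                    # normalise to sum to 1
    n = pairwise.shape[0]
    ci = (lam_max - n) / (n - 1)                       # consistency index
    cr = ci / RANDOM_INDEX[n]                          # consistency ratio
    return w, cr

# Hypothetical 3-criterion comparison (not the paper's Table 6):
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_priorities(A)
print(weights, f"CR = {cr:.2%}")   # CR <= 10% is considered acceptable
```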

If different stakeholders take part in the decision, each stakeholder may populate a different prioritisation matrix, and the result will consider the aggregation of the different evaluations.

The resulting criteria weights may be placed in a Pareto chart, highlighting the most influential criteria and, most of all, providing awareness of criteria that have almost no influence on the trade-off. Indeed, Pareto charts often display only the criteria making up the first, most influential 95%.

2.4 Synthesis

The alternative micro-launchers may be scored with respect to the sub-criteria (indirectly also to the higher level criteria).

The scores depend on the system attribute the criteria refer to (as defined in Sect. 2.2) and are then normalised such that the sum of the scores per criterion equals 1, as in the ideal mode of the AHP method [12]. The overall score, resulting from the aggregation of the scores per criterion, takes into account the obtained criteria weights, i.e., the scores per criterion are multiplied by the criterion weight and the results for all criteria are summed per alternative.
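A compact sketch of this aggregation step (normalisation of the scores per criterion followed by the weighted sum) is given below; the numbers are hypothetical and do not refer to the case-study launchers.

```python
import numpy as np

def synthesise(raw_scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Aggregate alternative scores into overall totals.

    raw_scores: (n_alternatives, n_criteria) matrix of scores per criterion.
    weights:    (n_criteria,) criteria weights from the prioritisation step.
    Scores are normalised so that each criterion column sums to 1 (as described
    in the text), then combined as a weighted sum per alternative.
    """
    norm = raw_scores / raw_scores.sum(axis=0, keepdims=True)
    return norm @ weights

# Hypothetical example: 3 launchers, 2 criteria weighted 0.7 / 0.3
scores = np.array([[150.0, 0.8],    # e.g., payload mass [kg], payload volume [m^3]
                   [250.0, 0.5],
                   [100.0, 0.6]])
totals = synthesise(scores, np.array([0.7, 0.3]))
ranking = np.argsort(-totals) + 1   # 1-based labels of the alternatives, best first
```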

The decision matrix is hence populated, combining all the previous elements and methodology steps, i.e., the synthesis. An example is provided in this article for the case study discussed in Sect. 3. The decision matrix may be considered one of the principal trade-off results.

Performance maps and price per kilogram may be compared with the decision matrix, the latter reflecting the combination of parameters beyond the performance alone.

Moreover, it is also possible to extract the results per pair of criteria to assess the local result [13].

2.5 Sensitivity analysis

To assess the results of the trade-off, a sensitivity analysis is performed.

The final ranking is influenced not only by the parameters' scores but also by the weights assigned to the criteria. Since the score per criterion of the different parameters depends mainly on the launcher characteristics and model, we may focus the sensitivity analysis on the criteria weights.

The sensitivity analysis assesses the sensitivity of the criteria and identifies the most critical criterion.

Awareness of the most influencing criteria may sometimes lead the stakeholder to revise the early steps of the analysis, such as the pairwise comparison.

Each criterion may be more or less critical, a critical criterion being one for which a slight change in its weight may determine a different result in the final ranking of the alternatives. The most critical criterion may differ from the criterion with the highest weight [12].

It is possible to calculate the minimum change in a criterion weight needed to vary the ranking of the alternatives. This is done by considering the aggregate score with respect to the score obtained per criterion. The computation is performed per pair of alternatives (i.e., launchers).

The minimum changes \(\delta \) are evaluated for each combination of alternatives (\(A_i-A_j\)) per criterion. Expressed in absolute value, the minimum change is the necessary increase or decrease of the weight to change the final ranking order. However, if the evaluated change is higher in magnitude than the previously assigned criterion weight, the change may be considered not feasible [12].

The relative minimum change value may instead be obtained by relating the absolute minimum change to the criterion weight. Among the relative minimum changes, we may consider only the values less than 100, given that if the value is greater than 100, no change will affect the ranking of the alternatives, and the result may hence be considered robust [14].

The minimum value obtained per criterion is often referred to as the criterion criticality degree [12]. Hence, the most critical criterion is the most sensitive one, with the highest sensitivity coefficient, obtained as the reciprocal of the criterion criticality degree; i.e., the ranking of the alternatives is most sensitive to a change in the weight of the most critical criterion, which has the highest sensitivity coefficient.
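A simplified sketch of this computation is given below, following the logic described above; the re-normalisation of the remaining weights after a change, treated in [12], is neglected here, and the variable names are illustrative.

```python
import numpy as np

def minimum_changes(norm_scores: np.ndarray, weights: np.ndarray) -> dict:
    """Minimum weight change delta per criterion and pair of alternatives.

    norm_scores: (n_alt, n_crit) scores normalised per criterion (numpy array).
    Returns {(i, j, k): delta}, the change in weight k needed to reverse the
    ranking of alternatives i and j.
    """
    totals = norm_scores @ weights
    n_alt, n_crit = norm_scores.shape
    deltas = {}
    for i in range(n_alt):
        for j in range(i + 1, n_alt):
            for k in range(n_crit):
                gap = norm_scores[j, k] - norm_scores[i, k]
                if abs(gap) > 1e-12:
                    deltas[(i, j, k)] = (totals[i] - totals[j]) / gap
    return deltas

def criticality(deltas: dict, weights: np.ndarray):
    """Relative minimum changes (in %) and sensitivity coefficient per criterion."""
    rel = {key: 100.0 * abs(d) / weights[key[2]] for key, d in deltas.items()
           if abs(d) <= weights[key[2]]}               # keep only feasible changes
    crit_degree = {}
    for (_, _, k), r in rel.items():
        crit_degree[k] = min(r, crit_degree.get(k, float("inf")))
    return rel, {k: 1.0 / cd for k, cd in crit_degree.items()}
```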

The most critical criterion may differ depending on whether the influence on the result is considered with respect to the first position in the ranking or to the whole list and its order [13].

In the case study presented in Sect. 3, the final ranking of alternatives may also be sensitive to the scores per criterion of the alternatives, since many of the scores are obtained by approximation and modelling to cope with the lack of data. The uncertainty margins shall be evaluated at a further development step to assess whether they remain within the minimum changes affecting the ranking. However, in this work, possible margins on the scores are left to the modelling tools presented and to the data providers, i.e., the references.

2.6 Conceptual design modelling

The evaluation of the micro-launchers' main characteristics at their concept stage requires generating estimates of their shape, mass, size, and performance. Because micro-launchers are new technologies, private companies and space agencies are not prone to share their data.

This section provides an overview of some conceptual design tools applied to the micro-launcher framework. The objective is to fill the gap on the parameters needed for the trade-off methodology previously presented and not found in the literature.

In particular, it provides the means to generate the launcher's preliminary trajectories and performance maps starting from the target values of payload and insertion orbit. The presented approach uses both literature and historical data as main inputs. It then applies conceptual design system modelling to size the main systems and subsystems in terms of mass and dimensions.

First, a preliminary aerodynamics estimation tool is analysed. Then, the mass, dimensions, and thrust estimation tool is discussed in detail. The preliminary aerodynamics estimation tool starts from a guess of shape and dimensions derived from the database of Sect. 2.1. It feeds its data to the second tool, which evaluates the new mass, shape, and size of the system. At the end of this process, the first tool is called again until convergence.

Fig. 4 Data-flow of the aerodynamics tool

Fig. 5 Comparison between the aerodynamics tool results (in orange) and the data of VEGA (in blue) [20]

2.6.1 Aerodynamic model

The aerodynamic model is built as the sum of skin friction drag [15], base drag [16–18], and supersonic and transonic wave drag [17, 18], as in Eq. (2). Drag is the only aerodynamic force considered in the preliminary launcher design. The data flow is depicted in Fig. 4. The estimation of a launcher's drag during the preliminary phases of a project is widely debated, and different solutions may be found in the literature, e.g., [17, 19]. The main novelty of the PoliTO tool is a simple aerodynamics model that gives quick outputs based on the launcher shape. These outputs are still affected by error; however, they are reliable enough to model the first part of the launcher ascent. The objective is not the optimisation of the shape, but rather the search for reliable data to test the feasibility of a launcher project. The tool is intended to be used during phase 0 or pre-phase A of the design, where few data are known but the output should be the design space. The tool's inputs are: (i) the dimensions (e.g., length, diameter) of the stages, (ii) the Mach interval that the user wants to analyse, (iii) the surface parameters (e.g., the type of coating of the stages), (iv) the presence of fins, protuberances, and boattails, and (v) the nose shape (e.g., conical, ogive). If the nose shape is still unknown, it is advisable to use an ogive fairing: the drag will be slightly overestimated, but this can be a good compromise in an initial design phase.
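For clarity, the tool inputs listed above could be collected in a structure such as the following; this is only an illustrative sketch of the interface, with hypothetical field names, not the actual PoliTO implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AeroToolInputs:
    """Illustrative input set for the preliminary aerodynamics estimation (hypothetical names)."""
    stage_lengths_m: List[float]                   # (i) dimensions per stage
    stage_diameters_m: List[float]
    mach_range: Tuple[float, float] = (0.1, 10.0)  # (ii) Mach interval to analyse
    surface_coating: str = "painted"               # (iii) surface parameters / roughness class
    has_fins: bool = False                         # (iv) fins, protuberances, boattails
    has_protuberances: bool = True
    has_boattail: bool = False
    nose_shape: str = "ogive"                      # (v) ogive advised when still unknown
```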

The validation of the aerodynamics model has been performed using the data at null angle of attack from [20], Fig. 5. The maximum difference between the proposed model and the VEGA data is around \(10\%\), an acceptable margin during preliminary design phases. A further validation analysis has been performed using the aerodynamic data of the DNEPR, assessed by ESA–ESTEC by means of inviscid CFD simulations. The drag contributions considered in this validation are the skin friction drag and the wave drag, Fig. 6. The aerodynamics tool seems to underestimate the pressure drag just after the transonic region with respect to the inviscid calculations

$$\begin{aligned} C_{D_{tot}} = C_{D_{friction}}+C_{D_{base}}+C_{D_{wave}}. \end{aligned}$$
(2)

The high-level drag estimation starts with the study of the skin friction drag [15, 21], which is the easiest contribution to evaluate. The friction drag is defined as the sum of the main body friction drag, the fin friction drag, the protuberance friction drag (e.g., feedlines), and the excrescence friction drag, Eq. (3). The simulation estimates the speed of sound, the kinematic viscosity, and the Reynolds number for each of the friction drag elements. Then, the skin friction coefficient is evaluated with a surface roughness that varies with the applied coating [21]:

Fig. 6 Comparison between PoliTO tool (in orange) and Euler equations results for the DNEPR launcher (in blue)

$$\begin{aligned}&C_{D_{friction}}= C_{d_f}(body)+K_f\cdot C_{d_f}(fins) \nonumber \\&\quad + K_f\cdot C_{d_f}(protuberance)+C_{d_e}, \end{aligned}$$
(3)

where \(K_f\) is the mutual interference factor of the fins with respect to the body, set at 1.04 [21].

The base drag is difficult to estimate and depends on the shape of the launcher and on the skin friction drag. Thus, a hybrid method that combines analytical expressions and data from NASA wind tunnel testing is used [17, 18, 22]. An analytic method is used up to Mach 2 [23]; then, the data from [22] and [17] are fitted up to Mach 10; beyond that, an analytical expression is employed again [16]. The transonic and supersonic wave drag is derived from [18] and [23]. These contributions are again quite difficult to estimate and are deeply linked with the shape of the fairing. Usually, the drag rise is set to start at Mach 0.8 and to end around Mach 1.2.
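A structural sketch of the drag build-up of Eqs. (2) and (3) is shown below. The individual component models (skin friction [15, 21], base drag [16–18, 22], transonic and supersonic wave drag [17, 18, 23]) are taken from the cited references and are represented here only by placeholder callables; the interference factor value is the one quoted above.

```python
from typing import Callable

def friction_drag_coefficient(cdf_body: float, cdf_fins: float,
                              cdf_protuberances: float, cd_excrescences: float,
                              k_f: float = 1.04) -> float:
    """Friction build-up of Eq. (3); k_f is the fin-body mutual interference factor [21]."""
    return cdf_body + k_f * cdf_fins + k_f * cdf_protuberances + cd_excrescences

def total_drag_coefficient(mach: float,
                           cd_friction: Callable[[float], float],
                           cd_base: Callable[[float], float],
                           cd_wave: Callable[[float], float]) -> float:
    """Drag build-up of Eq. (2); the three callables stand in for the component models."""
    return cd_friction(mach) + cd_base(mach) + cd_wave(mach)
```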

2.6.2 Mass and dimensions model

The mass, dimensions, and propulsion model is needed to estimate crucial system attributes.

The required inputs are (i) the nominal mission orbit, (ii) the desired payload, (iii) the diameter of the vehicle, and (iv) the overall probable delta-V. If known, the overall initial mass, the overall dry mass, and the length of the launcher can be fed to the program, but they are not mandatory inputs. The tool evaluates the quantity of propellant and inert mass in each stage, the thrust per stage, the minimum tank dimensions, and the length of the stages (Fig. 7). The launchers of interest are characterised by two or three stages. Thus, the analysis starts with the launcher restricted body problem [24]. This assumption is used in the first iteration of the code as a preliminary initial guess. The main assumption of this method is that the stages have the same payload ratio \(\lambda \), Eq. (4).

The method above shows a difference from the actual mass values of a two- or three-stage launcher of around 10% in the worst-case scenario, as shown in Table 3.

Fig. 7 Mass and dimensions estimation tool data workflow

$$\begin{aligned} \lambda = \frac{m_{PL}}{m_0-m_{PL}}, \end{aligned}$$
(4)

where \(m_0\) is the overall mass of the vehicle and \(m_{PL}\) is the mass of the payload.

In the payload-ratio definition, the payload of each stage is the gross mass of the stage above it: the mass of the second stage acts as the payload of the first stage, and the mass of the third stage as the payload of the second. For a two-stage-to-orbit vehicle, considering that \(\lambda _1 = \lambda _2\), the modelling equations are reported in Eqs. (5), (6), and (7) [24]:

$$\begin{aligned} \lambda _1= & {} \frac{m_{0_2}}{m_{0_1}-m_{0_2}} \end{aligned}$$
(5)
$$\begin{aligned} \lambda _2= & {} \frac{m_{PL}}{m_{0_2}-m_{PL}} \end{aligned}$$
(6)
$$\begin{aligned} m_{0_2}= & {} \sqrt{m_{0_1}}\sqrt{m_{PL}}. \end{aligned}$$
(7)

For a three-stage-to-orbit vehicle, the related equations are reported in Eqs. (8)–(13) [24]:

$$\begin{aligned} \lambda _1= & {} \frac{m_{0_2}}{m_{0_1}-m_{0_2}} \end{aligned}$$
(8)
$$\begin{aligned} \lambda _2= & {} \frac{m_{0_3}}{m_{0_2}-m_{0_3}} \end{aligned}$$
(9)
$$\begin{aligned} \lambda _3= & {} \frac{m_{PL}}{m_{0_3}-m_{PL}} \end{aligned}$$
(10)
$$\begin{aligned} m_{0_2}= & {} \frac{m_{PL}}{\pi _{PL}^{2/3}} \end{aligned}$$
(11)
$$\begin{aligned} m_{0_3}= & {} \frac{m_{PL}}{\pi _{PL}^{1/3}} \end{aligned}$$
(12)
$$\begin{aligned} \pi _{PL}= & {} \frac{m_{PL}}{m_0}. \end{aligned}$$
(13)

The assumptions of the same specific impulse and the same structural ratio reported in [24] can be relaxed. Therefore, the inert and propellant mass allocation in the stages is obtained through the structural ratio \(\epsilon \) (reported in Table 2), which may differ among the stages. The related equations for a two-stage-to-orbit vehicle are reported in Eqs. (14)–(17); the equations for a three-stage-to-orbit vehicle follow the same pattern:

$$\begin{aligned} m_{inert_1}= & {} \epsilon _1 \cdot (m_{0_1} -m_{0_2}) \end{aligned}$$
(14)
$$\begin{aligned} m_{inert_2}= & {} \epsilon _2 \cdot (m_{0_2} -m_{PL}) \end{aligned}$$
(15)
$$\begin{aligned} m_{prop_1}= & {} m_{0_1}-(m_{inert_1}+m_{0_2}) \end{aligned}$$
(16)
$$\begin{aligned} m_{prop_2}= & {} m_{0_2}-(m_{inert_2}+m_{PL}). \end{aligned}$$
(17)
Table 2 Structural ratio for different types of propulsion [25]
Table 3 Comparison between the Epsilon second- and third-stage gross mass values [21] and the values estimated with the restricted body problem, considering a payload mass of 700 kg
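A minimal sketch of this first-guess mass breakdown is given below, implementing Eqs. (5)–(7) and (14)–(17) for a two-stage vehicle and Eqs. (11)–(13) for the upper stages of a three-stage vehicle. The numerical values at the end are purely illustrative and do not refer to the case-study launchers; the structural ratios are simply assumed, with Table 2 providing the reference values.

```python
import math

def two_stage_masses(m_pl: float, m0: float, eps1: float, eps2: float) -> dict:
    """First-guess mass breakdown for a two-stage launcher (equal payload ratios)."""
    m0_2 = math.sqrt(m0 * m_pl)                    # Eq. (7): geometric mean of m0_1 and m_PL
    m_inert_1 = eps1 * (m0 - m0_2)                 # Eq. (14)
    m_inert_2 = eps2 * (m0_2 - m_pl)               # Eq. (15)
    m_prop_1 = m0 - (m_inert_1 + m0_2)             # Eq. (16)
    m_prop_2 = m0_2 - (m_inert_2 + m_pl)           # Eq. (17)
    return {"m0_2": m0_2,
            "inert": (m_inert_1, m_inert_2),
            "propellant": (m_prop_1, m_prop_2)}

def three_stage_upper_masses(m_pl: float, m0: float) -> tuple:
    """Second- and third-stage gross masses for a three-stage launcher, Eqs. (11)-(13)."""
    pi_pl = m_pl / m0                              # Eq. (13)
    m0_2 = m_pl / pi_pl ** (2 / 3)                 # Eq. (11)
    m0_3 = m_pl / pi_pl ** (1 / 3)                 # Eq. (12)
    return m0_2, m0_3

# Purely illustrative: a 150 kg payload on a 20 t two-stage concept, with
# assumed structural ratios of 0.10 and 0.13.
first_guess = two_stage_masses(m_pl=150.0, m0=20_000.0, eps1=0.10, eps2=0.13)
```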

After the preliminary mass estimation, the tool evaluates the available delta-V with the Tsiolkovsky equation. This value is then compared with the required overall delta-V, set at 8 km/s for Low Earth Orbit (LEO) [25]. If the available delta-V is less than the required one, the payload and structural ratios are adjusted until convergence on the delta-V is reached. To perform this task, maximum and minimum altitudes for stage separation are set. These values differ between two- and three-stage-to-orbit vehicles and depend on the literature data from the micro-launcher database. At the end of the iterations, the payload ratios may be slightly different from those of the restricted body problem, which are only assumed as an initial point for the in-loop design. The length of the various stages is estimated from the propulsion data in [26] and from the propulsion-related database entries. Likewise, the database is used to estimate the thrust level of the first stage, Fig. 8, while the thrust of the other stages is estimated considering a conventional thrust-to-weight ratio, \(\frac{T}{W}=1.3 \div 2\) [24].

The thrust values of the different stages are then refined through an iterative process inside the trajectory generation tool. In the case of this study, all the simulations are performed with ASTOS (Analysis, Simulation and Trajectory Optimization Software) [27]. The stage lengths are first guessed using correlation data from the database of Sect. 2.1, built for the analysis of micro-launchers, and then refined with the propulsion data from [26].
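A compact sketch of the delta-V check and of the thrust-to-weight guess described above is given below; the stage masses and specific impulses in the example are purely illustrative and do not refer to the case-study launchers.

```python
import math

G0 = 9.80665                     # standard gravity, m/s^2
REQUIRED_DV_LEO = 8_000.0        # overall delta-V budget assumed for LEO, m/s [25]

def available_delta_v(stage_m0, stage_mf, stage_isp):
    """Ideal delta-V from the Tsiolkovsky equation, summed over the stage burns.

    stage_m0 / stage_mf: initial and final masses of each burn [kg];
    stage_isp: vacuum specific impulses [s].
    """
    return sum(isp * G0 * math.log(m0 / mf)
               for m0, mf, isp in zip(stage_m0, stage_mf, stage_isp))

def upper_stage_thrust(stage_gross_mass_kg: float, t_over_w: float = 1.5) -> float:
    """Thrust guess [N] from a conventional thrust-to-weight ratio (1.3-2) [24]."""
    return t_over_w * stage_gross_mass_kg * G0

# Purely illustrative two-stage example: if the available delta-V falls short of
# the ~8 km/s budget, the payload and structural ratios are adjusted and the
# sizing loop is repeated until convergence.
dv = available_delta_v(stage_m0=[20_000.0, 1_730.0],
                       stage_mf=[3_560.0, 360.0],
                       stage_isp=[300.0, 320.0])
converged = dv >= REQUIRED_DV_LEO
```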

Fig. 8 “Thrust–Take-off Mass” correlation curve

3 Case study

This section applies the methodology proposed in the previous section to a case study referring to a limited set of micro-launchers.

The names of the analysed micro-launchers and of their manufacturers are not mentioned in this article, since it is outside the scope of this work to advertise or discourage one concept with respect to another.

The information available on the four micro-launchers includes general dimensions (overall length, main body diameter), the proposed propulsive strategies (type of propellant per stage and, sometimes, thrust level), the number of stages, the reference mission, and the nominal payload. The preliminarily accessible data are reported in Table 4. As expected, the payload capability of these launchers spans from 100 up to 300 kg.

The reference mission orbits are a sun-synchronous orbit (SSO, 500 km altitude at \(98^{\circ }\) inclination), a polar orbit (700 km altitude at \(90^{\circ }\) inclination), and an equatorial low Earth orbit (LEO, \(0^{\circ }\) inclination). The interest in these orbits is determined by their suitability for Earth observation missions (e.g., the PRISMA mission [28]) and Earth weather monitoring missions (e.g., the Aeolus mission [29]). Such satellites are usually small: many concepts of CubeSat constellations performing Earth observation tasks are in development. The four launchers point towards flexibility: many launches per year, short turnaround times between launches, and different launch sites [30]. At the same time, there is a technology push towards innovative propulsion systems. One launcher adopts an innovative propulsion system based on sub-cooled liquid petroleum gas and liquid oxygen (LPG-LOX) [30] (a green propellant), while another one exploits a hybrid propulsion system (H2O2-HTPB) with re-start and wide throttling capabilities [30].

Five potential European launch sites are of great interest. Besides the already existing spaceport in Kourou, the location of Santa Maria in the Azores (Portugal) seems convenient: for safety reasons, the trajectory progresses towards the south pole over the Atlantic Ocean, while keeping high performance in terms of deliverable payload, as shown by the trajectory simulations. The locations of Andøya (Norway), Esrange (Sweden), and Sutherland (United Kingdom) seem better suited to target SSO missions. To analyse their potential, a first iteration on performance maps for the various spaceports was performed. The performance map results presented in this article are evaluated for the Santa Maria spaceport, whose recorded performance is a good compromise between those of Andøya and Kourou.

Table 4 Known characteristics of the four launchers under analysis

3.1 Trajectory generation and performance maps

By implementing the conceptual design modelling tools illustrated in Sect. 2.6, enough data are available to start the trajectory analysis. The different flight phases and their durations are derived from the database data during the first trajectory estimation. These preliminary guesses are then refined during the various study iterations, Fig. 9. All the preliminary trajectories are studied using ASTOS (Analysis, Simulation and Trajectory Optimisation Software) [27]. The software not only enables the analysis of launcher trajectories but also incorporates a useful optimisation routine. The PoliTO tool reasoning flow starts from the launcher guidelines and the database entries; it then moves to the subroutines where the mass, the dimensions, and the aerodynamics are evaluated, and eventually enters the ASTOS simulation. At the end of the ASTOS trajectory generation and optimisation, the outputs are compared with the expected results. If they are not compliant, another iteration starts, up to convergence.

Fig. 9 Simulation reasoning flowchart

The phases for a two-stage-to-orbit vehicle are: (i) lift-off, (ii) pitch over, (iii) constant pitch, (iv) first-stage burn-out, (v) coast, (vi) second-stage first ignition, (vii) coast, and (viii) second-stage second ignition and insertion into orbit. Likewise, the phases for a three-stage-to-orbit vehicle are: (i) lift-off, (ii) pitch over, (iii) constant pitch, (iv) first-stage burn-out, (v) coast, (vi) second-stage ignition, (vii) second-stage burn-out and third-stage first ignition, (viii) coast, and (ix) third-stage second ignition and insertion into orbit. The polar trajectories exploit a long coast arc for the orbit insertion, Fig. 10.

The primary objective of a launcher is to maximise the amount of payload that it can transport to orbit. Using the ASTOS optimisation routine, a cost function maximising the payload can be defined. The simulation needs the definition of the outside world: the environment is set as a spheroid (equatorial radius of 6378 km, polar radius of 6356 km), and the US Standard Atmosphere 1976 is used. No hydrosphere or wind is considered. For each launcher stage, the overall dimensions, the propellant mass, and the inert mass are defined. The rocket motors are modelled by defining the nozzle area, the vacuum thrust, and the vacuum \(I_{sp}\). The set constraints are the initial position (altitude, latitude, and longitude) and velocity (north, east, and radial components) of the launcher and the final orbit altitude and inclination. An upper limit of 1135 \(W/m^2\) on the heat flux is set as a path constraint, the same value used in the VEGA rocket simulation example in ASTOS [31].

Starting from the data of the preliminary design phase, a set of iterations on throttle settings and propellant mass was performed using ASTOS. At the end of this first set of simulations, the focus shifted towards the maximisation of the payload. The simulations are run for the five different spaceports considering the nominal payloads reported in Table 4. The simulation settings used for each launcher are reported in Table 5. For the last launcher, the propulsion levels of the first and second stages are given in [30]. To properly design the trajectories, the missing information, such as the mass and dimensions of the stages, has been derived using the tools defined in Sect. 2.6.

Fig. 10 Preliminary trajectory of launcher n. 2 from Andøya Spaceport

Table 5 Simulation settings for the four launchers

3.2 Case study trade-off

The four micro-launchers presented in the previous section may be compared and evaluated, using both the available and the generated data, through the trade-off analysis proposed in Sect. 2.

Attributes evaluation For the criteria identified as applicable to the case study, the considered attributes and system features may be evaluated. The resulting values serve as input to the trade-off analysis.

Besides the straightforward quantitative criteria, e.g., the payload mass in kg for the criterion PL-M, it is worth mentioning the evaluation of the qualitative criteria. The qualitative criteria identified in Sect. 2.2 are PL-S (shareability), PL-O (ownership), and PH-P (propellant).

Assuming that the main interest in considering the country ownership of the alternatives is related to the funding process, ESA's Industrial Policy has been considered to evaluate the PL-O criterion; indeed, one of the main elements of the policy is the geographical distribution [32]. The micro-launcher owners' countries and their budget contributions to ESA activities and programmes for 2019 are considered [33]. In the case of collaborations among several countries, the budget contributions of all the countries involved are considered. As for the other scores, the result is normalised such that the sum equals 1.

To score the alternatives with respect to the criteria PL-S and PH-P, the concept's attribute qualitative assessment scale, Fig. 3, is used instead. In particular:

  1. PL-S
    • 9 is attributed if the launcher allows multiple re-ignition (high performance and high potential)

    • 6 is attributed if the launcher uses a propellant suitable for the Reaction and Control Systems (moderate performance, moderate potential)

    • 4 is attributed if the launcher aims at multiple re-ignitions, but no proof has been found (low performance, high potential)

  2. PH-P
    • 9 is attributed if the launcher uses an innovative and green propellant (high performance, high potential)

    • 7 is attributed if the launcher uses an innovative propellant (moderate performance, high potential)

    • 5 is attributed if the launcher uses a non-innovative propellant but already tested and reliable (high performance, low potential)

In this case study, a green propellant is defined as an environment-friendly propellant. It is preferred here over conventional propellants due to the recent political interest in emission reduction; e.g., the LPG propellant may reduce emissions by up to 80% [34]. This preference may change depending on who performs the analysis, i.e., the stakeholder.

Furthermore, for this case study, the complexity criterion, FP-C, is evaluated considering only the number of stages, since no other differences have been identified from the data available on the micro-launcher concepts. The maximum allowable payload volume, PL-V, is instead estimated using the dimensioning tool presented in Sect. 2.6.2. The launch site and orbit access criteria, FP-S and FP-O, are omitted from the case study trade-off, since all the alternatives may be launched from the same launch site, the scenario considered to enhance the comparability of the alternatives.

As for the data acquired and described in the other sections of this article, all the alternatives (launchers) consider as nominal target orbit an SSO (sun-synchronous orbit), an orbit of known commercial and scientific interest. Since all the launchers may conveniently be launched from the same launch site (Santa Maria), to enhance comparability, the launch site and the orbit access (which would have been sub-criteria of the Flight Profile) are excluded from this trade-off, being constant for all the alternatives.

In this case study, as previously mentioned, heritage, number of failures, successful flights, and other programmatic parameters have been excluded due to the early development stage of the micro-launchers under study.

Another FoM that may seem to be missing is the cost estimation of the micro-launchers. The cost would be defined through considerations at the equipment level [9]; however, the development of the launchers under study may be considered too premature for such an analysis. Moreover, the price and its relation to the gross lift-off mass are contemplated in PL-P, which may reflect the cost if a constant profit margin is assumed among the alternatives. Additionally, the dimension criterion, PH-D, takes into account potential disadvantages linked to operational costs.

Prioritisation

The identified criteria are weighted through pairwise comparison: the FoMs are scored with respect to each other with values from 1 to 9 and their reciprocals, 9 meaning "extremely more important" and 1 "of equal importance".

From the pairwise comparison, the prioritisation matrix is derived, Table 6, which may be assessed through the consistency ratio [5].

Table 6 Prioritisation matrix for the case study

For this case study, a consistency ratio of \(\sim 10\%\) is obtained, indicating acceptable consistency of the scores attributed in the pairwise comparison of the criteria. The consistency ratio is obtained through the eigenvector and its maximum eigenvalue [5], as is the priority vector, whose elements are reported in Table 7 for both criteria and sub-criteria.

The alternatives are scored per criterion, and the decision matrix may hence be fully populated, as in Table 8.

The alternatives, i.e., the launchers, are identified with a number from 1 to 4, as in the previous section of this article. The criteria weights are reported in the column next to the relative criterion, expressed by its identification acronym. The total is obtained as the sum-product of the scores per criterion and the criterion weights. The total of each row, i.e., the sum of the scores per criterion of the alternatives, is equal to one, as in the ideal mode of the AHP method [12].

The final ranking is shown in the last row of Table 8, from the 1st to the 4th position.

Sensitivity analysis

To assess the robustness of the trade-off, a sensitivity analysis is performed. It helps to identify the most critical criterion, providing awareness to the decision-makers, i.e., the stakeholders.

Table 7 Weights for the identified criteria
Table 8 Decision matrix

Indeed, the stakeholders are the main driver in the population of the prioritisation matrix derived from the pairwise comparison of the criteria. They may want to revise those judgements in light of the outcomes of the sensitivity analysis.

Considering the aggregate score with respect to the score per criterion, it is possible to calculate the minimum change needed to vary the ranking of the alternatives. The computation is performed per pair of alternatives.

Table 9 Minimum change \(\delta \) (absolute change in criteria weight)

The minimum changes \(\delta \) for the case study are reported in Table 9 for each combination of alternatives (\(A_i-A_j\)) per criterion. Expressed in absolute value, these are the necessary increases or decreases of the weights needed to change the final ranking order.

The smallest values are those which may most easily vary the trade-off results. It may be noticed that the smallest values, in bold, are found for the pairs of alternatives \(A_1-A_3\) and \(A_1-A_4\), positioned respectively at the second–third and second–first positions in the final ranking. This refers to the trade-off result: first (\(A_4\)), second (\(A_1\)), third (\(A_3\)), and fourth (\(A_2\)), as reported in Table 8. The sensitive results of the trade-off are hence the first three positions, excluding the fourth.

Moreover, apart from the \(\delta \) values for the first criterion (PL-M), 0.14 for \(A_1-A_4\), 0.29 for \(A_2-A_4\), and 0.19 for \(A_3-A_4\), which are less than the criterion weight (0.33), all the others exceed the obtained weight W of the respective criterion, Table 7, and hence are changes that may be considered "not feasible", Sect. 2.5 [12].

Table 10 Minimum change \(\delta '\) in percentage (relative change in criteria weight)
Table 11 Feasible minimum change \(\delta '\) in percentage (relative change in criteria weight)
Table 12 Sensitivity coefficients of the criteria for the study case
Fig. 11 The criteria weights and sensitivity graphed in Pareto charts (Tables 12 and 7)

The changes in Table 9 are in absolute value and therefore indicate the increase or decrease of the weight necessary to change the order of the final ranking, without considering the influence already produced on the result by the chosen weight. To take the criterion weight used in the analysis into account, we may compute the relative minimum change values, reported in Table 10.

These relate the absolute minimum change values to the criterion weights. However, only values lower than 100 are considered within the boundaries of the sensitivity analysis, as discussed in Sect. 2.5.

Table 10 can be reprinted as in Table 11.

Only one of the minimum values per criterion is less than 100: the change of \(41\%\) in the weight of the criterion PL-M (in bold in Table 11), affecting the final positions of alternatives 1 and 4, \(A_1-A_4\). In other words, a change of more than \(40\%\) in the weight of the criterion regarding the payload mass would be necessary to switch these positions in the final ranking, making launcher 1 the preferred one instead of launcher 4.

The presence of a feasible relative minimum change for only one criterion may already be considered an indicator of a robust trade-off result [14].

The most critical criterion is the one for which a "relative" minimum change in the criterion weight determines a different final ranking of the alternatives (Sect. 2.5). The change is described as relative since it is obtained with respect to the criterion's assigned weight. The minimum among the relative minimum changes per criterion defines its criticality degree. The sensitivity coefficient is then derived as the reciprocal of the criticality degree. The sensitivity coefficients for the case study are listed in Table 12, the highest sensitivity coefficient being the one referring to the most critical criterion (in bold), i.e., PL-M with a sensitivity coefficient equal to 0.0244.

The most sensitive criterion is the payload mass, PL-M, which, by coincidence, is also the criterion with the highest allocated weight.

Pareto chart The Pareto chart is a valuable tool for understanding the trade-off results. By representing and comparing the criteria weights and the criteria sensitivity, Fig. 11, the stakeholders can increase their awareness of the main criteria influencing the ranking of the alternatives.

Moreover, it shows that the highest weight is not always related to the most sensitive criterion. Indeed, apart from PL-M, which keeps the first position for both weight and sensitivity, the other criteria are placed differently in the two Pareto charts. The sensitivity considers not only the weight but also the criterion scores, i.e., the attribute evaluation. For example, the stakeholder may attribute a high weight to a criterion for which the alternatives obtain very similar scores; in this case, another criterion, even with a lower weight, will probably influence the ranking of the alternatives more. Only the first, most influential \(95\%\) of the data is plotted in the Pareto charts.

4 Conclusions

The proposed trade-off methodology fits the comparison needs of the systems under analysis with sufficient flexibility. The choice of the criteria, the criteria weights, and the scores per criterion are influenced by the abundance and quality of the available and estimated launcher data. They may, and should, be revised as the design of the micro-launchers develops. The identified weights are influenced by the uncertainties in the data used and do not reflect an absolute preference among the criteria. Moreover, a qualitative assessment scale is proposed based on the performance and innovation potential of the system's attributes. For both the aerodynamics and the overall dimensions, data have been inferred from the database or from the PoliTO tools described. These assumptions affect the results of the trade-off but not the methodology per se; in the parameter identification phase of the trade-off methodology, however, the database was of fundamental importance. Above all, the pairwise comparison may change at any iteration or replication of this work depending on the stakeholders' evaluation; single or multiple stakeholders and their different evaluations may be considered. The sensitivity analysis shows a robust ranking of the alternatives for the obtained criteria weights, which are demonstrated to be consistent.