
1 Introduction

In today’s business and operational environments, multiple organizations routinely work collaboratively in pursuit of a common mission, creating a degree of programmatic complexity that is difficult to manage effectively. Success in these distributed environments demands collaborative management that effectively coordinates task execution and risk management activities among all participating groups [1]. To reduce the errors caused by complex problems, particularly in distributed environments, Information Technology (IT) organizations require that the associated risk management activities be carried out successfully [1, 2]. The management of risks is a central issue in the planning and management of any venture, and in the software field risk management is a critical discipline. The risk management process embodies the identification, analysis, planning, tracking, controlling, and communication of risk [3].

The strategy adopted for the risk management of complex systems requires a comprehensive view, so that the uncertainties of these processes can be managed with structured techniques and decisions can be made across different areas of the organization, allowing risks to be identified, prioritized, and mitigated [4,5,6]. In this context, the Analytic Hierarchy Process (AHP), a well-known multicriteria decision method, has been used for risk assessment, forecasting, benchmarking, and resource allocation in several segments, such as manufacturing systems, financial systems, government, and information technology, aiming to reduce subjective judgment errors and increase decision reliability [7,8,9,10]. AHP has also been used to evaluate the risk in software projects [11].

Nevertheless, to the best of our knowledge, this method has not been used to evaluate risk in software programs in which management control is shared by multiple people from different organizations [1]. Therefore, the purpose of this article is to apply AHP modelling combined with sensitivity analysis to the process of evaluating the risk priority of this kind of complex system.

2 Risks Key Drivers

A systemic risk assessment is based on a small set of factors, called drivers, which strongly influence the eventual outcome or result. This set of drivers can be used to assess a program’s current strengths and weaknesses, and forms the basis for the subsequent risk analysis. Risk management research at the Software Engineering Institute (SEI) has cataloged sources of risk in software development, system acquisition, and operational security. The result of this analysis was the development of a common structure, or framework, for classifying a set of drivers that influence a program’s outcome. As listed in Table 1, the driver framework comprises six categories:

Table 1. Risk key drivers categories [1]
Fig. 1. Decision model scheme

3 Analytic Hierarchy Process (AHP)

AHP, proposed by Saaty [12], is a multicriteria selection method applied to the solution of complex problems that may have multiple objectives affecting decision-making [10, 13]. It makes it possible to evaluate qualitative and quantitative criteria simultaneously, according to the judgments and importance attributed to each criterion and alternative by the decision-makers, resulting in a ranking of the alternatives. Generally, the process can be divided into three steps.

1. Decompose the problem into a hierarchy structure. In this step, the problem is decomposed into criteria and subcriteria, defining a decision hierarchy, as depicted in Fig. 1.

2. Construct the pairwise comparison matrix using the Saaty importance scale [13].

The pairwise comparisons are carried out by means of dedicated software or spreadsheet programs. The process is based on a decision matrix A, from which the partial weights of each criterion are calculated as follows:

$$\begin{aligned} \nu _{i}(A_{j}),j=1,\ldots ,n \end{aligned}$$
(1)

where \(\nu _{i}(A_{j})\) is the weight of alternative \(\mathrm{A}_{j}\) relative to criterion i.

In order to interpret and give relative weights to each criterion, it is necessary to normalize the previous comparison matrix. To do so, the following expression is used:

$$\begin{aligned} \sum _{i=1}^{n} \nu _{i}(A_{j})=1, \quad \text {for } j=1,\ldots ,n \end{aligned}$$
(2)

where n is the number of criteria, sub-criteria, or alternatives being compared.
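For illustration, the construction and column normalization of a pairwise comparison matrix (Eqs. 1 and 2) can be sketched in Python as below. The matrix values and the use of the row-average (approximate eigenvector) estimate are illustrative assumptions, not data from this study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria, filled with
# Saaty-scale judgments (1 = equal, 3 = moderate, 5 = strong importance).
# The lower triangle holds the reciprocals of the upper-triangle judgments.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Column-wise normalization (Eq. 2): each entry is divided by its column sum,
# so every column of the normalized matrix adds up to 1.
A_norm = A / A.sum(axis=0)

# Averaging the rows of the normalized matrix gives an approximate
# priority (weight) vector for the criteria.
weights = A_norm.mean(axis=1)
print(np.round(weights, 3))   # approx. [0.648, 0.230, 0.122]
```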

The judgments made by those involved in the evaluation process are checked by means of a consistency calculation. First, it is necessary to obtain the maximum eigenvalue \(\lambda \) of each matrix through the following equation:

$$\begin{aligned} \lambda = (\varSigma _{i \in \kappa } C_{i \kappa } ^{-1} ) / n \end{aligned}$$
(3)

where n is the number of criteria. The consistency index (CI) is calculated by:

$$\begin{aligned} CI= \frac{\lambda - n}{n-1} \end{aligned}$$
(4)

The consistency ratio (CR) is calculated by the following equation:

$$\begin{aligned} CR= \frac{CI}{RI(n)} \end{aligned}$$
(5)

where RI(n) is the random index, a fixed value that depends on the number of criteria, as tabulated in Saaty [12]. If \(\mathrm{CR} \le 0.1\), the degree of consistency is satisfactory, but if \(\mathrm{CR} > 0.1\), serious inconsistencies may exist and the AHP may not yield meaningful results [12].
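A minimal sketch of the consistency check of Eqs. (3)–(5), assuming the comparison matrix `A` and priority vector `weights` from the previous snippet. Note that the eigenvalue estimate below uses the common row-average approximation of \(\lambda \) rather than the exact expression of Eq. (3), and the RI values are Saaty's tabulated random indices.

```python
import numpy as np

def consistency_ratio(A: np.ndarray, weights: np.ndarray) -> float:
    """Consistency ratio (CR) of a pairwise comparison matrix (Eqs. 4-5)."""
    n = A.shape[0]
    if n < 3:
        return 0.0                                # 1x1 and 2x2 are always consistent
    # Approximate the principal eigenvalue: A @ w ~ lambda * w, so averaging
    # the ratios (A @ w) / w over the rows estimates lambda.
    lam = float(np.mean((A @ weights) / weights))
    ci = (lam - n) / (n - 1)                      # consistency index, Eq. (4)
    # Random index RI(n) tabulated by Saaty for matrix orders 1..10.
    ri_table = [0.0, 0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]
    return ci / ri_table[n]                       # consistency ratio, Eq. (5)

# A CR value <= 0.1 is taken as an acceptable degree of inconsistency.
```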

Next, the normalized partial results of each criterion are calculated by the following expression:

$$\begin{aligned} \nu _{i}(A_{j})= \frac{a_{ij} }{ \sum _{i=1}^{n} a_{ij} }, \quad \text {for } j=1,\ldots ,n \end{aligned}$$
(6)
3. Calculate the priority weights of the alternatives according to the pairwise comparison matrices. For that, the priority vector of each alternative i relative to criterion \(\mathrm{C}_{k}\) is calculated with the following expression:

$$\begin{aligned} \nu _{k}(A_{i})= \frac{\sum _{i=1}^{n} \nu _{i}(A_{j})}{n}, \quad \text {for } j=1,\ldots ,n \end{aligned}$$
(7)

After this, the weight of each criterion \(\mathrm{C}_{k}\) and its impact on each of the alternatives is calculated using the following equation:

$$\begin{aligned} W_{i}(C_{j})= \frac{ C_{ij} }{ \sum _{i=1}^{m} C_{ij} }, \quad \text {for } j=1,\ldots ,m \end{aligned}$$
(8)

where m is the number of criteria at the same level. The priority vector is obtained by:

$$\begin{aligned} w_{i}(C_{i})= \frac{ \sum _{j=1}^{m} w(C_{j}) }{ m }, \quad \text {for } i =1,\ldots ,m \end{aligned}$$
(9)

Finally, the evaluation value of each alternative after normalization is obtained by:

$$\begin{aligned} f(A_{i})= \sum _{j} w(C_{j}) \, \nu _{i}(A_{j}), \quad \text {for } i=1,\ldots ,n \end{aligned}$$
(10)

where n is the number of alternatives.
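Putting Eqs. (6)–(10) together, the global score of each alternative is the weighted sum of its local priorities under each criterion. The sketch below illustrates this aggregation under stated assumptions: the judgment matrices are hypothetical and are not the values of Tables 2 and 3.

```python
import numpy as np

def priority_vector(M: np.ndarray) -> np.ndarray:
    """Column-normalize a comparison matrix and average its rows (Eqs. 6-7)."""
    return (M / M.sum(axis=0)).mean(axis=1)

# Hypothetical comparison matrix of three risk drivers (criteria level).
criteria_matrix = np.array([
    [1.0, 2.0, 4.0],
    [1/2, 1.0, 3.0],
    [1/4, 1/3, 1.0],
])
criterion_weights = priority_vector(criteria_matrix)        # cf. Eqs. (8)-(9)

# One hypothetical comparison matrix of the three alternatives per criterion.
alternative_matrices = [
    np.array([[1.0, 3.0, 5.0], [1/3, 1.0, 2.0], [1/5, 1/2, 1.0]]),
    np.array([[1.0, 1/2, 2.0], [2.0, 1.0, 3.0], [1/2, 1/3, 1.0]]),
    np.array([[1.0, 1.0, 1/3], [1.0, 1.0, 1/2], [3.0, 2.0, 1.0]]),
]
# Rows: alternatives, columns: criteria.
local_priorities = np.column_stack([priority_vector(M) for M in alternative_matrices])

# Global score of each alternative: weighted sum of local priorities (Eq. 10).
global_scores = local_priorities @ criterion_weights
ranking = np.argsort(-global_scores)                        # most critical first
print(np.round(global_scores, 4), ranking)
```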

4 Sensitivity Analysis

This approach involves changing the weight values and calculating the new solution. The method, also known as one-at-a-time (OAT), works by incrementally changing one parameter at a time, calculating the new solution, and graphically presenting how the global ranking of alternatives changes. In this method, the global priorities are a linear function of the local contributions [14]. Given this property, the global priorities of the alternatives can be expressed as a linear function of the local weights. Furthermore, if only one weight \(w_{i}\) is changed at a time, the priority \(P_{i}\) of alternative \(A_{i}\) can be expressed as a function of \(w_{i}\) using the following formula:

$$\begin{aligned} P_{i}= \frac{P_{i}'' - P_{i}'}{w_{i}''-w_{i}'} \ (w_{i} - w_{i}') + P_{i}' \end{aligned}$$
(11)

where \(P_{i}''\) and \(P_{i}'\) are the priority values corresponding to \(w_{i}''\) and \(w_{i}'\), respectively.
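A sketch of the one-at-a-time procedure, assuming the `local_priorities` matrix and `criterion_weights` vector from the previous snippet: the weight of one criterion is swept from 0 to 1, the remaining weights are rescaled so the vector still sums to one, and the global scores (which are linear in the varied weight, cf. Eq. 11) and the resulting ranking are recomputed at each step.

```python
import numpy as np

def oat_sensitivity(local_priorities: np.ndarray, weights: np.ndarray,
                    k: int, steps: int = 11):
    """One-at-a-time sensitivity: sweep the weight of criterion k from 0 to 1."""
    results = []
    others = [j for j in range(len(weights)) if j != k]
    for w_k in np.linspace(0.0, 1.0, steps):
        w = weights.astype(float)
        remaining = w[others].sum()
        if remaining > 0:
            w[others] = w[others] / remaining * (1.0 - w_k)   # keep sum(w) == 1
        else:
            w[others] = (1.0 - w_k) / len(others)
        w[k] = w_k
        scores = local_priorities @ w            # global priorities, linear in w_k
        results.append((w_k, scores, np.argsort(-scores)))
    return results

# Example: sweep the first risk driver and check whether the ranking changes.
# for w_k, scores, rank in oat_sensitivity(local_priorities, criterion_weights, 0):
#     print(round(w_k, 2), np.round(scores, 3), rank)
```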

5 Numerical Application

In this section, the proposed decision model was applied to evaluate the risk level of three hypothetical software programs: the implementation of BI and CRM solutions (PROG 1); solution development and embedded systems trading (PROG 2); and cloud computing deployment in the organizations (PROG 3). Table 2 shows the normalized weights assigned to each risk driver, obtained from the expert judgements. The table shows that preparation (c2) is the most critical risk driver. Based on these results, the software programs were analyzed. Table 3 shows the decision matrix and the results obtained for the alternatives.

Table 2. Normalized pairwise comparison matrix
Table 3. Decision matrix

Once the decision matrix was obtained, it was possible to score the software programs and classify them according to their risk priority. The final classification is shown in Fig. 2, giving the following order, from first (most critical) to last: PROG 1 (0.4487), PROG 2 (0.29579), and PROG 3 (0.2721). PROG 1 presents a higher risk level than PROG 2 and PROG 3, which have very close values. Alternatively, the weights for each risk driver assigned to the alternatives can be compared against each other graphically, as shown in Fig. 3.

Fig. 2. Final evaluation of the alternatives.

Fig. 3. Risk driver weights assigned to the alternatives.

5.1 Sensitivity Analysis

Figure 4 illustrates how the alternatives perform with respect to the risk driver “objectives”. Shifting the current value (27%) to 100% produces no change in the ranking. Similarly, shifting the value of this driver to zero does not result in any change in the ranking. This behavior can also be observed for the other risk drivers. Overall, based on the sensitivity analysis, it can be concluded that the final decision is consistent and reliable.

Fig. 4. Numerical incremental sensitivity analysis.

6 Conclusions

In this work, the AHP technique combined with sensitivity analysis was applied to evaluate the risk level of software programs. The results show that PROG 3 is the least critical, as it presents lower values for the main risk drivers.

The sensitivity analysis performed in this study showed that changes in the current weight values do not lead to ranking changes, indicating that the decision process was well conducted and is useful for decision-makers. Moreover, by knowing which risk drivers are the most critical, the decision-maker can focus his or her attention on them more effectively in a given multicriteria decision problem.

Finally, hybrid decision models, such as Fuzzy AHP (F-AHP) and Fuzzy TOPSIS, may also be developed on the basis of this model.