1 Introduction

Since the 1950s, reliability optimization has played an increasingly crucial role owing to its critical requirements in engineering and industrial applications such as the automotive industry, military, aerospace, computer and communication systems, and transportation systems. To remain competitive in practice, the basic goal of reliability engineering is always to improve the reliability of product components or manufacturing systems. Generally, reliability optimization problems can be divided into two classes: integer and mixed-integer reliability problems. In the integer reliability problem, the main task is to allocate the number of redundant components in the system while the reliability of the components is known. In the mixed-integer reliability problem, on the other hand, both the redundancy allocation and the component reliabilities are to be designed together. The reliability–redundancy allocation problem (RRAP) [1] is such a problem, in which system reliability is maximized through the choice of redundancy levels and component reliabilities. In an RRAP, redundancy levels are treated as integer values while component reliabilities are continuous values lying between zero and one. Therefore, the RRAP is a mixed-integer problem with the objective of maximizing system reliability under constraints such as system cost, volume, and weight [2,3,4,5,6,7,8]. Obviously, an excellent reliability design enables a system to run more safely and reliably. Thus, the study of reliability optimization problems has received great attention and has become a hot research topic in the engineering field.

Because of its complexity, the RRAP is considered an NP-hard combinatorial optimization problem, and it has been the subject of much prior work in many different formulations and with various optimization approaches. Due to this computational difficulty, meta-heuristic algorithms (MHAs) have been successfully applied to a wide range of practical optimization problems of this kind. In MHAs, the optimization problem is treated as a black box, meaning that there is no need to compute derivatives of the mathematical model; the problem can be solved purely by observing inputs and outputs. These methods use a random population of individuals in the search space, apply probabilistic rules, and approximate the optimal solution rather than finding the mathematical optimum, which makes them more flexible than deterministic methods for finding good solutions to optimization problems.

Inspired by biological phenomena and human characteristics, several authors have developed a variety of population-based optimization techniques to address complex optimization problems. In terms of the source of inspiration, these can be broadly classified into three categories: evolutionary-based, swarm-based, and human-based algorithms. Evolutionary algorithms (EAs) mimic the mechanism of biological evolution and are initiated with a population of random individuals. The first EA, the genetic algorithm (GA) based on the Darwinian principle of natural evolution, was proposed by Goldberg and Holland [9]. Swarm-based algorithms mimic the social behaviours or collective intelligence of animals in nature, and the main virtue of such algorithms is their collective survival ability. Particle swarm optimization (PSO) [10], which resembles the social behaviour of bird flocking, is considered the first swarm-based algorithm introduced in the literature. Finally, human-based algorithms are inspired by human nature, activities, or perception. A list of some nature-inspired algorithms with their operators is reported in Table 1. Several researchers have applied well-known MHAs to the RRAP, including GA [11,12,13,14], PSO [15,16,17], CS [18, 19], ACO [20], BBO [21], ABC [22], HS [23,24,25,26] and DE [27]. In addition to the above-mentioned studies, some novel MHAs have also been introduced to solve the RRAP, such as the grey wolf optimizer (GWO) [28] and the cuckoo search algorithm [29]. Qiang et al. [30] presented a novel artificial fish swarm algorithm, which mimics the social behaviour of fish swarms, for solving large-scale RRAPs. The above-cited algorithms have been successfully tested on various kinds of real-world optimization problems. However, these algorithms exhibit noticeable deficiencies on some optimization problems. For example, the artificial bee colony (ABC) algorithm, with its simple structure and few parameters, has been applied successfully to many real-world problems, yet it suffers from local optima stagnation and fails to maintain a proper balance between exploration and exploitation. The Harris hawks optimization (HHO) algorithm has good global search ability and extracts optimal parameters with high accuracy; although its exploration is satisfactory, it is weak in exploitation, which leads to slow convergence. The salp swarm algorithm (SSA) has a powerful neighbourhood search ability and can easily be fitted to wide search spaces, which makes it an efficient technique, but it also has major drawbacks on some optimization problems, such as local optima stagnation, poor convergence, and a lack of exploitation. Like ABC, HHO and SSA, the other cited algorithms also have advantages as well as disadvantages in solving optimization problems.

Table 1 Nature-inspired optimization algorithms

According to the “No Free Lunch” (NFL) theorem [52], no single MHA is best fitted to solve all optimization problems. A particular algorithm may give efficient solutions for some optimization problems yet fail to perform well on another set of problems. Thus, no MHA is perfect, and its limitations affect its performance. The NFL theorem therefore provokes researchers to develop new MHAs or upgrade existing methods for solving a wider range of complex optimization problems (COPs). The hybridization of two algorithms is one of the most remarkable strategies for upgrading an existing algorithm and overcoming its shortcomings. For example, Ghavidel et al. [53] introduced a hybrid LJaya-TVAC algorithm, combining the Jaya algorithm with time-varying acceleration coefficients (TVAC) and the learning phase of TLBO, for solving various types of non-linear mixed-integer RRAPs. Juybari et al. [54] presented a penalty-guided fractal search algorithm for RRAPs with a cold-standby strategy, and the same problem was solved by a new enhanced nest cuckoo optimization algorithm introduced by Mellal and Zio [55]. Ouyang et al. [56] developed an improved PSO algorithm to solve the RRAP with a mixed redundancy strategy. Later, Devi and Garg [57] presented a new hybrid HGAPSO algorithm, which inherits the advantages of PSO and GA, for solving the RRAP. An improved GWO algorithm called the random walk gray wolf optimizer (RW-GWO) was presented by Gupta et al. [58] to obtain the optimal redundancies that maximize system reliability under constraints such as volume, weight, and system cost. Recently, Kundu et al. [59] proposed a hybrid salp swarm algorithm with teaching–learning-based optimization (HSSATLBO) for solving the RRAP.

In recent years, neural network techniques have been used to solve various kinds of optimization problems, and this line of work has been developing very rapidly. Sadollah et al. [60] first showed that a neural network structure can be used to design an optimization algorithm and proposed a novel meta-heuristic method called the neural network algorithm (NNA). NNA is one of the newest meta-heuristic algorithms, inspired by artificial neural networks (ANNs) and biological nervous systems; its important characteristics, such as a simple structure, robustness, and scalability, make it an efficient method for solving various kinds of real-world problems. Compared to other existing MHAs, the exclusive structure of ANNs guides NNA towards the global optimum and gives it strong global exploration ability with a low chance of being captured in a local optimum. Despite these strengths, the basic NNA has some noticeable deficiencies on certain optimization problems. Firstly, NNA suffers from local optima stagnation on some large-scale problems, such as high-dimensional large-scale reliability systems and CEC (Congress on Evolutionary Computation) benchmark test problems. Secondly, while its exploration is satisfactory, it fails to maintain a proper balance between exploration and exploitation due to a lack of exploitation ability. Finally, it has a slow convergence tendency and sometimes requires more time to evaluate a new solution for some problems. These shortcomings restrict its applications to some optimization problems, and researchers have applied different search mechanisms and modified operators to upgrade the original NNA. To mention a few: Zhang et al. [61] introduced a novel hybrid algorithm, GNNA, combining GWO and NNA for solving global numerical optimization problems; the core idea of that work is to make full use of the good global search ability of NNA and the fast convergence of GWO. A new hybrid TLNNA algorithm based on TLBO and NNA was developed by Zhang et al. [62] for solving engineering design optimization problems. In 2022, Kundu and Garg [63] proposed a new hybrid TLNNABC algorithm for reliability and engineering design optimization problems, in which the structure of the ABC algorithm is improved by incorporating the features of NNA and TLBO.

Teaching and learning are two common human social behaviours and form an important motivating process in which an individual tries to learn from others. A regular classroom teaching–learning environment is a motivational process that allows students to improve their cognitive levels. Based on this fact, Rao et al. [49] first introduced the TLBO algorithm in 2011. TLBO has a fast convergence speed and good exploitation ability and has been used to solve many real-world optimization problems [64,65,66,67,68]. However, TLBO may converge to local minima on some complex optimization problems. The main advantages of the TLBO algorithm are that it achieves fast convergence without any effort spent tuning initial parameters, and that its computational complexity compares favourably with several existing algorithms.

Motivated by the advantages of NNA, an improved algorithm called INNA is developed in this paper. Search processes of a similar nature may lead to a loss of diversity in the search space and carry the risk of getting trapped in a local optimum, whereas combining the different search techniques of two different algorithms can maximize the capacity to escape from local optima. In the proposed INNA, the basic structure of NNA is renovated by embedding the learner phase of TLBO and a logarithmic spiral search operator. In the search process of INNA, the TLBO phase and the logarithmic spiral operator help to accelerate convergence, whereas the excellent global exploration ability of NNA helps to find a better global optimum, producing efficient and effective results for solving the RRAP. In this study, a population diversity measure for the proposed method is also introduced, and an exploration-exploitation evaluation is performed to investigate the search behaviour of both INNA and NNA. The measurement of exploration and exploitation also helps to explain why the proposed INNA performs better on an optimization problem. The experimental study shows that the performance of INNA is improved over the conventional NNA through population diversity enhancement. Being a new optimization method, there is still much room left for future research. The main contributions of this paper can be stated as follows:

  • An improved neural network algorithm (INNA) is proposed by combining the features of TLBO and a new search operator. The proposed algorithm retains the structure of the basic NNA, which is reconstructed by embedding the search strategy of the learner phase of TLBO and a new logarithmic spiral search operator.

  • The proposed method strikes a proper balance between exploration and exploitation: the basic NNA looks after the exploration part, while the new search operator and the TLBO search strategy increase the exploitation capability of the algorithm.

  • To validate the effectiveness and efficiency of the proposed INNA, it is examined on seven well-known RRAPs, comprising the series system, series-parallel system, complex (bridge) system, overspeed protection system, convex system, mixed series-parallel system, and large-scale systems with dimensions 36, 38, 40, 42 and 50. Section 5 shows that the proposed method gives effective results and superior performance compared to other existing MHAs in terms of best optimal solutions and other measures.

  • To check the statistical significance of the results obtained by the proposed INNA against the existing algorithms, a number of tests have been carried out, including tied ranking, the Wilcoxon signed-rank test, the Kruskal–Wallis test, and multiple comparison tests. These computed results verify that the proposed algorithm produces effective results and outperforms other existing algorithms in terms of best optimal solutions and maximum possible improvement.

The rest of the paper is organized as follows: Sect. 2 briefly describes the basic NNA and TLBO algorithms. Section 3 presents a new logarithmic spiral search operator, the proposed INNA, and the exploration-exploitation measurement. The RRAPs are described in Sect. 4. Section 5 presents the experimental results of the proposed INNA and compares them with several existing algorithms; the results are also validated through statistical tests. Finally, conclusions and future work are drawn in Sect. 6.

2 Background

In this section, the basic NNA and the conventional TLBO algorithms are briefly described.

2.1 Neural network algorithm

NNA [60] is one of the newest MHAs and is inspired by the behaviour of ANNs and biological nervous systems. In most cases, ANNs are used for prediction; their main feature is to receive input data and predict the relationship between the input and target data. Generally, the input values of an ANN are obtained by experiments, calculations, and so on. Thus, an ANN tries to map input data to target data so as to minimize the error between the predicted solutions and the target solutions by iteratively changing its weight values. In contrast, the basic goal of an optimization problem is to find a feasible solution that optimizes the objective function using the strategy of the chosen MHA. Exploiting the unique structure of ANNs for use as an optimization algorithm, NNA considers the current best solution as the target solution (i.e., the temporary optimal solution) and tries to minimize the distance between the target solution and the other solutions in the population (i.e., moving the other predicted solutions towards the target solution). More details can be found in the literature [60]. The main steps of NNA are described as follows:


1. Generate new solutions: In the NNA, a population matrix \(X=(x_{ij})\) \((i=1,2,\ldots ,N_{p}; j=1,2,\ldots ,D)\) of size \(N_{p}\times D\) is randomly generated in the search space, where \(N_{p}\) is the population size and D is the number of dimensions. Each of the \(N_{p}\) solutions is given by Eq. (1).

$$\begin{aligned} x_i=[x_{i1}, x_{i2},\ldots ,x_{iD}] \ \ \ for \ \ \ i=1,2,\ldots ,N_{p} \end{aligned}$$
(1)

Then, for every individual of the population, a weight vector \(w_i = [w_{i1}, w_{i2},\ldots , w_{iN_{p}}]\) is generated satisfying Eq. (2)

$$\begin{aligned} \sum _{j=1}^{N_{p}}w_{ij} = 1, \ \ \ 0 \le w_{ij} \le 1, \ \ \ i=1,2,\ldots ,N_{p} \end{aligned}$$
(2)

After forming the weight matrix, a new solution is generated using Eqs. (3) and (4), where t is the current iteration number. Figure 1 describes the process of population generation in NNA.

$$\begin{aligned} x_\mathrm{new,j}^{t+1}= & {} \sum _{i=1}^{N_{p}} w_{ij}^t \times x_i^t, \ \ j=1,2,\ldots ,N_{p} \end{aligned}$$
(3)
$$\begin{aligned} x_{i}^{t+1}= & {} x_{i}^{t} + x_{\mathrm{new},j}^{t+1} \end{aligned}$$
(4)
Fig. 1 The process of population generation in NNA


2. Weight matrix update: Based on the best known weight value, the so-called “target weight”, the weight matrix is updated according to Eq. (5)

$$\begin{aligned} w_{i,\mathrm{updated}}^{t+1}=w_i^t+2\cdot d \cdot (w_{\mathrm{Target}}^t-w_i^t ) \ \ \ for \ i=1,2,\ldots ,N_{p} \end{aligned}$$
(5)

Here d is a random number between 0 and 1. The best solution obtained in each iteration is considered the target solution \(x_{\mathrm{Target}}^t\), and its associated weight is taken as the target weight vector \(w_{\mathrm{Target}}^t\). The target weight vector \(w_{\mathrm{Target}}^t\) and the target solution \(x_{\mathrm{Target}}^t\) are updated simultaneously: if \(x_{\mathrm{Target}}^t\) is equal to \(x_k^t\) \((k \in [1,N_{p}])\) at iteration t, then \(w_{\mathrm{Target}}^t= w_k^t\).


3. Bias operator: The bias operator in the NNA helps to explore the search space (the exploration process) and plays the same role as the mutation operator in GA. The bias operator prevents the algorithm from converging prematurely by modifying a number of individuals in the population. The modification factor \(\beta\) is initially set to 1 (i.e., a 100 per cent chance of reconstructing all individuals in the population), and its value is reduced at each iteration using the reduction formula given in Eq. (6).

$$\begin{aligned} \beta ^{t+1}= & {} 0.99\times \beta ^t \end{aligned}$$
(6)

In NNA, the bias operator has two parts: the bias of population and the bias of the weight matrix. For the bias of population, an integer \(N_p^\mathrm{Bias}\) is first computed according to Eq. (7)

$$\begin{aligned} N_p^\mathrm{Bias} =\lceil \beta \times D \rceil \end{aligned}$$
(7)

Then a set Pop is built by randomly selecting \(N_p^\mathrm{Bias}\) integers between 1 and D. Let \(lb = (lb_1,lb_2,\ldots , lb_{D})\) and \(ub = (ub_1,ub_2,\ldots , ub_{D})\) represent the lower and upper bounds of the design variables, respectively. Then the bias of population is described by Eq. (8)

$$\begin{aligned} x_{i,Pop(k)}^t = lb_{Pop(k)}+ r \cdot (ub_{Pop(k)}-lb_{Pop(k)}), \quad \text{ for } \ k = 1,2,\ldots ,N_p^\mathrm{Bias} \end{aligned}$$
(8)

where r is a random number between 0 and 1. Similarly, for the bias of the weight matrix, an integer \(N_w^\mathrm{Bias} = \lceil \beta \times N_p \rceil\) is first computed. Then a new set T is formed, consisting of \(N_w^\mathrm{Bias}\) randomly selected integers between 1 and \(N_p\), and the bias of the weight matrix is defined according to Eq. (9)

$$\begin{aligned} w_{i,T(l)}^t \sim U(0,1),\ \ \ \text{ for } \ l = 1,2,\ldots ,N_w^\mathrm{Bias} \end{aligned}$$
(9)

where U(0, 1) denotes a uniform random number between 0 and 1.


4. Transfer operator: In the NNA, the transfer operator moves the current solution to a new position in order to generate better-quality solutions near the current best solution. Improvement is achieved by moving the current new pattern solutions closer to the target solution, as expressed in Eq. (10)

$$\begin{aligned} x_i^{t+1}=x_i^t+2\cdot r \cdot (x_{\mathrm{Target}}^t-x_i^t ) \ \ \ \text{ for } \ \ i=1,2,\ldots ,N_{p} \end{aligned}$$
(10)

where r is a random number between 0 and 1. Based on the above descriptions, the pseudo-code of NNA is presented in Fig. 2.

Fig. 2 Pseudo-code for NNA
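
To complement the pseudo-code, the following is a minimal Python sketch of the NNA loop described above, for minimizing a generic objective f over the box [lb, ub]. All identifiers are illustrative rather than taken from the reference implementation, the bias of the weight matrix is omitted for brevity, and keeping the weights non-negative before re-normalization is a sketch choice:

```python
import numpy as np

def nna_minimize(f, lb, ub, n_pop=50, max_iter=500, seed=0):
    """Minimal sketch of NNA (Eqs. (1)-(10)) minimizing f over the box [lb, ub]."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    pop = lb + rng.random((n_pop, D)) * (ub - lb)                 # Eq. (1)
    W = rng.random((n_pop, n_pop))
    W /= W.sum(axis=0, keepdims=True)                             # Eq. (2): each weight vector sums to 1
    fit = np.array([f(x) for x in pop])
    b = fit.argmin()
    x_tgt, w_tgt, f_tgt = pop[b].copy(), W[:, b].copy(), fit[b]
    beta = 1.0                                                    # modification factor of the bias operator
    for t in range(max_iter):
        pop = pop + W.T @ pop                                     # Eqs. (3)-(4): x_new,j = sum_i w_ij * x_i
        W = np.abs(W + 2 * rng.random((n_pop, n_pop)) * (w_tgt[:, None] - W))  # Eq. (5)
        W /= W.sum(axis=0, keepdims=True)                         # restore Eq. (2) (sketch choice)
        for i in range(n_pop):
            if rng.random() < beta:                               # bias of population, Eqs. (7)-(8)
                idx = rng.choice(D, size=int(np.ceil(beta * D)), replace=False)
                pop[i, idx] = lb[idx] + rng.random(idx.size) * (ub[idx] - lb[idx])
            else:                                                 # transfer operator, Eq. (10)
                pop[i] += 2 * rng.random(D) * (x_tgt - pop[i])
        pop = np.clip(pop, lb, ub)
        beta *= 0.99                                              # Eq. (6)
        fit = np.array([f(x) for x in pop])
        b = fit.argmin()
        if fit[b] < f_tgt:                                        # greedy update of the target
            x_tgt, w_tgt, f_tgt = pop[b].copy(), W[:, b].copy(), fit[b]
    return x_tgt, f_tgt
```

For example, `nna_minimize(lambda x: float(np.sum(x**2)), [-10]*5, [10]*5)` converges towards the origin of the sphere function.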

2.2 Teaching–learning-based optimization (TLBO)

Rao et al. [49] proposed a population-based algorithm called TLBO based on the conventional teaching–learning process of a classroom. In TLBO, a group of learners constitutes the population, and the various subjects taught to the learners represent the different design variables. The fitness value indicates a student's grade after learning, and the student with the best fitness value is regarded as the teacher. The algorithm comprises two basic modes of learning: (1) through the teacher (known as the teacher phase) and (2) through interaction with the other learners (known as the learner phase). The working procedure of the TLBO algorithm is explained below.

2.2.1 Teacher phase

In the teacher phase, assume that at any iteration t, the number of subjects or courses offered to the learners is D and \(N_p\) denotes the population size (i.e., the number of learners). In this phase, the basic intention of the teacher is to transfer knowledge to the students and to improve the average result of the class. Here, \(\mathrm{Mean}_j(t) = M_j^t\) denotes the mean result of the learners in the j-th subject (\(j = 1,2,\ldots , D\)) at generation t, and the class mean vector is given by Eq. (11).

$$\begin{aligned} \mathrm{Mean}(t)=[ M_1^t,M_2^t,\ldots ,M_D^t ] \end{aligned}$$
(11)

Let \(X_\mathrm{Teacher}(t)\) denote the learner with the best objective function value at iteration t, who is regarded as the teacher. The teacher tries his/her best to increase the knowledge of each student in the class, but learners gain knowledge according to their talent and the quality of the teaching. The difference vector between the teacher and the average result of the students is then calculated by Eq. (12).

$$\begin{aligned} G_{i,j}^{\mathrm{Mean}}(t)=c \times [X_\mathrm{Teacher}(t)-T_F \cdot \mathrm{Mean}_j(t)] \end{aligned}$$
(12)

where c is a random number lying between 0 and 1 and \(T_F\) denotes the teaching factor, whose value can be 1 or 2. Based on \(G_{i,j}^{\mathrm{Mean}}(t)\), the existing solutions of the population are updated by Eq. (13)

$$\begin{aligned} X_{i,j}^\mathrm{new}(t)=X_{i,j}(t)+G_{i,j}^{\mathrm{Mean}}(t) \end{aligned}$$
(13)

The new solution \(X_{i,j}^\mathrm{new}(t)\) in generation t is accepted if it is found to be better than the previous one.

2.2.2 Learner phase

In addition to learning from the teacher, interaction with other students is also an effective way for learners to enhance their knowledge. A learner can gain new information from other learners who have more knowledge. In this phase, a student \(X_p\) randomly selects a classmate \(X_q\) (\(\ne X_p\)) to obtain more knowledge. If \(X_q\) performs better than \(X_p\), then \(X_p\) moves towards \(X_q\); otherwise, it moves away from \(X_q\). This process is described by Eqs. (14) and (15):

$$\begin{aligned} X_{p,j}^\mathrm{new}(t) = X_{p,j}(t)+ m \times (X_{q,j}(t)-X_{p,j}(t)), \quad \text{ if } \ f(X_{q}(t)) < f(X_{p}(t)) \end{aligned}$$
(14)
$$\begin{aligned} X_{p,j}^\mathrm{new}(t) = X_{p,j}(t)+ m \times (X_{p,j}(t)-X_{q,j}(t)), \quad \text{ if } \ f(X_{q}(t)) > f(X_{p}(t)) \end{aligned}$$
(15)

where m is a random number between 0 and 1, and \(f(X_{p}(t))\) and \(f(X_{q}(t))\) are the fitness values of \(X_{p}(t)\) and \(X_{q}(t)\), respectively. The pseudo-code of the basic TLBO is given in Fig. 3.

Fig. 3 Pseudo-code for TLBO
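
Similarly, one TLBO generation can be sketched in a few lines of Python (for a minimization problem; the in-place greedy acceptance mirrors the description above, and the function name tlbo_step is our own):

```python
import numpy as np

def tlbo_step(pop, fit, f, rng):
    """One TLBO generation: teacher phase (Eqs. (11)-(13)), then learner phase (Eqs. (14)-(15))."""
    n_pop, D = pop.shape
    # Teacher phase: move every learner towards the teacher, away from the scaled class mean
    teacher = pop[fit.argmin()]
    mean = pop.mean(axis=0)                                   # Eq. (11)
    for i in range(n_pop):
        TF = rng.integers(1, 3)                               # teaching factor, 1 or 2
        cand = pop[i] + rng.random(D) * (teacher - TF * mean)   # Eqs. (12)-(13)
        fc = f(cand)
        if fc < fit[i]:                                       # greedy acceptance
            pop[i], fit[i] = cand, fc
    # Learner phase: learn from a randomly chosen classmate
    for p in range(n_pop):
        q = rng.choice([j for j in range(n_pop) if j != p])
        if fit[q] < fit[p]:                                   # Eq. (14): move towards X_q
            cand = pop[p] + rng.random(D) * (pop[q] - pop[p])
        else:                                                 # Eq. (15): move away from X_q
            cand = pop[p] + rng.random(D) * (pop[p] - pop[q])
        fc = f(cand)
        if fc < fit[p]:
            pop[p], fit[p] = cand, fc
    return pop, fit
```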

2.3 Constraint handling technique

The standard form of the constrained optimization problem is formulated as follows:

$$\begin{aligned}&\text{ Optimize } \ f(x) \nonumber \\&\text{ subject } \text{ to } \ g_{i}(x) \le 0, \quad i = 1,2,\ldots ,M \nonumber \\&x_{k}^{L} \le x_{k} \le x_{k}^{U}, \quad k = 1,2,\ldots ,n \end{aligned}$$
(16)

where f(x) is the objective function, the \(g_i(x)\) are the constraint functions, and \(x=[x_1,x_2,\ldots ,x_n]\) is the vector of decision variables. \(x_k^L\) and \(x_k^U\) are the lower and upper limits, respectively, of each \(x_k\) \((k=1,2,\ldots ,n)\) defined in the search space S.

It is often very difficult for an MHA to obtain the optimal solution of an optimization problem in the presence of constraints, and some of the new solutions generated by these methods may be infeasible, violating one or more constraints. To overcome this, a penalty function method is introduced, which converts a constrained optimization problem into an unconstrained one so that a feasible global solution can be reached in reasonable time. The basic goal of a penalty function is to penalize infeasible solutions. In this study, an exterior penalty function method is used, penalizing infeasible solutions through the objective value. With this penalty function, the constrained maximization problem f(x) is converted into the minimization problem F(x) as follows

$$\begin{aligned} \text{ minimize } \ \ F(x)= -f(x)+ \lambda \sum _{j\in S} \max (0, g_j(x)) \end{aligned}$$

Here, \(\lambda\) represents the penalty coefficient.
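
As a concrete illustration, this exterior penalty can be implemented as a thin wrapper around the objective. This is a sketch: the default penalty coefficient is only illustrative and must be tuned per problem.

```python
def penalized(f, constraints, lam=1e6):
    """Turn 'maximize f(x) s.t. g_j(x) <= 0' into an unconstrained minimization F(x)."""
    def F(x):
        violation = sum(max(0.0, g(x)) for g in constraints)   # sum of constraint violations
        return -f(x) + lam * violation                         # maximizing f == minimizing -f
    return F
```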

2.4 Exploration and exploitation measurement

In this study, an in-depth empirical analysis is performed to examine the search behaviour of the proposed INNA in terms of diversity. Through diversity measurement, it is possible to quantify the explorative and exploitative capabilities of the algorithm. In the exploration phase, the differences among the values of each of the D dimensions within the population expand, so the swarm individuals are scattered across the search space. In the exploitation phase, these differences shrink and the swarm individuals cluster in a dense area. These two concepts are ubiquitous in all MHAs. When searching for the globally optimal location, the exploration phase maximizes the chance of visiting unseen neighbourhoods of the search space; conversely, through exploitation an algorithm can converge to a neighbourhood with a high probability of containing the global optimum. Achieving a proper balance between these two abilities is a trade-off problem. For a better illustration of the exploration and exploitation concept, see Fig. 4. Following Hussain [69], population diversity is measured mathematically using Eqs. (17) and (18):

$$\begin{aligned} Div_j(t)= & {} \frac{\sum _{i=1}^{N_{p}} \left| med(x_j(t))-x_{i,j}(t)\right| }{N_{p}} \end{aligned}$$
(17)
$$\begin{aligned} Div(t)= & {} \frac{\sum _{j=1}^D Div_j(t)}{D} \end{aligned}$$
(18)

where \(x_{i,j}(t)\) denotes the j-th dimension of the i-th individual of the \(N_{p}\)-member population at iteration t, and \(med(x_j(t))\) is the median of dimension j. \(Div_j(t)\) and Div(t) indicate the diversity in the j-th dimension and the average diversity over all dimensions, respectively. After determining the population diversity Div(t) for all iterations, the exploration and exploitation percentage ratios during the search process can be calculated using Eqs. (19) and (20), respectively:

$$\begin{aligned} Expl\,(\%)= & {} \frac{Div(t)}{Div_\mathrm{max}(t)} \times 100 \end{aligned}$$
(19)
$$\begin{aligned} Expt\,(\%)= & {} \frac{|Div(t) - Div_\mathrm{max}(t)|}{Div_\mathrm{max}(t)}\times 100 \end{aligned}$$
(20)

where Expl(%) and Expt(%) denote the exploration and exploitation percentages, respectively, for iteration t, and \(Div_\mathrm{max}(t)\) is the maximum population diversity observed over all T iterations.
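
Equations (17)-(20) translate directly into code. The sketch below assumes the per-iteration populations have been recorded as a list of \(N_p \times D\) arrays (the history argument is our own convention):

```python
import numpy as np

def diversity(pop):
    """Dimension-wise diversity of one population snapshot (Eqs. (17)-(18))."""
    med = np.median(pop, axis=0)               # med(x_j(t)) for every dimension j
    return float(np.mean(np.abs(med - pop)))   # average over individuals, then dimensions

def exploration_exploitation(history):
    """Per-iteration exploration/exploitation percentages (Eqs. (19)-(20))."""
    div = np.array([diversity(p) for p in history])
    div_max = div.max()                        # maximum diversity over all iterations
    expl = 100.0 * div / div_max
    expt = 100.0 * np.abs(div - div_max) / div_max
    return expl, expt
```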

Fig. 4 Candidate population representation for exploration-exploitation

3 Proposed method: INNA

This section is divided into three subsections. First, a new logarithmic spiral search operator is introduced in Sect. 3.1. The design structure and implementation of INNA are described in Sect. 3.2. Finally, the exploration and exploitation measurement of the proposed INNA is discussed.

3.1 New search operator

In the original NNA, all individuals follow the same movement pattern, which leads some individuals to drift away from the promising region of the search space and consequently lowers the convergence rate of NNA. We therefore propose a new logarithmic spiral search operator in our algorithm (INNA) to overcome this issue; it is expressed by Eq. (21).

$$\begin{aligned} X_i^{t+1}= |X_i^t-X_{\mathrm{Target}}^t| \cdot e^{\beta \theta } \cdot \cos (2 \pi \theta ) + X_{\mathrm{Target}}^t \end{aligned}$$
(21)

where \(\beta\) is a constant, taken here as 1, that controls the shape of the spiral. In addition, \(\theta = 2(1-\frac{t}{T})-1\) is a parameter that decreases linearly from 1 to −1 as the iterations proceed, where t and T indicate the current iteration and the maximum number of iterations, respectively. The updated positions of the solutions in each iteration under the logarithmic spiral search model are shown in Fig. 5; it is clear from that figure that as the value of \(\theta\) switches from 1 to −1, the current solutions move markedly closer to the target solution. This highlights the exploitation capability of the algorithm and further enhances the convergence speed.

Fig. 5 Illustration of the logarithmic spiral search operator
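
In code, Eq. (21) amounts to a single vectorized expression per individual. The following sketch (with our own function name spiral_move) applies the operator componentwise, taking the distance term as a componentwise absolute difference:

```python
import numpy as np

def spiral_move(x, x_target, t, T, beta=1.0):
    """Logarithmic spiral search operator of Eq. (21); beta controls the spiral shape."""
    theta = 2.0 * (1.0 - t / T) - 1.0          # decreases linearly from 1 to -1
    return np.abs(x - x_target) * np.exp(beta * theta) * np.cos(2.0 * np.pi * theta) + x_target
```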

3.2 The proposed INNA

The detailed framework of the proposed INNA is presented in this section. The traditional NNA is a simple and effective swarm optimization technique that has been used to solve various kinds of practical optimization problems. Benefiting from the unique structure of artificial neural networks, NNA has great exploration ability. Despite its strong global search ability, however, NNA has a noticeable deficiency, namely its slow convergence speed, as it fails to maintain a convenient balance between exploitation and exploration; these shortcomings also limit its applicability to some optimization problems. To remove these insufficiencies, the solution-updating phase is enhanced by reconstructing the basic structure of NNA. In this modification, the search mechanism of the learner phase of TLBO (Eqs. (14) and (15)) and the new logarithmic spiral search operator (Eq. (21)) are embedded into the main structure of NNA. Further, two predetermined parameters \(CP_1\) and \(CP_2\), lying in the range [0,1], are introduced in the proposed INNA to control the probability of selecting the above search strategies. The TLBO algorithm, with its fast convergence speed and computational complexity better than that of several existing algorithms, is an exceptional search algorithm. Thus, the inclusion of TLBO and the new search operator adds more flexibility to NNA, and the exploration and exploitation abilities of the algorithm are improved accordingly. In the search process of INNA, TLBO and the logarithmic spiral search operator focus on the local search while NNA emphasizes the global search, which helps to maintain a convenient balance between exploration and exploitation.

The process of implementation of the proposed INNA is described as follows:

  1. Step 1:

    Initialize the required parameters first, such as the maximum number of iterations (T), the population size (\(N_{p}\)), the lower bound (\({\textbf {lb}}\)) and upper bound (\({\textbf {ub}}\)) of the decision variables, the dimension (D) and the fitness function \(f(\cdot )\) of the problem. Additionally, initialize the control parameters \(CP_1\) and \(CP_2\) with values 0.3 each. The modification factor \(\beta\) and the iteration counter are initially set to 1 and 0, respectively.

  2. Step 2:

    Based on the initialized parameters, a population matrix X of size \(N_{p} \times D\) and a weight matrix W of size \(N_{p} \times N_p\) are generated randomly, as described in Eqs. (22) and (23).

    $$\begin{aligned} X = \left( \begin{array}{cccc} x_{11} & x_{12} & \ldots & x_{1D} \\ x_{21} & x_{22} & \ldots & x_{2D}\\ \vdots & \vdots & \ddots & \vdots \\ x_{N_p1} & x_{N_p2} & \ldots & x_{N_pD} \end{array}\right) \end{aligned}$$
    (22)
    $$\begin{aligned} W = \left( \begin{array}{cccc} w_{11} & w_{12} & \ldots & w_{1N_p} \\ w_{21} & w_{22} & \ldots & w_{2N_p}\\ \vdots & \vdots & \ddots & \vdots \\ w_{N_p1} & w_{N_p2} & \ldots & w_{N_pN_p} \end{array}\right) \end{aligned}$$
    (23)
  3. Step 3:

    The fitness value of each individual in the population is evaluated, and the best one, i.e., the optimal solution along with its optimal weight vector, is selected.

  4. Step 4:

    In this step, a new solution is generated and the weight matrix is updated through Eqs. (3)–(5).

  5. Step 5:

    A random number is generated, and if it is less than \(\beta ^t\), the bias operator is performed to update the current solution. Otherwise, depending on the control parameters \(CP_1\) and \(CP_2\), the current solution is updated either by the learner-phase search operator of TLBO, by the logarithmic spiral search operator, or by the transfer operator.

  6. Step 6:

    In this step, \(\beta ^t\) is updated and the current population is evaluated. Then, the greedy selection mechanism is performed and the optimal solution \(x_{\mathrm{Target}}^{t+1}\) and the optimal weight \(w_{\mathrm{Target}}^{t+1}\) are selected.

  7. Step 7:

    Go to step 3 if the termination criterion is not satisfied, otherwise stop the process.

The pseudo-code and the detailed flowchart of the proposed INNA are shown in Figs. 6 and 7, respectively.

Fig. 6 Pseudo-code for the proposed INNA

Fig. 7 Flowchart of the proposed INNA
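
The branching in Step 5 is the core of INNA. The sketch below shows one plausible reading of that selection logic in Python; the exact nesting of the random gates for \(CP_1\) and \(CP_2\) is our interpretation of the description above, and all names are illustrative:

```python
import numpy as np

def inna_update_one(x, pop, fit, i, x_target, beta_t, t, T, lb, ub, rng, CP1=0.3, CP2=0.3):
    """Sketch of Step 5: pick one search strategy for individual i."""
    D = x.size
    if rng.random() < beta_t:
        # bias operator: reinitialize ceil(beta*D) randomly chosen dimensions (Eq. (8))
        idx = rng.choice(D, size=int(np.ceil(beta_t * D)), replace=False)
        x = x.copy()
        x[idx] = lb[idx] + rng.random(idx.size) * (ub[idx] - lb[idx])
    elif rng.random() < CP1:
        # learner phase of TLBO (Eqs. (14)-(15)) against a random classmate q
        q = rng.choice([j for j in range(len(pop)) if j != i])
        step = (pop[q] - x) if fit[q] < fit[i] else (x - pop[q])
        x = x + rng.random(D) * step
    elif rng.random() < CP2:
        # logarithmic spiral search operator (Eq. (21)), with spiral constant beta = 1
        theta = 2.0 * (1.0 - t / T) - 1.0
        x = np.abs(x - x_target) * np.exp(theta) * np.cos(2.0 * np.pi * theta) + x_target
    else:
        # transfer operator of the basic NNA (Eq. (10))
        x = x + 2.0 * rng.random(D) * (x_target - x)
    return np.clip(x, lb, ub)
```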

The computational complexity of the proposed INNA is evaluated according to the worst case, using Big-O notation as the common terminology. The complexity depends on the population size (\(N_P\)), the number of dimensions (D) and the maximum number of iterations (T). In the initialization phase, the computational complexity of INNA is \(O(N_P)\) for initializing the population of \(N_P\) individuals. After that, the fitness of each individual is evaluated in the main loop of the algorithm, so the computational complexity of this stage is \(O(T \cdot N_P)\). Finally, the current position of each search agent is updated via the different search strategies in the population update stage, so the computational complexity of this stage is \(O(T \cdot N_P \cdot D)\). Overall, the computational complexity of INNA is calculated as follows:

$$O(\mathrm{INNA}) = O(\text{initialization})+ O(\text{fitness evaluations})+ O(\text{population update})$$

i.e., \(O(\mathrm{INNA}) = O(N_P) + O (T \cdot N_P) + O(T \cdot N_P \cdot D) = O(T \cdot N_P \cdot D)\).

4 Problem formulation

In the present-day scenario, the demand for highly reliable products and equipment is increasing day by day. Therefore, in recent years, the need for reliability analysis to evaluate the performance of products, equipment, and engineering systems has also been increasing. Reliability optimization addresses these issues and is capable of finding a high-quality system design that performs efficiently and safely over a given period. In this section, RRAPs are considered to explore the performance of the INNA algorithm. Before introducing the RRAPs, we state the following assumptions:

4.1 Assumptions

  1. (1)

    The failure of a component in any subsystem is independent of the failures of the other components; i.e., the failure of a single component does not damage the entire system.

  2. (2)

    The components and the system have only two states: operating or failed.

  3. (3)

    All redundancies are active redundancies, and failed components are not repaired.

  4. (4)

    Component attributes such as reliability, weight, cost, and volume are fixed.

The system consists of m subsystems, and for each subsystem i, the least number of active components required to function is \({n_i}^\mathrm{min}\), which constitutes the pre-specified lower bound of the redundancy level for that subsystem. The upper bound of the redundancy level for the i-th subsystem is denoted by \({n_i}^\mathrm{max}\); it is either supplied in advance or can be derived from the system constraints, if these are linear. The goal of the problem is to maximize system reliability by computing the number of redundant components \(n_i\) and the component reliability \(r_i\) in each subsystem while satisfying the given resource constraints. The general form of the reliability–redundancy problem can be formulated as the mixed-integer non-linear programming problem given by Eq. (24).

$$\begin{aligned}&\max \ R_{S}(r_{1},r_{2},\ldots ,r_{m};n_{1},n_{2},\ldots ,n_{m}) \nonumber \\&s.t., \ \ g_{k}(n) = \sum _{i=1}^{m} g_{ki}(n) \le b_{k}, \quad \text{ for } \ k = 1,2,\ldots ,l \nonumber \\&0.5 \le r_{i} \le 1, \quad n_{i}^{\mathrm{min}} \le n_{i} \le n_{i}^{\mathrm{max}}, \quad i = 1,2,\ldots ,m \nonumber \\&n_{i}^{\mathrm{min}},\, n_{i}^{\mathrm{max}},\, n_{i} \in \mathbf {Z}^{+}, \quad i = 1,2,\ldots ,m \end{aligned}$$
(24)

In this study, seven benchmark RRAPs are considered: the series system, series-parallel system, complex (bridge) system, overspeed protection system, convex quadratic system, mixed series-parallel system, and large-scale systems with dimensions 36, 38, 40, 42 and 50. All of the above problems maximize the system reliability under different non-linear constraints and can be stated as mixed-integer non-linear problems. For each problem, both the component reliabilities and the redundancy allocation are to be determined simultaneously; the problems are formulated as follows.


P1. Series System [Fig. 8a] The series system is a nonlinear mixed-integer programming problem, which has been used in [14, 17, 18, 22, 70,71,72,73]. The problem formulation is given as follows:

$$\begin{aligned} \max R_S(r,n)= & {} \prod _{i=1}^{5} R_i\nonumber \\ s.t,\ \ g_1(n)= & {} \sum _{i=1}^{5} {v_i}{n_i^2}-V \le 0, \end{aligned}$$
(25)
$$\begin{aligned} g_2(n)= & {} \sum _{i=1}^{5} {\alpha _i \left( \frac{-1000}{\ln (r_i)}\right) ^{\beta _i}}\left[ n_i + \exp \left( \frac{n_i}{4}\right) \right] -C \le 0, \end{aligned}$$
(26)
$$\begin{aligned} g_3(n)= & {} \sum _{i=1}^{5} w_i n_i \exp \left( \frac{n_i}{4}\right) -W \le 0, \nonumber \\ 0.5 \le r_i \le 1,&1 \le n_i \le 5, \ \ n_i\in \mathbf {Z^+}, \ \ i = 1,2,\ldots ,5 \end{aligned}$$
(27)

The parameters \(\alpha _i\) and \(\beta _i\) are physical characteristics of the system components. Constraints \(g_1 (n)\), \(g_2 (n)\), and \(g_3 (n)\) represent the volume, cost and weight constraints, respectively. The coefficients of the series system are given in the literature [18] and in Table 2.
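
Combined with the penalty wrapper sketched in Sect. 2.3, P1 can be coded as below. This is a sketch: the subsystem reliability is taken as the standard active-redundancy form \(R_i = 1-(1-r_i)^{n_i}\), and the coefficient vectors (alpha, beta_c, v, w) and limits (V, C, Wmax) stand for the Table 2 values, which are not reproduced here.

```python
import numpy as np

def series_reliability(x, m=5):
    """Objective of P1: x = [r_1..r_m, n_1..n_m], R_S = prod_i (1 - (1 - r_i)^n_i)."""
    r, n = x[:m], np.round(x[m:])            # redundancies handled as reals, then rounded
    return float(np.prod(1.0 - (1.0 - r) ** n))

def series_constraints(x, alpha, beta_c, v, w, V, C, Wmax, m=5):
    """g_1, g_2, g_3 of Eqs. (25)-(27); feasibility requires each value <= 0."""
    r, n = x[:m], np.round(x[m:])
    g1 = np.sum(v * n**2) - V                                    # volume, Eq. (25)
    g2 = np.sum(alpha * (-1000.0 / np.log(r)) ** beta_c
                * (n + np.exp(n / 4.0))) - C                     # cost, Eq. (26)
    g3 = np.sum(w * n * np.exp(n / 4.0)) - Wmax                  # weight, Eq. (27)
    return [float(g1), float(g2), float(g3)]
```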

Fig. 8 Layout of the series, series-parallel, bridge and overspeed protection systems

Table 2 Values of parameters used in the literature

P2. Series-Parallel System [Fig. 8b] The series-parallel system has been studied in many recent publications, such as [14, 17,18,19, 21, 22, 70,71,72,73,74,75,76,77,78,79]. The mathematical formulation is as follows:

$$\begin{aligned} \max R_S(r,n) =1-(1-R_1 R_2)[1-(1-(1-R_3)(1-R_4)) R_5] \end{aligned}$$

subject to the same constraints given by Eqs. (25), (26) and (27), respectively, with \(0.5 \le r_i \le 1, \ \ 1 \le n_i \le 5, \ \ n_i\in \mathbf {Z^+}, \ \ i = 1,2,\ldots ,5.\) The coefficients of the series-parallel system are given in the literature [18] and in Table 2.


P3. Complex (bridge) system [Fig. 8c] The complex (bridge) system consists of five subsystems; it is a classical reliability-redundancy problem and has been investigated in [14, 17, 18, 21, 25, 70,71,72, 74, 77, 79, 80]. The formulation of the complex (bridge) system is as follows:

$$\begin{aligned} \max R_S(r,n)= \,& {} R_1 R_2+R_3 R_4+R_1 R_4 R_5+R_2 R_3 R_5 -R_1 R_2 R_3 R_4-R_1 R_2 R_3 R_5 \\&\qquad -R_1 R_2 R_4 R_5- R_2 R_3 R_4 R_5+2R_1 R_2 R_3 R_4 R_5 \end{aligned}$$

subject to the same constraints given by Eqs. (25), (26) and (27), respectively, with \(0.5 \le r_i \le 1, \ \ 1 \le n_i \le 5, \ \ n_i\in \mathbf {Z^+}, \ \ i = 1,2,\ldots ,5.\) The coefficients of the complex system are given in the literature [18] and in Table 2.


P4. Overspeed protection system for a gas turbine [Fig. 8d] The fourth problem considered is the RRAP of an overspeed protection system for a gas turbine. Overspeed detection is provided continuously by electrical and mechanical systems. When an overspeed occurs, the fuel supply must be cut off, and thus four control valves (V1–V4) must close. The control system is modelled as a 4-stage series system. This problem has been considered in [14, 17, 18, 21, 71, 72, 76, 77, 79, 80]. The reliability problem is formulated as follows:

$$\begin{aligned}&\max R_S(r,n) = \prod _{i=1}^{4} R_i \\&s.t.,\ \ g_1(n)=\sum _{i=1}^{4} {v_i}{n_i^2}-V \le 0,\\&g_2(n)=\sum _{i=1}^{4} {\alpha _i \left( \frac{-1000}{\ln (r_i)}\right) ^{\beta _i}}\left[ n_i + \exp \left( \frac{n_i}{4}\right) \right] -C \le 0,\\&g_3(n)=\sum _{i=1}^{4} w_i n_i \exp \left( \frac{n_i}{4}\right) -W \le 0, \\&0.5 \le r_i \le 1, \quad 1 \le n_i \le 5, \quad n_i\in \mathbf {Z^+}, \quad i = 1,2,\ldots ,4 \end{aligned}$$

The coefficients of the overspeed protection system are shown in the literature [18] and Table 2.


P5. Convex quadratic reliability problem This problem is an integer programming problem with convex quadratic constraints, which has been investigated in [14, 77, 81, 82]. The detailed mathematical formulation is as follows:

$$\begin{aligned} \max R_S(r,n)= & {} \prod _{i=1}^{10}(1-(1-r_i)^{n_i}) \\ s.t,\ \ g_j(n)= & {} \prod _{i=1}^{10} (a_{ji}{n_i}^2+C_{ji}n_i) \le b_j \\ n_i \in [1,6],&i = 1,2,\ldots ,10. \ \ j = 1,2,3,4 \end{aligned}$$

The parameters \(r_i\), \(a_{ji}\) and \(C_{ji}\) are generated from uniform distributions on [0.80, 0.99], [0, 10] and [0, 10], respectively. A randomly generated set of values of these coefficients is given as follows: \(r_i\) = [0.81, 0.93, 0.92, 0.96, 0.99, 0.89, 0.85, 0.83, 0.94, 0.92];

\(b_j\) = (2.0 \(\times 10^{13}\), 3.1\(\times 10^{12}\), 5.7\(\times 10^{13}\), 9.3\(\times 10^{12}\));

a = \(\left( \begin{array}{cccccccccc} 2 & 7 & 3 & 0 & 5 & 6 & 9 & 4 & 8 & 1 \\ 4 & 9 & 2 & 7 & 1 & 0 & 8 & 3 & 5 & 6 \\ 5 & 1 & 7 & 4 & 3 & 6 & 0 & 9 & 8 & 2 \\ 8 & 3 & 5 & 6 & 9 & 7 & 2 & 4 & 0 & 1 \end{array} \right)\); C = \(\left( \begin{array}{cccccccccc} 7 & 1 & 4 & 6 & 8 & 2 & 5 & 9 & 3 & 3 \\ 4 & 6 & 5 & 7 & 2 & 6 & 9 & 1 & 0 & 8 \\ 1 & 10 & 3 & 5 & 4 & 7 & 8 & 9 & 4 & 6 \\ 2 & 3 & 2 & 5 & 7 & 8 & 6 & 10 & 9 & 1 \end{array} \right)\)


P6. Mixed series-parallel system The mixed series-parallel system is studied in [14, 77, 81, 82] and formulated as follows.

$$\begin{aligned} \max R_S(r,n)= & {} \prod _{i=1}^{15}(1-(1-r_i)^{n_i})\\ s.t,\ \ g_1(n)= & {} \sum _{i=1}^{15} c_i n_i -400 \le 0\\ g_2(n)= & {} \sum _{i=1}^{15} w_i n_i -414 \le 0 \\&n_i \ge 1,\ \ n_i\in \mathbf {Z^+}, \ \ i = 1,2,\ldots ,15. \end{aligned}$$

The coefficients of the mixed series-parallel system are taken from the literature [12] and are listed in Table 3.

Table 3 Parameter used for P6

P7. Large-scale system reliability problem The large-scale system reliability problem has been studied in [17, 26, 70, 73, 77, 78, 81]; its detailed mathematical formulation is as follows.

$$\begin{aligned} \max R_S(r,n)= & {} \prod _{i=1}^{m} R_i \end{aligned}$$
(28)
$$\begin{aligned} s.t,\ \ g_1(n)= & {} \sum _{i=1}^{m} \alpha _i n_i^2 -\left( 1+\frac{\theta }{100}\right) \sum _{i=1}^{m} \alpha _i l_i^2 \ \ \le 0 \end{aligned}$$
(29)
$$\begin{aligned} \ \ g_2(n)= & {} \sum _{i=1}^{m} \beta _i \exp \left( \frac{n_i}{2}\right) -\left( 1+\frac{\theta }{100}\right) \sum _{i=1}^{m} \beta _i \exp \left( \frac{l_i}{2}\right) \le 0 \end{aligned}$$
(30)
$$\begin{aligned} \ \ g_3(n)= & {} \sum _{i=1}^{m} \gamma _i n_i -\left( 1+\frac{\theta }{100}\right) \sum _{i=1}^{m} \gamma _i l_i \ \ \le 0 \end{aligned}$$
(31)
$$\begin{aligned} g_{4}(n)= & {} \sum _{i=1}^{m} \delta _{i} \sqrt{n_{i}} -\left( 1+\frac{\theta }{100}\right) \sum _{i=1}^{m} \delta _{i} \sqrt{l_{i}} \ \ \le 0 \nonumber \\&1 \le n_{i} \le 10, \quad n_{i} \in \mathbf {Z}^{+}, \quad i = 1,2,\ldots ,m \end{aligned}$$
(32)

Here, \(l_i\) indicates the lower bound of \(n_i\). The parameter \(\theta\) indicates the tolerance error, taken as \(33\%\) of the minimum requirement of each available resource. The minimum resource requirements for a reliability system with m subsystems are \(\sum _{i=1}^m g_{ji}(l_i)\) \((j=1,\ldots ,4)\), and the available resources are set as \(b_j=\left( 1+\frac{\theta }{100}\right) \sum _{i=1}^m g_{ji}(l_i)\). In this way, we set the available system resources [26] for reliability systems with 36, 38, 40, 42, and 50 subsystems, respectively, as shown in Tables 4 and 5.
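
The right-hand sides \(b_j\) used in Table 4 follow mechanically from this rule. A sketch of the computation (with illustrative names, and the 33% tolerance from the text as the default) is:

```python
import numpy as np

def available_resources(alpha, beta, gamma, delta, l, theta=33.0):
    """b_j = (1 + theta/100) * sum_i g_ji(l_i) for the four constraints of Eqs. (29)-(32)."""
    s = 1.0 + theta / 100.0
    l = np.asarray(l, dtype=float)               # lower bounds l_i of the redundancies
    return (s * np.sum(alpha * l**2),            # Eq. (29)
            s * np.sum(beta * np.exp(l / 2.0)),  # Eq. (30)
            s * np.sum(gamma * l),               # Eq. (31)
            s * np.sum(delta * np.sqrt(l)))      # Eq. (32)
```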

Table 4 Available system resources for each system for P7
Table 5 Constant coefficients for P7

5 Experimental results and discussions

This section presents the results of all the above-mentioned reliability optimization problems analysed by the proposed INNA algorithm. It is divided into four parts. Section 5.1 introduces the experiment settings, including the parameter settings and the maximum possible improvement (MPI). Section 5.2 describes the results obtained by the proposed algorithm and compares its performance with a number of existing approaches, listed in Table 6. The performance in terms of population diversity and the exploration-exploitation measurement of INNA and the conventional NNA is described in Sect. 5.3. Finally, the statistical analysis of the proposed algorithm and all compared algorithms is presented in Sect. 5.4.

Table 6 Some existing meta-heuristic algorithms for solving reliability optimization problems

5.1 Experiment settings

5.1.1 Parameter settings

The proposed algorithm was implemented in MATLAB (R2015a) on a personal laptop with an AMD Ryzen 3 2200U processor (2.50 GHz, with Radeon Vega Mobile Gfx) and 4.00 GB of RAM running Windows 10. To compare the results of INNA statistically with the other existing optimizers, namely ABC, NNA, TLBO, SSA, HHO, SMA, and SCA, the initial population size was set to 100 for each algorithm, and the parameters of the compared algorithms were set as follows: ABC (maximum number of trials, limit = 100), NNA (modification factor \(\beta\) = 1), TLBO (teaching factor TF = 1 or 2), HHO (\(\beta\) = 1.5), SMA (control parameter z = 0.03), SCA (parameter a = 2). Due to the stochastic nature of meta-heuristic algorithms, results obtained in a single run can be unreliable. Therefore, 30 independent runs were performed for each of the algorithms ABC, NNA, TLBO, SSA, HHO, SMA, SCA and INNA on every reliability optimization problem. In each independent run, the maximum number of iterations for each algorithm was set to 1500.

5.1.2 Maximum possible improvement (MPI)

For each reliability optimization problem, the system reliability is to be maximized by computing both the component reliability \(r_i\) and the number of redundant components \(n_i\) for each subsystem. During the computational procedure, the redundancy variables \(n_i\) are first treated as real variables, and after the optimization process completes, their real values are rounded to the nearest integers. In this study, we use the maximum possible improvement (MPI) index to evaluate the performance of INNA; it is expressed by Eq. (33).

$$\begin{aligned} \mathrm{MPI} = \frac{R_S\,(\mathrm{INNA})-R_S\,(\mathrm{Others})}{1-R_S\,(\mathrm{Others})} \end{aligned}$$
(33)

where \(R_S(\mathrm{INNA})\) denotes the best optimal solution obtained by the proposed algorithm and \(R_S(\mathrm{Others})\) the best result obtained by the compared approach; a greater MPI indicates a greater improvement.
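
In code, Eq. (33) is one line; note that MPI measures what fraction of the remaining unreliability \(1-R_S(\mathrm{Others})\) is eliminated by INNA's solution:

```python
def mpi(r_inna, r_other):
    """Maximum possible improvement (Eq. (33)) of INNA over a competitor's best reliability."""
    return (r_inna - r_other) / (1.0 - r_other)
```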

5.2 INNA comparison with existing optimizers

This section evaluates the performance of the proposed INNA in terms of the best solution and the maximum possible improvement value. The results obtained by the proposed algorithm are compared with the other existing optimizers, whose results are taken from their respective papers. The comparative analysis on the reliability problems is presented in Tables 7, 8, 9, 10, 11, 12.

For the series system (P1), Table 7 shows that the best optimal solution obtained by the proposed INNA is 0.931682387907051, which is preferable to those of all compared algorithms GA [14], SCA [70], SAA [72], IA [22], ABC1 [79], IPSO [17], CS2 [18], PSO [71], NAFSA [84], SSO [71], PSSO [71] and MICA [85], with improvements of 3.24E-01%, 3.50E-03%, 4.65E-01%, 7.01E-05%, 5.68E-04%, 3.50E-03%, 4.13E-04%, 3.87E+01%, 1.76E-04%, 2.63E-01%, 1.33E-04%, and 4.39E-03%, respectively. Table 8 shows that the best result for the series-parallel system (P2) obtained by the proposed method is 0.9999844228326, better than the results of GA [14], SCA [70], IA [22], ABC1 [79], and PSO [71] by improvements of 5.02E+01%, 39.7%, 34.2%, 31.3% and 89.0%, respectively. For the remaining algorithms SAA [72], IPSO [17], CPSO [75], CS1 [73], ICS [78], CS-GA [19], ABC2 [74], MPSO [76], INGHS [77], CS2 [18], DE [86], EBBO [21], and NAFSA [84], the proposed INNA gives better results with an MPI of 33.3% in each case. Further, the optimal redundancy allocation found by INNA for problem P2 is (3,2,2,2,4), which is completely different from that of the other approaches. It can be observed from Table 9 that the optimal solution for the complex system (P3) produced by INNA is 0.9998896373034, which is better than the best results of the other compared algorithms, with the most notable improvements being 8.67%, 1.78%, 47.6%, 0.259% and 66.4% over GA [14], SCA [70], SAA [72], IA [22], and PSO [71], respectively.

As shown in Table 10, the best optimal solution for the overspeed protection system (P4), 0.9999546746307478, is obtained by INNA. The proposed algorithm succeeds in considerably improving the best known solutions found so far by the ten competing algorithms. By implementing the new search operator, INNA provides more accurate solutions and tilts the search balance in favour of exploitation. Table 10 shows improvement indices of 1.76E+01%, 1.78E-04%, 1.02E-02%, 1.02E-0%, 7.30E-04%, 1.39E-03%, 1.39E-03%, 5.24E+01%, 1.02E-02%, and 3.60E-03% over the results of SAA [72], IA [22], IPSO [17], NMDE [80], INGHS [77], CS2 [18], EBBO [21], PSO [71], DE [86] and MICA [85], respectively. Table 11 indicates that INNA performs the same as or better than the other existing algorithms in the literature on the convex quadratic reliability problem (P5) and the mixed series-parallel system (P6) in terms of best results. Table 12 reports the test results for problem P7. The INNA algorithm gives equal or better results compared to the other algorithms in terms of the best objective function value for the large-scale problems of dimensions 36, 38, 40, 42 and 50, although for dimensions 40 and 42 it yields weaker objective values than the two existing algorithms INGHS and IABC. It should be noted that even a very small improvement in reliability is important and valuable for the efficiency of the system, and such improvements are generally difficult to achieve in real-life applications.

Table 7 Comparison of the best result for the Series system (P1) with other results in the literature
Table 8 Comparison of the best result for the Series-parallel system (P2) with other results in the literature
Table 9 Comparison of the best result for the Complex system (P3) with other results in the literature
Table 10 Comparison of the best result for the Overspeed system (P4) with other results in the literature
Table 11 Comparison of the best result for the Convex system (P5) and Mixed series-parallel system (P6) with other results in the literature
Table 12 Comparison of results for the Large scale system (P7) with other results in the literature

To show the convergence performance of the stated algorithm relative to several existing algorithms (SSA, SCA, SMA, HHO, ABC, TLBO, and NNA), we record the variation of the best solution over the iterations for each considered problem; the results are plotted in Fig. 9. From this convergence analysis, we can conclude that, as the iteration number increases, the proposed INNA shows better performance than the existing algorithms.

Fig. 9 Comparison of convergence curves of INNA with SSA, SCA, SMA, HHO, ABC, TLBO and NNA

5.3 Diversity and exploration-exploitation analysis

For an effective in-depth performance analysis, the population diversity and the exploration-exploitation measurements of INNA and the conventional NNA while solving the reliability optimization problems are presented in Table 13. Graphical comparisons of the diversity measurement and the exploration-exploitation phases of the proposed INNA and the basic NNA are given in Figs. 10 and 11, respectively. According to Table 13, the proposed INNA keeps the population diversity higher than the conventional NNA on all the reliability problems. When solving P1 to P6, INNA maintained population diversity values of 14.4026, 14.2185, 14.4711, 12.5628, 20.1557, and 38.7695, which are considerably higher than the corresponding NNA values of 9.5045, 9.6495, 9.5714, 8.5150, 13.2548 and 21.6344, respectively. Moreover, Table 13 also reveals that INNA mostly keeps the exploration percentage lower than the exploitation percentage and maintains a proper balance between exploration and exploitation on all of the reliability problems. During the search process, it is desirable to keep the population diversity high, as this helps the solutions jump out of local optima. The above experimental study shows that the performance of the proposed INNA is improved over the existing NNA through population diversity enhancement. This discussion is further illustrated in Fig. 10 for the diversity measurement and Fig. 11 for the exploration and exploitation behaviours of the proposed algorithm.

Table 13 Diversity and Exploration-Exploitation measurement on reliability problems
Fig. 10 Comparison of diversity measurement between NNA and INNA on reliability problems (P1–P7)

Fig. 11 Exploration-exploitation measurement of NNA and INNA on reliability problems (P1–P7)

5.4 Statistical analysis

In addition, to analyse whether or not the results obtained by the proposed INNA algorithm are statistically significant, we consider the following quality indices:


I. Value-based method and tied ranking: Solution quality in terms of the standard deviation and mean value is described here. A lower mean value and standard deviation indicate that the algorithm has a stronger global optimization capability and more stability. The tied rank (TR) [87] is also used here for an intuitive comparison of the performance of the considered methods. In this study, the algorithm with the best mean value is assigned rank 1, the second best rank 2, and so on; two algorithms producing the same results share the average of their ranks. A smaller rank indicates that an algorithm is better than the compared algorithms.

In view of the above two quality measures, the statistical results achieved by INNA and all the other existing algorithms (SSA, SCA, SMA, HHO, ABC, TLBO, NNA) are computed and summarized in Table 14 for the considered problems. The table reports the mean, standard deviation (Std) and median of the best fitness values over the 30 independent runs of each algorithm. From this table, it is observed that the proposed algorithm attains rank 1, followed by the other algorithms, which shows its stability and convergence on all of the reliability problems. The overall ranking, in order of achievement, is: INNA, TLBO, SMA, SSA, NNA/ABC, HHO and SCA.

The ranking order in Table 14 indicates that the TLBO algorithm is strongly competitive and is the second best on all test problems except the overspeed protection system. It can therefore be argued that INNA is an efficient and effective method for solving various kinds of optimization problems. Figure 12 visualizes the ranking of all compared algorithms for solving RRAPs.

Fig. 12 Graphical illustration of the overall ranking of compared algorithms for solving reliability problems

Table 14 Comparison of the statistical results obtained by INNA and the existing optimizers

Apart from this analysis, the Wilcoxon signed-rank test is performed to check the statistical significance of the results obtained by the proposed algorithm.


II. Wilcoxon signed-rank test: This statistical test [88] is used to compare the performance of the proposed INNA with the other algorithms. It has several advantages over the t-test: (1) it does not assume that the tested samples are normally distributed; and (2) it is less affected by outliers. These advantages make it a powerful test for comparing two algorithms [89,90,91]. The Wilcoxon signed-rank test is performed here at a significance level \(\alpha = 0.05\), and the obtained results are shown in Table 15. In this table, "H" is scored "1" if there is a significant difference between INNA and the existing algorithm, and "0" if there is no significant difference. The sign "S" is taken as "\(+\)" if the proposed algorithm is superior to the compared algorithm and "−" if INNA is inferior to it. It is noted that the proposed INNA dominates all compared algorithms on all reliability problems. Thus, from this analysis we conclude that the proposed INNA obtains better solutions than the comparative algorithms, which means that the proposed method has a better global optimization capability.
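A minimal sketch of this test using SciPy is given below, with hypothetical run data standing in for the 30 recorded best values of each algorithm (the test is paired because both algorithms are run on the same problem instances):

```python
import numpy as np
from scipy.stats import wilcoxon

# hypothetical best fitness over 30 independent runs of two algorithms
rng = np.random.default_rng(1)
inna = 0.9999 - 1e-5 * rng.random(30)
nna  = 0.9995 - 1e-4 * rng.random(30)

stat, p = wilcoxon(inna, nna)        # two-sided by default
h = int(p < 0.05)                    # H = 1: significant difference at alpha = 0.05
s = '+' if np.mean(inna) > np.mean(nna) else '-'
print(f'p = {p:.3g}, H = {h}, S = {s}')
```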

Table 15 The comparison results of the applied algorithms by Wilcoxon signed-rank test (a level of significance \(\alpha = 0.05\))

III. Kruskal-Wallis and multiple comparison test: The multiple comparison test (MCT) is performed here to verify whether the proposed INNA is better than the other optimizers (SSA, SCA, SMA, HHO, ABC, TLBO, NNA). For this purpose, a non-parametric Kruskal-Wallis test (KWT) is performed on the best values obtained for each considered problem. This test investigates the null hypothesis that the different independent samples are drawn from distributions with the same estimates. The MCT is then used to determine the significant differences between these estimates by performing multiple comparisons based on one-way ANOVA. To this end, the results of the proposed INNA are compared with those of the other algorithms, and the pairwise results are summarized in Table 16. In this table, the first column gives the considered problem, the second column gives the indices of the pair of samples being compared, and the third and fifth columns give the lower and upper bounds of the true mean difference between the samples at a 5% level of significance. The last column reports the p-value obtained by the KWT under the null hypothesis of equal means.
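A sketch of both steps in Python is given below, again on hypothetical run data; `pairwise_tukeyhsd` is used here as a stand-in for the ANOVA-based multiple comparison, so the exact columns of Table 16 may come from a different tool:

```python
import numpy as np
from scipy.stats import kruskal
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# hypothetical best values from 30 runs per algorithm on one problem
rng = np.random.default_rng(2)
samples = {name: 0.9999 - scale * rng.random(30)
           for name, scale in [('INNA', 1e-5), ('NNA', 1e-4), ('TLBO', 5e-5)]}

# Kruskal-Wallis: null hypothesis of identical distributions across algorithms
H, p = kruskal(*samples.values())
print(f'KWT: H = {H:.2f}, p = {p:.3g}')

# multiple comparison at the 5% level, reporting lower/upper bounds of the
# true mean differences between each pair, in the spirit of Table 16
values = np.concatenate(list(samples.values()))
groups = np.repeat(list(samples.keys()), 30)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```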

The box plots and the MCT graphs for the considered problems (P1–P6) are shown in Fig. 13. In this figure, the left graph shows boxes marking the first, second and third quartiles, while the vertical lines extending from the boxes (the whiskers) indicate the spread of the remaining values. On the right side of the figure, the MCT performs multiple comparisons between the different pairs of algorithms and highlights the significant differences between them. The blue line in these graphs represents the proposed INNA results, and the red lines indicate which algorithms' results (SSA, SCA, SMA, HHO, ABC, TLBO, NNA) are statistically significantly different from those of INNA. For example, in the case of the series system (P1), Fig. 13 shows that the existing algorithms SSA, SCA, SMA, HHO, and ABC differ statistically significantly from the INNA algorithm. Furthermore, the vertical lines (shown in black to the right and left) around the INNA results (displayed in blue) mark the comparison interval used to judge whether a method is statistically better or worse. From this analysis and the results shown in Fig. 13 and Table 16, we conclude that the performance of the proposed algorithm differs significantly from that of the other algorithms, and the best results are provided by INNA.

Table 16 Statistical results of the existing optimizers using MCT analysis
Fig. 13 Box plots of the objective function values obtained by the reported optimizers

6 Conclusions and future work

In practical life, we have to handle various types of complex optimization problems arising in the field of engineering, and efficient, accurate methods are required to deal with them. This paper introduces an improved neural network algorithm (INNA) for solving RRAPs with non-linear resource constraints. In this study, INNA is constructed by incorporating a new logarithmic spiral search operator and the searching strategy of the learner phase of TLBO into the basic NNA, so as to strike a proper balance between exploration and exploitation. The basic NNA handles the exploration part, while the new search operator and the TLBO searching strategy increase the exploitation capability of the algorithm, making the proposed algorithm efficient, effective, and well suited for solving COPs.
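For readers unfamiliar with this family of operators, the sketch below shows one generic logarithmic-spiral move toward the current best solution; the shape constant and the exact update form are assumptions for illustration, not the paper's definition of the INNA operator:

```python
import numpy as np

def spiral_move(x, x_best, b=1.0, rng=None):
    """Hypothetical logarithmic-spiral step toward the current best solution,
    illustrating the general idea of a spiral exploitation operator.
    The constant `b` and the update form are assumptions, not INNA's own."""
    rng = np.random.default_rng() if rng is None else rng
    l = rng.uniform(-1.0, 1.0)              # position on the spiral
    d = np.abs(x_best - x)                  # element-wise distance to the best
    return d * np.exp(b * l) * np.cos(2.0 * np.pi * l) + x_best
```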

A comprehensive set of seven well-known reliability optimization problems, consisting of the series system, series-parallel system, complex system, overspeed protection system, convex quadratic system, mixed series-parallel system and a large-scale system, is employed to examine the performance of INNA against several algorithms reported in the literature. All of the considered problems involve mixed variables: discrete, continuous and integer. The results are also compared statistically with seven competitive MHAs, namely SSA, SCA, SMA, HHO, ABC, TLBO, and NNA. According to the experimental results, the proposed INNA outperforms the compared algorithms on all reliability problems in terms of best and mean values. In addition, to account for the stochastic nature of the algorithm, we perform several statistical tests, namely the tied-rank test and the Wilcoxon signed-rank test, for P1 to P6. All of the above discussions and evaluations ensure that the proposed algorithm is a competitive approach that not only performs well but also effectively achieves the best results on all reliability problems.

In future work, we will extend the proposed INNA to solve more complex design and constrained optimization problems. We also plan to expand the approach by exploiting the underlying neural network model and applying it to different kinds of problems, such as the stock market, finance, and decision-making. Further, we will try to improve other MHAs with the proposed reinforced search mechanism.