
Evolving Systems

Volume 9, Issue 2, pp 169–180

Modality of teaching learning based optimization algorithm to reduce the consistency ratio of the pair-wise comparison matrix in analytical hierarchy processing

  • Prashant Borkar
  • M. V. Sarode
Original Paper

Abstract

This paper presents an approach to improving the consistency of the pair-wise comparison matrix in the analytical hierarchy process (AHP) using the teaching learning based optimization (TLBO) algorithm. The purpose of the proposed approach is to minimize the consistency ratio (CR). Consistency tests for the comparison matrix in AHP have been studied rigorously since AHP was introduced in the 1970s. However, existing approaches are either complicated or computationally demanding, and most of them do not preserve the original judgments provided by the expert. To improve the consistency ratio (CR), this research work proposes a simple, effective and efficient method that reduces the CR to almost zero while preserving the judgment values in the pair-wise comparison matrix. The correctness of the proposed method is demonstrated by applying it to two real-world case studies reported in the literature, namely new product design selection and material selection (work tool combination). The experiments show that the proposed approach is efficient and accurate in satisfying the consistency requirements of AHP.

Keywords

Analytical hierarchy process (AHP) · Pair-wise comparison matrix · Teaching learning based optimization (TLBO) · Consistency ratio

1 Introduction

In the past three decades, a number of methods have been proposed and developed that use the pair-wise comparison matrix for solving multiple criteria decision making (MCDM) problems among a finite set of alternatives (Keeney and Raiffa 1976). To handle both qualitative and quantitative factors for decision makers in MCDM, Saaty proposed the analytical hierarchy process (AHP) in the 1970s (Saaty 2001, 2003, 2005, 2006). AHP has been successfully applied to a wide variety of real-world applications (Li and Ma 2007; Cao et al. 2008; Dong et al. 2008; Iida 2009; Peng et al. 2012; Lin et al. 2011; Peng et al. 2011a, b; Rao 2007, 2013; Borkar et al. 2016). In AHP, the pair-wise comparison matrix consists of judgments expressed on a numerical scale of 1–9 by the decision maker based on expertise and experience. Consistency is one of the most important issues in AHP, yet an acceptable consistency ratio is hard to obtain when a large number of criteria is evaluated. In some cases the comparison matrix is inconsistent owing to the limitations of expertise and experience, and some existing approaches for revising the comparison matrix are complicated or fail to preserve the original comparison matrix. What matters in AHP is how to construct a pair-wise judgment matrix with a sufficiently small consistency ratio. Saaty (1980) proposes a consistency index \({\text{CI}}=({\lambda _{\max }} - {\text{n}})/\left( {{\text{n}} - 1} \right)\) and a consistency ratio CR = CI/RI. In Saaty’s opinion, a consistency ratio (CR) of less than 0.1 is acceptable, but it is difficult to construct a judgment matrix with a satisfactory CR because of the complexity of the decision problem and the limited ability of human thinking. There are two approaches by which inconsistent matrices can be made consistent: (1) the decision maker repeats the assessment process to obtain a new comparison matrix that is consistent; this is quite time consuming, as the reassessment must be repeated until the matrix is consistent. (2) The values of the comparison matrix are modified by some method until the consistency ratio is satisfied. The second approach has attracted the attention of many researchers who modify the inconsistent pair-wise comparison matrix (Cao et al. 2008; Lin et al. 2011; Costa 2011).

Population-based heuristic algorithms fall into two important groups: evolutionary algorithms (EA) and swarm intelligence (SI) based algorithms. Some of the recognized evolutionary algorithms are: Genetic Algorithm (GA), Evolution Strategy (ES), Evolutionary Programming (EP), Differential Evolution (DE), Bacteria Foraging Optimization (BFO), Artificial Immune Algorithm (AIA), etc. Some of the well-known swarm intelligence based algorithms are: Particle Swarm Optimization (PSO), Shuffled Frog Leaping (SFL), Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Firefly (FF) algorithm, etc. A few of the above optimization techniques have been modeled to reduce the consistency ratio of AHP, and researchers have used these intelligent techniques to turn inconsistent matrices into consistent ones. The genetic algorithm (GA) has been used by Lin et al. (2008) and Costa (2011) to obtain consistent matrices. A study using particle swarm optimization (PSO) and the Taguchi method is presented in Yang et al. (2012) to repair inconsistent comparison matrices; the Taguchi method is used to reduce the number of experiments required for tuning the control parameters of PSO. This approach improved on the earlier research using the genetic algorithm (Lin et al. 2008). Besides requiring CR to be less than 0.1, Lin et al. (2008) and Yang et al. (2012) also consider two aspects: a difference index (Di) aspect, which represents the distance between the matrices, and a consistency ratio aspect. These two aspects are combined in an overall index (OI), and their proposed methods repair the inconsistent matrix while minimizing the OI. Girsang et al. (2014a, b) implemented an ant colony optimization (ACO) approach to solve the inconsistency problem; ACO is used to search for a matrix with minimal deviation and minimal consistency ratio.

The objective of this research work is to propose a simple, efficient and accurate approach to reduce the consistency ratio of the pair-wise comparison matrix. To this end, we apply teaching learning based optimization (TLBO) to tune the judgment values of the comparison matrix.

This research work includes:

  • A simple, efficient and accurate approach to reduce the consistency ratio.

  • Modality of TLBO to tune the elements of pair-wise comparison matrix, while preserving the judgments made by an expert.

  • A novel approach for identifying the variables (judgment elements) and deciding their lower and upper bounds.

  • Several real-world case studies were evaluated to demonstrate the robustness of the proposed method.

The contribution of this research work includes:

  • A hybrid model which will tune the parameters of multi-attribute decision making methods using multi-objective decision making approach.

  • Decision making using an optimization technique free of algorithm-specific parameters, while preserving the expert’s judgments.

The remaining parts of this paper are organized as follows. Section 2 explains why teaching learning based optimization is adopted in this research work and presents its details. The proposed method for tuning the judgments of the pair-wise comparison matrix and reducing the consistency ratio is presented in Sect. 3. Two real-world case studies, new product design selection and work tool selection (material selection), are presented in Sect. 4. Finally, the conclusion is provided.

2 Teaching learning based optimization

Some of the best-known meta-heuristic techniques developed over the last three decades to solve engineering optimization problems are: Genetic Algorithm (Goldberg 1989), Artificial Immune Algorithm (AIA) (Farmer et al. 1986), Ant Colony Optimization (ACO) (Dorigo and Stützle 2004), Particle Swarm Optimization (PSO) (Kennedy and Eberhart 1995), Differential Evolution (DE) (Efren et al. 2010), Harmony Search (HS) (Geem et al. 2001), Bacteria Foraging Optimization (BFO) (Passino 2002), Artificial Bee Colony (ABC) (Karaboga 2005), etc. These algorithms require common controlling parameters such as population size, number of generations and elite size. Besides the common control parameters, they also require their own algorithm-specific control parameters. Various studies have been carried out either to enhance existing optimization algorithms (Liu and Tang 1999; Chakraborty et al. 2011; Shi et al. 2007) or to hybridize them (Karen et al. 2006; Yildiz 2009).

The main limitation of the above-mentioned algorithms is that different control parameters are required for their proper working, and the selection of these parameters is a crucial step. In the case of GA, the controlling parameters are population size, crossover rate, mutation rate, etc. PSO uses inertia weight together with cognitive and social parameters. ABC requires the numbers of employed, scout and onlooker bees, and HS requires harmony memory size, pitch adjustment rate, etc. Continuous research has been carried out to develop an optimization algorithm that does not require any algorithm-specific parameters (Rao 2016). Teaching Learning Based Optimization (TLBO) is such a method; it works on the philosophy of teaching and learning. In TLBO, the teacher is considered a highly learned person who shares his/her knowledge with the learners.

2.1 TLBO algorithm

TLBO, an optimization algorithm inspired by the teaching–learning process, was proposed by Rao et al. (2012a; Rao 2013b, 2015; Rao and Patel 2012b, 2013b, c). TLBO mimics the teaching–learning interaction between the teacher and the learners in a classroom. In TLBO, the population is the group of learners, the design parameters are the different subjects offered to the learners, and the fitness value of the optimization problem is the learner’s result. The best solution in the entire population is termed the teacher. The working of the TLBO algorithm is divided into two phases, the ‘Teacher phase’ and the ‘Learner phase’, explained below.

2.1.1 Teacher phase

In the first phase of the algorithm, learners learn through the teacher. The teacher tries to raise the mean result of the entire classroom from a value \({M_1}\) to his/her own level; practically this is not possible, so the teacher can only move the mean of the classroom from \({M_1}\) to some better value \({M_2}.\) Let \({T_i}\) be the teacher at iteration \(i\) and \({M_j}\) be the mean. \({T_i}\) will try to shift the existing mean \({M_j}\) towards itself, yielding a new mean \({M_{new}}\); the difference between the existing mean and the new mean is given by (Rao 2011)
$$Difference\_Mea{n_i}={r_i}\left( {{M_{new}} - {T_F}{M_j}} \right)$$
(1)
where \({r_i}\) is a random number in the range [0, 1] and \({T_F}\) is the teaching factor, which decides the magnitude of the change in the mean. The value of \({T_F}\) can be either 1 or 2; this is a heuristic step, decided randomly with equal probability as:
$${T_F}=round[1+rand(0,1)\{ 2 - 1\} ]$$
(2)
The teaching factor is generated randomly during the algorithm in the range 1–2, in which 1 corresponds to no increase in the knowledge level and 2 corresponds to complete transfer of knowledge; in-between values indicate intermediate levels of knowledge transfer, depending on the learners’ capabilities. Based on this \(Difference\_Mean\), the existing solution is updated according to the following expression:
$${X_{new,i}}=~{X_{old,~i}}+~Difference\_Mea{n_i}$$
(3)
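The teacher-phase update of Eqs. (1)–(3) can be sketched as follows for a minimization problem. This is an illustrative sketch with our own function names and a greedy acceptance step (common TLBO practice), not the authors' code:

```python
import random

def teacher_phase(population, fitness):
    """One TLBO teacher phase for a minimization problem.

    population: list of learners, each a list of design-variable values.
    fitness: objective function to minimize.
    Returns the updated population (each learner kept only if improved).
    """
    n_vars = len(population[0])
    teacher = min(population, key=fitness)          # best learner acts as teacher
    mean = [sum(x[k] for x in population) / len(population) for k in range(n_vars)]
    updated = []
    for old in population:
        t_f = random.randint(1, 2)                  # teaching factor T_F, Eq. (2)
        r = random.random()                         # r_i in [0, 1]
        # Difference_Mean_i = r_i * (M_new - T_F * M_j), Eq. (1), per variable
        new = [old[k] + r * (teacher[k] - t_f * mean[k]) for k in range(n_vars)]  # Eq. (3)
        updated.append(new if fitness(new) < fitness(old) else old)
    return updated
```

Here the teacher's position plays the role of \(M_{new}\), and the greedy acceptance guarantees that no learner's fitness worsens within a phase.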

2.1.2 Learner phase

The second phase of TLBO is the learner phase, where learners increase their knowledge by interacting among themselves. A learner interacts randomly with another learner to enhance his/her knowledge: if the other learner has more knowledge, the learner learns new things from him/her. Mathematically, the learning phenomenon of this phase is expressed below. At any iteration \(i\), consider two different learners \({X_i}\) and \({X_j}\), where \(i \ne j\).
$${X_{new,i}}=~{X_{old,~i}}+~{r_i}\left( {{X_i} - {X_j}} \right)\quad if~F\left( {{X_i}} \right)<~F({X_j})$$
(4)
$${X_{new,i}}=~{X_{old,~i}}+~{r_i}\left( {{X_j} - {X_i}} \right)\quad if~F\left( {{X_j}} \right)<~F({X_i})$$
(5)

Accept \({X_{new}}\) if it gives better function value.
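The learner phase of Eqs. (4)–(5) can be sketched for a minimization problem as follows (an illustrative sketch with our own function names, not the authors' code):

```python
import random

def learner_phase(population, fitness):
    """One TLBO learner phase: each learner interacts with a random peer."""
    n = len(population)
    updated = []
    for i, xi in enumerate(population):
        j = random.choice([k for k in range(n) if k != i])   # random partner, i != j
        xj = population[j]
        r = random.random()
        if fitness(xi) < fitness(xj):
            new = [a + r * (a - b) for a, b in zip(xi, xj)]  # Eq. (4): move away from worse peer
        else:
            new = [a + r * (b - a) for a, b in zip(xi, xj)]  # Eq. (5): move towards better peer
        updated.append(new if fitness(new) < fitness(xi) else xi)  # accept only if better
    return updated
```

Alternating this phase with the teacher phase until a termination criterion is met gives the full TLBO loop.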

3 Proposed method

In this research work, a novel framework is proposed to obtain a consistent pair-wise comparison matrix; it includes problem definition and formulation, conversion of qualitative values to quantitative values, and normalization using beneficiary and non-beneficiary attributes. A novel approach is presented for selecting the variables for TLBO, along with a mechanism for deciding the lower and upper bounds of those variables. A modality of TLBO is employed to obtain the optimal judgment values of the comparison matrix and the minimum consistency ratio (CR).

3.1 Step 1: the objective is to assess the given alternatives based on considered attributes

The decision table, given in Table 1, shows alternatives Ai (for i = 1, 2, …, n), attributes Tj (for j = 1, 2, …, m), weights of attributes wj (for j = 1, 2, …, m) and the measures of performance of the alternatives, Cij (for i = 1, 2, …, n; j = 1, 2, …, m). Given a multi-attribute decision making method and the decision table information, the task of the decision maker is to find the best alternative and/or to rank the entire set of alternatives. So that all attributes can be considered together in the decision problem, the elements of the decision table must be normalized to the same units.

Table 1

Decision table in MADM methods

| Alternatives | T1 (w1) | T2 (w2) | T3 (w3) | … | Tm (wm) |
|---|---|---|---|---|---|
| A1 | C11 | C12 | C13 | … | C1m |
| A2 | C21 | C22 | C23 | … | C2m |
| A3 | C31 | C32 | C33 | … | C3m |
| … | … | … | … | … | … |
| An | Cn1 | Cn2 | Cn3 | … | Cnm |

3.2 Step 2: compute the normalized decision matrix:

The attributes can be beneficial or non-beneficial. For a beneficial attribute, normalized values are calculated as (Cij)K/(Cij)L, where (Cij)K is the measure of the attribute for the Kth alternative and (Cij)L is the highest measure of that attribute among all alternatives considered. A beneficial attribute (e.g., efficiency) is one whose higher measures are more desirable for the given decision-making problem. By contrast, a non-beneficial attribute (e.g., cost) is one for which lower measures are desirable, and its normalized values are calculated as (Cij)L/(Cij)K, with (Cij)L now the lowest measure among all alternatives (Rao 2007). In reality, a measure of performance (Cij) can be crisp, fuzzy and/or linguistic. The decision makers can appropriately make use of any of the eight scales suggested in (Chen and Hwang 1992). For example, the 5- and 11-point scales and the corresponding crisp scores of the fuzzy numbers are given in Table 2 (Rao 2007, Chap. 4).

Table 2

Conversion of linguistic terms into fuzzy scores (5- and 11-point)

| Linguistic term (5-point) | Assigned crisp score | Linguistic term (11-point) | Assigned crisp score |
|---|---|---|---|
| Low | 0.115 | Exceptionally low | 0.0455 |
| Below average | 0.295 | Extremely low | 0.1364 |
| Average | 0.495 | Very low | 0.2273 |
| Above average | 0.695 | Low | 0.3182 |
| High | 0.895 | Below medium | 0.4091 |
| | | Medium | 0.5000 |
| | | Above medium | 0.5909 |
| | | High | 0.6818 |
| | | Very high | 0.7727 |
| | | Extremely high | 0.8636 |
| | | Exceptionally high | 0.9545 |
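The two normalization rules of Step 2 can be sketched as follows (the function and parameter names are our own):

```python
def normalize_column(values, beneficial):
    """Normalize one attribute column of the decision table.

    Beneficial attributes (higher is better) are divided by the column
    maximum; non-beneficial ones (lower is better) have the column
    minimum divided by each value, so 1 always marks the best measure.
    """
    if beneficial:
        best = max(values)
        return [v / best for v in values]
    best = min(values)
    return [best / v for v in values]
```

For example, a cost column [52, 85] normalizes to [1.0, ≈0.6118] under the non-beneficial rule.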

3.3 Step 3: construct a pair-wise comparison matrix using relative importance scale:

Saaty (1980) provided the fundamental scale of AHP for entering judgments. Each attribute compared with itself scores 1, so the main diagonal values are always 1. The numbers 3, 5, 7 and 9 correspond to the judgments ‘moderately important’, ‘strongly important’, ‘very strongly important’ and ‘absolutely important’, respectively, and the numbers 2, 4, 6 and 8 are used for compromises between the above values. Assuming M attributes, the pair-wise comparison of attribute i with attribute j yields a matrix TM × M, where tij denotes the comparative importance of attribute i with respect to attribute j. In the matrix TM × M, tij = 1 when i = j and tji = 1/tij.

\(\begin{array}{*{20}{l}} {{{\text{T}}_1}} \\ {{{\text{T}}_2}} \\ {{{\text{T}}_3}} \\ \vdots \\ {{{\text{T}}_{\text{M}}}} \end{array}\mathop {\left[ {\begin{array}{*{20}{c}} 1&{{t_{12}}}&{{t_{13}}}& \cdots &{{t_{1M}}} \\ {{t_{21}}}&1&{{t_{23}}}& \cdots &{{t_{2M}}} \\ {{t_{31}}}&{{t_{32}}}&1& \cdots &{{t_{3M}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {{t_{M1}}}&{{t_{M2}}}&{{t_{M3}}}& \cdots &1 \end{array}} \right]}\limits^{\begin{array}{*{20}{l}} {{{\text{T}}_1}}&{{{\text{T}}_2}}&{{{\text{T}}_3}}& \cdots &{{{\text{T}}_{\text{M}}}} \end{array}}\)

3.4 Step 4: identification of variables for TLBO

In this research work, a new way of identifying the required variables for the teaching learning based optimization algorithm is proposed. This identification process preserves the relative importance of attribute i with respect to all other attributes. The algorithm for the identification of variables is as follows:

Let [X] be the vector that holds all the variables identified using the above algorithm.
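Based on the worked example in Sect. 4, where only the distinct judgment values of each attribute over the remaining attributes become variables, the identification step can be sketched as follows (the function name and the upper-triangle convention are our assumptions):

```python
def identify_variables(T):
    """Collect the TLBO design variables from comparison matrix T.

    For each row i, only the distinct judgment values t_ij with j > i are
    kept; repeated values within a row are counted once, which preserves
    the expert's judgments and keeps the variable count small.
    """
    variables = []
    for i in range(len(T)):
        seen = []
        for j in range(i + 1, len(T)):
            if T[i][j] not in seen:
                seen.append(T[i][j])
        variables.extend(seen)
    return variables
```

For the comparison matrix of case study 1 this returns [1/3, 1/5, 1/3], matching the [Xi] vector reported in Sect. 4.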

3.5 Step 5: deciding lower and upper bounds for variables

A very crucial aspect of TLBO is the identification of lower and upper bounds for the identified variables. Here the variable values are the relative importance of attribute i over attribute j. The pair-wise comparison matrix is supposed to be prepared by a domain expert having deep knowledge of the problem definition, and this matrix leads to the weights of the considered attributes. However, in most cases even an expert may make slightly wrong judgments, resulting in roughly 1–10% error in decision making. In this research work we assume that even an expert can make a 10% judgment error. In general, the lower bound is the judgment value one step lower on the scale and the upper bound is the judgment value one step higher. The algorithm for deciding the lower and upper bound for a variable is as follows:
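The one-step rule can be sketched as follows; this helper is our own, and clipping at the ends of the 1/9–9 scale is omitted for brevity:

```python
from fractions import Fraction

def judgment_bounds(value):
    """Lower and upper bound for a Saaty-scale judgment value.

    Integer judgments n get the neighbouring scale values n - 1 and
    n + 1; reciprocal judgments 1/n get 1/(n + 1) and 1/(n - 1).
    """
    v = Fraction(value).limit_denominator(9)
    if v >= 1:
        return v - 1, v + 1          # one scale step below and above
    n = v.denominator
    return Fraction(1, n + 1), Fraction(1, n - 1)
```

For the variables [1/3, 1/5, 1/3] of case study 1 this yields the bound pairs (1/4, 1/2), (1/6, 1/4) and (1/4, 1/2).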

3.6 Step 6: minimize the consistency ratio (CR) using TLBO

Let TM × M be the pair-wise comparison matrix prepared in step 3, [X] be the array of identified variables, and [L] and [U] be the arrays of lower and upper bounds, respectively, for the identified variables. The teaching learning based optimization algorithm is applied in this research work to minimize the consistency ratio by optimally tuning the identified variable values within the specified bounds. This approach preserves the expert’s judgments and also reduces the CR significantly, which results in better decision making. The goal is to obtain the optimal pair-wise comparison matrix OTM × M from TM × M by using TLBO. The algorithm to minimize CR using TLBO is as follows:

1. Initialize the population (i.e., the learners), the variables of the optimization problem (identified in step 4) and the lower and upper bounds of the identified variables.

2. Select the best learner of each subject as the teacher for that subject and calculate the mean result of the learners in each subject.

3. Evaluate the difference between the current mean result and the best mean result according to Eq. (1), utilizing the teaching factor (TF) [Eq. (2)].

4. Update the learners’ knowledge with the help of the teacher’s knowledge according to Eq. (3).

5. Update the learners’ knowledge by utilizing the knowledge of some other learner according to Eqs. (4) and (5).

6. Repeat steps 2 to 5 until the termination criterion is met (CR < 0.1).

The above algorithm is applied to a single objective function, Minimize CR(), evaluated as follows:

a. Find the normalized weight (wj) of each attribute by calculating the geometric mean of each row of the comparison matrix TM × M and normalizing the geometric means; the resulting weight vector is A2.

b. Calculate the matrices A3 = A1 × A2 and A4 = A3/A2 (element-wise), where A1 is the comparison matrix.

c. Determine the maximum eigenvalue λmax as the average of the elements of A4.

d. Calculate the consistency index CI = (λmax − M)/(M − 1), where M is the number of attributes.

e. Calculate the consistency ratio CR = CI/RI, where RI is the random index.

The above algorithm results in the optimal pair-wise comparison matrix OTM × M.
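Steps (a)–(e) can be written as a single objective function. The sketch below uses our own names, and the random-index table is the commonly used one (RI = 0.52 for three attributes, which reproduces the CR = 0.0370 reported for the original matrix of case study 1):

```python
from math import prod

def consistency_ratio(T):
    """CR of a pair-wise comparison matrix T (M >= 3), per steps (a)-(e)."""
    M = len(T)
    gm = [prod(row) ** (1.0 / M) for row in T]       # geometric mean of each row
    w = [g / sum(gm) for g in gm]                    # normalized weights (A2)
    a3 = [sum(T[i][j] * w[j] for j in range(M)) for i in range(M)]  # A3 = A1 * A2
    a4 = [a3[i] / w[i] for i in range(M)]            # A4 = A3 / A2, element-wise
    lam_max = sum(a4) / M                            # lambda_max as average of A4
    ci = (lam_max - M) / (M - 1)                     # consistency index
    ri = {3: 0.52, 4: 0.89, 5: 1.11, 6: 1.25,
          7: 1.35, 8: 1.40, 9: 1.45}                 # random index (assumed table)
    return ci / ri[M]
```

In the TLBO loop this function is evaluated on the matrix rebuilt from each learner's variable values.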

3.7 Step 7: evaluate the consistency ratio for the optimized relative importance matrix

Let OTM × M be the optimal pair-wise comparison matrix obtained in any one of the generations of TLBO. The relative normalized weight (wj) of each attribute is calculated by (a) computing the geometric mean of the jth row of the comparison matrix, and (b) normalizing the geometric means of the rows. This can be represented as:
$$GM{({\text{OT}}_{{\text{M}} \times {\text{M}}})_j}={\left[ {\mathop \prod \limits_{k=1}^M {t_{jk}}} \right]^{1/M}}$$
(6)
$${w_j}=\frac{{GM{{({\text{OT}}_{{\text{M}} \times {\text{M}}})}_j}}}{{\mathop \sum \nolimits_{j=1}^M GM{{({\text{OT}}_{{\text{M}} \times {\text{M}}})}_j}}}$$
(7)
where \(GM({\text{OT}}_{{\text{M}} \times {\text{M}}})_j\) is the geometric mean of the jth row of the optimal pair-wise comparison matrix and \({w_j}\) are the attribute weights.

3.8 Step 8: evaluate alternative to obtain the overall performance score for the alternatives

Evaluate each alternative, Ai by the following formula:
$${P_i}=~\mathop \sum \limits_{j=1}^m {w_j}{({C_{ij}})_{normal}}$$
(8)
where (Cij)normal represents the normalized value of Cij, wj is the weight of attribute j obtained from step 7 (corresponding to the A2 vector), and Pi is the overall or composite score of alternative Ai. The alternative with the highest value of Pi is considered the best alternative.
$${w_j}={{{m_j}} \mathord{\left/ {\vphantom {{{m_j}} {\mathop \sum \limits_j^m {m_j}}}} \right. \kern-\nulldelimiterspace} {\mathop \sum \limits_j^m {m_j}}}$$
(9)
where mj is the preference weight of attribute j obtained when TLBO is applied to the pair-wise comparison matrix.
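Equation (8) amounts to a weighted sum over each normalized decision row; a sketch with assumed names:

```python
def overall_scores(normalized_rows, weights):
    """Composite score P_i = sum_j w_j * (C_ij)_normal, Eq. (8).

    normalized_rows: one list of normalized attribute values per alternative.
    weights: attribute weights w_j from step 7 (should sum to 1).
    """
    return [sum(w * c for w, c in zip(weights, row)) for row in normalized_rows]
```

With the normalized row of product design 5 from Table 4 and the optimized weights of Sect. 4, this reproduces the score of about 0.754 listed in Table 6.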

4 Case studies

To evaluate the robustness of the proposed method, we applied it to two real-world case studies whose objective data are taken from the literature. Case study 1 concerns the selection of a new product design, taken from (Besharati et al. 2006); case study 2 concerns material selection (work tool combination), taken from (Enache et al. 1995).

4.1 Case study 1: evaluation of product designs

In product design, the selection of the final design for production is a crucial stage; selection of the best design contributes to the overall success of the product in the market. A case-based reasoning model is presented in (Haque et al. 2000) for engineers and managers. The probability of success of a product design is introduced in (Suh 2001). For innovative product design, a creativity-based design model is presented in (Hsiao and Chou 2004). A detailed framework for how several factors affect new product design is presented in (Ozer 2005), together with guidelines for reducing their negative impacts. An idea-screening method for new product design is presented in (Lo et al. 2006; Kulak and Kahraman 2005) for a group of decision makers having imprecise, uncertain and inconsistent preferences; this model provides the decision makers with consistent information. A sensitivity analysis for new product design with an implicit value function is presented in (Maddulapalli et al. 2007).

To demonstrate the proposed model, the case study presented by (Besharati et al. 2006) is considered in this research work. Both performance-related and market-related attributes are considered. They studied the design and selection of a power electronic device based on three attributes, namely manufacturing cost, junction temperature and thermal cycles to failure, with ten alternatives.

Step 1 The objective is to assess the alternatives, i.e., ten alternative product designs, based on the considered attributes: junction temperature (JT), cycles to failure (CF) and manufacturing cost (MC) (refer to the objective data presented in Table 3).

Table 3

Objective data of product design

| Design no. | Junction temperature (°C) | Cycles to failure | Manufacturing cost ($) |
|---|---|---|---|
| 1 | 126 | 22,000 | 85 |
| 2 | 105 | 38,000 | 99 |
| 3 | 138 | 14,000 | 65 |
| 4 | 140 | 13,000 | 60 |
| 5 | 147 | 10,600 | 52 |
| 6 | 116 | 27,000 | 88 |
| 7 | 112 | 32,000 | 92 |
| 8 | 132 | 17,000 | 75 |
| 9 | 122 | 23,500 | 85 |
| 10 | 135 | 15,000 | 62 |

Step 2 Normalization of the objective data.

The quantitative values of the product design selection problem are normalized. In this example, CF is a beneficiary attribute whereas JT and MC are non-beneficiary attributes. The normalized values of these attributes are presented in Table 4.

Table 4

Normalized data of the product design selection problem

| Design no. | Junction temperature | Cycles to failure | Manufacturing cost |
|---|---|---|---|
| 1 | 0.8333 | 0.5789 | 0.6118 |
| 2 | 1 | 1 | 0.5223 |
| 3 | 0.7609 | 0.3684 | 0.8 |
| 4 | 0.75 | 0.3421 | 0.8667 |
| 5 | 0.7143 | 0.2789 | 1 |
| 6 | 0.9052 | 0.7105 | 0.5909 |
| 7 | 0.9375 | 0.8421 | 0.5652 |
| 8 | 0.7955 | 0.4474 | 0.6933 |
| 9 | 0.8607 | 0.6184 | 0.6118 |
| 10 | 0.7778 | 0.3947 | 0.8397 |
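Most entries of Table 4 can be reproduced (up to rounding) from the raw data of Table 3: CF is beneficial, so each value is divided by the column maximum; JT and MC are non-beneficial, so the column minimum is divided by each value. A check script with our own variable names:

```python
jt = [126, 105, 138, 140, 147, 116, 112, 132, 122, 135]           # non-beneficiary
cf = [22000, 38000, 14000, 13000, 10600, 27000, 32000, 17000,
      23500, 15000]                                               # beneficiary
mc = [85, 99, 65, 60, 52, 88, 92, 75, 85, 62]                     # non-beneficiary

norm_jt = [min(jt) / v for v in jt]   # min / value for non-beneficial JT
norm_cf = [v / max(cf) for v in cf]   # value / max for beneficial CF
norm_mc = [min(mc) / v for v in mc]   # min / value for non-beneficial MC
```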

Step 3 Determine the relative importance of various attributes under consideration with respect to objective.

Let the decision maker/domain expert assign the following judgments depending upon the requirements. The assigned values for this example are taken from (Rao 2007, 2013).

\(\begin{array}{*{20}{l}} {{\text{JT}}} \\ {{\text{CF}}} \\ {{\text{MC}}} \end{array}\mathop {\left| {\begin{array}{*{20}{l}} 1&{1/3}&{1/5} \\ 3&1&{1/3} \\ 5&3&1 \end{array}} \right|}\limits^{\begin{array}{*{20}{l}} {{\text{JT}}}&{{\text{CF}}}&{{\text{MC}}} \end{array}}\)

CF is considered moderately more important than JT, so the judgment value 1/3 is assigned to JT over CF. MC is considered strongly more important than JT, so the judgment value 1/5 is assigned to JT over MC. MC is considered moderately more important than CF, so the judgment value 1/3 is assigned to CF over MC.

Step 4 Identification of variable for TLBO:
$$\left[ {{X_i}} \right]=\left[ {\frac{1}{3},\frac{1}{5},\frac{1}{3}} \right]$$

From the original pair-wise comparison matrix, it is noted that the distinct judgment values of JT over CF and MC are 1/3 and 1/5, so these are treated as variables. Only a single value, 1/3, is noted for CF over MC, so it is considered the third variable. Only the distinct values of attribute i over all other attributes are taken; this preserves the expert’s judgments and also reduces the number of variables required for TLBO.

Step 5 Deciding lower and upper bounds for variables:

The lower and upper bounds for the identified variables (step 4) are presented in Table 5. Detailed algorithm for deciding these bounds is provided in step 5 of Sect. 3.

Table 5

Lower and upper bounds for \(\left[ {{X_i}} \right]\)

| [Xi] | 1/3 | 1/5 | 1/3 |
|---|---|---|---|
| Lower bound | 1/4 | 1/6 | 1/4 |
| Upper bound | 1/2 | 1/4 | 1/2 |

Step 6 Obtain the optimal relative importance matrix using TLBO: \(\begin{array}{*{20}{l}} {{\text{JT}}} \\ {{\text{CF}}} \\ {{\text{MC}}} \end{array}\mathop {\left| {\begin{array}{*{20}{l}} {1.0000}&{0.4472}&{0.2243} \\ {2.2361}&{1.0000}&{0.4981} \\ {4.4581}&{2.0076}&{1.0000} \end{array}} \right|}\limits^{\begin{array}{*{20}{l}} {{\text{JT}}}&{{\text{CF}}}&{{\text{MC}}} \end{array}}\)

The TLBO algorithm is applied several times to check for any further improvement in the consistency ratio, and it is observed that the global optimum solution for the problem under consideration is obtained in the fifth generation (refer to Fig. 1).

Fig. 1

Variation of consistency ratio (CR) with generations for product design selection problem

Step 7 Evaluate the consistency ratio for the optimized relative importance matrix

| | JT | CF | MC | Weights | A3 | A4 |
|---|---|---|---|---|---|---|
| JT | 1.0000 | 0.4472 | 0.2243 | 0.1299 | 0.3896 | 2.9997 |
| CF | 2.2361 | 1.0000 | 0.4981 | 0.2898 | 0.8693 | 3 |
| MC | 4.4581 | 2.0076 | 1.0000 | 0.5803 | 1.7411 | 3.0002 |

λmax = 3, CI = −1.0432e−05, CR = −2.00628e−05

The geometric means of the rows of the matrix obtained in step 6 yield the normalized weights of the attributes: JT = 0.1299, CF = 0.2898 and MC = 0.5803. The value of λmax is 3 and CR = −2.00628e−05, which is far below the allowed CR value of 0.1 (almost zero). The CR value obtained using TLBO is much smaller than the value of 0.0370 computed from the original pair-wise comparison matrix. Thus, good consistency is obtained by using TLBO on the expert’s pair-wise comparison matrix.
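The reported weights and λmax can be checked by repeating steps (a)–(e) of Sect. 3 on the optimized matrix; a plain-Python sketch with our own variable names:

```python
T = [[1.0000, 0.4472, 0.2243],
     [2.2361, 1.0000, 0.4981],
     [4.4581, 2.0076, 1.0000]]        # optimal matrix from step 6

M = len(T)
gm = [(row[0] * row[1] * row[2]) ** (1.0 / M) for row in T]     # row geometric means
w = [g / sum(gm) for g in gm]                                   # normalized weights
a3 = [sum(T[i][j] * w[j] for j in range(M)) for i in range(M)]  # A3 = T * w
a4 = [a3[i] / w[i] for i in range(M)]                           # A4 = A3 / w
lam_max = sum(a4) / M                                           # close to 3, so CI is ~0
```

The computed weights agree with the reported values to within rounding, and λmax comes out essentially equal to 3.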

Step 8 Evaluate alternative/obtain the overall performance score for the alternatives.

The scores of the product designs are calculated. The score of each alternative Ai obtained with the original pair-wise comparison matrix and with the optimal pair-wise comparison matrix is presented in Table 6. From both sets of scores it is observed that product design 5 remains the first choice, but product design 2 now becomes the second choice, whereas it was previously in fourth position. Product designs 3, 4, 7, 8 and 9 also receive new rankings. As the ranking obtained using TLBO carries an almost zero consistency error, it is treated as the final ranking of the alternatives.

Table 6

Alternative scores and ranks for product design selection

| Product design no. | Score (original pair-wise comparison matrix, step 3) | Rank | Score (optimal pair-wise comparison matrix using TLBO, step 6) | Rank |
|---|---|---|---|---|
| 1 | 0.6265 | 10 | 0.6310 | 10 |
| 2 | 0.6976 | 4 | 0.7245 | 2 |
| 3 | 0.6844 | 5 | 0.6699 | 6 |
| 4 | 0.7190 | 2 | 0.6995 | 4 |
| 5 | 0.7838 | 1 | 0.7540 | 1 |
| 6 | 0.6547 | 7 | 0.6664 | 7 |
| 7 | 0.6757 | 6 | 0.6938 | 5 |
| 8 | 0.6405 | 8 | 0.6353 | 9 |
| 9 | 0.6396 | 9 | 0.6460 | 8 |
| 10 | 0.7177 | 3 | 0.7022 | 3 |

From Fig. 1, it is noted that the global optimal solution is obtained in the fifth generation, where CR is 5.15e−06, which is almost zero. The TLBO algorithm was run for 10 generations with a population size of 50, and it was run ten times to confirm the best result.

4.2 Case study 2: machinability evaluation

Machining is the process of removing material using cutting tools, and manufacturing industries strive for minimum production cost and maximum rate of production. A manufacturing process generally consists of several phases, such as process design, planning, machining and quality control, wherein machinability is related to process planning and machining operations. During product design, the machinability of materials needs to be taken into consideration: the best material is to be chosen from a finite set of materials based on satisfying the design and functionality requirements. Material selection is an important task in the process of product design for reducing overall production costs. Here machinability refers to the selection of the best material from an available set of materials that also satisfies the required product design and functionality; this selection depends on the manufacturer’s interests and other aspects. The machining process is affected by a number of input and output variables. The most common input variables are machine tool, cutting tool, cutting conditions, work material properties and cutting fluid properties, and the output variables are cutting tool life, cutting force, power consumption, metal removal rate, noise, vibrations, cutting temperature, etc. (Rao 2007). From the literature it is noted that the criteria generally used for machinability evaluation of different work materials include tool wear rate, cutting force, tool life, power consumption, etc. (Arunachalam and Mannan 2000; Ong and Chew 2000; Dravid and Utpat 2001; Kim et al. 2002; Boubekri et al. 2003; Rao 2005; Salak et al. 2006; Morehead et al. 2007).

Enache et al. (1995) carried out several experiments on titanium alloys using various cutting tools and presented a mathematical model for machinability. Here six work tool combinations are under consideration, and the evaluation is done with three attributes, namely tool wear rate (TWR), specific energy consumed (SEC) and surface roughness (SR). TWR, SEC and SR are considered non-beneficiary attributes, meaning lower values are desired.

Step 1 The objective is to assess the alternative, i.e., work tools 1–6 based on considered attributes: TWR, SEC and SR (Refer the objective data presented in Table 7).

Table 7
Objective data of material selection problem

| Work tool combination | Tool wear rate (m/min) | Specific energy consumed (N) | Surface roughness (μm) |
|---|---|---|---|
| 1 | 0.0610 | 219.7400 | 5.8000 |
| 2 | 0.0930 | 3523.7200 | 6.3000 |
| 3 | 0.0640 | 2693.2100 | 6.8000 |
| 4 | 0.0280 | 761.4600 | 5.8000 |
| 5 | 0.0340 | 1593.4800 | 5.8000 |
| 6 | 0.0130 | 2849.1500 | 6.2000 |

Work tool combinations: 1. TiAl6V4-P20; 2. TiMo32-P20; 3. TiAl5Fe2.5-P20; 4. TiAl6V4-P20 (TiN); 5. TiAl6V4-K20; 6. TiAl6V4-K20* (K20* is a special form of tool without top, in contrast with the other tools). Cutting conditions: dry, cutting speed 150 m/min, feed 0.15 mm/rev, depth of cut 0.5 mm.

Step 2 Normalization of the objective data:

The quantitative values of the work material selection problem are normalized. In this example, TWR, SEC and SR are non-beneficial attributes, so each value is normalized by dividing the minimum value in its column by the value itself. The normalized values for these attributes are presented in Table 8.

Table 8
Normalized data of attributes of work material selection problem

| Work tool | TWR | SEC | SR |
|---|---|---|---|
| 1 | 0.2131 | 1 | 1 |
| 2 | 0.1398 | 0.0624 | 0.9206 |
| 3 | 0.2031 | 0.0816 | 0.8529 |
| 4 | 0.4643 | 0.2886 | 1 |
| 5 | 0.3824 | 0.1379 | 1 |
| 6 | 1 | 0.0771 | 0.9355 |
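The Step 2 normalization can be reproduced with a few lines of Python. Since TWR, SEC and SR are all non-beneficial, each entry is normalized as min(column)/value; this is a minimal sketch (column order follows Table 7) whose output matches the Table 8 values:

```python
# Non-beneficial attributes: lower is better, so normalize as min(column) / value.
data = {
    "TWR": [0.0610, 0.0930, 0.0640, 0.0280, 0.0340, 0.0130],
    "SEC": [219.74, 3523.72, 2693.21, 761.46, 1593.48, 2849.15],
    "SR":  [5.80, 6.30, 6.80, 5.80, 5.80, 6.20],
}
normalized = {attr: [min(col) / v for v in col] for attr, col in data.items()}
# e.g. normalized["TWR"][0] = 0.0130 / 0.0610 ≈ 0.2131, as in Table 8
```

A beneficial attribute (higher is better) would instead be normalized as value/max(column).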

Step 3 Determine the relative importance of various attributes under consideration with respect to objective:

The relative importance values are assigned by the decision maker/domain expert depending upon the requirements. The values used in this example are taken from Rao (2007) for demonstration purposes only; in practice they are to be decided by a domain expert.

\(\begin{array}{*{20}{l}} {{\text{TWR}}} \\ {{\text{SEC}}} \\ {{\text{SR}}} \end{array}\mathop {\left| {\begin{array}{*{20}{l}} 1& 5& 7 \\ {1/5}& 1& 3 \\ {1/7}& {1/3}& 1 \end{array}} \right|}\limits^{\begin{array}{*{20}{l}} {{\text{TWR}}}& {{\text{SEC}}}& {{\text{SR}}} \end{array}}\)
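The consistency of this original judgment matrix can be checked directly. The sketch below estimates the principal eigenvalue by power iteration and applies Saaty's formulas CI = (λmax − n)/(n − 1) and CR = CI/RI, with RI = 0.58 taken from Saaty's random-index table for n = 3:

```python
# Consistency check for the Step 3 judgment matrix (TWR, SEC, SR).
A = [[1, 5, 7],
     [1/5, 1, 3],
     [1/7, 1/3, 1]]
n = len(A)

# Power iteration for the principal eigenvector (priority weights).
w = [1.0] * n
for _ in range(100):
    w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    s = sum(w)
    w = [v / s for v in w]

lam_max = sum(A[0][j] * w[j] for j in range(n)) / w[0]
CI = (lam_max - n) / (n - 1)
CR = CI / 0.58          # RI = 0.58 for n = 3
# lam_max ≈ 3.0649, CR ≈ 0.0559
```

For these judgments the CR already falls below the 0.1 threshold; the TLBO step that follows drives it much closer to zero while staying near the expert's original values.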

Step 4 Identification of variable for TLBO:

\(\left[ {{X_i}} \right]=[5,~7,~3]\)

Step 5 Deciding lower and upper bounds for variables (Table 9).

Table 9
Lower and upper bounds for variables

| [Xi] | 5 | 7 | 3 |
|---|---|---|---|
| Lower bound | 4 | 4 | 2 |
| Upper bound | 6 | 8 | 4 |
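Steps 4 and 5 reduce the search to the three upper-triangle judgments, since the diagonal is fixed at 1 and the lower triangle is always the reciprocal. A small helper (the function name is illustrative, not from the paper) makes this explicit:

```python
# Rebuild the full 3x3 reciprocal comparison matrix from the variable
# vector X = [a12, a13, a23] identified in Step 4. Only these three
# values are searched by TLBO, within the Table 9 bounds.
def to_matrix(x):
    a12, a13, a23 = x
    return [[1,       a12,     a13],
            [1 / a12, 1,       a23],
            [1 / a13, 1 / a23, 1]]

A = to_matrix([5, 7, 3])   # the expert's original judgments
```

This guarantees every candidate matrix examined during optimization remains a valid reciprocal judgment matrix.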

Step 6 Obtain the optimal relative importance matrix using TLBO:

\(\begin{array}{*{20}{l}} {{\text{TWR}}} \\ {{\text{SEC}}} \\ {{\text{SR}}} \end{array}\mathop {\left| {\begin{array}{*{20}{l}} {1.0000}&{4.0711}&{7.9768} \\ {0.2456}&{1.0000}&{2.0065} \\ {0.1254}&{0.4984}&{1.0000} \end{array}} \right|}\limits^{\begin{array}{*{20}{l}} {{\text{TWR}}}&{{\text{SEC}}}&{{\text{SR}}} \end{array}}\)

Step 7 Evaluate the consistency ratio for the optimized relative importance matrix.

The normalized weights of the attributes are TWR = 0.7289, SEC = 0.1805, and SR = 0.0907. The value of λmax is 3.0001 and CR = 6.0501e−005, which is far below the allowed CR value of 0.1 and almost zero. Thus, good consistency is obtained by using TLBO on the expert's pair-wise comparison matrix.
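The optimization in Steps 4–7 can be sketched as a minimal TLBO loop. This is not the authors' exact implementation: the teaching factor, greedy acceptance, and bound clipping follow the standard TLBO conventions of Rao et al. (2012a), and the objective is simply the CR of the reciprocal matrix rebuilt from the three search variables:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def consistency_ratio(x):
    # CR of the 3x3 reciprocal matrix built from x = [a12, a13, a23];
    # lambda_max via power iteration, RI = 0.58 for n = 3.
    a12, a13, a23 = x
    A = [[1, a12, a13], [1 / a12, 1, a23], [1 / a13, 1 / a23, 1]]
    w = [1.0, 1.0, 1.0]
    for _ in range(50):
        w = [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]
        s = sum(w)
        w = [v / s for v in w]
    lam_max = sum(A[0][j] * w[j] for j in range(3)) / w[0]
    return ((lam_max - 3) / 2) / 0.58

lb, ub = [4, 4, 2], [6, 8, 4]   # Table 9 bounds around the expert's [5, 7, 3]
clip = lambda x: [min(max(v, l), u) for v, l, u in zip(x, lb, ub)]
pop = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(50)]

for gen in range(10):
    # Teacher phase: move each learner toward the best solution, away from the mean.
    teacher = min(pop, key=consistency_ratio)
    mean = [sum(p[d] for p in pop) / len(pop) for d in range(3)]
    for i, x in enumerate(pop):
        Tf = random.choice([1, 2])   # teaching factor
        cand = clip([x[d] + random.random() * (teacher[d] - Tf * mean[d])
                     for d in range(3)])
        if consistency_ratio(cand) < consistency_ratio(x):   # greedy acceptance
            pop[i] = cand
    # Learner phase: each learner moves toward a better peer, away from a worse one.
    for i, x in enumerate(pop):
        j = random.randrange(len(pop))
        if j == i:
            continue
        step = 1 if consistency_ratio(pop[j]) < consistency_ratio(x) else -1
        cand = clip([x[d] + step * random.random() * (pop[j][d] - x[d])
                     for d in range(3)])
        if consistency_ratio(cand) < consistency_ratio(x):
            pop[i] = cand

best = min(pop, key=consistency_ratio)
```

Because the bounds keep each variable near the expert's original judgment, the best solution both preserves the judgments and drives the CR toward zero, mirroring the Step 6 matrix.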

Step 8 Evaluate the alternatives, i.e., obtain the overall performance score for each alternative.

The scores of the work tools are calculated using Eq. 8. A score comparison between the original pair-wise matrix and the optimal pair-wise matrix obtained using TLBO is presented in Table 10. From Table 10 it is observed that work tool 6 remains the first choice, and all other rankings are also unchanged.

Table 10
Alternatives scores and rank for material selection problem

| Work tool combination | Score (original matrix, Step 3) | Rank | Score (optimal TLBO matrix, Step 6) | Rank |
|---|---|---|---|---|
| 1 | 0.4251 | 3 | 0.4265 | 3 |
| 2 | 0.1884 | 6 | 0.1966 | 6 |
| 3 | 0.2328 | 5 | 0.2401 | 5 |
| 4 | 0.4746 | 2 | 0.4811 | 2 |
| 5 | 0.3863 | 4 | 0.3942 | 4 |
| 6 | 0.8209 | 1 | 0.8276 | 1 |
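The Step 8 scores can be reproduced as a weighted sum of the normalized attribute values (Table 8) with the optimized weights from Step 7; small last-digit differences from Table 10 come from rounding of the tabulated inputs:

```python
# Overall performance score = sum over attributes of weight * normalized value,
# using the TLBO-optimized weights from Step 7 and the Table 8 data.
weights = {"TWR": 0.7289, "SEC": 0.1805, "SR": 0.0907}
normalized = {
    "TWR": [0.2131, 0.1398, 0.2031, 0.4643, 0.3824, 1.0000],
    "SEC": [1.0000, 0.0624, 0.0816, 0.2886, 0.1379, 0.0771],
    "SR":  [1.0000, 0.9206, 0.8529, 1.0000, 1.0000, 0.9355],
}
scores = [sum(weights[a] * normalized[a][i] for a in weights) for i in range(6)]
# Work tool numbers (1-6) sorted best-first by score.
ranking = sorted(range(1, 7), key=lambda t: -scores[t - 1])
# ranking == [6, 4, 1, 5, 3, 2], matching the ranks in Table 10
```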

From Fig. 2, it is noted that the global optimal solution is obtained in the fifth generation, where the CR is 6.05e−005, which is almost zero. The TLBO algorithm was run for 10 generations with a population size of 50, and was executed ten times to confirm the best result.

The proposed approach can be applied to many decision making problems; some industrial applications are as follows (Rao 2007, 2011; Borkar et al. 2016): evaluation of product design, material selection for a given engineering application, machinability evaluation of work materials, cutting fluid selection for a given machining application, evaluation and selection of modern machining methods, evaluation of flexible manufacturing systems, machine selection in a flexible manufacturing cell, failure cause analysis of machine tools, robot selection for a given industrial application, integrated project evaluation and selection, selection of a rapid prototyping process in rapid product development, optimal route selection, etc.

Fig. 2

Variation of consistency ratio (CR) with generations for work tool selection problem

5 Conclusion

This paper presents a simple and effective method to improve an inconsistent comparison matrix so that an almost zero consistency ratio (CR) is obtained. The proposed approach is built on the teaching learning based optimization (TLBO) algorithm, which is free of algorithm-specific parameters and has proved successful in solving several optimization problems. To improve the CR while preserving the judgment values in the pair-wise comparison matrix, a modality of TLBO is proposed and applied. A variable identification algorithm is proposed in this research work, and a procedure for selecting bounds for the variables is also proposed. The correctness of the proposed method is demonstrated by applying it to two real world case studies reported in the literature, namely new product design selection and material selection (work tool combination). In the product design selection problem, alternative 5 was ranked first by both the conventional and the proposed approaches, but for rank 2, alternative 2 was selected instead of alternative 4. The proposed approach can be applied to a variety of alternative selection problems, as enlisted above. The results show that the TLBO algorithm is a potential method for repairing inconsistent pair-wise comparison matrices in AHP. Other intelligent algorithms, such as particle swarm optimization, ant colony optimization, genetic algorithms, or any other advanced optimization technique, could also be used to solve these selection problems.

References

  1. Arunachalam R, Mannan M (2000) Machinability of nickel-based high temperature alloys. Mach Sci Technol 4:127–168. doi: 10.1080/10940340008945703
  2. Besharati B, Azarm S, Kannan P (2006) A decision support system for product design selection: a generalized purchase modeling approach. Decis Support Syst 42:333–350. doi: 10.1016/j.dss.2005.01.002
  3. Borkar P, Sarode M, Malik L (2016) Acoustic signal based optimal route selection problem: performance comparison of multi-attribute decision making methods. KSII Trans Internet Inf Syst 10(2):647–669
  4. Boubekri N, Rodriguez J, Asfour S (2003) Development of an aggregate indicator to assess the machinability of steels. J Mater Process Technol 134:159–165. doi: 10.1016/s0924-0136(02)00446-6
  5. Cao D, Leung L, Law J (2008) Modifying inconsistent comparison matrix in analytic hierarchy process: a heuristic approach. Decis Support Syst 44:944–953. doi: 10.1016/j.dss.2007.11.002
  6. Chakraborty P, Das S, Roy G, Abraham A (2011) On convergence of the multi-objective particle swarm optimizers. Inf Sci 181:1411–1425. doi: 10.1016/j.ins.2010.11.036
  7. Chen S, Hwang C (1992) Fuzzy multiple attribute decision making. Lect Notes Econ Math Syst. doi: 10.1007/978-3-642-46768-4
  8. Costa J (2011) A genetic algorithm to obtain consistency in analytic hierarchy process. BJOPM 8:55–64. doi: 10.4322/bjopm.2011.003
  9. Dong Y, Xu Y, Li H (2008) On consistency measures of linguistic preference relations. Eur J Oper Res 189:430–444. doi: 10.1016/j.ejor.2007.06.013
  10. Dorigo M, Stutzle T (2004) Ant colony optimization. MIT Press, Cambridge
  11. Dravid SV, Utpat LS (2001) Machinability evaluation based on the surface finish criterion. J Inst Eng (India) Prod Eng Div 81:47–51
  12. Efren MM, Mariana EMV, Rubi DCGR (2010) Differential evolution in constrained numerical optimization: an empirical study. Inf Sci 180:4223–4262
  13. Enache S, Strjescu E, Opran C et al (1995) Mathematical model for the establishment of the materials machinability. CIRP Ann Manuf Technol 44:79–82. doi: 10.1016/s0007-8506(07)62279-3
  14. Farmer J, Packard N, Perelson A (1986) The immune system, adaptation, and machine learning. Phys D 22:187–204. doi: 10.1016/0167-2789(86)90240-x
  15. Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76:60–70
  16. Girsang A, Tsai C, Yang C (2014a) Ant algorithm for modifying an inconsistent pairwise weighting matrix in an analytic hierarchy process. Neural Comput Appl 26:313–327. doi: 10.1007/s00521-014-1630-0
  17. Girsang AS, Tsai CW, Yang CS (2014b) Ant colony optimization for reducing the consistency ratio in comparison matrix. In: Proceedings of the International Conference on Advances in Engineering and Technology (ICAET’14), pp 577–582
  18. Goldberg DE (1989) Genetic algorithms in search, optimization, and machine learning. Addison-Wesley, Reading
  19. Haque B, Belecheanu R, Barson R, Pawar K (2000) Towards the application of case based reasoning to decision-making in concurrent product development (concurrent engineering). Knowl Based Syst 13:101–112. doi: 10.1016/s0950-7051(00)00051-4
  20. Hsiao S, Chou J (2004) A creativity-based design process for innovative product design. Int J Ind Ergon 34:421–443. doi: 10.1016/j.ergon.2004.05.005
  21. Iida Y (2009) Ordinality consistency test about items and notation of a pairwise comparison matrix in AHP. In: Proceedings of the International Symposium on the Analytic Hierarchy Process
  22. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report-TR06. Erciyes University
  23. Karen A, Yildiz A, Kaya N et al (2006) Hybrid approach for genetic algorithm and Taguchi’s method based design optimization in the automotive industry. Int J Prod Res 44:4897–4914. doi: 10.1080/00207540600619932
  24. Keeney R, Raiffa H (1976) Decisions with multiple objectives: preferences and value tradeoffs. Wiley, New York
  25. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, pp 1942–1948
  26. Kim K, Kang M, Kim J et al (2002) A study on the precision machinability of ball end milling by cutting speed optimization. J Mater Process Technol 130–131:357–362. doi: 10.1016/s0924-0136(02)00824-5
  27. Kulak O, Kahraman C (2005) Multi-attribute comparison of advanced manufacturing systems using fuzzy vs. crisp axiomatic design approach. Int J Prod Econ 95:415–424. doi: 10.1016/j.ijpe.2004.02.009
  28. Li H, Ma L (2007) Detecting and adjusting ordinal and cardinal inconsistencies through a graphical and optimal approach in AHP models. Comput Oper Res 34:780–798. doi: 10.1016/j.cor.2005.05.010
  29. Lin C, Wang W, Yu W (2008) Improving AHP for construction with an adaptive AHP approach (A3). Autom Constr 17:180–187. doi: 10.1016/j.autcon.2007.03.004
  30. Lin M, Lee Y, Ho T (2011) Applying integrated DEA/AHP to evaluate the economic performance of local governments in China. Eur J Oper Res 209:129–140. doi: 10.1016/j.ejor.2010.08.006
  31. Liu J, Tang L (1999) A modified genetic algorithm for single machine scheduling. Comput Ind Eng 37:43–46. doi: 10.1016/s0360-8352(99)00020-0
  32. Lo C, Wang P, Chao K (2006) A fuzzy group-preferences analysis method for new-product development. Expert Syst Appl 31:826–834. doi: 10.1016/j.eswa.2006.01.005
  33. Maddulapalli A, Azarm S, Boyars A (2007) Sensitivity analysis for product design selection with an implicit value function. Eur J Oper Res 180:1245–1259. doi: 10.1016/j.ejor.2006.03.055
  34. Morehead M, Huang Y, Ted Hartwig K (2007) Machinability of ultrafine-grained copper using tungsten carbide and polycrystalline diamond tools. Int J Mach Tools Manuf 47:286–293. doi: 10.1016/j.ijmachtools.2006.03.014
  35. Ong S, Chew L (2000) Evaluating the manufacturability of machined parts and their setup plans. Int J Prod Res 38:2397–2415. doi: 10.1080/00207540050031832
  36. Ozer M (2005) Factors which influence decision making in new product evaluation. Eur J Oper Res 163:784–801. doi: 10.1016/j.ejor.2003.11.002
  37. Passino K (2002) Biomimicry of bacterial foraging for distributed optimization and control. IEEE Control Syst Mag 22:52–67. doi: 10.1109/mcs.2002.1004010
  38. Peng Y, Kou G, Wang G et al (2011a) Ensemble of software defect predictors: an AHP-based evaluation method. Int J Inf Tech Decis Mak 10:187–206. doi: 10.1142/s0219622011004282
  39. Peng Y, Wang G, Kou G, Shi Y (2011b) An empirical study of classification algorithm evaluation for financial risk prediction. Appl Soft Comput 11:2906–2915. doi: 10.1016/j.asoc.2010.11.028
  40. Peng Y, Wang G, Wang H (2012) User preferences based software defect detection algorithms selection using MCDM. Inf Sci 191:3–13. doi: 10.1016/j.ins.2010.04.019
  41. Rao R (2005) Machinability evaluation of work materials using a combined multiple attribute decision making method. Int J Adv Manuf Technol 28:221–227
  42. Rao R (2007) Decision making in the manufacturing environment using graph theory and fuzzy multiple attribute decision making. Springer series in advanced manufacturing
  43. Rao R (2011) Advanced modeling and optimization of manufacturing processes. Springer series in advanced manufacturing. doi: 10.1007/978-0-85729-015-1
  44. Rao R (2013a) Decision making in manufacturing environment using graph theory and fuzzy multiple attribute decision making methods, vol 2. Springer series in advanced manufacturing
  45. Rao R (2013b) Decision making in manufacturing environment using graph theory and fuzzy multiple attribute decision making methods. Springer series in advanced manufacturing. doi: 10.1007/978-1-4471-4375-8
  46. Rao R (2015) Teaching learning based optimization and its engineering applications. Springer, London
  47. Rao R (2016) Jaya: a simple and new optimization algorithm for solving constrained and unconstrained optimization problems. Int J Ind Eng Comput 7:19–34
  48. Rao R, Patel V (2012b) An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems. Int J Ind Eng Comput 3:535–560. doi: 10.5267/j.ijiec.2012.03.007
  49. Rao R, Patel V (2013b) Multi-objective optimization of heat exchangers using a modified teaching–learning-based optimization algorithm. Appl Math Modell 37:1147–1162. doi: 10.1016/j.apm.2012.03.043
  50. Rao R, Patel V (2013c) Multi-objective optimization of two stage thermoelectric cooler using a modified teaching–learning-based optimization algorithm. Eng Appl Artif Intell 26:430–445
  51. Rao R, Savsani V, Vakharia D (2012a) Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems. Inf Sci 183:1–15. doi: 10.1016/j.ins.2011.08.006
  52. Saaty TL (2001) Deriving the AHP 1–9 scale from first principles. In: ISAHP 2001 Proceedings, Bern
  53. Saaty TL (2003) Decision-making with the AHP: why is the principal eigenvector necessary. Eur J Oper Res 145(1):85–91
  54. Saaty TL (2005) Theory and applications of the analytic network process: decision making with benefits, opportunities, costs and risks. RWS Publications, Pittsburgh (ISBN 1-888603-06-2)
  55. Saaty TL (2006) The analytic network process, decision making with the analytic network process. Int Ser Oper Res Manag Sci 95:1–26
  56. Šalak A, Vasilko K, Selecká M, Danninger H (2006) New short time face turning method for testing the machinability of PM steels. J Mater Process Technol 176:62–69
  57. Shi W, Shen Q, Kong W, Ye B (2007) QSAR analysis of tyrosine kinase inhibitor using modified ant colony optimization and multiple linear regression. Eur J Med Chem 42:81–86
  58. Suh NP (2001) Axiomatic design: advances and applications. Oxford University Press, New York
  59. Yang I, Wang W, Yang T (2012) Automatic repair of inconsistent pairwise weighting matrices in analytic hierarchy process. Autom Constr 22:290–297. doi: 10.1016/j.autcon.2011.09.004
  60. Yildiz AR (2009) A novel hybrid immune algorithm for global optimization in design and manufacturing. Rob Comput Integr Manuf 25:261–270

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. Department of Computer Science and Engineering, GHRCE, Nagpur, India
  2. Department of Computer Science and Engineering, Government Polytechnic Yawatmal, Yawatmal, India
