1 Introduction

High environmental pollution and low renewable material resources are just a few disadvantages of the traditional manufacturing industry [1]. New manufacturing and remanufacturing approaches, such as cloud manufacturing and remanufacturing and smart manufacturing and remanufacturing, have shown great promise as the future of the manufacturing industry [2]. These new approaches can exploit new manufacturing resources and capabilities [3]. Mishandling of end-of-life (EOL) products in the traditional manufacturing industry over many years has led to the waste of resources and severe environmental problems [4]. Remanufacturing of EOL products addresses both their environmental and economic aspects by reusing them in an optimal way [5]. The first step of every remanufacturing process is disassembly, which has mostly been carried out manually owing to its complexity.

Although disassembly process planning may appear to be the reverse of assembly planning in manufacturing, there are significant differences, especially in the purpose of disassembly, which is to retrieve demanded parts. This requires unique and different approaches. Disassembly planning has to deal with necessary and unnecessary parts at the same time, which makes it much more complicated than assembly planning in practice [6].

The majority of disassembly planning studies can be categorised into three classes: (a) increasing the value of the parts that need to be disposed of, (b) minimising the cost of retrieving specific parts and (c) finding an efficient full disassembly sequence of the operations of a product [6].

Gupta and Taleb [7] focused on increasing the value of retrieved parts by minimising the disassembly costs relative to the value of the disassembled parts. In order to optimise disassembly operation sequences, Lambert introduced a mathematical model based on the AND/OR graph. Although Lambert's model efficiently finds optimum sequences, it does not consider reusable part demand, which is relevant to remanufacturing companies [8].

In the remanufacturing context, where both disassembly operation sequences and costs have to be optimised, Barba et al. [9] investigated a lot-sizing model and studied its effect on costs. Veerakamolmal et al. [10] introduced a structure known as the disassembly tree (DT) to describe the relationships and the order of disassembly. The main advantage of this method is that reusable parts can be demanded at any point, which makes it applicable in remanufacturing, although the size of the DT increases dramatically as the number of parts grows.

The optimisation of disassembly planning, especially as the number of parts rises, is an NP-hard problem. One of the most effective tools for this kind of problem is the genetic algorithm (GA), which was proposed by Kongar and Gupta [11] to optimise disassembly planning. GA is a heuristic approach that solves the problem quickly and effectively, and because the technique can be adapted with relatively little mathematics, it is becoming increasingly popular in this field. McGovern et al. [12] applied this method to balance disassembly lines. Parsa and Saadat [13] investigated automated disassembly using the genetic algorithm and proposed a model for robotic disassembly sequence optimisation. Other optimisation methods, such as the particle swarm optimisation algorithm, have been used to solve multi-objective optimisation problems [14].

Although disassembly requests are expanding and disassembly planning techniques are becoming more efficient, manual disassembly is becoming more difficult and inefficient because of environmental regulations and the increasing number of products that require disassembly [15]. Therefore, robotised disassembly processes are the focus of new research and are becoming increasingly essential in the disassembly industry. Studies on the robotised disassembly of electronic products started in the early 1990s; the first industrial application in this field was a robot assistant for telephone disassembly [16]. Furthermore, Torres et al. [17] proposed a robotised disassembly cell that can handle non-destructive disassembly with some degree of automation. Pomares et al. [18] followed this work and proposed an object-oriented model needed to carry out a disassembly process. Gil et al. [19] used co-operative robots to develop a flexible multi-sensorial system for an autonomous disassembly process. Torres et al. [20] extended this work and proposed a task planner using decision trees.

Current demand for more effective disassembly strategies makes full disassembly uneconomical [21,22,23,24]. To find an optimal solution to planning problems, selective disassembly, which aims to disassemble a product only partly in order to retrieve the demanded components, is receiving more attention in current studies. ElSayed et al. [25] investigated an intelligent automated disassembly cell that disassembled products online and selectively; they modelled an online genetic algorithm (GA) for selective disassembly to optimise the disassembly sequences. A selective disassembly planning method for waste electrical and electronic equipment (WEEE) was proposed by Li et al., who developed a method based on particle swarm optimisation with customisable decision-making and applied it to WEEE to maximise the economic profit and reduce environmental problems [26].

In this work, new parameters and objectives for selective disassembly planning are first introduced. A genetic algorithm using these new parameters and objectives is then employed to find the optimum solution. Finally, the proposed method is tested on industrial case studies to verify its effectiveness, and the results are discussed.

2 Methodology

2.1 Introducing new parameters for intelligent selective disassembly planning

A full disassembly plan allows a product to be completely disassembled. However, in realistic industrial problems, a product does not need to be fully disassembled, and doing so can be inefficient: a full disassembly plan removes all the individual parts of the product regardless of the requirements of the disassembly process. Selective disassembly planning takes the disassembly aims and objectives into account, which makes it efficient and practical. If the total number of operations required for full disassembly is n and the number of operations required for selective disassembly is m, then m ≤ n. In selective disassembly, some parts are designated as target parts, and the disassembly process continues until the target parts are disassembled.

Since the early 1990s, researchers have used intelligent heuristic methods to solve disassembly sequencing problems, and the majority of studies follow this direction. The most widely accepted methodology among practitioners is the “graph model + solving method”; although there are some differences between individual methods, the main idea is the same. Based on this methodology, disassembly planning problems can be divided into three sub-problems: (a) product disassembly modelling, (b) sequence generation and (c) disassembly sequence optimisation [27]. Most research on disassembly sequence optimisation has focused on the cost and time of the disassembly operations as the main optimisation parameters. However, using operation times and costs as the main parameters can lead to inaccurate and unrealistic results. First, most studies estimate the time of a disassembly operation, since measuring actual operation times requires disassembling the product completely; moreover, nominally identical EOL products can be in different conditions, which leads to different times for the same operation. Secondly, the same disassembly operation has a different disassembly time in different operation sequences, and measuring or estimating all of these times can be problematic.

Therefore, in this work, the Disassembly Handling Index (DHI) and Disassembly Operation Index (DOI) are introduced to capture the difficulty and feasibility of the disassembly operations as the main optimisation parameters instead of time and cost. The main advantage of this approach is that the disassembly operations can be evaluated easily and quickly without the need to disassemble the EOL product, and the evaluation can be performed individually for identical EOL products in different conditions. Furthermore, the Disassembly Demand Index (DDI) is defined to prioritise the demand for each component; this parameter indicates the level of demand for each component of the product. In addition to these parameters, the Disassembly Cost Index (DCI) is defined to account for the costs and times of the disassembly operations. In a disassembly sequence, the operation for component i is denoted Op(i) and the position of Op(i) in the sequence is denoted Pos(Op(i)).

2.1.1 Disassembly handling index

The first parameter of the new approach to intelligent disassembly planning is the Disassembly Handling Index (DHI). The DHI is found by analysing the part's shape, size, weight and orientation, the difficulty of handling the part and where it fits into the product. In order to analyse the disassembly handling of each part, a table that categorises the product parts is defined, shown in Table 1 [28]. This categorisation is based on the part's geometric characteristics and on how the part is handled during a disassembly operation. The DHI analyses each part of the product using three parameters: A, the size of the part; B, the weight of the part; and C, the shape of the part. As can be seen from Table 1, each characteristic of a part is allocated a score; a higher score means that the part is more difficult to handle during a disassembly operation. First, the Part Handling Index (PHI) for each part P(i) is defined as:

Table 1 Disassembly handling categories and scores
$$ \mathrm{PHI}\left(\mathrm{P}(i)\right)=\mathrm{A}+\mathrm{B}+\mathrm{C} $$
(1)

where A, B and C can be found using Table 1. For example, for P(1), a part that is difficult to grasp (A = 4), light (B = 2) and symmetric (C = 0.8), PHI(P(1)) = 6.8.

Then DHI for a set of operations during the disassembly process is calculated as below:

$$ \mathrm{DHI}={\sum}_{i=1}^n\mathrm{PHI}\left(P(i)\right)/\mathrm{Pos}\left(P(i)\right) $$
(2)

where Pos(P(i)) is the position of the part in the disassembly sequence. For example, if P(1) with PHI(P(1)) is disassembled by the third operation, its PHI is divided by 3. A smaller DHI indicates that the parts that are easier to handle are disassembled first and ensures that unnecessary parts with high PHI will not be disassembled. Dividing PHI(P(i)) by Pos(P(i)) drives the algorithm to arrange the disassembly process so that parts with smaller PHI are disassembled earlier. For example, consider components {P1, P2, P3} with PHI values {2, 4, 6} respectively, and two possible disassembly sequences, seq1 = {P2, P1, P3} and seq2 = {P3, P2, P1}. For seq1, DHI = 4/1 + 2/2 + 6/3 = 7 and, for seq2, DHI = 6/1 + 4/2 + 2/3 = 8.67. DHI1 is smaller than DHI2 because in seq1 the component with the highest PHI, i.e. P3, is at the end of the sequence, whereas in seq2, which has the higher DHI, P3 is at the beginning.
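A minimal Python sketch of Eqs. (1) and (2), reproducing the worked example above, is given below; the function and variable names are illustrative and not part of the proposed method.

```python
# Minimal sketch of the PHI/DHI calculation (Eqs. 1 and 2); names are illustrative.

def phi(a: float, b: float, c: float) -> float:
    """Part Handling Index: sum of the size (A), weight (B) and shape (C) scores from Table 1."""
    return a + b + c

def dhi(phi_scores: dict, sequence: list) -> float:
    """Disassembly Handling Index: position-weighted sum of PHI over a disassembly sequence."""
    return sum(phi_scores[part] / pos for pos, part in enumerate(sequence, start=1))

phi_scores = {"P1": 2, "P2": 4, "P3": 6}          # PHI values from the worked example
print(dhi(phi_scores, ["P2", "P1", "P3"]))        # seq1 -> 7.0
print(dhi(phi_scores, ["P3", "P2", "P1"]))        # seq2 -> 8.67
```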

2.1.2 Disassembly operation index

The Disassembly Operation Index (DOI) analyses the difficulty of each operation in the disassembly process. In order to calculate the DOI, the main disassembly operations are grouped into four categories: A, disassembly force; B, requirement of tools for disassembly; C, accessibility of joints/grooves; and D, positioning. Each category has several sub-categories, each of which is given a score, as shown in Table 2 [28]. A higher score indicates that an operation is more difficult to carry out in the disassembly process. The DOI for each operation Op(i) is computed as:

$$ \mathrm{DOI}\left(\mathrm{Op}(i)\right)=\mathrm{A}+\mathrm{B}+\mathrm{C}+\mathrm{D} $$
(3)

where A, B, C and D can be found using Table 2. For instance, Op(1), a pull operation performed by hand with moderate effort (A = 1), requiring a common tool (B = 2), on a plane surface (C = 1) and with symmetry and a high-accuracy positioning requirement (D = 5), has DOI(Op(1)) = 9.

Table 2 Disassembly operation categories and scores

Disassembly Process Index (DPI) for all operations in a selective disassembly process is computed as below:

$$ \mathrm{DPI}={\sum}_{i=1}^n\mathrm{DOI}\left(\mathrm{Op}(i)\right)/\mathrm{Pos}\left(\mathrm{Op}(i)\right) $$
(4)

where Pos(Op(i)) is the position of the operation in the disassembly sequence. For example, if Op(1) with DOI(Op(1)) is the third operation in the sequence, its DOI is divided by 3. A smaller DPI is beneficial, since it means that the operations with lower DOI are carried out first, and it ensures that unnecessary and difficult operations will not be carried out. Dividing DOI(Op(i)) by Pos(Op(i)) drives the algorithm to arrange the disassembly process so that operations with smaller DOI are started earlier.

2.1.3 Disassembly demand index

In selective disassembly planning, the goal is to disassemble specifically targeted components first without disassembling the product completely. In this work, the Disassembly Demand Index (DDI) is defined to optimise the disassembly process so that the most demanded components are disassembled first without disassembling unwanted components. The level of demand of each component is therefore categorised into four levels, i.e. low, medium, high and very high, represented quantitatively by (5, 3, 1, 0) respectively. The DDI is calculated using the following equation:

$$ \mathrm{DDI}={\sum}_{i=1}^n\mathrm{LD}(i)/\mathrm{Pos}(i) $$
(5)

where LD(i) is the level of demand of component i and Pos(i) is the position of component i in the disassembly sequence. For example, if components 1 to 4 are disassembled at positions (3, 1, 4, 2) and have LD values of (5, 3, 0, 2) respectively, DDI = 5/3 + 3/1 + 0/4 + 2/2 = 5.67.
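A short sketch of Eq. (5), reading the tuple (3, 1, 4, 2) as the positions of components 1 to 4, reproduces the DDI value above; names are illustrative.

```python
# Sketch of the DDI calculation (Eq. 5) for the worked example; the tuple
# (3, 1, 4, 2) gives the position of components 1-4 in the disassembly sequence.

def ddi(demand_levels, positions):
    """Disassembly Demand Index: sum of LD(i)/Pos(i) over all components."""
    return sum(ld / pos for ld, pos in zip(demand_levels, positions))

print(round(ddi([5, 3, 0, 2], [3, 1, 4, 2]), 2))  # -> 5.67
```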

2.1.4 Disassembly cost index

The Disassembly Cost Index (DCI) is defined to analyse, in terms of time, the cost of each operation needed to disassemble a specific part of the product. It is used to determine the cost of each operation so that operations with lower cost are carried out earlier. The DCI can include different operation costs, all of which are converted into a time unit. In this research, the DCI parameters are as follows:

  • Operation time:

The basic time required to disassemble part P(i) using operation Op(i) is denoted OT(Op(i)). As the DCI needs to have the same effect as the other parameters in the optimisation algorithm, the operation times are normalised as follows:

$$ {\mathrm{OT}}_{\mathrm{N}}\left( Op(i)\right)=\frac{\mathrm{OT}\left(\mathrm{Op}(i)\right)-{\mathrm{OT}}_{\mathrm{min}}}{{\mathrm{OT}}_{\mathrm{max}}-{\mathrm{OT}}_{\mathrm{min}}} $$
(6)

where OTN(Op(i)) is the normalised time for each operation and OTmax and OTmin represent the maximum and minimum operation times of all possible disassembly operations respectively.

  • Tool change:

The other parameter of the DCI is tool change. In this work, a tool change for an operation is penalised in a time unit:

$$ \mathrm{TC}\left(\mathrm{Op}(i)\right)=\left\{\begin{array}{ll}1\ \left(\mathrm{s}\right)& \mathrm{if}\ \mathrm{tool}\ \mathrm{changed}\\ 0\ \left(\mathrm{s}\right)& \mathrm{if}\ \mathrm{tool}\ \mathrm{not}\ \mathrm{changed}\end{array}\right. $$
(7)

where s represents a time unit in second.

  • Disposal cost:

Finally, the last factor is the disposal cost of a part after it is disassembled. If, after a disassembly operation, a part of the product is not reusable and incurs a cost to be disposed of, the operation is penalised in a time unit:

$$ \mathrm{DC}\left(\mathrm{Op}(i)\right)=\left\{\begin{array}{ll}0\ \left(\mathrm{s}\right)& \mathrm{if}\ \mathrm{part}\ \mathrm{is}\ \mathrm{reusable}\\ 1\ \left(\mathrm{s}\right)& \mathrm{if}\ \mathrm{part}\ \mathrm{is}\ \mathrm{not}\ \mathrm{reusable}\end{array}\right. $$
(8)

The total DCI for the disassembly process can now be calculated:

$$ \mathrm{DCI}={\sum}_{i=1}^n\left[{\mathrm{OT}}_{\mathrm{N}}\left(\mathrm{Op}(i)\right)+\mathrm{TC}\left(\mathrm{Op}(i)\right)+\mathrm{DC}\left(\mathrm{Op}(i)\right)\right]/\mathrm{Pos}\left(\mathrm{Op}(i)\right) $$
(9)
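The following sketch combines Eqs. (6) to (9). The three operations and their times are hypothetical, and the normalisation bounds are taken from this small set rather than from all possible disassembly operations.

```python
# Sketch of the DCI calculation (Eqs. 6-9); the operation data below are hypothetical.

def normalise(t, t_min, t_max):
    """Eq. (6): scale a basic operation time into [0, 1]."""
    return (t - t_min) / (t_max - t_min)

def dci(operations):
    """Eq. (9): position-weighted sum of normalised time, tool-change and disposal penalties."""
    times = [op["time"] for op in operations]
    t_min, t_max = min(times), max(times)
    total = 0.0
    for pos, op in enumerate(operations, start=1):
        ot_n = normalise(op["time"], t_min, t_max)
        tc = 1.0 if op["tool_changed"] else 0.0   # Eq. (7): 1 s penalty when the tool is changed
        dc = 0.0 if op["reusable"] else 1.0       # Eq. (8): 1 s penalty when the part is not reusable
        total += (ot_n + tc + dc) / pos
    return total

ops = [
    {"time": 10, "tool_changed": False, "reusable": True},
    {"time": 25, "tool_changed": True,  "reusable": True},
    {"time": 40, "tool_changed": False, "reusable": False},
]
print(round(dci(ops), 2))  # -> 1.42
```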

2.2 Improved genetic algorithm optimisation

2.2.1 Disassembly representation using a hybrid graph model

Graph-based methods such as the AND/OR graph, introduced by Lambert and Homem, are widely used to represent the space of disassembly sequences and precedences; however, the number of nodes rises dramatically as the number of components grows. For example, the AND/OR graph contains 16,383 nodes if the product consists of 14 components [29]. In this work, a hybrid graph method is used to represent the space of disassembly sequences and precedences [30]. The hybrid graph method describes the topological structure of the product in the form of a graph. It defines the constraint relationships between the components of the product using a four-tuple, G = {V; Ef; Efc; Ec}. In this four-tuple, the node set V = {v1, v2, v3, ..., vn} defines the minimum disassembly component units (parts or sub-assemblies), where n is the number of units. Ef = {ef1, ..., efi} represents the contact constraints between two components and is shown using an undirected solid line. Efc = {efc1, ..., efcj} represents disassembly contact constraints with precedence between two components and is shown using a directed solid line; the direction of the line defines the disassembly precedence between the components. Finally, Ec = {ec1, ..., eck}, represented by a directed dashed line, defines the disassembly precedence constraints between two non-connected components.
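A minimal data-structure sketch of the four-tuple G = {V; Ef; Efc; Ec} could look as follows; the class and field names are illustrative, not the paper's implementation.

```python
# Minimal sketch of the hybrid graph four-tuple G = {V, Ef, Efc, Ec}; names are illustrative.
from dataclasses import dataclass, field

@dataclass
class HybridGraph:
    nodes: set = field(default_factory=set)         # V: minimum disassembly units (parts/sub-assemblies)
    contact: set = field(default_factory=set)       # Ef: undirected contact constraints {vi, vj}
    contact_prec: set = field(default_factory=set)  # Efc: directed contact + precedence constraints (vi, vj)
    prec: set = field(default_factory=set)          # Ec: directed precedence between non-connected units (vi, vj)

g = HybridGraph(
    nodes={1, 2, 3},
    contact={frozenset({1, 2})},   # units 1 and 2 are in contact
    contact_prec={(1, 3)},         # unit 1 is in contact with 3 and must be removed first
    prec={(2, 3)},                 # unit 2 must be removed before 3 although they are not connected
)
```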

2.2.2 Disassembly feasibility and constraint matrices

The relation between product components and disassembly constraints are represented mathematically by two matrices:

  1. Components relation matrix, denoted by Cr, which represents the relationships between the product components:

$$ {C}_r={\left\{{cr}_{ij}\right\}}_{n\times n}=\left[\begin{array}{cccc}{cr}_{11}& {cr}_{12}& \cdots & {cr}_{1n}\\ {cr}_{21}& {cr}_{22}& \cdots & {cr}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {cr}_{n1}& {cr}_{n2}& \cdots & {cr}_{nn}\end{array}\right] $$
(10)

where i, j = 1, 2, 3, …, n and

$$ {cr}_{ij}=\left\{\begin{array}{ll}1& \mathrm{if}\ \left({v}_i,{v}_j\right)\in {E}_f\\ 2& \mathrm{if}\ \left({v}_i,{v}_j\right)\ \mathrm{or}\ \left({v}_j,{v}_i\right)\in {E}_{fc}\\ 0& \mathrm{otherwise}\end{array}\right. $$
(11)
  2. Disassembly constraints matrix, denoted by Dc, which represents the disassembly constraints mathematically:

$$ {\mathrm{D}}_{\mathrm{c}}={\left\{{\mathrm{dc}}_{ij}\right\}}_{n\times n}=\left[\begin{array}{cccc}{\mathrm{dc}}_{11}& {\mathrm{dc}}_{12}& \cdots & {\mathrm{dc}}_{1n}\\ {\mathrm{dc}}_{21}& {\mathrm{dc}}_{22}& \cdots & {\mathrm{dc}}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {\mathrm{dc}}_{n1}& {\mathrm{dc}}_{n2}& \cdots & {\mathrm{dc}}_{nn}\end{array}\right] $$
(12)

where i,j = 1, 2, 3, …, n and

$$ {\mathrm{dc}}_{ij}=\left\{\begin{array}{ll}1& \mathrm{if}\ \left({v}_i,{v}_j\right)\in {E}_c\\ 2& \mathrm{if}\ \left({v}_i,{v}_j\right)\ \mathrm{or}\ \left({v}_j,{v}_i\right)\in {E}_{fc}\\ 0& \mathrm{otherwise}\end{array}\right. $$
(13)

This matrix captures the geometric constraints and disassembly precedences used to determine whether a part can be disassembled without restriction. Unit vj can be disassembled if:

$$ \sum \limits_{i=1}^n{\mathrm{dc}}_{ij}=0\ \mathrm{and}\ \sum \limits_{i=1}^n{\mathrm{cr}}_{ij}>0 $$
(14)
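The feasibility test of Eq. (14) can be sketched as follows, with Cr and Dc stored as nested lists; the three-unit example is hypothetical.

```python
# Sketch of the disassembly feasibility test of Eq. (14): unit v_j can be removed
# when column j of Dc sums to zero and column j of Cr is non-zero.

def can_disassemble(cr, dc, j):
    """cr, dc: n x n nested lists; j: zero-based column index of the unit."""
    return sum(row[j] for row in dc) == 0 and sum(row[j] for row in cr) > 0

# Tiny hypothetical 3-unit example in which unit 0 blocks unit 2.
cr = [[0, 1, 2],
      [1, 0, 0],
      [2, 0, 0]]
dc = [[0, 0, 1],
      [0, 0, 0],
      [0, 0, 0]]
print(can_disassemble(cr, dc, 0))  # True: no remaining constraints on unit 0
print(can_disassemble(cr, dc, 2))  # False: unit 0 must be removed first
```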

2.2.3 GA parameters and operators

Several studies have investigated different optimisation methods for finding optimum disassembly sequences, such as ant colony optimisation (ACO), simulated annealing and the genetic algorithm (GA). Studies show that GA is the most successful and most widely used technique [27]. The genetic algorithm is a nature-inspired method based on Darwin's theory of natural selection and has been used for optimising constrained and unconstrained problems. GA repeatedly modifies a population of possible solutions and generates a new population as a new generation. To produce a new generation, the better solutions of the current population are combined and mutated to produce offspring; the aim is that the characteristics of the better solutions pass to the next generation and the population evolves towards an optimum solution. GA has been employed as a powerful optimisation tool in a variety of subjects and studies [31,32,33], and the basic GA approach has been modified by several researchers to improve results. According to Kongar and Gupta, who pioneered the use of GA in disassembly sequencing, the method takes its idea from evolution theory and can be described accordingly [11]. To initiate the algorithm, a set of possible solutions, called the population, is selected, in which each member is encoded as a chromosome. A chromosome is identified by a combination of several different characters, and different characteristics of the product and of the disassembly can be encoded in each chromosome. The chromosomes are then given scores based on a fitness function; the fitness function depends on the disassembly parameters, which in this work are DHI, DOI, DDI and DCI. To identify a chromosome with an optimum score, a new population is generated iteratively in each step, during which mutation can occur; crossover may also occur, in which two different chromosomes mate and a child is produced. Depending on the disassembly aims and objectives, the disassembly parameters in the objective function can be customised by using different weights. These parameters and operators are defined in the following:

  • Chromosome representation

In order to represent the disassembly solutions and parameters, the disassembly sequences are encoded in chromosomes. A chromosome is a string of genes, each of which occupies a specific location in the chromosome. The parameters of the disassembly operations are encoded in each chromosome as genes, so each chromosome indicates a disassembly sequence and the characteristics of its operations. A combination of numbers and other characters is used to encode solutions and parameters. For example, if the disassembly process has five variables, they are codified in a chromosome composed of five equal sections, in which the parameters of the operations are represented respectively. In this research, each chromosome consists of five strings: the sequence, the Disassembly Handling Index (DHI), the Disassembly Operation Index (DOI), the Disassembly Demand Index (DDI) and the Disassembly Cost Index (DCI), as sketched below.
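An illustrative encoding of such a chromosome is given below; the class and field names are assumptions, not the paper's implementation.

```python
# Illustrative chromosome encoding: a sequence plus its four disassembly index values.
from dataclasses import dataclass
from typing import List

@dataclass
class Chromosome:
    sequence: List[int]   # order in which components are disassembled
    dhi: float            # Disassembly Handling Index of the sequence
    doi: float            # Disassembly Operation Index (DPI over the sequence)
    ddi: float            # Disassembly Demand Index
    dci: float            # Disassembly Cost Index

ch = Chromosome(sequence=[2, 1, 3], dhi=7.0, doi=9.5, ddi=5.67, dci=1.42)
```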

  • Objective function

An objective function is defined to evaluate the chromosomes and find their fitness level. This function depends on the disassembly process parameters, which in this work are the Disassembly Handling Index (DHI), Disassembly Operation Index (DOI), Disassembly Demand Index (DDI) and Disassembly Cost Index (DCI) defined previously. The objective function is therefore calculated as follows:

$$ f\left(\mathrm{ch},\mathrm{gn}\right)=\alpha \mathrm{DHI}+\beta \mathrm{DOI}+\gamma \mathrm{DDI}+\varepsilon \mathrm{DCI} $$
(15)

where f(ch, gn) is the fitness value of the chth chromosome in the gnth generation and α, β, γ and ε are user-defined weights for the disassembly process parameters, which depend on the aims and objectives of the disassembly. In this research, the objective of the GA is to minimise the fitness function by minimising the DHI, DOI, DDI and DCI of each chromosome.
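A minimal sketch of Eq. (15) is shown below; the index values and the unit weights are illustrative.

```python
# Sketch of the weighted objective of Eq. (15); index values and weights are illustrative.
def fitness(dhi: float, doi: float, ddi: float, dci: float,
            alpha: float = 1.0, beta: float = 1.0,
            gamma: float = 1.0, epsilon: float = 1.0) -> float:
    """Fitness of one chromosome in one generation; lower is better."""
    return alpha * dhi + beta * doi + gamma * ddi + epsilon * dci

print(fitness(dhi=7.0, doi=9.5, ddi=5.67, dci=1.42))  # -> 23.59
```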

  • Initial population

In order to initiate the optimisation algorithm, a series of randomly generated chromosomes is taken as the initial population. The number of chromosomes in the initial population (ncr) is defined according to the characteristics of the disassembly process. A higher ncr can result in longer calculation times, while a smaller ncr can lead to poor solutions [27]. All constraints and other relationships based on the structure graph must be satisfied by these chromosomes, i.e. they must be feasible solutions. In this work, different values of ncr were examined and the value giving the best trade-off between calculation time and solution quality was selected.

  • Chromosome selection

The roulette wheel technique is employed to select the chromosomes of each generation as parents for generating the new population. In this technique, each chromosome is assigned a probability of selection based on its fitness level. The probability of a chromosome being selected is calculated by Eq. (16).

$$ {P}_i=\frac{1/{f}_i}{{\sum}_{j=1}^n 1/{f}_j} $$
(16)

where fi is the fitness value of an individual chromosome in the population. This technique ensures that chromosomes with lower fitness values are more likely to be selected; however, some chromosomes with higher fitness values can still be selected as parents of the new generation.
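A sketch of this inverse-fitness roulette wheel selection (Eq. (16)), assuming strictly positive fitness values, is given below; the population entries are illustrative.

```python
# Sketch of inverse-fitness roulette-wheel selection (Eq. 16) for a minimisation problem.
import random

def roulette_select(population, fitness_values):
    """Select one individual; lower fitness gives a higher selection probability."""
    inverse = [1.0 / f for f in fitness_values]        # assumes strictly positive fitness values
    total = sum(inverse)
    probabilities = [inv / total for inv in inverse]   # P_i of Eq. (16)
    return random.choices(population, weights=probabilities, k=1)[0]

pop = ["seq_a", "seq_b", "seq_c"]
print(roulette_select(pop, [23.6, 30.1, 45.8]))        # "seq_a" is the most likely pick
```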

  • Crossover operator

In order to generate a new population, the crossover operator is applied to the fittest chromosomes of the previous generation. In this study, precedence preservative crossover (PPX) was employed to generate new populations. In this method, two chromosomes are selected as parent 1 and parent 2. The algorithm starts by generating randomly selected masks that contain 1s and 2s and have the same length as the first section of the chromosomes; these masks determine how each child of the new generation is built. An empty offspring is then initialised and the crossover operation is applied according to the corresponding mask, filling the offspring chromosome with new genes: the mask specifies from which parent the next gene is taken, and afterwards the selected gene is removed from both parents. This is repeated until both parents are empty and a new child has been generated. As an example, consider the first two chromosomes of the initial population as parent 1 and parent 2 respectively. Two randomly selected masks containing 1s and 2s, used to generate child 1 and child 2, are as follows:

Mask 1: 1 2 1 2 2 2 1

Mask 2: 2 1 1 2 2 1 2

Using (PPX) method to generate new population, the chromosomes of child 1 and child 2 are as follows:

Child 1: 1 7 5 3 6 2 4

Child 2: 7 1 5 3 6 4 2
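A sketch of the PPX operator is given below. The two parents are hypothetical, since the initial population is listed in a table not reproduced here, but they are consistent with the masks and children shown above.

```python
# Sketch of precedence preservative crossover (PPX). The mask decides which parent
# supplies the next gene; the chosen gene is then removed from both parents.

def ppx(parent1, parent2, mask):
    p1, p2 = list(parent1), list(parent2)
    child = []
    for m in mask:
        donor = p1 if m == 1 else p2
        gene = donor[0]          # leftmost gene of the chosen parent not yet used
        child.append(gene)
        p1.remove(gene)          # delete the selected gene from both parents
        p2.remove(gene)
    return child

# Hypothetical parents; with these parents, Mask 1 and Mask 2 reproduce Child 1 and Child 2.
parent1 = [1, 7, 5, 3, 6, 4, 2]
parent2 = [7, 1, 5, 3, 6, 2, 4]
print(ppx(parent1, parent2, [1, 2, 1, 2, 2, 2, 1]))  # -> [1, 7, 5, 3, 6, 2, 4]
print(ppx(parent1, parent2, [2, 1, 1, 2, 2, 1, 2]))  # -> [7, 1, 5, 3, 6, 4, 2]
```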

  • Mutation

After a new population is generated using the crossover operation, the chromosomes are subjected to mutation. Mutation is not a dominant operator in the genetic algorithm, and only a small number of chromosomes are mutated. A number of chromosomes are selected randomly and some of their genes are exchanged. Mutation happens with a defined mutation probability Pm and in such a way that all relations and constraints are preserved. The mutation operation preserves the diversity of the chromosomes and ensures that new candidate solutions are explored. In this study, the swap technique was used as the mutation operator, as sketched below, and different probabilities were investigated. If a chromosome is selected, its genes are subjected to mutation; otherwise its properties are preserved and it remains unchanged.
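A sketch of the swap mutation is given below; the feasibility check is a placeholder standing in for the precedence constraints, which are application-specific.

```python
# Sketch of the swap mutation: with probability Pm, two genes of the sequence are exchanged.
import random

def swap_mutation(sequence, pm=0.15, is_feasible=lambda seq: True):
    if random.random() >= pm:
        return sequence                      # chromosome left unchanged
    mutated = list(sequence)
    i, j = random.sample(range(len(mutated)), 2)
    mutated[i], mutated[j] = mutated[j], mutated[i]
    # keep the original if the swap violates precedence constraints (checker is application-specific)
    return mutated if is_feasible(mutated) else sequence

print(swap_mutation([1, 7, 5, 3, 6, 2, 4], pm=0.15))
```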

  • Termination conditions

Two conditions are set in order to terminate the GA calculations; if either of them is met, the GA is terminated. The first condition is that the number of produced generations exceeds a maximum value (40 in this work). The second condition is that the difference between the average fitness of the new generation and that of the previous generation is smaller than a pre-defined number, i.e. the solutions remain practically constant.
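These two stopping rules can be sketched as follows; the stagnation tolerance is an assumed value.

```python
# Sketch of the two termination tests: a generation cap and stagnation of the average fitness.
def should_terminate(generation, avg_fitness, prev_avg_fitness,
                     max_generations=40, tolerance=1e-3):
    if generation >= max_generations:
        return True                                   # condition 1: generation limit reached
    if prev_avg_fitness is not None and abs(avg_fitness - prev_avg_fitness) < tolerance:
        return True                                   # condition 2: solutions remain practically constant
    return False
```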

3 Case study and performance analysis

3.1 Case study 1: turbocharger

3.1.1 Background

As the first case study, a turbocharger supplied by Reco Turbo Ltd. is selected to verify the proposed method. It is made by BorgWarner and is used in cars such as Renault, Nissan and Dacia models. Figure 1a, b, c show the turbocharger, its CAD model and its exploded drawing. As can be seen, the turbocharger is made of 12 individual components, which can be categorised into 7 different types: A, B, C, D, E, F and G. Components such as A and C could be disassembled further, but in this work they are treated as individual components that do not require further disassembly. The properties and required disassembly tasks of all individual components are shown in Table 3. The main operations needed to disassemble the product completely are unscrewing and removing.

Fig. 1
figure 1

a BorgWarner turbocharger. b Turbocharger CAD model. c Turbocharger exploded view

Table 3 The properties and required disassembly tasks for all individual components

It is assumed that the whole disassembly process is carried out manually. The penalty time for a tool change is given by Eq. (7); in order to disassemble this product completely, one spanner and one hammer are required. The disposal cost penalty is described by Eq. (8). Figure 2 shows a general flow chart of the proposed model.

Fig. 2
figure 2

Flow chart of the proposed model

3.1.2 Performance analysis

The proposed genetic algorithm was implemented in MATLAB© (version 8.5.0, R2015a) and run on a computer with an Intel® Core™ i5 6500 CPU at 3.2 GHz and 8.00 GB of RAM. In this section, the convergence capability of the proposed method is first discussed and its performance for different numbers of iterations and generations is investigated. The optimum sequence of operations based on this method is then obtained and discussed. Finally, in order to validate the method, the turbocharger is disassembled based on the sequences proposed by this method and by a conventional time-based genetic algorithm, and the total disassembly times are compared.

In order to represent the product disassembly precedences and constraints mathematically, the hybrid graph method is used. Figure 3 shows the hybrid graph model for the turbocharger, constructed according to the rules described in Section 2. Based on this model, the relation matrix Cr and the constraint matrix Dc are as follows:

$$ \mathrm{Cr}=\left[\begin{array}{cccccccccccc}0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 2& 2\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 2\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 2\\ 0& 2& 2& 2& 2& 0& 2& 0& 0& 0& 0& 1\\ 0& 2& 2& 2& 2& 2& 0& 2& 2& 2& 0& 1\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 2& 0\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 2& 0\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 2& 0\\ 2& 0& 0& 0& 0& 0& 0& 2& 2& 2& 0& 0\\ 2& 0& 0& 2& 2& 1& 1& 0& 0& 0& 0& 0\end{array}\right] $$
$$ \mathrm{Dc}=\left[\begin{array}{cccccccccccc}0& 0& 0& 0& 0& 1& 1& 0& 0& 0& 2& 2\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 2\\ 0& 0& 0& 0& 0& 2& 2& 0& 0& 0& 0& 2\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 2& 0\\ 0& 0& 0& 0& 0& 0& 2& 0& 0& 0& 2& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 2& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\end{array}\right] $$
Fig. 3
figure 3

Hybrid graph model for turbocharger

Table 4 shows the disassembly handling analysis, the disassembly operation analysis and the demand for the product, calculated using Tables 1, 2 and 3. For each component, the size, weight and shape were considered to find the PHI using Eq. (1) and Table 4, and the DHI was then calculated using Eq. (2). In order to analyse the disassembly operation for each component, the force, requirement of tools, accessibility and positioning were considered; the corresponding values are shown in Table 4 and were used in Eq. (3) to calculate the DOI, and the DPI was then calculated using Eq. (4). Finally, based on the demand levels in Table 3, the demand for each component was determined.

Table 4 Disassembly handling analysis and disassembly operation analysis for turbocharger

In this work, it is assumed that the basic operation time to disassemble a component is constant and does not depend on the position of the operation in the sequence. Different initial population sizes (ncr) were examined in order to determine the optimum convergence time and more realistic solutions. Figure 4a, b show the calculation time and the average fitness value under different population sizes respectively. A higher ncr results in a longer calculation time but leads to lower average fitness values. After ncr = 30, the average fitness value remains constant while the calculation time keeps rising with increasing ncr; therefore, ncr = 30 was selected as the optimum population size for this study. Figure 5a, b show the average fitness values and calculation time under different numbers of generations.

Fig. 4
figure 4

a Calculation time under different population. b Average fitness values under different population number

Fig. 5
figure 5

a Average fitness values under different generations. b Running time under different generation number

The effects of different mutation probabilities on the behaviour of the proposed method were studied. Average fitness value trends and calculation times for different probabilities are shown in Fig. 6a, b respectively. Mutation probability has a significant effect on both the fitness value and the calculation time: selecting a very small or very high mutation probability leads to a longer calculation time and a higher average fitness value. Therefore, choosing an optimum mutation probability is essential for achieving an optimum solution. Analysis of these results shows that mutation probabilities from 0.1 to 0.15 give the best results, minimising both the calculation time and the fitness value.

Fig. 6
figure 6

a Average fitness values under different probability. b Running time under different probability

The optimum disassembly process sequence based on the proposed model and its fitness value is shown in Table 5. This optimised solution was achieved in 0.25 s using the proposed method.

Table 5 Final optimum disassembly process sequence

In order to compare the proposed method with conventional methods, the turbocharger was disassembled manually based on the sequences generated by this research and by the conventional GA method, and the operation times and the total disassembly process time were measured. To generate a sequence based on the conventional GA, the method proposed by Parsa and Saadat [13] was employed. In their method, the main parameter of the GA objective function is the basic operation times, which must be estimated in order to initiate the algorithm. Also, as the disassembly in this research is manual, the algorithm used in [13] was modified to remove the automation parameter, i.e. the travel time of the robot arm. The sequence generated using this method can be seen in Table 6.

Table 6 Optimum disassembly process sequence generated by the conventional time-based method

Then, the turbocharger was disassembled 5 times to minimise the error, and the mean times for the operations and for the total disassembly process were calculated. The disassembled turbocharger can be seen in Fig. 7. The turbocharger was also disassembled 5 times based on the sequence generated by the conventional GA method and the mean times were calculated. The measured and calculated times are shown in Table 7.

Fig. 7
figure 7

Disassembled components of the turbocharger

Table 7 Disassembly operations and process times for turbocharger

Although the main advantage of the proposed method is to avoid initial time estimation and to generate more realistic sequences of disassembly operations based on the disassemblability of the product, the overall disassembly time was also improved by 13%. This improvement can be attributed to more realistic objective parameters and to avoiding the estimation of operation times, which reduces errors. It should be noted that the main objective of the conventional time-based method is to minimise the overall disassembly time, whereas the proposed method considers other objectives, such as demand and disassembly time, alongside its main objective, which is disassemblability.

3.2 Case study 2: water pump

The second case study is a water pump, model GMP187. Figure 8a, b, c show the water pump, its disassembled components and its exploded view respectively. The water pump is made of 7 sub-assemblies. Table 8 shows the properties and required disassembly tasks of all individual components.

Fig. 8.
figure 8

a Water pump. b Disassembled components. c Exploded view

Table 8 The properties and required disassembly tasks for all individual components

As can be seen, the disassembly operations are carried out manually and just one tool is required. The demand for each component, the disassembly tasks and the required tool are detailed in Table 8. The same method as in case study 1 is employed to represent the product disassembly precedences and constraints; therefore, the graph and matrices are not repeated for this case study.

The disassembly handling analysis, the disassembly operation analysis and the demand for the product were calculated using Tables 1, 2 and 8 and are presented in Table 9. Components 1 and 7 have the highest demand. As before, it is assumed that the basic operation time to disassemble a component is constant and does not depend on the position of the operation in the sequence.

Table 9 Disassembly handling analysis and disassembly operation analysis for the water pump

The GA parameters and operators were set up to find an optimum operation sequence based on the DHI, DOI, DDI and DCI. For this case study, ncr = 20 was used, as the number of components is lower than in the first case study, which helps to reduce the computational cost. The analysis of the first case study showed that mutation probabilities from 0.1 to 0.15 achieve the best results; therefore, the mutation probability for this case study was set to 0.15.

Figure 9 shows the average fitness values under different numbers of generations. At generation 10 the algorithm reaches its minimum fitness value and running it further does not improve the results. The optimum disassembly process sequence based on the proposed model and its fitness value are shown in Table 10. This optimised solution was achieved in 0.16 s using the proposed method.

Fig. 9
figure 9

Average fitness values under different generations

Table 10 Final optimum disassembly process sequence

The same method as in the first case study was employed to generate a time-based disassembly sequence, which is shown in Table 11. The water pump was then disassembled manually based on the two different disassembly sequences. In order to minimise the error, the water pump was disassembled 5 times based on each sequence and the average times were calculated, as shown in Table 12. The overall disassembly time was improved by 10%, which is slightly less than in the first case study; this can be attributed to the smaller number of components in this case study.

Table 11 Optimum sequence generated by the time-based method
Table 12 Disassembly operations and process times for water pump

4 Conclusion

In this paper, a new method was introduced to solve disassembly sequence planning problems. The majority of studies have focused on time as the main parameter for finding an optimum solution, with other parameters such as tool change requirements and disposal costs converted into a time unit. These studies estimate the time required to disassemble a component of a product, as it is difficult to determine the time accurately; the product would have to be disassembled completely in order to measure it. Moreover, identical EOL products can be in different conditions, which causes different disassembly times, and the same disassembly operation can have a different disassembly time in a different sequence order. In this work, DHI, DOI and DDI were introduced as optimisation parameters to analyse the handling, disassembly difficulty and demand of a product, and DCI was introduced to consider disassembly time and other disassembly costs. The hybrid graph method was used to represent the mathematical model of the product and the disassembly constraints. A genetic algorithm was then employed to search the possible sequences and find a near-optimum solution. Finally, a turbocharger and a water pump were selected as industrial products to test and verify the proposed method. The results showed the effectiveness and compatibility of the method. One of the solutions with the minimum fitness value was presented as the optimum sequence for each case study, and in these sequences all disassembly constraints are met. In order to compare the effectiveness of this method with conventional time-based GA methods, the products were manually disassembled based on the sequences generated by the proposed method and by the conventional method, and the disassembly operation and process times were measured and compared. The results showed 13% and 10% improvements in disassembly time for case studies 1 and 2 respectively. Further improvement of this method can be obtained by refining the disassemblability categorisation and scoring.