A neutrosophic set-based TLBO algorithm for the flexible job-shop scheduling problem with routing flexibility and uncertain processing times

Unlike the plain flexible job-shop scheduling problem (FJSP), the FJSP with routing flexibility is more complex and can be regarded as the integrated process planning and (job-shop) scheduling (IPPS) problem, in which process planning and job-shop scheduling, two important functions, are treated as a whole and optimized simultaneously to exploit the flexibility in a flexible manufacturing system. Although many novel meta-heuristics have been introduced to address this problem and fruitful results have been reported, the dilemma in real-life applications of the resulting scheduling schemes stems from the uncertainty, or nondeterminacy, of processing times: uncertain processing times disturb the predefined scheduling scheme by affecting unfinished operations, and the performance of the manufacturing system deteriorates accordingly. Nevertheless, this issue has seldom been considered before. This research focuses on modeling and optimization methods for the IPPS problem with uncertain processing times. The neutrosophic set is first introduced to model uncertain processing times. Owing to the complexity of the mathematical model, we develop an improved teaching-learning-based optimization (TLBO) algorithm to obtain more robust scheduling schemes. In the proposed optimization method, the score values of the uncertain completion times on each machine are compared and optimized to obtain the most promising solution. Distinct levels of fluctuation, or uncertainty, in processing times are defined in testing the well-known Kim benchmark instances. The computational results are analyzed, and competitive solutions with smaller score values are obtained.
Computational results show that more robust scheduling schemes with corresponding neutrosophic Gantt charts can be obtained; in general, the improved TLBO algorithm suggested in this research outperforms other algorithms, yielding smaller score function values. The proposed method offers ideas and clues for scheduling problems with uncertain processing times.


Introduction
As two crucial components of a flexible manufacturing system, the job-shop scheduling module and the process planning module have received considerable research attention [1][2][3][4][5]. Traditionally, process planning specifies the technical details [6,7], e.g., cutting parameters; the scheduling module, on the other hand, arranges operations on machines to shorten the maximum completion time (makespan) or to meet other criteria [8,9]. These two modules usually operate separately and sequentially [10][11][12]. Nevertheless, such a paradigm ignores the inherent relationship between the two functions: there is no coordination mechanism between them and, more importantly, the flexibility in the two functions cannot be fully utilized [13,14]. For example, processing flexibility is rigidified, since one cannot change the processing sequence at the scheduling stage under the sequential paradigm. This leads to inefficiencies in a flexible manufacturing system [15,16], such as resource conflicts and unbalanced machine utilization [17]. Many efforts have been made to eliminate resource conflicts and improve the performance of a manufacturing system by taking advantage of the flexibilities in the two modules [4,8,13,17]. Fruitful results have been reported, and meta-heuristics have garnered wide research interest owing to their high cost performance and accessibility.
There are three kinds of flexibility in this problem: operation flexibility (OF), sequencing flexibility (SF), and processing flexibility (PF) [14,18]. OF means that an operation may be processed by more than one machine tool. SF allows more than one feasible operation permutation, as long as the operations satisfy the precedence constraints. PF means that there may be more than one possible operation combination (or set) to produce the same feature of a part [10], since the same feature can usually be realized by many feasible operation combinations. Clearly, it is necessary to carry out joint optimization of the problem by properly managing OF, SF, and PF.
Many efforts have been devoted to the plain IPPS problem. Nevertheless, these papers mainly focus on optimization methods to shorten the makespan. In other literature, researchers either consider the problem in a dynamic environment with random job arrivals or perform multi-objective optimization. Most of these studies treat the processing times as static; however, this contradicts real-life situations: processing times always vary within a certain range due to various kinds of disturbances [19]. As a result, the makespan also fluctuates within a certain range [20]. Clearly, existing optimization techniques for deterministic processing times are no longer suitable for the uncertain IPPS problem [21]: the actual starting or finishing times will deviate largely from the predetermined ones, and there will be large deviations between the actual makespan and the so-called "optimal" one. Unfortunately, modeling and optimization techniques for the uncertain IPPS problem have seldom been considered, and robust scheduling schemes are not available to absorb processing time variations or hedge against the uncertainty in processing times. Real-life requirements stress the need for this study.
There are mainly two types of methods for handling uncertain scheduling problems: the on-line mode and the off-line mode, corresponding to the reactive scheduling method and the proactive (or preventive) scheduling method, respectively [21,22]. In the reactive scheduling paradigm, an optimal scheduling scheme is first generated; when disturbances occur or the starting times of operations deviate from the predetermined values to a certain extent, a rescheduling process is triggered in time by the scheduling module for the remaining operations. This rescheduling procedure is performed iteratively until all the operations have been processed. This paradigm cannot ensure global optimality, and the total makespan increases with the number of reschedulings [23,24]. Proactive scheduling considers possible processing time fluctuations or disturbances in advance, so that the resulting scheduling scheme is capable of absorbing them. In this research, the proactive scheduling paradigm is adopted, and we try to obtain a scheduling scheme with a certain immunity to uncertain processing times.
Several methods have been proposed to model actual processing times. For example, random variables subject to a certain probability distribution have been considered [25]. In other cases, processing times are treated as fuzzy numbers with a certain kind of membership function [26][27][28], such as triangular fuzzy numbers (TFNs) or trapezoidal fuzzy numbers (TrFNs). In practice, it is usually very hard to identify which distribution the operation processing times follow, as well as the corresponding mathematical operators. Fuzzy set-based optimization techniques have received more research attention owing to their convenience both in modeling uncertain processing times and in algorithm implementation. However, traditional fuzzy theory is not perfect in describing the fuzziness of things. Prof. Smarandache further developed the concept of "neutrosophy" in 2008 to better convey people's thinking [29]. After that, the neutrosophic set was formally proposed to cope efficiently and effectively with indeterminate and inconsistent information [30,31]. As a powerful general framework that generalizes classic sets, fuzzy sets, intuitionistic fuzzy sets, tautological sets, and other sets [32], the neutrosophic set is capable of handling incomplete, indeterminate, and inconsistent information [32]. Therefore, it can be used to describe uncertain objects more precisely and accurately.
In fuzzy sets, there is only the membership function μ(x), and the information of indeterminacy and non-membership is lost [32]. In neutrosophic sets, indeterminacy is quantified explicitly by three independent components: the truth membership T_A(x), the indeterminacy membership I_A(x), and the falsity membership F_A(x). Because all three memberships are used to reflect the ambiguous nature of subjective judgments, there is no restriction on expressing uncertain objectives, and it is quite easy to capture imprecise or uncertain information. Nevertheless, since the three components T_A(x), I_A(x), and F_A(x) take values in non-standard subsets (the non-standard interval ]0−, 1+[), they cannot be directly used in engineering problems or real-world applications. Therefore, the single-valued neutrosophic set (SVNS), a branch of neutrosophic sets, was proposed [33,34]: letting E be a universe, each of the three membership functions of an SVNS is mapped into the closed interval [0, 1]. Successful applications of SVNSs have been reported. Deli et al. developed a ranking method for single-valued neutrosophic numbers and applied it to multi-attribute decision-making [35]. Pramanik and Mallick developed a TODIM strategy for multi-attribute group decision-making problems, in which the score function, accuracy function, and Hamming distance function for single-valued trapezoidal neutrosophic numbers were considered [36]. The shortest path problem in a neutrosophic set environment has also been considered [37][38][39][40]. For example, Broumi et al. suggested a new score function for interval-valued neutrosophic numbers and determined the neutrosophic shortest path based on it [39]; the score function is used to evaluate the chosen paths. Applications of SVNSs can also be found in pattern recognition and medical diagnosis [41,42], taxonomy and clustering analysis [42], and other areas.
Unfortunately, most of the existing studies regarding SVNSs and related applications focus on decision-making issues; to the best of our knowledge, there is no research regarding SVNSs for the uncertain IPPS problem. In this study, SVNSs are introduced for the first time to model the uncertain processing times in solving the IPPS problem.
Scheduling problems, due to their complexity and NP-hardness, are usually solved by meta-heuristic algorithms. Inspired by the influence of teachers on learners, Rao et al. proposed the teaching-learning-based optimization (TLBO) algorithm in 2008 [43]. The TLBO algorithm also employs many individuals as learners and teacher(s). Its outstanding feature, however, is parameter independence; that is, every learner takes part in the 'teacher phase' and the 'learner phase', and there are no limitations imposed by pre-set probabilities throughout the two phases. Applications of the TLBO algorithm can be found in the existing literature. Rao and Patel used the TLBO algorithm in the multi-objective optimization of heat exchangers to achieve maximum heat exchanger effectiveness at minimum total cost [44]; they also gave an improved version of the plain TLBO algorithm to enhance its exploration and exploitation capacities [45]. Tang et al. suggested a hybrid TLBO algorithm for solving a multi-constraint stochastic two-sided assembly line balancing problem [46]. For scheduling problems, applications of the TLBO algorithm have also been reported [47][48][49]; for example, Shao et al. proposed a hybrid discrete TLBO algorithm for the no-idle flow shop scheduling problem to minimize the total tardiness [48]. In this research, we extend the application of this algorithm to a discrete problem, and an improved TLBO algorithm is developed for the IPPS problem.
This research aims to develop a neutrosophic set-based TLBO algorithm for the IPPS problem contaminated with uncertain processing times. Neutrosophic sets are used to model the uncertain processing times; this is the novelty of this research. To improve the exploitation ability of the algorithm, some modifications are made to the TLBO algorithm to adapt it to the problem. The remainder of the paper is organized as follows. Research on the IPPS problem as well as on scheduling problems with uncertain processing times is reviewed in "Literature review". "Mathematical modeling with neutrosophic set" gives some preliminaries on neutrosophic sets; in addition, the method for converting a deterministic mathematical model of the IPPS problem into its neutrosophic counterpart is presented in that section. In the next section, the details of the proposed improved TLBO algorithm are demonstrated. The experimental study with discussions is arranged in "Experiments with discussions". Conclusions with further research directions are given in the last section.

Literature review
The IPPS problem with deterministic processing times has been investigated widely [10,16,17,[50][51][52][53]. Research on the IPPS problem started from single-objective optimization, and many of the papers mentioned above pay particular attention to makespan reduction. Many novel optimization algorithms have been applied. For instance, Kim et al. suggested a symbiotic evolutionary algorithm to tackle this problem [17]; Zhang et al. developed an object-coding genetic algorithm to optimize the makespan criterion [50]; Lian et al. introduced the imperialist competitive algorithm for IPPS instance optimization with relatively promising results [10]. Besides, Petrovic et al. and Jin et al. also adopted novel meta-heuristic algorithms to address the deterministic IPPS problem [51,54]. Recently, Liu et al. proposed a modified genetic algorithm with corresponding encoding and decoding methods for the IPPS problem, and promising results have been observed [13]. The related literature is summarized in Table 1.
Some researchers have shifted their attention to IPPS problems with practical requirements, e.g., handling uncertainty in IPPS instances. As analyzed in "Introduction", fluctuations in processing times affect the upcoming operations (they will be put off) and further disturb the whole scheduling plan; if this issue is not settled properly, the original scheduling scheme becomes useless. Therefore, it is necessary to develop effective optimization methods to hedge against such uncertainty. One paradigm is the reactive, or on-line, rescheduling approach [23]; rescheduling is triggered when there is a large deviation between the actual and the predefined scheduling scheme. However, the renewed scheduling scheme cannot ensure global optimality [23]. Therefore, the proactive paradigm has received more research attention. There are several kinds of methods in the proactive scheduling paradigm. The chance-constrained programming (CCP) approach is the first kind used in scheduling problems and other discrete optimization problems with parameter uncertainty [55][56][57]. This method transforms a mixed integer linear programming (MILP) model with uncertain parameters into a "deterministic" one, and the resulting model can resist the uncertainty in the parameters. A typical application of this method is reported in [57]: the processing times in a flow shop are regarded as random variables, and an equivalent deterministic model is provided. Similar research can also be found in a recent publication [58]. Other applications of CCP can be found in assembly line balancing [56] and project scheduling [55], among others. However, this method relies on a deterministic MILP model of the problem; in many cases, e.g., the IPPS problem and the flexible job-shop scheduling problem, the corresponding model is quite complex, with many variables and constraints, and the model after conversion is even more complex. This hinders the application of the method.
Fuzzy set-related optimization methods are another kind of approach to coping with uncertainty in scheduling problems. Because this kind of method can easily be combined with meta-heuristic algorithms in tackling complex scheduling problems and other NP-hard problems, many fuzzy set-related studies have been carried out since 1999 [59]. Sakawa and Mori gave a textbook-like example of such an application to job-shop scheduling within the genetic algorithm framework [59]; they adopted triangular fuzzy numbers to model the uncertain processing times and due dates, and an approximation of the max operator was developed to ensure that the result is also a triangular fuzzy number. In subsequent studies, many fuzzy set-based optimization methods for parameter-uncertain scheduling problems have been published, and meta-heuristic algorithms are commonly adopted. Lei suggested a decomposition-integration genetic algorithm (DIGA) to reduce the fuzzy makespan of the flexible job-shop scheduling problem, where the uncertain processing times are also mapped into triangular fuzzy numbers [27]. Later, the same author presented a similar study, in which an efficient swarm-based neighborhood search algorithm (SNSA) was developed for the fuzzy flexible job-shop scheduling problem [60]. Following Lei's steps, Wang et al. combined fuzzy numbers with the artificial bee colony (ABC) algorithm and presented a hybrid artificial bee colony (HABC) algorithm to capture the best fuzzy makespan [61]; again, the uncertain processing times are treated as triangular fuzzy numbers. With almost the same coding and decoding methods, Wang et al. also suggested an effective fuzzy number-based estimation of distribution algorithm (EDA) to obtain the best makespan in the flexible job shop [28]. Gao et al. reported a study on uncertain flexible job-shop scheduling problems [1]; a discrete harmony search (DHS) algorithm with a simple heuristic rule for initializing the individuals was proposed to shorten the maximum fuzzy completion time. Li et al. recently presented research on the uncertain IPPS problem based on interval numbers [26], which can be regarded as a kind of fuzzy number with a uniform distribution-like membership function. In recent years, multi-objective cases of uncertain scheduling problems have also been reported. For example, Gao et al. proposed an improved artificial bee colony (IABC) algorithm to minimize the maximum fuzzy completion time and the maximum fuzzy machine workload, and benchmark instances as well as practical instances were adopted to test the algorithm [62]. According to the literature mentioned above, uncertain processing times are usually regarded as triangular fuzzy numbers, and other types of fuzzy numbers have seldom been considered; the reason is that triangular fuzzy numbers are closer to actual situations. However, existing investigations pay little attention to the uncertain IPPS problem and, worse still, there is no application of neutrosophic sets to the uncertain IPPS problem.
Other kinds of methods for dealing with uncertain scheduling problems have also been reported. For instance, Liu et al. suggested a Petri net-based model for the emergency response process constrained by resources and uncertain durations [63]. Haddadzade et al. considered the uncertain IPPS problem using a two-stage method, in which the process planning procedure and the scheduling module are separated [8]; nevertheless, in their research, only several 'promising' process plans are considered, and hence, the corresponding flexibilities cannot be fully utilized. Different from existing research, we propose in this paper a novel optimization method for the uncertain IPPS problem; the neutrosophic set is applied to model the uncertain processing times. The resulting scheduling scheme is capable of hedging against the uncertainty and improves robustness to a certain extent.

The uncertain IPPS problem
The IPPS problem can be regarded as an extension of the flexible job-shop scheduling problem; the process planning module, however, increases both the flexibility and the complexity of the problem. Based on Ref. [64], the uncertain IPPS problem can be defined as follows: given a set of n parts (jobs) to be processed on m machines with operations that have alternative manufacturing resources and uncertain processing times, select the suitable manufacturing resources and sequence the operations so as to determine a schedule in which the precedence constraints among operations are satisfied and the corresponding objectives, e.g., the fuzzy maximum completion time, are optimized. In this research, the actual processing time of each operation on each available machine is represented using neutrosophic sets.
A job in the IPPS problem can be represented by a network graph, as shown in Fig. 1. The starting node and the ending node are dummy nodes, representing the beginning and the finishing of processing. The other two types of nodes are operation nodes and 'OR' nodes; an operation node stands for an operation whose operation ID and alternative machines with corresponding processing times are specified. For instance, operation 6 in Fig. 1 can be processed on machine 1 or 5 with nominal processing times 42 and 38, respectively. An 'OR' node may not appear in a network graph; however, if it appears after a certain node, there will be at least two OR link paths, each of which begins after the 'OR' node and ends when the path merges with the others [14]. For example, there are two OR link paths after the 'OR' node among operation nodes 8, 9, and 12 in Fig. 1; therefore, only the left OR link path (operation nodes 9, 10, 11) or the right OR link path (operation nodes 12, 13) needs to be visited. If a bifurcation has no 'OR' node, the operation nodes in all of its link paths must be visited. Operation sets {2, 3} and {4, 5} lie in two link paths according to Fig. 1, and these two link paths are not OR link paths; therefore, operation nodes 2-5 all have to be visited. The arrows in Fig. 1 indicate the precedence relationships between operations: a one-way arrow from node A to node B means that operation B must be processed directly or indirectly after operation A. Hence, a job may have many possible process plans (operation permutations), since only some operation precedence relationships are specified by the one-way arrows in the network graph. In particular, there are quite a few pairs of operation nodes whose precedence relationships are not determined, since no arrow is placed between them, e.g., operation nodes 3 and 12 in Fig. 1. Two feasible process plans (operation permutations) of the example network graph are also given in Fig. 1.
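For illustration, the OR-path selection described above can be sketched in Python. The data structure and the processing times of operations 9-13 are hypothetical (only operation 6's alternatives are taken from Fig. 1); this is not the paper's actual encoding.

```python
import random

# Hypothetical encoding of part of the network graph in Fig. 1: each
# operation lists its alternative machines with nominal processing times,
# and each 'OR' node is a group of mutually exclusive link paths.
job = {
    "operations": {
        6: {1: 42, 5: 38},          # operation 6: machine 1 (42) or machine 5 (38)
        9: {2: 20}, 10: {4: 30}, 11: {2: 18},   # illustrative times
        12: {3: 25}, 13: {1: 22},               # illustrative times
    },
    "or_groups": [                   # choose exactly one path per group
        [[9, 10, 11], [12, 13]],
    ],
}

def pick_process_plan(job, rng=random):
    """Keep exactly one OR link path per group; all other operations stay."""
    selected = set(job["operations"])
    for group in job["or_groups"]:
        chosen = rng.choice(group)
        for path in group:
            if path is not chosen:
                selected -= set(path)
    return selected

plan = pick_process_plan(job)
```

Whichever OR path is chosen, operation 6 stays in the plan and exactly one of the two link paths survives.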
With such flexibility, one does not know which operation permutation is the 'best' one from the scheduling point of view; clearly, it is necessary to consider the scheduling and process planning modules simultaneously: a 'bad' process plan from the process planning perspective may be the best one for scheduling.
All three types of flexibility are reflected in a network graph. The situation where an operation can be processed by more than one available machine reflects operation flexibility; the situation where the given operations can take different permutations, as long as they satisfy the precedence relationships, stands for sequencing flexibility; finally, the situation where only the operations in one OR link path are selected relates to processing flexibility.

Neutrosophic sets
This section introduces some basic concepts of neutrosophic sets and the related operations.
Definition 1 [31,40] Let ã = [ã_T, ã_I, ã_P, ã_S], (T_ã, I_ã, F_ã) be a special neutrosophic set on the real number set R; the truth membership function μ_ã(x), the indeterminacy membership function ν_ã(x), and the falsity membership function λ_ã(x) of ã are defined as

μ_ã(x) = T_ã (x − ã_T)/(ã_I − ã_T),   ã_T ≤ x < ã_I,
μ_ã(x) = T_ã,                         ã_I ≤ x ≤ ã_P,
μ_ã(x) = T_ã (ã_S − x)/(ã_S − ã_P),   ã_P < x ≤ ã_S,
μ_ã(x) = 0,                           otherwise;        (1)

ν_ã(x) = (ã_I − x + I_ã (x − ã_T))/(ã_I − ã_T),   ã_T ≤ x < ã_I,
ν_ã(x) = I_ã,                                     ã_I ≤ x ≤ ã_P,
ν_ã(x) = (x − ã_P + I_ã (ã_S − x))/(ã_S − ã_P),   ã_P < x ≤ ã_S,
ν_ã(x) = 1,                                       otherwise,

with λ_ã(x) taking the same form as ν_ã(x), I_ã being replaced by F_ã.   (2)

For the case 0 ≤ ã_T ≤ ã_I ≤ ã_P ≤ ã_S ≤ 1 and T_ã, I_ã, F_ã ∈ [0, 1], ã is called a normalized trapezoidal neutrosophic number; when ã_I = ã_P, ã degenerates into a triangular neutrosophic number. In this research, the uncertain processing times are modeled as triangular neutrosophic numbers.
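As a quick illustration, the truth membership function of a trapezoidal neutrosophic number can be evaluated as follows. This is a sketch assuming the standard trapezoidal form from the neutrosophic literature; the indeterminacy and falsity functions are analogous.

```python
def truth_membership(x, aT, aI, aP, aS, T):
    """Truth membership of a trapezoidal neutrosophic number
    [aT, aI, aP, aS] with truth degree T (triangular when aI == aP)."""
    if aT <= x < aI:
        return T * (x - aT) / (aI - aT)   # rising edge
    if aI <= x <= aP:
        return T                          # plateau (a point when aI == aP)
    if aP < x <= aS:
        return T * (aS - x) / (aS - aP)   # falling edge
    return 0.0

# triangular case [2, 3, 3, 5]: peak T at the modal value 3
assert truth_membership(3, 2, 3, 3, 5, 0.9) == 0.9
```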
Definition 2 [33,40] Let X be a space of points, and let x be an element in X; a single-valued neutrosophic set (SVNS) V in X is characterized by three membership functions, e.g., μ_ã(x), ν_ã(x), and λ_ã(x), each mapping X into [0, 1]. It is clear that the truth, indeterminacy, and falsity membership functions are mapped into the range [0, 1] in single-valued NSs; this brings convenience to the applications of NSs. However, the operators of triangular neutrosophic sets (TNSs) need to be defined, because they are used in the scheduling (decoding) procedure. More precisely, three operators of TNSs should be specified: the addition operator, the maximization operator, and the ranking operator. The addition operator determines the sum of two TNS numbers in completion time calculation. The ranking operator is employed to compare two TNS numbers, so that the completion times on the machines can be compared and the makespan determined. The maximization operator determines the neutrosophic starting time of an operation. The operators are based on the existing literature [31,40].

Definition 3 [31,40] Let r̃_N = [r_T, r_I, r_S], (T_r, I_r, F_r) and s̃_N = [s_T, s_I, s_S], (T_s, I_s, F_s) be two TNSs; their sum is

r̃_N + s̃_N = [r_T + s_T, r_I + s_I, r_S + s_S], (T_r + T_s − T_r T_s, I_r I_s, F_r F_s).

Definition 4 Let r̃_N = [r_T, r_I, r_S], (T_r, I_r, F_r) be a TNS; its score function is defined as

s(r̃_N) = (1/12)(r_T + 2 r_I + r_S)(2 + T_r − I_r − F_r).

Let r̃_N and s̃_N be two TNSs; the ranking of the two TNSs is then easy: if s(r̃_N) < s(s̃_N), then r̃_N ≺ s̃_N. Furthermore, if s(r̃_N) = s(s̃_N) coincidentally, the accuracy function is applied [35,40]. In most cases, the score function alone is enough to compare two TNS numbers.
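The addition and ranking operators can be sketched as a small Python class. The formulas are the commonly used ones for triangular neutrosophic numbers (components add; truth degrees fuse as 1 − (1 − T_r)(1 − T_s), indeterminacy and falsity multiply; score s = (1/12)(a + 2b + c)(2 + T − I − F)); treat this as an illustrative sketch rather than the paper's exact implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TNS:
    """Triangular neutrosophic number [a, b, c] with degrees (T, I, F)."""
    a: float
    b: float
    c: float
    T: float
    I: float
    F: float

    def __add__(self, o):
        # components add; truth degrees fuse, indeterminacy/falsity multiply
        return TNS(self.a + o.a, self.b + o.b, self.c + o.c,
                   1 - (1 - self.T) * (1 - o.T), self.I * o.I, self.F * o.F)

    def score(self):
        # s = (1/12)(a + 2b + c)(2 + T - I - F)
        return (self.a + 2 * self.b + self.c) * (2 + self.T - self.I - self.F) / 12

    def __lt__(self, o):  # ranking by the score function
        return self.score() < o.score()

# a crisp time t embeds as [t, t, t](1, 0, 0), whose score is t itself
assert abs(TNS(7, 7, 7, 1.0, 0.0, 0.0).score() - 7) < 1e-12
```

The last line checks the embedding used later in the model conversion: a crisp number keeps its value under the score function.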
The maximization operator is used to determine the maximum of the neutrosophic completion times of two operations: the current operation can start only after the later of the neutrosophic completion times of its job predecessor and its machine predecessor. Since the indeterminacy and falsity membership functions must be considered, the maximization operator is discussed together with the decoding procedure in later sections.
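One simple realization of such a maximization, sketched here under the assumption that completion times are compared by their score values (a common criterion; the paper's decoding procedure may differ in detail):

```python
def score(tns):
    """Score of a triangular neutrosophic number ((a, b, c), (T, I, F)),
    using the common formula s = (1/12)(a + 2b + c)(2 + T - I - F)."""
    (a, b, c), (T, I, F) = tns
    return (a + 2 * b + c) * (2 + T - I - F) / 12

def neutro_max(r, s):
    # keep the completion time with the larger score value
    return r if score(r) >= score(s) else s

# hypothetical completion times of the job predecessor and machine predecessor
job_pred = ((10, 12, 15), (0.9, 0.1, 0.1))
mach_pred = ((11, 13, 14), (0.8, 0.2, 0.2))
start = neutro_max(job_pred, mach_pred)  # earliest neutrosophic starting time
```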

IPPS modeling
In this research, the TNS-based mathematical model is developed from the previously proposed Type-2 model [14], which is more powerful and general than Type-1 models [65]; nevertheless, it is suitable only for plain, crisp processing times. We therefore try to introduce TNS numbers into the plain mixed integer linear programming (MILP) model, and thereby interpret the deterministic model as a neutrosophic one. The MILP model with plain processing times is provided here for ease of understanding.
In modeling the problem, we assume that: (1) job preemption is not allowed; (2) each machine can process at most one job at any time; (3) at any time, a job can be processed by at most one machine; (4) all the jobs are available at time zero; and (5) transportation times as well as set-up times are included in the nominal processing times. The corresponding sets, parameters, and variables are given as follows.

Subscripts and notations
O_ij  The jth operation of the ith job
O_ihj  The jth operation of the ith job in the hth operation combination

Sets and parameters
p_ijk  The nominal processing time of O_ij processed by machine k
p^N_ijk  The neutrosophic processing time of O_ij processed by machine k
O_ih  The operation set that contains the operations belonging to the hth combination of the ith job
V_ijj'  = 1 if there is an arrow from operation node j to j' in the network graph of job i (O_ij is to be processed before O_ij'); = 0 otherwise. This parameter expresses the partial precedence relationships among operations.
Q_ijj'  The parameters V_ijj' determine the precedence relationships of only a few operations; this set of parameters alone is not enough to determine the precedence relationships of all the selected operations (the operations belonging to the selected combination). Therefore, the parameters Q_ijj', which specify the precedence relationships of all the operations, are constructed (see the algorithm in Ref. [14]). The basic principle behind the Q_ijj' parameters is simple: if the precedence relationship of any two selected operations is determined, the precedence relationships of all the selected operations are determined.

Variables

C_max  The nominal makespan value
Y_ih  = 1 if the hth combination of the ith job is selected; = 0 otherwise
C^N_ihj  The neutrosophic completion time of operation O_ihj
Z_ijj'  Sequencing variable for operations without a predefined precedence relationship, with Z_ijj' + Z_ij'j = 1, ∀i ∈ n, j, j' ∈ n_i, j ≠ j'

Constraints

Equation (8) gives the objective: to minimize the nominal makespan. Constraint set (9) forces each job to select exactly one operation combination, as divided by the 'OR' link paths in the network graph; only when all the necessary operations are selected can the job be completed. Constraint set (10) indicates that the selected operations in the hth combination must each be assigned to exactly one machine; otherwise, the operations are not assigned to any machine (Y_ih = 0). Constraint set (11) further restricts the completion times of unselected operations: if the operations belonging to the hth combination are not selected, their completion times are set to zero.
Constraint set (12) determines the completion times of two operations whose precedence relationship is specified directly by the network graph. For two operations that have no such relationship (Q_ijj' + Q_ij'j = 0), constraint set (13) determines which operation is processed before the other; such operations can then be scheduled sequentially based on constraint set (14). Constraint sets (15) and (16) schedule two operations on the same machine by determining the precedence relationship of any two operations on that machine; if an operation is not processed by machine k, the two constraint sets are inactive. Finally, constraint set (17) determines the completion time of each job.
In the following, we interpret the current model as a neutrosophic version. Note that for a real number or an integer a, the corresponding neutrosophic version is [a, a, a](1.0, 0.0, 0.0), and its score function value is (1/12)[a + 2a + a] × [2 + 1.0 − 0.0 − 0.0] = a; therefore, there is no need to reform the constraint sets that contain binary variables only, e.g., constraint sets (9) and (10). The other constraint sets, which contain continuous variables, should be reformed. The makespan should be calculated and compared using the score function instead of a crisp number C_max; the objective function (8) can thus be reformed accordingly. Constraint set (11) is interpreted as constraint set (19), where the strict inequality is converted into an inequality between score function values. Constraint set (12), according to Definition 3, can be reformed as constraint set (20). However, constraint set (20) contains non-linear terms, e.g., 1 − (1 − T_{p_ijk})^{X_ihjk}; besides, the terms C^N_ihj and C^N_ihj' can be further unfolded. Fortunately, the binary variable X_ihjk has only two states: if operation O_ihj is selected and processed on machine k', the result of the summation is determined by that machine alone. Therefore, the summation operator in constraint set (20) can be considered separately to reduce the complexity of the model; the refined formulation is given in constraint set (21), which adopts the binary variable X_ihjk to distinguish whether operation O_ihj is processed on machine k'; if so, 1 − X_ihjk equals 0 and the neutrosophic completion time is determined. In the other cases, the constraint is naturally satisfied.
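To see why the summation in constraint set (20) can be split per machine, note that the binary assignment variable reduces each seemingly non-linear truth term to two trivial cases. A small Python check (variable names illustrative):

```python
def truth_term(T, X):
    """Truth contribution 1 - (1 - T)**X of one machine-assignment term;
    the binary X in {0, 1} makes the non-linear term piecewise trivial."""
    return 1 - (1 - T) ** X

assert truth_term(0.9, 0) == 0.0              # operation not assigned to this machine
assert abs(truth_term(0.9, 1) - 0.9) < 1e-12  # assigned: the truth degree survives
```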
Using a similar method, the other constraint sets (14)-(17) can also be converted into their neutrosophic counterparts; nevertheless, the resulting constraint sets are very complex and, more importantly, the massive number of binary variables and constraints hinders the application of the model. Even for the MILP models of the IPPS problem with crisp parameters and variables [14], results cannot be obtained in reasonable time; for the models of the uncertain IPPS problem, one cannot obtain the optimal solutions either, since the model proposed above is more complex than the previous ones [14]. Instead, we try to solve this problem using a soft computing approach.

TLBO-based algorithm
The improved TLBO algorithm
To date, there are many nature-inspired meta-heuristic algorithms; among them, the genetic algorithm (GA) is the most classical one. GA, which adopts Darwin's principle of 'the survival of the fittest', can provide promising solutions for many optimization problems. Nevertheless, the critical shortcoming of GA stems from parameter setting: the values of key parameters in GA, e.g., the crossover probability, affect the effectiveness of the solutions. For other meta-heuristics, the determination of parameters is also very empirical. For example, in the particle swarm optimization (PSO) algorithm, some parameters, the inertia weight for instance, must also be properly determined. In view of this, Rao et al. devoted effort to developing parameter-free meta-heuristics and proposed the TLBO algorithm, which simulates the influence of a teacher on the students.
The TLBO algorithm is also developed based on the swarm intelligence mechanism; in the algorithm, the most knowledgeable individual in the population is regarded as the teacher and the other individuals are the learners. The teacher tries his best to disseminate knowledge to the learners so as to improve the knowledge level of the population. Obviously, the knowledge level of the teacher influences the learners; in many cases, learners may require a more qualified or sophisticated teacher to teach them; besides, a learner may surpass the teacher. In such a case, a new teacher is selected from the latest population. On the other hand, acquiring knowledge can also be realized by learning from each other; that is, learners can also improve their knowledge levels by consulting other advanced learners. Therefore, the TLBO algorithm consists of two phases, the teacher phase and the learner phase, and neither phase requires any probability-related parameter setting.
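The two phases described above can be sketched as follows for a continuous minimization problem. This is the canonical, parameter-free TLBO, not the IPPS-specific variant developed in this research; the function names and the sphere test objective are illustrative only.

```python
import random

def tlbo_step(population, objective):
    """One TLBO iteration: a teacher phase followed by a learner phase.

    `population` is a list of real-valued vectors; `objective` is minimized.
    A sketch of canonical TLBO -- no crossover/mutation probabilities needed.
    """
    dim = len(population[0])
    # --- Teacher phase: everyone moves toward the best individual ---
    teacher = min(population, key=objective)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for x in population:
        tf = random.choice([1, 2])          # teaching factor, 1 or 2
        r = random.random()
        cand = [x[d] + r * (teacher[d] - tf * mean[d]) for d in range(dim)]
        new_pop.append(cand if objective(cand) < objective(x) else x)
    # --- Learner phase: each learner consults a random peer ---
    final_pop = []
    for i, x in enumerate(new_pop):
        j = random.choice([k for k in range(len(new_pop)) if k != i])
        y = new_pop[j]
        r = random.random()
        if objective(x) < objective(y):     # move away from the worse peer
            cand = [x[d] + r * (x[d] - y[d]) for d in range(dim)]
        else:                               # move toward the better peer
            cand = [x[d] + r * (y[d] - x[d]) for d in range(dim)]
        final_pop.append(cand if objective(cand) < objective(x) else x)
    return final_pop

# Illustrative usage on a sphere function
if __name__ == "__main__":
    random.seed(0)
    sphere = lambda v: sum(c * c for c in v)
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
    for _ in range(50):
        pop = tlbo_step(pop, sphere)
    print(min(sphere(x) for x in pop))  # best objective after 50 iterations
```

Because candidates are accepted greedily, the best objective value in the population can never worsen from one iteration to the next.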
However, the TLBO algorithm was originally designed for unconstrained non-linear continuous optimization problems; meanwhile, there is only one teacher from whom the students can learn. Therefore, the algorithm should be properly improved to adapt to the uncertain IPPS problem, which is a discrete optimization problem. In this research, the original TLBO algorithm is improved by employing the coding and decoding schemes discussed below. Moreover, different teachers may be good at different subjects, and a single teacher is usually not enough to improve the students' knowledge level. This has practical significance: with only one teacher, the algorithm tends to be trapped in a local optimum in IPPS optimization, since the diversity of the population is largely limited. Therefore, the top 5% of the individuals are deemed teachers in the improved TLBO algorithm.

Encoding and decoding
In the proposed improved TLBO algorithm, an individual stands for a solution and a solution can also be mapped into an individual; this is realized using the coding and decoding procedures. Like previous research [21,23,51], the coding scheme, presented in Fig. 2a, considers operation combination selection, machine selection, and operation permutation simultaneously. The coding scheme contains three strings: the scheduling string, the process plan string, and the operation string. The process plan string contains only one position, in which the operation combination ID is recorded. By properly sequencing the operations belonging to the selected operation combination and selecting the corresponding machines, a feasible process plan can be obtained. The operation string contains the information of each operation belonging to the job. Each job has exactly one process plan string and one operation string; therefore, the number of jobs, i.e., $|n|$, corresponds to the number of process plan strings as well as operation strings. According to Fig. 2, the process plan string is attached to the corresponding operation string. In an operation string, the sequence of the operations is determined using the binary tree method [66] according to the precedence relationships specified in the network graph. The number of positions in an operation string depends on the number of operations in the selected operation combination and equals $|R_{ih}|$; in each position, the selected machine together with the nominal processing time is given in a pair of brackets. The scheduling string contains $|R_{ih}|_{\max}$ positions and is a permutation of job IDs; if the actual number of operations in the selected operation combination is less than the maximum one, $|R_{ih}| < |R_{ih}|_{\max}$, the vacant positions are filled with 0s.
In the scheduling string, the operation-based coding paradigm is adopted to avoid any possible infeasibility: if job ID i appears for the jth time at the current position, the current position refers to the operation in the jth position of the operation string of job i.
For example, there are three jobs in the IPPS instance in Fig. 2a; they contain three, two, and four operations, respectively. The third operation combination is selected for the first job, and there are three operations in this combination. For all the three jobs, there are nine operations in total; thus, there are 9 nonzero positions in the scheduling string. The third position of the scheduling string holds job ID 3 and this ID appears there for the second time: the third operation to be scheduled is in the second position of the operation string of job 3. This operation is the second operation of job 3 and it will be processed on machine 2.
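The operation-based decoding of the scheduling string can be sketched as below. The data layout (operation strings as lists of operation/machine pairs) is a simplification of ours, not the paper's data structure:

```python
def decode_scheduling_string(sched_string, operation_strings):
    """Map an operation-based scheduling string to a schedulable sequence.

    The j-th occurrence of job ID i refers to the j-th entry of job i's
    operation string; 0s are padding for vacant positions. Each operation-
    string entry here is a hypothetical (operation_id, machine) pair.
    """
    counters = {}                       # how often each job ID has appeared
    sequence = []
    for job in sched_string:
        if job == 0:                    # vacant position, skip
            continue
        counters[job] = counters.get(job, 0) + 1
        op, machine = operation_strings[job][counters[job] - 1]
        sequence.append((job, op, machine))
    return sequence

# Example echoing the text: job ID 3 appears for the second time at the
# third position, so the third scheduled operation is job 3's second
# operation, processed on machine 2 (hypothetical instance data).
ops = {
    1: [("O1", 1), ("O2", 4), ("O3", 2)],
    2: [("O1", 3), ("O2", 5)],
    3: [("O1", 1), ("O2", 2), ("O3", 6), ("O4", 3)],
}
seq = decode_scheduling_string([3, 1, 3, 2, 3, 1, 2, 3, 1], ops)
print(seq[2])  # -> (3, 'O2', 2)
```

Because every job ID occurs exactly as many times as its job has operations, any permutation of the scheduling string decodes to a precedence-feasible operation sequence.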
The decoding procedure is developed based on the active scheduling paradigm, in which insertions of operations are allowed to shorten the makespan. The final makespan is a TNS number; therefore, the decoding procedure is developed based on the existing deterministic parameter-based scheduling method with some improvements. In the following, the decoding procedure proposed in this research is described.
1. Based on the coding scheme, especially the scheduling string and the operation string, determine the scheduling sequence according to which the operations will be allocated on machines one by one.
2. For each operation to be scheduled, determine the machine as well as the neutrosophic processing time $\tilde{t}_{ijk} = [t^T_{ijk}, t^I_{ijk}, t^S_{ijk}](T_{t_{ijk}}, I_{t_{ijk}}, F_{t_{ijk}})$.
3. If the current operation $O_{ijk}$ is to be scheduled on a machine on which previous operations have been located, the idle time slots are checked one by one. This process can further be divided into the four situations below, where JP and MP denote the job predecessor and the machine predecessor of an operation, and SET denotes the ending time of the time slot.
3.1 If the current operation has neither JP nor MP, its starting time is the starting time of the time slot; if the score value of the resulting completion time does not exceed $SF(SET)$, insert the current operation into this time slot. This procedure is illustrated in Fig. 3a, where the curves representing indeterminacy and falsity are not given. In a Gantt chart with neutrosophic processing times, the curves of the starting times are given below the horizontal lines, while those of the neutrosophic completion times are above the horizontal lines.
3.2 If the current operation has a JP and no MP, the starting time in the time slot depends on the completion time of the JP, $t_{JP}$. If $SF(t_{JP} + t_{ijk}) \le SF(SET)$, insert the current operation into this time slot. This procedure is illustrated in Fig. 3b.
3.3 If the current operation has an MP and no JP, the starting time is determined analogously by the completion time of the MP. This procedure is illustrated in Fig. 3c.
3.4 If the current operation has both a JP and an MP, the starting time is determined by the maximum completion time between them; if the resulting completion time fits the time slot by score value, insert the current operation into this time slot. This procedure is illustrated in Fig. 3d.
4. If the current operation cannot be arranged in any idle time slot on a certain machine, it can only be appended at the end of that machine's sequence. In such a case, as in Step 3.4, the starting time of the current operation is determined by the maximum completion time between its JP and MP.
5. Return to Step 2 to schedule the next operation until all the operations have been processed.
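Step 3's slot test can be sketched with TNS addition and score comparison. The addition rule for the membership values ($T_1 + T_2 - T_1T_2$, $I_1 I_2$, $F_1 F_2$) follows a common definition for triangular neutrosophic numbers and is an assumption here, as are the helper names; this is not the paper's implementation:

```python
def tns_add(a, b):
    """Add two triangular neutrosophic numbers.

    Each number is ((a1, a2, a3), (T, I, F)). The triangle parts add
    component-wise; memberships combine as T1+T2-T1*T2, I1*I2, F1*F2
    (a common TNS addition rule -- an assumption, see lead-in).
    """
    (x1, x2, x3), (t1, i1, f1) = a
    (y1, y2, y3), (t2, i2, f2) = b
    return ((x1 + y1, x2 + y2, x3 + y3),
            (t1 + t2 - t1 * t2, i1 * i2, f1 * f2))

def score(a):
    """Score function SF used throughout: (a1 + 2*a2 + a3)(2 + T - I - F)/12."""
    (a1, a2, a3), (t, i, f) = a
    return (a1 + 2 * a2 + a3) * (2 + t - i - f) / 12.0

def fits_slot(slot_start, proc_time, slot_end):
    """Can the operation finish within the idle slot, compared by score value?"""
    return score(tns_add(slot_start, proc_time)) <= score(slot_end)

# A crisp number a is ((a, a, a), (1.0, 0.0, 0.0)) and its score is a:
crisp = lambda a: ((a, a, a), (1.0, 0.0, 0.0))
print(score(crisp(7)))                            # -> 7.0
print(fits_slot(crisp(10), crisp(4), crisp(15)))  # -> True
```

Note that this addition rule is also why the truth membership of a completion time grows as more processing times are summed along a machine, an effect visible later in the neutrosophic Gantt chart.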

Genetic operators
The crossover procedure is the main genetic operator, and it is responsible for the learning process. In GA, excellent gene fragments in the parents are passed on to their offspring through the crossover procedure; similarly, the teacher disseminates knowledge to the learners in TLBO, and this is also realized by performing a crossover between the teacher and the learners (the teacher is not changed by the crossover procedure). As shown in Fig. 2b, c, there are two levels in the crossover operator. For two individuals, e.g., parent 1 (P1) and parent 2 (P2) in the figure, several jobs are determined randomly. Exchange the operation strings and the process plan strings of the selected jobs, and keep the other strings as they are. In Fig. 2b, jobs 2 and 3 are selected and the corresponding operation strings and process plan strings are exchanged. Depending on the selected operation combination, the number of operations needed to complete a job may differ; this affects the job IDs in the scheduling strings of both individuals. As presented in Fig. 2b, the scheduling strings in both P1 and P2 are adjusted accordingly: the job IDs of the unselected jobs in P1 (P2) are kept in the same positions in O1 (O2), and the job IDs of the selected jobs in P2 (P1) are placed at the void positions in offspring O1 (O2) in the same sequence as they originally appear in the scheduling string of P2 (P1). Finally, the void position(s) are filled with 0s to avoid any possible infeasibility. In this case, operation combination 4 of job 3 contains only three operations and there will be only 8 operations in O2; thus, a 0 is added in O2. The resultant two offspring, O1 and O2, are also given in Fig. 2b. In the 'teacher' phase, P1 and P2 stand for the teacher and a learner, respectively, and the teacher is restored to its state before the crossover process. In the 'learner' phase, P1 and P2 are two randomly selected individuals.
There is another kind of crossover operator: the crossover at the process planning level. The operations in the same operation combination may have different permutations as long as the precedence relationships in the network graph are satisfied. For the operations of the same job in P1 and P2, if they have identical operation combinations (i.e., the same number in the corresponding process plan strings), the crossover between their operation strings can be performed. The single point crossover [67] is adopted to maintain the feasibility of the two operation strings. For instance, suppose the first operation combination has been selected by the same job and the single point crossover is performed. A crossover point is randomly determined; the operations before the crossover point are kept untouched and the positions after the crossover point are filled with the remaining operations in the order they appear in the other operation string. As a result, two distinct process plans are generated in Fig. 2c. The work flow of the whole algorithm is depicted in Fig. 4.
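The single-point crossover on two permutations of the same operation set can be sketched as follows (operation identifiers hypothetical). Positions after the cut are refilled with the leftover operations in the order they occur in the other parent, so both offspring remain valid permutations; precedence feasibility then rests on the parents being feasible, as the cited operator [67] requires:

```python
def single_point_crossover(p1, p2, cut):
    """Order-preserving single-point crossover for two permutations of the
    same operation set (same operation combination in both parents).

    Offspring keep one parent's prefix up to `cut`; the remaining positions
    are filled with the leftover operations in the order they occur in the
    other parent, so each offspring is still a permutation of the set.
    """
    def fill(prefix, other):
        rest = [op for op in other if op not in prefix]
        return prefix + rest
    return fill(p1[:cut], p2), fill(p2[:cut], p1)

# Hypothetical operation permutations of the same operation combination
o1, o2 = single_point_crossover(
    ["O1", "O2", "O3", "O4", "O5"],
    ["O1", "O3", "O2", "O5", "O4"],
    cut=2,
)
print(o1)  # -> ['O1', 'O2', 'O3', 'O5', 'O4']
print(o2)  # -> ['O1', 'O3', 'O2', 'O4', 'O5']
```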

Experiments with discussions
To simulate real-life situations, three scenarios, i.e., large uncertainty, medium uncertainty, and small uncertainty, are designed. According to Fig. 5a, some parameters are defined to describe the three scenarios: the fluctuation $|\alpha|$, the range of the truth membership function $T$, the range of the indeterminacy membership function $I$, and the range of the falsity membership function $F$. By properly setting these parameters, the three scenarios can be simulated. For example, the large uncertainty scenario corresponds to larger $|\alpha|$, $I$, $F$ values and a smaller $T$ value, while small $|\alpha|$, $I$, $F$ values mean the uncertainty has been reduced. The $|\alpha|$ value determines the bounds of the $t^T$ and $t^S$ values according to Fig. 5a, and $t^I$ is the nominal processing time of an operation. Table 2 gives the ranges of these parameters in the different scenarios: the $t^T$ and $t^S$ values are uniformly sampled from the ranges $[(1-\alpha)t^I, t^I]$ and $[t^I, (1+\alpha)t^I]$, respectively, and the membership values are sampled from the scenario-dependent ranges in Table 2.

Kim's benchmark [17], which contains 24 instances, is adopted to test the proposed TLBO algorithm. The 24 instances cover small-scale, medium-scale, and large-scale instances; in each instance, there are 15 machines to process the operations. The objective is to minimize the maximum completion time (score value), as stated in objective function (18). The proposed algorithm is coded in the C++ language and is run on a computer with an Intel i5-9600 3.7 GHz CPU and 16 GB memory. The number of individuals is set to 800 and the algorithm is stopped after 800 iterations. For each instance, 5 independent computations are performed.
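The scenario sampling rule described above can be sketched as follows. The concrete membership-value ranges per scenario are placeholders of ours standing in for Table 2, which is not reproduced here:

```python
import random

def sample_tns_time(nominal, alpha, t_range, i_range, f_range, rng=random):
    """Build a TNS processing time [tT, tI, tS](T, I, F) for one operation.

    tI is the nominal time; tT ~ U[(1-alpha)*tI, tI] and
    tS ~ U[tI, (1+alpha)*tI]. The membership values are drawn from
    scenario-dependent ranges (placeholders for Table 2).
    """
    t_lo = rng.uniform((1 - alpha) * nominal, nominal)
    t_hi = rng.uniform(nominal, (1 + alpha) * nominal)
    return ((t_lo, nominal, t_hi),
            (rng.uniform(*t_range), rng.uniform(*i_range), rng.uniform(*f_range)))

# Placeholder scenario settings (NOT the paper's Table 2 values): larger
# alpha, I, F and smaller T mean larger uncertainty.
SCENARIOS = {
    "large":  dict(alpha=0.30, t_range=(0.6, 0.8), i_range=(0.2, 0.4), f_range=(0.2, 0.4)),
    "medium": dict(alpha=0.20, t_range=(0.7, 0.9), i_range=(0.1, 0.3), f_range=(0.1, 0.3)),
    "small":  dict(alpha=0.10, t_range=(0.8, 1.0), i_range=(0.0, 0.2), f_range=(0.0, 0.2)),
}
(t_lo, t_mid, t_hi), _ = sample_tns_time(nominal=20, **SCENARIOS["large"])
print(t_lo <= t_mid <= t_hi)  # -> True
```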
We first give an intuitive presentation of the convergence of the algorithm. The convergence curves of both the score value and the nominal makespan of Instance 24 with large uncertainty (the last instance is the most complex one) are presented in Figs. 6 and 7. Since the score value of the makespan is calculated from the truth, the indeterminacy, and the falsity membership values using objective function (18), it is usually smaller than the nominal makespan value. As the teaching and learning optimization processes go on, both values decrease; this reflects the effectiveness of the improved TLBO algorithm. According to Fig. 6, the curve of the nominal makespan sometimes rises, while the other curve declines all the way; the reasons are: (1) a scheduling scheme with a larger nominal makespan value may perform better in resisting the uncertainty in processing times, and (2) the search is guided by the score value of the makespan instead of the nominal makespan. It is noteworthy that the nominal makespan values in Figs. 6 and 7 are larger than those of the deterministic cases [50,51]. However, this is reasonable: compact operation permutations in a scheduling scheme with a small makespan value usually squeeze out the idle time between operations, and hence the scheduling scheme cannot absorb processing time fluctuations. In other words, a scheduling scheme with a larger makespan value may be much more robust. Similar phenomena can be observed in Fig. 7, where the plain TLBO algorithm is adopted. Nevertheless, the resultant score value of the neutrosophic makespan is larger than in the case of Fig. 6 (compare the red curves in the two figures). This indicates that the improved TLBO algorithm performs better than the plain TLBO algorithm. Below, the performances of the two algorithms are compared further.
In the following, the results obtained by the two algorithms are compared; the corresponding results are listed in Tables 3, 4, 5, 6, 7, and 8. The results obtained by the improved TLBO algorithm and the plain TLBO algorithm with large uncertainty in processing times are compared first. For the nominal makespan values, the plain TLBO algorithm performs better than the improved TLBO algorithm on most of the 24 instances. It may seem that the plain TLBO algorithm is more promising; however, the nominal makespan value is of little use in uncertain IPPS problems, since the actual scheduling scheme may lose effectiveness in real-life cases where the processing times are undetermined. The score value of the neutrosophic makespan is more important; as can be seen, the score values of the neutrosophic makespan obtained by the improved TLBO algorithm are smaller for most instances, and the better results in Table 3 are shown in bold. A smaller score value of the neutrosophic makespan means that the resultant scheduling scheme is more robust when the processing times are contaminated with uncertainties. The enhanced 'teacher' phase of the proposed improved TLBO algorithm maintains a high diversity of individuals during the learning process, and hence more competitive results are observed. Besides, the maximum and the minimum score values of the neutrosophic makespan of the improved TLBO algorithm are also better than those of the plain TLBO algorithm; this further testifies that the proposed algorithm outperforms the plain TLBO algorithm in uncertain IPPS optimization. As analyzed above, a scheduling scheme with a larger nominal makespan value may perform better in uncertain processing time scenarios, since the idle time slots can absorb the uncertainty or fluctuations in processing times; this conclusion holds after comparing the average nominal makespan values and the average score values obtained by the two algorithms in Table 3.
For example, the mean nominal makespan value of Instance 1 obtained by the improved TLBO algorithm is 486.20, larger than that obtained by the plain TLBO algorithm (458.20); but the average score value of this instance is 392.63 with the improved TLBO algorithm, better than that of the plain TLBO algorithm.
Other meta-heuristic algorithms, including the genetic algorithm (GA), the differential evolution (DE) algorithm, and the particle swarm optimization (PSO) algorithm, have also been applied in this research for comparison. In the PSO algorithm, particles (individuals) follow the best particle as well as their own historical optima. In both the GA and the DE algorithm, the crossover probabilities are set to 0.6; the scaling factor in DE and the mutation probability in GA are set to 1.0 and 0.05, respectively. As in the proposed TLBO algorithm, 800 individuals are employed in each of the three algorithms, and each algorithm is stopped after 800 iterations. Five independent computations are performed for each instance. The corresponding results with comparisons are presented in Tables 4, 5, and 6. According to Table 4, the improved TLBO algorithm performs better than GA on large-scale instances, while GA performs better than the improved TLBO algorithm on some small-scale instances. This reveals that the improved TLBO algorithm has better exploration and exploitation capacities: the solution space is more complex in large-scale instances, and in such cases GA may be trapped in a local optimum. A similar phenomenon can be observed for the DE and PSO algorithms according to Tables 5 and 6: an individual mainly refers to or imitates the global best individual, and hence the population lacks genetic diversity. Therefore, such individuals are not likely to produce high-quality solutions. From Tables 5 and 6, it can be perceived that the improved TLBO algorithm performs consistently better than the DE and PSO algorithms. In the following, the results of the medium uncertainty and the small uncertainty scenarios are compared and analyzed. Based on the number of jobs in an instance, the 24 instances can be classified into three types: small-scale, medium-scale, and large-scale instances.
Typical instances covering the three types are selected and the results are briefly summarized in Tables 7 and 8. Similar situations can be observed in Tables 7 and 8: the average score values of the neutrosophic makespan yielded by the improved TLBO algorithm are better than those of the plain TLBO algorithm in both the medium uncertainty and the small uncertainty cases. This also reflects the superiority of the improved TLBO algorithm. As also presented in the tables, the average nominal makespan values yielded by the improved TLBO algorithm are larger than those obtained by the plain TLBO algorithm; again, this means that a scheduling scheme with a larger nominal makespan value can resist the risk of processing time fluctuations, whereas a so-called 'optimal' scheduling scheme in the deterministic case will result in vulnerability or fragility in performance. This research therefore also indicates that considering only the optimal makespan value in deterministic processing time scenarios is not enough to deal with real-life situations on the shop floor, and it gives some clues for the optimization of scheduling problems with uncertain processing times. Figure 8 gives a Gantt chart of Instance 24 with deterministic (nominal) processing times, and the corresponding neutrosophic version with large uncertainty is presented in Fig. 9, where the scales of the x-axis are the score values of the neutrosophic makespan. The neutrosophic starting times are given below the horizontal line, while the neutrosophic completion times are depicted above the line. For a clear view, only the broken line presenting the truth membership $T_x$ is given for each starting/completion time in the Gantt chart. By comparing the two Gantt charts, it can be found that the processing sequences (operation permutations) on each machine in the two Gantt charts are identical; the Gantt chart in Fig. 9 is the actual version of the one in Fig. 8. According to Fig. 8, the last operation is operation 3.19 on machine M2 and all other operations are completed earlier than operation 3.19; nevertheless, the Gantt chart in Fig. 9 indicates that the actual situation is different from the case in Fig. 8. Clearly, the idle time between operations on machines 2, 3, 5-8, and 13-15 is quite necessary to absorb processing time fluctuations. It can also be observed from Fig. 9 that the amplitude of the truth membership value increases continuously along the schedule, because the truth membership value increases while the indeterminacy and the falsity membership values decrease after each addition of two TNS numbers.

Conclusions
This research mainly focuses on the uncertain IPPS problem. The integration of process planning and scheduling can achieve an efficient utilization of resources and reduce conflicts in a flexible manufacturing system; nevertheless, due to uncertain processing times, the implementation of a so-called 'optimal' scheduling scheme usually results in deterioration of the performance of the manufacturing system. To address such uncertain IPPS problems, this paper presents a TNS-based methodology. TNSs are employed for the first time to model uncertain processing times, and a TNS-based mathematical model is established to formulate the problem. We further discuss how to convert an MILP model of the deterministic IPPS problem into the TNS-based one. Due to the NP-hardness, the uncertain IPPS problem is solved by the improved TLBO algorithm to seek robust solutions. The outstanding feature of the TLBO algorithm is its parameter independence; more teachers are included in the improved TLBO algorithm to improve the individual diversity and hence intensify the search ability of the algorithm. The score function is adopted in determining the neutrosophic makespan and the corresponding score value is treated as the objective. We test Kim's benchmark in the experimental study, and the results show that the improved TLBO algorithm is better than the plain TLBO algorithm in the small, the medium, and the large uncertainty scenarios alike. More importantly, we find that a scheduling scheme with some idle time slots may perform better, because these idle time slots can absorb processing time fluctuations. Finally, two Gantt charts, with nominal as well as neutrosophic processing times, are given.
To further improve the solution quality, e.g., on some small-scale instances, problem-specific neighborhood structures can be designed and incorporated into the TLBO algorithm. In this research, the uncertainty levels of the neutrosophic sets in the experiments are determined based on predetermined scenarios; in real-life applications, they should rely on actual data from the shop floor, so prior knowledge or data are required in applying the neutrosophic set-based optimization method. In addition, this research considers only the neutrosophic makespan criterion. In many cases, machine utilization is another important criterion; therefore, the multi-objective uncertain IPPS problem can be listed as a future research direction. In real-life situations, there are other kinds of uncertainties, such as machine breakdowns, random job arrivals, and preventive maintenance of machines. Such factors can be included in the uncertain IPPS problem and considered in subsequent studies.