
1 Introduction

Minute variations and differences exist in any production process, regardless of the quality of its design and maintenance. SPC is a powerful problem-solving toolset that helps stabilize a process and improve its efficiency by reducing variability [1]. One of the main objectives of SPC is to detect assignable causes and process changes quickly, investigate the reasons for the deviations and apply corrective actions before more defective products are produced. The most essential part of implementing SPC is control charting; control charts are useful in determining process behaviour [2]. The presence of non-random trends in a control chart usually has a significant impact on process performance. Control chart patterns (CCPs) such as Cycle, Trend, Shift and Systematic, as the basic patterns, have roots in the process and typically appear in the control charts of most quality characteristics. Because control charts consider only the current sample data, they cannot present any pattern-related information; on the other hand, applying the available run rules can increase false alarms [1]. Since analysing control charts is difficult, requiring both statistical knowledge and process experience, the main motivation of this study is the challenge of developing an intelligent system that can identify defects, detect sources of deviations and recommend corrective actions automatically. Our investigation shows that no existing model handles this challenge. To address it, the main research question is:

What would be a suitable intelligent methodology to describe and analyze control charts for the fault diagnosis of the process in order to advise decision makers?

Considering that neural networks are well-established alternatives for pattern recognition, and that expert systems are effective in identifying causes of deviations and recommending corrective actions, we attempt to answer the research question by merging these models, building on successful experiences in both areas. The rest of the paper is organized as follows: Sect. 2 describes the contribution to smart systems, Sect. 3 is devoted to the literature review, the research methodology is discussed in Sect. 4, the EDSS model is presented in Sect. 5, and conclusions are provided in Sect. 6.

2 Contribution to Smart Systems

Smartness is at the heart of Industry 4.0. Smart systems refer to a diverse range of technological systems that can operate autonomously or in collaboration with other systems. They are able to combine functionalities, including sensing and controlling a particular situation, in order to describe and analyze it. Smart systems can predict, decide and communicate with the user through a user interface, and they have been applied in areas such as energy, transportation, security, ICT, industrial manufacturing and control [3]. The intelligence of smart systems is associated with autonomous operation based on adaptability and learnability; this idea carries a sense of evolution and refers to a process of modification and improvement over time. Neural networks, with their ability to learn, and expert systems, with their command- and rule-based features, are good examples of smart systems [4]. In the process control area, the main purpose of utilizing smart systems is to develop an intelligent system for real-time control and monitoring of the process. This work contributes to issues related to the supervisory aspects of smart systems, considering data acquisition, information transmission, command-and-control and the cognitive features of NNs and ESs. The main objective is to provide a model that supports more intelligent and adaptive monitoring in a smart system.

3 State of the Art

This section reviews related work that has used ESs and NNs to interpret SPC charts.

3.1 Application of ESs in SPC

To remain competitive in the global market, more attention should be paid to extracting quality engineering knowledge in a systematic manner. Availability, consistency, extensibility and testability are the major advantages of ESs for SPC users [5]. Applications of ESs in SPC are briefly described below. Evans [6] designed an ES for the interpretation of x-bar and R charts using three sets of rules. Reference [7] proposed a knowledge-based SPC system built on general knowledge of the process, capable of monitoring variations in a process. In another study [8], a hybrid system for SPC implementation was proposed; it used ANN models to analyze control charts and an ES to diagnose the plausible causes, but it can only recognize patterns with full features. Reference [9] developed a knowledge-based assistant for process monitoring. Chung [10] integrated a decision support system (DSS), ANN, SPC and ICT to facilitate decision making on the production line. Reference [11] employed image processing and multivariate SPC to develop a visual detection ES, [12] focused on ES and SPC for selecting a collaborative commerce system, and [13] developed an ES based on multivariate control charts for fault detection in induction motors.

3.2 Application of ANNs in SPC

The principal reason for applying NNs in SPC is to automate SPC chart interpretation [14]. That author divides the literature in this area into structural change identification (changes in the process mean or variance) and pattern recognition. Most of the early research focused on detecting mean and variance shifts using approaches similar to [15], including [16,17,18]. Cheng [18, 19] designed multi-layer networks to simulate variance change, and [24] presents an NN-based approach for detecting bivariate process variance shifts. For pattern recognition, NNs are used to discriminate between random and non-random patterns. For example, [20] used LVQ to detect normal patterns, trends, sudden shifts and cycles; [21] proposed a hybrid learning model using back-propagation networks (BPN) and decision trees; [22] proposed a selective NN ensemble approach for CCP identification; [23] proposed an NN to address the problem of monitoring a multivariate, multistage process; and [24] identifies CCPs using Fourier descriptors and NNs. In [25], a hybrid model of a recurrent neural network (RNN) and regression was utilized to recognize CCPs. Reference [26] developed an NN classifier for CCPs using a Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model, [27] applies a multivariate exponentially weighted moving average (MEWMA) and NNs to identify the starting point of variations, [28] presents a combination of ANN and support vector machine (SVM) learning methods for CCP recognition, [29] is an attempt to separate basic and mixture CCPs using NNs and Independent Component Analysis (ICA), and [30] is an example of using a Paraconsistent ANN (PANnet) and SPC in an electrical power system.

4 Research Methodology

As shown in the conceptual model in Fig. 1, this study presents the structure of an ANN-based EDSS model to control the plaster manufacturing process. In order to describe the implementation steps of the model and validate it, a case study was carried out at the Semnan Noor plaster factory, a producer of construction plaster and micronized plaster according to Iran National Standard No. 1-12015 for construction plaster production. The investigated product is POP (Plaster of Paris), or construction plaster, which is produced through a calcination process at a temperature of 150 °C in a closed reactor system called a baking kiln, during which gypsum loses 1.5 mol of its water of crystallization and converts to construction plaster. In order to improve the quality of the process, after identifying all parameters of each control station in the plaster production process, experts were interviewed about the importance of each variable in the process. Then, according to the existing records, the causes of defect occurrence in construction plaster production that were associated with the quality features of the plaster were examined. Finally, the initial setting time of the plaster, which should be at least 7 min and at most 15 min, was diagnosed as the critical parameter of the process. This parameter depends on the plaster's crystal water (CW) after baking; the acceptance range in the investigated factory is LSL = 5.0 to USL = 5.08 weight percent, and the process works under control with basic limits of about LCL = 5.26 to UCL = 5.56. This parameter, as the CW index, was analyzed using the Failure Modes and Effects Analysis (FMEA) and SPC methodologies. The statistical population of this study was the baked plaster of the Low Burn kiln in the plaster production process, produced in a specified time period and sampled from the baking tail silo by stratified random sampling at random times, in different shifts and by different operators. Following previous research on quality control, 25 subgroups of 5 samples (n = 125) were found suitable. Due to the nature of the plaster production process, sampling was carried out 8 times per shift, and each time five 25-gram samples were taken from a random point of the silo for measurement.

Fig. 1. Conceptual model

5 Model Design and Construction

The EDSS is composed of three subsystems. The first subsystem uses statistical formulas to determine control chart limits for the sampled data (weight percentage of CW), calculates the process capability index (Cpk) and warns if any point falls outside the control limits. The second subsystem is responsible for identifying unnatural patterns, and the purpose of the third subsystem is to interpret the causes of deviations and recommend preventive or corrective actions.
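As an illustration of what the first subsystem computes, the sketch below estimates X-bar/R control limits and Cpk from subgroup data using standard Shewhart constants for a subgroup size of 5; the function and variable names are illustrative and not taken from the authors' implementation.

```python
import numpy as np

# Standard Shewhart constants for subgroup size n = 5
A2, D3, D4, d2 = 0.577, 0.0, 2.114, 2.326

def spc_summary(subgroups, lsl, usl):
    """Compute X-bar/R control limits, Cpk and out-of-control flags
    for a (num_subgroups, 5) array of measurements."""
    xbar = subgroups.mean(axis=1)                        # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)    # subgroup ranges
    xbarbar, rbar = xbar.mean(), r.mean()

    limits = {
        "xbar": (xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar),
        "r": (D3 * rbar, rbar, D4 * rbar),
    }
    sigma = rbar / d2                                    # within-subgroup sigma estimate
    cpk = min(usl - xbarbar, xbarbar - lsl) / (3 * sigma)

    lcl, _, ucl = limits["xbar"]
    out_of_control = np.where((xbar < lcl) | (xbar > ucl))[0]
    return limits, cpk, out_of_control
```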

5.1 Developing an ANN

In this section, the procedure of pattern simulation is explained first, followed by the development steps of the NN.

Simulating Different Patterns of the Process Control Charts.

In statistics, natural deviations can be characterized by the probability distribution function of the corresponding random variable; this is the basis for simulating natural behaviour in process control charts [31]. In this study, due to the lack of a large volume of data for unnatural patterns, the data were simulated based on the original process data.
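A minimal sketch of such a simulation, using the common Monte Carlo formulations of basic CCPs from the literature; the parameter values shown are placeholders, whereas the study draws its ranges from Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ccp(pattern, mu=0.0, sigma=1.0, n=25, shift=0.0, slope=0.0,
                 amplitude=0.0, period=8, start=0):
    """Generate one window of n points for a given control chart pattern."""
    t = np.arange(n)
    x = mu + rng.normal(0.0, sigma, n)          # natural (common-cause) component
    if pattern == "shift":
        x += shift * (t >= start)               # sudden level change after 'start'
    elif pattern == "trend":
        x += slope * t                          # steady upward/downward drift
    elif pattern == "cycle":
        x += amplitude * np.sin(2 * np.pi * t / period)
    elif pattern == "systematic":
        x += shift * (-1.0) ** t                # alternating up/down behaviour
    return x

window = simulate_ccp("trend", slope=0.05)      # illustrative call only
```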

Designing the Structure of NN Model.

The overall NN model is composed of two separate parts, module I and module II (Fig. 2).

Fig. 2. Structure of the NN model

Topology of the LVQ Network Designed in Module I.

The first part of the model is designed with the general aim of identifying and classifying the input patterns. For this purpose, an LVQ network with two half-connected layers was designed using a competitive learning algorithm. The recognition window has 25 components (input layer neurons). The first layer of the network contains 175 neurons and the second layer 8 neurons. The main consideration in determining the number of first-layer neurons was reducing incorrect pattern identification; in addition, roughly equal numbers of neurons were assigned to patterns with equal numbers of parameters (Table 1).

Table 1. Neurons of the first layer
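A minimal numpy sketch of an LVQ1 classifier with this topology (25 inputs, 175 competitive neurons, 8 pattern classes). The prototype initialization, the prototype-to-class assignment and the training loop are assumptions for illustration, since the paper does not specify them; in the actual design the prototypes per class follow Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT, N_PROTO, N_CLASS = 25, 175, 8   # window size, competitive neurons, patterns

class LVQ1:
    """Minimal LVQ1: 175 prototype (Kohonen) neurons mapped to 8 classes."""
    def __init__(self, lr=0.01):
        self.lr = lr
        self.w = rng.normal(size=(N_PROTO, N_INPUT))           # prototype vectors
        self.labels = rng.integers(0, N_CLASS, size=N_PROTO)   # placeholder class assignment

    def _winner(self, x):
        return np.argmin(np.linalg.norm(self.w - x, axis=1))   # Euclidean competition

    def fit(self, X, y, epochs=50):
        for _ in range(epochs):
            for x, target in zip(X, y):
                j = self._winner(x)
                step = self.lr * (x - self.w[j])
                # move the winner toward a correctly classified input, away otherwise
                self.w[j] += step if self.labels[j] == target else -step

    def predict(self, X):
        return np.array([self.labels[self._winner(x)] for x in X])
```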

Topology of MLP Networks Designed in Module II.

The second part of the model was developed with the aim of estimating the parameters of each unnatural CCP, based on the definitions given in Table 2, and estimating the starting point of the unnatural patterns. For this purpose, 7 two-layer perceptron networks analyze the basic and simultaneous patterns. Because different directions of change in the process (upward or downward patterns) must be recognized, and the output values lie in the range [−1, 1], the activation function is a bipolar sigmoid with constant A = 0.1. Each MLP network in module II has 26 inputs in total, one of which is the bias and is set equal to one. For each MLP network, given a target error value, the number of iterations required to reach that error is calculated, and the hidden layer size requiring the fewest iterations is selected (Table 3). In the output layer of each MLP, the number of neurons equals the number of parameters of the corresponding pattern.

Table 2. Range of changes in unnatural pattern parameters
Table 3. Optimum number of hidden layer neurons, training iterations and maximum cumulative error in module II.
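The bipolar sigmoid activation and a single forward pass of one module-II network can be sketched as follows; the weight matrices are placeholders, not the trained weights of the study.

```python
import numpy as np

def bipolar_sigmoid(x, a=0.1):
    """Bipolar sigmoid with slope constant a; outputs lie in (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-a * x)) - 1.0

def mlp_forward(x, w_hidden, w_out):
    """Forward pass of a two-layer MLP with 25 inputs plus one bias input (26 in total).
    Assumed weight shapes: w_hidden is (n_hidden, 26), w_out is (n_out, n_hidden + 1)."""
    x = np.append(x, 1.0)                     # bias input fixed at one
    h = bipolar_sigmoid(w_hidden @ x)
    h = np.append(h, 1.0)
    return bipolar_sigmoid(w_out @ h)         # pattern parameter estimates in [-1, 1]
```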

NNs Training.

The LVQ network in module I, with learning rate λ = 0.01, is trained by allowing competition to take place among the Kohonen neurons; the competition is based on the Euclidean distances between the weight vectors of these neurons and the input vector. The MLP networks are trained with back propagation and an adaptive learning rate, in which the weights of each layer are corrected using the output and its derivative until the network is fully trained [31]. The training data set is applied to the corresponding networks by category, and the error is calculated at each step until the learning process is complete. The network error is a cumulative error, defined below, where p is the number of patterns, o the number of output neurons and \( d_{ij} \) the desired value of output j for pattern i.

$$ E = \frac{1}{2}\sum_{i=1}^{p}\sum_{j=1}^{o}\left(d_{ij} - o_{ij}\right)^{2}. $$
(1)
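For completeness, Eq. (1) as a small helper, assuming the desired and actual outputs are stored as (patterns × outputs) arrays; the argument names are illustrative.

```python
import numpy as np

def cumulative_error(desired, actual):
    """Cumulative error of Eq. (1) over all patterns and output neurons."""
    return 0.5 * np.sum((desired - actual) ** 2)
```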

In this study there were 11,000 training samples in total: 4,000 samples as the training data set of the LVQ network, with 500 data assigned equally to each pattern, and 7,000 samples for the MLPs, with 1,000 samples produced for each of the 7 networks. To produce the training set, the maximum and minimum values of the input parameters were first defined, and then the data were scaled and mapped according to the type of transfer functions used.

$$ A_{scale} = \min + \frac{\max - \min}{A_{\max} - A_{\min}}\left(A - A_{\min}\right). $$
(2)

The scaled data range is [−5, +5] in the LVQ network and [−1, +1] in the MLP networks; the MLP inputs, originally in the range [3.4, 7.4], were scaled first, and the outputs were scaled using separate max and min values (Table 4). Because each pattern has two directions of change (upward and downward), the desired output is set to 1 or −1; for example, the output vector of Natural would be [1 0 0 0 0 0 0 0] and of Downward Shift [0 −1 0 0 0 0 0 0]. The maximum cumulative error (MCE) for training the LVQ is 0.047 (188 of 4,000 training data) and 0.0525 for testing. The MCE and training iterations of each MLP can be seen in Table 3.

Table 4. Scaled value ranges for the corresponding outputs of the MLP networks
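A sketch of the min-max mapping of Eq. (2); the sample values are illustrative, not measured CW data.

```python
import numpy as np

def rescale(a, new_min, new_max):
    """Min-max scaling of Eq. (2): map array a onto [new_min, new_max]."""
    return new_min + (new_max - new_min) * (a - a.min()) / (a.max() - a.min())

raw = np.array([5.30, 5.41, 5.52, 5.38, 5.44])   # illustrative window of CW values
lvq_input = rescale(raw, -5.0, 5.0)               # range used by the LVQ network
mlp_input = rescale(raw, -1.0, 1.0)               # range used by the MLP networks
```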

Test the NN Model.

In the training stage, the network's efficiency increases by minimizing the error between the real and desired outputs; in the testing stage only the input data are given to the network, and the model is validated through the predicted response relating input and output variables.

Evaluation of LVQ Network in Module I.

One of the major criteria in the development of the LVQ network in the proposed model was reducing the probability of incorrect identification. Considering the diversity of trained patterns, the model responds well on this point, as well as in the indirect detection of individual patterns and the incomplete diagnosis of mixed behaviours (Table 5). Moreover, for each identification window (input vector) the network makes a decision on the process status, so the probability of decision-making errors must be considered: if the network mistakenly classifies the behaviour of an in-control process as unnatural, a type I error has occurred, and if an unnatural pattern in the process is not detected, a type II error has occurred. The performance of module I and the LVQ network was measured with 400 test vectors, each a sample of 25 points representing one of the 8 identified patterns. The LVQ error is calculated as 0.052, and the results for the 400 experimental vectors are summarized in Table 5.

Table 5. Evaluation results of module I

Testing and Evaluation of the MLP Networks in Module II.

One of the critical problems in NN training is over-fitting to the training data; "generalization" is as important as "training" in NNs [32]. To address this problem, part of the training vectors is set aside as validation data: the training data drive the parameter updates, while the validation data track the error during learning. The validation error, like the training error, naturally decreases, but as soon as the network begins to over-learn, the validation error stops decreasing or starts to increase even though the training error continues to fall. At that point the training process stops, and the parameters corresponding to the minimum validation error are taken as the algorithm's final answer. After training, the performance of the network was tested on several examples; the results show that module II identifies and analyzes the defined parameters successfully and efficiently (Table 3).
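A generic early-stopping loop matching this description; `net`, its methods and the patience value are placeholders rather than the authors' implementation.

```python
def train_with_early_stopping(net, train_set, val_set, max_epochs=500, patience=20):
    """Stop training when the validation error no longer improves and restore the
    parameters that achieved the minimum validation error (a sketch; `net` is assumed
    to expose train_one_epoch(), error(), get_weights() and set_weights())."""
    best_error, best_weights, stale = float("inf"), None, 0
    for _ in range(max_epochs):
        net.train_one_epoch(train_set)
        val_error = net.error(val_set)
        if val_error < best_error:
            best_error, best_weights, stale = val_error, net.get_weights(), 0
        else:
            stale += 1                       # validation error no longer improving
            if stale >= patience:
                break                        # stop before the network over-learns
    net.set_weights(best_weights)            # parameters at minimum validation error
    return net
```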

Verification of the NN Model.

After evaluating the NN model on the test data set, the model was verified by comparing the NN error with the error of discriminant analysis (DA), in order to determine the accuracy, repeatability and stability of the results over repeated tests. DA is a comparable statistical classification method, and the statistical software SAS was used for this purpose. Figure 3 shows the error of each class in the Rate row and the weight of each class in the Priors row; 0.3325 is the total error of DA on the test data set. As seen in Fig. 4, the neural networks have much better performance and precision than the DA method.

Fig. 3. DA error in the test data set classification

Fig. 4. Comparing the error of NN and DA for each pattern
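The DA baseline was run in SAS; a rough Python equivalent (an assumption, not the authors' tooling) for computing a comparable misclassification rate would be:

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def da_error(X_train, y_train, X_test, y_test):
    """Misclassification rate of linear discriminant analysis on the test windows.
    X_* are 25-point windows and y_* the pattern labels (placeholders for the study's
    simulated CCP data)."""
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    return 1.0 - lda.score(X_test, y_test)   # comparable to the reported 0.3325
```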

5.2 Developing an ES

Steps discussed in this section include knowledge acquisition, knowledge representation and implementation:

Knowledge Acquisition.

In this study, following a knowledge management system (KMS) approach, the Western Electric tests are used as general knowledge to support the diagnostic process. To obtain specific knowledge of the process, Cause and Effect diagrams were first prepared to determine the most likely causes of problems in the process (Fig. 5). After several reviews, the variables affecting the crystal water rate were divided into controllable and uncontrollable variables. Controllable variables include: the kiln's negative pressure (filter), kiln body temperature, flame profile, fuel pressure adjustment, kiln blades and humidity in the raw material. Uncontrollable variables include air temperature and changes in the raw material due to the mine. Next, the FMEA methodology was used to identify and prioritize failure modes in the plaster production process, the necessary actions to eliminate or reduce the occurrence of the failure modes were taken, and finally the results of the analysis were recorded in the knowledge base of the designed system to create a full reference for future problems.

Fig. 5. Cause and effect diagram for the increase of crystal water

Knowledge Representation.

In this study, a rule-based approach is used as the general framework for knowledge representation. The experts' problem-solving knowledge was encoded in the form of IF <Situation> THEN <Action> rules, and this set of rules forms the knowledge base of the ES. The rules were obtained from manuals, procedures, technical documentation and interviews with experienced engineers and technicians; during the interviews, the conversations were recorded in detail and then transferred to the FMEA worksheet. In total, 60 rules were applied as specific and general knowledge for interpreting the X-bar and R control charts.
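As an illustration of how such IF-THEN rules might be encoded, the sketch below stores one rule (the example given later in the Implementation subsection) as a dictionary and matches it against a set of facts; the data layout and field names are hypothetical, not the authors' format.

```python
# Illustrative representation of one IF <Situation> THEN <Action> rule
RULES = [
    {
        "if": {
            "interpretation": "Upward Trend",
            "failure_mode": "Increase of Crystal Water",
            "process_index": "Decrease of kiln's temperature",
        },
        "then": {
            "special_cause": "Fuel nozzle choking",
            "corrective_actions": [
                "Cleaning the fuel nozzle",
                "Establishment of PM for the burner",
                "Installing a fuel filter",
            ],
        },
    },
]

def match(facts, rules=RULES):
    """Return the conclusions of every rule whose conditions are all satisfied."""
    return [r["then"] for r in rules
            if all(facts.get(k) == v for k, v in r["if"].items())]
```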

Implementation.

The proposed hybrid model (EDSS) was designed with three main modules: the first is the knowledge base, the second is the interface design, and the third runs the system and handles the dialogue with the user. The EDSS, which can also be called intelligent statistical process control (ISPC), has the following components:

  (1) Knowledge base, which is composed of three main parts:

      (a) Events, extracted from ISO records, preventive maintenance and calibration records, brainstorming sessions and FMEA forms.

      (b) Procedures, which include technical instructions, the plaster production standard, the ISPC manual, etc.

      (c) Rules, consisting of 60 rules extracted from experts and from documents such as the Western Electric tests, statistical formulas and the NN's analysis, presented in the form of IF-THEN statements.

The Western Electric tests, which use general knowledge of the process for control chart interpretation, include: points outside the control limits (1 point beyond +3σ or −3σ), gradual changes in level (9 points above or below the CL), trends (6 points in a row steadily increasing or decreasing), systematic variations (14 points in a row alternating up and down), cycles (4 of 5 points beyond +2σ or −2σ), and mixtures (8 points in a row beyond +1σ or −1σ) (Table 6).

Table 6. FMEA form for the critical parameter (crystal water)
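A sketch of how a few of the run rules listed above could be checked programmatically; the thresholds follow the text, while the function and variable names are illustrative.

```python
import numpy as np

def run_rule_violations(xbar, cl, sigma):
    """Check some of the listed run rules on a sequence of subgroup means."""
    z = (np.asarray(xbar) - cl) / sigma
    flags = {}
    flags["beyond_3_sigma"] = bool(np.any(np.abs(z) > 3))     # 1 point beyond 3σ
    flags["level_change"] = any(                               # 9 points on one side of CL
        np.all(z[i:i + 9] > 0) or np.all(z[i:i + 9] < 0)
        for i in range(len(z) - 8)
    )
    diffs = np.diff(z)
    flags["trend"] = any(                                      # 6 points steadily rising/falling
        np.all(diffs[i:i + 5] > 0) or np.all(diffs[i:i + 5] < 0)
        for i in range(len(diffs) - 4)
    )
    return flags
```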

Example.

The following example represents a typical rule based on specific knowledge of the process:

  • IF interpretation is “Upward Trend”

  • AND Failure Mode is “Increase of Crystal Water”

  • AND Process Index is “Decrease of kiln’s temperature”

  • THEN Special Cause can be "Fuel nozzle choking"

  • AND Corrective Actions can be "Cleaning the fuel nozzle, establishing PM for the burner, or installing a fuel filter".

  (2) Inference engine: a backward-chaining inference engine is used for troubleshooting. The program works with two sets of rules: the first group defines goals for the properties and, if it cannot determine a property's value with the existing rules, asks the user to provide it; the second group performs update actions such as modifying rules and propagating the satisfied goals. A minimal sketch of this chaining follows this list.

  (3) Working memory, which consists of the events and facts used by the rules. This memory stores temporary data produced during problem solving, including the user's answers to the system's questions and the facts derived from the reasoning process, such as the unnatural patterns, their starting points and the corresponding pattern parameters identified by the NN.

  (4) User interface, which works with the inference engine and the knowledge base to provide two-way communication between the user and the EDSS. Users can answer a question by selecting Yes or No, or select an item from a menu on the screen.
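The backward-chaining behaviour described in item (2) can be sketched as follows, with rules stored as (conditions, conclusion) pairs and a user-query callback standing in for the dialogue; all names are illustrative, not the system's actual code.

```python
def backward_chain(goal, facts, rules, ask_user):
    """Try to establish `goal`: use a rule that concludes it and recursively prove
    the rule's conditions; if no rule applies, fall back to asking the user.
    `facts` is a mutable set acting as the working memory."""
    if goal in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(c, facts, rules, ask_user) for c in conditions
        ):
            facts.add(goal)                  # store the satisfied goal in working memory
            return True
    if ask_user(goal):                       # property value unknown: ask the user
        facts.add(goal)
        return True
    return False
```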

5.3 Integration of NN and ES

To develop the ISPC, the capabilities of NNs and the ES are combined for automatic interpretation of control charts. The proposed system is able to perform most traditional SPC operations, including calculation of the control limits (LCL, CL and UCL), calculation of Cpk, a data normality test, and checking whether the X-bar and R charts are in control. In SPC, the base charts must be prepared first; for this purpose, after initial sampling, the control limits and Cpk are calculated in the in-control state. In this study, according to the opinion of the plaster factory experts, the chart is accepted as the basis if Cpk > 1 (Fig. 6).

Fig. 6. SPC operations

Experimental Results of the EDSS Model.

Using real data from the current plaster production process and the interface software, a practical example is presented (Fig. 7). As shown in Fig. 6, although there is no out-of-control condition in the R-chart, the process is not capable of meeting the specifications, because Cpk < 1 (it equals 0.81). On the other hand, on selecting the x-bar chart, the user is faced with the message "X-bar chart is out of control". The ES, using the Western Electric tests, then announces that points falling outside the control limits may be the result of carelessness in measurement, machinery stops or off-spec materials, and suggests that the user check for unnatural patterns with the NN. The NN not only identifies a downward shift pattern in the X-bar chart, but also estimates the starting point of the unnatural pattern (point 6) and the displacement parameter (−0.161). After that, the ES asks the user to describe the defect in order to find the causes of potential failure modes in the process; if the answer is not among the options, the ES says "refer to the experts!". In this example, given the appearance of the downward shift pattern and the user's observation of "kiln body scarlet", the cause of the defect was announced as "temperature exchange of the kiln with the environment due to loss of refractory and thickness", and "establishment of maintenance and inspection of the refractory" was proposed as the corrective or preventive action. After carrying out the corrective action and re-sampling the process, the control charts showed no out-of-control condition and the process capability improved from Cpk = 0.81 to Cpk = 1.15. Ultimately, the designed system was examined with several examples and the results were considered satisfactory. However, the system's performance still needs improvement, so long-term use of the model should be identified as part of the objectives of the study, since the model design is highly influenced by it.

Fig. 7. Results of EDSS for a practical example

6 Conclusions

The main goal of this work was to demonstrate the capabilities of emerging technologies and algorithms for dealing with quality issues on the shop floor. A hybrid EDSS was designed to support operators in troubleshooting the plaster production process. For this purpose, an ES and an NN were integrated via the designed interface software for reasoning about deviation sources and recommending corrective actions. The ES tries to narrow down the fault area as far as possible so that the serviceman can focus on the right point. In the structure of the current model, the features of LVQ and MLP networks are used in two modules, so that the competitive power of the LVQ network in pattern classification and the capability of multi-layer perceptron networks in estimating the parameters of abnormal patterns in different process control charts are used simultaneously. This integrated approach provides an appropriate basis for implementing the intended ideas. Given the training of the network to identify basic and mixed significant behaviours, the numerical results in Table 5 show that the output of module I in diagnosing behavioural patterns is acceptable. The results also show that module II, which estimates the parameters of the corresponding patterns, is effective and reliable (Table 3). The work covers multiple tasks, including the production, delivery and encoding of the NN input, the use of a wide range of data in training the NNs, and performing most SPC tasks via the integrated system, such as drawing the base chart, checking the X-bar and R charts for being in control and calculating Cpk. Smartness is a key criterion in Industry 4.0, and the case study shows the capability of the proposed model to bring more intelligence to production lines.