Introduction

One of the main differences between the classical experimentation process and the stochastic optimization methods is that, in classical experimentation, many of the runs designed and executed are useless because they are out of specification; they are non-conformant parts (scrap). In the direct search methods, the idea is to run the process, minimize the non-conformant product and find the best parameter combination. One of the most used methodologies is the one proposed originally by Box (1957), modified by Spendley et al. (1962) and then by Nelder and Mead (1965), among many others; these authors are the ones that contributed most significantly to the original proposal of Box. Box's algorithm, known as evolutionary operation (EVOP), was transformed by Spendley into the simplex-EVOP, while the modification of Nelder and Mead is known simply as the Nelder–Mead simplex (NMS).

Direct search methods (DSM) pursue the purpose of optimization: to drive the response or responses to a maximum, a minimum or a target. The main differences with respect to the classical experimentation and optimization methods (DOE, response surface methodology) are shown in Table 1.

Table 1 Differences between DOE/RSM and direct search methods

DSM are also known as unconstrained optimization techniques. These methods were very popular in the 1960s, but by the 1970s they lost popularity because of criticism from the scientific community. Nevertheless, these methods are still in use; in fact, in the last 15 years they have undergone many changes and modifications, as researchers continuously try to overcome some of their restrictions and/or apply them to particular situations.

Taguchi's crossed array

Experimental design, classical or Taguchi, requires an arrangement to combine all the levels of all the variables considered in the design. This generates all the combinations, or some of the combinations (fractional designs), of those levels, named "runs". These runs are executed and the results analyzed and evaluated. The measures of error and effects are considered to guide the actions to be taken in terms of input-variable adjustment.

Taguchi developed a special arrangement, identified as the crossed array, to consider variables defined as noise factors. The purpose of this arrangement is to run the original design (control variables) under the noise-factor conditions; executing the experimental runs and finding the best level combination of the control variables under these conditions is what makes a process robust. In Fig. 1, an L9(3⁴) × L8(2³) crossed array example is shown, Taguchi (1986). The inner array contains the nine runs for the control variables; the outer array contains the eight runs for the noise variables.

Fig. 1
figure 1

Taguchi's crossed array, example
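To make the crossing mechanism concrete, the following minimal Python sketch pairs every inner-array run with every outer-array run; the run lists shown are illustrative placeholders, not the actual L9(3⁴) and L8(2³) column assignments of Fig. 1.

```python
from itertools import product

# Illustrative inner (control-factor) and outer (noise-factor) run lists.
# These are placeholders, not the real orthogonal-array columns.
inner_runs = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3)]   # ... 9 rows in an L9
outer_runs = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]  # ... 8 rows in an L8

# Crossing the arrays: every control-variable run is executed under every
# noise condition, so a full L9 x L8 cross yields 9 * 8 = 72 runs.
crossed = [(c, n) for c, n in product(inner_runs, outer_runs)]

for control_levels, noise_levels in crossed:
    print("control:", control_levels, "noise:", noise_levels)
```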

An important observation deserves attention: designed experiments will provide the best levels for the variables included in the study, but these levels are not necessarily the optimum ones. In order to find an optimum, another methodology needs to be applied. This strategy is the response surface methodology, which works under the same conditions as DOE.

The Nelder and Mead simplex

As shown in Fig. 1, in the classical arrays the levels are predesigned; that is, once the arrangement is defined, these levels stay fixed during the process of experimentation. In the NMS, the algorithm starts with predesigned levels, and then these levels are modified through the iterations of the algorithm according to a set of rules (the operations of the simplex). This is the mechanism applied to modify the levels of the factors so that the optimization is accomplished. The proposal of Spendley et al. (1962) consists in the use of a simplex (a geometric arrangement); in general, a polyhedron of n + 1 vertices (for two input variables this is a triangle). The search mechanism consists in using one of the vertices as a pivot (called the worst vertex) to estimate the next vertices, always moving towards the best vertex (maximum, minimum, target), until the optimum combination is found (the response converges to the desired point); an example is shown in Fig. 2.

Fig. 2
figure 2

The EVOP-simplex algorithm, Spendley et al. (1962), example

Nelder and Mead (1965) modified Spendley's model by adding four operations to the polyhedron: reflection, contraction, expansion and shrinkage; an example of these operations is shown in Fig. 3.

Fig. 3
figure 3

Operations on the NMS: reflection, contraction, expansion and shrinkage
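As an illustration of these four operations, the following minimal Python sketch implements them for a two-variable simplex (a triangle); the coefficient values 1, 2, 0.5 and 0.5 are the ones commonly quoted for reflection, expansion, contraction and shrinkage, and the sample vertices are arbitrary.

```python
import numpy as np

def reflect(worst, centroid, alpha=1.0):
    # Reflect the worst vertex through the centroid of the remaining vertices.
    return centroid + alpha * (centroid - worst)

def expand(worst, centroid, gamma=2.0):
    # Push further along the reflection direction when it is promising.
    return centroid + gamma * (centroid - worst)

def contract(worst, centroid, beta=0.5):
    # Pull back towards the centroid when the reflection is poor.
    return centroid + beta * (worst - centroid)

def shrink(simplex, best_index, sigma=0.5):
    # Shrink every vertex towards the best one.
    best = simplex[best_index]
    return np.array([best + sigma * (v - best) for v in simplex])

# Example with two control variables; in a real run the "worst" vertex is
# the one whose measured response is poorest.
simplex = np.array([[1.0, 1.0], [2.0, 1.0], [1.5, 2.0]])
worst = simplex[0]
centroid = simplex[1:].mean(axis=0)
print(reflect(worst, centroid), expand(worst, centroid), contract(worst, centroid))
```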

In these methods, the iterative process continues until the algorithm cycles from one simplex to another; when this happens, it indicates that a local optimum has been found. Another indicator is that the simplex becomes very small and the variance of the response is reduced. Spendley et al. (1962) conclude that the speed of movement towards the optimum is inversely proportional to the variation in the response variable.

It can be observed that there is a great difference between these two strategies: while the classical methods need to suspend production, generate scrap and use extraordinary resources, equipment time, etcetera, the DSM are executed during normal production. As shown in Table 1, both strategies have advantages and disadvantages.

As pointed out before, Nelder and Mead modified Spendley's simplex algorithm by adding four operations. Box and Draper (1966) concluded that this algorithm, known to this day as NMS, is the most efficient and dependable. Many additions and modifications have been made to this algorithm [Hunter and Kittrell (1966); Parkinson and Hutchinson (1971); Torczon (1989) and Walter Frederick (1991)], but its essence remains intact.

After an exhaustive review of most of these direct search methods, we can conclude the following:

Strengths

  • They can be applied to a continuous process, so non-conformant product (scrap) is minimized.

  • The operations added by Nelder and Mead make the iterative process faster and more efficient, through a considerable reduction of iterations.

  • The significance test and the start/stop criteria proposed by Sánchez-Leal (1991) provide a guideline to reduce unnecessary iterations and costs.

Weaknesses

  • Most of these methods do not consider noise factors.

  • Because noise factors are not considered, these algorithms cannot be used to characterize processes.

  • Taguchi methods consider noise factors, but invasive classical experimentation is needed.

  • There is no clear definition of the best way to measure the response (or it has not been considered in these algorithms); the most used approximation is Taguchi's signal-to-noise ratio, but there are many concerns about its efficiency and dependability.

Considering the strengths and weaknesses found, this new proposal is named Armentum because it really is an agglomeration of the main concepts of the strongest methodologies studied. The idea is to eliminate the weaknesses while being supported by the strengths; among other characteristics, it considers:

  • The concept of continuous operation of Box (1957).

  • The minimization or elimination of non-conformant product (scrap).

  • A noise environment, created by high-effect factors (uncontrollable in production, although controllable for experimentation purposes).

  • The evaluation of the response in terms of real capability (Ppk), as a start/stop criterion.

  • The inclusion of a dual response. The dual response is a means to add robustness to the process because it considers the mean and the standard deviation at the same time.

Methodology

Two control variables and two noise variables are considered to illustrate the logic of this algorithm. Figure 4 shows the basic proposal, in which the objective is to use the NMS algorithm while adding noise conditions at each vertex, in analogy with Taguchi's crossed array; that is, to "penalize" the operative conditions with the systemic variation (noise).

Fig. 4
figure 4

Basic proposal

Figure 5 shows how the external array follows the newly generated vertex, using as an example the reflection, one of the NMS operations:

Fig. 5
figure 5

The external array follows the new vertex

It has to be noted that the external array is not optimized by the NMS operations, because it is not of interest to optimize these noise conditions; their purpose is to penalize the control variables, and this is the way robustness is added to the process.

As shown in Fig. 6, the process continues until the best conditions are found. This algorithm can be extended to more applications, such as screening, for example. The external array can contain not only noise factors but also control variables; if this is the case, the NMS operations and algorithm can be applied to them as well.

Fig. 6
figure 6

The NMS iterative process with the external array

As explained before, the concept of Taguchi's crossed array is used here, but instead of using one orthogonal array as the inner array and another orthogonal array as the outer array, we use the variable simplex as the inner array and a 2² factorial as the outer array. We have to keep in mind that the NMS is used as the inner array to maintain the "continuous running process" objective.
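To illustrate these mechanics, the sketch below (Python, illustrative only) evaluates one simplex vertex under the four conditions of the 2² outer array; the noise-level labels and the run_process placeholder are assumptions standing in for the actual production runs, and the readings collected would then be scored (the paper uses the dual response introduced later in Eq. 1) before the NMS operations decide the next vertex.

```python
from itertools import product

# The 2^2 outer (noise) array: production speed x drying, low/high settings.
# The labels are placeholders for the actual noise levels used in the study.
OUTER_ARRAY = list(product(("speed_low", "speed_high"), ("drying_low", "drying_high")))

def run_process(ink_pressure, plate_card_angle, noise_condition):
    """Placeholder: execute one production run under the given settings and
    return the measured luminosity."""
    raise NotImplementedError("connect to the real process measurement")

def evaluate_vertex(vertex):
    # One simplex vertex = one combination of the control variables
    # (ink pressure, plate card angle). It is run under all four noise
    # conditions of the outer array; the resulting readings are then
    # scored, and that score is what the simplex minimizes.
    ink_pressure, plate_card_angle = vertex
    return [run_process(ink_pressure, plate_card_angle, n) for n in OUTER_ARRAY]
```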

Figure 7 shows a graphical representation of the crossed array proposed by Taguchi, while Figs. 8, 9 and 10 represent the conditions analogous to Taguchi's concept for the proposed new algorithm.

Fig. 7
figure 7

Taguchi's crossed array

Fig. 8
figure 8

The inner array is replaced with a simplex (two control variables)

Fig. 9
figure 9

The outer array is replaced with a 2² factorial design

Fig. 10
figure 10

Final approach

Results

In order to test this new combination of methodologies, it was applied to a continuous flexography process used to print a particular product label. The machine is a Mark Andy 830; Figs. 11 and 12 show the machine and a schematic of the basics of the process.

Fig. 11
figure 11

Source: http://www.flexoexchange.com

Mark Andy 380

Fig. 12
figure 12

The flexography process

It is important to mention that originally the evaluation of the quality of the printed label was made visually, based on the experience of the process operator and the quality inspectors. It is not the purpose of this paper to document all the activities and methodological steps followed to transform this visual inspection into a hard measurement system, but it is important to consider these issues before any experimentation. The output measure was transformed into a continuous variable named luminosity, measured by an instrument that distinguishes three dimensions of light: luminosity, the red–green spectrum and the yellow–blue spectrum. Statistical analyses led to adopting luminosity as an equivalent measure, correlated with the visual inspection. The experimentation and optimization data are presented in terms of this output variable.

The classical array

In order to characterize this process, a full factorial design, with two levels, two blocks and four center points, was applied. Table 2 shows the factors and the levels considered for experimentation, and Fig. 13 shows the arrangement and the output-variable measurements (luminosity).

Table 2 First DOE set up
Fig. 13
figure 13

Full factorial design with four central points
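For reference, a minimal Python sketch of how such a run list can be generated is shown below; it assumes, for illustration only, four factors in coded units, since the actual factors and level settings are those listed in Table 2, and blocking and randomization are omitted.

```python
from itertools import product

# Coded levels: -1 = low, +1 = high, 0 = center; the real settings are in Table 2.
factors = ["speed", "ink_pressure", "plate_card_angle", "drying"]  # assumed names

# 2^4 = 16 factorial corner runs.
corner_runs = [dict(zip(factors, levels))
               for levels in product((-1, 1), repeat=len(factors))]

# Four center-point runs (all factors at their mid setting).
center_runs = [dict.fromkeys(factors, 0) for _ in range(4)]

runs = corner_runs + center_runs
print(len(runs), "runs")  # 20 runs before blocking and randomization
```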

The design of the product is to print five lines of labels at the same time; for comparative purposes, only the data of the first line of labels are shown. Line one of labels is on the outer border of the material band, so it is more sensitive to changes in the input variables. This condition is kept constant in all the comparisons.

Figure 14 shows the results from Minitab®; note that the 3-way interactions were removed because they turned out to be statistically insignificant. Figures 15 and 16 show the standardized-effects Pareto chart and the main effects plot, respectively.

Fig. 14
figure 14

Analysis of variance, factorial design

Fig. 15
figure 15

Standardized effects Pareto chart for luminosity

Fig. 16
figure 16

Main effects plot for luminosity

P values of 0.000 for plate card angle and 0.056 for ink pressure indicate the relevance of the main effects. From all this information it can be concluded that the major effects are generated by the variables speed, ink pressure and plate card angle. The dotted line in Fig. 16 indicates the optimum response for luminosity, which is 66 units. This goal and its acceptable tolerance (between 65 and 67 units) were determined by a multifunctional team: production, quality control, process engineering and top management.

In this phase of the whole process, the next step would be to design a new experiment with only the strongest input variables, including more levels to increase the spectrum of parameters and find the "best" ones. It has to be kept in mind that the design and analysis of experiments does not find an optimum; it only leads to the best option available.

Response surface methodology would be the proper approach to optimize the process; nevertheless, this methodology implies more runs, more scrap, more lost production and more resources. As a matter of fact, a great deal of effort was needed to convince top management to let this team run the first experiment; this is a common situation in the real world.

The hybrid combination: Armentum

The combination of the Nelder–Mead simplex with Taguchi's crossed array was initiated by generating three starting vertices (combinations) of the control variables included in the analysis. These variables were the ink pressure and the plate card angle, set as the inner array (the simplex algorithm). Production speed and drying were considered noise factors and set as the 2² factorial outer array. A worksheet was designed to arrange the simplex iterations and the outer 2² array, to register the run outputs and to evaluate the response. Figure 17 shows the worksheet with the results of the algorithm run. As an evaluation of the response, and in order to provide another way to penalize the process so that it becomes more robust, a dual response was used, which was optimized to a minimum, considering the goal of 66 units of luminosity, Montgomery (1997). The first three vertices are set as an initial simplex; the rest are calculated with the NMS operations.

$${\text{Rd}} = \left| {\bar{y} - 66} \right| + 3S_{x}$$
(1)
Fig. 17
figure 17

The Armentum worksheet

According to Eq. 1, the goal of the dual response is zero, meaning that the objective has been reached. In this case the algorithm was stopped at vertex no. 28 because the stop criterion was met: the simplex began to cycle from one vertex to another.
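For clarity, Eq. 1 can be evaluated with a few lines of code; the sketch below (Python) uses the sample mean and sample standard deviation of the luminosity readings taken under the outer array, and the readings in the example call are invented for illustration only.

```python
import statistics

def dual_response(readings, target=66.0):
    # Eq. 1: Rd = |y_bar - target| + 3 * S, where y_bar and S are the mean
    # and standard deviation of the readings taken under the outer array.
    y_bar = statistics.mean(readings)
    s = statistics.stdev(readings)
    return abs(y_bar - target) + 3 * s

# Illustrative (made-up) luminosity readings for one vertex:
print(dual_response([65.8, 66.3, 66.1, 65.9]))
```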

Process capability and comparisons

Figures 18 and 19 show the initial and final capability of the process. In both cases, a random sample of 30 labels was taken. The initial capability study was run under the actual operating conditions of the process. The graphs were generated with Minitab®.

Fig. 18
figure 18

Initial capability, luminosity

Fig. 19
figure 19

Final capability, luminosity
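As a reference for the start/stop criterion mentioned earlier, the following minimal Python sketch computes Ppk from a sample, assuming the specification limits of 65 and 67 luminosity units defined by the team; the capability figures reported in Figs. 18 and 19 themselves come from Minitab®.

```python
import statistics

def ppk(sample, lsl=65.0, usl=67.0):
    # Ppk uses the overall (long-term) standard deviation of the sample,
    # here the 30-label random sample described above.
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)
    return min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))
```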

Table 3 shows a comparison between the two approaches. It has to be considered that the methodologies were used in different phases: the first DOE was applied to characterize the process and the new combination to optimize it. Nevertheless, if a response surface analysis were used to optimize the process, a lot of useless runs would be scrapped, production would be stopped, etcetera. As a reference, Fig. 20 shows the first design for RSM, with 14 runs, 2 replicates, 3 cube and 3 axial center points and an alpha of 1.414; RSM requires at least two designs, a first one to determine the best-fitted equation of the response surface and a second one to minimize or maximize the response. The data for RSM in Table 3 are only an estimation.

Table 3 Methodology comparison results
Fig. 20
figure 20

RSM first design

Conclusions

There is a substantial practical difference between these methodologies. It has been shown that DOE requires extraordinary resources, causes lost production and generates scrap during the process of experimentation. This new combination can be applied to any continuous production process, generating less scrap and with the great advantage of not stopping production, which is one of the main inhibitors of the use of these continuous-improvement strategies.

In this case study, the process was completely out of control, so the first DOE was designed to characterize the variables. This note is important because in most cases a process that is actually running has an acceptable level of efficiency; that is, it produces mostly good parts, but the levels of the control variables, and the response variable itself, are not necessarily at their optimum. Under these conditions, the new algorithm can be developed from the initial experimentation process.