1 Introduction

In recent decades, the rapid development of micro-electromechanical systems (MEMS) has raised global interest in micro-operations, which are used in a wide variety of applications such as bioengineering [1], drug delivery [2], optics [3], and aerospace [4]. As the importance and range of applications of manipulating small parts and objects grow, the instruments and tools for carrying out such operations are attracting increasing attention from researchers.

Microgrippers are used to carry out micro-scale operations with the desired precision. Using a flexure-based compliant mechanism in the microgripper lowers manufacturing costs and eliminates friction, clearance, and the need for lubrication [5].

Common actuation mechanisms used to achieve the desired motion in microgrippers include electrothermal [6, 7], electrostatic [8], pneumatic [9], electromagnetic [10], shape memory alloy [11, 12], and piezoelectric actuators [13, 14]. Voice coil motors (VCMs) have been used to create extensive ranges of motion in micro/nano-positioning mechanisms while delivering a high speed of motion [15, 16].

Compared with other types of actuation, piezoelectric stack actuators offer advantages such as fast response, high resolution, stable displacement, and large output force [17, 18].

However, the output displacement of piezoelectric stack actuators is usually much smaller than the size of the objects to be manipulated.

Displacement amplification mechanisms have been utilized to amplify the input displacement of microgrippers, resulting in a greater range of motion in the gripping jaws [19].

Because microgrippers are expected to satisfy several competing requirements, some studies have applied multi-objective optimization to their design. Dao et al. [20] performed a multi-objective optimization based on displacement and frequency and obtained a set of optimal dimensions using the differential evolution algorithm. Ho et al. [21] performed an optimization based on a hybrid Taguchi-teaching-learning-based optimization algorithm (HTLBO), choosing displacement, frequency, and gripping effort as objectives. Xiao et al. [22] derived an optimal microgripper design based on input stiffness, safety factor, and amplification ratio; using a radial basis function network (RBFN) combined with a multi-objective genetic algorithm (GA), they obtained a set of optimal designs. Ho et al. [23] carried out an optimization using a hybridization of an adaptive neuro-fuzzy inference system (ANFIS) and the Jaya algorithm to achieve optimal displacement and resonance frequency. Grossard et al. [24] applied a flexible-building-block optimization method to a microgripper to achieve higher stroke and force amplification. Nguyen et al. [25] optimized a sand-crab-inspired compliant microgripper based on the amplification factor and the maximum stress created by a specified displacement. Das et al. [26] used computational methods to optimize a piezoelectric microgripper design to minimize parasitic motion and increase output displacement; their experimental results demonstrated low parasitic motion and a high-precision motion resolution.

An overview of the literature therefore reveals three major weaknesses:

First, because the full factorial method is used, the number of experiments required to achieve the desired result is very large. This makes theoretical analyses complicated and practical experiments costly and time-consuming.

Second, most studies optimize the microgripper toward a single design point. However, because of the wide range of microgripper applications, different outputs are needed: for example, sometimes high stress tolerance is required and sometimes a large amplification factor. Instead of a single design point, it is therefore preferable to give the user a set of designs from which the design point best matching the requirements at hand can be chosen.

Third, when the choice of the desired design point is left to the user, an unsystematic selection process leads to decision errors caused by an inaccurate perception of the requirements. A systematic multi-criteria decision-making process is therefore necessary; at the same time, the decision-making method should remain both effective and simple, since it can otherwise complicate the overall process.

To address the first weakness, experiments based on the full factorial method are replaced by Taguchi orthogonal arrays, which significantly reduce the number of runs while still covering the whole design space. In addition, when determining the correlations between parameters, low-effect terms are eliminated, leading to a concise, agile, and effective regression equation.

To address the second, instead of single-objective optimization, a multi-objective meta-heuristic algorithm based on the genetic algorithm is used, so that a single optimal point is replaced by an optimal Pareto front that provides output flexibility for different applications.

To solve the third problem, the Analytic Hierarchy Process (AHP), a multi-criteria decision-making method, allows the systematic selection of the desired design point according to the real expectations of the user based on a matrix of pairwise comparisons.

It is important to note that the focus of this research is not on a specific design or specific output parameters, but on providing a fast, flexible, efficient, and yet simple and low-cost algorithm for optimizing the parameters of a microgripper. Therefore, the particular choice of outputs (amplification factor, stress, vibration modes, dynamic or thermal considerations, etc.) and the method used to identify and validate the correlations are not essential; here they are obtained using the finite element method only. The algorithm can, of course, also be applied to experimental data.

In this research, we consider two important factors of a microgripper design that help in choosing a design for a specific application: the amplification factor (AF) and the maximum stress generated in the structure (to be minimized). For a set of designs that share the same overall structure but vary in some features, there is usually a trade-off between these two factors: increasing the amplification factor increases the maximum stress created in the design, and vice versa. Therefore, in this study, a novel method of obtaining the right design for a given set of needs was developed, combining a multi-objective optimization procedure with an importance criterion. Using this criterion, the relative importance of the amplification factor and the stress can be decided, which guides the choice of design. This approach of generating minor variations of a design and choosing among them is especially valuable because, for every new design, achieving a well-tuned control system requires considerable experimentation and design knowledge; with a universal design that needs only minor adjustments, it is already known how best to control it and where to place the sensors. Flexibility and efficiency are expected from such a universal design, so a large range of motion and a simple means of producing movement were two important factors in our design. To ensure a large range of motion, four stages of amplification were used, achieving a remarkable amplification factor.

This study aims to optimize a four-stage microgripper based on its range of motion and stress responses. Following the introduction, the design process and the computational procedure used to perform the analyses are presented. Afterward, the design of experiments and the mathematical modeling are explained. Results are subsequently presented to demonstrate possible optimized designs, and observations on these results are discussed further. Finally, conclusions are drawn based on the results and discussion.

2 Design of the microgripper and the finite element analysis

A main view diagram of the proposed microgripper is presented in Fig. 1. The microgripper comprises two stack piezoelectric ceramic actuators (SPCAs), fixing holes, preload bolts, bridge amplifiers, lever amplifiers, a number of circular and rectangular flexure hinges, and a pair of gripping jaws. Through elastic deformation, the flexure hinges transmit the displacements produced by the SPCAs to create the desired gripping motion. The symmetry of the structure improves accuracy, helps balance the structure's internal stress, and doubles the displacement amplification. Preload bolts are used to adjust the preload force on each SPCA; one end of each SPCA is fixed to the displacement transmission mechanism (DTM) using the preload bolt. Several fixing holes are used to keep the microgripper in place. Figure 2 presents the main design parameters of the proposed microgripper, which are also listed in Table 1.

Fig. 1

Schematic diagram of the proposed microgripper

Fig. 2

Main design parameters of the proposed microgripper

Table 1 Main design parameters of the microgripper

The piezoelectric actuators produce only a small output displacement; hence, four stages of displacement amplification were incorporated to ensure a large gripping stroke, leading to a large tip displacement. A bridge amplifier and three lever amplifiers were deployed to increase the output displacement.

Compliant mechanisms move solely through the elastic deformation of flexure hinges, so friction and unwanted motions are eliminated. As a result, the microgripper's control system can achieve greater positioning precision in such designs.

As for the material, an aluminum alloy was chosen for its favorable mechanical properties, such as relatively low density, high flexibility, low cost, and ease of machining by electric discharge machining (EDM). Specifically, 7075 aluminum alloy was used, which has a modulus of elasticity \(E = 71 \,{\text{GPa}}\), a Poisson’s ratio \(\nu = 0.33\), a yield strength \(\sigma_y = 455\,{\text{ MPa}}\), and a density \(\rho = 2810\,{\text{ kg/m}}^{3}\).

In order to reduce the computational cost, a half-model was used in the simulations, and an x-symmetry constraint was applied to the symmetry plane. A uniformly distributed displacement in the y-direction was applied to the surfaces in contact with the upper and lower surfaces of the piezoelectric actuator (5 µm in the "+y" direction for the upper surface and 5 µm in the "−y" direction for the lower surface). The surfaces of the fixing holes were fully constrained. The outputs of the simulation were the displacement of the gripping tips (which yields the amplification factor when divided by the input displacement of the piezoelectric actuator) and the maximum stress generated in the microgripper structure. The samples were meshed with linear hexahedral elements of type C3D8R with a size of 0.04 mm, and a static, general solver with nonlinear geometry was employed. A sensitivity analysis showed that the 0.04-mm mesh was fine enough to eliminate the effect of mesh size on the results. The displacement and stress contours of the microgripper are shown in Figs. 3 and 4.

Fig. 3

FEA results for the microgripper: displacement contour (nephogram)

Fig. 4

FEA results for the microgripper: stress contour (nephogram)

3 Materials and methods

3.1 Design of experiments using the Taguchi method

The Taguchi method is one of the most powerful and robust methods for designing experiments. Compared with the full factorial method, the Taguchi method drastically decreases the number of experiments by using orthogonal arrays [27]. Its main advantages are the reduction of cost and time through fewer experiments, the elimination of the effects of uncontrollable and noisy factors, and the identification of the main influencing factors [28, 29]. Furthermore, the method identifies the interactions between factors. Table 2 compares the most important capabilities required in designing experiments using the Taguchi orthogonal array method with the full factorial method. The Taguchi method can be summarized in the following steps:

1. Define the purpose of the study.

2. Define the dependent variable (output).

3. Describe the parameters affecting the output (independent variables).

4. Determine the levels of each independent variable and their validity range.

5. Select a suitable orthogonal array that is compatible with the number of variables and their levels (see the construction sketch after Table 2).

6. Perform the experiments according to the runs prescribed by the orthogonal array and record the results.

7. Analyze the results, apply the Taguchi algorithm based on the signal-to-noise ratio, and interpret the outputs.

Table 2 Comparison of the design of experiments by Taguchi orthogonal arrays with the full factorial method
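To illustrate step 5, the sketch below constructs an L25(5^6) orthogonal array, the array type used later in Sect. 4, from two base 5-level columns using a standard construction over GF(5). The factor-to-column assignment and the example level values are placeholders, not the levels of Table 3.

```python
import numpy as np

def taguchi_L25():
    """Build an L25(5^6) orthogonal array with levels coded 0..4.

    Rows are indexed by two base factors (i, j); the six columns are
    i, j, i+j, i+2j, i+3j, i+4j (mod 5). Because 5 is prime, every pair
    of columns contains each level combination exactly once, which is
    the defining orthogonality property of the array.
    """
    rows = [[i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5, (i + 4 * j) % 5]
            for i in range(5) for j in range(5)]
    return np.array(rows)          # shape (25, 6)

# Map the coded levels of one column to physical values (illustrative levels only):
design = taguchi_L25()
x1_levels = np.array([18.0, 19.0, 20.0, 21.0, 22.0])   # hypothetical X1 levels
x1_runs = x1_levels[design[:, 0]]                       # X1 value for each of the 25 runs
```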

An essential step in the Taguchi algorithm is defining the loss function and then defining the S/N ratio function in terms of the loss function [30]. The higher the signal-to-noise ratio, the higher the ratio of useful information to false information and noise [31]. Based on the nature of the dependent variable, the S/N ratio function can be defined in one of the following three ways [21,22,23,24,25, 27, 32,33,34]:

1. Smaller is better:

$$\frac{S}{N} = - 10 \log \left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} y_{i}^{2} } \right)$$
(1)

2. Larger is better:

$$\frac{S}{N} = - 10\log \left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \frac{1}{{y_{i}^{2} }}} \right)$$
(2)

3. Nominal is better:

$$\frac{S}{N} = 10\log \left( {\frac{{\overline{y}^{2} }}{{s_{y}^{2} }}} \right)$$
(3)

In these equations, \(y_{i}\) is the i-th observation and \(n\) is the number of observations [32], with

$$\overline{y} = \mathop \sum \limits_{i = 1}^{n} \frac{{y_{i} }}{n}$$
(4)
$$s_{y}^{2} = \mathop \sum \limits_{i = 1}^{n} \frac{{\left( {y_{i} - \overline{y}} \right)^{2} }}{n - 1}$$
(5)

Note that a higher signal-to-noise ratio at a given level indicates that the variable has a stronger effect on the dependent variable relative to misinformation and noise at that level. Also, the difference between the maximum and minimum signal-to-noise ratios across the levels of an independent variable, denoted by the parameter δ, indicates how strong an effect that variable has on the output (dependent) variable: the larger δ is, the greater the effect [32].
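As a minimal computational sketch of Eqs. (1) and (2) and the δ ranking, assuming the responses of the 25 runs are held in NumPy arrays (the array names and grouping are placeholders):

```python
import numpy as np

def sn_smaller_is_better(y):
    """Eq. (1): S/N ratio for a response where smaller values are desirable (e.g. ST)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_is_better(y):
    """Eq. (2): S/N ratio for a response where larger values are desirable (e.g. AF)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def response_table(factor_levels, response, sn_func):
    """Mean S/N per factor level and the delta (max - min) used to rank factors.

    factor_levels : coded levels of one factor for all runs (e.g. one L25 column)
    response      : measured output (AF or ST) for the same runs
    """
    factor_levels = np.asarray(factor_levels)
    response = np.asarray(response, dtype=float)
    sn_by_level = {lvl: sn_func(response[factor_levels == lvl])
                   for lvl in np.unique(factor_levels)}
    delta = max(sn_by_level.values()) - min(sn_by_level.values())
    return sn_by_level, delta
```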

3.2 Mathematical Modeling (Regression)

The collected data were analyzed with Minitab software (version 11). Different regression methods can be used to model, predict, and analyze various systems [25]. In this study, to find the relationship between the dependent variable and the independent variables, simple linear regression (SLR), stepwise linear regression (SWLR), full quadratic multiple linear regression (FQMLR), and backward-eliminated full quadratic multiple linear regression (BFQMLR) are used in turn to analyze the data. Then, using analysis of variance (ANOVA) parameters such as R², adjusted R², and predicted R², as well as the P values and finally the RMSE, the modeling methods are compared and the most accurate one is selected [35, 36]. The general equation of SLR is as follows:

$$y = \beta_{0} + \beta_{1} x + \epsilon$$
(6)

where \(x\) is the predictor and \(\beta_{0}\) and \(\beta_{1}\) are the model parameters (coefficients). The term \(\epsilon\) is the random component of the model, which follows an independent normal distribution. The estimation equation of the SLR method is as follows:

$$\hat{y} = b_{0} + b_{1} x$$
(7)

where \(b_{0}\) and \(b_{1}\) are the estimated parameters of the model and \(\hat{y}\) is the approximate value of the dependent variable \(y\).

In the SWLR method, effective parameters are added one after another while ineffective parameters are removed one by one. The general and estimation equations of the SWLR method are, respectively, as follows:

$$y = \beta_{0} + \beta_{i} x_{i} + \cdots + \beta_{j} x_{j} + \epsilon$$
(8)
$$\hat{y} = b_{0} + b_{i} x_{i} + \cdots + b_{j} x_{j}$$
(9)

where, \(x_{i} , \ldots ,x_{j}\) are the predictors and \(\beta_{0} , \ldots ,\beta_{j}\) are the model parameters. Also, \(b_{0} , \ldots ,b_{j}\) are the estimated model parameters and \(\hat{y}\) represents the approximate value of \(y\).

FQMLR was then used; its general equation is as follows [36, 37]:

$$y = {\beta _0} + {\beta _1}{x_1} + \cdots + {\beta _k}{x_k} + {\beta _{1,1}}x_1^2 + \cdots + {\beta _{k,k}}x_k^2 + {\beta _{1,2}}{x_1}{x_2} + \cdots + {\beta _{k - 1,k}}{x_{k - 1}}{x_k}$$
(10)

where \(\beta_{0} ,\beta_{1} , \ldots ,\beta_{k} ,\beta_{1,1} , \ldots ,\beta_{k,k} ,\beta_{1,2} , \ldots ,\beta_{k - 1,k}\) are the model parameters (coefficients) and \(x_{1} , \ldots ,x_{k}\), their squares, and their pairwise products are the predictors.

The estimation equation of the full-quadratic MLR is as follows:

$$\hat{y} = b_{0} + b_{1} x_{1} + \cdots + b_{k} x_{k} + b_{{1,1}} x_{1}^{2} + \cdots + b_{{k,k}} x_{k}^{2} + b_{{1,2}} x_{1} x_{2} + \cdots + b_{{k - 1,k}} x_{{k - 1}} x_{k}$$
(11)

where \(b_{0} ,b_{1} , \ldots ,b_{k} ,b_{1,1} , \ldots ,b_{k,k} ,b_{1,2} , \ldots ,b_{k - 1,k}\) are the estimated model parameters and \(\hat{y}\) is the predicted value of the dependent variable (output). To implement BFQMLR, at each step any term of the FQMLR equation with a \(P\,{\text{value}} > 0.05\) is deemed non-significant and is removed from the equation. The final equation therefore has fewer terms than FQMLR while being more consistent with the data. Consequently, for the final comparison, it is enough to compare the ANOVA and root mean square error (RMSE) results of SWLR and BFQMLR and choose between these two methods [35, 36].
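As a sketch of the FQMLR/BFQMLR step, the following assumes the 25 recorded runs are available as a design matrix X (25 × 6) and a response vector y, and uses statsmodels and scikit-learn; the α threshold mirrors the description above, while the data handling and variable names are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.preprocessing import PolynomialFeatures  # scikit-learn >= 1.0

def backward_quadratic_fit(X, y, names, alpha=0.05):
    """Fit a full-quadratic model (Eq. 10) and prune terms by P value (BFQMLR-style).

    X     : (n_runs, n_factors) design matrix, y : (n_runs,) response
    names : factor names, e.g. ["X1", ..., "X6"]
    At each pass the least significant term with P value > alpha is dropped.
    """
    poly = PolynomialFeatures(degree=2, include_bias=False)
    terms = pd.DataFrame(poly.fit_transform(X),
                         columns=poly.get_feature_names_out(names))
    while True:
        model = sm.OLS(y, sm.add_constant(terms)).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha or len(pvals) == 1:
            return model            # all remaining terms are significant
        terms = terms.drop(columns=[worst])
```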

3.3 Multi-objective Optimization Using NSGA-II

The NSGA algorithm was first introduced in 1994 by Srinivas and Deb, using ranking and a non-dominated sorting process [38]. This algorithm had problems in selecting higher-quality individuals among all candidates, a lack of elitism, and high computational complexity. To fix these problems, a revised version, NSGA-II, was introduced by Deb et al. in 2002 [39]. NSGA-II is therefore a fast algorithm with high computational efficiency that preserves elitism, uses non-dominated sorting, and maintains population diversity without requiring a user-tuned sharing parameter [34, 40,41,42].

The implementation steps of NSGA-II can be summarized as follows:

1. Generate the initial population of size N and set Gen = 0.

2. Calculate all components of the objective function for all individuals of the initial population.

3. Perform non-dominated sorting and ranking together with the crowding-distance computation.

4. Select a portion of the population with higher ranks and apply crossover and mutation to generate the offspring.

5. Form a new population of size 2N by combining the parent and offspring populations.

6. Rank again by non-dominated sorting and crowding distance and select the N individuals with the highest ranks.

7. Check the stopping condition (e.g., reaching the maximum allowed number of generations, achieving the desired accuracy, reaching the maximum time limit, or no improvement over a pre-defined number of generations).

The solution consists of a set of points whose inputs are the independent variables and whose outputs are the components of the objective function (dependent variables). Each point on the optimal Pareto front represents an optimal answer to the problem. Clearly, moving along the front to improve one output changes the others, so which point to choose depends entirely on the decision-maker.
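A minimal sketch of this optimization step using the pymoo implementation of NSGA-II (the study itself uses MATLAB); the surrogate objective functions and the variable bounds below are placeholders, not the fitted regression models of Sect. 4 or the levels of Table 3.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

def amplification_model(x):
    """Placeholder surrogate for the AF regression model (illustrative only)."""
    return 20.0 + 1.5 * x[0] + 2.0 * x[2] + 4.0 * x[4]

def stress_model(x):
    """Placeholder surrogate for the ST regression model (illustrative only)."""
    return 500.0 - 7.0 * x[2] - 45.0 * x[4]

class GripperProblem(ElementwiseProblem):
    """Two-objective problem: minimize [-AF, ST] over the six design variables."""

    def __init__(self, lower, upper):
        super().__init__(n_var=6, n_obj=2, xl=lower, xu=upper)

    def _evaluate(self, x, out, *args, **kwargs):
        # AF is to be maximized, hence the sign flip
        out["F"] = [-amplification_model(x), stress_model(x)]

# Hypothetical bounds standing in for level 1 / level 5 of each variable:
lower = np.array([18.0, 15.0, 40.0, 28.0, 1.0, 12.0])
upper = np.array([22.0, 19.0, 48.0, 32.0, 2.0, 16.0])

res = minimize(GripperProblem(lower, upper),
               NSGA2(pop_size=50),        # a 50-point Pareto front, as in Sect. 4
               ("n_gen", 200),
               seed=1, verbose=False)
pareto_designs, pareto_front = res.X, res.F   # optimal inputs and [-AF, ST] values
```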

3.4 Optimum Point Selection: An AHP Approach

There are several ways to select one of the points on the optimal Pareto front, all of which depend on the opinion of the decision-maker. To ensure that the real and accurate opinion of the decision-maker is used in weighting each component of the objective function (dependent variables) when constructing the final output, a systematic method based on pairwise comparisons, the Analytic Hierarchy Process, is used as follows [43]:

Suppose the number of dependent variables (outputs) is n, denoted by \(f_{1}\) to \(f_{n}\); in other words:

$$y = \left[ { f_{1}\, f_{2} \ldots f_{n} } \right]$$
(12)

In this case, we form a pairwise comparison matrix as follows:

$$A = \left[ {\begin{array}{*{20}c} 1 & {a_{12} } & \cdots & {a_{1n} } \\ {a_{21} } & 1 & \cdots & {a_{2n} } \\ \vdots & \vdots & \ddots & \vdots \\ {a_{n1} } & {a_{n2} } & \cdots & 1 \\ \end{array} } \right] = \left[ {a_{ij} } \right]$$
(13)

where \(a_{ij}\) represents the relative importance of output \(f_{i}\) over output \(f_{j}\), determined on a predetermined scale (usually from 1 to 5). Note that \(a_{ii} = 1\) for \(i = 1\) to \(n\), and:

$$a_{ij} = \frac{1}{{a_{ji} }}$$
(14)

To calculate the weight of each output component we have:

$$W_{i} = \left\{ {a_{i1} \times a_{i2} \times \ldots \times a_{in} } \right\}^{\frac{1}{n}}$$
(15)

where \(W_{i}\) is the weight of \(f_{i}\). The normalized weights are:

$$w_{i}^{N} = \frac{{w_{i} }}{{\mathop \sum \nolimits_{i = 1}^{n} w_{i} }}$$
(16)

where, \(w_{i}^{N}\) is the normalized weight of i-th output (or \(f_{i}\)).

Now, for each row of the optimal Pareto front, the final combined output, denoted \(y_{c}\), is defined as:

$$y_{c} = \mathop \sum \limits_{i = 1}^{n} w_{i} f_{i}$$
(17)

The selected optimum is then simply the maximum of the calculated \(y_{c}\) values:

$$y_{{{\text{opt}}{.}}} = {\text{Max}}\left( {y_{c} } \right)$$
(18)

Instead of the multi-objective optimization mode, it is also possible to use a single-objective optimization with the following objective function from the beginning of the optimization process and find only the single optimum point:

$${\text{Objective }}\,{\text{function}}: y_{c} = \mathop \sum \limits_{i = 1}^{n} w_{i} f_{i}$$
(19)
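The weighting and selection described by Eqs. (13)–(18) can be sketched as follows; the example comparison matrix and Pareto points are illustrative, and each objective is first converted so that larger values are better and then range-normalized (cf. Eq. 25) before the weights are applied.

```python
import numpy as np

def ahp_weights(A):
    """Geometric-mean weights (Eqs. 15-16) from a pairwise comparison matrix A."""
    A = np.asarray(A, dtype=float)
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
    return w / w.sum()

def pick_optimum(pareto_F, A, senses):
    """Combined output (Eq. 17) over a Pareto front and the index of its best row.

    pareto_F : (n_points, n_objectives) objective values on the optimal Pareto front
    senses   : +1 if larger is better for that objective, -1 if smaller is better
    """
    w = ahp_weights(A)
    F = np.asarray(pareto_F, dtype=float) * np.asarray(senses, dtype=float)
    span = F.max(axis=0) - F.min(axis=0)      # range normalization, cf. Eq. (25)
    yc = (F / span) @ w
    return yc, int(np.argmax(yc))

# Example with a 3-to-1 importance of AF over ST (illustrative Pareto points):
A = [[1.0, 3.0],
     [1.0 / 3.0, 1.0]]
pareto_F = [[30.0, 95.0], [34.9, 104.7], [38.0, 120.0]]   # columns: [AF, ST]
yc, best = pick_optimum(pareto_F, A, senses=[+1, -1])
```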

3.5 The Proposed Algorithm

The proposed algorithm can be divided into two basic steps:

The modeling stage includes: geometric design of the microgripper; definition of the independent variables (inputs) and dependent variables (outputs); design and execution of the experiments based on the proper definition and selection of Taguchi orthogonal arrays; recording and analyzing the results, thereby ranking the variables and separating those with significant effects from those with uncertain or noise-like effects (which are eliminated when building the mathematical model); deriving a mathematical model using regression in two different ways; and performing an ANOVA analysis and deciding on the appropriate model based on it.

The optimization stage includes: defining the multi-component objective function; modifying the objective function based on the nature of the outputs (so that all components are to be minimized); running the NSGA-II algorithm and obtaining the optimal Pareto front; and selecting the final optimal point on the optimal Pareto front based on the weights obtained from the pairwise comparisons, i.e., performing AHP and defining the combined objective function. A summary of these steps is shown in Figs. 5 and 6.

3.6 Uncertainties and Limitations

Given that, for reasons of cost and time on the one hand and the complexity of the calculations and (sometimes) qualitative variables on the other, the Taguchi orthogonal arrays often have to be used instead of the full factorial method, the weaknesses of this method may affect the present research. In particular, the accuracy of the constructed mathematical models may be reduced by the omission or misinterpretation of interactions between variables, the small number of experiments, and the discrete levels.

In the FEM, the choice of element type and mesh size, the geometric parameters of the model, and the mechanical properties of the material are the primary sources of uncertainty in this study.

To mitigate these shortcomings, the results should always be validated within an acceptable range by performing a sufficient number of tests.

4 Results

According to the proposed algorithm, the first step is the geometric design of the microgripper. Based on the review of the articles mentioned in the introduction, the proposed design is as shown in Fig. 1.

The geometric design variables are X1 to X6, which are specified in Fig. 2. The minimum and maximum values, as well as the levels of each variable, are given in Table 3.

Table 3 Geometric parameters and their levels

The outputs (dependent variables), as mentioned in the introduction, are the amplification factor and the stress, hereinafter referred to as AF and ST. The ultimate goal is to reach the maximum AF and the minimum ST.

According to the independent variables (inputs), the dependent variables (outputs), and their natures, the Taguchi orthogonal array is an L25 array with 6 factors, each at 5 levels, as given in Table 3.

Next, the experiments (FEM analyses) are performed according to the 25 runs listed in Table 4, and the AF and ST results of each run are recorded; the results are given in Table 5.

Table 4 DOE based on Taguchi L25
Table 5 Results (performed experiments by FEM) based on Taguchi L25

At this stage, the Taguchi algorithm must be implemented separately for each of the dependent variables AF and ST. Owing to the nature of the dependent variables, the signal-to-noise (S/N) ratio function is taken as larger-is-better for AF and smaller-is-better for ST. The S/N values for AF are shown in Table 6 and plotted in Fig. 7, and the S/N values for ST are shown in Table 7 and plotted in Fig. 8.

Table 6 Response table for signal to noise ratios (AF), larger is better
Fig. 5

Proposed optimization algorithm (stage1: modeling)

Fig. 6

Proposed optimization algorithm (stage2: optimization)

Fig. 7

Main effects plot for S/N ratio (AF), larger is better

Fig. 8

Main effects plot for S/N ratio (St), smaller is better

According to Table 6, Table 7, and the δ values for the levels of the independent variables, at this stage no variable can be regarded as pure noise for either AF or ST and removed entirely, so this decision is left to the regression steps. The next step is to obtain a mathematical model of the problem using regression. As explained in the proposed algorithm, for each of the dependent variables AF and ST, we first use SLR and then SWLR, followed by FQMLR and finally BFQMLR. Since SWLR and BFQMLR are refinements of SLR and FQMLR, respectively, only the results for SWLR and BFQMLR are reported. The SWLR equation for AF is Eq. (20), the BFQMLR equation for AF is Eq. (21), and the SWLR and BFQMLR equations for ST are identical and given by Eq. (22) (Table 7).

$${\text{AF}} = - 97.4 + 1.6704 X1 - 0.3040 X2 + 2.0632 X3 + 3.916 X5$$
(20)
$$\begin{aligned} {\text{AF}} & = - 984.4 + 6.32 X1 - 4.05 X2 + 27.97 X3 + 12.69 X4 + 122.5 X5 \\ & \quad + 0.0980 X2 \times X2 - 0.2466 X3 \times X3 - 0.1923 X4 \times X4 - 19.68 X5 \times X5 \\ & \quad - 0.1477 X1 \times X4 - 0.0924 X2 \times X3 + 0.1289 X2 \times X4 - 1.243 X3 \times X5 \\ \end{aligned}$$
(21)
$${\text{St}} = 536.7 - 7.374 X3 - 48.44 X5$$
(22)
Table 7 Response table for signal to noise ratios (St), smaller is better

As previously stated in the description of the regression methods, noisy and low-effect terms with a P value > 0.05 in the ANOVA analysis were removed from the regression equations. The Pareto charts of standardized effects for both the AF and ST variables, and for each of the two modes SWLR and BFQMLR, are shown in Figs. 9, 10 and 11. As can be seen, for the ST variable, given the P value threshold of 0.05 (with α = 0.15), all nonlinear terms (and some linear terms) are eliminated, so the SWLR and BFQMLR equations coincide. Based on the ANOVA analysis, the values of \(R^{2}\), adjusted \(R^{2}\), and predicted \(R^{2}\) for each of the above modes are listed in Table 8; a sketch of how these metrics can be computed is given after Table 8.

Fig. 9

Pareto chart of standardized effects for SWLR of AF (only terms with P value < 0.05 are shown; the reference line is at 1.50)

Fig. 10

Pareto chart of standardized effects for BFQMLR of AF (only terms with P value < 0.05 are shown; the reference line is at 1.55)

Fig. 11

Pareto chart of standardized effects for both SWLR and BFQMLR of St (only terms with P value < 0.05 are shown; the reference line is at 1.50)

Table 8 Results of ANOVA for R2, R2 (Adj.) and R2 (Pred.)
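As a sketch of how the comparison metrics in Table 8 and the RMSE can be computed from a fitted model (assuming a statsmodels OLS fit such as the one in the regression sketch of Sect. 3.2; the predicted R² is obtained here from the PRESS statistic via the leverages, and the model name in the usage line is hypothetical):

```python
import numpy as np

def fit_metrics(model):
    """R^2, adjusted R^2, predicted R^2 (via PRESS) and RMSE for a fitted OLS model."""
    resid = model.resid
    y = model.model.endog
    sst = np.sum((y - y.mean()) ** 2)
    h = model.get_influence().hat_matrix_diag          # leverages
    press = np.sum((resid / (1.0 - h)) ** 2)           # leave-one-out residual sum of squares
    return {
        "R2": model.rsquared,
        "R2_adj": model.rsquared_adj,
        "R2_pred": 1.0 - press / sst,
        "RMSE": np.sqrt(np.mean(resid ** 2)),
    }

# Usage on any of the fitted models, e.g. a BFQMLR fit for AF (name hypothetical):
# print(fit_metrics(af_bfqmlr_model))
```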

Then, the validity of each mathematical model (Eqs. 20, 21 and 22) was checked against the values obtained from FEM, and the relative error and RMSE of each model were obtained, as reported in Tables 9, 10, and 11. The results show that for the AF variable the BFQMLR model has a higher \(R^{2}\), which indicates a better fit to the data. In addition, the value of \(R^{2}\)(adj), which is used to compare models with different numbers of terms, indicates the considerable superiority of the BFQMLR model. The superiority in predicting results (excluding the input points used for the regression) is assessed by \(R^{2}\)(pred), which again shows a significant advantage for the BFQMLR model. Moreover, the average relative error percentage of this model is lower than that of SWLR, and the comparison of RMSE values also confirms the superiority of the BFQMLR model. Therefore, the BFQMLR model is selected for the dependent variable AF. For the ST variable, since the SWLR and BFQMLR models are identical, no choice between the two is needed, but the above comparison shows that the AF model is more accurate than the ST model. In the next step, to perform the optimization, an objective function with two output components is defined according to the system outputs as follows:

$${\text{Obj}}\;{\text{.Function}} : y = \left[ {{\text{AF}}\,\,\,\,{\text{ ST}}} \right]$$
(23)
Table 9 Validity check for regression models of AF
Table 10 Validity check for regression models of St
Table 11 RMSE for AF and St

Given that larger AF is better and smaller ST is better, and that the optimization process minimizes the objective function, the AF component must be multiplied by −1, so the objective function is modified as follows:

$${\text{Obj}}\;{\text{.function}}: y = \left[ { - {\text{AF}}\,\,\,\,{\text{ ST}}} \right]$$
(24)

where the values of AF and ST are given by Eqs. 21 and 22, respectively. The lower and upper bounds of each independent variable (input) are set to its level 1 and level 5 values. The settings of the other NSGA-II parameters are given in Table 12. After running the optimization in MATLAB (version R2013a) with a 50-point optimal Pareto front, the optimal Pareto front and the optimal points on it are as given in Table 13 and Fig. 12.

Table 12 Type and value for NSGA-II parameters
Table 13 Points of optimal Pareto-front
Fig. 12

Optimal Pareto-front

Obviously, any of the points on the optimal Pareto front can be used as an optimal point, but if only one optimal point is required as the final answer, then, as stated in the optimization description (Sect. 3.4), the AHP method is used: a pairwise comparison table is created and an expert user is asked to compare the AF and ST parameters in terms of importance (for a specific application) and fill in the table, as in Table 14. In Table 14, for example, the importance of AF relative to ST is three to one. Note that, in order to combine the output components according to the obtained weights, both components must be normalized before being multiplied by the weights. For example, AF is normalized with respect to the optimal Pareto front as follows:

$${\text{AF}}_{N} = \frac{{{\text{AF}}}}{{{\text{AF}}_{\max } - {\text{AF}}_{\min } }}$$
(25)
Table 14 Pairwise compare of output components (AF vs. St) based on AHP

where \({\text{AF}}_{N}\) is the normalized value of AF, and \({\text{AF}}_{\max }\) and \({\text{AF}}_{\min }\) are the maximum and minimum values of AF on the optimal Pareto front, respectively. After performing these steps, the values of the combined optimization output are as given in Table 15. The values of the combined output \(Y_{c}\) are obtained from the following formula:

$$Y_{c} = W_{{N_{1} }} AF_{N} + W_{{N_{2} }} ST_{N}$$
(26)
Table 15 Combined output using normalized weights
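As a worked illustration, assuming the three-to-one comparison of AF over ST reported for Table 14, Eqs. (15) and (16) give

$$W_{{{\text{AF}}}} = \left( {1 \times 3} \right)^{1/2} \approx 1.732,\quad W_{{{\text{ST}}}} = \left( {\tfrac{1}{3} \times 1} \right)^{1/2} \approx 0.577,$$

$$w_{{{\text{AF}}}}^{N} = \frac{1.732}{{1.732 + 0.577}} \approx 0.75,\quad w_{{{\text{ST}}}}^{N} \approx 0.25,$$

and these normalized weights multiply the normalized outputs in Eq. (26).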

As can be seen, the maximum value of the combined output in Table 15 is 1.00359 and corresponds to the point ST = 104.729, AF = 34.933; the values of its independent variables on the optimal Pareto front are as follows:

$$X1 = 20.45,\quad X2 = 17.09,\quad X3 = 47.00,\quad X4 = 31.00,\quad X5 = 1.76,\quad X6 = 14.47$$

To show the flexibility of the above method, this point (p3), together with the optimal points obtained for AF-to-ST importance ratios of 4 to 1 (p4) and 8 to 1 (p8), is shown in Fig. 13.

Fig. 13

Combined output points on the optimal Pareto-front

5 Discussion

As shown in Table 6 and Fig. 7, the input variables X3 and X1 have a very strong effect on the AF output, while the effects of the other variables are weaker in this case. In addition, the effect of these two variables shows a definite upward trend, whereas the other four variables fluctuate. Regarding the ST output, it can be concluded from Table 7 and Fig. 8 that X3 and X5 are the most effective factors and that the other variables act essentially as noise, to the extent that X3 and X5 are expected to dominate the regression model for ST.

In the mathematical model of the AF variable obtained with the SWLR method, all terms with a P value > 0.05 were removed; as a result, only the four variables X1, X2, X3, and X5 remain, and the effects of X4 and X6 were identified as noise and removed. Likewise, in the AF model based on BFQMLR, the linear part contains only the same four variables X1, X2, X3, and X5. Of the 6 square terms (\(x_{i}^{2}\)), only \(x_{2}^{2} , x_{3}^{2} , x_{4}^{2} \,{\text{and}}\, x_{5}^{2}\) are present, the others having a noise effect, and of the 15 possible two-way interaction terms, only \(x_{3} x_{5} , x_{2} x_{4} , x_{2} x_{3} \,{\text{and}}\,x_{1} x_{4}\) are present; the other 11 terms have a noise effect and were removed from the model. In the ST mathematical model obtained with the BFQMLR method, only the two linear terms in X3 and X5 remain, while the other linear terms and all square and interaction terms have a noise effect and were removed; consequently, the SWLR and BFQMLR regression models for ST are identical. The ANOVA comparison between the two regression models for AF, as mentioned in the Results section, shows the clear superiority of the BFQMLR model. The validity check performed on the regression models and the corresponding error values in Tables 9 and 10 confirm the validity of all three regression models. Of course, this validity is fully verifiable only if the models hold for points other than those used to build them; in terms of analysis of variance, such validation is performed using the parameter \(R^{2} \left( {{\text{pred}}.} \right)\), which has an acceptable value for all three models according to Table 8.

The optimal Pareto front shown in Fig. 12 indicates that achieving higher AF magnitudes (which is desirable) has the unintended effect of increasing the stress. It is therefore necessary to compromise between the values of AF and ST, which is a kind of technical contradiction, in a reliable and systematic way. For this purpose, instead of the manual weighting that is common in such cases, the matrix of pairwise comparisons and the determination and normalization of the weight of each output using the Analytic Hierarchy Process (AHP) have been used. The results shown in Fig. 13 confirm this claim and its conformity with the logic of the problem.

Finally, Table 16 compares the proposed algorithm with the previous ones mentioned in the literature review. The comparison is made in four respects. In the fourth column, the data collection methods are compared; as can be seen, some studies used estimates based on analytical relationships or on arbitrary, finite sets of points, which do not provide reliable coverage of the input range, so their optimal point may be a local extremum. This is not the case when well-known design-of-experiments methods such as Taguchi orthogonal arrays or the response surface method (RSM) are used. In the fifth column, the optimization methods are compared; all are multi-objective, and each has its advantages and disadvantages, so none can be considered superior to the others in this respect. In the sixth column, the methods for selecting the optimal point from the set of answers are examined; as can be seen, except for the proposed algorithm of this research, none of the previous works provides a systematic selection, and they are limited to at most five optimal points whose choice was also unsystematic. This can be taken as a measure of the flexibility of the algorithm for different microgripper applications.

Table 16 Comparison of the proposed algorithm with previous ones

For a better understanding, if an algorithm has an advantage in a specific respect, it is highlighted in the corresponding column; as can be seen, the only algorithm that has an advantage in all four respects at the same time is the one proposed in this paper.

6 Conclusions

Microgrippers are the end effectors in micro-operations and micro-manipulations, making them sensitive instruments in terms of accuracy and efficiency. As an actuation mechanism for microgrippers, piezoelectric actuation has several benefits, but also a major drawback: its range of motion is very small. Therefore, amplification mechanisms are of great interest, especially in piezoelectric microgrippers. The amplification factor is an important parameter for ensuring a large range of motion of the gripper and thus a wider range of applications. Using four stages of amplification, a desirable range of motion was achieved for the microgripper. This paper proposed a novel optimization process for adjusting a symmetric, compliant, piezoelectric-actuated microgripper to specific applications. A trade-off was made between the displacement amplification factor and the maximum generated mechanical stress using multi-objective optimization. Based on the optimization, a set of designs was proposed instead of a single optimum design, and a selection method based on the Analytic Hierarchy Process (AHP) was suggested for choosing designs for specific applications. The performance of the proposed microgripper design was inspected using the finite element method, and Taguchi's method of designing experiments was used to identify the effective variables and obtain relations between the design parameters and the intended responses. Using NSGA-II instead of the traditional multi-objective genetic algorithm (MOGA) reduces the complexity of the calculations, speeds up the algorithm, simplifies elitism, and preserves the diversity of the GA population, leading to more accurate and faster results. Finally, using the AHP algorithm prevents possible errors in the final decision of the designer.