
Neural Computing and Applications, Volume 30, Issue 5, pp 1529–1547

Multi-criteria decision-making method based on single-valued neutrosophic linguistic Maclaurin symmetric mean operators

  • Jian-qiang Wang
  • Yu Yang
  • Lin Li
Original Article

Abstract

This paper investigates a wide range of generalized Maclaurin symmetric mean (MSM) aggregation operators, such as the generalized arithmetic MSM and the generalized geometric MSM, whose predominant characteristic is capturing the interrelationships among multi-input arguments. The single-valued neutrosophic linguistic set plays an essential role in decision making and can serve as an extension of either a linguistic term set or a single-valued neutrosophic set. This study centers on multi-criteria decision-making (MCDM) issues in which criteria are weighted differently and criteria values are expressed as single-valued neutrosophic linguistic numbers. On this foundation, we extend a series of MSM aggregation techniques under single-valued neutrosophic linguistic environments and propose procedures for solving MCDM problems. We also explore the influence of parameters on the aggregation results. Finally, we provide a practical example and conduct a comparison analysis between the proposed approach and other existing methods in order to verify its validity.

Keywords

Single-valued neutrosophic linguistic Maclaurin symmetric mean · Multi-criteria decision-making

1 Introduction

Intuitionistic fuzzy sets (IFSs) are described by a membership degree (also called truth) and a non-membership degree (also called falsity). The hesitation degree (also called indeterminacy) is equal to one minus the sum of the membership and non-membership degrees; it is provided by default and cannot be defined independently. In contrast, indeterminacy in neutrosophic sets (NSs) is defined independently and quantified explicitly, and it can describe a proposition’s value between truth and falsehood. Originally proposed by Smarandache [1], a NS consists of the degrees of truth, indeterminacy and falsity. To date, a large number of researchers have produced works associated with NS theory, such as [2, 3, 4, 5, 6]. Ye [7] pointed out that the concept of a NS is defined from a philosophical point of view and is therefore unable to address practical issues from a scientific or engineering perspective. To overcome this flaw, Ye [7] employed simplified neutrosophic sets (SNSs). Building on SNSs, Tian et al. [8] defined several simplified neutrosophic linguistic (SNL) distance measures and developed some improved MULTI-MOORA approaches under a SNL environment. Furthermore, Tian et al. [9] addressed a green product development problem under a SNL environment using a TOPSIS-based QUALIFLEX method. Two special instances of NSs have also been introduced: single-valued neutrosophic sets (SVNSs), introduced by Smarandache [10] in 1998, and interval neutrosophic sets (INSs) [11]. For convenience, the term SVNS is used in this article, and it is treated as equivalent to SNS. Building on SVNSs, Deli and Şubaş [12] provided a ratio ranking method for solving multi-attribute decision-making (MADM) problems, and Biswas et al. [13] extended the technique for order preference by similarity to the ideal solution. Representing each membership grade by a single value under a SVNS environment cannot take uncertainties into account; therefore, many approaches have been proposed in interval neutrosophic environments.
For example, Bausys and Zavadskas [14] extended a new VIKOR method, and Broumi et al. [15] proposed an extended TOPSIS method to solve MADM problems. However, there exist situations whose expression is beyond the scope of SNSs, SVNSs and INSs. For example, suppose several professionals evaluate the possibility of a certain statement. Some hold the view that the degrees of truth, falsity and indeterminacy of the statement are 0.8, 0.2 and 0.4, respectively, while others propose 0.7, 0.3 and 0.3; this situation cannot be described by a SNS, SVNS or INS. Instead, a multi-valued neutrosophic set (MVNS) [16] is applicable to this case. In a MVNS, the truth, indeterminacy and falsity degrees can each be represented as a discrete, finite set of numbers. The situation described above can be expressed as \(\{ (0.7,0.8),(0.2,0.3),(0.3,0.4)\}\). Furthermore, MVNSs have been applied to address multi-criteria decision-making (MCDM) issues. For example, Ji et al. [17] constructed a projection-based TODIM method utilizing MVNSs to express evaluation information. Peng et al. [18] defined some outranking relations for multi-valued neutrosophic numbers and extended the ELECTRE method. MVNSs are also called single-valued neutrosophic hesitant fuzzy sets (SVNHFSs) in [19].

In general, humans prefer to use linguistic terms rather than real or fuzzy numbers in evaluations because of the ambiguity of human thinking and the complexity of objective things. Considering this perspective, Zadeh [20] introduced the notion of linguistic variables, which are greatly helpful in analyzing qualitative information. Since then, further research has been conducted into linguistic approaches to decision-making problems. In one recent example, Yu et al. [21] proposed an interactive MCDM approach with intuitionistic linguistic numbers. From the hesitation perspective, Wang et al. [22] presented a likelihood-based TODIM approach to manage multi-hesitant fuzzy linguistic information. In addition, Moharrer et al. [23] proposed a novel two-phase methodology based on interval type-2 fuzzy sets to model human perceptions of linguistic terms. However, utilizing linguistic variables generally implies that the truth degree of a linguistic term is 1, while the degrees of falsity and indeterminacy cannot be expressed; this fails to accommodate real-life decision-making issues. In order to overcome this limitation, Ye [24] presented the single-valued neutrosophic linguistic set (SVNLS), which describes the truth, falsity and indeterminacy levels of a linguistic term. Sometimes, the degrees of truth, falsity and indeterminacy for a certain issue cannot be expressed exactly with real numbers but must instead be denoted by interval values. Therefore, Ye [25] further generalized this concept to the interval neutrosophic linguistic set (INLS). Ma et al. [26] built on this work by solving a treatment selection problem under an interval neutrosophic linguistic environment.

Effective aggregation is one of the most important research areas in the field of MCDM. Aggregation, which usually involves mathematical operators, is not just an average; rather, it represents a more general notion. In order to fuse massive individual data into a single value and acquire a more direct ranking of the options, researchers have proposed many efficient and practical aggregation operators, such as the power average (PA) operator, the ordered weighted aggregation (OWA) operator, the Bonferroni mean (BM) operator and the Heronian mean (HM) operator. Recently, Tian et al. [27] applied the traditional PA operator under a simplified neutrosophic uncertain linguistic environment. Liu et al. [28] proposed a new decision-making method based on the intuitionistic trapezoidal fuzzy prioritized OWA operator. Subsequently, Liang et al. [29] developed a method based on the single-valued trapezoidal neutrosophic normalized weighted BM operator. Meanwhile, Ji et al. [30] investigated a single-valued neutrosophic Frank normalized prioritized BM operator. Liu et al. [31] proposed some HM operators based on neutrosophic uncertain linguistic numbers. The Maclaurin symmetric mean (MSM) operator, first proposed by Maclaurin [32] and further developed by Detemple and Robertson [33], has the well-known advantage of capturing the interrelationships among multiple input arguments, and its value lies between those of the max and min operators.

Related studies of SVNSs and MSM operators have been quite fruitful. Regarding the measurement of SVNSs, Aydoğdu [34] defined a similarity measure on two SVNSs. Based on graph theory, Broumi et al. [35] presented the concept of bipolar single-valued neutrosophic graphs. Besides, there is much theoretical research on clustering, such as [36, 37]. Broumi and Smarandache [38] presented a new decision-making method based on neutrosophic trapezoid linguistic weighted arithmetic (and geometric) averaging aggregation operators. A variety of achievements from different research perspectives have been attained in the SVNS field, while operator research, which plays an essential role in decision making, remains insufficient with respect to SVNLSs. The literature on the MSM operator has also continued to grow. Ju et al. [39] introduced some novel weighted intuitionistic linguistic MSM operators. Qin and Liu [40] defined the dual MSM operator and extended it to accommodate uncertain linguistic environments. Qin et al. [41] further investigated the weighted hesitant fuzzy MSM operator in order to aggregate hesitant fuzzy information. In addition, some researchers have studied the MSM operator from theoretical aspects such as inequality [42, 43, 44] and convexity [45]. In summary, existing MSM operators can be utilized to aggregate information in the form of crisp numbers, intuitionistic fuzzy numbers or 2-tuple linguistic numbers when solving a decision-making problem, but they fail to accommodate situations in which the input arguments are single-valued neutrosophic linguistic numbers (SVNLNs). Motivated by this gap in the literature, we aim to study the MSM operator in the context of single-valued neutrosophic linguistic information.

The primary contributions of this paper can be summarized as follows.
  1. The MSM operator is a classical mean-type aggregation operator with a distinctive capability to capture the interrelationships among multi-input arguments. In this study, we not only extend the MSM operator to a generalized form, including arithmetic and geometric forms, but also explore its crucial qualities.
  2. Based on the related research achievements of predecessors, we extend the MSM operator under single-valued neutrosophic linguistic environments. We also propose a series of single-valued neutrosophic linguistic Maclaurin symmetric mean (SVNLMSM) aggregation operators.
  3. We demonstrate the effectiveness of a MCDM approach based on the weighted single-valued neutrosophic linguistic Maclaurin symmetric mean (WSVNLMSM) operator, the weighted single-valued neutrosophic linguistic generalized Maclaurin symmetric mean (WSVNLGMSM) operator and the weighted single-valued neutrosophic linguistic geometric Maclaurin symmetric mean (WSVNLGeoMSM) operator using illustrations and a comparative analysis. In addition, we use linguistic scale functions to calculate qualitative data in order to compensate for differences in semantics.


The remainder of this article is organized as follows. In Sect. 2, we introduce linguistic term sets and linguistic scale functions, as well as some basic definitions and operations for SVNLSs and SVNLNs. Furthermore, we briefly describe the MSM operator and its properties. In Sect. 3, we develop the SVNLMSM, SVNLGMSM and SVNLGeoMSM operators. Moreover, we investigate some desirable properties of these expanded operators and discuss some special cases with respect to different parameter values. In Sect. 4, we present some approaches based on the WSVNLMSM, WSVNLGMSM and WSVNLGeoMSM operators to solve multi-criteria decision-making problems with SVNL information. In Sect. 5, we provide a practical example to demonstrate the MCDM process and the validity of the proposed methods. Finally, we analyze the influence on the ranking outcomes when the parameters in the proposed aggregation operators are assigned a variety of values.

2 Preliminaries

2.1 Linguistic term sets

Let \(S = \left\{ {s_{\theta } |\theta = 0, \ldots ,2t} \right\}\) be a finite and totally ordered discrete linguistic term set, in which t is a positive integer. The cardinality of the set is an odd value, and each label \(s_{\theta }\) represents a possible value for a linguistic variable. Clearly, the linguistic term in the middle position suggests an evaluation of “indifference,” and the remaining values in the set are distributed symmetrically around it. The semantics of the linguistic variable are related to its subscript, as shown below:
$$\begin{aligned} S & = \{ s_{0} = extremely\;poor, \,s_{1} = very\;poor, \,s_{2} = poor,\, s_{3} = medium, \\ & \quad s_{4} = good, \,s_{5} = very\;good, \,s_{6} = extremely\;good\} . \\ \end{aligned}$$

Definition 1

[46] Let \(s_{\alpha }\) and \(s_{\beta }\) be any two linguistic terms in S. The following characteristics are required:

  1. Max operator: \(Max(s_{\alpha } ,s_{\beta } ) = s_{\alpha }\) if \(s_{\alpha } \ge s_{\beta }\),
  2. Min operator: \(Min(s_{\alpha } ,s_{\beta } ) = s_{\alpha }\) if \(s_{\alpha } \le s_{\beta }\),
  3. Negation operator: \(Neg(s_{\alpha } ) = s_{2t - \alpha }\),
  4. The set is ordered: \(s_{\alpha } > s_{\beta }\) if and only if \(\alpha > \beta\).

     
The linguistic term set becomes a powerful and effective tool in MCDM problems because decision makers (DMs) can express their views more accurately by integrating linguistic information into the assessment. However, the linguistic term set is discrete, which can easily lead to an incomplete aggregated value. Therefore, Xu [47] expanded the discrete linguistic term set to a continuous one, \(\bar{S} = \{ s_{\theta } |0 \le \theta \le L\}\), where \(s_{\alpha } > s_{\beta }\) if and only if \(\alpha > \beta\), and L (L ≥ 2t + 1) is a sufficiently large positive integer that preserves the entirety of the information provided. The elements of \(\bar{S}\) satisfy the characteristics in Definition 1. If \(s_{\theta } \in S\), we call \(s_{\theta }\) an original linguistic term, which is generally used for evaluation; otherwise, we call \(s_{\theta }\) a virtual linguistic term, which is generally used for calculation and ranking.

Definition 2

[47] Let \(s_{\alpha }\) and \(s_{\beta }\) be any two linguistic terms in \(\bar{S}\). The related operations can be defined as follows:

  1. \(s_{\alpha } \oplus s_{\beta } = s_{\alpha + \beta } ,\)
  2. \(\lambda s_{\alpha } = s_{\lambda \alpha } ,\)
  3. \(s_{\alpha } \otimes s_{\beta } = s_{\alpha \beta } ,\)
  4. \((s_{\alpha } )^{\lambda } = s_{{\alpha^{\lambda } }} .\)
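As an illustrative sketch (the helper names and the subscript encoding are ours, not the paper's), the operations of Definitions 1 and 2 can be coded directly, representing each term \(s_{a}\) by its subscript a:

```python
# Sketch of the subscript-based operations in Definitions 1 and 2.
# Results may be virtual terms, i.e. subscripts beyond 2t.
T = 3  # t for the seven-term set S = {s_0, ..., s_6}

def neg(a):               # Neg(s_a) = s_{2t - a}
    return 2 * T - a

def maximum(a, b):        # Max(s_a, s_b)
    return max(a, b)

def add(a, b):            # s_a ⊕ s_b = s_{a+b}
    return a + b

def scale(lam, a):        # λ s_a = s_{λa}
    return lam * a

def mul(a, b):            # s_a ⊗ s_b = s_{ab}
    return a * b

def power(a, lam):        # (s_a)^λ = s_{a^λ}
    return a ** lam
```

For instance, add(4, 5) yields subscript 9, i.e. the virtual term \(s_{9}\), which lies in the continuous set \(\bar{S}\) rather than in S.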

     

2.2 Linguistic scale functions

Although operational rules directly based on the subscript of a linguistic term are quite widely utilized in MCDM methods, they represent a simple transformation from linguistic terms to real numbers and cannot properly maintain the original vagueness of the evaluation. Therefore, it is essential to develop a linguistic scale function, which helps to use quantitative data and express semantics more flexibly. In particular, linguistic scale functions not only provide more deterministic results, but also assign different semantic values to linguistic terms in different situations [48, 49, 50].

Definition 3

[49] Let \(s_{i} \in S\) be a linguistic term and \(\theta_{i} \in [0,1]\) a numeric value. A linguistic scale function f conducts the mapping from \(s_{i}\) to \(\theta_{i}\) (i = 0, 1, 2, …, 2t), and it can be represented as follows:
$$f : s_{i} \to \theta_{i} \quad (i = 0,1,2, \ldots ,2t),$$
(1)
where \(0 \le \theta_{0} < \theta_{1} < \cdots < \theta_{2t} \le 1\).
The following three linguistic scale functions are provided for use in the subsequent analysis:
  1.
    Based on the subscript function (\(sub(s_{x} ) = x\)) of linguistic terms, the linguistic scale function is defined as follows:
    $$f_{1} \left( { s_{x} } \right) = \theta_{x} = \frac{x}{2t}(x = 0,\,1,\,2, \ldots ,2t).$$
    (2)
     
Like the subscript function, this function divides the evaluation scale of the provided linguistic information evenly.
  2.
    Based on the exponential scale, the linguistic scale function is defined as follows:
    $$f_{2} \left( { s_{y} } \right) = \theta_{y} = \left\{ {\begin{array}{*{20}l} {\frac{{q^{t} - q^{t - y} }}{{2q^{t} - 2}}} \hfill & {y = \left( {0,1,2, \ldots ,t} \right)} \hfill \\ {\frac{{q^{t} + q^{y - t} - 2}}{{2q^{t} - 2}}} \hfill & {\left( {y = t + 1,t + 2, \ldots ,2t} \right)} \hfill \\ \end{array} } \right..$$
    (3)
     
In practical terms, this function is a composite assessment expression, and it depicts a situation in which DMs’ mental stimulation is affected by both good and bad criteria. The value of q can be interpreted subjectively. Let A and B be two indicators, and suppose that A is more important than B with an importance ratio m. Then \(q^{k} = m\), where k denotes the scale level. At present, the majority of researchers take the upper limit of the importance ratio to be 9.
  3.
    Based on the prospect theory, the linguistic scale function is defined as follows:
    $$f_{3} \left( { s_{z} } \right) = \theta_{z} = \left\{ {\begin{array}{*{20}l} {\frac{{t^{\alpha } - \left( {t - z} \right)^{\alpha } }}{{2t^{\alpha } }}} \hfill & {\left( {z = 0,1,2, \ldots ,t} \right)} \hfill \\ {\frac{{t^{\beta } + \left( {z - t} \right)^{\beta } }}{{2t^{\beta } }}} \hfill & {\left( {z = t + 1,t + 2, \ldots ,2t} \right)} \hfill \\ \end{array} } \right.,$$
    (4)
    where \(\alpha ,\beta \in [0,1]\). When α = β = 1, this function reduces to \(f_{1} (s_{x} ) = \theta_{x} = \frac{x}{2t}\).
     

The value function in the prospect theory illustrates the phenomenon in which the DMs’ sensitivity regarding the gap between “good” and “slightly good” is greater than that of the gap between “good” and “very good.” In other words, this function reflects the value formed by the subjective feelings of DMs.

In order to simplify the calculation, it is necessary to expand the above functions to \(f^{*} : { }\bar{S} \to R^{ + } (R^{ + } = \{ r|r \ge 0, r \in R\} )\), which satisfies \(f^{*} (s_{i} ) = \theta_{i}\) and is a strictly monotonically increasing and continuous function. Due to this monotonicity, the mapping \(f^{*} :\bar{S} \to R^{ + }\) is one-to-one. Therefore, the inverse function of \(f^{*}\) exists, denoted as \(f^{* - 1}\).
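A minimal sketch of the three linguistic scale functions of Eqs. (2)–(4); the parameter values t = 3, q = 1.4 and α = β = 0.88 are illustrative choices, not prescribed by the paper:

```python
# Sketch of the linguistic scale functions f1, f2, f3 (Eqs. (2)-(4));
# parameter defaults are illustrative assumptions.

def f1(x, t=3):
    # Eq. (2): evenly spaced semantics, f1(s_x) = x / 2t.
    return x / (2 * t)

def f2(y, t=3, q=1.4):
    # Eq. (3): exponential scale around the middle term s_t.
    if y <= t:
        return (q**t - q**(t - y)) / (2 * q**t - 2)
    return (q**t + q**(y - t) - 2) / (2 * q**t - 2)

def f3(z, t=3, alpha=0.88, beta=0.88):
    # Eq. (4): prospect-theory value curve; reduces to f1 when
    # alpha = beta = 1.
    if z <= t:
        return (t**alpha - (t - z)**alpha) / (2 * t**alpha)
    return (t**beta + (z - t)**beta) / (2 * t**beta)

def f1_inv(theta, t=3):
    # Inverse of f1, used as f*^{-1} in later calculations on S̄.
    return 2 * t * theta
```

All three functions map \(s_{0} \mapsto 0\), \(s_{t} \mapsto 0.5\) and \(s_{2t} \mapsto 1\) and are strictly increasing, as Definition 3 requires.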

2.3 Single-valued neutrosophic sets

In his research, Smarandache provided various real-life examples of possible applications of NSs. However, it is difficult to apply NSs to practical situations. Therefore, Ye [7] reduced NSs with nonstandard intervals into a kind of SNSs with standard intervals, while preserving the operations of the NSs.

Definition 4

[51] Let X be a space of points (objects), with a generic element in X denoted by x. A SVNS A in X is characterized by a truth-membership function \(T_{A}\), an indeterminacy-membership function \(I_{A}\), and a falsity-membership function \(F_{A}\). The functions \(T_{A}\), \(I_{A}\) and \(F_{A}\) take values in the real standard interval [0, 1] (that is, \(T_{A} :X \to [0,1]\), \(I_{A} :X \to [0,1]\) and \(F_{A} :X \to [0,1]\)), and the sum of \(T_{A} (x)\), \(I_{A} (x)\) and \(F_{A} (x)\) satisfies the condition \(0 \le T_{A} (x) + I_{A} (x) + F_{A} (x) \le 3\) for any \(x \in X\). Then, a SVNS A is denoted as follows:
$$A = \left\{ {x,\left\langle {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right\rangle |x \in X} \right\},$$
(5)
which is called a single-valued neutrosophic set (SVNS).

For convenience, the ordered triple component \(\left\langle {T_{A} (x),I_{A} (x),F_{A} (x)} \right\rangle\), which is the core of a SVNS, can be called a single-valued neutrosophic number (SVNN). Furthermore, each SVNN can be described as \(a = (T_{a} ,I_{a} ,F_{a} )\), where \(T_{a} \in [0,1]\), \(I_{a} \in [0,1]\), \(F_{a} \in [0,1]\), and \(0 \le T_{a} + I_{a} + F_{a} \le 3\).

2.4 Single-valued neutrosophic linguistic sets

Definition 5

[24, 48] Let X be a space of points (objects), with a generic element in X denoted by x, and let \(S = \left\{ {s_{\theta } |\theta = 0,1, \ldots ,2t} \right\}\) be a finite and totally ordered discrete linguistic term set, where t is an arbitrary natural number. Then,
$$A = \left\{ {x,\left\langle {s_{\theta (x)} ,\left( {T_{A} \left( x \right),I_{A} \left( x \right), F_{A} \left( x \right)} \right)} \right\rangle |x \in X} \right\}$$
(6)
is called a SVNLS, where \(s_{\theta (x)} \in \bar{S}\), \(T_{A} :X \to [0,1]\), \(I_{A} :X \to [0,1]\), and \(F_{A} :X \to [0,1]\) with the condition \(0 \le T_{A} (x) + I_{A} (x) + F_{A} (x) \le 3\). The numbers \(T_{A} (x)\), \(I_{A} (x)\) and \(F_{A} (x)\) represent the truth-membership degree, the indeterminacy-membership degree and the falsity-membership degree, respectively, of the element x in X to the linguistic variable \(s_{\theta (x)}\).

Definition 6

[24] Let \(A = \left\{ {x,\left\langle {s_{\theta (x)} ,\left( {T_{A} \left( x \right),I_{A} \left( x \right),F_{A} \left( x \right)} \right)} \right\rangle |x \in X} \right\}\) be a SVNLS. Then, the ordered quadruple component \(\left\langle {s_{\theta (x)} ,\left( {T_{A} \left( x \right), I_{A} \left( x \right), F_{A} \left( x \right)} \right)} \right\rangle\) is called a single-valued neutrosophic linguistic number (SVNLN). Each SVNLN can be expressed as \(a = \left\langle {s_{a} ,(T_{a} ,I_{a} ,F_{a} )} \right\rangle\), where \(s_{a} \in \bar{S}\), T a  ∊ [0, 1], I a  ∊ [0, 1], F a  ∊ [0, 1], and 0 ≤ T a  + I a  + F a  ≤ 3.

In effect, A can be viewed as a collection of SVNLNs.

Definition 7

[24] Let \(a = \left\langle {s_{a} ,(T_{a} ,I_{a} ,F_{a} )} \right\rangle\) and \(b = \left\langle {s_{b} ,(T_{b} ,I_{b} ,F_{b} )} \right\rangle\) be any two SVNLNs, let \(f^{*}\) be a linguistic scale function, and let λ ≥ 0. Then, the operational rules of SVNLNs are defined as follows:
  1.

    \(a \oplus b = \left\langle {\mathop f\nolimits^{{* - 1}} \left( {\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ) + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )} \right),\left( {\frac{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ) \cdot \mathop T\nolimits_{a} + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} ) \cdot \mathop T\nolimits_{b} }}{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ) + \mathop {f}\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )}},\frac{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} )\mathop I\nolimits_{a} + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )\mathop I\nolimits_{b} }}{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ) + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )}},\frac{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} )\mathop F\nolimits_{a} + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )\mathop F\nolimits_{b} }}{{\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ) + \mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{b} }} )}}} \right)} \right\rangle .\)

     
  2.

    \(a \otimes b = \left\langle {f^{* - 1} \left( {f^{*} \left( {s_{{\theta_{a} }} } \right)f^{*} \left( {s_{{\theta_{b} }} } \right)} \right),\left( {T_{a} T_{b} ,I_{a} + I_{b} - I_{a} I_{b} ,F_{a} + F_{b} - F_{a} F_{b} } \right)} \right\rangle .\)

     
  3.

    \(\lambda a = \left\langle {f^{* - 1} \left( {\lambda f^{*} \left( {s_{{\theta_{a} }} } \right)} \right),\left( {T_{a} ,I_{a} ,F_{a} } \right)} \right\rangle .\)

     
  4.

    \(a^{\lambda } = \left\langle {f^{* - 1} \left( {\left( {f^{*} \left( {s_{{\theta_{a} }} } \right)} \right)^{\lambda } } \right),\left( {T_{a}^{\lambda } ,1 - \left( {1 - I_{a} } \right)^{\lambda } ,1 - \left( {1 - F_{a} } \right)^{\lambda } } \right)} \right\rangle .\)

     
  5.

    \({\text{neg}}\left( a \right) = \left\langle {f^{* - 1} \left( {f^{*} \left( {s_{2t} } \right) - f^{*} \left( {s_{{\theta_{a} }} } \right)} \right),\left( {F_{a} ,1 - I_{a} ,T_{a} } \right)} \right\rangle .\)

     

Obviously, the above operational results are still SVNLNs.
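The operational rules of Definition 7 can be sketched in Python, taking \(f_{1}\) (Eq. (2)) with t = 3 as the linguistic scale function \(f^{*}\); encoding an SVNLN \(\left\langle {s_{\theta } ,(T,I,F)} \right\rangle\) as the tuple (θ, T, I, F) is our own illustrative choice:

```python
# Sketch of the SVNLN operational rules in Definition 7, with f1 as f*.
T_PARAM = 3  # t; the term set runs s_0 .. s_6

def f(theta):
    return theta / (2 * T_PARAM)   # f*(s_theta)

def f_inv(v):
    return 2 * T_PARAM * v         # f*^{-1}

def svnln_add(a, b):
    # Rule 1: a ⊕ b -- linguistic parts add; T, I, F are averaged
    # with weights f*(s_theta_a) and f*(s_theta_b).
    fa, fb = f(a[0]), f(b[0])
    s = fa + fb
    return (f_inv(s),
            (fa * a[1] + fb * b[1]) / s,
            (fa * a[2] + fb * b[2]) / s,
            (fa * a[3] + fb * b[3]) / s)

def svnln_mul(a, b):
    # Rule 2: a ⊗ b.
    return (f_inv(f(a[0]) * f(b[0])),
            a[1] * b[1],
            a[2] + b[2] - a[2] * b[2],
            a[3] + b[3] - a[3] * b[3])

def svnln_scale(lam, a):
    # Rule 3: λa.
    return (f_inv(lam * f(a[0])), a[1], a[2], a[3])

def svnln_pow(a, lam):
    # Rule 4: a^λ.
    return (f_inv(f(a[0]) ** lam),
            a[1] ** lam,
            1 - (1 - a[2]) ** lam,
            1 - (1 - a[3]) ** lam)

def svnln_neg(a):
    # Rule 5: neg(a).
    return (f_inv(f(2 * T_PARAM) - f(a[0])), a[3], 1 - a[2], a[1])
```

For instance, with \(a = \left\langle {s_{4} ,(0.8,0.2,0.1)} \right\rangle\) and \(b = \left\langle {s_{2} ,(0.6,0.3,0.2)} \right\rangle\), the linguistic part of \(a \oplus b\) comes out as \(s_{6}\), in agreement with rule 1.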

Definition 8

[24, 48] For any SVNLN \(a = \left\langle {s_{a} ,(T_{a} ,I_{a} ,F_{a} )} \right\rangle\), the score function, accuracy function and certainty function for a can be defined, respectively, as follows:

  1. \(S\left( a \right) = f^{*} \left( {s_{{\theta_{a} }} } \right)\left( {T_{a} + 1 - I_{a} + 1 - F_{a} } \right),\)
  2. \(A\left( a \right) = f^{*} \left( {s_{{\theta_{a} }} } \right)\left( {T_{a} - F_{a} } \right),\) and
  3. \(C\left( a \right) = f^{*} \left( {s_{{\theta_{a} }} } \right)T_{a} ,\)

where \(f^{*}\) is the linguistic scale function.

Definition 9

[48] Let \(a = \left\langle {s_{a} ,(T_{a} ,I_{a} ,F_{a} )} \right\rangle\) and \(b = \left\langle {s_{b} ,(T_{b} ,I_{b} ,F_{b} )} \right\rangle\) be any two SVNLNs, and let \(f^{*}\) be a linguistic scale function. The comparison method can be defined as follows:

  1. If S(a) > S(b), then a > b,
  2. If S(a) = S(b) and A(a) > A(b), then a > b,
  3. If S(a) = S(b), A(a) = A(b), and C(a) > C(b), then a > b,
  4. If S(a) = S(b), A(a) = A(b), and C(a) = C(b), then a = b.
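Definitions 8 and 9 can be sketched together, again with \(f_{1}\) (t = 3) standing in for \(f^{*}\) and an SVNLN encoded as the illustrative tuple (θ, T, I, F):

```python
# Sketch of the score/accuracy/certainty functions (Definition 8)
# and the lexicographic comparison method (Definition 9).
def f(theta, t=3):
    return theta / (2 * t)          # f*(s_theta), here f1

def score(a):      # S(a) = f*(s_theta)(T + 1 - I + 1 - F)
    return f(a[0]) * (a[1] + 1 - a[2] + 1 - a[3])

def accuracy(a):   # A(a) = f*(s_theta)(T - F)
    return f(a[0]) * (a[1] - a[3])

def certainty(a):  # C(a) = f*(s_theta) * T
    return f(a[0]) * a[1]

def compare(a, b):
    # Returns 1 if a > b, -1 if a < b, 0 if a = b, checking the
    # three functions in the order prescribed by Definition 9.
    for key in (score, accuracy, certainty):
        if key(a) > key(b):
            return 1
        if key(a) < key(b):
            return -1
    return 0
```

For example, \(a = \left\langle {s_{4} ,(0.8,0.2,0.1)} \right\rangle\) beats \(b = \left\langle {s_{2} ,(0.6,0.3,0.2)} \right\rangle\) already at the score stage.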

     

2.5 Maclaurin symmetric mean operator

Definition 10

[32] Let \(x_{i} (i = 1,2, \ldots ,n)\) be a collection of nonnegative real numbers. A MSM operator of dimension n is a mapping \(MSM^{(m)} :(R^{ + } )^{n} \to R^{ + }\), and it can be defined as follows:
$$MSM^{(m)} (x_{1} , \ldots ,x_{n} ) = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} ,$$
(7)
where \((i_{1} ,i_{2} , \ldots ,i_{m} )\) traverses all the m-tuple combinations of (1, 2, …, n), and \(C_{n}^{m} = \frac{n!}{m!(n - m)!}\) is the binomial coefficient. In the following analysis, assume that \(i_{1} < i_{2} < \cdots < i_{m}\). In addition, \(x_{{i_{j} }}\) refers to the \(i_{j}\)th element in a particular arrangement.
It is clear that the MSM (m) operator has the following properties:
  1. Idempotency. If \(x \ge 0\) and \(x_{i} = x\) for all i, then \(MSM^{(m)} (x,x, \ldots ,x) = x\).
  2. Monotonicity. If \(x_{i} \le y_{i}\) for all i, then \(MSM^{(m)} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le MSM^{(m)} (y_{1} ,y_{2} , \ldots ,y_{n} )\), where \(x_{i}\) and \(y_{i}\) are any nonnegative real numbers.
  3. Boundedness. \(MIN\{ x_{1} ,x_{2} , \ldots ,x_{n} \} \le MSM^{(m)} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le MAX\{ x_{1} ,x_{2} , \ldots ,x_{n} \}\).

     
In effect, if m is assigned different values, then the MSM (m) operator degenerates into some special forms, which are described as follows:
  1.
    When m = 1, an average value of the set can be derived by the MSM (m) operator, as follows:
    $$MSM^{(1)} (x_{1} , \ldots ,x_{n} ) = \mathop {\left( {\frac{{\sum\nolimits_{{1 \le \, i_{1} \le n}} { \, x_{{ \, i_{1} }} } }}{{ \, C_{n}^{1} }}} \right)}\nolimits^{1} = \frac{{\sum\nolimits_{i = 1}^{n} { \, x_{i} } }}{n}.$$
    (8)
     
  2.
    When m = 2, the MSM (m) operator reduces to the Bonferroni mean (BM) operator with both parameters equal to one, as follows:
    $$\begin{aligned} MSM^{(2)} (x_{1} , \ldots ,x_{n} ) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < i_{2} \le n}} {\prod\nolimits_{j = 1}^{2} {x_{{i_{j} }} } } }}{{C_{n}^{2} }}} \right)^{{\frac{1}{2}}} \\ & = \left( {\frac{{2\sum\nolimits_{{1 \le i_{1} < i_{2} \le n}} { \, x_{{ \, i_{1} }} \, x_{{ \, i_{2} }} } }}{n(n - 1)}} \right)^{{\frac{1}{2}}} \\ & = \left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} { \, x_{i} \, x_{j} } }}{n(n - 1)}} \right)^{{\frac{1}{2}}} \\ & = \,\mathop {BM}\nolimits^{1,1} (\mathop x\nolimits_{1} , \ldots ,\mathop x\nolimits_{n} ). \\ \end{aligned}$$
    (9)
     
  3.
    When m = 3, the MSM (m) operator reduces to the generalized Bonferroni mean (GBM) operator with all parameters equal to one, as follows:
    $$\begin{aligned} MSM^{(3)} (x_{1} , \ldots ,x_{n} ) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < i_{2} < i_{3} \le n}} {\prod\nolimits_{j = 1}^{3} {x_{{i_{j} }} } } }}{{C_{n}^{3} }}} \right)^{{\frac{1}{3}}} \\ & = \left( {\frac{{6\sum\nolimits_{{1 \le i_{1} < i_{2} < i_{3} \le n}} {x_{{i_{1} }} x_{{i_{2} }} x_{{i_{3} }} } }}{{n\left( {n - 1} \right)\left( {n - 2} \right)}}} \right)^{{\frac{1}{3}}} \\ & = \left( {\frac{1}{{n\left( {n - 1} \right)\left( {n - 2} \right)}}\sum\limits_{i,j,k = 1,i \ne j \ne k}^{n} {\mathop x\nolimits_{i}^{1} } \mathop x\nolimits_{j}^{1} \mathop x\nolimits_{k}^{1} } \right)^{{\frac{1}{3}}} \\ & = \mathop {GBM}\nolimits^{1,1,1} (\mathop x\nolimits_{1} , \ldots ,\mathop x\nolimits_{n} ). \\ \end{aligned}$$
    (10)
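Eq. (7) maps directly onto itertools.combinations, which enumerates exactly the increasing index tuples \(i_{1} < \cdots < i_{m}\); the function name msm is our own. A minimal sketch:

```python
# Sketch of the MSM^(m) operator of Eq. (7).
from itertools import combinations
from math import comb, prod

def msm(xs, m):
    # Sum prod_j x_{i_j} over all increasing m-tuples of indices,
    # divide by C(n, m), then take the m-th root.
    n = len(xs)
    total = sum(prod(xs[i] for i in idx)
                for idx in combinations(range(n), m))
    return (total / comb(n, m)) ** (1 / m)
```

For m = 1 this is the arithmetic mean of Eq. (8), and for m = 2 it coincides with \(BM^{1,1}\) as in Eq. (9).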
     

3 Some single-valued neutrosophic linguistic Maclaurin symmetric mean operators

This section presents both the generalized arithmetic MSM operator and the generalized geometric MSM operator. Generally, MSM operators are used in situations in which the input arguments are crisp numbers. Therefore, we extend MSM operators to situations in which the input arguments are SVNLNs. Finally, we develop weighted aggregation operators that consider the different importance of criteria.

3.1 Generalized MSM operator

Definition 11

Let \(x_{i} (i = 1,2, \ldots ,n)\) be a collection of nonnegative real numbers and \(p_{1} ,p_{2} , \ldots ,p_{m} \ge 0\). A generalized MSM operator of dimension n is a mapping \(GMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}} :\left( {R^{ + } } \right)^{n} \to R^{ + }\), and it can be defined as follows:
$$\begin{aligned} & GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} , \ldots ,x_{n} ) \\ & \quad = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} { \, x_{{ \, i_{j} }}^{{ \, p_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{(p_{1} + p_{2} + \cdots + p_{m} )}}}} , \\ \end{aligned}$$
(11)
where \((i_{1} ,i_{2} , \ldots ,i_{m} )\) traverses all the m-tuple combinations of (1, 2, …, n), and \(C_{n}^{m} = \frac{n!}{m!(n - m)!}\) is the binomial coefficient.

The \(GMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator has the following desirable properties:

Property 1

  1. Idempotency. If \(x \ge 0\) and \(x_{i} = x\) for all i, then \(GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} \left( {x,x, \ldots ,x} \right) = x.\)
  2. Monotonicity. If \(x_{i} \le y_{i}\) for all i, where \(x_{i}\) and \(y_{i}\) are any nonnegative real numbers, then \(GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} \left( {x_{1} ,x_{2} , \ldots ,x_{n} } \right) \le GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} \left( {y_{1} ,y_{2} , \ldots ,y_{n} } \right).\)
  3. Boundedness. \(MIN\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} \le GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le MAX\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}.\)

     

Proof

  1. Since each \(x_{i} = x\), we have

    $$\begin{aligned} GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x, \ldots ,x) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} { \, x^{{ \, p_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{\mathop {(p}\nolimits_{1} + \mathop p\nolimits_{2} + \cdots + \mathop p\nolimits_{m} )}}}} \\ & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} { \, x^{{p_{1} + p_{2} + \cdots + p_{m} }} } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{\mathop {(p}\nolimits_{1} + \mathop p\nolimits_{2} + \cdots + \mathop p\nolimits_{m} )}}}} \\ & = \left( {\frac{{C_{n}^{m} (\mathop x\nolimits^{{\mathop p\nolimits_{1} + \mathop p\nolimits_{2} + \cdots + \mathop p\nolimits_{m} }} )}}{{C_{n}^{m} }}} \right)^{{\frac{1}{{\mathop {(p}\nolimits_{1} + \mathop p\nolimits_{2} + \cdots + \mathop p\nolimits_{m} )}}}} = x. \\ \end{aligned}$$
     
  2.
    Assume that an m-tuple \((i_{1} ,i_{2} , \ldots ,i_{m} )\) is given randomly and the parameters \(p_{1} ,p_{2} , \ldots ,p_{m}\) are assigned nonnegative real numbers. Then, \(\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }}^{{p_{j} }} } \le \prod\nolimits_{j = 1}^{m} {y_{{i_{j} }}^{{p_{j} }} }\) when \(0 \le x_{i} \le y_{i}\) for each i. Moreover, \(\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }}^{{p_{j} }} } }\) is less than or equal to \(\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} {y_{{i_{j} }}^{{p_{j} }} } }\), since \(\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }}^{{p_{j} }} } \le \prod\nolimits_{j = 1}^{m} {y_{{i_{j} }}^{{p_{j} }} }\) holds for each possible m-tuple arrangement. Therefore, we have
    $$\left( {\frac{{\sum\limits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\limits_{j = 1}^{m} {\mathop x\nolimits_{{\mathop i\nolimits_{j} }}^{{\mathop p\nolimits_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{(p_{1} + p_{2} + \cdots + p_{m} )}}}} \le \left( {\frac{{\sum\limits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\limits_{j = 1}^{m} {\mathop y\nolimits_{{\mathop i\nolimits_{j} }}^{{\mathop p\nolimits_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{(p_{1} + p_{2} + \cdots + p_{m} )}}}} .$$
     
  3.

    Let \(\underline{x} = MIN\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}\) and \(\overline{x} = MAX\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}.\) According to the property of idempotency, \(MIN\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} = \underline{x} = GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (\underline{x} , \ldots ,\underline{x} ).\) According to the property of monotonicity, since \(\underline{x} \le x_{i}\) for every i, we have \(\underline{x} = GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (\underline{x} , \ldots ,\underline{x} ) \le GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} ).\)

     

Similarly, \(\overline{x} = GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (\overline{x} , \ldots ,\overline{x} ) \ge GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} )\).

Therefore, we have \({\text{MIN}}\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} \le {\text{GMSM}}^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le {\text{MAX}}\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}.\)

Furthermore, the \(GMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator can reduce to some simple forms when m is assigned different values. These are represented as follows.
  1.
    When m = 2, the \(GMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the Bonferroni mean (BM) operator with the parameters \(p_{1} ,p_{2}\), as follows:
    $$\begin{aligned} GMSM^{{(2,p_{1} ,p_{2} )}} (x_{1} , \ldots ,x_{n} ) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < i_{2} \le n}} {x_{{i_{1} }}^{{p_{1} }} x_{{i_{2} }}^{{p_{2} }} } }}{{C_{n}^{2} }}} \right)^{{\frac{1}{{p_{1} + p_{2} }}}} = \left( {\frac{2}{n(n - 1)}\sum\limits_{1 \le i < j \le n} {x_{i}^{{p_{1} }} x_{j}^{{p_{2} }} } } \right)^{{\frac{1}{{p_{1} + p_{2} }}}} \\ & = \left( {\frac{1}{n(n - 1)}\sum\limits_{i,j = 1,i \ne j}^{n} {x_{i}^{{p_{1} }} x_{j}^{{p_{2} }} } } \right)^{{\frac{1}{{p_{1} + p_{2} }}}} = BM^{{p_{1} ,p_{2} }} . \\ \end{aligned}$$
    (12)
     
  2.
    When m = 3, the \(GMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the generalized Bonferroni mean (GBM) operator with the parameters \(p_{1} ,p_{2} ,p_{3}\), as follows:
    $$\begin{aligned} GMSM^{{(3,p_{1} ,p_{2} ,p_{3} )}} (x_{1} , \ldots ,x_{n} ) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < i_{2} < i_{3} \le n}} {\prod\nolimits_{j = 1}^{3} {x_{{i_{j} }}^{{p_{j} }} } } }}{{C_{n}^{3} }}} \right)^{{\frac{1}{{p_{1} + p_{2} + p_{3} }}}} \\ & = \left( {\frac{{6\sum\nolimits_{{1 \le i_{1} < i_{2} < i_{3} \le n}} {x_{{i_{1} }}^{{p_{1} }} x_{{i_{2} }}^{{p_{2} }} x_{{i_{3} }}^{{p_{3} }} } }}{{n\left( {n - 1} \right)\left( {n - 2} \right)}}} \right)^{{\frac{1}{{p_{1} + p_{2} + p_{3} }}}} \\ & = \left( {\frac{1}{{n\left( {n - 1} \right)\left( {n - 2} \right)}}\sum\limits_{i,j,k = 1,i \ne j \ne k}^{n} {x_{i}^{{p_{1} }} x_{j}^{{p_{2} }} x_{k}^{{p_{3} }} } } \right)^{{\frac{1}{{p_{1} + p_{2} + p_{3} }}}} = GBM^{{p_{1} ,p_{2} ,p_{3} }} . \\ \end{aligned}$$
    (13)
     
  3.
    When p 1 = p 2 = ··· = p m  = 1, the \({\text{GMSM}}^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the MSM operator with the parameter m, as follows:
    $$\begin{aligned} GMSM^{(m,1,1, \ldots ,1)} (x_{1} , \ldots ,x_{n} ) & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }}^{1} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} \\ & = \left( {\frac{{\sum\nolimits_{{1 \le i_{1} < \cdots < i_{m} \le n}} {\prod\nolimits_{j = 1}^{m} {x_{{i_{j} }} } } }}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} \\ & = MSM^{(m)} (x_{1} , \ldots ,x_{n} ). \\ \end{aligned}$$
    (14)
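These reductions are straightforward to check numerically. The following Python sketch (illustrative only; the function name and sample values are not from the paper) implements the \(GMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}}\) operator directly from its definition and verifies the idempotency, boundedness, and MSM-reduction properties discussed above:

```python
from itertools import combinations
from math import comb, prod

def gmsm(xs, m, ps):
    """GMSM^(m, p_1..p_m): average, over all m-combinations of the inputs,
    of prod_j x_{i_j}^{p_j}, then take the power 1/(p_1 + ... + p_m)."""
    n = len(xs)
    s = sum(prod(x ** p for x, p in zip(c, ps)) for c in combinations(xs, m))
    return (s / comb(n, m)) ** (1.0 / sum(ps))

xs = [2.0, 3.0, 5.0, 7.0]
# Idempotency: all-equal inputs return the common value.
print(abs(gmsm([4.0] * 5, 3, [1.0, 2.0, 0.5]) - 4.0) < 1e-9)   # -> True
# Boundedness: the result lies between the min and max of the inputs.
print(min(xs) <= gmsm(xs, 2, [1.0, 3.0]) <= max(xs))           # -> True
# With p_1 = ... = p_m = 1, GMSM coincides with the plain MSM of Eq. (14).
msm = (sum(prod(c) for c in combinations(xs, 2)) / comb(4, 2)) ** 0.5
print(abs(gmsm(xs, 2, [1.0, 1.0]) - msm) < 1e-12)              # -> True
```

Note that `math.comb` and `math.prod` require Python 3.8 or later.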
     

3.2 Geometric MSM operator

Definition 12

Let \(x_{i} \,(i = 1,2, \ldots ,n)\) be a collection of nonnegative real numbers, and let \(p_{1} ,p_{2} , \ldots ,p_{m} \ge 0\). A geometric MSM operator of dimension n is a mapping \(G_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}} :\left( {R^{ + } } \right)^{n} \to R^{ + }\), and it can be defined as follows:

$$\mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} , \ldots ,x_{n} ) = \frac{1}{{(p_{1} + p_{2} + \cdots + p_{m} )}}\left( {\prod\limits_{{1 \le i_{1} < \cdot \cdot \cdot < i_{m} \le n}} {(p_{1} x_{{i_{1} }} + p_{2} x_{{i_{2} }} + \cdots + p_{m} x_{{i_{m} }} } )} \right)^{{\frac{1}{{C_{n}^{m} }}}} ,$$
(15)
where (i 1i 2, …, i m ) traverses all the m-tuple combinations of (1, 2, …, n), and \(C_{n}^{m} = \frac{n!}{m!(n - m)!}\) is the binomial coefficient.
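Before turning to its properties, Eq. (15) can be sanity-checked with a short Python sketch (illustrative; the function name and sample values are assumptions, not the paper's):

```python
from itertools import combinations
from math import comb, prod

def geo_msm(xs, m, ps):
    """GeoMSM^(m, p_1..p_m) of Eq. (15): multiply (p_1 x_{i_1} + ... +
    p_m x_{i_m}) over all m-combinations, take the power 1/C(n, m), and
    divide by p_1 + ... + p_m."""
    n = len(xs)
    g = prod(sum(p * x for p, x in zip(ps, c)) for c in combinations(xs, m))
    return g ** (1.0 / comb(n, m)) / sum(ps)

# Idempotency: each factor collapses to (p_1 + ... + p_m) * x.
print(abs(geo_msm([3.0] * 6, 2, [2.0, 5.0]) - 3.0) < 1e-9)    # -> True
# Boundedness: the result lies between the min and max of the inputs.
v = geo_msm([1.0, 2.0, 9.0], 2, [1.0, 1.0])
print(min([1.0, 2.0, 9.0]) <= v <= max([1.0, 2.0, 9.0]))      # -> True
```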

The \(\mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}}\) operator also has the following desirable properties:

Property 2

  1.

    Idempotency. If x ≥ 0, and \(x_{i} = x\) for all i, then \(\mathop G\nolimits_{eo} {\text{MSM}}^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x,x, \ldots ,x) = x.\)

     
  2.

    Monotonicity. If \(x_{i} \le y_{i}\) for all i, where \(x_{i}\) and \(y_{i}\) are nonnegative real numbers, then \(\mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le \mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (y_{1} ,y_{2} , \ldots ,y_{n} ).\)

     
  3.

    Boundedness. The \(\mathop G\nolimits_{eo} {\text{MSM}}^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}}\) operator lies between the max and min operators. \(MIN\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\} \le \mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (x_{1} ,x_{2} , \ldots ,x_{n} ) \le MAX\left\{ {x_{1} ,x_{2} , \ldots ,x_{n} } \right\}.\)

     

Because the proof of Property 2 is similar to that of Property 1, it is omitted here.

The \(\mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}}\) operator can reduce to simple forms when m is assigned different values. One example is as follows:
  1.

    When m = 2, the \(G_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the geometric Bonferroni mean (\(G_{eo} BM\)) operator with the parameters \(p_{1} ,p_{2}\), as follows:

    $$\begin{aligned} \mathop G\nolimits_{eo} MSM^{{(2,p_{1} ,p_{2} )}} (x_{1} , \ldots ,x_{n} ) & = \frac{1}{{p_{1} + p_{2} }}\left( {\prod\limits_{{1 \le i_{1} < i_{2} \le n}} {\left( {p_{1} x_{{i_{1} }} + p_{2} x_{{i_{2} }} } \right)} } \right)^{{\frac{1}{{C_{n}^{2} }}}} \\ & = \frac{1}{{p_{1} + p_{2} }}\left( {\prod\limits_{{1 \le i_{1} < i_{2} \le n}} {\left( {p_{1} x_{{i_{1} }} + p_{2} x_{{i_{2} }} } \right)} } \right)^{{\frac{2}{(n - 1)n}}} \\ & = \frac{1}{{p_{1} + p_{2} }}\left( {\prod\limits_{i,j = 1,i \ne j}^{n} {\left( {p_{1} x_{i} + p_{2} x_{j} } \right)} } \right)^{{\frac{1}{(n - 1)n}}} \\ & = \mathop G\nolimits_{eo} BM^{{(p_{1} ,p_{2} )}} . \\ \end{aligned}$$
    (16)
     

3.3 Some SVNLMSM operators

In this subsection, we develop the \(SVNLMSM^{(m)}\), \(SVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) and \(SVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operators to enhance applicability when aggregating information in the form of SVNLNs. The main advantage of these operators is that they capture the interrelationships among the multiple input arguments, which arises from the multiplication of \(a_{{i_{j} }}^{{p_{j} }}\) and \(a_{{i_{k} }}^{{p_{k} }} (k \ne j)\) in the aggregation equation. We also discuss some desirable properties and special cases.

Definition 13

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs. The single-valued neutrosophic linguistic Maclaurin symmetric mean operator \(SVNLMSM:\,\Omega^{n} \to \Omega\) is
$$SVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{{\mathop \oplus \limits_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( {\mathop \otimes \limits_{j = 1}^{m} \mathop a\nolimits_{{\mathop i\nolimits_{j} }} } \right)}}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} ,$$
(17)
where m = 1, 2, …, n and \(\Omega\) is the set including all SVNLNs.

Based on the calculation laws of SVNLNs described earlier, the SVNLMSM operator can be expressed as follows.

Theorem 1

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. Then, the value aggregated by the SVNLMSM operator is still a SVNLN, and
$$\begin{aligned} SVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} ) = & \left\langle {\mathop f\nolimits^{* - 1} \left( {\mathop {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left( {\prod\nolimits_{j = 1}^{m} { \, f^{*} ( \, s_{{\theta_{{ \, i_{j}^{(k)} }} }} )} } \right)} }}{{C_{n}^{m} }}} \right)}\nolimits^{{\frac{1}{m}}} } \right)} \right.,\left( {\mathop {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, A^{(k)} \cdot \prod\limits_{j = 1}^{m} { \, T_{{ \, i_{j}^{(k)} }} } } \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, A^{(k)} } }}} \right)}\nolimits^{{\frac{1}{m}}} } \right. \\ & \quad 1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, A^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - \, I_{{ \, i_{j}^{(k)} }} } \right)} } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, A^{(k)} } }}} \right)}\nolimits^{{\frac{1}{m}}} , \\ & \quad \left. {\left. {1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, A^{(k)} \left( {1 - \prod\limits_{j = 1}^{m} {\left( {1 - \, F_{{ \, i_{j}^{(k)} }} } \right)} } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, A^{(k)} } }}} \right)}\nolimits^{{\frac{1}{m}}} } \right)} \right\rangle , \\ \end{aligned}$$
(18)
where \(\, A^{(k)} = \prod\nolimits_{j = 1}^{m} { \, f^{*} ( \, s_{{\theta_{{ \, i_{j}^{(k)} }} }} )}\) \((k = 1,2, \ldots ,C_{n}^{m} )\) and \(a_{{i_{j}^{\left( k \right)} }}\) represents the \(i_{j}\)th element in the kth permutation.

The detailed proof of Theorem 1 is provided in the “Appendix”; the proofs of Theorems 2 and 3 are similar.
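To experiment with Eq. (18), the following Python sketch may help (it is not part of the paper; the tuple encoding of SVNLNs and the linear linguistic scale function \(f^{*} (s_{\theta } ) = \theta /6\) for a seven-term set \(s_{0} , \ldots ,s_{6}\) are assumptions):

```python
from itertools import combinations
from math import comb, prod

def svnlmsm(a, m, f=lambda th: th / 6.0, finv=lambda v: 6.0 * v):
    """SVNLMSM^(m) of Eq. (18). Each SVNLN is encoded as a tuple
    (theta, T, I, F); f/finv play the role of an assumed linear linguistic
    scale function f*(s_theta) = theta/6 and its inverse."""
    n = len(a)
    combos = list(combinations(a, m))
    A = [prod(f(x[0]) for x in c) for c in combos]  # A^(k) = prod_j f*(s_theta)
    sA = sum(A)
    theta = finv((sA / comb(n, m)) ** (1.0 / m))
    T = (sum(Ak * prod(x[1] for x in c)
             for Ak, c in zip(A, combos)) / sA) ** (1.0 / m)
    I = 1 - (1 - sum(Ak * (1 - prod(1 - x[2] for x in c))
                     for Ak, c in zip(A, combos)) / sA) ** (1.0 / m)
    F = 1 - (1 - sum(Ak * (1 - prod(1 - x[3] for x in c))
                     for Ak, c in zip(A, combos)) / sA) ** (1.0 / m)
    return theta, T, I, F

a = [(4, 0.7, 0.1, 0.2), (3, 0.5, 0.3, 0.3), (5, 0.8, 0.2, 0.1)]
print(svnlmsm(a, 2))   # aggregated SVNLN (theta, T, I, F)
# Idempotency (Property 3): equal inputs are returned unchanged.
r = svnlmsm([(3, 0.6, 0.2, 0.3)] * 4, 2)
print(all(abs(u - v) < 1e-9 for u, v in zip(r, (3, 0.6, 0.2, 0.3))))  # -> True
```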

Property 3

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. The \(SVNLMSM^{(m)}\) operator has the following desirable properties:

  1.

    Idempotency. If the SVNLN \(a_{i} = a = \langle s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)\rangle\) for each \(i \left( {i = 1,2, \ldots ,n} \right)\), then \(SVNLMSM^{\left( m \right)} \left( {a,a, \ldots ,a} \right) = a = \langle s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)\rangle.\)

     
  2.

    Commutativity. Let \(\left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right)\) be any permutation of (a 1 a 2 …, a n ). Then, \(SVNLMSM^{\left( m \right)} \left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right) = SVNLMSM^{\left( m \right)} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\).

     

Proof

  1.

    Since each \(a_{i} = a\), we have

    $$\begin{aligned} SVNLMSM^{{(m)}} (a, \ldots ,a) = & \left\langle {\mathop f\nolimits^{{* - 1}} \left( {\mathop {\left( {\frac{{C_{n}^{m} \mathop {\left( {\mathop f\nolimits^{*} (\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} )} \right)}\nolimits^{m} }}{{C_{n}^{m} }}} \right)}\nolimits^{{\frac{1}{m}}} } \right)} \right.,\left( {\mathop {\left( {\frac{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {\left\{ {{\text{ }}A^{{(k)}} \cdot ({\text{ }}T_{a} )^{m} } \right\}} }}{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {{\text{ }}A^{{(k)}} } }}} \right)}\nolimits^{{\frac{1}{m}}} ,} \right. \\ & \quad 1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {\left\{ {{\text{ }}A^{{(k)}} \cdot \left( {1 - \left( {1 - {\text{ }}I_{a} } \right)^{m} } \right)} \right\}} }}{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {{\text{ }}A^{{(k)}} } }}} \right)}\nolimits^{{\frac{1}{m}}} \\ & \quad \left. {\left. {1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {\left\{ {{\text{ }}A^{{(k)}} \cdot \left( {1 - \left( {1 - {\text{ }}F_{a} } \right)^{m} } \right)} \right\}} }}{{\sum\nolimits_{{k = 1}}^{{C_{n}^{m} }} {{\text{ }}A^{{(k)}} } }}} \right)}\nolimits^{{\frac{1}{m}}} } \right)} \right\rangle \\ & = \left\langle {\mathop s\nolimits_{{\mathop \theta \nolimits_{a} }} ,\left( {\mathop T\nolimits_{a} ,\mathop I\nolimits_{a} ,\mathop F\nolimits_{a} } \right)} \right\rangle = a. \\ \end{aligned}$$
     
  2.

    Based on Definition 13 and Theorem 1, it is easy to prove the commutativity.

     
By assigning different values to the parameter m, some special cases of the \(SVNLMSM^{(m)}\) operator can be derived as follows:
  1.

    When m = 1, the \(SVNLMSM^{(m)}\) operator reduces to an arithmetic average of the set, as follows:

    $$\begin{aligned} & SVNLMSM^{(1)} (a_{1} , \ldots ,a_{n} ) = \frac{{ \oplus_{i = 1}^{n} \, a_{i} }}{n} \\ & \quad = \left\langle {f^{* - 1} \left( {\frac{1}{n}\sum\limits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)} } \right),\left( {\frac{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot T_{i} } }}{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)} }},\frac{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot I_{i} } }}{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)} }},\frac{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot F_{i} } }}{{\sum\nolimits_{i = 1}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)} }}} \right)} \right\rangle . \\ \end{aligned}$$
    (19)
     
This is a simple aggregation form that reflects the average evaluation.
  2.

    When m = 2, the SVNLMSM (m) operator reduces to the following form:

    $$\begin{aligned} & SVNLMSM^{(2)} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{{ \oplus_{i,j = 1,i \ne j}^{n} \, a_{i} \otimes a_{j} }}{n(n - 1)}} \right)^{{\frac{1}{2}}} \\ & \quad = \left\langle {f^{* - 1} \left( {\left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right)} }}{{n(n - 1)}}} \right)^{{\frac{1}{2}}} } \right)} \right.,\left( {\left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right) \cdot T_{i} T_{j} } }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right)} }}} \right)^{{\frac{1}{2}}} ,} \right. \\ & \quad \quad 1 - \left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right) \cdot (1 - I_{i} )(1 - I_{j} )} }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right)} }}} \right)^{{\frac{1}{2}}} , \\ & \quad \quad \left. {\left. {1 - \left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right) \cdot (1 - F_{i} )(1 - F_{j} )} }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right) \cdot f^{*} \left( {s_{{\theta_{j} }} } \right)} }}} \right)^{{\frac{1}{2}}} } \right)} \right\rangle . \\ \end{aligned}$$
    (20)
     
  3.

    When m = n, the SVNLMSM (m) operator reduces to the following form:

    $$\begin{aligned} SVNLMSM^{(n)} (a_{1} , \ldots ,a_{n} ) = \left( { \otimes_{i = 1}^{n} \, a_{i} } \right)^{{\frac{1}{n}}} \hfill \\ = \left\langle {\mathop {f}\nolimits^{* - 1} \left( {\left( {\prod\nolimits_{i = 1}^{n} { \, f^{*} \left( { \, s_{{\theta_{i} }} } \right)} } \right)^{{\frac{1}{n}}} } \right),\left( {\left( {\prod\nolimits_{i = 1}^{n} { \, T_{i} } } \right)^{{\frac{1}{n}}} ,1 - \left( {\prod\nolimits_{i = 1}^{n} {\left( {1 - \, I_{i} } \right)} } \right)^{{\frac{1}{n}}} ,1 - \left( {\prod\nolimits_{i = 1}^{n} {\left( {1 - \, F_{i} } \right)} } \right)^{{\frac{1}{n}}} } \right)} \right\rangle . \hfill \\ \end{aligned}$$
    (21)
     

Definition 14

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs. The single-valued neutrosophic linguistic generalized Maclaurin symmetric mean operator \(SVNLGMSM:\,\Omega ^{n} \to \Omega\) is

$$SVNLGMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{{ \oplus_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( { \otimes_{j = 1}^{m} \, a_{{ \, i_{j} }}^{{ \, p_{j} }} } \right)}}{{C_{n}^{m} }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} ,$$
(22)
where m = 1, 2, …, n and \(\Omega\) is the set including all SVNLNs.

Based on the calculation laws of SVNLNs described earlier, the SVNLGMSM operator can be expressed as follows.

Theorem 2

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. Then, the value aggregated by the SVNLGMSM operator is still a SVNLN, and

$$\begin{aligned} &SVNLGMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) \\ &\quad = \left\langle {\mathop f\nolimits^{* - 1} \left( {\mathop {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left( {\prod\nolimits_{j = 1}^{m} {\left( { \, f^{*} \left( { \, s_{{\theta_{{ \, i_{j}^{(k)} }} }} } \right)} \right)^{{p_{j} }} } } \right)} }}{{C_{n}^{m} }}} \right)}\nolimits^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} } \right)} \right., \\ &\quad \quad \left( {\mathop {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, B^{(k)} \cdot \prod\nolimits_{j = 1}^{m} { \, T_{{ \, i_{j}^{(k)} }}^{{ \, p_{j} }} } } \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, B^{(k)} } }}} \right)}\nolimits^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} ,} \right. \\ &\quad \quad 1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, B^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - \, I_{{ \, i_{j}^{(k)} }} } \right)^{{ \, p_{j} }} } } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, B^{(k)} } }}} \right)}\nolimits^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}}, \\ &\quad \quad \left. {\left. {1 - \mathop {\left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ { \, B^{(k)} \cdot \left( {1 - \prod\limits_{j = 1}^{m} {\left( {1 - \, F_{{ \, i_{j}^{(k)} }} } \right)^{{ \, p_{j} }} } } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} { \, B^{(k)} } }}} \right)}\nolimits^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} } \right)} \right\rangle , \\ \end{aligned}$$
(23)
where \(\, B^{(k)} = \prod\nolimits_{j = 1}^{m} {\left( { \, f^{*} ( \, s_{{\theta_{{ \, i_{j}^{(k)} }} }} )} \right)^{{ \, p_{j} }} } ,(k = 1,2, \ldots ,C_{n}^{m} )\) and \(a_{{i_{j}^{(k)} }}\) represents the i j th element in kth permutation. This result can be derived directly based on the operational law of SVNLNs.
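As with Theorem 1, Eq. (23) can be implemented directly. The Python sketch below (illustrative; the tuple encoding of SVNLNs and the linear scale function \(f^{*} (s_{\theta } ) = \theta /6\) are assumptions) computes the SVNLGMSM operator and checks the idempotency stated in Property 4:

```python
from itertools import combinations
from math import comb, prod

def svnlgmsm(a, m, ps, f=lambda th: th / 6.0, finv=lambda v: 6.0 * v):
    """SVNLGMSM^(m, p_1..p_m) of Eq. (23). SVNLNs are (theta, T, I, F)
    tuples; f/finv are an assumed linear linguistic scale function and
    its inverse."""
    n, P = len(a), sum(ps)
    combos = list(combinations(a, m))
    B = [prod(f(x[0]) ** p for x, p in zip(c, ps)) for c in combos]  # B^(k)
    sB = sum(B)
    theta = finv((sB / comb(n, m)) ** (1.0 / P))
    T = (sum(Bk * prod(x[1] ** p for x, p in zip(c, ps))
             for Bk, c in zip(B, combos)) / sB) ** (1.0 / P)
    I = 1 - (1 - sum(Bk * (1 - prod((1 - x[2]) ** p for x, p in zip(c, ps)))
                     for Bk, c in zip(B, combos)) / sB) ** (1.0 / P)
    F = 1 - (1 - sum(Bk * (1 - prod((1 - x[3]) ** p for x, p in zip(c, ps)))
                     for Bk, c in zip(B, combos)) / sB) ** (1.0 / P)
    return theta, T, I, F

# Idempotency (Property 4): equal inputs are returned unchanged,
# whatever nonnegative parameters p_1, p_2 are chosen.
r = svnlgmsm([(4, 0.7, 0.1, 0.2)] * 4, 2, [1.0, 3.0])
print(all(abs(u - v) < 1e-9 for u, v in zip(r, (4, 0.7, 0.1, 0.2))))  # -> True
```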

Property 4

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \,(i = 1,2, \ldots ,n)\) be a collection of SVNLNs. The \(SVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) (m = 1, 2, …, n) operator has the following desirable properties:

  1.

    Idempotency. If the SVNLN \(a_{i} = a = \left\langle {s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)} \right\rangle\) for each \(i \left( {i = 1,2, \ldots ,n} \right)\), then \(SVNLGMSM^{\left( m \right)} \left( {a,a, \ldots ,a} \right) = a = \left\langle {s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)} \right\rangle\).

     
  2.

    Commutativity. Let \(\left( {a_{1}^{{\prime }} ,a_{2}^{{\prime }} , \ldots ,a_{n}^{{\prime }} } \right)\) be any permutation of (a 1a 2, …, a n ). Then, \(SVNLGMSM^{\left( m \right)} \left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right) = SVNLGMSM^{\left( m \right)} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right).\)

     

The proof of Property 4 is similar to that of Property 3; therefore, it is omitted here.

By assigning different values to the parameter m, some special cases of the \(SVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator can be derived as follows:
  1.

    When m = 2, the \(SVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the single-valued neutrosophic linguistic Bonferroni mean (SVNLBM) operator with the parameters \(p_{1} ,p_{2}\), as follows:

    $$\begin{aligned} & SVNLGMSM^{{(2,p_{1} ,p_{2} )}} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{1}{n(n - 1)}\mathop \oplus \limits_{i,j = 1,i \ne j}^{n} a_{i}^{{p_{1} }} \otimes a_{j}^{{p_{2} }} } \right)^{{\frac{1}{{p_{1} + p_{2} }}}} \\ & \quad = \left\langle {f^{* - 1} \left( {\left( {\frac{1}{n(n - 1)}\sum\limits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} } } \right)^{{\frac{1}{{p_{1} + p_{2} }}}} } \right)} \right.,\left( {\left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} \cdot T_{i}^{{p_{1} }} T_{j}^{{p_{2} }} } }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} }}}} ,} \right. \\ & \quad \quad 1 - \left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} \cdot (1 - I_{i} )^{{p_{1} }} (1 - I_{j} )^{{p_{2} }} } }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} }}}} , \\ & \quad \quad \left. {\left. {1 - \left( {\frac{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} \cdot (1 - F_{i} )^{{p_{1} }} (1 - F_{j} )^{{p_{2} }} } }}{{\sum\nolimits_{i,j = 1,i \ne j}^{n} {f^{*} \left( {s_{{\theta_{i} }} } \right)^{{p_{1} }} f^{*} \left( {s_{{\theta_{j} }} } \right)^{{p_{2} }} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} }}}} } \right)} \right\rangle . \\ \end{aligned}$$
    (24)
     

Definition 15

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs. The single-valued neutrosophic linguistic geometric Maclaurin symmetric mean operator \(SVNLG_{eo} MSM:\,\Omega ^{n} \to \Omega\) is

$$SVNL\mathop G\nolimits_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) = \frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}\left( {\mathop \otimes \limits_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( {p_{1} a_{{i_{1} }} \oplus p_{2} a_{{i_{2} }} \oplus \cdots \oplus p_{m} a_{{i_{m} }} } \right)} \right)^{{\frac{1}{{C_{n}^{m} }}}} ,$$
(25)
where m = 1, 2, …, n and \(\Omega\) is the set of all SVNLNs.

The following desirable results can be obtained using the operational rules of SVNLNs:

Theorem 3

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, where m = 1, 2, …, n and \(p_{1} ,p_{2} , \ldots ,p_{m} \ge 0\). Then, the value aggregated by the \(SVNLG_{eo} MSM\) operator is still a SVNLN, and

$$\begin{aligned} & SVNLG_{eo} MSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) \\ & \quad = \left\langle {f^{* - 1} \left( {\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}\left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {\sum\limits_{j = 1}^{m} {p_{j} \cdot f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} } \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} } \right)} \right.,\left( {\left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\frac{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} \cdot T_{{i_{j}^{(k)} }} } }}{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} } }}} } \right)^{{\frac{1}{{C_{n}^{m} }}}} ,} \right. \\ & \quad \quad \left. {\left. {1 - \left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {1 - \frac{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} \cdot I_{{i_{j}^{(k)} }} } }}{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} } }}} \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} ,1 - \left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {1 - \frac{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} \cdot F_{{i_{j}^{(k)} }} } }}{{\sum\nolimits_{j = 1}^{m} {V_{j}^{(k)} } }}} \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} } \right)} \right\rangle , \\ \end{aligned}$$
(26)
where \(V_{j}^{(k)} = p_{j} \cdot f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)\) (\(k = 1,2, \ldots ,C_{n}^{m}\); \(j = 1,2, \ldots ,m\)) and \(a_{{i_{j}^{(k)} }}\) represents the \(i_{j}\)th element in the kth permutation.

This result can be derived directly based on the operational law of SVNLNs.
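Eq. (26) can likewise be checked numerically. In this hedged Python sketch (the tuple encoding and linear scale function \(f^{*} (s_{\theta } ) = \theta /6\) are assumptions, not the paper's notation), each m-combination contributes a weighted-average factor built from the weights \(p_{j} \cdot f^{*} (s_{\theta } )\):

```python
from itertools import combinations
from math import comb

def svnl_geo_msm(a, m, ps, f=lambda th: th / 6.0, finv=lambda v: 6.0 * v):
    """SVNLGeoMSM^(m, p_1..p_m) of Eq. (26). SVNLNs are (theta, T, I, F)
    tuples; f/finv are an assumed linear linguistic scale function and
    its inverse."""
    n = len(a)
    P = sum(ps)
    C = comb(n, m)
    theta_g = T_g = I_g = F_g = 1.0
    for c in combinations(a, m):
        V = [p * f(x[0]) for x, p in zip(c, ps)]   # per-combination weights
        sV = sum(V)
        theta_g *= sV
        T_g *= sum(v * x[1] for v, x in zip(V, c)) / sV
        I_g *= 1 - sum(v * x[2] for v, x in zip(V, c)) / sV
        F_g *= 1 - sum(v * x[3] for v, x in zip(V, c)) / sV
    return (finv(theta_g ** (1.0 / C) / P),
            T_g ** (1.0 / C), 1 - I_g ** (1.0 / C), 1 - F_g ** (1.0 / C))

# Idempotency (Property 5): equal inputs are returned unchanged.
r = svnl_geo_msm([(3, 0.6, 0.2, 0.3)] * 5, 3, [1.0, 2.0, 2.0])
print(all(abs(u - v) < 1e-9 for u, v in zip(r, (3, 0.6, 0.2, 0.3))))  # -> True
```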

Property 5

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. The \(SVNLG_{eo} MSM^{(m)}\) operator has the following desirable properties:

  1.

    Idempotency. If the SVNLN \(a_{i} = a = \left\langle {s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)} \right\rangle\) for each \(i\,\left( {i = 1,2, \ldots ,n} \right)\), then \(SVNLG_{eo} MSM^{\left( m \right)} \left( {a,a, \ldots ,a} \right) = a = \left\langle {s_{{\theta_{a} }} ,\left( {T_{a} ,I_{a} ,F_{a} } \right)} \right\rangle\).

     
  2.

    Commutativity. Let \(\left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right)\) be any permutation of (a 1a 2, …, a n ). Then, \(SVNLG_{eo} MSM^{\left( m \right)} \left( {a_{1}^{\prime } ,a_{2}^{\prime } , \ldots ,a_{n}^{\prime } } \right) = SVNLG_{eo} MSM^{\left( m \right)} \left( {a_{1} ,a_{2} , \ldots ,a_{n} } \right)\).

     

The proof of Property 5 is similar to that of Property 3; therefore, it is omitted here.

One special case of the \(SVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator can be derived as follows:
  1.

    When m = 2, the \(SVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator reduces to the following form:

    $$\begin{aligned} & SVNL\mathop G\nolimits_{{eo}} MSM^{{(2,p_{1} ,p_{2} )}} (a_{1} , \ldots ,a_{n} ) = \frac{1}{{p_{1} + p_{2} }}\left( {\mathop \otimes \limits_{{i,j = 1,i \ne j}}^{n} \left( {p_{1} a_{i} \oplus p_{2} a_{j} } \right)} \right)^{{\frac{1}{{(n - 1)n}}}} \\ & \quad = \left\langle {\left( {\mathop f\nolimits^{{* - 1}} \left( {\frac{1}{{p_{1} + p_{2} }}\left( {\prod\limits_{\begin{subarray}{l} i,j = 1, \\ {\kern 1pt} i \ne j \end{subarray} }^{n} {\left( {p_{1} \mathop {f}\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right) + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)} \right)} } \right)^{{\frac{1}{{(n - 1)n}}}} } \right),\left( {\mathop {\prod\limits_{\begin{subarray}{l} i,j = 1, \\ i \ne j \end{subarray} }^{n} {\left( {\frac{{p_{1} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right)\mathop T\nolimits_{i} + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)\mathop T\nolimits_{j} }}{{p_{1} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right) + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)}}} \right)} }\nolimits^{{\frac{1}{{(n - 1)n}}}} } \right.} \right.} \right.{\kern 1pt} \\ & \quad \left. {\left. 
{\quad 1 - \mathop {\left( {\prod\limits_{\begin{subarray}{l} i,j = 1, \\ i \ne j \end{subarray} }^{n} {\left( {1 - \frac{{p_{1} \mathop {f}\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right)\mathop I\nolimits_{i} + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)\mathop I\nolimits_{j} }}{{p_{1} \mathop {f}\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right) + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)}}} \right)} } \right)}\nolimits^{{\frac{1}{{(n - 1)n}}}} ,1 - \mathop {\left( {\prod\limits_{\begin{subarray}{l} i,j = 1, \\ {\kern 1pt} i \ne j \end{subarray} }^{n} {\left( {1 - \frac{{p_{1} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right)\mathop F\nolimits_{i} + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)\mathop F\nolimits_{j} }}{{p_{1} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{i} }} } \right) + p_{2} \mathop f\nolimits^{*} \left( {\mathop s\nolimits_{{\mathop \theta \nolimits_{j} }} } \right)}}} \right)} } \right)}\nolimits^{{\frac{1}{{(n - 1)n}}}} } \right)} \right\rangle . \\ \end{aligned}$$
    (27)
     

3.4 Some weighted SVNLMSM operators

Each individual's view is unique, shaped by different knowledge and experience, so the significance attached to each input should also differ. Therefore, in this subsection, we propose several weight-associated operators, defined as follows:

Definition 16

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and let \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{\text{T}}\) be the weight vector, which satisfies \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\) and \(w_{i} > 0\) (i = 1, 2, …, n). Each \(w_{i}\) denotes the importance degree of \(a_{i}\). The weighted single-valued neutrosophic linguistic Maclaurin symmetric mean operator \(WSVNLMSM:\,\Omega ^{n} \to \Omega\) is

$$WSVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{{ \oplus_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( { \otimes_{j = 1}^{m} \left( {n \, w_{{ \, i_{j} }} } \right) \cdot a_{{ \, i_{j} }} } \right)}}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} ,$$
(28)
where m = 1, 2, …, n and \(\Omega\) is the set including all SVNLNs.

Based on the calculation laws for SVNLNs described earlier, the WSVNLMSM operator can be expressed as follows.

Theorem 4

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. Then, the value aggregated by the WSVNLMSM operator is still a SVNLN, and

$$\begin{aligned} WSVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} ) & = \left\langle {f^{* - 1} \left( {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left( {\prod\nolimits_{j = 1}^{m} {\left( {nw_{{i_{j}^{(k)} }} } \right)f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} } \right)} }}{{C_{n}^{m} }}} \right)^{{\frac{1}{m}}} } \right)} \right.,\left( {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {A_{ \bullet }^{(k)} \cdot \prod\nolimits_{j = 1}^{m} {T_{{i_{j}^{(k)} }} } } \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {A_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{m}}} ,} \right. \\ & \quad 1 - \left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {A_{ \bullet }^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - I_{{i_{j}^{(k)} }} } \right)} } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {A_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{m}}} , \\ & \quad \left. {\left. {1 - \left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {A_{ \bullet }^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - F_{{i_{j}^{(k)} }} } \right)} } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {A_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{m}}} } \right)} \right\rangle , \\ \end{aligned}$$
(29)
where \(A_{ \bullet }^{(k)} = \prod\nolimits_{j = 1}^{m} {\left( {n\,w_{{i_{j}^{(k)} }} } \right) \cdot f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} \;(k = 1,2, \ldots ,C_{n}^{m} )\) and \(a_{{i_{j}^{(k)} }}\) denotes the \(i_{j}\)th element in the kth combination.

This result can be derived directly from the operational laws of SVNLNs. The proof of Theorem 4 is similar to that of Theorem 1 and is therefore omitted here.
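To make the aggregation in Eq. (29) concrete, the following Python sketch implements the WSVNLMSM operator with the linear linguistic scale function \(f^{*}(s_{i}) = i/(2t)\); the function name and data layout are our own illustrative choices. Using the weights and the row for alternative a 1 of Table 1 in Sect. 5.2 (Case 1), it reproduces the comprehensive value r 1 reported there.

```python
from itertools import combinations

def wsvnlmsm(svnlns, w, m, t):
    """Sketch of the WSVNLMSM operator of Eq. (29).

    svnlns: list of (theta, T, I, F) tuples representing SVNLNs;
    w: weight vector summing to 1; m: MSM parameter;
    t: half the linguistic scale size, so f*(s_i) = i / (2t).
    """
    n = len(svnlns)
    fstar = lambda theta: theta / (2 * t)          # linear scale function f*
    num_theta = num_T = num_I = num_F = den = 0.0
    combos = list(combinations(range(n), m))       # all 1 <= i_1 < ... < i_m <= n
    for combo in combos:
        A = 1.0                                    # A_.^{(k)} = prod (n w_i) f*(s_i)
        prod_T = prod_1mI = prod_1mF = 1.0
        for i in combo:
            theta, T, I, F = svnlns[i]
            A *= n * w[i] * fstar(theta)
            prod_T *= T
            prod_1mI *= 1 - I
            prod_1mF *= 1 - F
        num_theta += A
        num_T += A * prod_T
        num_I += A * (1 - prod_1mI)
        num_F += A * (1 - prod_1mF)
        den += A
    return (2 * t * (num_theta / len(combos)) ** (1 / m),   # f*^{-1} of the mean
            (num_T / den) ** (1 / m),
            1 - (1 - num_I / den) ** (1 / m),
            1 - (1 - num_F / den) ** (1 / m))

# Row a_1 of Table 1 with w = (0.25, 0.22, 0.35, 0.18), t = 3, m = 2
a1 = [(4, 0.6, 0.6, 0.1), (5, 0.6, 0.4, 0.3), (4, 0.8, 0.5, 0.1), (2, 0.8, 0.3, 0.1)]
r1 = wsvnlmsm(a1, [0.25, 0.22, 0.35, 0.18], m=2, t=3)
print(tuple(round(x, 4) for x in r1))  # (3.7594, 0.6858, 0.477, 0.1592)
```

The output agrees with \(r_{1} = \left\langle s_{3.7594}, (0.6858, 0.4770, 0.1592)\right\rangle\) computed in Case 1.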

Property 6

(Reducibility) Let \(w = \left( {\frac{1}{n},\frac{1}{n}, \ldots ,\frac{1}{n}} \right)^{\text{T}}\). Then, \(WSVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} )\, = \,SVNLMSM^{(m)} (a_{1} , \ldots ,a_{n} )\).

The detailed proof of Property 6 is described in the “Appendix.”

Definition 17

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and let \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{\text{T}}\) be the weight vector, which satisfies \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\) and \(w_{i} > 0\) \((i = 1,2, \ldots ,n)\); each \(w_{i}\) denotes the importance degree of \(a_{i}\). The weighted generalized single-valued neutrosophic linguistic Maclaurin symmetric mean operator \(WSVNLGMSM\,:\,\Omega ^{n} \to\Omega\) is

$$WSVNLGMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) = \left( {\frac{{ \oplus_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( { \otimes_{j = 1}^{m} \left( {\left( {n \, w_{{ \, i_{j} }} } \right) \cdot a_{{ \, i_{j} }} } \right)^{{ \, p_{j} }} } \right)}}{{C_{n}^{m} }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} ,$$
(30)
where m = 1, 2, …, n and \(\Omega\) is the set including all SVNLNs.

Based on the calculation laws for SVNLNs described earlier, the WGSVNLMSM operator can be expressed as follows:

Theorem 5

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and m = 1, 2, …, n. Then, the value aggregated by the WSVNLGMSM operator is still a SVNLN, and

$$\begin{aligned} & WSVNLGMSM^{{(m,p_{1} ,p_{2} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) \\ & \quad = \left\langle {f^{* - 1} \left( {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left( {\prod\nolimits_{j = 1}^{m} {\left( {nw_{{i_{j}^{(k)} }} f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} \right)^{{p_{j} }} } } \right)} }}{{C_{n}^{m} }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} } \right)} \right.,\left( {\left( {\frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {B_{ \bullet }^{(k)} \cdot \prod\nolimits_{j = 1}^{m} {T_{{i_{j}^{(k)} }}^{{p_{j} }} } } \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {B_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} } \right., \\ & \quad \quad 1 - \left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {B_{ \bullet }^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - I_{{i_{j}^{(k)} }} } \right)^{{p_{j} }} } } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {B_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} ,\left. {\left. {1 - \left( {1 - \frac{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {\left\{ {B_{ \bullet }^{(k)} \cdot \left( {1 - \prod\nolimits_{j = 1}^{m} {\left( {1 - F_{{i_{j}^{(k)} }} } \right)^{{p_{j} }} } } \right)} \right\}} }}{{\sum\nolimits_{k = 1}^{{C_{n}^{m} }} {B_{ \bullet }^{(k)} } }}} \right)^{{\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}}} } \right)} \right\rangle , \\ \end{aligned}$$
(31)
where \(B_{ \bullet }^{(k)} = \prod\nolimits_{j = 1}^{m} {\left( {nw_{{i_{j}^{(k)} }} f^{*} \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} \right)^{{p_{j} }} } \;(k = 1,2, \ldots ,C_{n}^{m} )\) and \(a_{{i_{j}^{(k)} }}\) denotes the \(i_{j}\)th element in the kth combination.

This result can be derived directly from the operational laws of SVNLNs. The proof of Theorem 5 is similar to that of Theorem 1 and is therefore omitted here.
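Eq. (31) can be sketched the same way. In particular, when every \(p_{j} = 1\), the WSVNLGMSM operator reduces to the WSVNLMSM operator of Eq. (29), which provides a simple consistency check against the value \(r_{1}\) computed in Case 1 of Sect. 5.2. The function name and data layout below are illustrative choices, not part of the original text.

```python
from itertools import combinations

def wsvnlgmsm(svnlns, w, m, p, t):
    """Sketch of the WSVNLGMSM operator of Eq. (31); p = (p_1, ..., p_m)."""
    n, P = len(svnlns), sum(p)
    fstar = lambda theta: theta / (2 * t)          # linear scale function f*
    num_theta = num_T = num_I = num_F = den = 0.0
    combos = list(combinations(range(n), m))       # all 1 <= i_1 < ... < i_m <= n
    for combo in combos:
        B = 1.0                                    # B_.^{(k)} = prod ((n w) f*(s))^{p_j}
        prod_T = prod_1mI = prod_1mF = 1.0
        for j, i in enumerate(combo):
            theta, T, I, F = svnlns[i]
            B *= (n * w[i] * fstar(theta)) ** p[j]
            prod_T *= T ** p[j]
            prod_1mI *= (1 - I) ** p[j]
            prod_1mF *= (1 - F) ** p[j]
        num_theta += B
        num_T += B * prod_T
        num_I += B * (1 - prod_1mI)
        num_F += B * (1 - prod_1mF)
        den += B
    return (2 * t * (num_theta / len(combos)) ** (1 / P),   # f*^{-1} of the mean
            (num_T / den) ** (1 / P),
            1 - (1 - num_I / den) ** (1 / P),
            1 - (1 - num_F / den) ** (1 / P))

# With p_1 = p_2 = 1 the operator reduces to WSVNLMSM, so the result should
# match r_1 of Case 1: approximately (3.7594, 0.6858, 0.477, 0.1592)
a1 = [(4, 0.6, 0.6, 0.1), (5, 0.6, 0.4, 0.3), (4, 0.8, 0.5, 0.1), (2, 0.8, 0.3, 0.1)]
r1 = wsvnlgmsm(a1, [0.25, 0.22, 0.35, 0.18], m=2, p=(1, 1), t=3)
```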

Property 7

(Reducibility) Let \(w = \left( {\frac{1}{n},\frac{1}{n}, \ldots ,\frac{1}{n}} \right)^{\text{T}}\). Then, \(WSVNLGMSM^{{(m,p_{1} , \ldots ,p_{m} )}} \left( {a_{1} , \ldots ,a_{n} } \right) = SVNLGMSM^{{(m,p_{1} , \ldots ,p_{m} )}} \left( {a_{1} , \ldots ,a_{n} } \right)\).

The proof of Property 7 is similar to Property 6; therefore, it is omitted here.

Definition 18

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs, and let \(w = \left( {w_{1} ,w_{2} , \ldots ,w_{n} } \right)^{\text{T}}\) be the weight vector, which satisfies \(\sum\nolimits_{i = 1}^{n} {w_{i} } = 1\) and \(w_{i} > 0\) \((i = 1,2, \ldots ,n)\); each \(w_{i}\) denotes the importance degree of \(a_{i}\). The weighted single-valued neutrosophic linguistic geometric Maclaurin symmetric mean operator \(WSVNLG_{eo} MSM\,:\,\Omega ^{n} \to\Omega\) is

$$\begin{aligned} & WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}} \left( {a_{1} , \ldots ,a_{n} } \right) \\ & \quad = \frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}\left( {\mathop \otimes \limits_{{1 \le i_{1} < \cdots < i_{m} \le n}} \left( {\left( {p_{1} \cdot a_{{i_{1} }} } \right)^{{nw_{{i_{1} }} }} \oplus \left( {p_{2} \cdot a_{{i_{2} }} } \right)^{{nw_{{i_{2} }} }} \oplus \cdots \oplus \left( {p_{m} \cdot a_{{i_{m} }} } \right)^{{nw_{{i_{m} }} }} } \right)} \right)^{{\frac{1}{{C_{n}^{m} }}}} , \\ \end{aligned}$$
(32)
where m = 1, 2, …, n and \(\Omega\) is the set including all SVNLNs.

Some desirable results can be obtained by the operations of SVNLNs as follows:

Theorem 6

Let \(a_{i} = \left\langle {s_{{\theta_{i} }} ,\left( {T_{i} ,I_{i} ,F_{i} } \right)} \right\rangle \left( {i = 1,2, \ldots ,n} \right)\) be a collection of SVNLNs. In addition, m = 1, 2, …, n and \(p_{1} ,p_{2} , \ldots ,p_{m} \ge 0\). Then, the value aggregated by the WSVNLG eo MSM operator is still a SVNLN, and

$$\begin{aligned} WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}} \left( {a_{1} , \ldots ,a_{n} } \right) = & \left\langle {f^{ * - 1} \left( {\frac{1}{{p_{1} + p_{2} + \cdots + p_{m} }}\left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {\sum\limits_{j = 1}^{m} {\left( {p_{j} \cdot f^{ * } \left( {s_{{\theta_{{i_{j}^{(k)} }} }} } \right)} \right)^{{nw_{{i_{j} }} }} } } \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} } \right),\left( {\left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\frac{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} \cdot \left( {T_{{i_{j}^{(k)} }} } \right)^{{nw_{{i_{j} }} }} } }}{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} } }}} } \right)^{{\frac{1}{{C_{n}^{m} }}}} ,} \right.} \right. \\ & \left. {\left. {1 - \left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {1 - \frac{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} \cdot } \left( {1 - \left( {1 - I_{{i_{j}^{(k)} }} } \right)^{{nw_{{i_{j} }} }} } \right)}}{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} } }}} \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} ,1 - \left( {\prod\limits_{k = 1}^{{C_{n}^{m} }} {\left( {1 - \frac{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} } \cdot \left( {1 - \left( {1 - F_{{i_{j}^{(k)} }} } \right)^{{nw_{{i_{j} }} }} } \right)}}{{\sum\nolimits_{j = 1}^{m} {V_{ \bullet }^{(k)} } }}} \right)} } \right)^{{\frac{1}{{C_{n}^{m} }}}} } \right)} \right\rangle , \\ \end{aligned}$$
(33)
where \(k = 1,2, \ldots ,C_{n}^{m}\), and \(a_{{i_{j}^{\left( k \right)} }}\) denotes the \(i_{j}\)th element in the kth combination.

This result can be derived directly from the operational laws of SVNLNs. The proof of Theorem 6 is similar to that of Theorem 1 and is therefore omitted here.

Property 8

(Reducibility) Let \(w = \left( {\frac{1}{n},\frac{1}{n}, \ldots ,\frac{1}{n}} \right)^{\text{T}}\). Then, \(WSVNLG_{eo} MSM^{{(m,p_{1} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ) = SVNLG_{eo} MSM^{{(m,p_{1} , \ldots ,p_{m} )}} (a_{1} , \ldots ,a_{n} ).\)

The proof of Property 8 is similar to Property 6; therefore, it is omitted here.

4 MCDM approach based on SVNLNs

In this section, we apply the proposed SVNLMSM operators to cope with a MCDM issue. Consider a MCDM evaluation in the form of SVNLNs: let O = {o 1, o 2, …, o m } be a discrete set of alternatives, and let C = {c 1, c 2, …, c n } be the set of n criteria, whose weight vector w = (w 1, w 2, …, w n )T satisfies \(\sum\nolimits_{j = 1}^{n} {w_{j} } = 1\) and w j  ≥ 0 for any j = 1, 2, …, n; each w j denotes the importance degree of criterion c j . The performance of alternative o i under criterion c j is measured by a SVNLN \(a_{ij} = \left\langle {s_{{\theta_{ij} }} ,\left( {T_{ij} ,I_{ij} ,F_{ij} } \right)} \right\rangle\), and these values are collected in the decision matrix A = (a ij ) m×n . The main procedures are as follows:

Step 1 Normalize the decision matrix.

Generally, there exist two types of criteria: benefit criteria (the bigger the better) and cost criteria (the smaller the better). In order to maintain consistency in the criteria values, the indispensable first step is to transform the decision matrix A = (a ij ) m×n into a normalized one, denoted as R = (r ij ) m×n . For convenience, the performance of alternative o i with respect to criterion c j in the normalized decision matrix is still denoted as \(\left\langle {s_{{\theta_{ij} }} ,\left( {T_{ij} ,I_{ij} ,F_{ij} } \right)} \right\rangle\). In this paper, we use the negation operator in Definition 7 for this transformation.

Step 2 Aggregate and obtain the overall assessment value for each alternative.

Utilize Eq. (29), (31) or (33) as proposed in this paper to aggregate the values r ij (j = 1, 2, …, n) of the ith row and obtain the overall preference value \(r_{i}\) corresponding to alternative o i .

Step 3 Calculate the score values, accuracy values and certainty values of r i (i = 1, 2, …, m).

Step 4 Rank all the alternatives.

Use the comparison method described in Definition 9 to rank all alternatives in accordance with \(S(r_{i} )\), \(A(r_{i} )\) and \(C(r_{i} )\) \((i = 1,2, \ldots ,m)\).

5 Example

5.1 Data source and background

In this subsection, we refer to the illustrative example from Tian [48] in order to compare methods and analyze the feasibility of the approach proposed in this paper. We first briefly introduce the background of the MCDM problem.

A large state-owned company, ABC Nonferrous Metals Co. Ltd, wants to invest in global minerals in order to expand its main business. After thorough investigation, a panel takes into account five possible countries (alternatives): a 1, a 2, a 3, a 4 and a 5. The linguistic terms employed are as follows:
$$S = \left\{ {s_{0} = very\;poor, s_{1} = poor, s_{2} = slightly\;poor, s_{3} = fair, s_{4} = slightly\;good, s_{5} = good, s_{6} = very\;good} \right\}$$
Executive managers and several experts in the field hold a thorough discussion to reach a consensus on several factors, and they ultimately provide the assessment information in the form of SVNLNs, as shown in Table 1. The weight vector of the factors is w = (0.25, 0.22, 0.35, 0.18)T. The criteria are as follows: c 1, resources (such as the suitability of the minerals and their exploration); c 2, politics and policy (such as corruption and political risks); c 3, economy (such as development vitality and stability); and c 4, infrastructure (such as railway and highway facilities).
Table 1

Evaluation values

|     | c 1 | c 2 | c 3 | c 4 |
|-----|-----|-----|-----|-----|
| a 1 | 〈s 4, (0.6, 0.6, 0.1)〉 | 〈s 5, (0.6, 0.4, 0.3)〉 | 〈s 4, (0.8, 0.5, 0.1)〉 | 〈s 2, (0.8, 0.3, 0.1)〉 |
| a 2 | 〈s 2, (0.7, 0.5, 0.1)〉 | 〈s 4, (0.6, 0.4, 0.2)〉 | 〈s 3, (0.6, 0.2, 0.4)〉 | 〈s 4, (0.7, 0.4, 0.3)〉 |
| a 3 | 〈s 3, (0.5, 0.1, 0.2)〉 | 〈s 4, (0.6, 0.5, 0.3)〉 | 〈s 6, (0.7, 0.6, 0.1)〉 | 〈s 2, (0.5, 0.5, 0.2)〉 |
| a 4 | 〈s 2, (0.4, 0.5, 0.3)〉 | 〈s 3, (0.5, 0.3, 0.4)〉 | 〈s 4, (0.6, 0.8, 0.2)〉 | 〈s 5, (0.9, 0.3, 0.1)〉 |
| a 5 | 〈s 5, (0.6, 0.4, 0.4)〉 | 〈s 5, (0.8, 0.3, 0.1)〉 | 〈s 3, (0.7, 0.5, 0.1)〉 | 〈s 4, (0.6, 0.5, 0.2)〉 |

In general, we use m = [n/2] for computation in practical problems, where the symbol [·] denotes the rounding function and n is the number of attributes. This choice is intuitive and simple; furthermore, in this case, the risk preferences of the decision makers (DMs) are neutral, and the interrelationships among the individual arguments can be fully taken into account [41].
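As a small illustration of this index structure, the following sketch enumerates the \(C_{n}^{m}\) index combinations \(1 \le i_{1} < i_{2} \le n\) over which the MSM-type operators aggregate; for the n = 4 criteria of Table 1, the choice m = [n/2] = 2 (here n is even, so integer division matches the rounding) yields six pairs.

```python
from itertools import combinations

n = 4                      # number of criteria in Table 1
m = n // 2                 # m = [n/2]; for even n this equals the rounded value
pairs = list(combinations(range(1, n + 1), m))
print(len(pairs), pairs)   # C(4, 2) = 6 combinations, starting with (1, 2)
```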

5.2 Cases using the proposed aggregation operators based on \(MSM^{\left( m \right)}\)

In accordance with the procedure described in Sect. 4, we can identify the optimal choice from among the five alternatives.

Case 1

Procedure using the WSVNLMSM (m) operator.

When \(f^{ *} \left( { s_{i} } \right) = \theta_{i} = \frac{i}{2t} = \frac{i}{6}\;\left( {i = 0,1, \ldots ,6} \right)\) and \(m = 2\), the procedure to solve the MCDM problem is as follows:

Step 1 Normalize the decision matrix.

In this case, it is clear that all criteria are of the benefit type; therefore, there is no need to normalize the decision matrix.

Step 2 Aggregate and obtain the overall assessment value for each alternative.

Use \(WSVNLMSM^{{\left( \varvec{m} \right)}}\) to calculate the comprehensive value for each alternative a i , denoted by r i .
$$\begin{array}{*{20}l} {r_{1} = \left\langle {s_{3.7594} ,\left( {0.6858,0.4770,0.1592} \right)} \right\rangle ,} \hfill & {r_{2} = \left\langle {s_{3.1150} ,\left( {0.6416,0.3613,0.2693} \right)} \right\rangle ,} \hfill \\ {r_{3} = \left\langle {s_{3.8038} ,\left( {0.5999,0.4594,0.1897} \right)} \right\rangle ,} \hfill & {r_{4} = \left\langle {s_{3.3697} ,\left( {0.6170,0.5302,0.2360} \right)} \right\rangle ,} \hfill \\ {r_{5} = \left\langle {s_{4.0957} ,\left( {0.6766,0.4206,0.2092} \right)} \right\rangle .} \hfill & {} \hfill \\ \end{array}$$
Step 3 Calculate the score values, accuracy values and certainty values of r i (i = 1, 2, …, 5).
$$\begin{array}{*{20}l} {S\left( {r_{1} } \right) = 1.2842,} \hfill & {A\left( {r_{1} } \right) = 0.1308,} \hfill & {C\left( {r_{1} } \right) = 0.4297,} \hfill \\ {S\left( {r_{2} } \right) = 1.0441,} \hfill & {A\left( {r_{2} } \right) = 0.1456,} \hfill & {C\left( {r_{2} } \right) = 0.3331,} \hfill \\ {S\left( {r_{3} } \right) = 1.2367,} \hfill & {A\left( {r_{3} } \right) = 0.0890,} \hfill & {C\left( {r_{3} } \right) = 0.3803,} \hfill \\ {S\left( {r_{4} } \right) = 1.0394,} \hfill & {A\left( {r_{4} } \right) = 0.0487,} \hfill & {C\left( {r_{4} } \right) = 0.3465,} \hfill \\ {S\left( {r_{5} } \right) = 1.3971,} \hfill & {A\left( {r_{5} } \right) = 0.1747,} \hfill & {C\left( {r_{5} } \right) = 0.4619.} \hfill \\ \end{array}$$

Step 4 Rank all the alternatives.

According to the comparison in Definition 9, we can rank the alternatives as follows: a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4.

The best one is a 5.
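Definition 9 itself is not reproduced in this excerpt; the Step 3 values above are consistent with score, accuracy and certainty functions of the form \(S(a) = f^{*}(s_{\theta})(2 + T - I - F)\), \(A(a) = f^{*}(s_{\theta})(T - I)\) and \(C(a) = f^{*}(s_{\theta})T\), which the hedged sketch below assumes when ranking the aggregated values of Case 1 lexicographically.

```python
def svnln_scores(theta, T, I, F, t=3):
    """Assumed score/accuracy/certainty functions consistent with Step 3 above."""
    g = theta / (2 * t)                       # f*(s_theta) = theta / (2t)
    return g * (2 + T - I - F), g * (T - I), g * T

# Aggregated values r_1, ..., r_5 from Case 1 (m = 2)
r = {"a1": (3.7594, 0.6858, 0.4770, 0.1592),
     "a2": (3.1150, 0.6416, 0.3613, 0.2693),
     "a3": (3.8038, 0.5999, 0.4594, 0.1897),
     "a4": (3.3697, 0.6170, 0.5302, 0.2360),
     "a5": (4.0957, 0.6766, 0.4206, 0.2092)}

# Compare by (S, A, C) lexicographically, best first
ranking = sorted(r, key=lambda k: svnln_scores(*r[k]), reverse=True)
print(ranking)  # ['a5', 'a1', 'a3', 'a2', 'a4']
```

The resulting order reproduces the ranking a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 reported above.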

When \(f^{*} \left( { s_{i} } \right) = \theta_{i} = \frac{i}{2t},m = 3\), the comprehensive values for the five alternatives are derived as follows:
$$\begin{array}{*{20}l} {r_{1} = \left\langle {s_{3.6320} ,\left( {0.6854,0.4730,0.1593} \right)} \right\rangle ,} \hfill & {r_{2} = \left\langle {s_{3.0777} ,\left( {0.6447,0.3725,0.2643} \right)} \right\rangle ,} \hfill \\ {r_{3} = \left\langle {s_{3.5751} ,\left( {0.5829,0.4514,0.1991} \right)} \right\rangle ,} \hfill & {r_{4} = \left\langle {s_{3.2875} ,\left( {0.5957,0.5299,0.2480} \right)} \right\rangle ,} \hfill \\ {r_{5} = \left\langle {s_{4.0689} ,\left( {0.6738,0.4255,0.2099} \right)} \right\rangle .} \hfill & {} \hfill \\ \end{array}$$
The score values, accuracy values and certainty values of r i (i = 1, 2, …, 5) are computed as follows:
$$\begin{array}{*{20}l} {S\left( {r_{1} } \right) = 1.2428,} \hfill & {A\left( {r_{1} } \right) = 0.1286,} \hfill & {C\left( {r_{1} } \right) = 0.4149,} \hfill \\ {S\left( {r_{2} } \right) = 1.0300,} \hfill & {A\left( {r_{2} } \right) = 0.1397,} \hfill & {C\left( {r_{2} } \right) = 0.3307,} \hfill \\ {S\left( {r_{3} } \right) = 1.1514,} \hfill & {A\left( {r_{3} } \right) = 0.0784,} \hfill & {C\left( {r_{3} } \right) = 0.3473,} \hfill \\ {S\left( {r_{4} } \right) = 0.9660,} \hfill & {A\left( {r_{4} } \right) = 0.0361,} \hfill & {C\left( {r_{4} } \right) = 0.3264,} \hfill \\ {S\left( {r_{5} } \right) = 1.3823,} \hfill & {A\left( {r_{5} } \right) = 0.1684,} \hfill & {C\left( {r_{5} } \right) = 0.4569.} \hfill \\ \end{array}$$

According to the comparison in Definition 9, we can obtain the ranking a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4, which is identical to the previous result.

Case 2

Procedure using the \(WSVNLGMSM^{{\left( {\varvec{m},\varvec{p}_{1} ,\varvec{p}_{2} , \ldots ,\varvec{p}_{\varvec{m}} } \right)}}\) operator.

When \(f^{ *} \left( { s_{i} } \right) = \theta_{i} = \frac{i}{2t} = \frac{i}{6},m = 2,p_{1} = 1,p_{2} = 2\), the procedure to solve the MCDM problem is as follows:

Step 1 Normalize the decision matrix.

All criteria are of the benefit type; therefore, there is no need to normalize the decision matrix.

Step 2 Aggregate and obtain the overall assessment value for each alternative.

Use the \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} } \right)}}\) operator to calculate the comprehensive value for each alternative a i , denoted by \(r_{i} \left( {i = 1,2,3,4,5} \right)\).
$$\begin{array}{*{20}l} {r_{1} = \langle s_{3.9504} , \left( {0.6939, 0.4634, 0.1675} \right)\rangle ,} \hfill & {r_{2} = \langle s_{3.2331} , \left( {0.6538, 0.3655, 0.2742} \right)\rangle ,} \hfill \\ {r_{3} = \langle s_{4.4484} , \left( {0.6240, 0.4963, 0.1751} \right)\rangle , } \hfill & {r_{4} = \langle s_{3.6517} , \left( {0.6988, 0.4485, 0.1975} \right)\rangle ,} \hfill \\ {r_{5} = \langle s_{3.9700} , \left( {0.6771, 0.4180, 0.1960} \right)\rangle .} \hfill & {} \hfill \\ \end{array}$$
Step 3 Calculate the score values, accuracy values and certainty values of r i (i = 1, 2, …, 5).
$$\begin{array}{*{20}l} {S\left( {r_{1} } \right) = 1.3583,} \hfill & {A\left( {r_{1} } \right) = 0.1518,} \hfill & {C\left( {r_{1} } \right) = 0.4569,} \hfill \\ {S\left( {r_{2} } \right) = 1.0853,} \hfill & {A\left( {r_{2} } \right) = 0.1554,} \hfill & {C\left( {r_{2} } \right) = 0.3523,} \hfill \\ {S\left( {r_{3} } \right) = 1.4476,} \hfill & {A\left( {r_{3} } \right) = 0.0947,} \hfill & {C\left( {r_{3} } \right) = 0.4627,} \hfill \\ {S\left( {r_{4} } \right) = 1.2493,} \hfill & {A\left( {r_{4} } \right) = 0.1523,} \hfill & {C\left( {r_{4} } \right) = 0.4253,} \hfill \\ {S\left( {r_{5} } \right) = 1.3651,} \hfill & {A\left( {r_{5} } \right) = 0.1714,} \hfill & {C\left( {r_{5} } \right) = 0.4480.} \hfill \\ \end{array}$$

Step 4 Rank all the alternatives.

According to the comparison in Definition 9, we can obtain the ranking a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2; again, the best option is a 5.

Case 3

Procedure using the \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator.

When \(f^{ *} \left( { s_{i} } \right) = \theta_{i} = \frac{i}{2t} = \frac{i}{6},m = 2,p_{1} = 1,p_{2} = 2\), the procedure to solve the MCDM is as follows:

Step 1 Normalize the decision matrix.

As stated above, all criteria are of the benefit type; therefore, there is no need to normalize the decision matrix.

Step 2 Aggregate and obtain the overall assessment value for each alternative.

Use the WSVNLG eo MSM (m) operator to calculate the comprehensive value for each alternative a i , denoted by r i (i = 1, 2, 3, 4, 5).
$$\begin{array}{*{20}l} {r_{1} = \langle s_{3.5940} , \left( {0.7144, 0.4448, 0.1497} \right)\rangle ,} \hfill & {r_{2} = \langle s_{3.2331} , \left( {0.6566, 0.3354, 0.2710} \right)\rangle ,} \hfill \\ {r_{3} = \left\langle {s_{3.9467} ,\left( {0.5995,0.4988,0.1813} \right)} \right\rangle ,} \hfill & {r_{4} = \left\langle {s_{3.6234} ,\left( {0.6419,0.5103,0.2213} \right)} \right\rangle ,} \hfill \\ {r_{5} = \langle s_{3.9159} , \left( {0.6891, 0.4093, 0.1836} \right)\rangle .} \hfill & {} \hfill \\ \end{array}$$
Step 3 Calculate the score values, accuracy values and certainty values of r i (i = 1, 2, …, 5).
$$\begin{array}{*{20}l} {S\left( {r_{1} } \right) = 1.2698,} \hfill & {A\left( {r_{1} } \right) = 0.1615,} \hfill & {C\left( {r_{1} } \right) = 0.4297,} \hfill \\ {S\left( {r_{2} } \right) = 1.1048,} \hfill & {A\left( {r_{2} } \right) = 0.1731,} \hfill & {C\left( {r_{2} } \right) = 0.3538,} \hfill \\ {S\left( {r_{3} } \right) = 1.2626,} \hfill & {A\left( {r_{3} } \right) = 0.0663,} \hfill & {C\left( {r_{3} } \right) = 0.3944,} \hfill \\ {S\left( {r_{4} } \right) = 1.1536,} \hfill & {A\left( {r_{4} } \right) = 0.0795,} \hfill & {C\left( {r_{4} } \right) = 0.3877,} \hfill \\ {S\left( {r_{5} } \right) = 1.3681,} \hfill & {A\left( {r_{5} } \right) = 0.1827,} \hfill & {C\left( {r_{5} } \right) = 0.4498.} \hfill \\ \end{array}$$

Step 4 Rank all the alternatives.

According to the comparison in Definition 9, we can obtain the ranking a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2; the best option is a 5.

Using the parameters m = 2, p 1 = 1 and p 2 = 2, the proposed approaches based on the WSVNLMSM (m), \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) and \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operators result in almost identical rankings, as shown in Table 2. The top three alternatives, arranged from best to poorest, are \(a_{5}\), \(a_{1}\) and \(a_{3}\) under all three MCDM techniques; however, the least suitable option may be either a 2 or a 4. In fact, the WSVNLMSM (m) operator is the special case of the \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operator in which p j  = 1 (j = 1, 2, …, m). This demonstrates that the values of the parameters p j can change the ranking results when the individual data are aggregated.
Table 2

Aggregation results

| Proposed operator | m | p 1 | p 2 | Ranking |
|---|---|---|---|---|
| WSVNLMSM ( m) | 2 | – | – | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) | 2 | 1 | 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) | 2 | 1 | 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |

5.3 Comparative analysis and discussion

Based on the experiment in Sect. 5.2, we first compare the proposed methods with the method of Tian et al. [48]. This comparison is displayed in Table 3; the methods produce exactly the same ranking results. The results in Tables 2 and 3 suggest that the MCDM methods proposed in this paper are feasible and effective.
Table 3

Comparison of different methods

| Method | Ranking |
|---|---|
| Approach based on \(SNLNWB_{w}^{p,q}\) from Tian et al. [48] (p = 1, q = 2) | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| Proposed approach based on \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) (p 1 = 1, p 2 = 2) | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| Proposed approach based on \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) (p 1 = 1, p 2 = 2) | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |

The advantages the proposed method offers in solving MCDM problems are summarized as follows:

  1. SVNLSs not only contain linguistic terms, but also provide degrees of truth, indeterminacy and falsity related to those linguistic terms, which expresses evaluation information more flexibly and explicitly. Moreover, SVNLNs maintain the completeness of the initial data in terms of these three aspects while addressing the MCDM problem. In addition, utilizing linguistic scale functions compensates for differences in semantics.

  2. Obtaining the same ranking results implies that the proposed method is reasonable and valid for decision-making problems involving single-valued neutrosophic linguistic information. Compared with the \(SNLNWB_{w}^{p,q}\) method proposed by Tian et al. [48], the \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) and \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) operators presented in this paper take more generalized forms and have more flexible parameters, which facilitates selecting the optimal alternative.

Next, we conduct a comparative analysis to discuss the influence of parameters on ranking results.

The data in Tables 4 and 5 indicate that, once the style of aggregation operator is determined, the optimal alternative is a 5 and the worst is \(a_{2} \;{\text{or}}\;a_{4}\). In regard to Table 4, if the parameter p 1 is assigned the value 0, the ranking results are inconsistent with the actual situation; a similar situation arises in Table 5. Therefore, in practical applications, p 1 should take a positive value. Tables 4 and 5 also indicate that if one of the parameters far exceeds the others, the ranking order can be disrupted, to the point that the best option changes from a 5 to \(a_{3} \;{\text{or}}\;a_{4}\). This result supports the idea that it is more appropriate to assign the parameter values as equally as possible. Furthermore, alternatives \(a_{3} \;{\text{and}}\;a_{4}\) are easily affected when one of the parameters changes.
Table 4

Ranking results when m = 2

| p 1 | p 2 | Ranking by \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) | Ranking by \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) |
|---|---|---|---|
| 1 | 0 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 0 | 1 | a 4 ≻ a 3 ≻ a 5 ≻ a 1 ≻ a 2 | a 4 ≻ a 5 ≻ a 1 ≻ a 3 ≻ a 2 |
| 1 | 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| 1 | 3 | a 3 ≻ a 4 ≻ a 5 ≻ a 1 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |
| 1 | 4 | a 3 ≻ a 4 ≻ a 5 ≻ a 1 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |
| 1 | 5 | a 3 ≻ a 4 ≻ a 5 ≻ a 1 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |
| 2 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 3 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 4 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 5 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 0.5 | 0.5 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 1 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 2 | 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| 3 | 3 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |
| 4 | 4 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |
| 5 | 5 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 | a 5 ≻ a 3 ≻ a 1 ≻ a 4 ≻ a 2 |

Table 5

Ranking results when m = 3

| p 1 | p 2 | p 3 | Ranking by \(WSVNLGMSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) | Ranking by \(WSVNLG_{eo} MSM^{{\left( {m,p_{1} ,p_{2} , \ldots ,p_{m} } \right)}}\) |
|---|---|---|---|---|
| 0.5 | 0.5 | 0.5 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 1 | 0 | 0 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 0 | 1 | 0 | a 3 ≻ a 1 ≻ a 5 ≻ a 2 ≻ a 4 | a 3 ≻ a 1 ≻ a 5 ≻ a 2 ≻ a 4 |
| 0 | 0 | 1 | a 4 ≻ a 2 ≻ a 5 ≻ a 3 ≻ a 1 | a 4 ≻ a 2 ≻ a 5 ≻ a 1 ≻ a 3 |
| 1 | 1 | 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 1 | 1 | 3 | a 5 ≻ a 4 ≻ a 3 ≻ a 1 ≻ a 2 | a 5 ≻ a 4 ≻ a 1 ≻ a 3 ≻ a 2 |
| 1 | 1 | 4 | a 4 ≻ a 3 ≻ a 5 ≻ a 1 ≻ a 2 | a 4 ≻ a 5 ≻ a 1 ≻ a 3 ≻ a 2 |
| 1 | 1 | 5 | a 4 ≻ a 3 ≻ a 1 ≻ a 5 ≻ a 2 | a 4 ≻ a 5 ≻ a 1 ≻ a 3 ≻ a 2 |
| 1 | 2 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 3 ≻ a 1 ≻ a 2 ≻ a 4 |
| 1 | 3 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 3 ≻ a 5 ≻ a 1 ≻ a 4 ≻ a 2 |
| 1 | 4 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 3 ≻ a 1 ≻ a 5 ≻ a 2 ≻ a 4 |
| 1 | 5 | 1 | a 5 ≻ a 3 ≻ a 1 ≻ a 2 ≻ a 4 | a 3 ≻ a 1 ≻ a 5 ≻ a 2 ≻ a 4 |
| 2 | 1 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 3 | 1 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 4 ≻ a 2 |
| 4 | 1 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |
| 5 | 1 | 1 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 | a 5 ≻ a 1 ≻ a 3 ≻ a 2 ≻ a 4 |

Comparing the ranking results reveals that the parameter values have the greatest influence on the ranking results. Therefore, determining the parameters is a vital part of solving MCDM problems. In general, based on actual demand, the parameter values should be set to 1 or 1/2; this simplifies the calculation while still capturing the interrelationships among criteria.

6 Conclusion

Traditional MSM operators have been widely used in data fusion because of their ability to capture the interrelationships among multiple input arguments, which stems, from a theoretical perspective, from the multiplication between \(a_{{i_{j} }}^{{p_{j} }}\) and \(a_{{i_{k} }}^{{p_{k} }} (k \ne j)\) in the defining equation. Based on this, we generalized the MSM operator from both arithmetic and geometric points of view. On the other hand, the prominent advantages of SVNLNs are that they not only contain linguistic information, but also explicitly define the truth, falsity and indeterminacy of the linguistic term, contributing to flexibility in application. Motivated by these factors, and considering the differing importance of input arguments, we proposed the WSVNLMSM, WSVNLGMSM and WSVNL \({\text{G}}_{\text{eo}}\) MSM operators, discussed several special cases in which the parameters take different values, and established several desirable properties. Then, we successfully applied the proposed methods to a practical MCDM problem, verifying the feasibility of the proposed approaches.

Based on the above comparative analysis and discussion, we further confirmed the validity of the proposed techniques by showing that the same ranking results are obtained on the same data and background. Although three linguistic scale functions were discussed in this paper, we employed only one of them.

It is well known that in classical fuzzy sets, intuitionistic fuzzy sets and classical probability, values are theoretically not allowed outside of the interval [0, 1]. However, the real world contains numerous examples and applications of over-/under-/off-neutrosophic components, such as membership degrees greater than 1 or less than 0. Therefore, with respect to possible future research, we will not only discuss the influence of different linguistic scale functions on ranking results, but also extend our research from MCDM in neutrosophic sets to neutrosophic oversets/undersets/offsets.


Acknowledgements

The authors would like to thank the editors and anonymous reviewers for their helpful comments that improved the paper. This work was supported by the National Natural Science Foundation of China (No. 71571193).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest regarding the publication of this paper.

References

  1. Smarandache F (1995) Neutrosophic logic and set, mss. http://fs.gallup.unm.edu/neutrosophy.htm
  2. Guo Y-H, Sengur A (2012) A novel color image segmentation approach based on neutrosophic set and modified fuzzy c-means. Circuits Syst Signal Process 32(4):1699–1723
  3. Khoshnevisan M, Bhattacharya S (2003) Neutrosophic information fusion applied to financial market. In: Proceedings of the sixth international conference of information fusion, Cairns, Australia, pp 1252–1257
  4. Rivieccio U (2008) Neutrosophic logics: prospects and problems. Fuzzy Sets Syst 159(14):1860–1868
  5. Salama AA, Alblowi SA (2012) Neutrosophic set and neutrosophic topological spaces. J Math 3(4):31–35
  6. Bausys R, Zavadskas E, Kaklauskas A (2015) Application of neutrosophic set to multicriteria decision making by COPRAS. Econ Comput Econ Cybern Stud Res 49(1):91–106
  7. Ye J (2014) A multicriteria decision-making method using aggregation operators for simplified neutrosophic sets. J Intell Fuzzy Syst 26(5):2459–2466
  8. Tian Z-P, Wang J, Wang J-Q, Zhang H-Y (2016) An improved MULTIMOORA approach for multi-criteria decision-making based on interdependent inputs of simplified neutrosophic linguistic information. Neural Comput Appl. doi:10.1007/s00521-016-2378-5
  9. Tian Z-P, Wang J, Wang J-Q, Zhang H-Y (2016) Simplified neutrosophic linguistic multi-criteria group decision-making approach to green product development. Group Decis Negot. doi:10.1007/s10726-016-9479-5
  10. Smarandache F (1998) A unifying field in logics: neutrosophic logic. Neutrosophy, neutrosophic set, neutrosophic probability and statistics. American Research Press, Rehoboth
  11. Smarandache F, Wang H-B, Zhang Y-Q, Sunderraman R (2005) Interval neutrosophic sets and logic: theory and applications in computing. Hexis, Phoenix
  12. Deli I, Şubaş Y (2016) A ranking method of single valued neutrosophic numbers and its applications to multi-attribute decision making problems. Int J Mach Learn Cybern. doi:10.1007/s13042-016-0505-3
  13. Biswas P, Pramanik S, Giri B (2016) TOPSIS method for multi-attribute group decision-making under single-valued neutrosophic environment. Neural Comput Appl 27(3):727–737
  14. Bausys R, Zavadskas E (2015) Multi criteria decision making approach by VIKOR under interval neutrosophic set environment. Econ Comput Econ Cybern Stud Res 49(4):33–48
  15. Broumi S, Ye J, Smarandache F (2015) An extended TOPSIS method for multiple attribute decision making based on interval neutrosophic uncertain linguistic variables. Neutrosophic Sets Syst 8:22–31
  16. Wang J-Q, Li X-E (2015) TODIM method with multi-valued neutrosophic sets. Control Decis 30(6):1139–1142
  17. Ji P, Zhang H-Y, Wang J-Q (2016) A projection-based TODIM method under multi-valued neutrosophic environments and its application in personnel selection. Neural Comput Appl. doi:10.1007/s00521-016-2436-z
  18. Peng J-J, Wang J-Q, Wu X-H (2016) An extension of the ELECTRE approach with multi-valued neutrosophic information. Neural Comput Appl. doi:10.1007/s00521-016-2411-8
  19. Ye J (2014) Multiple-attribute decision-making method under a single-valued neutrosophic hesitant fuzzy environment. J Intell Syst. doi:10.1515/jisys-2014-0001
  20. Zadeh LA (1975) The concept of a linguistic variable and its application to approximate reasoning. Inf Sci 8(3):199–249
  21. Yu S-M, Wang J, Wang J-Q (2016) An extended TODIM approach with intuitionistic linguistic numbers. Int Trans Oper Res. doi:10.1111/itor.12363
  22. Wang J, Wang J-Q, Zhang H-Y (2016) A likelihood-based TODIM approach based on multi-hesitant fuzzy linguistic information for evaluation in logistics outsourcing. Comput Ind Eng 99:287–299
  23. Moharrer M, Tahayori H, Livi L (2015) Interval type-2 fuzzy sets to model linguistic label perception in online services satisfaction. Soft Comput 19(1):237–250
  24. Ye J (2015) An extended TOPSIS method for multiple attribute group decision making based on single valued neutrosophic linguistic numbers. J Intell Fuzzy Syst 28(1):247–255
  25. Ye J (2014) Some aggregation operators of interval neutrosophic linguistic numbers for multiple attribute decision making. J Intell Fuzzy Syst 27(5):2231–2241
  26. 26.
    Ma Y-X, Wang J-Q, Wang J, Wu X-H (2016) An interval neutrosophic linguistic multi-criteria group decision-making method and its application in selecting medical treatment options. Neural Comput Appl. doi: 10.1007/s00521-016-2203-1 Google Scholar
  27. 27.
    Tian Z-P, Wang J, Zhang H-Y, Wang J-Q (2016) Multi-criteria decision-making based on generalized prioritized aggregation operators under simplified neutrosophic uncertain linguistic environment. Int J Mach Learn Cybern. doi: 10.1007/s13042-016-0552-9 Google Scholar
  28. 28.
    Liu P, Li Y, Antuchevičienė J (2016) Multi-criteria decision-making method based on intuitionistic trapezoidal fuzzy prioritised owa operator. Technol Econ Dev Econ 22(3):453–469CrossRefGoogle Scholar
  29. 29.
    Liang R-X, Wang J-Q, Li L (2016) Multi-criteria group decision making method based on interdependent inputs of single valued trapezoidal neutrosophic information. Neural Comput Appl. doi: 10.1007/s00521-016-2672-2 Google Scholar
  30. 30.
    Ji P, Wang J-Q, Zhang H-Y (2016) Frank prioritized Bonferroni mean operator with single-valued neutrosophic sets and its application in selecting third party logistics. Neural Comput Appl. doi: 10.1007/s00521-016-2660-6 Google Scholar
  31. 31.
    Liu P-D, Shi L-L (2015) Some neutrosophic uncertain linguistic number Heronian mean operators and their application to multi-attribute group decision making. Neural Comput Appl. doi: 10.1007/s00521-015-2122-6 Google Scholar
  32. 32.
    Maclaurin C (1729) A second letter to Martin Folkes, Esq.: concerning the roots of equations, with the demonstration of other rules of algebra. Philos Trans R Soc Lond Ser A 36(1729):59–96Google Scholar
  33. 33.
    Detemple D, Robertson J (1979) On generalized symmetric means of two variables. Angew Chem 47(25):4638–4660zbMATHGoogle Scholar
  34. 34.
    Aydoğdu A (2015) On similarity and entropy of single valued neutrosophic sets. Gen Math Notes 29(1):67–74Google Scholar
  35. 35.
    Broumi S, Smarandache F, Talea M, Bakali A (2016) An introduction to bipolar single valued neutrosophic graph theory. Appl Mech Mater 841:184–191CrossRefGoogle Scholar
  36. 36.
    Ye J (2014) Single valued neutrosophic minimum spanning tree and its clustering method. J Intell Syst 23(3):311–324Google Scholar
  37. 37.
    Karaaslan F (2016) Correlation coefficients of single valued neutrosophic refined soft sets and their applications in clustering analysis. Neural Comput Appl. doi: 10.1007/s00521-016-2209-8 Google Scholar
  38. 38.
    Broumi S, Smarandache F (2014) Single valued neutrosophic trapezoid linguistic aggregation operators based multi-attribute decision making. Bull Pure Appl Sci Math Stat 33e(2):135–155CrossRefGoogle Scholar
  39. 39.
    Ju Y-B, Liu X-Y, Ju D-W (2015) Some new intuitionistic linguistic aggregation operators based on Maclaurin symmetric mean and their applications to multiple attribute group decision making. Soft Comput. doi: 10.1007/s00500-015-1761-y zbMATHGoogle Scholar
  40. Qin J-D, Liu X-W (2015) Approaches to uncertain linguistic multiple attribute decision making based on dual Maclaurin symmetric mean. J Intell Fuzzy Syst 29:171–186
  41. Qin J-D, Liu X-W, Pedrycz W (2015) Hesitant fuzzy Maclaurin symmetric mean operators and its application to multiple-attribute decision making. Int J Fuzzy Syst 17(4):509–520
  42. Wen J-J, Shi H-N (2000) Optimizing sharpening for Maclaurin inequality. J Chengdu Univ 19(3):1–8
  43. Pečarić J, Wen J, Wang W-L, Lu T (2005) A generalization of Maclaurin’s inequalities and its applications. Math Inequal Appl 8(4):583–598
  44. Krnić M, Pečarić J (2006) A Hilbert inequality and an Euler-Maclaurin summation formula. ANZIAM J 48(3):419–431
  45. Zhang X-M (2007) S-geometric convexity of a function involving Maclaurin’s elementary symmetric mean. J Inequal Pure Appl Math 8(2):156–165
  46. Herrera F, Martinez L (2000) An approach for combining numerical and linguistic information based on the 2-tuple fuzzy linguistic representation model in decision-making. Int J Uncertain Fuzziness Knowl Based Syst 8(5):539–562
  47. Xu Z-S (2006) A note on linguistic hybrid arithmetic averaging operator in multiple attribute group decision making with linguistic information. Group Decis Negot 15(6):593–604
  48. Tian Z-P, Wang J, Zhang H-Y, Chen X-H, Wang J-Q (2015) Simplified neutrosophic linguistic normalized weighted Bonferroni mean operator and its application to multi-criteria decision-making problems. Filomat. doi:10.2298/FIL1508576F
  49. Wang J-Q, Wu J-T, Wang J, Zhang H-Y, Chen X-H (2014) Interval-valued hesitant fuzzy linguistic sets and their applications in multi-criteria decision-making problems. Inf Sci 288(1):55–72
  50. Yu S-M, Zhou H, Chen X-H, Wang J-Q (2015) A multi-criteria decision-making method based on Heronian mean operators under a linguistic hesitant fuzzy environment. Asia Pac J Oper Res 32(5):1–35
  51. Wang H-B, Smarandache F, Zhang Y-Q, Sunderraman R (2010) Single valued neutrosophic sets. Multispace Multistruct 4:410–413

Copyright information

© The Natural Computing Applications Forum 2016

Authors and Affiliations

  1. School of Business, Central South University, Changsha, People’s Republic of China
  2. School of Business, Hunan University, Changsha, People’s Republic of China
