Introduction

In the last century, to capture the scepticism and unpredictability inherent in science, fuzzy set theory emerged as one of the most influential fields; it was first portrayed by Zadeh (Zadeh 1965) in 1965. Vagueness theory plays a prominent role in sorting out problems of engineering, architectural modelling, network science, decision-making procedures and many other realistic problems. Atanassov (Atanassov 1986) extended fuzzy set theory and introduced the substantial notion of intuitionistic fuzzy theory, with the intriguing interpretation of both membership and non-membership functions. In the recent past, researchers have unfolded the uncertainty area into new domains and directions such as triangular (Yen et al. 1999), trapezoidal (Abbasbandy and Hajjari 2009), pentagonal (Chakraborty et al. 2019a), hexagonal (Khan et al. 2020a) and heptagonal (Maity et al. 2020) numbers, each with specific refinements. Liu and Yuan (Liu and Yuan 2007) and Ye (Ye 2014) constructed the rudimentary conception of the triangular and trapezoidal intuitionistic fuzzy set, respectively. Moreover, researchers have originated numerous innovative methodologies to sketch these concepts analytically and have encouraged several improved versions of uncertain parameters.

In recent times, Smarandache (Smarandache 1998) conceived the notion of the neutrosophic set, containing three different components, namely (i) truthiness, (ii) indeterminacy and (iii) falsity. Subsequently, Wang et al. (Wang et al. 2010) established the notion of the single-valued neutrosophic set, which is a very pertinent tool for resolving complicated types of difficulties. Progressing with this research, Chakraborty et al. (Chakraborty et al. 2018, 2021) applied the vigorous notions of triangular and trapezoidal neutrosophic numbers to many real-life problems.

Bosc and Pivert (Bosc and Pivert 2013) germinated the concept of bipolarity in the neutrosophic arena for the human decision-making process. Lee (Lee 2000) clarified the viewpoint of bipolar fuzzy set theory in his research papers. Later on, Kang and Kang (Kang and Kang 2012) widened this perception to semigroup and group structures, whereas Deli et al. (Deli et al. 2015) put forth the constructive theory of the bipolar neutrosophic set and applied it to decision-making issues. Malik et al. (Malik et al. 2020) implemented bipolar single-valued neutrosophic graph-theoretic knowledge, and Quek et al. (Quek et al. 2022) designed penta-partitioned neutrosophic graph-theoretic ideas in COVID-19-related mathematical modelling. Successively, Chakraborty et al. (Chakraborty et al. 2019b) set forth the triangular bipolar number and its categorisation based on distinct logical points. Contemporarily, Wang et al. (Wang et al. 2018) also perceived the theory of operators in the bipolar neutrosophic domain and synchronised it with decision-making theory.

Recently, the multi-criteria decision-making (MCDM) problem has become one of the most highly recommended techniques in the decision science domain. This technique is especially appreciated when a group of criteria is evaluated by a group of decision makers. Such problems, framed as multi-criteria group decision-making (MCGDM), have exhibited an intense impact in the neutrosophic arena.

Nowadays, MCDM and MCGDM have ample scope of application in a multitude of spheres under various unpredictable circumstances. Further, numerous works (Garg et al. 2022; Deetae 2021; Chakraborty et al. 2022; Das et al. 2022; Haque et al. 2020) on MCDM in the neutrosophic environment are developed frequently and play an essential role in science and engineering.

Trung et al. (Trung and Thanh 2022) executed a fuzzy linguistic MCDM technique in digital marketing technology. Some aggregation operator-based MCGDM/MAGDM techniques have emerged in the recent era. Qin et al. (Qin et al. 2020) implemented the weighted Archimedean power partitioned Bonferroni aggregation operator in an MCDM problem. Garg (Garg 2021) illustrated a sine trigonometric operational law-based Pythagorean fuzzy aggregation operator to execute a decision-making problem. Qiyas et al. (Qiyas et al. 2022a) utilised fuzzy credibility Dombi aggregation operators to clarify a revised TOPSIS method. Khan et al. (Khan et al. 2020b) utilised generalised neutrosophic cubic aggregation operators in the decision-making process.

In the year 2015, Helen (Helen and Uma 2015) established the idea of the pentagonal fuzzy number, which was extended by Christi (Christi and Kasthuri 2016) into the pentagonal intuitionistic number to resolve the transportation problem. Recently, Chakraborty (Chakraborty et al. 2019c, d, 2020; Chakraborty 2020) manifested an ingenious conception of the pentagonal fuzzy number and its various distinct depictions in the transportation field, graph-theoretical problems, MCGDM and the networking arena.

Different techniques of decision-making have been adopted to enrich this field. Some notable techniques are TOPSIS (Garg 2020), MULTIMOORA (Garg and Rani 2022), AHP (Tas et al. 2022), DEMATEL (Karasan et al. 2022) and EDAS (Liao et al. 2022). In recent times, Adali et al. (Adalı and Tuş 2021; Adalı et al. 2022) resolved some notions of the multi-attribute decision-making process to tackle real-life hazards. Also, Deng (Julong 1989; Deng 2005) set forth grey relational analysis (GRA) to treat vagueness issues. Recently, Wang et al. (Wang et al. 2022) correlated grey-based decision-making theory to analyse solar PV power plant selection in Vietnam. Qiyas et al. (Qiyas et al. 2022b) executed an extended GRA technique in an MCDM model. Qi (Qi 2021) put forth a GRA-CRITIC mechanism for intuitionistic fuzzy MADM-based problems with an application to potentiality evaluation. Recently, Pramanik and Mallick (Pramanik and Mallick 2020) extended a GRA-based MADM strategy in the single-valued trapezoidal neutrosophic environment.

In this article, we chiefly shed light on the application of the PNN and its utility in an agriculture-based MCGDM problem. Additionally, we applied our established PNWAA and PNWGA operators (Chakraborty et al. 2020) in the PNN environment, solving the MCGDM problem with each operator separately. A refined GRA skill is developed along with the MEREC strategy, with an effective comparative scrutiny to demonstrate the pertinence of the ranking results. Lastly, a sensitivity analysis is performed, which gives an essential force to the research work. This novel idea will assist us in clearing up a plethora of everyday issues in the uncertainty area.

Motivation

The main goal of this research article is to endorse the GRA scheme, to encourage the grey system and to evaluate the best alternative under an imprecise dataset. During the survey study, the following questions arose in our minds about how to better control and effectively execute the decision-making problem, which motivated us to conduct the current research work.

  • How can we incorporate pentagonal neutrosophic imprecise data in realistic MCGDM model?

  • Which mathematical operator is appropriate to aggregate the underlying information?

  • Which technique will be useful to capture the grey knowledge associated with the problem?

  • Is there any need of another technique to analyse a comparative knowledge-based discussion to enrich our study?

  • Is our executed technique robust and stable?

Novelties

Nowadays, researchers have shown their attentiveness to evolving theories connected to the neutrosophic domain to promote its numerous applications in distinct branches of the neutrosophic arena. However, to legitimise all the standpoints regarding PNN theory, different conjectures and problems are yet to be created and solved. In this research paper, our supreme motto is to focus on some blurred topics in the PNN domain, which are listed as follows:

  1. Application of PNWAA and PNWGA operators to interpret the MCGDM method.

  2. Suggestion of a new distance measure (Hamming distance).

  3. Discussion of the idea of the GRA method.

  4. Execution of the GRA idea in the pentagonal neutrosophic domain to solve our proposed MCGDM problem.

  5. Strengthening of the GRA strategy by the MEREC technique with a comparative justification.

  6. Sensitivity analysis of the ranking outcomes.

Definitions of different sets and pentagonal neutrosophic number

Fuzzy Set: (Zadeh 1965) Let \(T\) be a universal set. A set \(\widetilde{T}=\left\{\left(\gamma ,{\alpha }_{\widetilde{T}}\left(\gamma \right)\right):\gamma \in T,\ {\alpha }_{\widetilde{T}}\left(\gamma \right)\in [0,1]\right\}\), customarily designated by the pair \(\left(\gamma ,{\alpha }_{\widetilde{T}}\left(\gamma \right)\right)\) with \(0\le {\alpha }_{\widetilde{T}}\left(\gamma \right)\le 1\), is called a fuzzy set.

Neutrosophic Set: (Smarandache 1998) A set \({\widetilde{T}}_{Neu}\) on the universe of discourse \(T\), whose generic element is denoted by \(\sigma\), is called a neutrosophic set if \({\widetilde{T}}_{Neu}=\left\{\langle \sigma ;\left[{\tau }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\pi }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\rho }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\right]\rangle : \sigma \in T\right\}\), where \({\tau }_{{\widetilde{T}}_{Neu}}\left(\sigma \right):T\to \left[0,1\right]\) stands for the degree of confidence, \({\pi }_{{\widetilde{T}}_{Neu}}\left(\sigma \right):T\to \left[0,1\right]\) stands for the degree of uncertainty, and \({\rho }_{{\widetilde{T}}_{Neu}}\left(\sigma \right):T\to [0,1]\) represents the degree of falseness in the decision-making course of action. The triplet \(\left[{\tau }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\pi }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\rho }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\right]\) satisfies the inequality \(0\le {\tau }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)+{\pi }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)+{\rho }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\le 3\).

Single-Valued Neutrosophic Set: (Wang et al. 2010) A neutrosophic set \({\widetilde{T}}_{Neu}\) as above is said to be a single-valued neutrosophic set \(\left({\widetilde{T}}_{SNeu}\right)\) if \(\sigma\) is a single-valued independent variable: \({\widetilde{T}}_{SNeu}=\left\{\langle \sigma ;\left[{\aleph }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\beth }_{{\widetilde{T}}_{Neu}}\left(\sigma \right),{\omega }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\right]\rangle : \sigma \in T\right\}\), where \({\aleph }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\), \({\beth }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\) and \({\omega }_{{\widetilde{T}}_{Neu}}\left(\sigma \right)\) signify the accuracy, indeterminacy and falsity membership functions, respectively. \(\widetilde{{T}_{NC}}\), a subset of \({\mathbb{R}}\), is designated as neut-convex if it satisfies the following norms:

$${\aleph }_{{\widetilde{T}}_{Neu}}\langle \varphi {s}_{1}+\left(1-\varphi \right){s}_{2}\rangle \ge \mathrm{min}\langle {\aleph }_{{\widetilde{T}}_{Neu}}\left({s}_{1}\right),{\aleph }_{{\widetilde{T}}_{Neu}}\left({s}_{2}\right)\rangle$$
$${\beth }_{{\widetilde{T}}_{Neu}}\langle \varphi {s}_{1}+\left(1-\varphi \right){s}_{2}\rangle \le \mathrm{max}\langle {\beth }_{{\widetilde{T}}_{Neu}}\left({s}_{1}\right),{\beth }_{{\widetilde{T}}_{Neu}}\left({s}_{2}\right)\rangle$$
$${\omega }_{{\widetilde{T}}_{Neu}}\langle \varphi {s}_{1}+\left(1-\varphi \right){s}_{2}\rangle \le \mathrm{max}\langle {\omega }_{{\widetilde{T}}_{Neu}}\left({s}_{1}\right),{\omega }_{{\widetilde{T}}_{Neu}}\left({s}_{2}\right)\rangle$$

where \({s}_{1}, {s}_{2}\in {\mathbb{R}}\) and \(\varphi \in [0,1]\).

Single-Valued Pentagonal Neutrosophic Number: (Chakraborty et al. 2020) A single-valued pentagonal neutrosophic number \(\left(\widetilde{{Pen}_{N}}\right)\) is defined as \(\widetilde{{Pen}_{N}}=\langle \left[\left({t}_{1},{t}_{2},{t}_{3},{t}_{4},{t}_{5}\right);\theta \right],\left[\left({t}_{1},{t}_{2},{t}_{3},{t}_{4},{t}_{5}\right);\vartheta \right],\left[\left({t}_{1},{t}_{2},{t}_{3},{t}_{4},{t}_{5}\right);\gamma \right]\rangle\), where \(\theta , \vartheta ,\gamma \in \left[0,1\right]\). The accuracy membership function \({\chi }_{\widetilde{T}}:{\mathbb{R}}\to \left[0,\theta \right]\), the ambiguity membership function \({\yen }_{\widetilde{T}}:{\mathbb{R}}\to \left[\vartheta ,1\right]\) and the falsity membership function \({\pounds }_{\widetilde{T}}:{\mathbb{R}}\to \left[\gamma ,1\right]\) are defined by:

$${\chi }_{\widetilde{T}}\left(x\right)=\left\{\begin{array}{cc}\frac{\theta \left(x-{t}_{1}\right)}{\left({t}_{2}-{t}_{1}\right)}& {t}_{1}\le x\le {t}_{2}\\ \frac{\theta \left(x-{t}_{2}\right)}{\left({t}_{3}-{t}_{2}\right)}& {t}_{2}\le x<{t}_{3}\\ \theta & x={t}_{3}\\ \frac{\theta \left({t}_{4}-x\right)}{\left({t}_{4}-{t}_{3}\right)}& {t}_{3}<x\le {t}_{4}\\ \frac{\theta \left({t}_{5}-x\right)}{\left({t}_{5}-{t}_{4}\right)}& {t}_{4}\le x\le {t}_{5}\\ 0& \mathrm{otherwise}\end{array}\right.$$
$${\yen }_{\widetilde{T}}\left(x\right)=\left\{\begin{array}{cc}\frac{{t}_{2}-x+\vartheta \left(x-{t}_{1}\right)}{\left({t}_{2}-{t}_{1}\right)}& {t}_{1}\le x\le {t}_{2}\\ \frac{{t}_{3}-x+\vartheta \left(x-{t}_{2}\right)}{\left({t}_{3}-{t}_{2}\right)}& {t}_{2}\le x<{t}_{3}\\ \vartheta & x={t}_{3}\\ \frac{x-{t}_{3}+\vartheta \left({t}_{4}-x\right)}{\left({t}_{4}-{t}_{3}\right)}& {t}_{3}<x\le {t}_{4}\\ \frac{x-{t}_{4}+\vartheta \left({t}_{5}-x\right)}{\left({t}_{5}-{t}_{4}\right)}& {t}_{4}\le x\le {t}_{5}\\ 1& \mathrm{otherwise}\end{array}\right.$$
$${\pounds }_{\widetilde{T}}\left(x\right)=\left\{\begin{array}{cc}\frac{{t}_{2}-x+\gamma (x-{t}_{1})}{({t}_{2}-{t}_{1})}& {t}_{1}\le x\le {t}_{2}\\ \frac{{t}_{3}-x+\gamma (x-{t}_{2})}{({t}_{3}-{t}_{2})}& { t}_{2}\le x<{t}_{3}\\ \gamma & \begin{array}{c}x={t}_{3}\end{array}\\ \frac{x-{t}_{3}+\gamma \left({t}_{4}-x\right)}{\left({t}_{4}-{t}_{3}\right)}& {t}_{3}<x\le {t}_{4}\\ \frac{x-{t}_{4}+\gamma \left({t}_{5}-x\right)}{\left({t}_{5}-{t}_{4}\right)}& {t}_{4}\le x\le {t}_{5}\\ 1& otherwise\end{array}\right.$$
(1)

Proposed score and accuracy function

The purpose of the score function (Chakraborty et al. 2020) in the pentagonal neutrosophic domain is to convert a neutrosophic number into a crisp number. The score function depends wholly on the degrees of truthiness, ambiguity and falsity. Here, we define a new score function in the pentagonal neutrosophic environment. Thus, for any single-valued pentagonal neutrosophic number,

$${\widetilde{L}}_{PtNeu}=({l}_{1},{l}_{2},{l}_{3},{l}_{4},{l}_{5};{t}_{Pt}{,i}_{Pt},{f}_{Pt})$$

We define the score function as follows:

$${L}_\text{Score}=\frac{1}{15}\left({l}_{1}+{l}_{2}+{l}_{3}+{l}_{4}+{l}_{5}\right)\times \left(2+{t}_{Pt}-{i}_{Pt}-{f}_{Pt}\right)$$
(2)
$${L}_\text{Accuracy}=\frac{1}{15}\left({l}_{1}+{l}_{2}+{l}_{3}+{l}_{4}+{l}_{5}\right)\times \left(2+{t}_{Pt}-{f}_{Pt}\right)$$
(2a)
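As a quick numerical sketch of Eqs. (2) and (2a), the score and accuracy values can be computed as follows; the flat tuple layout \(({l}_{1},\dots ,{l}_{5};{t}_{Pt},{i}_{Pt},{f}_{Pt})\) and the function names are illustrative conventions of ours, not part of the cited definition.

```python
def score(pnn):
    # Eq. (2): mean of the five l-components (divisor 15 = 5 * 3),
    # weighted by the net grade (2 + t - i - f)
    l1, l2, l3, l4, l5, t, i, f = pnn
    return (l1 + l2 + l3 + l4 + l5) * (2 + t - i - f) / 15

def accuracy(pnn):
    # Eq. (2a): same form, but the indeterminacy grade i is ignored
    l1, l2, l3, l4, l5, t, i, f = pnn
    return (l1 + l2 + l3 + l4 + l5) * (2 + t - f) / 15

a = (1, 2, 3, 4, 5, 0.6, 0.7, 0.5)  # an entry of the illustrative matrix V1 given later
```

For this number, `score(a)` evaluates to 15 · (2 + 0.6 − 0.7 − 0.5)/15 = 1.4 and `accuracy(a)` to 2.1 (up to floating point).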

Hamming distance between two pentagonal neutrosophic numbers

Let \({\widetilde{L}}_{PtNeu1}= \left({l}_{1}^{1} ,{l}_{2}^{1},{l}_{3}^{1},{l}_{4}^{1},{l}_{5}^{1};{{t}_{Pt}}^{1},{{i}_{Pt}}^{1},{{f}_{Pt}}^{1}\right)\) and \({\widetilde{L}}_{PtNeu2}= \left({l}_{1}^{2} ,{l}_{2}^{2},{l}_{3}^{2},{l}_{4}^{2},{l}_{5}^{2};{{t}_{Pt}}^{2},{{i}_{Pt}}^{2},{{f}_{Pt}}^{2}\right)\)

are two pentagonal neutrosophic numbers. Then, the Hamming distance between two numbers is defined as follows:

$${D({\widetilde{L}}_{PtNeu1},{\widetilde{L}}_{PtNeu2})}_{H}=\frac{1}{15}\sum_{k=1}^{5}\left|{l}_{k}^{1}\left(2+{{t}_{Pt}}^{1}-{{i}_{Pt}}^{1}-{{f}_{Pt}}^{1}\right)-{l}_{k}^{2}\left(2+{{t}_{Pt}}^{2}-{{i}_{Pt}}^{2}-{{f}_{Pt}}^{2}\right)\right|$$
(3)
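For illustration, Eq. (3) condenses to a short routine; as before, the flat tuple layout for a PNN is our own convention.

```python
def hamming(p, q):
    # Eq. (3): sum of absolute differences of the five l-components,
    # each side scaled by its own net grade (2 + t - i - f), divided by 15
    wp = 2 + p[5] - p[6] - p[7]
    wq = 2 + q[5] - q[6] - q[7]
    return sum(abs(p[k] * wp - q[k] * wq) for k in range(5)) / 15
```

The distance of a number to itself is 0; for \((1,2,3,4,5;0.6,0.7,0.5)\) against \((1,2,3,4,5;1,0,0)\) it evaluates to 1.6, since only the net grades differ.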

Weighted aggregation operators of pentagonal neutrosophic numbers

Aggregation operators are relevant tools for clustering information in order to handle the decision-making policy diplomatically. This section introduces two weighted aggregation operators for aggregating PNNs.

Pentagonal neutrosophic weighted arithmetic averaging operator: (Chakraborty et al. 2020) Let \({\widetilde{l}}_{j}\) =  < (\({l}_{j1}\), \({l}_{j2}\), \({l}_{j3}\), \({l}_{j4}\), \({l}_{j5}\)); \({t}_{Ptj}{,i}_{Ptj},{f}_{Ptj}\)> \((j=1,2,3,\dots ,n)\) be a collection of PNNs. Then, the PNWAA operator is defined as follows:

$$\mathrm{PNWAA }({\widetilde{l}}_{1},{\widetilde{l}}_{2},\dots ,{\widetilde{l}}_{n})=\sum_{j=1}^{n}{\Omega }_{j}{\widetilde{l}}_{j}$$
(4)

where \({\Omega }_{j}\) is the weight of \({\widetilde{l}}_{j}\)(j = 1,2,3,….,n) such that \({\Omega }_{j}>0\) and \(\sum_{j=1}^{n}{\Omega }_{j}=1.\)

Pentagonal neutrosophic weighted geometric averaging operator: (Chakraborty et al. 2020) Let \({\widetilde{l}}_{j}\) =  < (\({l}_{j1}\), \({l}_{j2}\), \({l}_{j3}\), \({l}_{j4}\), \({l}_{j5}\)); \({t}_{Ptj}{,i}_{Ptj},{f}_{Ptj}\)> \((j=1,2,3,\dots ,n)\) be a collection of PNNs. Then, the PNWGA operator is defined as follows:

$$\mathrm{PNWGA }\left({\widetilde{l}}_{1},{\widetilde{l}}_{2},\dots .,{\widetilde{l}}_{n}\right)={\prod }_{j=1}^{n}\widetilde{{{l}_{j}}^{{\Omega }_{j}}}$$
(5)

where \({\Omega }_{j}\) is the weight of \({\widetilde{l}}_{j}\)(j = 1,2,3,….,n) such that \({\Omega }_{j}>0\) and \(\sum_{j=1}^{n}{\Omega }_{j}=1.\)
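Equations (4) and (5) are stated symbolically. The component-wise sketch below follows the operational laws commonly used for single-valued neutrosophic numbers (weighted means for the five l-components, probabilistic sum/product for the grades); this is only an assumption of ours, and the exact definitions of Chakraborty et al. (2020) should be taken as authoritative.

```python
from math import prod

def pnwaa(pnns, w):
    # Eq. (4): weighted arithmetic aggregation (assumed componentwise laws)
    ls = [sum(wj * p[k] for p, wj in zip(pnns, w)) for k in range(5)]
    t = 1 - prod((1 - p[5]) ** wj for p, wj in zip(pnns, w))
    i = prod(p[6] ** wj for p, wj in zip(pnns, w))
    f = prod(p[7] ** wj for p, wj in zip(pnns, w))
    return (*ls, t, i, f)

def pnwga(pnns, w):
    # Eq. (5): weighted geometric aggregation (dual form, same assumption)
    ls = [prod(p[k] ** wj for p, wj in zip(pnns, w)) for k in range(5)]
    t = prod(p[5] ** wj for p, wj in zip(pnns, w))
    i = 1 - prod((1 - p[6]) ** wj for p, wj in zip(pnns, w))
    f = 1 - prod((1 - p[7]) ** wj for p, wj in zip(pnns, w))
    return (*ls, t, i, f)
```

Both operators are idempotent under these laws: aggregating copies of one PNN with weights summing to 1 returns that PNN.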

Multi-criteria group decision-making problem in pentagonal neutrosophic environment

In the current decade, the MCGDM problem is one of the most authentic, rational and well-organised topics for handling uncertainty and vagueness issues. The chief objective of this method is to detect the finest alternative amongst finitely many distinct alternatives on the basis of their finitely many different attribute values. Thus, the decision-making procedure can be built up vigorously by the strategies of MCGDM, which is immensely favourable for generating decision recommendations, offers procedural conveniences in terms of improved decision attributes, provides enhanced communication capabilities and boosts the aspirations of decision makers. The accomplishment of the procedure is quite tactful in the pentagonal neutrosophic domain. Applying some established mathematical operators, a score function and an accuracy function, we evolve an algorithm to equip this MCGDM problem.

In this section, we study an MCGDM-based agricultural issue in which we need to choose the best alternative crop for maximum financial gain according to the distinct viewpoints of three different agriculturalists. The proposed algorithm is sketched briefly as follows:

Materials and Methods

Suppose that \(A=\{{A}_{1},{A}_{2},{A}_{3},\dots ,{A}_{m}\}\) is the set of \(m\) alternatives and \(B=\{{B}_{1},{B}_{2},{B}_{3},\dots ,{B}_{n}\}\) is the set of \(n\) attributes. Let \(\partial =\{ {\partial }_{1},{\partial }_{2},{\partial }_{3},\dots ,{\partial }_{n}\}\) be the weight set connected with the attributes, where each \({\partial }_{i}\ge 0\) and \(\sum_{i=1}^{n}{\partial }_{i}=1\). Further, we regard the set of decision makers \(D=\{ {D}_{1},{D}_{2},{D}_{3},\dots ,{D}_{r}\}\) connected with the alternatives, whose weight set is \(\delta =\left\{{\delta }_{1},{\delta }_{2},{\delta }_{3},\dots ,{\delta }_{r}\right\}\), where each \({\delta }_{i}\ge 0\) and \(\sum_{i=1}^{r}{\delta }_{i}=1\); this weight vector is selected according to the decision makers' capability of judgement, proficiency of experience and knowledge, and inventive thinking capability. The strategy to resolve our problem is depicted in Fig. 1.

Fig. 1
figure 1

Flowchart of the GRA–MEREC–MCGDM strategy

GRA mechanism in pentagonal neutrosophic environment

Grey relational analysis (GRA) was designed by the Chinese professor Julong Deng (Julong 1989) and is extensively applied in grey system theory. It describes circumstances with no data as black and those with precise information as white. In practice, neither of these idealised circumstances ever occurs in realistic problems. Rather, conditions between these extreme situations, which contain incomplete information, are specified as grey, indistinct or fuzzy.

Algorithm of GRA technique

  • Step 1 Composition of Decision Matrices

Here, we build up all decision matrices in accordance with the decision makers' opinions corresponding to the finite set of alternatives and the finite set of attributes. The noteworthy point is that the entries \({t}_{ij}\) of each matrix are all pentagonal neutrosophic numbers. The matrix is given as follows:

$${V}^{C}=\left(\begin{array}{cccccc} & {A}_{1}& {A}_{2}& {A}_{3}& \cdots & {A}_{m}\\ {B}_{1}& {t}_{11}^{c}& {t}_{12}^{c}& {t}_{13}^{c}& \cdots & {t}_{1m}^{c}\\ {B}_{2}& {t}_{21}^{c}& {t}_{22}^{c}& {t}_{23}^{c}& \cdots & {t}_{2m}^{c}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ {B}_{n}& {t}_{n1}^{c}& {t}_{n2}^{c}& {t}_{n3}^{c}& \cdots & {t}_{nm}^{c}\end{array}\right)$$
(6)
  • Step 2 Standardisation of decision matrices

Let \({V}^{C}\) = (\({t}_{ij}^{c}{)}_{mn}\) be the finalised decision matrix, where each entry \({t}_{ij}^{c}\) = ([\({t}_{ij}^{1c}\),\({t}_{ij}^{2c}\),\({t}_{ij}^{3c}\), \({t}_{ij}^{4c}\),\({t}_{ij}^{5c}\)];\({t}_{Ptijc}{,i}_{Ptijc},{f}_{Ptijc}\)) is a pentagonal neutrosophic number giving the evaluation value of alternative \({A}_{i}\) with respect to attribute \({B}_{j}\). We apply the following normalisation to obtain the standardised decision matrix \({V}^{*c}\) = \(({{\overline{t} }_{ij}^{c})}_{mn}\), in which each entry \({\overline{t} }_{ij}^{c}\) = ([\({\overline{t} }_{ij}^{1c}\),\({\overline{t} }_{ij}^{2c}\),\({\overline{t} }_{ij}^{3c}\),\({\overline{t} }_{ij}^{4c}\),\({\overline{t} }_{ij}^{5c}\)];\({\overline{t} }_{ptijc}\),\({\overline{i} }_{ptijc}\),\({\overline{f} }_{ptijc}\)) is formulated as follows:

$${\overline{t} }_{ij}^{c}= \left(\left[\frac{{t}_{ij}^{1c}}{p},\frac{{t}_{ij}^{2c}}{p},\frac{{t}_{ij}^{3c}}{p},\frac{{t}_{ij}^{4c}}{p},\frac{{t}_{ij}^{5c}}{p}\right];{t}_{Ptijc},{i}_{Ptijc},{f}_{Ptijc}\right),\ \text{where}\ p=\sqrt{{({t}_{ij}^{1c})}^{2}+{({t}_{ij}^{2c})}^{2}+{({t}_{ij}^{3c})}^{2}+{({t}_{ij}^{4c})}^{2}+{({t}_{ij}^{5c})}^{2}}$$
(7)

Thus, we attain the following standardised matrix:

$${V}^{*c}= \left(\begin{array}{cccccc} & {A}_{1}& {A}_{2}& {A}_{3}& \cdots & {A}_{m}\\ {B}_{1}& {\overline{t} }_{11}^{c}& {\overline{t} }_{12}^{c}& {\overline{t} }_{13}^{c}& \cdots & {\overline{t} }_{1m}^{c}\\ {B}_{2}& {\overline{t} }_{21}^{c}& {\overline{t} }_{22}^{c}& {\overline{t} }_{23}^{c}& \cdots & {\overline{t} }_{2m}^{c}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ {B}_{n}& {\overline{t} }_{n1}^{c}& {\overline{t} }_{n2}^{c}& {\overline{t} }_{n3}^{c}& \cdots & {\overline{t} }_{nm}^{c}\end{array}\right)$$
(8)
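The normalisation of Eq. (7) can be sketched entrywise as below; note that only the five l-components are rescaled by their Euclidean norm, while the three grades pass through unchanged. The flat tuple layout is an illustrative convention.

```python
def normalise(pnn):
    # Eq. (7): divide the five l-components by their Euclidean norm p;
    # the truth/indeterminacy/falsity grades are left as they are
    p = sum(x * x for x in pnn[:5]) ** 0.5
    return tuple(x / p for x in pnn[:5]) + pnn[5:]
```

Applying `normalise` to every entry of each decision matrix yields the standardised matrices of Eq. (8); the five rescaled components of each entry then have unit Euclidean norm.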
  • Step 3 Aggregation and composition of single decision matrix

For producing a single decision matrix \(V\), we have employed the legitimate pentagonal neutrosophic weighted arithmetic averaging operator (\(\mathrm{PNWAA}\)), \({t}^{\prime}_{ij}=\sum_{c=1}^{r}{\delta }_{c}\,{\overline{t}}_{ij}^{c}\), and the weighted geometric averaging operator (\(\mathrm{PNWGA}\)), \({t}^{\prime}_{ij}={\prod }_{c=1}^{r}({\overline{t}}_{ij}^{c})^{{\delta }_{c}}\), to aggregate the standardised decision matrices into an individual one, represented as \(V\). Hence, we obtain two single decision matrices, one from the \(\mathrm{PNWAA}\) operator and one from the \(\mathrm{PNWGA}\) operator. The matrices are defined as below:

$$V=\left(\begin{array}{cccccc} & {A}_{1}& {A}_{2}& {A}_{3}& \cdots & {A}_{m}\\ {B}_{1}& {t}_{11}^{\prime}& {t}_{12}^{\prime}& {t}_{13}^{\prime}& \cdots & {t}_{1m}^{\prime}\\ {B}_{2}& {t}_{21}^{\prime}& {t}_{22}^{\prime}& {t}_{23}^{\prime}& \cdots & {t}_{2m}^{\prime}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ {B}_{n}& {t}_{n1}^{\prime}& {t}_{n2}^{\prime}& {t}_{n3}^{\prime}& \cdots & {t}_{nm}^{\prime}\end{array}\right)$$
(9)
  • Step 4 Formulating positive and negative ideal solution

In this step, we formulate the Positive Ideal Solution and the Negative Ideal Solution from the aggregated individual decision matrices. Here, we obtain two sets of Positive and Negative Ideal Solutions, one for each of the two individual decision matrices (for the arithmetic and the geometric averaging operator, respectively). The formulae are defined as follows:

$$\text{Positive \,Ideal \,Solution }{I}^{+}= \left({I}_{11}^{+}, {I}_{12}^{+},{I}_{13}^{+}\dots \dots {I}_{1m}^{+}\right)$$
(10)
$$\text{Negative \,Ideal \,Solution }{I}^{-}= ({I}_{11}^{-}, {I}_{12}^{-},{I}_{13}^{-}\dots \dots .{I}_{1m}^{-})$$
(11)

where

$${I}_{1l}^{+}= <\left[{{t}^{1+}}_{1l}^{\prime},{{t}^{2+}}_{1l}^{\prime},{{t}^{3+}}_{1l}^{\prime},{{t}^{4+}}_{1l}^{\prime},{{t}^{5+}}_{1l}^{\prime}\right];{{t}^{+}}_{pt1l},{{i}^{+}}_{pt1l},{{f}^{+}}_{pt1l}> = <\left[\underset{1\le r\le n}{\mathrm{max}}{{t}_{rl}^{\prime}}^{1},\ \underset{1\le r\le n}{\mathrm{max}}{{t}_{rl}^{\prime}}^{2},\ \underset{1\le r\le n}{\mathrm{max}}{{t}_{rl}^{\prime}}^{3},\ \underset{1\le r\le n}{\mathrm{max}}{{t}_{rl}^{\prime}}^{4},\ \underset{1\le r\le n}{\mathrm{max}}{{t}_{rl}^{\prime}}^{5}\right];\ \underset{1\le r\le n}{\mathrm{max}}{t}_{ptrl}^{\prime},\ \underset{1\le r\le n}{\mathrm{min}}{i}_{ptrl}^{\prime},\ \underset{1\le r\le n}{\mathrm{min}}{f}_{ptrl}^{\prime}>$$
$${I}_{1l}^{-}= <\left[{{t}^{1-}}_{1l}^{\prime},{{t}^{2-}}_{1l}^{\prime},{{t}^{3-}}_{1l}^{\prime},{{t}^{4-}}_{1l}^{\prime},{{t}^{5-}}_{1l}^{\prime}\right];{{t}^{-}}_{pt1l},{{i}^{-}}_{pt1l},{{f}^{-}}_{pt1l}> = <\left[\underset{1\le r\le n}{\mathrm{min}}{{t}_{rl}^{\prime}}^{1},\ \underset{1\le r\le n}{\mathrm{min}}{{t}_{rl}^{\prime}}^{2},\ \underset{1\le r\le n}{\mathrm{min}}{{t}_{rl}^{\prime}}^{3},\ \underset{1\le r\le n}{\mathrm{min}}{{t}_{rl}^{\prime}}^{4},\ \underset{1\le r\le n}{\mathrm{min}}{{t}_{rl}^{\prime}}^{5}\right];\ \underset{1\le r\le n}{\mathrm{min}}{t}_{ptrl}^{\prime},\ \underset{1\le r\le n}{\mathrm{max}}{i}_{ptrl}^{\prime},\ \underset{1\le r\le n}{\mathrm{max}}{f}_{ptrl}^{\prime}>$$

Here, \(l=1,2,\dots ,m\).
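The componentwise max/min construction of Eqs. (10) and (11) can be sketched as follows, one column of aggregated PNNs at a time; tuple layout and names are illustrative.

```python
def ideal_solutions(cells):
    # Eqs. (10)-(11): componentwise extrema over the aggregated PNNs of one
    # column. The PIS maximises the l-components and the truth grade while
    # minimising indeterminacy and falsity; the NIS does the opposite.
    pis = tuple(max(p[k] for p in cells) for k in range(5)) + (
        max(p[5] for p in cells),
        min(p[6] for p in cells),
        min(p[7] for p in cells),
    )
    nis = tuple(min(p[k] for p in cells) for k in range(5)) + (
        min(p[5] for p in cells),
        max(p[6] for p in cells),
        max(p[7] for p in cells),
    )
    return pis, nis
```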

  • Step 5 Composing weighted modified grey relational coefficients

To determine the positive and negative weighted modified grey relational coefficients, for the matrices aggregated by the arithmetic and by the geometric operator alike, we construct the following formulae:

$${\Delta }^{+}(i,j) = \frac{{D({B}_{i}{A}_{j,}{I}_{11}^{+})}_{H}+{D({B}_{i}{A}_{j,}{I}_{12}^{+})}_{H}+\dots + {D({B}_{i}{A}_{j,}{I}_{1m}^{+})}_{H}}{m({\sum }_{k=1}^{m}{D\left({I}_{1k}^{+},{I}_{1k}^{-}\right)}_{H})}$$
(12)
$${\Delta }^{-}(i,j) =\frac{{D({B}_{i}{A}_{j,}{I}_{11}^{-})}_{H}+{D({B}_{i}{A}_{j,}{I}_{12}^{-})}_{H}+\dots + {D({B}_{i}{A}_{j,}{I}_{1m}^{-})}_{H}}{m({\sum }_{k=1}^{m}{D\left({I}_{1k}^{+},{I}_{1k}^{-}\right)}_{H})}$$
(13)
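Under the Hamming distance of Eq. (3), Eqs. (12) and (13) reduce to a single routine applied with either the positive or the negative ideal components; here `span` stands for \(\sum_{k=1}^{m}{D({I}_{1k}^{+},{I}_{1k}^{-})}_{H}\), and the names are illustrative.

```python
def hamming(p, q):
    # Hamming distance of Eq. (3), restated so this sketch is self-contained
    wp = 2 + p[5] - p[6] - p[7]
    wq = 2 + q[5] - q[6] - q[7]
    return sum(abs(p[k] * wp - q[k] * wq) for k in range(5)) / 15

def grey_coeff(cell, ideals, span):
    # Eqs. (12)-(13): total distance from one aggregated cell to the m ideal
    # components, normalised by m times the total PIS-NIS separation `span`
    return sum(hamming(cell, ideal) for ideal in ideals) / (len(ideals) * span)
```

Passing the positive ideal components yields \({\Delta }^{+}(i,j)\) and passing the negative ones yields \({\Delta }^{-}(i,j)\).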
  • Step 6 Formulating Positive and Negative Index of Grey Relational Coefficients

In this step, we utilise the weight vector of the attribute set and construct the following formulae of Positive and Negative Index of Grey Relational Coefficients:

$${{In}^{+}}_{p}=\sum_{i=1}^{n}{\partial }_{i} {\Delta }^{+}\left(p,i\right)$$
(14)
$${{In}^{-}}_{p}=\sum_{i=1}^{n}{\partial }_{i} {\Delta }^{-}\left(p,i\right)$$
(15)

where \(p=1,2,\dots ,m\).

  • Step 7 Determining relative affinity coefficients

To evaluate the relative grey affinity coefficient of each alternative \({A}_{i}\), \(i=1,2,\dots ,m\), we construct the following formula:

$${\mathrm{Aff}}^{p}=\frac{\left|{{In}^{+}}_{p}-{{In}^{-}}_{p} \right|}{{{In}^{+}}_{p}+{{In}^{-}}_{p}},\ \text{where}\ p= 1, 2,\dots ,m$$
(16)
  • Step 8 Ranking

The ranking of the alternatives is settled in accordance with their relative affinity coefficient values.
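Steps 6 to 8 amount to two weighted sums and a ratio per alternative (Eqs. (14)-(16)). In the miniature sketch below, the two \(\Delta\)-matrices contain illustrative numbers only, and the descending sort is merely one convention for reading off the ranking from the affinity values.

```python
weights = [0.3, 0.4, 0.3]  # attribute weights (the set denoted earlier by the symbol for partial)

# Illustrative grey relational coefficient matrices, one row per alternative p
delta_pos = [[0.2, 0.5, 0.3], [0.4, 0.1, 0.6], [0.3, 0.3, 0.2]]  # Delta+(p, i)
delta_neg = [[0.6, 0.2, 0.4], [0.1, 0.5, 0.2], [0.4, 0.4, 0.5]]  # Delta-(p, i)

# Eqs. (14)-(15): positive and negative indices per alternative p
in_pos = [sum(w * d for w, d in zip(weights, row)) for row in delta_pos]
in_neg = [sum(w * d for w, d in zip(weights, row)) for row in delta_neg]

# Eq. (16): relative affinity coefficients, then the induced ranking
aff = [abs(ip - im) / (ip + im) for ip, im in zip(in_pos, in_neg)]
ranking = sorted(range(len(aff)), key=lambda p: aff[p], reverse=True)
```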

Illustrative example

In this research article, we consider a tropical agriculture-based problem in a continental and subcontinental region in which three different kinds of crops are cultivated for maximum financial gain. Our objective is to find the best alternative with proper justification of pentagonal neutrosophic theory. Here, we consider three different attributes, namely climate factor, landscape and soil factor, and farming technique. We also consider three different categories of decision makers: (i) a young agriculturalist with meagre experience but knowledge of modern farming techniques, (ii) a middle-aged agriculturalist with moderate experience and fair knowledge of farming techniques and (iii) an old agriculturalist with sound experience and knowledge of some old-fashioned farming techniques. In accordance with their opinions, we construct three different decision matrices based on the pentagonal neutrosophic environment, which are described as follows. \({A}_{1}=\) Food Crops, \({A}_{2}=\) Plantation Crops and \({A}_{3}=\) Horticulture Crops are the alternatives; \({B}_{1}=\) Climate Factor, \({B}_{2}=\) Landscape and Soil Factor and \({B}_{3}=\) Farming Technique are the three different attributes. Let \({D}_{1}=\) the young agriculturalist, \({D}_{2}=\) the middle-aged agriculturalist and \({D}_{3}=\) the old agriculturalist, having the weight assignment \(\delta =\{ 0.33, 0.36, 0.31 \}\); the weight assignment over the attribute functions is \(\partial =\left\{0.3, 0.4, 0.3\right\}.\) Using the two aggregation operators, two aggregated decision matrices are constructed from the decision makers' opinions.

  • Step 1 Composition of decision matrices

In this step, three decision matrices are constructed according to the opinions of three different types of decision makers.

$${V}^{1}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{1,2},\mathrm{3,4},5;\mathrm{0.6,0.7,0.5}>& \begin{array}{c}<\mathrm{1.2,2},\mathrm{3.4,4.5,5.6};\mathrm{0.6,0.5,0.7}>\end{array}& \begin{array}{c}<\mathrm{1.5,2.8,3.6,4.4,5.8};\mathrm{0.6,0.5,0.5}>\end{array}\\ {B}_{2}& <\begin{array}{c}\mathrm{2.2,3},\mathrm{3.6,4.5,6};\mathrm{0.5,0.7,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.5,1.8,2.4,3},4.4;\mathrm{0.7,0.5,0.8}>\end{array}& \begin{array}{c}<\mathrm{0.8,1.4,2},\mathrm{2.8,3.5};\mathrm{0.3,0.7,0.8}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.2,0.8,1.4,1.8,2.4};\mathrm{0.5,0.4,0.5}>\end{array}& \begin{array}{c}<\mathrm{0.7,1.5,2.25,3.5,4.45};\mathrm{0.8,0.6,0.4}>\end{array}& \begin{array}{c}<\mathrm{2,3},\mathrm{4,5},6;\mathrm{0.4,0.6,0.3}>\end{array}\end{array}\right)$$
$$Young \,\,Agriculturalist$$
$${V}^{2}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{1.4,2},\mathrm{3.5,4},5.8;\mathrm{0.7,0.5,0.6}>& \begin{array}{c}<\mathrm{1.8,2},\mathrm{3.2,4.5,6.4};\mathrm{0.6,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{2,3},\mathrm{4,5},6;\mathrm{0.4,0.2,0.7}>\end{array}\\ {B}_{2}& <\begin{array}{c}\mathrm{2.5,3},\mathrm{3.5,4.4,6.2};\mathrm{0.5,0.6,0.7}>\end{array}& \begin{array}{c}<\mathrm{0.2,0.8,1.4,1.8,2.4};\mathrm{0.5,0.6,0.7}>\end{array}& \begin{array}{c}<\mathrm{2.2,2.8,3.5,4},5.5;\mathrm{0.3,0.5,0.6}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.5,1.8,2.4,3},4.4;\mathrm{0.8,0.4,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.75,1.3,2.4,3.6,5.2};\mathrm{0.7,0.4,0.8}>\end{array}& \begin{array}{c}<\mathrm{1.5,2.8,3.6,4.4,5.8};\mathrm{0.8,0.6,0.7}>\end{array}\end{array}\right)$$
$$Adult \,\,Agriculturalist$$
$${V}^{3}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.5,1.8,2.4,3},4.4;\mathrm{0.5,0.7,0.8}>& \begin{array}{c}<\mathrm{2.2,2.8,3.5,4},5.5;\mathrm{0.7,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{2.4,2.8,3.2,3.5,4.2};\mathrm{0.6,0.4,0.7}>\end{array}\\ {B}_{2}& <\mathrm{0.75,1.3,2.4,3.6,5.2};\mathrm{0.7,0.3,0.6}\begin{array}{c}>\end{array}& \begin{array}{c}<\mathrm{2.2,3},\mathrm{3.6,4.5,6};\mathrm{0.5,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{3.5,4},\mathrm{4.5,5},5.5;\mathrm{0.3,0.5,0.8}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{3,3.4,3.8,4.5,5.4};\mathrm{0.6,0.8,0.5}>\end{array}& \begin{array}{c}<\mathrm{0.8,1.4,2},\mathrm{2.8,3.5};\mathrm{0.7,0.7,0.6}>\end{array}& \begin{array}{c}<1.\mathrm{4,2},\mathrm{2.5,3},4.5;\mathrm{0.4,0.6,0.7}>\end{array}\end{array}\right)$$
$$Old \,\,Agriculturalist$$

Weights of the decision makers: Young Agriculturalist 0.33, Adult Agriculturalist 0.36, Old Agriculturalist 0.31; weights of the attributes: 0.3, 0.4 and 0.3.

  • Step 2 Standardisation of decision matrices

In this step, we standardise the above-mentioned decision matrices. Applying Eq. (7), the standardised decision matrices are obtained as below:

$${V}^{*1}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.13,0.27,0.40,0.54,0.67};\mathrm{0.6,0.7,0.5}>& \begin{array}{c}<\mathrm{0.14,0.24,0.04,0.54,0.68};\mathrm{0.6,0.5,0.7}>\end{array}& \begin{array}{c}<\mathrm{0.17,0.32,0.41,0.50,0.66};\mathrm{0.6,0.5,0.5}>\end{array}\\ {B}_{2}& <\begin{array}{c}\mathrm{0.24,0.33,0.40,0.49,0.66};\mathrm{0.5,0.7,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.08,0.29,0.39,0.50,0.71};\mathrm{0.7,0.5,0.8}>\end{array}& \begin{array}{c}<\mathrm{0.15,0.27,0.39,0.54,0.68};\mathrm{0.3,0.7,0.8}>\end{array}\\ {B}_{3}& \begin{array}{c}<0.\mathrm{06,0.23,0.41,0.53,0.70};\mathrm{0.5,0.4,0.5}>\end{array}& \begin{array}{c}<\mathrm{0.11,0.28,0.35,0.36,0.70};\mathrm{0.8,0.6,0.4}>\end{array}& \begin{array}{c}<\mathrm{0.21,0.32,0.42,0.53,0.63};\mathrm{0.4,0.6,0.3}>\end{array}\end{array}\right)$$
$${V}^{*2}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.17,0.24,0.42,0.49,0.70};\mathrm{0.7,0.5,0.6}>& \begin{array}{c}<\mathrm{0.20,0.22,0.36,0.50,0.72};\mathrm{0.6,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.21,0.32,0.42,0.53,0.63};\mathrm{0.4,0.2,0.7}>\end{array}\\ {B}_{2}& <\begin{array}{c}\mathrm{0.27,0.32,0.38,0.48,0},67;\mathrm{0.5,0.6,0.7}>\end{array}& \begin{array}{c}<\mathrm{0.06,0.23,0.41,0.58,0.70};\mathrm{0.5,0.6,0.7}>\end{array}& \begin{array}{c}<\mathrm{0.26,0.33,0.41,0.47,0.65};\mathrm{0.3,0.5,0.6}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.08,0.30,0.39,0.49,0.71};\mathrm{0.8,0.4,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.11,0.19,0.35,0.52,0.75};\mathrm{0.7,0.4,0.8}>\end{array}& \begin{array}{c}<\mathrm{0.17,0.32,0.41,0.50,0.66};\mathrm{0.8,0.6,0.7}>\end{array}\end{array}\right)$$
$${V}^{*3}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.08,0}.\mathrm{30,0.39,0.49,0.71};\mathrm{0.5,0.7,0.8}>& \begin{array}{c}<\mathrm{0.26,0.33,0.41,0.47,0.65};\mathrm{0.7,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.32,0.38,0.44,0.48,0.57};\mathrm{0.6,0.4,0.7}>\end{array}\\ {B}_{2}& <\mathrm{0.11,0.19,0.35,0.52,0.75};\mathrm{0.7,0.3,0.6}\begin{array}{c}>\end{array}& \begin{array}{c}<\mathrm{0.24,0.33,0.40,0.50,0.66};\mathrm{0.5,0.5,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.34,0.40,0.44,0.50,0.54};\mathrm{0.3,0.5,0.8}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.33,0.37,0.41,0.50,0.59};0.6,\mathrm{0.8,0.5}>\end{array}& \begin{array}{c}<\mathrm{0.15,0.27,0.39,0.54,0.68};\mathrm{0.7,0.7,0.6}>\end{array}& \begin{array}{c}<\mathrm{0.21,0.31,0.39,0.47,0.70};\mathrm{0.4,0.6,0.7}>\end{array}\end{array}\right)$$
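The exact form of Eq. (7) is given earlier in the paper; the standardised entries above are consistent with dividing each pentagonal component vector by its Euclidean norm, so a minimal sketch under that assumption is:

```python
import math

def standardise(components):
    """Divide a pentagonal component vector by its Euclidean norm.
    (Assumed form of Eq. (7); it reproduces the standardised entries,
    e.g. <1,2,3,4,5> of V^1 maps to <0.13,0.27,0.40,0.54,0.67>.)"""
    norm = math.sqrt(sum(c * c for c in components))
    return [c / norm for c in components]

row = standardise([1, 2, 3, 4, 5])  # entry (B1, A1) of V^1
```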
  • Step 3 Aggregation and composition of a single decision matrix

In this step, we aggregate the standardised decision matrices by applying the arithmetic and geometric operators and build up two single decision matrices for the two different cases, which are given as follows:

$${V}_{A}({T}_{i},{S}_{j})=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.13,0.27,0.40,0.51,0.70};\mathrm{0.61,0.62,0.62}>& \begin{array}{c}<\mathrm{0.20,0.27,0.27,0.50,0.68};\mathrm{0.63,0.50,0.63}>\end{array}& \begin{array}{c}<\mathrm{0.23,0.34,0.42,0.50,0.62};\mathrm{0.54,0.43,0.62}>\end{array}\\ {B}_{2}& <\mathrm{0.21,0.28,0.38,0.50,0.70};\mathrm{0.49,0.66,0.63}\begin{array}{c}>\end{array}& \begin{array}{c}<\mathrm{0.12,0.28,0.40,0.53,0.70};\mathrm{0.58,0.53,0.70}>\end{array}& \begin{array}{c}<\mathrm{0.24,0.33,0.41,0.50,0.63};\mathrm{0.30,0.56,0.72}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.15,0.30,0.40,0.50,0.67};\mathrm{0.72,0.50,0.53}>\end{array}& \begin{array}{c}<\mathrm{0.12,0.24,0.36,0.47,0.71};\mathrm{0.74,0.54,0.58}>\end{array}& \begin{array}{c}<0.23,\mathrm{0.32,0.41,0.50,0.66};\mathrm{0.60,0.60,0.60}>\end{array}\end{array}\right)$$
$${V}_{G}({T}_{i},{S}_{j})=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& <\mathrm{0.12,0.27,0.40,0.51,0.69};\mathrm{0.60,0.64,0.65}>& \langle \mathrm{0.19,0.26,0.18,0.50,0.68};\mathrm{0.63,0.50,0.64}\rangle & \begin{array}{c}<\mathrm{0.22,0.34,0.42,0.50,0.62};\mathrm{0.52,0.37,0.64}>\end{array}\\ {B}_{2}& <\mathrm{0.20,0.27,0.38,0.49,0.69};\mathrm{0.55,0.57,0.64}\begin{array}{c}>\end{array}& \begin{array}{c}<\mathrm{0.10,0.28,0.40,0.58,0.69};\mathrm{0.56,0.54,0.71}>\end{array}& \begin{array}{c}<\mathrm{0.24,0.33,0.41,0.50,0.62};\mathrm{0.30,0.58,0.74}>\end{array}\\ {B}_{3}& \begin{array}{c}<\mathrm{0.11,0.29,0.40,0.51,0.68};\mathrm{0.63,0.57,0.54}>\end{array}& \begin{array}{c}<\mathrm{0.12,0.24,0.36,0.47,0.71};\mathrm{0.73,0.58,0.64}>\end{array}& \begin{array}{c}<\mathrm{0.19,0.32,0.41,0.50,0.66};\mathrm{0.51,0.60,0.60}>\end{array}\end{array}\right)$$
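The aggregated entries are consistent with the usual pentagonal neutrosophic weighted arithmetic and geometric operators; the following sketch assumes those forms (checked against the (B1, A1) entries of \({V}_{A}\) and \({V}_{G}\)).

```python
import math

# Each pentagonal neutrosophic number is a tuple (a1..a5, T, I, F).
def pnwaa(pnns, w):
    """Assumed PNWAA form: weighted mean of the five components;
    T -> 1 - prod((1-T_k)^w_k); I -> prod(I_k^w_k); F -> prod(F_k^w_k)."""
    comps = [sum(wk * x[i] for x, wk in zip(pnns, w)) for i in range(5)]
    T = 1 - math.prod((1 - x[5]) ** wk for x, wk in zip(pnns, w))
    I = math.prod(x[6] ** wk for x, wk in zip(pnns, w))
    F = math.prod(x[7] ** wk for x, wk in zip(pnns, w))
    return comps + [T, I, F]

def pnwga(pnns, w):
    """Assumed PNWGA form: weighted geometric mean of the components;
    T -> prod(T_k^w_k); I -> 1 - prod((1-I_k)^w_k); F -> 1 - prod((1-F_k)^w_k)."""
    comps = [math.prod(x[i] ** wk for x, wk in zip(pnns, w)) for i in range(5)]
    T = math.prod(x[5] ** wk for x, wk in zip(pnns, w))
    I = 1 - math.prod((1 - x[6]) ** wk for x, wk in zip(pnns, w))
    F = 1 - math.prod((1 - x[7]) ** wk for x, wk in zip(pnns, w))
    return comps + [T, I, F]

# (B1, A1) cells of V*1, V*2, V*3 with decision-maker weights delta
cells = [(0.13, 0.27, 0.40, 0.54, 0.67, 0.6, 0.7, 0.5),
         (0.17, 0.24, 0.42, 0.49, 0.70, 0.7, 0.5, 0.6),
         (0.08, 0.30, 0.39, 0.49, 0.71, 0.5, 0.7, 0.8)]
delta = (0.33, 0.36, 0.31)
```

With these inputs, `pnwaa` returns a first component near 0.13 and (T, I, F) near (0.61, 0.62, 0.62), matching the (B1, A1) entry of \({V}_{A}\).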
  • Step 4 Formulating Positive and Negative Ideal Solution

To compose the positive and negative ideal solutions, we employ the above-mentioned Eqs. (10) and (11) and obtain two sets of positive and negative ideal solutions for the two single matrices.

Positive ideal solution

$${I}_{A}^{+}=\left(\begin{array}{ccc}{I}_{11A}^{+}& {I}_{12A}^{+}& {I}_{13A}^{+}\\ <\mathrm{0.21,0.30,0.40,0.51,0.70};\mathrm{0.72,0.50,0.53}>& \begin{array}{c}<\mathrm{0.20,0.28,0.40,0.53,0.71};\mathrm{0.74,0.50,0.58}>\end{array}& \begin{array}{c}<\mathrm{0.24,0.34,0.42,0.50,0.66};\mathrm{0.60,0.43,0.60}>\end{array}\end{array}\right)$$
$${I}_{G}^{+}=\left(\begin{array}{ccc}{I}_{11G}^{+}& {I}_{12G}^{+}& {I}_{13G}^{+}\\ <\mathrm{0.20,0.29,0.40,0.51,0.69};\mathrm{0.63,0.57,0.54}>& \begin{array}{c}<\mathrm{0.19,0.28,0.40,0.58,0.71};\mathrm{0.73,0.50,0.64}>\end{array}& \begin{array}{c}<\mathrm{0.24,0.34,0.42,0.50,0.66};\mathrm{0.52,0.37,0.60}>\end{array}\end{array}\right)$$

Negative ideal solution

$${I}_{A}^{-}=\left(\begin{array}{ccc}{I}_{11A}^{-}& {I}_{12A}^{-}& {I}_{13A}^{-}\\ <\mathrm{0.13,0.27,0.38,0.50,0.67};\mathrm{0.49,0.66,0.63}>& \begin{array}{c}<\mathrm{0.12,0.24,0.27,0.47,0.68};\mathrm{0.58,0.54,0.70}>\end{array}& \begin{array}{c}<\mathrm{0.23,0.32,0.41,0.50,0.62};\mathrm{0.30,0.60,0.72}>\end{array}\end{array}\right)$$
$${I}_{G}^{-}=\left(\begin{array}{ccc}{I}_{11G}^{-}& {I}_{12G}^{-}& {I}_{13G}^{-}\\ <\mathrm{0.11,0.27,0.38,0.49,0.68};\mathrm{0.55,0.64,0.65}>& \begin{array}{c}<\mathrm{0.10,0.24,0.18,0.47,0.68};\mathrm{0.56,0.58,0.71}>\end{array}& \begin{array}{c}<\mathrm{0.19,0.32,0.41,0.50,0.62};\mathrm{0.30,0.60,0.74}>\end{array}\end{array}\right)$$
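The ideal solutions above are consistent with taking component-wise extrema over each column of the aggregated matrix; a sketch of Eqs. (10) and (11) under that assumed form:

```python
def ideal_solutions(column):
    """Assumed form of Eqs. (10)-(11), consistent with the entries above:
    the positive ideal takes component-wise maxima together with max T,
    min I, min F over a column; the negative ideal takes the opposite."""
    pos = [max(x[i] for x in column) for i in range(5)]
    pos += [max(x[5] for x in column), min(x[6] for x in column), min(x[7] for x in column)]
    neg = [min(x[i] for x in column) for i in range(5)]
    neg += [min(x[5] for x in column), max(x[6] for x in column), max(x[7] for x in column)]
    return pos, neg

# Column A1 of V_A (rows B1, B2, B3)
col_A1 = [(0.13, 0.27, 0.40, 0.51, 0.70, 0.61, 0.62, 0.62),
          (0.21, 0.28, 0.38, 0.50, 0.70, 0.49, 0.66, 0.63),
          (0.15, 0.30, 0.40, 0.50, 0.67, 0.72, 0.50, 0.53)]
pos, neg = ideal_solutions(col_A1)  # reproduces I_11A^+ and I_11A^-
```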
  • Step 5 Composing weighted modified grey relational coefficients

To calculate the modified grey relational coefficients, we make use of Eqs. (12) and (13) and obtain two sets of modified grey relational coefficient matrices for the two distinct single matrices, which are given as follows:

$${{\Delta }_{A}}^{+}(i,j)=\left(\begin{array}{ccc}0.19& 0.15& 0.10\\ 0.28& 0.21& 0.36\\ 0.06& 0.12& 0.12\end{array}\right)\,\,{{\Delta }_{A}}^{-}(i,j)=\left(\begin{array}{ccc}0.14& 0.18& 0.24\\ 0.08& 0.14& 0.10\\ 0.29& 0.24& 0.19\end{array}\right)$$
$${{\Delta }_{G}}^{+}(i,j)=\left(\begin{array}{ccc}0.20& 0.17& 0.06\\ 0.17& 0.17& 0.33\\ 0.09& 0.13& 0.17\end{array}\right){{\Delta }_{G}}^{-}(i,j)=\left(\begin{array}{ccc}0.14& 0.22& 0.29\\ 0.16& 0.17& 0.12\\ 0.11& 0.13& 0.18\end{array}\right)$$
  • Step 6 Formulating Positive and Negative Index of Grey Relational Coefficients

In this step, we estimate the positive and negative grey relational coefficient index values using Eqs. (14) and (15); the estimated values are given below:

$${\mathrm{In}}_{\mathrm{arith}}^{+1}=0.15,\; {\mathrm{In}}_{\mathrm{arith}}^{+2}=0.28,\; {\mathrm{In}}_{\mathrm{arith}}^{+3}=0.10$$
$${\mathrm{In}}_{\mathrm{arith}}^{-1}=0.19,\; {\mathrm{In}}_{\mathrm{arith}}^{-2}=0.11,\; {\mathrm{In}}_{\mathrm{arith}}^{-3}=0.24$$
$${\mathrm{In}}_{\mathrm{geo}}^{+1}=0.15,\; {\mathrm{In}}_{\mathrm{geo}}^{+2}=0.22,\; {\mathrm{In}}_{\mathrm{geo}}^{+3}=0.13$$
$${\mathrm{In}}_{\mathrm{geo}}^{-1}=0.22,\; {\mathrm{In}}_{\mathrm{geo}}^{-2}=0.15,\; {\mathrm{In}}_{\mathrm{geo}}^{-3}=0.14$$
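Eqs. (14) and (15) are stated earlier in the paper; one reading that reproduces the reported index values is a weighted sum over each row of the corresponding \(\Delta\) matrix with the weights (0.33, 0.36, 0.31), sketched below under that assumption.

```python
def index_values(delta, w=(0.33, 0.36, 0.31)):
    """Assumed reading of Eqs. (14)-(15): each index is the weighted sum
    of one row of the grey relational coefficient matrix. With the
    weights (0.33, 0.36, 0.31) this reproduces the reported values."""
    return [sum(wk * d for wk, d in zip(w, row)) for row in delta]

delta_A_pos = [[0.19, 0.15, 0.10], [0.28, 0.21, 0.36], [0.06, 0.12, 0.12]]
delta_A_neg = [[0.14, 0.18, 0.24], [0.08, 0.14, 0.10], [0.29, 0.24, 0.19]]
```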
  • Step 7 Determining Relative Affinity Coefficients

To evaluate the relative affinity coefficients, we employ Eq. (16) and calculate two sets of affinity coefficient values, which are given as follows:

$${\mathrm{Aff}}_{\mathrm{arith}}^{1}=0.12,\; {\mathrm{Aff}}_{\mathrm{arith}}^{2}=0.44,\; {\mathrm{Aff}}_{\mathrm{arith}}^{3}=0.41$$
$${\mathrm{Aff}}_{\mathrm{geo}}^{1}=0.192,\; {\mathrm{Aff}}_{\mathrm{geo}}^{2}=0.191,\; {\mathrm{Aff}}_{\mathrm{geo}}^{3}=0.03$$
  • Step 8 Ranking

In accordance with the estimated affinity coefficient values of the alternatives, we rank the alternatives for both cases.

$${A}_{2}>{A}_{3}>{A}_{1}\; (\text{when arithmetic operator is used})$$

$${A}_{2}={A}_{1}>{A}_{3}\; (\text{when geometric operator is used})$$

Sensitivity analysis

A sensitivity analysis is performed to examine how the attribute weights of each criterion affect the relative affinity values and the ranking of the alternatives. Here, we consider PNWAA-based sensitivity analysis in Table 1 and PNWGA-based sensitivity analysis in Table 2 to classify the different rankings. The sensitivity analysis graphs are demonstrated in Figs. 2, 3 and 4. In Fig. 2, several trials are conducted with varied attribute weights to check the best alternative. Fig. 3 depicts the ranking of the alternatives evaluated using the PNWAA operator, and Fig. 4 depicts the ranking using the PNWGA operator.

Table 1 PNNWAA-based sensitivity analysis (Individual matrix aggregated by Arithmetic Operator)
Table 2 PNNWGA-based sensitivity analysis (Individual matrix aggregated by Geometric Operator)
Fig. 2
figure 2

Variation of weights of the attributes vs. number of trials

Fig. 3
figure 3

The sensitivity analysis of the ranking of the three alternatives using the PNWAA operator (the horizontal axis indicates the number of trials and the vertical axis the variation of the attribute weights)

Fig. 4
figure 4

The sensitivity analysis of the ranking of the three alternatives using the PNWGA operator (the horizontal axis indicates the number of trials and the vertical axis the variation of the attribute weights)

Method based on the removal effects of criteria (MEREC)

The method based on the removal effects of criteria (MEREC) is incorporated here to determine the criteria's optimal weights in the multi-criteria decision-making process, so as to draw a justified comparison of our hypothetical weights with the optimal weight vectors obtained by this technique and to examine the similarity of the ranking outcomes of both procedures. MEREC applies each criterion's elimination consequence on the performance value. Keshavarz-Ghorabaee et al. (Keshavarz-Ghorabaee et al. 2021) introduced the MEREC tactic for MCDM problems and drew analytical comparisons with some objective weighting methods. Another decision-making technique based on MEREC has also been studied by Trung and Thinh (Trung and Thinh 2021). Very recently, the MEREC technique was applied in the hybrid intuitionistic fuzzy domain by Hezam et al. (Hezam et al. 2022). For this analysis, a simple logarithmic measure is utilised with equal weight assignment to compute the alternatives' performances. The procedure of this technique is discussed below.

Algorithm of MEREC technique

  • Step 1 Composition of score-valued decision matrices

In this step, we compute the score values (\({n}_{ij}\)) of the pentagonal neutrosophic entities of the two aggregated decision matrices from (9) to form two individual decision matrices (\({S}_{i}\)) (by both the arithmetic and geometric operators) in crispified format, with the help of the score function mentioned in Eq. (2).
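The score function of Eq. (2) is defined earlier in the paper; the crisp entries of \({S}_{1}\) and \({S}_{2}\) in the computational section are consistent with taking the mean of the five components times \((2+T-I-F)/3\), so a hedged sketch under that assumption is:

```python
def score(pnn):
    """Assumed score function of Eq. (2), consistent with the crisp
    matrices S_1 and S_2: mean of the five components times
    (2 + T - I - F)/3, for a PNN given as (a1..a5, T, I, F)."""
    a = pnn[:5]
    T, I, F = pnn[5], pnn[6], pnn[7]
    return (sum(a) / 5) * (2 + T - I - F) / 3

# (B1, A1) entry of the arithmetically aggregated matrix V_A
s = score((0.13, 0.27, 0.40, 0.51, 0.70, 0.61, 0.62, 0.62))
# s rounds to 0.18, matching the (B1, A1) entry of S_1
```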

  • Step 2 Calculation of the overall performances of the alternatives

In this step, we calculate the overall performance value (\({N}_{i}\)) of each alternative. A logarithmic nonlinear function with equal criteria weights is constructed to attain the alternatives' overall performance values. The aggregated normalised score-valued decision matrix is used to obtain the results. If there are m alternatives and n criteria, then the following equation is used for this calculation:

$${N}_{i}=\mathrm{ln}\left(1+\frac{1}{n}\sum_{j}\left|\mathrm{ln}\left({n}_{ij}\right)\right|\right),\quad i=1, 2,\dots, m$$
(17)
  • Step 3 Calculation of the overall performances of the alternatives by removal of criteria

In this step, we compute the performance value of each alternative after removing each criterion in turn: we eliminate one criterion and observe its effect on the performance value. We denote by \({N}_{ij}^{^{\prime}}\) the removal performance value of the ith alternative when the jth criterion is eliminated.

$${N}_{ij}^{\mathrm{^{\prime}}}=\mathrm{ln}\left(1+\frac{1}{n}\sum_{k, k\ne j}\left|\mathrm{ln}\left({n}_{ik}\right)\right|\right),\quad i=1, 2,\dots, m$$
(18)
  • Step 4 Computation of the deviational values

In this step, we sum the absolute deviations of the removal performance values from the corresponding overall performance values. \({D}_{j}\) is calculated as follows:

$${D}_{j}=\sum_{i}\left|{N}_{ij}^{\mathrm{^{\prime}}}-{N}_{i}\right|,\quad j=1, 2,\dots, n$$
(19)
  • Step 5 Computation of optimal weight

In this step, we calculate the final optimal weight of each criterion. Let \({\sigma }_{j}\) be the final optimal weight of the jth criterion. Using the deviations of Step 4, we construct:

$${\sigma }_{j}=\frac{{D}_{j}}{\sum_{j}{D}_{j}},\quad j=1, 2,\dots, n$$
(20)
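The steps above (Eqs. (17)–(20)) can be sketched end to end; treating the rows of the crisp score matrix as the alternatives reproduces the values reported in the computational section to within rounding.

```python
import math

def overall_performance(scores):
    """Eq. (17): N_i = ln(1 + (1/n) * sum_j |ln n_ij|)."""
    n = len(scores)
    return math.log(1 + sum(abs(math.log(s)) for s in scores) / n)

def removal_performance(scores, j):
    """Eq. (18): the same measure with criterion j removed
    (the 1/n factor is kept, matching the reported removal values)."""
    n = len(scores)
    return math.log(1 + sum(abs(math.log(s))
                            for k, s in enumerate(scores) if k != j) / n)

def merec_weights(matrix):
    """Eqs. (19)-(20): deviations D_j and optimal weights sigma_j.
    Rows are alternatives, columns are criteria."""
    m, n = len(matrix), len(matrix[0])
    N = [overall_performance(row) for row in matrix]
    D = [sum(abs(removal_performance(matrix[i], j) - N[i]) for i in range(m))
         for j in range(n)]
    total = sum(D)
    return [d / total for d in D]

# Score matrix from the arithmetic aggregation (alternatives as rows)
S_arith = [[0.18, 0.19, 0.21], [0.17, 0.18, 0.14], [0.23, 0.20, 0.20]]
```

With `S_arith` as input, `overall_performance` yields values close to the reported 0.97, 1.03 and 0.94 (small differences arise because the paper works with unrounded scores), and `merec_weights` yields a weight vector close to (0.33, 0.33, 0.34).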

To reach the ranking outcomes, the remaining steps are the same as in the GRA technique, so we omit them.

Computational process by MEREC technique

  • Step 1 Composition of Score-valued Decision Matrices: In this step, we compute two score-valued decision matrices.

    $${S}_{1}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& 0.18& \begin{array}{c}0.19\end{array}& \begin{array}{c}0.21\end{array}\\ {B}_{2}& 0.17& 0.18& 0.14\\ {B}_{3}& \begin{array}{c}0.23\end{array}& 0.20& \begin{array}{c}0.20\end{array}\end{array}\right),{S}_{2}=\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& 0.17& \begin{array}{c}0.18\end{array}& \begin{array}{c}0.21\end{array}\\ {B}_{2}& 0.18& 0.18& 0.14\\ {B}_{3}& \begin{array}{c}0.20\end{array}& 0.19& \begin{array}{c}0.18\end{array}\end{array}\right)$$
  • Step 2 Calculation of the overall performances of the alternatives

In this step, we compute the set of overall performance values using Eq. (17):

$${N}_{1\mathrm{arith}}=0.97,\; {N}_{2\mathrm{arith}}=1.03,\; {N}_{3\mathrm{arith}}=0.94$$
$${N}_{1\mathrm{geo}}=0.99,\; {N}_{2\mathrm{geo}}=1.03,\; {N}_{3\mathrm{geo}}=0.98$$
  • Step 3 Calculation of the overall performances of the alternatives by removal of criteria

In this step, we compute the overall performance values of the alternatives by Eq. (18):

$$\left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& 0.73& 0.74& \begin{array}{c}0.75\end{array}\\ {B}_{2}& 0.80& 0.81& 0.77\\ {B}_{3}& 0.73& 0.71& \begin{array}{c}0.71\end{array}\end{array}\right) \left(\begin{array}{cccc}.& {A}_{1}& {A}_{2}& {A}_{3}\\ {B}_{1}& 0.74& \begin{array}{c}0.75\end{array}& \begin{array}{c}0.77\end{array}\\ {B}_{2}& 0.80& 0.80& 0.76\\ {B}_{3}& \begin{array}{c}0.75\end{array}& 0.75& \begin{array}{c}0.74\end{array}\end{array}\right)$$
  • Step 4 Computation of the deviational values

In this step, we compute the deviational values using Eq. (19):

$${D}_{1\mathrm{arith}}=0.68,\; {D}_{2\mathrm{arith}}=0.68,\; {D}_{3\mathrm{arith}}=0.71$$

$${D}_{1\mathrm{geo}}=0.71, {D}_{2\mathrm{geo}}= 0.70, {D}_{3\mathrm{geo}}=0.73$$
  • Step 5 Computation of optimal weight

In this step, we compute the optimal weight vector using Eq. (20):

$${\upsigma }_{1\mathrm{arith}}= 0.33, {\upsigma }_{2\mathrm{arith}}= 0.33, {\upsigma }_{3\mathrm{arith}}=0.34$$
$${\upsigma }_{1\mathrm{geo}}= 0.33, {\upsigma }_{2\mathrm{geo}}= 0.33, {\upsigma }_{3\mathrm{geo}}=0.34$$
  • Step 6 Formulation of positive and negative indexes

In this step, we estimate the positive and negative index values by making use of Eqs. (14) and (15); the estimated values are given below:

$${\mathrm{In}}_{\mathrm{arith}}^{+1}=0.15,\; {\mathrm{In}}_{\mathrm{arith}}^{+2}=0.28,\; {\mathrm{In}}_{\mathrm{arith}}^{+3}=0.10$$
$${\mathrm{In}}_{\mathrm{arith}}^{-1}=0.19,\; {\mathrm{In}}_{\mathrm{arith}}^{-2}=0.11,\; {\mathrm{In}}_{\mathrm{arith}}^{-3}=0.24$$
$${\mathrm{In}}_{\mathrm{geo}}^{+1}=0.14,\; {\mathrm{In}}_{\mathrm{geo}}^{+2}=0.22,\; {\mathrm{In}}_{\mathrm{geo}}^{+3}=0.13$$
$${\mathrm{In}}_{\mathrm{geo}}^{-1}=0.22,\; {\mathrm{In}}_{\mathrm{geo}}^{-2}=0.15,\; {\mathrm{In}}_{\mathrm{geo}}^{-3}=0.15$$
  • Step 7 Determining relative affinity coefficients

To evaluate the relative affinity coefficients, we employ Eq. (16) and calculate two sets of affinity coefficient values, which are given as follows:

$${\mathrm{Aff}}_{\mathrm{arith}}^{1}=0.12,\; {\mathrm{Aff}}_{\mathrm{arith}}^{2}=0.44,\; {\mathrm{Aff}}_{\mathrm{arith}}^{3}=0.41$$
$${\mathrm{Aff}}_{\mathrm{geo}}^{1}=0.22,\; {\mathrm{Aff}}_{\mathrm{geo}}^{2}=0.19,\; {\mathrm{Aff}}_{\mathrm{geo}}^{3}=0.07$$
  • Step 8 Ranking

In accordance with the estimated affinity coefficient values of the alternatives, we rank the alternatives for both cases:

$${A}_{2}> { A}_{3}>{ A}_{1}(\mathrm{When \,arithmetic \, operator \,is\, used})$$
$${A}_{2}>{ A}_{1}>{ A}_{3}(\mathrm{When \,geometric\, operator\, is\, used})$$

Results and discussion

In this section, we present a comparative study of the GRA strategy and the MEREC approach in the pentagonal neutrosophic domain. Notably, we demonstrate the MCGDM technique with two operators (PNWAA, PNWGA) for both procedures. For both techniques, \({A}_{2}\) is the best alternative under the arithmetic and geometric aggregation operators. More specifically, when the arithmetic operator is applied, the ranking results of the two processes coincide exactly; in the case of the geometric aggregation operator, the best-ranked alternative preserves its position in both ranking methods, though there is a slight fluctuation in the ranking values of the remaining alternatives. Comparing the ranking results, we conclude that the MEREC technique strongly supports the GRA ranking result in all respects (Figs. 5, 6).

Fig. 5
figure 5

Ranking Comparison between GRA and MEREC method with respect to arithmetic operator

Fig. 6
figure 6

Ranking Comparison between GRA and MEREC method with respect to geometric operator

Conclusion and future research scope

It is well established that PNN theory is captivating, proficient and capable of handling vagueness with immense productivity. In this research study, we have introduced a new MCGDM technique in the PNN environment endorsing the GRA and MEREC strategies to encourage the grey system and evaluate the best alternative under an imprecise dataset. Here, we have executed the GRA and MEREC strategies to find the best crop under the optimal agricultural scenario by taking the opinions of several experts. Appropriate arithmetic and geometric aggregation operators are introduced to capture and compute the numerical imprecise data in the PNN field. A sensitivity analysis is performed to show the efficiency of the executed techniques. Some major findings of our study are listed as follows:

  1. (i)

    Here, two logical aggregation operators, namely PNWAA and PNWGA, are deployed to execute the MCGDM technique, and it is found that the “plantation crop” (i.e. alternative A2) is the best alternative under both underlying aggregation operators. This result is sustained when the GRA and MEREC methods are applied.

  2. (ii)

    During the sensitivity analysis and numerical simulation, it is observed that the output is more robust under the arithmetic operator than under the geometric operator. The best alternative A2 preserves its position when the underlying weights fluctuate within a certain range under the arithmetic operator, whereas the result alters under the same fluctuation when the geometric operator is used. Thus, we recommend our MCGDM strategy endorsing the GRA and MEREC schemes in the PNN environment with the arithmetic aggregation operator for future study.

  3. (iii)

    Here, we also observe that the MEREC technique strongly supports the GRA analysis in the PNN environment.

As future scope of this research study, this research idea can be implemented in various domains such as engineering-based structural issues, medical diagnosis problems, clustering analysis, various selection and orientation problems, image processing, big data analysis and pattern recognition.