1 Introduction

In 1976, Robert Axelrod [1] introduced cognitive maps, a pioneering method for modeling the causal relationships between concept variables. Cognitive maps are constructed from interconnected nodes, each representing a concept variable, with directed arcs denoting the causal influences between them. Axelrod’s cognitive maps offered simplicity while providing valuable insights into the causality of complex problems [2]. A significant advance came in 1986, when Kosko proposed fuzzy cognitive maps (FCMs) [3], which introduced enhanced quantification capabilities. Subsequently, the field witnessed a proliferation of variants, each tailored to increasingly challenging modeling scenarios.

The literature is rich with noteworthy research publications, contributing to the continuous evolution of cognitive maps. Hagiwara’s work delved into non-linear causality in arcs [4], Carvalho and Tomé explored rule-based FCMs [5], Khor ventured into fuzzy knowledge maps [6], and Augustine et al. introduced rate cognitive maps for failure mode identification [7]. As the field advanced, new variants emerged, such as Salmeron’s fuzzy grey cognitive maps [8], Cai et al.’s evolutionary fuzzy cognitive maps [9], Iakovidis and Papageorgiou’s intuitionistic fuzzy cognitive maps [10], Ruan et al.’s belief degree-distributed fuzzy cognitive maps [11], and Chunying et al.’s rough cognitive maps [12].

The broad scope of applications for FCMs spans diverse fields, showcasing their versatility and utility. They have found successful use in prediction and classification tasks [13], decision-making within the medical domain [10], navigation systems [14], forecasting [15], risk assessment [16], demand prediction [17], rural development initiatives [18], health management systems [19], municipal waste management [20], integrated environmental assessment [21], explainable artificial intelligence [22], linguistic reasoning [23], integration with statistical methods [24], integration with grey mathematics [25], dynamic decision support systems [26], handling causal interdependencies in complex systems [27], integration with D-number theory [28], and automatic structure determination from data [29].

Despite the inherent relevance of causal relationships in failure occurrences, direct applications of cognitive maps in failure analysis are surprisingly scarce in the existing literature. A notable exception is the contribution of Peláez and Bowles, who utilized FCMs for system modeling in Failure Mode and Effects Analysis (FMEA) [30].

The present paper highlights two significant shortcomings in the standard architecture of FCMs, as used in [30]. The first limitation is the tendency of the traditional tanh threshold function to diminish the values of state vector components. The second is the difficulty of clearly identifying failure states using this function. To address these issues, the authors propose adopting a modified tanh threshold function in FCMs. The effectiveness of this method is demonstrated through a practical example involving the refill tip of a ballpoint pen. In a comparison of the standard tanh and the modified tanh threshold functions, the proposed approach exhibits superior performance in failure mode identification.

2 Methodology for Capturing Failure Modes Using FCMs

The presented research methodology involves a comprehensive analysis of the refill tip example, as illustrated in Fig. 1. The Fuzzy Cognitive Map (FCM) for the refill tip, depicted in Fig. 2, was developed through meticulous consultations with multiple technical experts from various fields. To provide a clear understanding of the FCM, concise descriptions of the concepts associated with each numbered node are presented in Table 1. Notably, the grey-shaded nodes, named “function quantifiers,” hold particular significance. These nodes represent the desired functionality of the refill tip, and their concept values directly quantify the system’s degradation levels. Close monitoring of the concept values of these function quantifiers is crucial, as any value reaching the specified failure threshold (either −1 or +1) indicates an impending failure.

The Delphi method [31] was used to reach a consensus on the arc weights. The Delphi method is a structured communication technique that involves a panel of experts. Its primary purpose is to achieve group consensus or make decisions by surveying these experts. The process unfolds through multiple rounds of questionnaires, where, after each round, a facilitator provides an anonymized summary of the experts’ forecasts and the reasons behind their judgments. Experts then revise their answers based on feedback from other panel members, and this iterative process continues until a predefined stopping criterion (such as consensus or result stability) is met. For the present study, the expert panel comprised university professors well versed in FCM architectures and design experts from established ballpoint pen manufacturing companies.

Initially, each expert received a blank adjacency matrix format. They were tasked with assigning weights to describe the strength of causal relations in each cell. Additionally, experts freely commented on their decisions. Subsequently, statistical details such as mean and median outcomes were provided. The filled-out adjacency matrix formats and comments were revealed to the experts while maintaining anonymity. Based on this information, a second round of weight assessment occurred, prompting experts to revise their initial assessments. The outcomes of this second round revealed that most revised weights fell within the interquartile range, signaling consensus. The final weights were determined as the medians from the second round and organized into an adjacency matrix.
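To make the aggregation concrete, the following Python sketch implements one simple reading of the consensus criterion described above: the final weight of an arc is taken as the median of the second-round responses, and consensus is flagged when those responses fall within the interquartile range of the first round. The expert responses shown are hypothetical, not the actual study data.

```python
import numpy as np

def delphi_consensus(round1, round2):
    """Aggregate one arc's expert weights: final weight = median of
    round 2; consensus is flagged when all round-2 responses lie
    within the interquartile range of round 1."""
    q1, q3 = np.percentile(round1, [25, 75])   # interquartile range of round 1
    consensus = all(q1 <= w <= q3 for w in round2)
    return float(np.median(round2)), consensus

# Hypothetical responses from five experts for a single arc weight
round1 = [0.5, 0.6, 0.7, 0.4, 0.8]
round2 = [0.6, 0.6, 0.7, 0.5, 0.6]
weight, ok = delphi_consensus(round1, round2)
print(f"final weight = {weight:.2f}, consensus = {ok}")
```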

The arc weights finalized through the Delphi process are compiled in the form of an adjacency matrix (E), given in Table 2. The adjacency matrix is a mapping of arc weights against interconnected concepts. In the present problem, there are 15 concepts. Hence, the adjacency matrix in Table 2 is a square matrix of size (15 × 15). Each non-zero entry in the matrix indicates the weight carried by the arc connecting the corresponding concepts, while an entry of zero indicates the absence of a correlation (or arc) between them.
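For completeness (the inference rule itself is not restated in this section), the standard FCM update assumed in the sketches below multiplies the current state vector C(t) with E and passes the result through the threshold function T:

$$C\left(t+1\right)=T\left(C\left(t\right)\,E\right),\qquad {C}_{i}\left(t+1\right)=T\left(\sum _{j=1}^{15}{e}_{ji}\,{C}_{j}\left(t\right)\right)$$

A common variant additionally includes each node’s own previous value inside the threshold; the choice of variant does not affect the thresholding argument developed below.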

Fig. 1 Cross-section of the refill tip of a ballpoint pen [7]

Fig. 2 Finalized FCM of the refill tip

Table 1 Node concept descriptions
Table 2 Adjacency matrix (E) with arc weights

The methodology that was adopted for identifying failures is detailed below:

a) An appropriate real numerical range for the concept variables contained in the nodes of the FCM is required. The range finalized in this case is [−1, +1]. Next, the arc weights have to be assigned; the adjacency matrix (Table 2) was deployed for this purpose.

b) Each function-quantifier of the cognitive map was classified and labelled into three categories: (i) larger the better, (ii) smaller the better, and (iii) nominal the best, depending on the type of function quantified by the function-quantifier. For example, the function-quantifier “Clearance between ball and tip enclosure” belongs to the category nominal the best. Table 3 gives the complete classification of all function quantifiers used in the present example, along with their failure indication threshold values.

c) The standard tanh threshold function, given by (1), is replaced by the modified tanh function given by (2).

Table 3 Classification and thresholds of function quantifiers
$$T\left(x\right)=\frac{{e}^{\lambda x}-{e}^{-\lambda x}}{{e}^{\lambda x}+{e}^{-\lambda x}}$$
(1)
$$T\left(x\right)=\begin{cases}-1 & \text{if } x<-1\\[4pt] \dfrac{{e}^{2\lambda }+1}{{e}^{2\lambda }-1}\cdot \dfrac{{e}^{\lambda x}-{e}^{-\lambda x}}{{e}^{\lambda x}+{e}^{-\lambda x}} & \text{if } -1\le x\le +1\\[4pt] +1 & \text{if } x>+1\end{cases}$$
(2)

The modification shown in (2) rescales the tanh curve so that it attains the ordinate values −1 and +1 exactly at the extremes of the interval [−1, +1], and clamps it outside that interval. This ensures that concept values at the extremes of the interval [−1, +1] are not reduced by the application of the threshold function.

In the above equations, λ is a constant parameter that determines the slope or steepness of the threshold function.
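Both threshold functions are straightforward to implement directly from Eqs. (1) and (2). A minimal Python sketch follows (the paper’s own simulations were written on Scilab); λ defaults to 1 here purely for illustration:

```python
import numpy as np

def tanh_threshold(x, lam=1.0):
    """Standard tanh threshold function, Eq. (1)."""
    return np.tanh(lam * x)

def modified_tanh_threshold(x, lam=1.0):
    """Modified tanh threshold, Eq. (2): tanh rescaled by
    (e^{2*lam}+1)/(e^{2*lam}-1) so that T(+/-1) = +/-1,
    and clamped outside [-1, +1]."""
    factor = (np.exp(2 * lam) + 1) / (np.exp(2 * lam) - 1)
    return np.clip(factor * np.tanh(lam * np.clip(x, -1.0, 1.0)), -1.0, 1.0)

# Quick check: the standard tanh never reaches the failure threshold,
# while the modified version attains exactly +1 at x = 1.
print(tanh_threshold(1.0))           # ~0.7616 for lam = 1
print(modified_tanh_threshold(1.0))  # 1.0
```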

The reasoning behind the structure of the proposed modified tanh function is as follows:

At x = +1, the value of T(x) using the standard tanh function is:

$$T\left(x\right)=\frac{{e}^{\lambda }-{e}^{-\lambda }}{{e}^{\lambda }+{e}^{-\lambda }}$$
(3)

However, it is required to be +1. To achieve this, the multiplication factor required for T(x) is:

$$\frac{{e}^{\lambda }+{e}^{-\lambda }}{{e}^{\lambda }-{e}^{-\lambda }}$$
(4)

This factor can be expressed in an alternative form as:

$$\frac{({e}^{\lambda }+{e}^{-\lambda })/{e}^{-\lambda }}{{(e}^{\lambda }-{e}^{-\lambda })/{e}^{-\lambda }}=\frac{({e}^{2\lambda }+1)}{{(e}^{2\lambda }-1)}$$
(5)
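As a quick numerical check of this construction, take λ = 1:

$$\tanh\left(1\right)\approx 0.7616,\qquad \frac{{e}^{2}+1}{{e}^{2}-1}\approx \frac{8.389}{6.389}\approx 1.3130,\qquad 0.7616\times 1.3130\approx 1.0000,$$

so the rescaled curve attains +1 exactly at x = +1, as required.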

Now, this factor must multiply the function throughout the range [−1, +1] to preserve the tanh profile. The end result is the modified tanh function given by Eq. (2), which is plotted in Fig. 3.

Fig. 3 Modified tanh function as given by Eq. (2)

d) The FCM was simulated with a pre-designed starting input vector to identify those function-quantifiers that indicate failures. Failures are indicated in the form of large excesses (for smaller the better and nominal the best types) or deficiencies (for larger the better and nominal the best types).

3 FCM Simulation and Results

The FCM simulation code was developed and executed on the Scilab software platform. The FCM was simulated twice with different threshold functions. With the standard tanh threshold function, the FCM reached a stable state after 59 iterations; with the modified tanh function, it converged to a steady state in only 13 iterations. Since all other simulation parameters were identical in both cases, the faster convergence is attributable to the modified threshold function. The final iteration values of the node concepts for both simulations are presented in Table 4.
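For readers wishing to reproduce the runs, the iteration loop itself is short. The Python sketch below (the original code ran on Scilab) iterates the update rule until the state vector stops changing; E must be populated with the Table 2 weights and c0 with the pre-designed starting vector, both shown here only as placeholders:

```python
import numpy as np

def simulate_fcm(E, c0, threshold, tol=1e-6, max_iter=1000):
    """Iterate C(t+1) = T(C(t) @ E) until the state vector is stable.
    Returns the final state vector and the number of iterations used."""
    c = np.asarray(c0, dtype=float)
    for it in range(1, max_iter + 1):
        c_next = threshold(c @ E)
        if np.max(np.abs(c_next - c)) < tol:   # steady state reached
            return c_next, it
        c = c_next
    return c, max_iter

E = np.zeros((15, 15))   # placeholder: fill in the arc weights from Table 2
c0 = np.zeros(15)        # placeholder: pre-designed starting input vector
state, iters = simulate_fcm(E, c0, threshold=np.tanh)  # or the modified version above
print(f"converged after {iters} iterations")
```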

Table 4 Iteration results for both cases

4 Final Discussion

For failure mode identification, attention must be directed to the values within the shaded cells of Table 4, which correspond to the function quantifiers. A failure mode is considered to be indicated when any function quantifier reaches the critical failure threshold of either −1 or +1.

For the sake of clarity, consider the function quantifier labeled “Volume of ink deposited on paper” (Node no. 14). Upon inspection of Table 4, it can be seen that the final value of this quantifier, obtained using the modified tanh function, is +1. Referring to Table 3, this function quantifier falls under the category “Nominal the best.” Since the final value lies at the upper extreme of the interval [−1, +1], the indicated failure mode is an excess deposition (overflow) of ink on paper. By similar reasoning, the failure modes associated with each of the other function quantifiers can be deduced.
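This interpretation step can be made mechanical. A minimal Python sketch follows, assuming the category labels and ±1 thresholds of Table 3 (the category strings are illustrative abbreviations):

```python
def interpret(name, category, value):
    """Translate a function-quantifier's final value into a failure
    verdict, following the Table 3 categories and +/-1 thresholds."""
    if category == "larger-the-better" and value <= -1:
        return f"{name}: failure (deficiency)"
    if category == "smaller-the-better" and value >= +1:
        return f"{name}: failure (excess)"
    if category == "nominal-the-best" and abs(value) >= 1:
        kind = "excess" if value > 0 else "deficiency"
        return f"{name}: failure ({kind})"
    return f"{name}: no failure indicated"

# Node 14 under the modified tanh run (Table 4): final value +1
print(interpret("Volume of ink deposited on paper", "nominal-the-best", +1.0))
```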

The complete process of identifying failures in the context of the proposed methodology can be summarized as the following algorithm (a consolidated code sketch follows the list):

a. Select an appropriate real numerical range for the nodes of the FCM and populate its arcs with weights (found using an established methodology) representative of their respective strengths of relations.

b. Classify and label each function-quantifier of the cognitive map according to three categories: (i) larger the better, (ii) smaller the better, and (iii) nominal the best, depending on the type of function quantified by the function-quantifier. For example, the function-quantifier “Clearance between ball and tip enclosure” from the refill tip example belongs to the category nominal the best.

c. Simulate the FCM with a starting input vector to identify those function-quantifiers that indicate failures. Failures are indicated in the form of large excesses (for smaller the better and nominal the best types) or deficiencies (for larger the better and nominal the best types).

d. Map the failure-indicating function-quantifiers back to their corresponding system components.

e. Declare the identified functions/components to have failed.
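Putting the steps together, a consolidated sketch is given below. It reuses simulate_fcm and interpret from the earlier sketches; the node index and the node-to-component mapping are hypothetical stand-ins, not taken from Table 1:

```python
import numpy as np

# End-to-end sketch of steps a-e. Indices are zero-based, so node
# no. 14 of the paper corresponds to index 13 here.
quantifiers = {
    13: ("Volume of ink deposited on paper", "nominal-the-best"),
}
component_of = {13: "ball/tip interface"}          # hypothetical mapping (step d)

E = np.zeros((15, 15))                             # step a: arc weights from Table 2
c0 = np.zeros(15)                                  # pre-designed starting vector
state, _ = simulate_fcm(E, c0, threshold=np.tanh)  # step c: simulate

for node, (name, category) in quantifiers.items():
    verdict = interpret(name, category, state[node])   # apply step b categories
    if "no failure" not in verdict:
        # steps d-e: map back to the component and declare the failure
        print(f"{verdict}; affected component: {component_of[node]}")
```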

5 Conclusion

When utilizing the standard tanh function, the curve approaches its asymptotes at the ordinate values of −1 and +1. As a result, any degradation captured by the increase or decrease of concept values is noticeably compressed towards zero due to the tanh function’s inherent properties. Moreover, reaching the precise failure thresholds of concept nodes (i.e., either −1 or +1) is practically unattainable with the tanh function. This limitation is clearly illustrated by the iteration values in the shaded cells under the tanh function in Table 4.

In contrast, the modified tanh function is capable of representing failure modes clearly and accurately by attaining the critical threshold values exactly. Its effectiveness in identifying and pinpointing failure modes is evident from the results obtained in the analysis.

Therefore, based on the comparison of the two threshold functions, the modified tanh function is a vital contributing factor to successful failure mode identification in this scenario. Its construction ensures that concept values at the extremes of the real interval [−1, +1] remain unaltered by thresholding, providing clear insight into the failure modes of the system under consideration.