1 Introduction

We envisage an IoT environment where things at the edge of the network convey locally inferred knowledge to IoT applications. We focus on a setting that involves networks of distributed wireless devices (e.g., sensor nodes and actuators, smart meters) capable of sensing and of locally processing & reasoning about events. Each node performs measurements and locally extracts and infers knowledge over these measurements for event reasoning; e.g., wireless sensors spread over a geographical area are responsible for inferring fire or flood incidents. The fundamental requirement for materializing predictive intelligence at the edge of the network is that nodes autonomously perform data sensing & inference locally and disseminate only the inferred knowledge (e.g., minimal sufficient statistics) to their neighbors and concentrators. Nodes convey intelligence to concentrators for event inference.

Many critical IoT applications have been developed on top of contextual data streams captured by nodes for event identification and reasoning. Events relate to critical aspects, e.g., security issues or violations of predefined constraints. For instance, in security and environmental monitoring applications, it is imperative that the monitoring infrastructure apply an efficient mechanism to derive alerts when specific criteria are satisfied [1, 8, 10, 12, 23]. We can identify two main orientations in terms of data acquisition, transfer, and contextual reasoning:

  • Orientation 1: Centralized Context Reasoning. Nodes transfer their measurements to a concentrator, e.g., a sink node, back-end system, or Cloud center, which processes the data and possesses the intelligence to infer events, and

  • Orientation 2: Collaborative Context Reasoning. Nodes locally process data, locally infer knowledge, and have the intelligence for event reasoning in a collaborative manner.

In this paper, we elaborate on the second orientation through a collaborative, intelligent, and adaptive model for local data processing and event reasoning. This federated reasoning among nodes involves three perspectives of the captured information: (i) predicted context, (ii) contextual inference of outliers, and (iii) context fusion based on expert knowledge. These different Context Perspectives (CPs) are aggregated into a Type-2 Fuzzy Sets inference engine, which locally concludes on an event. Then, through a proposed knowledge-centric nodes clustering scheme operating in a federated way, nodes disseminate only pieces of inferred knowledge among themselves to unanimously reason about an event based on their local views. In turn, representative nodes of such collaborating clusters locally reason about context and then report the aggregated inference to the concentrators. The concentrators form a contextual event map and apply strategies to handle the inferred events, e.g., warn/trigger flood first responders. The key strength is that our model combines local context processing & inference at the network edge with knowledge-centric nodes clustering. The challenge is to collaboratively process & infer events while minimizing the false alarms / erroneous inferences that affect decision making, i.e., unsuitable decisions in handling hazardous phenomena.

1.1 Related work

Event processing & inference is adopted to support the development of IoT applications [8]. From the sensing and processing perspective, normally in the literature, the sensing devices monitor a specific area and deliver the captured data to a back-end system for processing, event inference, and alerts/decision making [10, 24]. Analysis on architectural solutions and case studies on event inference mechanisms is discussed in [3]. The back-end system in [18] adopts aggregate methodologies for event inference, while in [17] it supports IoT applications for air quality monitoring in indoors environments. Such system collects contextual data from temperature, humidity, light, and air quality sensors and then centrally infer events. Moreover, the centralized context reasoning systems in [6, 12], and [23] provide early inference of forest fire events based on vision-enabled sensors, home monitoring based on the received signal strength of sensors, and surveillance of critical areas, respectively. In wireless sensors network deployments, e.g., [2, 5, 11, 16], the back-end systems centrally provide event inference for specific areas by minimizing false alerts.

From the quality of inference perspective, event inference utilizing the principles of approximate reasoning, like Fuzzy Logic (FL), has proved to be a useful technique for delivering high quality of inference. The model in [7] predicts the peak particle velocity of ground vibration levels. Such a model adopts an FL-based inference scheme and utilizes, as a parameter, the distance from the blast face to the vibration monitoring point. The FL-based context reasoning model in [22] estimates the radiation levels in the air. The adoption of FL aims to handle missing values and, thus, to derive a mechanism capable of delivering alerts. The FL-based fusion model in [35] reduces uncertainty and false positives within the process of fault detection. In [4], a specific FL-based inference system is proposed for ambient intelligence environments. Such a system learns the users’ behavior in order to adapt to the users’ profiles. In [13, 14], the authors propose a centralized reasoning system that derives immediate identification of events based only on univariate data. Such a system adopts data fusion and prediction for efficiently aggregating sensor measurements. Then, the system adopts FL for handling the uncertainty in the event reasoning.

In all the aforementioned efforts, the edge devices transfer their data to a back-end system, which, based on certain computing and reasoning paradigms (e.g., data aggregation, FL-based reasoning), infers events and provides alerts/warnings to IoT applications. The major difference of our collaborative machine learning mechanism compared to these efforts is the localized event processing & inference at the network edge instead of a centralized reasoning approach. In all these research efforts, the back-end system centrally undertakes the responsibility of event reasoning [15] and alert generation once all contextual data are delivered throughout the network [19].

Our federated reasoning approach drastically departs from the centralized predictive intelligence paradigm toward a fully distributed intelligence perspective. Our challenge is to push the intelligence for event processing & inference to the edge nodes, which, equipped with computing and sensing capabilities, provide partial awareness of an event. By enhancing this local event inference with different CPs, our mechanism (i) avoids raw data transfer from IoT nodes to a back-end system, (ii) favors conveying only the minimal inferred knowledge from the edge to concentrators by introducing a knowledge-centric nodes clustering, (iii) minimizes the false alarm rate by introducing advanced approximate inference over the CPs, and (iv) reduces the communication overhead induced by transferring humongous data volumes from sensors to concentrators through localized inference. In our orientation, the edge nodes do not share and/or relay contextual information. Instead, they conditionally transfer inferred pieces of knowledge, if necessary, in light of high quality of inference. Furthermore, from the quality of inference perspective, our mechanism adopts Type-2 Fuzzy Sets over multivariate contextual data instead of Type-1 Fuzzy Sets over univariate data as, e.g., in [13], to cope with the induced uncertainty of event knowledge representation.

1.2 Research excellence & contribution

To the best of our knowledge, our collaborative machine learning mechanism is a first attempt to materialize the concept of federated reasoning by conveying predictive intelligence for real-time event inference to the edge of the network. This is achieved by fully exploiting the computing & sensing capabilities of IoT nodes based on different CPs. Our vision of intelligent edge computing is materialized by conditionally delivering inferred knowledge from the network edge with high quality of inference rather than transferring data to the back-end system. In combination with the proposed knowledge-centric clustering scheme, our novel mechanism is robust in terms of erroneous event inference (false alerts) and reduces the communication overhead between nodes and back-end system. The obtained outcome of this research is: (i) accurate event inference close to the source of the contextual information, (ii) significantly low communication overhead by localized belief-centric groupings, thus avoiding data transfer to the back-end systems, and (iii) energy-efficient and robust inference in terms of imprecise and faulty data streams.

The major technical contributions of this research are:

  • A temporal nearest-neighbors exponential smoothing model for localized context prediction;

  • A conditionally growing adaptive vector quantization model for localized context outliers inference based on the Adaptive Resonance Theory;

  • A time-optimized stochastic novelty detection & adaptation model based on the Optimal Stopping Theory. We provide the theoretical analyses for the above-mentioned statistical learning and optimization models;

  • A collaborative knowledge-centric nodes clustering scheme and a Type-2 FL-based event inference combining predicted and fused context with outliers identification;

  • Asymptotic time and space complexities of the proposed algorithms and collaborative methods and a comprehensive evaluation of the nodes energy consumption in terms of communication and computation/processing cost;

  • Performance and comparative assessment of our mechanism with: (i) the local voting scheme and (ii) the centralized aggregation-based event detection schemes achieving up to three orders of magnitude less energy consumption in an IoT environment.

1.3 Organization

The paper is organized as follows: Section 2 presents the rationale and overview of our federated reasoning approach. Sections 3 and 4 introduce the local context prediction and outliers detection, respectively. Section 5 proposes a novelty & adaptation mechanism, while Sections 6 and 7 introduce context fusion and Type-2 FL-based inference. Section 8 discusses the collaborative knowledge-centric nodes clustering. Section 9 reports on the asymptotic time and space complexities of the proposed algorithms and methods and discusses the nodes' energy consumption in terms of communication and computation/processing cost. Section 10 presents a comprehensive performance and comparative assessment with other event identification mechanisms. Section 11 concludes the paper.

2 Overview & rationale

2.1 Overview

We model the topology of an edge network of sensing and computation nodes (nodes) by an undirected communication graph as shown in Fig. 1 (left). Let \(\mathcal {G} = (\mathcal {N}, \mathcal {E})\) denote an undirected graph with vertex set \(\mathcal {N} = \{1, 2,\ldots,n\}\) and edge set \(\mathcal {E} \subset \{ \{i, j \} | i, j \in \mathcal {N}\}\), where each edge {i,j} is an unordered pair of distinct nodes. A graph is connected if for any two vertices i and j there exists a sequence of edges (a path) \(\{i,k_{1}\},\{k_{1},k_{2}\},\ldots,\{k_{s-1},k_{s}\},\{k_{s},j\}\) in \(\mathcal {E}\). Let \(\mathcal {N}_{i} = \{j \in \mathcal {N}| \{i, j\} \in \mathcal {E} \}\) denote the set of neighbors of node i. Let also \(\mathcal {C} = \{1, \ldots , c\}\) be a set of concentrator nodes that act as sink nodes for a specific subset of nodes in \(\mathcal {N}\). Concentrators gather (digested) context knowledge from certain nodes in order to provide to the IoT applications the corresponding reasoning results by those nodes on the presence of an event of interest. The concentrators could directly connect to a fixed Internet infrastructure, e.g., a cloud platform for predictive analytics.

Fig. 1 (Left) Overall architecture: IoT nodes locally process data and infer events, where cluster heads (CHs) report the aggregated degree of belief to concentrators; (right) internal context processing and reasoning on an IoT node: from context sensing to local event inference

The nodes monitor a specific area by sensing multiple contextual variables, like ambient temperature, humidity, and wind speed, and perform local reasoning to infer an event of interest, e.g., a fire or flood event. We assume that nodes observe the same phenomenon. The degree of occurrence or degree of belief of an event, denoted by μ_i, is locally inferred by node i. This belief is disseminated by node i to its neighbors \(\mathcal {N}_{i}\) to further enhance the contextual knowledge of its neighborhood. This leads to a clustering of nodes according to their view, thus contributing to distributed event reasoning.

The nodes clustering is achieved by the election of a node, referred to as the Cluster Head (CH), based only on the disseminated degrees of belief. Groups of nodes are formed, each one involving a unique CH. Each CH aggregates its members’ degrees of belief and communicates with its concentrator, delivering an inference result. In this case, no centralized process is adopted for clustering and data aggregation on event identification. The CHs convey aggregated knowledge to concentrators, thus minimizing the messages circulated in the network. Note, the messages exchanged among members and CHs are not raw data. Instead, they are pieces of inferred context represented by the degrees of belief, as will be elaborated later. The overall proposed architecture is shown in Fig. 1 (left).

2.2 Rationale

Our multi-perspective collaborative context reasoning model for each node builds on top of a local FL-based inference engine (Type-2 FL System; introduced later) that combines three perspectives of context: (i) current fused context, (ii) predicted context, and (iii) outliers context. This model locally derives the degree of belief μ_i for node i each time a vector of contextual values, hereinafter referred to as a context vector, is captured. Node i orchestrates the following reasoning processes to infer an event:

  • Context Fusion evaluates the event inference rule defined by experts from the current context vector.

  • Context Prediction utilizes the trend of historical context vectors experienced on node i for a short-term forecast of context.

  • Context Outliers & Novelty incrementally evaluates and revises its belief that the currently context vectors significantly deviate from their statistical patterns experienced on node i.

  • Fuzzy Context Inference, which is realized by a Type-2 FL System (T2FLS), combines predicted and outliers context vectors with the current fused context. T2FLS derives the μ i for node i as a local inference.

Assume a discrete time domain \(t \in \mathbb {T} = \{1, 2, {\ldots } \}\). A context (row) vector \(\mathbf {x} = [x_{1}, \ldots , x_{d}] \in \mathbb {R}^{d}\) consists of d variables \(x_{j} \in \mathbb {R}\) corresponding to sensor measurements. A node i at time t captures context vector x(t) and combines the Context Perspectives (CPs):

  • (CP1) the current belief of an event by evaluating the expert’s knowledge over x(t).

  • (CP2) how much x(t) deviates from the predicted context given a short history,

  • (CP3) to what degree x(t) is considered an outlier given the statistical distribution of patterns. Figure 1 (right) shows all the context processing and reasoning processes for node i: from context sensing to local event inference.

Concerning CP1, our model evaluates the belief of an event from the current context. Since CP1 constitutes a rule-based baseline solution for event inference, we move a step further to incorporate knowledge from CP2 and CP3. As we show in our evaluation, the fusion of these CPs results in more sophisticated event reasoning.

Concerning CP2, node i stores the most recent m vectors x(t−m), x(t−m+1), …, x(t−1). Based on this history, node i predicts the context vector at time t, \(\hat {\mathbf {x}}(t)\), as the conditional expectation conditioned on the recently observed history, i.e.,

$$\begin{array}{@{}rcl@{}} \hat{\mathbf{x}}(t) = \mathbb{E}[\left.\mathbf{x}(t) \right| \mathbf{x}(t-1), \ldots, \mathbf{x}(t-m)]. \end{array} $$
(1)

Node i then captures the actual context x(t), and the prediction error is \(e(t) = \lVert \mathbf {x}(t) - \hat {\mathbf {x}}(t) \rVert \), where ∥⋅∥ denotes the Euclidean norm. The rationale in CP2 is that the prediction error gives insight into how far the actual vector deviates from the expected vector based on the short-term history experienced on node i. If the current context deviates from the expected context, then this instantaneously indicates that the recently observed normal state has changed. However, we should take into consideration the statistical patterns from the entire history of context vectors to enhance our belief on event inference.

Concerning CP3, node i incrementally estimates the probability distribution of context p(x). This unknown distribution is approximated by specific pattern vectors \(\mathbf {w}_{k} \in \mathbb {R}^{d}\), k ∈ [K] = {1, …, K}, which represent the so-far observed vector space \(\mathbb {D} \subset \mathbb {R}^{d}\). The number K of those patterns is not necessarily fixed and is initially unknown. Each pattern w_k is the representative of the (convex) vector subspace \(\mathbb {D}_{k} \subset \mathbb {D}\). The p(x) is approximated by patterns based on the probability p(x|w_k) of observing x as being derived from the subspace \(\mathbb {D}_{k}\) represented by w_k. As will be discussed, this probability depends on the distance between x and w_k. The rationale in CP3 is that node i infers whether the current x deviates significantly from the (so far) statistical patterns. In turn, node i assesses whether or not x lies outside the observed vector space utilizing the assignment probability p(w^*|x) ∝ p(x|w^*)p(w^*) with respect to its closest pattern w^*, i.e.,

$$\begin{array}{@{}rcl@{}} \mathbf{w}^{*} = \arg \min\limits_{k \in [K]} \lVert \mathbf{x} - \mathbf{w}_{k}\rVert. \end{array} $$
(2)

As will be discussed, the assignment probability p(w^*|x) quantifies the instantaneous belief that the context is (i) an outlier, (ii) a novelty, thus expanding our current knowledge, or (iii) a normal instance of the space \(\mathbb {D}\), thus updating our current knowledge. To support such reasoning, node i is equipped with a time-optimized mechanism to incrementally update/adjust to possible novel vector subspaces identified, thus augmenting its current knowledge. This augmentation is achieved by increasing the number of patterns to better reflect the new vector subspaces, thus minimizing the risk of false consideration of outliers, which corresponds to false alarms in event inference. Before proceeding with the three CPs, we provide some preliminaries on unsupervised statistical learning and optimal stopping theory adopted in our analysis.

2.3 Preliminaries

2.3.1 Adaptive vector quantization

Adaptive Vector Quantization (AVQ) refers to an unsupervised learning (clustering) algorithm [31] that partitions a d-dimensional space \(\mathbb {R}^{d}\) into a fixed number K of subspaces. AVQ distributes K patterns w_1,…,w_K in \(\mathbb {R}^{d}\). A pattern w_k represents a subspace of \(\mathbb {R}^{d}\). AVQ learns as w_k changes in response to a random vector \(\mathbf {x} \in \mathbb {R}^{d}\). Competition selects which w_k the vector x modifies. The k-th pattern ‘wins’ if w_k is the closest to x. During partitioning, vectors x are projected onto their closest patterns, and patterns adaptively move around the space to form optimal partitions (subspaces of \(\mathbb {R}^{d}\)) that minimize the Expected Quantization Error (EQE):

$$\begin{array}{@{}rcl@{}} \mathcal{J}(\{\mathbf{w}_{k}\}) & = & \mathbb{E}\left[ \underset{k}{\min}\lVert \mathbf{x}-\mathbf{w}_{k} \rVert^{2} \right]. \end{array} $$
(3)

2.3.2 On-line machine learning & stochastic gradient descent

Stochastic Gradient Descent (SGD) [27] is widely adopted in on-line machine learning as an optimization method for incrementally minimizing an objective function \(\mathcal {J}(a)\), where \(a \in \mathcal {A}\) is a parameter from a parameter space \(\mathcal {A}\) and \(a^{*} \in \mathcal {A}\) minimizes \(\mathcal {J}\). SGD leads to fast convergence to a^* by adjusting the estimate of a so far in the direction (negative gradient \(-\nabla \mathcal {J}\)) that improves the minimization of \(\mathcal {J}\). SGD gradually changes a upon reception of a new training sample. The standard gradient descent algorithm updates a as: \({\Delta } a = - \eta \nabla _{a}\mathbb {E}[\mathcal {J}(a)]\), where the expectation is approximated by evaluating \(\mathcal {J}\) and its gradient over all training pairs and η ∈ (0,1). On the other hand, SGD simply does away with the expectation in the update of a and computes the gradient of \(\mathcal {J}\) using only a single training sample at step t = 1,2,…. The update of a_t at step t is given by:

$$\begin{array}{@{}rcl@{}} {\Delta} a_{t} & = & - \eta_{t} \nabla_{a_{t}}\mathcal{J}(a_{t}). \end{array} $$
(4)

In SGD, the learning rate {η t }∈ (0,1) is a step-size schedule, which defines a slowly decreasing sequence of scalars that satisfy:

$$\begin{array}{@{}rcl@{}} \sum\limits_{t=1}^{\infty}\eta_{t} = \infty \text{ and } \sum\limits_{t=1}^{\infty}{\eta_{t}^{2}} < \infty. \end{array} $$
(5)

Choosing the proper learning schedule is not trivial; a practical method is the hyperbolic schedule: \(\eta _{t} = \frac {1}{t+1}\) [27].
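For illustration, the following Python snippet applies the SGD update in (4) with the hyperbolic schedule to a simple quadratic objective; the objective \(\mathcal{J}(a) = \mathbb{E}[(x-a)^{2}]\) and the synthetic data stream are assumptions made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative objective: J(a) = E[(x - a)^2]; its minimizer a* is E[x].
a = 0.0
samples = rng.normal(loc=3.0, scale=1.0, size=10_000)  # synthetic stream of training samples

for t, x in enumerate(samples, start=1):
    eta_t = 1.0 / (t + 1)          # hyperbolic step-size schedule, satisfies (5)
    grad = -2.0 * (x - a)          # gradient of (x - a)^2 w.r.t. a for this single sample
    a -= eta_t * grad              # SGD update of Eq. (4)

print(f"estimated minimizer a = {a:.3f} (true minimizer = 3.0)")
```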

2.3.3 Optimal stopping theory

The Optimal Stopping Theory (OST) [28] deals with the problem of choosing the best time instance to take the decision of performing a certain action. This decision is based on sequentially observed random variables in order to maximize the expected reward. For random variables X_1, X_2, … and measurable functions Y_t = ψ_t(X_1, X_2, …, X_t), t = 1,2,…, and Y_∞ = ψ_∞(X_1, X_2, …), the problem is to find a stopping time τ that maximizes \(\mathbb {E}[Y_{\tau }]\). The τ is a random variable with values in {1,2,…} such that the event {τ = t} is in the Borel field (filtration) \(\mathbb {F}_{t}\) generated by X_1,…,X_t, i.e., the only available information we have obtained up to t: \(\mathbb {F}_{t} = \mathbb {B}(X_{1},\ldots ,X_{t})\). The decision to stop at t is a function of X_1,…,X_t and does not depend on future observables X_{t+1},…. The problem is to find the optimal stopping time t^* such that the supremum of \(\mathbb {E}[Y_{\tau }]\) is attained, i.e.,

$$\begin{array}{@{}rcl@{}} t^{*} & = & \inf \{t \geq 1 | Y_{t} = \text{ess} \sup\limits_{\tau \geq t} \mathbb{E}[Y_{\tau} | \mathbb{F}_{t}]\}. \end{array} $$
(6)

The (essential) supremum \(\text {ess} \sup _{\tau \geq t} \mathbb {E}[Y_{\tau } | \mathbb {F}_{t}]\) is taken over all stopping times τ such that τ ≥ t. The optimal stopping time t^* is obtained through the principle of optimality [30]. The theorem in [39] refers to the existence of the optimal stopping time.

Theorem 1 (Existence of Optimal Stopping Time)

If \(\mathbb {E}[\sup _{t} Y_{t}] < \infty \) and \(\lim _{t \to \infty } \sup _{t} Y_{t} \leq Y_{\infty }\) almost surely then the stopping time \(t^{*} = \inf \{t \geq 1 | Y_{t} = \text {ess} \sup _{\tau \geq t} \mathbb {E}[Y_{\tau } | \mathbb {F}_{t}]\}\) is optimal.

Proof

See [39]. □

3 Context prediction

The major concept of this CP is to interpret the deviation between the expected context and the actual context on node i as a reliable indication of an event. Context prediction (Fig. 1 (right)) involves a multidimensional time-series vector forecast at node i to locally predict the upcoming context \(\hat {\mathbf {x}}(t+1)\) given a sliding history window of m observed vectors x(t−m),…,x(t−1) and the current context x(t).

We enhance the multivariate Holt-Winters Double Exponential Smoothing (DES) with an h-Nearest Neighbors smoothing (hNN) at time t, 1 ≤ h ≤ m. DES takes into account the possibility of a time series exhibiting some form of trend with an updated slope component. In our case, we attempt to capture the temporal correlation of the noisy contextual data by exploiting the values of the temporal data nearest neighbors. The proposed temporal smoothing functionality over DES encapsulates the correlation of values ahead of time, which aligns with our idea of event reasoning using instantaneous context deviation. This deviation should involve the trend and slope, already captured by DES, and the temporal correlation of consecutive contextual values. By involving this temporal correlation between recent past and future values, we enhance event reasoning.

Our idea is to substitute each value x_i with the average \(x^{\prime }_{i}\) of the hNN backward and forward values, ∀i. That is, given the values x_i(k), k = t−h+1,…,t−1, the corresponding temporal hNN smoothed values \(x^{\prime }_{i}(k)\) are:

$$\begin{array}{@{}rcl@{}} x^{\prime}_{i}(k) = \frac{1}{h}\sum\limits_{\tau=k-\frac{h-1}{2}}^{k+\frac{h-1}{2}}x_{i}(\tau) \end{array} $$
(7)

Once the \(x^{\prime }_{i}\) values are smoothed, the forecast of the i-th variable at time t, x_i(t), is achieved using DES, ∀i. Evidently, when h = 1, our approach reduces to DES, i.e., without the temporal NN smoothing. In turn, we obtain:

$$\begin{array}{@{}rcl@{}} y_{i}(t) & = & \delta x^{\prime}_{i}(t) + (1-\delta)(y_{i}(t-1) + u_{i}(t-1)) \end{array} $$
(8)
$$\begin{array}{@{}rcl@{}} u_{i}(t) & = & \kappa(y_{i}(t)-y_{i}(t-1)) + (1-\kappa)u_{i}(t-1) \end{array} $$
(9)

where \(x^{\prime }_{i}(t)\) is the actual smoothed value from our hNN method at t as in (7), and y_i(t) and y_i(t−1) are the intercepts at time t and t−1, respectively. The u_i(t) and u_i(t−1) are the slopes (time series trends) at time t and t−1, respectively. The δ and κ are smoothing constants in (0,1). The δ value is used to smooth the new actual and the trend-adjusted previously smoothed intercept, while the κ value is used to smooth the trend. The smoothing constants determine the weight given to the most recent past values and control the degree of smoothing. Values close to 1 give weight to more recent values, while values near 0 distribute the weights to consider values from the more distant past within the window. We set δ = 0.7 and κ = 0.9 as in [32].

The expected context vector \(\hat {\mathbf {x}}(t) = [\hat {x}_{1}, \ldots , \hat {x}_{d}]\) at time t is predicted by the intercept vector y(t) = [y 1(t),…,y d (t)] and slope vector u(t) = [u 1(t),…,u d (t)] and then we obtain the deviation e(t) ∈ [0,1]:

$$\begin{array}{@{}rcl@{}} \hat{\mathbf{x}}(t) = \mathbf{y}(t) + \mathbf{u}(t) \text{ and } e(t) = d^{-\frac{1}{2}}\lVert \hat{\mathbf{x}}(t) - \mathbf{x}(t)\rVert, \end{array} $$
(10)

where the factor \(d^{-\frac {1}{2}}\) is a normalization factor over the Euclidean norm ∥⋅∥ to get a value in [0,1] given that x ∈ [0,1]d, i.e., each x i value is scaled in [0,1].
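A minimal Python sketch of the hNN-smoothed DES forecast of (7)-(10) follows; the boundary handling of the smoothing window, the value h = 3, and the toy data are assumptions for illustration, while δ = 0.7 and κ = 0.9 are taken from the text.

```python
import numpy as np

def hnn_smooth(series, h):
    """Temporal hNN smoothing (7): replace each value by the mean of its
    (h-1)/2 backward and forward neighbors (h odd); edges are clipped."""
    r = (h - 1) // 2
    out = np.empty(len(series))
    for k in range(len(series)):
        lo, hi = max(0, k - r), min(len(series), k + r + 1)
        out[k] = series[lo:hi].mean()
    return out

def des_forecast(window, h=3, delta=0.7, kappa=0.9):
    """One-step DES forecast (8)-(10) over an hNN-smoothed window of one variable."""
    s = hnn_smooth(np.asarray(window, dtype=float), h)
    y, u = s[0], 0.0                                  # initial intercept and slope
    for xp in s[1:]:
        y_prev = y
        y = delta * xp + (1 - delta) * (y + u)        # Eq. (8)
        u = kappa * (y - y_prev) + (1 - kappa) * u    # Eq. (9)
    return y + u                                      # predicted next value, Eq. (10)

def prediction_deviation(history, x_t):
    """Normalized deviation e(t) in [0,1] of Eq. (10), assuming x in [0,1]^d."""
    history = np.asarray(history, dtype=float)
    x_hat = np.array([des_forecast(history[:, j]) for j in range(history.shape[1])])
    return float(np.linalg.norm(x_hat - np.asarray(x_t)) / np.sqrt(len(x_t)))

# toy usage: m = 10 past context vectors with d = 3 variables scaled in [0,1]
rng = np.random.default_rng(0)
history = np.clip(np.linspace(0.2, 0.4, 10)[:, None] + 0.01 * rng.standard_normal((10, 3)), 0, 1)
print(prediction_deviation(history, x_t=[0.45, 0.42, 0.40]))
```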

4 Context outliers inference

This CP infers whether the current context is an outlier, which highly impacts the event reasoning (Fig. 1 (right)). We study the case where outlier context deviates significantly from the up-to-now statistical patterns learned locally on a node. If this deviation occurs regularly, then our model considers the possibility of a novelty and, thus, adapts to new knowledge; see Section 5.

4.1 Conditionally growing context vector quantization

Consider a node i, which captures context vectors x drawn from a space \(\mathbb {D}\). Based on those vectors, we identify the vector subspaces \(\mathbb {D}_{k}\), k ∈ [K], and estimate their patterns w_k and their number K, over which p(x) can be approximated. This is achieved by incrementally partitioning the space \(\mathbb {D} = \cup _{k=1}^{K}\mathbb {D}_{k}\). We study an incremental AVQ for partitioning \(\mathbb {D}\) into K (unknown) subspaces \(\mathbb {D}_{k}\). The quantization of \(\mathbb {D}\) operates as a mechanism to project x to the closest pattern w_k. Node i incrementally minimizes the EQE:

$$\begin{array}{@{}rcl@{}} \mathcal{J}(\{\mathbf{w}_{k}\}) & = & \mathbb{E}\left[ \underset{k}{\min} \lVert \mathbf{x} - \mathbf{w}_{k} \lVert^{2} \right] \end{array} $$
(11)

We seek the best possible approximation of vectors x out of a set \(\{\mathbf {w}_{k}\}_{k=1}^{K}\) of (finite) K patterns such that x is projected to its closest pattern \(\mathbf {w}^{*}\), the representative of the subspace \(\mathbb {D}^{*} = \{ \mathbf {x} \in \mathbb {D}: \lVert \mathbf {x}-\mathbf {w}^{*} \rVert = \min _{k} \lVert \mathbf {x}-\mathbf {w}_{k} \rVert \} \subset \mathbb{D}\). We incrementally minimize \(\mathcal {J}\) in (11) in the presence of a random x and update only the closest pattern w^*. However, the number of subspaces (and, thus, patterns) K > 0 is completely unknown and not necessarily constant. The key problem is to decide on an appropriate K value to minimize (11).

In the literature, a variety of AVQ methods exist that are not suitable for incremental implementation, because K must be supplied in advance. We propose a conditionally growing AVQ algorithm (i) in which the patterns are sequentially updated and (ii) which grows adaptively, i.e., increases K if a criterion holds true. Given that K is not available a priori, our algorithm minimizes \(\mathcal {J}\) with respect to a threshold ρ. Initially, the vector space has a unique (random) pattern, i.e., K = 1. Upon the presence of x, our algorithm (i) finds the closest pattern w^* and (ii) updates w^* only if the condition ∥x − w^*∥ ≤ ρ holds true. Otherwise, x is currently considered as a new pattern, thus increasing K by one. This conditional quantization leaves the random vectors to self-determine the resolution of quantization. Evidently, a high ρ would result in a coarse space quantization, while a low ρ yields a fine-grained quantization. The parameter ρ is associated with the stability-plasticity dilemma, also known as vigilance in Adaptive Resonance Theory [29]. In our case, ρ represents a threshold of similarity between vectors and patterns, thus guiding us in determining whether a new pattern should be formed. To give a physical meaning to ρ, we express it through a set of percentages a_i ∈ (0,1) of the value ranges of each x_i. Then, ρ = ∥[a_1,…,a_d]∥, and if we let a_i = a, ∀i, then ρ = a d^{1/2}. A high a over a high-dimensional space results in a low number of patterns and vice versa. The outcome is a set of K patterns \(\mathcal {W} = \{\mathbf {w}_{k}\}_{k=1}^{K}\).

The incremental minimization in (11), given a series of \(\mathbf {x}(t), t \in \mathbb {T}\), is achieved by SGD. Our algorithm processes successive x(t) until a termination criterion Γ(t) ≤ γ is satisfied, where Γ(t) refers to the distance between successive estimates of the patterns at steps t−1 and t. The algorithm stops at the first t where:

$$\begin{array}{@{}rcl@{}} \Gamma(t) \leq \gamma: \Gamma(t) = \sum\limits_{k=1}^{K} \lVert \mathbf{w}_{k}(t) - \mathbf{w}_{k}(t-1) \rVert. \end{array} $$
(12)

The update rules of patterns w k are provided in Theorem 2.

Theorem 2

Given context x and its closest pattern \(\mathbf {w}^{*} \in \mathcal {W}\) , the patterns \(\{\mathbf {w}_{k}\}_{k=1}^{K}\) converge to the optimal estimates if updated as:

$$\begin{array}{@{}rcl@{}} {\Delta} \mathbf{w}^{*} = \left\{ \begin{array}{ll} \eta (\mathbf{x}-\mathbf{w}^{*}) &, \text{ if } \lVert \mathbf{x}-\mathbf{w}^{*} \rVert \leq \rho\\ \mathbf{0} &, \text{ otherwise. } \end{array} \right. \end{array} $$

Each \(\mathbf {w}_{k} \in \mathcal {W} \setminus \{\mathbf {w}^{*}\}\) is updated as: Δw_k = 0; the rate η ∈ (0,1) is defined in Section 2.3.

Proof

For proof, see Appendix A.1.. □

A fundamental characteristic of our quantization algorithm is that each pattern \(\mathbf {w}_{k} \in \mathcal {W}\) corresponds to the centroid \(\mathbb {E}[\mathbf {x}|\mathbf {x} \in \mathbb {D}_{k}]\) of those vectors x assigned to w k . This is utilized for estimating the probability of an outlier as discussed in Section 5.

Theorem 3

(Centroid Convergence) If \(\bar {\mathbf {x}}\) is the centroid of the vector subspace \(\mathbb {D}_{k}\) and pattern w k is the closest pattern of those \(\mathbf {x} \in \mathbb {D}_{k}\) , \(P(\mathbf {w}_{k} = \bar {\mathbf {x}}) \to 1\) at equilibrium.

Proof

For proof, see Appendix A.2.. □

Our Algorithm 1 processes a (random) context vector one at a time. In the initialization phase, there is only one pattern w_1, i.e., K = 1, which is the first vector. For the t-th context x(t) and onwards, t ≥ 2, the algorithm: (i) updates the closest pattern to x(t) (out of K patterns) given that the distance is less than ρ; otherwise, (ii) a new pattern is added (increasing K by one). The algorithm stops updating the patterns at the first step t where Γ(t) ≤ γ. At that time and onwards, the algorithm returns the set of patterns \(\mathcal {W}\) and no further modification is performed.

Algorithm 1 (conditionally growing AVQ; pseudocode figure)
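Since Algorithm 1 is given as a pseudocode figure, the following Python sketch outlines one possible realization of the conditionally growing AVQ; the hyperbolic learning schedule of Section 2.3.2, the handling of the termination check on growth steps, and the toy data are assumptions of this sketch.

```python
import numpy as np

def grow_avq(stream, rho, gamma=1e-4):
    """Conditionally growing AVQ (sketch of Algorithm 1).
    stream: context vectors x(t); rho: vigilance threshold;
    gamma: termination threshold on Gamma(t), Eq. (12)."""
    it = iter(stream)
    patterns = [np.array(next(it), dtype=float)]       # initialization: K = 1, w_1 = first vector
    for t, x in enumerate(it, start=2):
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - w) for w in patterns]
        k_star = int(np.argmin(dists))                 # closest pattern w*, Eq. (2)
        if dists[k_star] <= rho:
            eta = 1.0 / (t + 1)                        # hyperbolic step-size schedule
            step = eta * (x - patterns[k_star])        # Theorem 2 update of w*
            patterns[k_star] += step
            if np.linalg.norm(step) <= gamma:          # Gamma(t) <= gamma: only w* moved at step t
                break
        else:
            patterns.append(x.copy())                  # grow: new pattern, K <- K + 1
    return patterns

# toy usage: two clusters of 2-d vectors; a = 0.2 per dimension gives rho = 0.2 * sqrt(2)
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.2, 0.03, (200, 2)), rng.normal(0.8, 0.03, (200, 2))])
rng.shuffle(data)
W = grow_avq(data, rho=0.2 * np.sqrt(2))
print(len(W), [np.round(w, 2) for w in W])
```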

4.2 Outliers inference

We study how this CP detects a change in the pattern subspaces \(\mathbb {D}_{k}, \forall k\), based only on the patterns {w_k} from Section 4.1. Consider an incoming x to node i. The CP rationale lies in two components: first, decide whether x is an outlier with respect to the current quantization of \(\mathbb {D}\); second, track over time the number of such outliers and decide that subspaces have changed when this number becomes high.

Consider the probability assignment p(w_k|x) of x to a pattern. Since we do not have any prior knowledge about p(w_k|x), we apply the principle of maximum entropy: among all possible probability distributions, we choose the one that maximizes the entropy [34] given an optimal quantization of \(\mathbb {D}\). Specifically, p(x|w_k) conforms to the Gibbs distribution:

$$\begin{array}{@{}rcl@{}} p(\mathbf{x}|\mathbf{w}_{k}) \propto \exp(-\beta \lVert \mathbf{x} - \mathbf{w}_{k}\rVert^{2}), \end{array} $$
(13)

where β ≥ 0 will be explained later. Assuming that each w k has the same prior \(p(\mathbf {w}_{k}) = \frac {1}{K}\), through the Bayes’ rule \(p(\mathbf {w}_{k}|\mathbf {x}) = \frac {p(\mathbf {x}|\mathbf {w}_{k})p(\mathbf {w}_{k})}{p(\mathbf {x})}\) we obtain that:

$$\begin{array}{@{}rcl@{}} p(\mathbf{w}_{k} | \mathbf{x}) = \frac{\exp(-\beta \lVert \mathbf{x} - \mathbf{w}_{k}\rVert^{2})}{{\sum}_{i=1}^{K}\exp(-\beta \lVert \mathbf{x} - \mathbf{w}_{i}\rVert^{2})}. \end{array} $$
(14)

Note, p(w_k|x) explicitly depends on the distance of the context from the patterns. By varying the parameter β, the probability assignment p(w_k|x) can range from completely fuzzy (β = 0, each vector belongs equally to all patterns) to crisp (β → ∞, each vector belongs to only one pattern or, more precisely, is uniformly distributed over the set of equidistant closest patterns). As β → ∞, this probability becomes a delta function around the pattern closest to x. The probability p(w^*|x) quantifies the belief that x is an outlier, with w^* being its closest pattern in the quantized space.
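The following short Python sketch computes the assignment probabilities of (14); the value of β and the toy patterns are illustrative.

```python
import numpy as np

def assignment_probabilities(x, patterns, beta=5.0):
    """Maximum-entropy (Gibbs) assignment probabilities p(w_k | x), Eq. (14).
    beta = 0 yields a uniform (fully fuzzy) assignment; a large beta approaches
    a crisp assignment to the closest pattern."""
    x = np.asarray(x, dtype=float)
    sq_dists = np.array([np.sum((x - w) ** 2) for w in patterns])
    logits = -beta * sq_dists
    logits -= logits.max()            # numerical stabilization before exponentiation
    p = np.exp(logits)
    return p / p.sum()

# toy usage with two 2-d patterns
W = [np.array([0.2, 0.2]), np.array([0.8, 0.8])]
p = assignment_probabilities([0.3, 0.25], W, beta=5.0)
print(p, "closest pattern index:", int(np.argmax(p)))
```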

5 Context novelty & adaptation

5.1 Context space change detection

The probability assignment p(w^*|x) is reconsidered if x is far from w^*. The distance ∥x − w^*∥ quantifies the likelihood that x is expected to be drawn from p(x|w^*) given that x is assigned to w^*. To decide whether x can be properly represented by w^*, we associate w^* with a dynamic vigilance ρ^* > 0, which depends on the distance of the assigned x to w^*. This vigilance is the normalized ratio of ∥x − w^*∥^2 over the average squared distance of all context vectors x_ℓ, ℓ = 1,…,L, that were assigned to w^*:

$$\begin{array}{@{}rcl@{}} \rho^{*} = \frac{ \lVert \mathbf{x}-\mathbf{w}^{*} \rVert^{2}}{\frac{1}{L}{\sum}_{\ell=1}^{L}\lVert \mathbf{x}_{\ell}-\mathbf{w}^{*}\rVert^{2}}. \end{array} $$
(15)

Based on this ratio, if ρ^* is less than a threshold ρ^⊤ > 0, then x is properly represented by its closest pattern. Otherwise, x is deemed to be an outlier. A ρ^⊤ value normally ranges between 2.5 and 5 [33]. Hence, for x, which is assigned to w^*, we define the outlier indicator of x with respect to w^* as the random variable:

$$\begin{array}{@{}rcl@{}} I(\mathbf{x}) = \left\{ \begin{array}{ll} 1 &, \text{ if } \lVert \mathbf{x} - \mathbf{w}^{*} \rVert^{2} > \rho^{\top} \frac{1}{L}{\sum}_{\ell=1}^{L}\lVert \mathbf{x}_{\ell}-\mathbf{w}^{*}\rVert^{2}\\ 0 &, \text{ otherwise. } \end{array} \right. \end{array} $$
(16)
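A small Python sketch of the vigilance ratio (15) and the outlier indicator (16) follows; keeping the full list of vectors assigned to w^* (rather than a running estimate of the average squared distance) and the value ρ^⊤ = 3 are assumptions for the example.

```python
import numpy as np

def outlier_indicator(x, w_star, assigned, rho_top=3.0):
    """Outlier indicator I(x) of Eq. (16) w.r.t. the closest pattern w*.
    assigned: vectors x_1..x_L previously assigned to w*;
    rho_top: threshold, here 3.0 (within the 2.5-5 range quoted in the text)."""
    x, w_star = np.asarray(x, float), np.asarray(w_star, float)
    mean_sq = np.mean([np.sum((np.asarray(xl) - w_star) ** 2) for xl in assigned])
    rho_star = np.sum((x - w_star) ** 2) / mean_sq      # vigilance ratio, Eq. (15)
    return int(rho_star > rho_top), rho_star

# toy usage: 50 vectors previously assigned to w*
rng = np.random.default_rng(2)
assigned = [np.array([0.2, 0.2]) + 0.02 * rng.standard_normal(2) for _ in range(50)]
w_star = np.mean(assigned, axis=0)
print(outlier_indicator([0.5, 0.5], w_star, assigned))    # far from w*: likely I = 1
print(outlier_indicator([0.21, 0.19], w_star, assigned))  # close to w*: likely I = 0
```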

Let us now move to keeping track of the outlier indicators I(x(1)),…,I(x(t)) over time, focusing on their closest pattern w^*: w^* = arg min_k ∥x(t) − w_k∥, ∀t. To simplify the notation, we set I_t = I(x(t)). A cumulative sum of the I_t’s with a high portion of 1’s causes node i to consider that p(x|w^*) might have changed. Upon observation of x, node i observes for pattern w^* the random variables {I_1,…,I_t}. Node i detects a change in p(x|w^*) based on the cumulative sum S_t of I_1, I_2, …, I_t up to the t-th assigned vector:

$$\begin{array}{@{}rcl@{}} S_{t} = \sum\limits_{\tau = 1}^{t}I_{\tau}. \end{array} $$
(17)

I_t is a discrete random process with independent and identically distributed (i.i.d.) samples. Each I_t follows an unknown probability distribution depending on the distance of x from w^*. I_t has finite mean \(\mathbb {E}[I_{t}] < \infty \), t = 1,…, which depends on ∥x(t) − w^*∥^2, and the expectation of an outlier indicator is:

$$\begin{array}{@{}rcl@{}} \mathbb{E}[I] = 0 \cdot P(\{I=0\}) + 1 \cdot P(\{I=1\}) = P(\{I = 1\}). \end{array} $$
(18)

Our knowledge of that distribution, which is not trivial to estimate, will provide insight to judge whether p(x|w^*) has changed in the subspace determined by w^*. We should ‘follow’ the trend of that change by either updating w^*, so as to continuously represent its subspace, or creating a new pattern in the novel vector subspace.

By observing I_t and the sum S_t up to t, the challenge here is to decide how large the sum should get before deciding that p(x|w^*) has changed. Should we decide at an early stage that p(x|w^*) has changed, this might correspond to a ‘premature’ decision; a relatively small number of ‘outliers’ might not correspond to a change in p(x|w^*). Should we ‘delay’ our decision, then we might get erroneous event inference (a high false alarm rate), since we avoid adapting w^* to ‘follow’ the trend of the vector subspace change.

The rationale for this CP is as follows: to decide when p(x|w^*) has changed, we could wait for an unknown finite horizon t^* in order to be more confident about a change. During the t^* horizon, we only observe the cumulative sum S_τ, τ = 1,…,t^*. We propose a stochastic optimization algorithm that postpones a vector space change decision through additional observations of I_τ. At time t^*, a decision on a possible p(x|w^*) change has to be taken. The problem is to find the optimal stopping time t^* in order to ensure that p(x|w^*) has changed for those x(t) assigned to w^* at t > t^*.

We define our confidence Y_t in a decision on a change of p(x|w^*) based on the cumulative sum S_t in (17). Y_t is directly connected to the performance improvements that a timely decision yields. Y_t is a random variable generated by the sum of I_τ up to t, \(S_{t} = {\sum }_{\tau =1}^{t}I_{\tau }\), discounted by a risk factor α ∈ (0,1):

$$\begin{array}{@{}rcl@{}} Y_{t} & = & \alpha^{t}S_{t}. \end{array} $$
(19)

Our algorithm has to find t^* in order to (i) either start adapting w^* after considering that p(x|w^*) has changed, or (ii) create a new pattern, with respect to the vigilance ρ (see Section 4), for those vectors arriving at t > t^*. If we never start this adaptation, our confidence that we follow the new trend (patterns) is zero, Y_∞ = 0. This indicates that we do not ‘follow’ the trend of a possible change over the subspace and/or do not further augment our knowledge of possibly new vector subspaces. Furthermore, we will never start adapting w^* at some t with S_t = 0, since there is no piece of evidence of any outlier up to t. As I_t assumes unity values at certain times, S_t increases at a high rate, thus indicating a possible change due to a significant number of outliers. Our problem is to decide how large S_t should get before we start adapting w^* or augment our current knowledge of the underlying vector space distribution by adding extra patterns. We have to find a time t^* > 0 that maximizes our confidence, i.e., when the supremum

$$\begin{array}{@{}rcl@{}} \sup\limits_{t} \mathbb{E}[Y_{t}] \end{array} $$
(20)

is attained. The semantics of the risk factor α are as follows. A high α indicates a conservative adaptation model; it requires additional observations before concluding on a change decision. This, however, comes at the expense of possible outlier prediction inaccuracies during this period, since w^* might not be representative of its corresponding assigned vectors. A low α denotes a rather optimistic model, which reaches premature decisions on a p(x|w^*) change. This means that once we have concluded on a change, we have to adapt w^* by actually exploiting every incoming vector assigned to w^* and/or considering x as a new pattern. This continues until the updated w^* converges.

We propose a solution for the problem in (20). Firstly, we prove the existence of t^* in our case, then report on the corresponding optimal stopping time, and finally elaborate on the optimality of the proposed solution. A decision taken at time t is:

  • either to assert that a change in p(x|w^*) holds true and, then, start the adaptation of w^* or insert x as a new pattern,

  • or continue the observation process at time t + 1 and, then, proceed with a decision.

Based only on \(S_{t} = {\sum }_{\tau =1}^{t} I_{\tau }\) we determine a stopping time that maximizes (20).

Theorem 4

An optimal stopping time t^* for the problem in (20) exists.

Proof

For proof, see Appendix A.3.. □

In our case, the I_t are non-negative; thus, the problem is monotone [28]. This means that t^*, since it exists by Theorem 4, is obtained by the 1-stage look-ahead optimal rule (1–sla) [28]. That is, we should start adapting w^* at the first stopping time t at which \(Y_{t} \geq \mathbb {E}[Y_{t+1}|\mathbb {F}_{t}]\), i.e.,

$$\begin{array}{@{}rcl@{}} t^{*} & = & \inf \{ t \geq 1 | Y_{t} \geq \mathbb{E}[Y_{t+1} | \mathbb{F}_{t}]\}. \end{array} $$
(21)

For our monotone stopping problem with observations I_1, I_2, … and rewards Y_1, Y_2, …, Y_∞, the 1–sla is optimal since sup_t Y_t has finite expectation (\(\mathbb {E}[I]\frac {\alpha }{1-\alpha }\)) and \(\lim _{t \to \infty } \sup _{t} Y_{t} = Y_{\infty } = 0\) (see Theorem 4).

Theorem 5

The optimal stopping time t^* for the problem in (20) is \(t^{*} = \inf \{t \geq 1| S_{t} \geq \frac {\alpha }{1-\alpha }\mathbb {E}[I]\}\).

Proof

For proof, see Appendix A.4.. □

To derive t^* from Theorem 5, we need to estimate the expectation \(\mathbb {E}[I] = P(\{I=1\})\). Empirically, the probability P({I = 1}) can be calculated from those assigned vectors whose ratio of the squared distance from their closest pattern over the average squared distance, as in (15), is at least ρ^⊤. Moreover, we provide an estimate for P({I = 1}) based on our quantization algorithm in Section 4. The probability of {I_t = 1} refers to the conditional probability of x(t) being an outlier given that it is assigned to w^* with p(w^*|x(t)). The P({I_t = 1}) is, therefore, associated with the probability that the distance ∥x(t) − w^*∥^2 > 𝜃, with scalar:

$$\begin{array}{@{}rcl@{}} \theta & = & \rho^{\top} \frac{1}{L}\sum\limits_{\ell=1}^{L}\lVert \mathbf{x}_{\ell} - \mathbf{w}^{*} \rVert^{2}. \end{array} $$
(22)

If we define the vector z(t) = x(t) − w^*, then we seek the probability density of its squared Euclidean norm ∥z(t)∥^2. Based on the centroid convergence in Theorem 3, w^* refers to the centroid \(\mathbf {w}^{*} = \mathbb {E}[\mathbf {x}|\mathbf {x} \in \mathbb {D}^{*}]\). Hence, the squared norm of \(\mathbf {z} = [z_{1}, \ldots , z_{d}] = [x_{1}-w^{*}_{1}, \ldots , x_{d}-w^{*}_{d}]\), under the assumption of normally distributed random components, follows a non-central chi-squared distribution χ^2(d,ζ) with d degrees of freedom and non-centrality parameter \(\zeta = {\sum }_{i=1}^{d} (w^{*}_{i})^{2}\). We approximate P({I = 1}) = P(∥z∥^2 > 𝜃) = 1 − P(∥z∥^2 ≤ 𝜃) by the cumulative distribution function \(CDF_{\chi ^{2}(d,\zeta )}(\theta ) = P(\lVert \mathbf {z} \rVert ^{2} \leq \theta )\) of χ^2(d,ζ). Let \(Q_{\kappa _{1}}(\kappa _{2},\kappa _{3})\) be the monotonic, log-concave Marcum Q-function with parameters κ_1, κ_2, and κ_3. Then, we obtain that \(CDF_{\chi ^{2}(d,\zeta )}(\theta ) = P(\lVert \mathbf {z} \rVert ^{2} \leq \theta ) = 1 - Q_{\kappa _{1}}(\kappa _{2},\kappa _{3})\):

$$\begin{array}{@{}rcl@{}} P(\{I = 1\}) = 1 - CDF_{\chi^{2}(d,\zeta)}(\theta) = Q_{\frac{d}{2}}\left( \sqrt{\zeta},\sqrt{\theta} \right) \end{array} $$
(23)

by substituting in the Q-function: \(\kappa _{1} = \frac {d}{2}\), \(\kappa _{2} = \sqrt {\zeta }\), and \(\kappa _{3} = \sqrt {\theta }\). For an analytical expression of (23), refer to Appendix A.7. Hence, the optimal stopping time is obtained once we substitute \(\mathbb {E}[I]\) in Theorem 5 by the P({I = 1}) estimated in (23).
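As an illustration, the Python sketch below estimates P({I = 1}) through the survival function of the non-central chi-squared distribution (equivalently, the Marcum Q-function of (23)) and applies the stopping rule of Theorem 5; the unit-variance Gaussian assumption on the components of z, the values of θ and α, and the toy indicator stream are assumptions of the example.

```python
import numpy as np
from scipy.stats import ncx2

def outlier_probability(theta, w_star):
    """P({I = 1}) = P(||z||^2 > theta) via the survival function of the
    non-central chi-squared distribution chi^2(d, zeta), Eq. (23)."""
    d = len(w_star)
    zeta = float(np.sum(np.asarray(w_star) ** 2))       # non-centrality parameter
    return float(ncx2.sf(theta, df=d, nc=zeta))

def stopping_time(indicators, alpha, p_outlier):
    """1-sla rule of Theorem 5: stop at the first t with S_t >= alpha/(1-alpha) * E[I]."""
    threshold = alpha / (1.0 - alpha) * p_outlier
    s = 0
    for t, i_t in enumerate(indicators, start=1):
        s += i_t
        if s >= threshold:
            return t, s
    return None, s      # no change decision within the observed horizon

# toy usage: d = 3, theta as in (22), conservative risk factor alpha = 0.95
p1 = outlier_probability(theta=10.0, w_star=[0.5, 0.5, 0.5])
t_star, s = stopping_time([0, 0, 1, 0, 1, 1, 1, 1, 1], alpha=0.95, p_outlier=p1)
print(f"P(I=1) = {p1:.3f}, stop at t = {t_star} with S_t = {s}")
```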

5.2 Context adaptation

Once node i has detected a change in at least one vector subspace, it initiates a process that adapts the patterns by modifying w^* as follows. A change in a vector subspace indicates that new patterns can be formed or existing patterns should be updated. Node i, for every incoming x appearing at t > t^*, either updates w^* to follow the trend or creates a new pattern w_{K+1} = x as described in Algorithm 1. Algorithm 2 shows the change detection and adaptation process.

Algorithm 2 (change detection & context adaptation; pseudocode figure)

6 Expert knowledge context fusion

This CP evaluates the belief of an event based on experts’ knowledge (Fig. 1 (right)). Consider the context x at node i. Each variable x_j, j = 1,…,d, in x affects the event reasoning in a different way, as interpreted by human expert knowledge. For instance, consider the identification of a fire event. A fire event can be inferred based on temperature x_1, humidity x_2, and (ionization) smoke x_3 measurements, i.e., x = [x_1, x_2, x_3]. A human expert can express a fire event through an increment in temperature and smoke, with humidity remaining at relatively low levels. Let the row vector x_P be constructed from the variables of x that proportionally affect the presence of an event, i.e., the event is expressed by an increase in the values of those variables. Similarly, let the row vector x_N be constructed from the variables of x that inversely affect the presence of the event, i.e., the event is expressed by a decrease in the values of those variables. In this case, we obtain x = [x_P; x_N], where in our example we have x_P = [x_1, x_3] and x_N = [x_2]. This classification of the x_j variables into the x_P and x_N vectors is provided directly by the human interpretation of an event. Based on this representation, we introduce a vector fusion function that produces a unified view on the event identification. We introduce the normalized ‘state’ v_j ∈ [0,1] of each x_j from x_P and x_N:

$$\begin{array}{@{}rcl@{}} v_{j} = \left\{\begin{array}{ll} \frac{x_{j} - x^{\min}_{j}}{x^{\max}_{j} - x^{\min}_{j}}, & x_{j} \in \mathbf{x}_{P} \\ \frac{x^{\max}_{j} - x_{j}}{x^{\max}_{j} - x^{\min}_{j}}, & x_{j} \in \mathbf{x}_{N} \end{array}\right. \end{array} $$
(24)

The state v_j indicates whether x_j ∈ x_P (or ∈ x_N) has reached its maximum (or minimum) value and, thus, partially expresses the existence of an event. Define v = [v_1,…,v_d] ∈ [0,1]^d, which contains the states of all variables from x. Motivated by the sigmoid function from neural computation for activating the impact of each neuron input (the state variables in our case), we adopt the sigmoid product fusion function \(f: [0,1]^{d} \to \mathbb {R}^{+}\), which returns the overall state of vector x, i.e., the existence of an event if f(v) → 1, or not if f(v) → 0, with:

$$\begin{array}{@{}rcl@{}} f\left( \mathbf{v} \right) = \prod\limits_{j=1}^{d} \frac{1}{1+\exp\left( - \lambda_{2} v_{j} + \lambda_{1} \right)}. \end{array} $$
(25)

Function f fuses the current context vector into a scalar indicating the presence of an event through the normalized states v_j. The \(\lambda _{1}, \lambda _{2} \in \mathbb {R}\) parameters are application specific. Through the adopted sigmoid function, we can either eliminate or pay more attention to the value of a given variable x_j in the fusion result; e.g., v_j has a high impact on the fusion result only when its value exceeds a threshold determined by λ_1, with λ_2 tuning the steepness of the sigmoid function.
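A minimal Python sketch of the normalization in (24) and the sigmoid product fusion in (25) follows; the value ranges, λ_1, λ_2, and the toy fire-event inputs are illustrative assumptions.

```python
import numpy as np

def fuse_context(x, x_min, x_max, proportional_mask, lam1=5.0, lam2=10.0):
    """Expert-knowledge fusion (24)-(25): map each variable to a state v_j in [0,1]
    (direction given by proportional_mask) and combine the states through the
    sigmoid product f(v). lam1, lam2 are illustrative, application-specific values."""
    x, x_min, x_max = (np.asarray(a, dtype=float) for a in (x, x_min, x_max))
    v = (x - x_min) / (x_max - x_min)                 # Eq. (24) for x_j in x_P
    v = np.where(proportional_mask, v, 1.0 - v)       # Eq. (24) for x_j in x_N
    return float(np.prod(1.0 / (1.0 + np.exp(-lam2 * v + lam1))))   # Eq. (25)

# toy fire example: x = [temperature, humidity, smoke]; temperature and smoke are
# in x_P (proportional), humidity is in x_N (inverse)
mask = np.array([True, False, True])
print(fuse_context([70.0, 15.0, 0.80], [0, 0, 0], [100, 100, 1], mask))   # event-like context
print(fuse_context([25.0, 80.0, 0.05], [0, 0, 0], [100, 100, 1], mask))   # normal context
```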

7 Event inference under uncertainty

7.1 Fuzzy contextual knowledge base

Based on the CPs in Sections 3, 4, and 6, node i locally achieves event inference at time t by considering (i) the current context fusion f(v(t)) in (25), (ii) the current assignment probability p(w^*|x(t)) in (14) w.r.t. the closest pattern w^*, and (iii) the current deviation e(t) in (10) for x(t). We attempt to fuse these CPs through a finite set of Fuzzy Inference Rules (FIRs). Each FIR reflects the degree of belief for a specific event inferred locally on node i. For instance, a FIR is: ‘when the locally sensed temperature is high, then the degree of belief for a fire event might also be high’. We propose a T2FLS, which defines the fuzzy knowledge base of FIRs for node i. In this work, we do not rely on a Type-1 FLS (T1FLS), as such an inference model has specific drawbacks when applied in dynamic environments and, more interestingly, when the construction of the FIRs involves uncertainty due to partial knowledge in representing the output of the inference result [21]. In our case, this corresponds to the uncertainty of defining the occurrence of an event based only on the locally available knowledge: current context, predicted context, and possible outliers. The limitation of a T1FLS lies in handling uncertainty in representing knowledge through FIRs [9, 21]. In a T1FLS, the experts define exactly the membership degree of the involved input and output variables in a FIR, e.g., the characterization of a value as ‘high’ or ‘low’. However, when even the definition of a membership function involves uncertainty, the experts cannot be certain about the membership grade. In such cases, uncertainty is observed not only in the environment of the examined problem, e.g., when we classify a value as ‘high’ or ‘low’ or the degree of belief as ‘high’, but also in the description of the term itself, e.g., ‘high’, within a FIR.

In a T2FLS, the membership functions that characterize the terms of the three CPs are themselves ‘fuzzy’, which leads to the definition of FIRs incorporating such uncertainty [21]. This approach is appropriate in our case as FIRs cannot explicitly reflect knowledge on whether incoming measurements correspond to the occurrence of an event. Our FIRs take into consideration the uncertainty in the definition of an event by the human expert enhanced with the CPs: deviation of the predicted context and outliers inference. Such FIRs refer to a non-linear mapping \(\mathcal {F}(f(\mathbf {v}), p(\mathbf {w}^{*}|\mathbf {x}), e)\) between the three CPs (inputs) and one output, i.e., the degree of belief μ_i ∈ [0,1]. The antecedent part of a FIR is a linguistic conjunction of the CPs and the consequent part is the degree of belief that an event actually occurs. The structure of a FIR is as follows:

$$\textbf{IF } f(\mathbf{v})\text{ is }A_{1k} \textbf{ AND } e\text{ is }A_{2k} \textbf{ AND } p(\mathbf{w}^{*}|\mathbf{x})\text{ is }A_{3k} $$
$$\textbf{THEN } \mu_{i} \text{ is } B_{k}, $$

where A_{1k}, A_{2k}, A_{3k} and B_{k} are membership functions for the k-th FIR mapping the values of f(v), e, p(w^*|x) and μ_i into the unit interval, respectively, by characterizing these values through the linguistic terms: low, medium, and high. If a linguistic term, e.g., ‘high’, were represented through one fuzzy set in a T1FLS, then we would use one membership function g(x) ∈ [0,1] mapping the real value (input) x ∈ [0,1] to a discrete set of pairs (x_j, g(x_j)), e.g., {(0,0);(0.25,0.1);(0.5,0.75);(1,1)}, where (0.25,0.1) means that the value x = 0.25 has a membership degree g(x) = 0.1.

In a T2FLS, each term A_{1k}, A_{2k}, A_{3k} and B_{k} in the FIRs is represented by two membership functions corresponding to lower and upper bounds [20]. For instance, the term ‘high’, unlike in a T1FLS, where its membership for each x is a number g(x), is represented by two membership functions. That is, each value x is assigned to an interval [g_L(x), g_U(x)] corresponding to a lower and an upper membership function g_L and g_U, respectively; e.g., the membership of x = 0.25 is the interval [0.05, 0.2]. The interval areas [g_L(x_j), g_U(x_j)] for each input x_j reflect the uncertainty in defining the term, e.g., ‘high’, which is useful when it is difficult to determine the exact membership function for each term or, in our case, in modeling the diverse opinions from different CPs in defining the occurrence of an event. If g_L(x) = g_U(x), ∀x, we obtain a FIR of a T1FLS. Following the above FIR structure, each A_{jk}, j = 1,2,3, and B_{k}, for each k-th FIR, corresponds to a set of intervals. The interested reader could also refer to [20] for fuzzy reasoning in a T2FLS.

7.2 Determination of local degree of belief

A μ_i value close to unity denotes the case where the belief is at high levels, i.e., there is a high belief that a hazardous phenomenon, like fire or flood, occurs in the area of interest based on the agreement of the three CPs (all of them assume values close to unity). The opposite holds when μ_i tends to zero. We consider three fuzzy linguistic terms for the FIRs: Low, Medium, and High. Low represents that a variable (input or output) takes values close to 0, while High depicts the case where a variable takes values close to 1. Medium depicts the case where the variable takes values around 0.5. For instance, a Low fuzzy value for e indicates that the current and predicted context are close enough and, thus, the current context follows the trend of its recent historical context. A High fuzzy value for p(w^*|x) denotes that the current context does not significantly deviate from its regular statistical pattern. A High fuzzy value for f(v) indicates a positive inference on the presence of an event as represented by the expert’s knowledge. For each fuzzy term, human experts define the upper and the lower membership functions. Here, we consider triangular membership functions g_L and g_U, as they are widely adopted in the literature. Our T2FLS is generic; thus, any type of membership function can be adopted to better suit the application domain.

Table 1 shows the proposed fuzzy knowledge base for event inference. Upon receiving the current context x(t), node i produces its corresponding (i) fused context f(v), (ii) deviation e(t), and (iii) assignment probability p(w^*|x). Then, the T2FLS is activated as follows: (Step 1) calculation of the interval (based on the membership functions) for each input; (Step 2) calculation of the active interval of each FIR; (Step 3) ‘type reduction’ to combine the active interval of each FIR and the corresponding consequent. Step 3 produces the interval of the consequent, and accordingly, the defuzzification phase determines a scalar value for the local degree of belief μ_i at time t. The most common method for ‘type reduction’ is the center-of-sets type reducer [21], which generates a Type-1 Fuzzy Set as output; this is then converted into a scalar value for μ_i after defuzzification. When μ_i is over a pre-defined belief threshold 𝜖 ∈ [0,1], the T2FLS engine locally infers an event occurrence with degree of belief μ_i.

Table 1 T2FLS fuzzy knowledge base
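Since the actual knowledge base of Table 1 is defined by the experts, the following Python sketch only illustrates the mechanics of interval Type-2 inference: the triangular lower/upper membership functions, the subset of FIRs, and the consequent centroids below are made up for illustration, and the type reduction is a simplified non-iterative stand-in for center-of-sets (a full system would run the Karnik-Mendel procedure).

```python
def tri(x, a, b, c):
    """Triangular membership with peak b and support [a, c]."""
    if x < a or x > c:
        return 0.0
    if x <= b:
        return 1.0 if b == a else (x - a) / (b - a)
    return 1.0 if c == b else (c - x) / (c - b)

# Illustrative interval type-2 terms: each linguistic term is a pair of
# (lower, upper) membership functions, the upper one having the wider support.
TERMS = {
    "low":    (lambda x: 0.9 * tri(x, 0.0, 0.0, 0.4), lambda x: tri(x, -0.1, 0.0, 0.5)),
    "medium": (lambda x: 0.9 * tri(x, 0.2, 0.5, 0.8), lambda x: tri(x, 0.1, 0.5, 0.9)),
    "high":   (lambda x: 0.9 * tri(x, 0.6, 1.0, 1.0), lambda x: tri(x, 0.5, 1.0, 1.1)),
}
CENTROIDS = {"low": 0.15, "medium": 0.5, "high": 0.85}   # illustrative consequent centroids

# Illustrative FIRs (antecedents over f(v), e, p(w*|x); consequent: mu_i);
# the actual rule base is the expert-defined Table 1.
RULES = [
    (("high", "high", "high"), "high"),
    (("high", "low", "medium"), "medium"),
    (("medium", "medium", "medium"), "medium"),
    (("low", "low", "low"), "low"),
]

def t2fls_belief(fv, e, p_out):
    """Degree of belief mu_i from the three CPs (simplified interval T2 inference)."""
    inputs = (fv, e, p_out)
    num = den = 0.0
    for antecedent, consequent in RULES:
        f_low = min(TERMS[t][0](x) for t, x in zip(antecedent, inputs))   # lower firing strength
        f_up = min(TERMS[t][1](x) for t, x in zip(antecedent, inputs))    # upper firing strength
        c = CENTROIDS[consequent]
        num += (f_low + f_up) * c        # simplified (non-iterative) type reduction
        den += (f_low + f_up)
    return num / den if den > 0 else 0.0

print(t2fls_belief(fv=0.85, e=0.80, p_out=0.75))   # all three CPs point to an event
print(t2fls_belief(fv=0.20, e=0.10, p_out=0.15))   # no evidence of an event
```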

8 Belief-centric clustering

In our federated reasoning approach, groups of nodes are formed based on their local degrees of belief μ_i, \(i \in \mathcal {N}\). The clustering process is repeated at clustering eras T_1, T_2, T_3, …, \(T_{n} \in \mathbb {T}\). The T_n is a variable time index in \(\mathbb {T}\), which is triggered by a node i that, in the first instance, locally believes in an event presence (i.e., μ_i ≥ 𝜖), thus asking for the opinions of its local neighbors before reaching a conclusion. In each group, a node is elected as the Cluster Head (CH) and is responsible for exchanging the aggregated degrees of belief (discussed later) with a concentrator from the set \(\mathcal {C}\) after a belief revision/update of the initial opinion on an event presence. Hence, the number of messages circulated in the network is reduced, as it is not necessary for each node to relay messages to a concentrator. The election process appoints a node i as a CH if it experiences the highest μ_i related to an observed phenomenon among its neighbors \(\mathcal {N}_{i}\). The aim of the CH is to notify its members about its appointment as a CH, thus avoiding redundant message dissemination. The CH node, after its appointment, aggregates the degrees of belief of its neighbors, resulting in an enhanced neighborhood contextual knowledge by unanimously inferring a possible event.

The primary objectives of the federated election process are:

  • (i) Appointment of a subset of nodes as CHs responsible for determining and disseminating a unanimous (aggregated) degree of belief to the concentrators.

  • (ii) Dynamically changing the CH appointment to nodes. Evidently, this prolongs the network lifetime by changing CH appointments and, thus, balancing energy consumption for the event inference process and the transmission of messages to the members and concentrators.

  • (iii) Termination of the election process within a constant number of iterations (exchanged messages).

It should be noted that the description of the CH replacement process (i.e., objective (ii)) is beyond the scope of this paper. It is also worth noting that we do not make any assumption about the spatial distribution of IoT nodes in the area. Every node can act as either a CH or a member. This motivates the need for an efficient CH election algorithm.

8.1 Belief-centric cluster-head election

A baseline solution for the election process involves nodes exchanging their μ i with all neighbors. The node with the highest μ i is elected to become the CH of the neighborhood. However, this solution requires a significant number of messages exchanged among nodes. Moreover, since the election process is re-initiated after a time interval T, a high energy budget is required for that type of communication. There are certain election algorithms which could be adopted. In our case, neighboring nodes exchange their μ i values and then ‘elect’ the CH. To this end, we follow the concept of the CH election algorithm in [37] by modifying the election criteria to reflect the knowledge exchange over a neighborhood.

At each node, the election process requires a number of iterations L > 0. In every iteration, nodes send to and receive from neighbors small-sized messages containing their degrees of belief. Before node i starts the election process, it configures a local probability of becoming a CH, ξ i , hereinafter referred to as Election Probability (EP), as a function of μ i , i.e., ξ i = max(ξ min,μ i ), where ξ min is a minimum EP for each node: ξ i is not allowed to fall below ξ min, e.g., 10−3. This restriction is essential for terminating the election process in L = O(1) iterations; see Lemma 1. Node i with a high EP ξ i starts the following process: it sends announcement messages of the form 〈ξ i ,i〉 to its neighbors \(\mathcal {N}_{i}\) to become a CH. A node j with a low EP ξ j delays the transmission of announcement messages and considers itself ‘non-CH’ if it has heard an announcement 〈ξ i ,i〉 with ξ i > ξ j . During iteration ℓ, 1 ≤ ℓ ≤ L, every node i decides to become a CH with EP ξ i . Through the process, node i can either be elected to become a CH according to its EP ξ i or remain at the same status (i.e., non-CH) according to overheard announcement messages within its neighborhood \(\mathcal {N}_{i}\). A node j selects as its CH the node i with the highest μ i ; this is achieved by the comparison of ξ i and ξ j . Every node i then multiplies its EP ξ i by a factor of χ > 1, and moves to the next iteration ℓ + 1 and so on, i.e.,

$$\begin{array}{@{}rcl@{}} \xi_{i}(\ell + 1) & = & \min(\chi \xi_{i}(\ell),1). \end{array} $$
(26)

If node i decides to become a CH since its EP ξ i has reached unity, it sends an announcement message ‘CH i’ to its neighbors \(\mathcal {N}_{i}\). A node \(j \in \mathcal {N}_{i}\), then, considers itself ‘non-CH’ if it has heard a ‘CH i’ message from node i and terminates the election process. Note, this election process is completely distributed. Node i either decides to become a CH, since μ i is the highest among its neighbors, or becomes a member that awaits a message from its unique CH.
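A minimal, synchronous simulation of this election is sketched below in Python. The ring topology, the tie handling, and the round-based (rather than message-passing) execution are simplifying assumptions; only the EP initialization ξ i = max(ξ min,μ i ), the growth factor χ, and the announcement/back-off logic follow the description above.

```python
import math

XI_MIN, CHI = 1e-3, math.e   # minimum EP and growth factor, as in the text

def elect_cluster_heads(mu, neighbors):
    """mu: list of local beliefs mu_i; neighbors: dict node -> set of neighbor nodes."""
    n = len(mu)
    xi = [max(XI_MIN, m) for m in mu]          # initial election probabilities
    status = ["undecided"] * n
    while any(s == "undecided" for s in status):
        # Candidates whose EP reached unity announce in decreasing belief order;
        # a lower-belief candidate that overhears a neighboring CH backs off.
        ready = sorted((i for i in range(n) if status[i] == "undecided" and xi[i] >= 1.0),
                       key=lambda i: -mu[i])
        for i in ready:
            status[i] = "member" if any(status[j] == "CH" for j in neighbors[i]) else "CH"
        # Any other undecided node overhearing a 'CH' announcement becomes a member.
        for i in range(n):
            if status[i] == "undecided" and any(status[j] == "CH" for j in neighbors[i]):
                status[i] = "member"
        # Remaining nodes raise their EP by the factor chi (Eq. 26) and retry.
        for i in range(n):
            if status[i] == "undecided":
                xi[i] = min(CHI * xi[i], 1.0)
    return status

# Toy example: 6 nodes on a ring; nodes 2 and 5 hold the highest local beliefs.
mu = [0.2, 0.4, 0.9, 0.3, 0.1, 0.75]
neighbors = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
print(elect_cluster_heads(mu, neighbors))
# -> ['member', 'member', 'CH', 'member', 'member', 'CH']
```

Since every undecided node multiplies its EP by χ once per round, the loop terminates after a constant number of rounds (about ⌈ln(1/ξ min)⌉ + 1 for χ = e), consistent with the bound of Lemma 1.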

Lemma 1

The belief-centric election process requires O(1)iterations.

Proof

For proof, see Appendix A.5. □

The number of iterations for each node does not depend on the number of neighbors and is bounded by a constant. Indicatively, when ξ min = 10−3 and χ = e, a node needs at most eight iterations to elect or be elected as a CH.

Lemma 2

The message exchange complexity in the belief-centric election process is O(1)per node and \(O(|\mathcal {N}|)\) for the network.

Proof

For proof, see Appendix A.6. □

8.2 Aggregated degree of belief & federated event reasoning

Once node i is appointed as a CH, it locally determines the average degree of belief of its members \(j \in \mathcal {N}_{i}\):

$$\begin{array}{@{}rcl@{}} \bar{\mu}_{i} & = & \frac{1}{|\mathcal{N}_{i}|}\sum\limits_{j \in \mathcal{N}_{i}}\mu_{j}. \end{array} $$
(27)

The \(\bar {\mu }_{i}\) reflects a degree of consensus of the neighborhood on event inference. CH i, based on the pair \((\mu _{i},\bar {\mu }_{i})\), determines an aggregated degree of belief \(\tilde {\mu }_{i}\). We adopt a reward-idle methodology to reason on the aggregated degree of belief \(\tilde {\mu }_{i}\), which will be delivered by CH i to its concentrator. If CH i and its neighbors unanimously agree on the presence of an event, i.e., if the logical expression:

$$\begin{array}{@{}rcl@{}} (\mu_{i} \geq \epsilon) \wedge (\bar{\mu}_{i} \geq \epsilon) \end{array} $$
(28)

holds true then we reward CH i’s belief on the event by sending to the concentrator \(\tilde {\mu }_{i} = \mu _{i}\). When CH i and its neighbors unanimously agree on the absence of an event, i.e., if it holds true that:

$$\begin{array}{@{}rcl@{}} (\mu_{i} < \epsilon) \wedge (\bar{\mu}_{i} < \epsilon) \end{array} $$
(29)

then \(\tilde {\mu }_{i}\) is the average value of all degrees of belief:

$$\begin{array}{@{}rcl@{}} \tilde{\mu}_{i} & = & \frac{1}{|\mathcal{N}_{i}| + 1}\left( \sum\limits_{j \in \mathcal{N}_{i}}\mu_{j}+\mu_{i}\right), \end{array} $$
(30)

and the CH i does not notify the concentrator. If there is a disagreement between CH i and its neighborhood, i.e., if it holds true that:

$$\begin{array}{@{}rcl@{}} (\mu_{i} > \epsilon) \wedge (\bar{\mu}_{i} < \epsilon) \end{array} $$
(31)

then CH i notifies its concentrator after regulating its local opinion by a factor of r ∈ (0,1) towards the neighbors’ average belief, i.e.,

$$\begin{array}{@{}rcl@{}} \tilde{\mu}_{i} & = & \mu_{i} + r(\bar{\mu}_{i}-\mu_{i}). \end{array} $$
(32)

The concentrator then acquires knowledge for a specific region of the area of interest about the appearance of an event and to what extent this local inference from nodes \(\{i, \mathcal {N}_{i}\}\) carries high belief by receiving \(\tilde {\mu }_{i}\). Note, since \(\mu _{i} \geq \max _{j \in \mathcal {N}_{i}}\{\mu _{j}\}\), the case \((\mu _{i} < \epsilon ) \wedge (\bar {\mu }_{i} > \epsilon )\) can never arise.
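The reward-idle logic of (27)–(32) reduces to a few lines; the sketch below uses the default 𝜖 = 0.7 and r = 0.5 reported in the experimental setup, and only illustrates the decision branches.

```python
EPSILON, R_FACTOR = 0.7, 0.5   # belief threshold and opinion factor (defaults used later)

def aggregate_belief(mu_ch, neighbor_mus):
    """Return (aggregated belief mu_tilde, notify_concentrator) for a CH and its members."""
    mu_bar = sum(neighbor_mus) / len(neighbor_mus)                        # (27)
    if mu_ch >= EPSILON and mu_bar >= EPSILON:                            # (28): unanimous event
        return mu_ch, True                                                # reward the CH belief
    if mu_ch < EPSILON and mu_bar < EPSILON:                              # (29): unanimous no-event
        mu_tilde = (sum(neighbor_mus) + mu_ch) / (len(neighbor_mus) + 1)  # (30)
        return mu_tilde, False                                            # stay idle, no report
    # (31): the CH believes in the event but its neighborhood does not
    return mu_ch + R_FACTOR * (mu_bar - mu_ch), True                      # (32)

print(aggregate_belief(0.85, [0.8, 0.9, 0.75]))   # unanimous agreement -> (0.85, True)
print(aggregate_belief(0.85, [0.3, 0.2, 0.4]))    # disagreement -> regulated belief ~ 0.575
```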

9 Computational complexity, energy & communication cost

In this section, we present the time and space computational complexities for both Algorithms 1 and 2 and the energy and communication cost of the following processes for each node i: (i) event inference (local derivation of the degree of belief μ i ), (ii) election and clustering era (appointment of cluster-heads (CHs) and cluster members), (iii) derivation of the aggregated degree of belief \(\tilde {\mu }_{i}\) by the CHs, and (iv) reporting to the concentrators by the CHs.

9.1 Computational complexity

We report on the time and space complexities of the processes that are needed for each node i to locally infer the degree of belief. Such processes include: (i) context vector quantization for patterns derivation, (ii) change detection of the quantized data subspace, (iii) context adaptation, and (iv) degree of belief inference including context prediction and fuzzy inference.

9.1.1 Time & space complexity for context vector quantization

Algorithm 1 is an incremental partitioning algorithm which updates the closest current pattern w∗ based on the incoming context vector x(t) at time instance t. The closest pattern update stops when the algorithm has converged with respect to a convergence threshold γ. That is, the patterns’ updates are stopped at the first time instance (vector observation) t′ such that:

$$\begin{array}{@{}rcl@{}} t^{\prime} & = & \inf_{t}\{t > 0: \Gamma(t) \leq \gamma\}. \end{array} $$
(33)

During the training phase, at every observation x(t) at time instance t, the algorithm finds the closest pattern w∗ to the context vector x(t). This requires O(d K) time per observation for searching for the closest pattern out of the current K patterns \(\{\mathbf {w}_{k}\}_{k=1}^{K}\). The whole training process requires O(d K t′) time. After convergence, i.e., at time instance t > t′, the structures of the patterns are used for outliers detection and, in certain cases, for adaptation based on the optimal stopping time methodology in Section 5. In this phase, the calculation of the probability p(w∗|x) requires O(d log K) given a k-d tree structure for searching the closest pattern. The space complexity of Algorithm 1 refers to the storage of the K d-dimensional patterns w k , which is O(d K).
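The drop from O(dK) to O(d log K) for the closest-pattern lookup is easy to illustrate; in the sketch below the pattern set is random and purely illustrative, and the SciPy k-d tree stands in for whatever index a node would maintain after convergence.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
K, d = 50, 2
patterns = rng.random((K, d))     # the K d-dimensional patterns w_k (illustrative)
x = rng.random(d)                 # an incoming context vector x(t)

# O(dK): brute-force scan over all patterns, as in the training phase.
brute_idx = int(np.argmin(np.linalg.norm(patterns - x, axis=1)))

# O(d log K) on average: query a k-d tree built once over the converged patterns.
tree = cKDTree(patterns)
_, tree_idx = tree.query(x)

assert brute_idx == tree_idx
print("closest pattern index:", tree_idx)
```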

9.1.2 Time & space complexity for change detection and adaptation

Algorithm 2 is an incremental algorithm, which processes the sensed context vector x(t) at time instance t to determine whether a context change has occurred after several observations. Algorithm 2 requires a pre-calculation of the K scalars 𝜃 k ,k ∈ [K] using (22). Those scalars derive from the variances of the data subspaces represented by the patterns w k from Algorithm 1. This requires O(d K) time for the K variances 𝜃 k . At every time instance t, Algorithm 2 calculates the outlier indicator I t using (16), which requires O(1) given the closest pattern w∗, whose retrieval requires O(d log K). When the optimal stopping criterion holds true, which is determined in O(1) time, the closest pattern w∗ is either updated or a new pattern is inserted in the pattern set \(\mathcal {W}\) in O(1). In the adaptation, the dynamic vigilance ρ is updated in O(d K). The space complexity of Algorithm 2 refers to the storage of the K scalars (variances) 𝜃 k and the K dynamic vigilances \(\rho _{k}^{*}\), which is O(K).

Table 2 shows the asymptotic time and space complexities for Algorithms 1 and 2.

Table 2 Asymptotic time & space complexities per node

9.1.3 Time & space complexity for degree of belief inference

Each node i, upon sensing a d-dimensional context vector x(t) at time instance t, performs event inference to locally derive the degree of belief μ i . Specifically, context prediction scales linearly with the number of temporal nearest neighbors used for smoothing, thus requiring O(d m) time to predict context. In addition, concerning the outliers detection, upon reception of a context vector, node i performs a nearest neighbor search over the K patterns to find the closest one. By adopting a d-dimensional tree structure (a k-d tree) over the prototypes, we require O(d log K) for evaluating the probability of assignment. In the case of adaptation after the outliers detection, node i adapts its closest pattern in O(1). Moreover, the context fusion is achieved in O(d) to evaluate the state vector, that is, it depends only on the data dimensionality. Finally, the FIRs are fixed and provided by the experts. Hence, the fuzzy-based event inference takes O(R), where R is the number of FIRs.

Overall, based on Table 2, a node i requires O(d(m + log K) + R) time to provide the degree of belief μ i on an event, including any possible adaptation. It is worth noting that node i requires O(d(K + m)) space to store the patterns and the most recent context vectors. Given the belief-centric clustering, a node i after local inference can initiate a clustering era for determining the aggregated degree of belief. In each clustering era, every node i requires O(1) messages to either be appointed as a CH or not (member of the cluster); see Lemmas 1 & 2. For a CH node, the calculation of the aggregated degree of belief depends on the cardinality of its neighborhood, i.e., the number of cluster members, which requires \(O(|\mathcal {N}|)\) using (30). Every CH node then transmits to its concentrator the aggregated degree of belief, requiring O(1) messages (network communication). Table 3 summarizes the overall asymptotic complexities per node for the engaged processes: event inference, belief-centric election, and reporting of the aggregated degree of belief to the concentrator.

Table 3 Asymptotic complexities for each process per node; ‘-’ means ‘not applicable’

9.2 Communication cost & computation energy consumption

The nodes must accomplish their assigned sensing and inference tasks using the limited energy resources they carry. Energy is spent on a number of operations: (i) wireless communication, (ii) sensing the environment, and (iii) local computation. In our study, the energy and communication model reflects three facets: energy for communication required for the belief-centric clustering process, energy for computation, i.e., event inference and degree of belief derivation, and communication energy of the cluster-heads to report the aggregated degree of belief to their assigned concentrators.

Each node i consumes processing power for locally inferring the degree of belief μ i of a possible event as described in Section 7.2. We denote by \(\mathcal {E}_{\mu _{i}}\) the energy cost in Joule per CPU instruction corresponding to the executable inference algorithm for the local degree of belief per node i. Moreover, when nodes initiate a clustering era, some nodes are appointed as cluster-heads after computing their EP values. During a clustering era, a node is either dynamically appointed as a CH or acts as a member. In a clustering era, the energy for in-cluster communication \(\mathcal {E}_{c,i}\) in Joules per bit transmission (TX) and reception (RX) is the energy consumption incurred at node i by transmitting (TX) and receiving (RX) election messages. After the election, each CH node has to calculate its neighbors’ aggregate belief \(\tilde {\mu }_{i}\) with energy cost \(\mathcal {E}_{\tilde {\mu }_{i}}\) in Joule per CPU instruction and then transmit (TX) this value to its assigned concentrator, thus incurring an additional communication cost \(\mathcal {E}_{CH,i}\).

Let the CH indicator be J i = 1 if node i is appointed as a CH after a clustering era; otherwise J i = 0 when node i is a cluster member. Then, we define the total cumulative energy consumption C i per node i as the cumulative computation consumption for event inference and/or the aggregated degree of belief, and the communication consumption for clustering and for transmitting the aggregated degree of belief to the concentrators (in the case of CHs only) up to time instance t, that is:

$$\begin{array}{@{}rcl@{}} C_{i} = C_{p,i} + C_{c,i}, \end{array} $$
(34)

where

$$\begin{array}{@{}rcl@{}} C_{p,i} = \sum\limits_{\tau=0}^{t}\left( \mathcal{E}^{\tau}_{\mu_{i}} + J_{i} \mathcal{E}^{\tau}_{\tilde{\mu}_{i}} + \mathcal{E}^{\tau}_{0} \right), \end{array} $$
(35)

and

$$\begin{array}{@{}rcl@{}} C_{c,i} = \sum\limits_{\tau=0}^{t}\left( \mathcal{E}^{\tau}_{c,i} + J_{i}\mathcal{E}^{\tau}_{CH,i} + \mathcal{E}^{\tau}_{0} \right), \end{array} $$
(36)

where \(\mathcal {E}_{0}\) is the energy cost for node i transitioning from the idle to the standby operational mode [36]. Up to time instance t, the communication and computation costs for all nodes and the overall cost are, respectively:

$$\begin{array}{@{}rcl@{}} C_{c} = \sum\limits_{i=1}^{|\mathcal{N}|}C_{c,i}, C_{p} = \sum\limits_{i=1}^{|\mathcal{N}|}C_{p,i}, C = C_{p}+C_{c}. \end{array} $$
(37)
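As a bookkeeping illustration of (34)–(36), the following sketch accumulates the per-step costs for a single node; all the per-step energy figures and the CH indicator values are placeholders.

```python
def cumulative_cost(steps):
    """steps: iterable of dicts with per-step energies E_mu, E_agg, E_comm, E_ch, E_0
    and the CH indicator flag is_ch (J_i)."""
    c_p = c_c = 0.0
    for s in steps:
        j = 1 if s["is_ch"] else 0
        c_p += s["E_mu"] + j * s["E_agg"] + s["E_0"]      # computation, Eq. (35)
        c_c += s["E_comm"] + j * s["E_ch"] + s["E_0"]     # communication, Eq. (36)
    return c_p, c_c, c_p + c_c                            # C_{p,i}, C_{c,i}, C_i as in (34)

# Two hypothetical time steps: the node acts as a CH in the first one only.
steps = [
    {"E_mu": 4e-5, "E_agg": 1e-5, "E_comm": 6.3e-5, "E_ch": 7.5e-5, "E_0": 1e-6, "is_ch": True},
    {"E_mu": 4e-5, "E_agg": 1e-5, "E_comm": 6.3e-5, "E_ch": 7.5e-5, "E_0": 1e-6, "is_ch": False},
]
print(cumulative_cost(steps))
```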

For the sensing, communication and computation energy consumption, we adopted the energy model of the Mica2 sensor board. This energy model assumes two AA batteries that supply approximately 2200 mAh at an effective average voltage of 3 V. The board draws 20 mA when running a sensing application continuously. The communication cost for transmitting (TX) a bit is 720 nJ/bit and for receiving (RX) a bit is 110 nJ/bit. Moreover, the packet header of the communication protocol adopted by Mica2 is 9 bytes (MAC header and CRC) and the maximum payload is 29 bytes. Therefore, the per-packet overhead equals 23.7% at the maximum payload (its lowest value). For each transmitted data value, i.e., a value component x of a d-dimensional vector x and the EP value in an election message, the assumed payload is set to 4 bytes (floating point number) and 2 bytes, respectively. Finally, the energy cost per CPU instruction is 4 nJ/instruction in Mica2. Table 4 shows the energy consumption in nJ per bit for communication and in nJ per CPU instruction for computation.
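The figures above translate into per-packet and per-inference energies as in the back-of-the-envelope sketch below; the message layouts and the assumed 10^4 instructions per local inference are illustrative only.

```python
E_TX_BIT = 720e-9    # J per transmitted bit (Mica2)
E_RX_BIT = 110e-9    # J per received bit (Mica2)
E_INSTR  = 4e-9      # J per CPU instruction (Mica2)
HEADER_BYTES = 9     # MAC header + CRC

def tx_energy(payload_bytes):
    """Energy (J) to transmit one packet with the given payload."""
    return (HEADER_BYTES + payload_bytes) * 8 * E_TX_BIT

def rx_energy(payload_bytes):
    """Energy (J) to receive one packet with the given payload."""
    return (HEADER_BYTES + payload_bytes) * 8 * E_RX_BIT

# Header overhead at the maximum 29-byte payload: 9 / (9 + 29) ~ 23.7 %
print(round(100 * HEADER_BYTES / (HEADER_BYTES + 29), 1))

# One 4-byte context value vs. one 2-byte EP election message
print(tx_energy(4), tx_energy(2))          # ~7.5e-5 J vs ~6.3e-5 J

# Hypothetical local inference of 10^4 instructions -> 4e-5 J
print(10_000 * E_INSTR)
```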

Table 4 Energy parameters

10 Performance evaluation

10.1 Performance metrics

We assess the performance of our mechanism in terms of: (i) probability of false (erroneous) event inference ϕ ∈ [0,1], (ii) event time index \(\tau \in \mathbb {T}\) of recognizing an event, (iii) communication overhead (number of aggregated degree of belief messages) \(\mathcal {M}\) required for CHs to inform the concentrators about event inference, (iv) energy consumption for event inference C p and communication cost C c per node i and for the entire IoT environment, and (v) efficiency of our mechanism in delivering event inference with a low false rate while being communication and energy aware.

The false probability ϕ represents the rate of erroneous inference (false alerts) that the mechanism generates, defined as the ratio of the number of false alerts to the total number of inference results. Note, event inference is obtained at every time \(t \in \mathbb {T}\) corresponding to the reception of context vector x at any node i. A value of ϕ → 1 indicates a high rate of false alerts, thus, no conclusion can be drawn for the true state of the phenomenon.

The event time index \(\tau \in \mathbb {T}\) refers to the time index of the measurement that actually corresponds to an event. Through this metric, we assess how ‘close’ to the real case an event is inferred by our mechanism; not at early stages, in order to avoid false alerts, and not many stages after the real event. The τ is evaluated over the real events identified by the mechanism.

The number of messages \(\mathcal {M}\) refers to the total number of messages (\(\tilde {\mu }\) values) sent from CHs to their concentrators, including the total number of messages sent for the belief-centric clustering. The lower the \(\mathcal {M}\) is, the fewer energy resources in terms of communication are spent. Let us denote the lifetime of the entire network as \(\mathcal {T}\) (in terms of energy) and let \(\mathcal {N}_{CH}\) be the set of CHs, i.e., \(|\mathcal {N}_{CH}| \ll |\mathcal {N}|\). Since at each clustering era T 1,T 2,…, our mechanism assigns certain nodes as CHs, then, in the network lifetime, \(\lfloor \frac {\mathcal {T}}{T} \rfloor \) clustering eras are realized, where T is the expected number of observations between successive clustering initiations. By adopting our belief-centric clustering, only \(|\mathcal {N}_{CH}|\) messages of \(\tilde {\mu }\) values are delivered to the concentrators to keep them up-to-date about the event inference, along with \(O(|\mathcal {N}|)\) messages circulated locally for building the clusters, as proved in Lemma 2. Hence, it holds true that:

$$\mathcal{M} = \lfloor \frac{\mathcal{T}}{T} \rfloor \left( |\mathcal{N}_{CH}| + O(|\mathcal{N}|) \right). $$

Without clustering, all nodes would send their μ values to the concentrators, thus, in this case we would obtain \(\mathcal {M} = \lfloor \frac {\mathcal {T}}{T} \rfloor |\mathcal {N}|\).

The energy consumption C p refers to the energy consumed for computational processing per node i to locally infer the degree of belief after observing a d-dimensional context vector. The energy consumption C c refers to the communication overhead cost for nodes during the clustering eras due to message exchanges for CH election. These messages include the EP values. Moreover, this cost includes the energy consumption for the appointed CHs to transmit the aggregated degrees of belief (from their neighborhoods) to their concentrators. The energy model for computation and communication derives from the Mica2 energy model presented in Section 9.2. Finally, we define as efficiency the total amount of energy C = C p + C c consumed by our mechanism to deliver event inference with a low false rate ϕ. We desire a low energy expenditure along with a low false rate. We compare our mechanism with other mechanisms in terms of energy consumption (communication and computation) and efficiency, as shown in Section 10.5.

10.2 Experiment setup

We experiment with a real multivariate dataset [38] adopted from the Microsoft Research open datasets. The dataset contains meteorological data retrieved in the cities of Beijing and Shanghai. The collected context variables are: temperature, humidity, barometric pressure and wind strength. In our experiments, we adopt 2-dim. context vectors with x 1 = ‘temperature’ and x 2 = ‘humidity’ recorded by \(|\mathcal {N}| = 50\) nodes deployed in the field and observe 50,000 context vectors. We consider one observation at each discrete time instance \(t \in \mathbb {T}\) and assume one concentrator acting also as the back-end system for those nodes. All vectors are scaled, i.e., x ∈ [0,1] × [0,1].

In the dataset, no hazardous events are identified, i.e., the probability of a true event is zero. To define an event, we exploit the expert knowledge in [25] stating that: a high temperature, e.g., around 600 Celsius, along with a low humidity, e.g., below 30%, defines a fire incident. Firstly, we consider injecting ‘faulty’ values to examine whether our mechanism produces erroneous inference/false alerts. Our target is to obtain ϕ → 0. To simulate a setting where nodes deliver faults/outliers, we randomly inject faulty measurements as indicated by the ‘faulty rules’ in [26] with some fault probability p F > 0. On a node i, an actual temperature value x 1 at time t is replaced as x 1 ← (1 + a F )x 1 and a humidity value as \(x_{2} \leftarrow \frac {a_{F}}{1 + a_{F}} x_{2}\), with a F ∈ {2,3,5}, and we assume different fault probabilities p F ∈ {5%,10%,20%,40%,60%,80%}. In addition, we inject a set of fire events represented by a state temperature value v 1 close to 1 and a state humidity value v 2 close to zero as described in [25]. Note, we increase the temperature value and decrease the humidity value of the same context vector. The event time index τ k of a predefined fire event E k is pre-recorded. We define 10 fire events randomly spread across the dataset, where the time duration of an event is drawn from the Exponential distribution with an average event duration of 10 time units. Through this setup, we examine whether our mechanism is capable of (i) inferring the events E k given fault probability p F and (ii) producing a time index of E k as close to τ k as possible, i.e., whether the proposed mechanism identifies E k at the right time.
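A small sketch of this fault-injection step is given below; the fault probability, the amplification factor and the random stream are illustrative values drawn from the sets listed above.

```python
import numpy as np

rng = np.random.default_rng(42)

def inject_faults(stream, p_f=0.4, a_f=3.0):
    """stream: array of shape (T, 2) with scaled columns (x1 = temperature, x2 = humidity)."""
    out = stream.copy()
    faulty = rng.random(len(out)) < p_f                    # faulty with probability p_F
    out[faulty, 0] = (1 + a_f) * out[faulty, 0]            # x1 <- (1 + a_F) x1
    out[faulty, 1] = a_f / (1 + a_f) * out[faulty, 1]      # x2 <- a_F / (1 + a_F) x2
    return out, faulty

stream = rng.random((1000, 2))                 # stand-in for the scaled sensor stream
noisy, mask = inject_faults(stream)
print(round(mask.mean(), 3))                   # empirical fault rate, close to p_F
```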

The parameter values are presented in Table 5 and, specifically, the default values are: belief threshold 𝜖 = 0.7, convergence threshold γ = 0.001, context history m = 10, h = 5 in h NN DES, vigilance percentage a = 0.1 and vigilance threshold ρ = (a d)^{1/2} = 0.44 for 2-dim. context, initial learning rate η = 0.5, assignment probability factor β = 0.1, risk factor α = 0.95, opinion factor r = 0.5, and the number of FIRs R = 27. The justification of those values is discussed in the remainder.

Table 5 Experimental parameters

10.3 Comparison models

We compare our mechanism, hereinafter referred to as Model (M), with the local Voting Scheme (VS) and the centralized Aggregation Scheme (AS).

In the local VS model, a node i locally infers an event at time t based only on the expert knowledge fusion function, i.e., when it holds true that:

$$\begin{array}{@{}rcl@{}} f(\mathbf{v}_{i}(t)) \geq \epsilon, \end{array} $$
(38)

thus, neglecting all other CPs to reason about the final decision. Then, each node i transmits only its inference result (event vote) to a central node, which gathers the votes and centrally infers an event based on the majority of votes.

In the centralized AS model, each node i transmits its current context data vector x i (t) to the central node. The central node, then, aggregates all the received context data vectors from the \(|\mathcal {N}|\) nodes and centrally infers an event based on:

$$\begin{array}{@{}rcl@{}} f(g\{ \mathbf{v}_{i}(t)\}_{i=1}^{|\mathcal{N}|}) \geq \epsilon, \end{array} $$
(39)

where v i (t) is the state context vector corresponding to node i’s context and g{⋅} is the average operator.
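For reference, the two baseline decision policies can be sketched as follows; the fused expert function f below is a placeholder, and only the thresholding/majority logic of (38)–(39) is meant to be illustrated.

```python
EPSILON = 0.7

def f(v):
    """Placeholder expert fusion over a scaled state vector v = (temperature, humidity)."""
    temp, hum = v
    return max(0.0, min(1.0, temp * (1.0 - hum)))   # illustrative only

def vs_vote(v_local):
    """VS: a node votes for an event from its own fused state only, Eq. (38)."""
    return f(v_local) >= EPSILON

def vs_decision(votes):
    """VS: the central node declares an event on a majority of positive votes."""
    return sum(votes) > len(votes) / 2

def as_decision(states):
    """AS: the central node averages all state vectors and applies Eq. (39)."""
    avg = tuple(sum(col) / len(col) for col in zip(*states))
    return f(avg) >= EPSILON

states = [(0.95, 0.1), (0.9, 0.15), (0.2, 0.6)]
print(vs_decision([vs_vote(v) for v in states]), as_decision(states))   # True False
```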

10.4 Performance evaluation

10.4.1 Quality of event inference

We analyze the event inference performance of model M for different numbers of nodes \(|\mathcal {N}|\), fault probabilities p F , fusion parameter λ 2, belief threshold 𝜖, and vigilance ρ. In Table 6, we examine the robustness of model M in terms of false rate ϕ for different values of the fault probability p F , 5% ≤ p F ≤ 80%, and number of nodes \(|\mathcal {N}|\). Model M is robust, assuming a very low ϕ (less than 1.5%) even for data streams involving a large fraction of faulty values, i.e., p F = 80%. This indicates the capability of model M to reason under uncertainty as treated by the involvement of the three CPs. Moreover, the knowledge fusion of the local degrees of belief depends on the number of opinions, i.e., the number of nodes involved in the event reasoning. The higher the \(|\mathcal {N}|\) is, the lower the ϕ becomes. The reason is that each node i locally processes context and infers an event w.r.t. the three CPs and shares its local view/degree of belief with its neighbors through our CH-based consensus approach. Then, by voting among those aggregated degrees of belief \(\tilde {\mu }\) that actually relate to an event (sent only by CHs), the back-end system clearly concludes on that event with high accuracy. Model M takes into consideration the groups’ perspectives, i.e., an event is locally agreed at a CH only when a large percentage of neighboring nodes support that event presence. When \(|\mathcal {N}|\) increases, the team is more ‘compact’, meaning that many more nodes support an event presence with more certainty, in contrast to the case where \(|\mathcal {N}|\) is small. In the case where only one node i is present, model M is based on node i’s belief, thus, false alerts could arise more easily (as node i could have a faulty view on an event presence).

Table 6 Model M: false rate ϕ vs. p F and \(|\mathcal {N}|\)

In addition, we examine the impact of \(|\mathcal {N}|\) on the time lag τ between the actual event time index and the identified/inferred time index. Model M obtains an average time lag τ = 2.2 time units with standard deviation σ τ = 0.77 for \(5 \leq |\mathcal {N}| \leq 50\). This indicates that all events are identified in near real-time.

Table 7 presents the effect of the expert knowledge fusion (CP1) on producing false alerts. Recall that expert knowledge fusion depends on the parameters λ 1 and λ 2 that affect the result of f(v). Of these two parameters, λ 1 ‘defines’ the threshold value of the fusion function as provided by the expert, while λ 2 defines the steepness of the function. We experiment with the steepness λ 2 ∈ {2.0,4.0,6.0} for fixed threshold λ 1. We observe in Table 7 that a high λ 2 results in a high false rate ϕ, while when λ 2 = 2.0, the false rate ϕ is limited (equal to or very close to 0). A high λ 2 leads to a more ‘relaxed’ identification of the event. However, this leads to an increased number of false alerts by overestimating the CP1 f(v) at the expense of the other two CPs (error e and assignment probability p(w∗|x)), which is passed to the T2FLS engine. A low λ 2 value regulates the impact of CP1 on the other two CPs, thus, model M exploits all CPs to avoid a high rate of erroneous inference results.

Table 7 Model M: false rate ϕ vs. p F and \(|\mathcal {N}|\) for λ 2 ∈{2.0,4.0,6.0}

We also examine the impact of the belief threshold 𝜖 on model M in terms of false rate ϕ. Table 8 shows the results when 𝜖 ∈{0.5,0.9} for different values of \(|\mathcal {N}|\) and p F . A low 𝜖 leads to optimistic and sensitive event identification compared to high 𝜖 values. Evidently, this corresponds to an increased number of false alerts ϕ. In such cases, model M is also affected by an increased number of messages \(\mathcal {M}\) sent from CHs to the back-end system, which reaches the theoretical maximum \(\mathcal {M}\) of the centralized approach, where all nodes send their observations to a back-end system. In addition, in Table 8 we observe the results for 𝜖 = 0.9. In this case, ϕ is minimized especially when \(|\mathcal {N}| > 5\). A high 𝜖 makes event inference less sensitive and events more difficult to discriminate, thus, only a limited number of nodes agree on an event presence. This behavior has obvious consequences on the identification of real events, as τ increases. We set 𝜖 = 0.7 in our experiments as explained later.

Table 8 Model M: false rate ϕ vs. p F and \(|\mathcal {N}|\) for 𝜖 ∈{0.5,0.9}

In addition, we experiment with the average number of context patterns K per node that are required to quantize the vector data space to materialize CP2 and CP3. Table 9 shows the number of patterns K (mean value avg(K) and standard deviation σ K over the \(|\mathcal {N}| = 50\) nodes) that quantize the context spaces needed for outliers and novelty detection against the vigilance percentage a, i.e., ρ = (a d)^{1/2}. A low a value, which corresponds to a low ρ, results in high quantization resolution in terms of patterns; a high number of patterns is generated to better represent the vector space. This, however, comes at the expense of a high number of patterns that need to be stored on a node. But, even in the case of a = 0.1, this number remains low (K ∼ 49). Hence, to achieve highly accurate inference results and maintain model M up-to-date w.r.t. novel vector subspaces, we set a = 0.1.

Table 9 Model M: patterns K per node vs. ρ(a); Messages \(\mathcal {M}\), average number of cluster heads \(|\mathcal {N}_{CH}|\), and clustering eras T vs. belief threshold 𝜖; \(|\mathcal {N}| = 50\)

10.4.2 Communication & computation cost

In terms of communication overhead (number of messages circulated in the IoT environment), we examine the capability of model M to achieve low false rates by avoiding transferring context data to the concentrator and conveying only the minimal sufficient knowledge for event reasoning, as governed by the belief threshold 𝜖. Table 9 shows the impact of the belief threshold 𝜖 on: (i) the number of messages \(\mathcal {M}\), (ii) the average number of CHs \(|\mathcal {N}_{CH}|\) per clustering era, and (iii) the number of clustering eras T. A value of 𝜖 close to the cut-off value of 0.5 results in many clustering eras (T > 1000), thus, many messages are sent within clusters and from CHs to concentrators, along with a high ϕ value (see Table 8). Evidently, a value 𝜖 > 0.5 is adopted to ‘narrow’ and clarify the inference results. On the other hand, with a high 𝜖, model M increases its tolerance in assessing an event presence, thus being communication efficient. However, in this case, events are difficult to identify, which does not reflect the actual situation in the IoT network. To balance between communication load, accuracy of inference, and capability of event identification, we set a belief threshold 𝜖 = 0.7 in our experiments. For 𝜖 = 0.7, model M initiates T = 10 clustering eras in which 9% of nodes (CHs) transfer their aggregated knowledge to the concentrators, achieving a low ϕ value. In all these 10 clustering eras, the nodes successfully detect all 10 events.

In terms of energy consumption due to the computational cost of local inference and the communication cost of each clustering era, we present in Fig. 2 (left) the total cost C M in nJoule and its breakdown into the processing (computation) cost C p and communication cost C c , as defined in (34) and (37), respectively, for \(|\mathcal {N}| = 50\) nodes, after observing 10,000 context vectors each. We can observe that the computational energy is the smallest part of the expenditure compared to the communication cost, which indicates the advantage of the distributed inference in reducing the network overhead. This is attributed to the fact that the energy required for TX and RX of pieces of data is higher than the energy consumed for local computations in each node. Moreover, our proposed Algorithms 1 and 2 are on-line, incremental learning algorithms, thus providing a lightweight solution for localized context inference, which is our major goal: ‘to push intelligence to the edge of the network’, reducing unnecessary data transfer to the back-end system and/or to the concentrators. Since each node i can locally reason about a contextual event, instead of transmitting the actual sensed multidimensional contextual data towards a centralized system (as discussed later in the comparative assessment in Section 10.5), it transmits, only if needed, the local inference results, i.e., the degree of belief. Moreover, the proposed clustering scheme involves localized message exchange among neighboring nodes, which further reduces the network overhead by avoiding transmitting data values from the edge of the network to the concentrators. Even in this localized information dissemination process, the nodes transmit only inferred knowledge, i.e., local degrees of belief, and not data values. The appointed CHs are the only ones responsible for transmitting the aggregated degrees of belief to the concentrators, and they correspond to 9% of the total number of nodes in the network. By pushing this intelligence to the edge, the computational cost accounts for 30% of the total consumed energy, while the remaining portion is devoted to localized communication during the clustering eras plus the communication of the CHs with the concentrators, as shown in Fig. 2 (right).

Fig. 2
figure 2

(Left) Total energy consumption C M in nJ and its breakdown to the processing/computation cost C p and communication cost C c for \(|\mathcal {N}| = 50\) nodes vs. number of observations; (right) the processing and communication ratios out of the total consumed energy

Figure 3 (left) shows the total consumed energy C M for different numbers of nodes \(|\mathcal {N}|\), while Fig. 3 (right) illustrates the impact of the vector quantization (vigilance percentage a) in each node i on the processing/computation cost C p out of the total cost C M for different numbers of nodes. It is worth mentioning that when we increase the resolution of the vector quantization, i.e., the number of patterns K that can be estimated during the vector quantization process (Algorithm 1), node i spends more energy on computation. This corresponds to identifying the closest pattern and calculating the assignment probability. Obviously, the more patterns each node derives from the quantization process, the higher the quality of inference, however, at the expense of computational energy consumption. Nonetheless, the quality of inference is related to reducing the false rate ϕ. To achieve a significantly low ϕ value, i.e., ϕ < 0.001, our mechanism requires a vigilance percentage a = 0.35. In this case, the processing ratio is approximately 30% of the total energy consumption. There is then a trade-off between quality of inference (due to high quality of vector quantization) and the energy required for achieving this high quality. Our mechanism is flexible in tuning this trade-off (as shown in Fig. 6) and attains the lowest false rate while being energy efficient (in both communication and computation) compared with the VS and AS models described in Section 10.5.

Fig. 3
figure 3

(Left) Total energy consumption C M in nJ for different number of nodes \(|\mathcal {N}|\) nodes vs. number of observations; (right) The processing/computation cost ratio C p /C M vs. the quality of vector quantization (vigilance percentage a) for different number of nodes \(|\mathcal {N}|\)

10.5 Comparative assessment

We compare model M with the models VS and AS, where their inference policies are provided in (38) and (39), respectively, focusing on: (i) quality of event inference, (ii) energy consumption in terms of computational cost and communication overhead, and (iii) efficiency.

10.5.1 Comparison in quality of inference

In the quality of inference, we evaluate the false rate of each model given a probability of faulty data values to examine their robustness. In the comparison experiments, we take \(|\mathcal {N}| \in \left \lbrace 5, 10, 50 \right \rbrace \). Table 10 shows the false rate ϕ for \(|\mathcal {N}| \in \{5,10,50\}\) and different p F values. We observe that model M outperforms the VS and AS models when p F ≥ 40%. This is interesting as it shows that model M achieves a bounded erroneous inference probability even when nodes experience multiple faulty measurements. For p F = 80%, indicating high uncertainty, model M achieves 80.00% and 82.76% fewer false alerts compared to VS and AS, respectively. We can also observe from Table 10 the comparison results for \(|\mathcal {N}| \in \{10,50\}\). In general, an increased number of nodes leads to a low number of false alerts (i.e., low ϕ), close to zero. For \(|\mathcal {N}| = 10\), model M outperforms VS and AS when p F > 40%. For p F = 80%, model M achieves 88.10% and 84.85% fewer false alerts compared to VS and AS, respectively. In any case, model M keeps ϕ close to zero. This indicates the capability of model M to exploit all CPs to reason about events in a robust way, along with taking into account the local degrees of belief of neighboring nodes. For \(|\mathcal {N}| = 50\), model M produces alerts with very high accuracy, i.e., low ϕ, compared with models AS and VS, for all p F values.

Table 10 Comparison: model M, VS, and AS for ϕ vs. p F , \(|\mathcal {N}| \in \{5,10,50\}\)

10.5.2 Comparison in energy consumption, cost & efficiency

Model M obtains significantly low false rates with a significantly small number of messages sent from CHs to the back-end systems, compared to the VS and AS models. Specifically, for model M there are T = 13 clustering eras out of the total 5 ⋅ 104 observations, and we obtain the numbers of messages \(\mathcal {M} = (2.32 \cdot 10^{6}, 24.6 \cdot 10^{4}, 3.42 \cdot 10^{3})\) for model AS, model VS and model M, respectively (we obtain, on average, \(|\mathcal {N}_{CH}|=4.65\) cluster heads per clustering era). This indicates that, even for uncertain and faulty data streams, i.e., p F > 40%, model M achieves an 83.67% lower false rate than both the AS and VS models while requiring three and one fewer orders of magnitude in communication overhead, respectively.

In terms of energy consumption for computation and communication, Fig. 4 (left) shows the total costs C M , C V S , and C A S for models M, VS, and AS, respectively, for \(|\mathcal {N}| = 50\) in logarithmic scale. It is obvious that our model saves energy by at least two orders of magnitude compared to the localized inference model VS and the centralized inference model AS. This realizes the vision of pushing intelligence to the edge of the network by exploiting the computing capability of the nodes to infer events, thus avoiding data transfers from the source of information to the back-end systems. Moreover, even in the case of the localized VS model, our model requires significantly less energy (two orders of magnitude), since the ‘instant’ inference achieved by a node executing the VS model often turns out to be erroneous compared to our model. We capture this by introducing intelligent context reasoning processes such that the CHs only infer an event if the neighboring nodes reach a consensus, thus minimizing the false rate. This, however, requires some additional computational cost and communication. But, as illustrated in Fig. 4 (right), the energy consumed by our model is ∼ 10−3 and ∼ 10−2 of the energy consumed by the centralized and localized models, respectively.

Fig. 4
figure 4

(Left) Total energy consumption for models M, VS, and AS vs. number of observations; (right) the cost ratios C M /C A S and C M /C V S of the model M out of the models AS and VS, respectively vs. number of observations; \(|\mathcal {N}|=50\)

In Fig. 5 (left) we examine the scalability of our model in terms of the number of CHs as a percentage of the total number of nodes, compared with the AS and VS models. Specifically, we present the total consumed energy (computation and communication) starting from a CH percentage \(|\mathcal {N}_{CH}|\) of 10% up to 100% of the total number of nodes \(|\mathcal {N}|\). We can observe the significantly low impact on the total cost compared with the other models. Moreover, the case where \(|\mathcal {N}_{CH}| = |\mathcal {N}|\) depicts the capability of each node to infer an event as accurately as possible, thus minimizing the ϕ value. It is worth comparing this scalability performance with the VS model, where all the nodes act independently based on the inference policy in (38). This indicates the capability of our model not only to scale with the number of CHs but also to deliver inference results of high quality.

Fig. 5
figure 5

(Left) Scalability: total energy consumption for models M, VS, and AS vs. number of observations. For model M the cost C M is shown for different percentages of the number of CH nodes \(|\mathcal {N}_{CH}|\) out of the total number of nodes \(|\mathcal {N}|=50\); (right) total energy consumption for models M, VS, and AS vs. the belief threshold 𝜖 with \(|\mathcal {N}|=50\)

Figure 5 (right) shows the impact of the belief threshold 𝜖 on the consumed energy for all the models with \(|\mathcal {N}|= 50\). The higher the 𝜖 value, the less sensitive each model is in inferring an event. However, this comes at a lower cost, since both model M and model VS avoid inferring events, thus reducing the communication with the back-end system (transmitting the inference results from the CH nodes in model M and from the individual nodes in model VS). Evidently, model AS is not influenced by this threshold since the nodes just deliver the sensed contextual vectors and do not perform any computations. On the other hand, a high 𝜖 has a strong impact on the quality of inference, which is quantified by the ϕ value. Given a significantly low value of ϕ, we set 𝜖 = 0.7, which results in three and two orders of magnitude less energy consumption by our model compared with the AS and VS models, respectively. In that case, we define the efficiency indicator to examine the consumed energy of each model and its corresponding performance in terms of quality of inference.

Figure 6 shows the total energy consumption of all models against the false rate ϕ for data fault probability p F ∈{40%,80%}. It is worth noting the efficiency of our model compared to the other models AS and VS: it achieves a very low false rate with by far the lowest total energy consumption. The AS model appears to be the least efficient in terms of energy consumption and the achieved false rate, while model VS is moderately efficient for p F = 40%. When the fault probability is high, model VS significantly increases its false rate, due to the lack of any reasoning algorithm to deal with highly faulty data values, while it also consumes significantly more energy than model M. Model AS cannot reduce its energy consumption even if the data fault probability decreases, since that model does not take into consideration any characteristic of the captured contextual data streams (it only forwards data vectors to the back-end system). Our model appears very robust in terms of efficiency even if p F is high. Overall, our concept of pushing predictive intelligence and data processing to the edge devices yields: (i) accurate event inference close to the source of the information, (ii) significantly low communication overhead through localized belief-centric groupings, thus avoiding data transfer to the back-end systems, and (iii) energy-efficient and robust inference in terms of data fault probability.

Fig. 6
figure 6

Efficiency: total energy consumption for models M, VS, and AS vs. false rate ϕ for different faulty value probabilities p F with number of nodes \(|\mathcal {N}|=50\)

11 Conclusions

We propose a novel federated event reasoning scheme by pushing predictive intelligence to the edge of the IoT network. This is achieved by an energy-efficient, real-time event reasoning mechanism, where data processing and predictive intelligence are pushed to the edge devices equipped with sensing and computing capabilities. Edge predictive intelligence and collaborative reasoning are materialized by the autonomous nature of nodes to locally perform data sensing & inference and convey only inferred knowledge to their neighbors and concentrators. Nodes possess intelligence to reason about events, thus avoiding transferring raw data, while the complexity of inference is physically distributed to the sources of contextual information. Nodes are capable of locally processing and inferring events from contextual data streams enhanced with different context perspectives: predicted context, outliers context inference, and context fusion. The approximate event inference of each node is derived through Type-2 Fuzzy Logic inference to handle uncertainty. Finally, a knowledge-centric clustering scheme is introduced, where the clusters of nodes are formed according to their degrees of belief. The cluster heads then disseminate the minimal sufficient knowledge to the concentrators/systems for event inference.

We provide mathematical analyses of our statistical learning and stochastic optimization models, asymptotic complexities, and energy consumption models for computation and communication costs; we evaluate the model’s performance and provide a comprehensive comparative assessment against other local & centralized event inference mechanisms. It is evidenced that exploiting the computing and sensing capabilities of nodes to push ‘intelligence to the edge’ is appropriate for real-time applications in IoT environments.