1 Introduction

Applications within the area of Ambient Intelligence use technology to contribute to personal care for safety, health, performance, and wellbeing; e.g., (Aarts et al 2003, 2001; Riva et al 2005). Such applications exploit the possibilities to acquire sensor information about humans and their functioning, together with knowledge for the analysis of such information. Based on this, ambient devices can respond by undertaking appropriate actions that improve the human’s safety, health, performance, and wellbeing. Often the performance of such actions has a reactive nature, triggered when the value of a variable based on sensor information exceeds a certain threshold. A risk of such an approach is that the human is guided only at the level of his or her behaviour and not at the level of the underlying cognitive states causing the behaviour. Such a situation might lead to suggesting that the human suppress behaviour that is entailed by his or her internal cognitive states, without taking these cognitive states themselves into account. As an alternative route, the approach put forward in this paper incorporates a cognitive analysis of the internal cognitive states underlying certain performance aspects. Crucial aspects are the decisions on which cognitive states relate to the considered performance aspects, what is to be monitored (the monitoring foci), and how to derive conclusions about the cognitive states from the acquired monitoring information.

In this paper a component-based ambient agent architecture is presented for monitoring and cognitive analysis based on a cognitive model of the human’s functioning. Within this ambient agent architecture, first the cognitive states relevant to the considered performance aspects are automatically determined using the cognitive model. Next, for these cognitive states monitoring foci are determined (again from the cognitive model) by deriving representation relations for the human’s cognitive states. Within Philosophy of Mind a representation relation relates the occurrence of an internal cognitive state property of a human at some time point to the occurrence of other (e.g., externally observable) state properties at the same or at different time points. In the ambient agent model, these representation relations are expressed as temporal predicate logical specifications. From these temporal expressions the externally observable events to be monitored are derived: events that are relevant to the human’s generation of the cognitive states addressed. From the monitoring information on these events the agent verifies the representation expressions, and thus concludes whether or not the human is in such a state. The proposed approach makes it possible to identify the human’s cognitive states at any time point. Furthermore, in case an internal state has been identified that may affect the performance of the human in a negative way, appropriate actions may be undertaken by the agent.

The ambient agent architecture presented has been designed as a specialisation of a more general component-based ambient agent model (cf. Bosse et al 2008b), which is based on the component-based agent design principles presented in Brazier et al. (2000). Within this agent architecture, an explicitly represented cognitive model of the human’s functioning is assumed, expressed in the form of causal and dynamic relationships between cognitive states and behavioural aspects (i.e., specific forms of interaction by sensing and acting). The design has been specified in the form of an executable component-based agent model that has been used for simulation and prototyping.

The paper is organised as follows. First, an overview of the ambient agent architecture is provided in Sect. 2. In Sect. 3 the component Own Process Control of the architecture is described. Section 4 describes the component Agent Specific Task, with subcomponents for process analysis and plan determination. In Sect. 5 it is discussed how the relationships between the considered performance aspects and the relevant internal cognitive states are determined. Section 6 describes how the monitoring foci are obtained by deriving, from the given cognitive model, representation relations for the relevant cognitive states. Section 7 illustrates the architecture by two examples. In Sect. 8 related research is considered. Finally, Sect. 9 concludes with a discussion.

2 Overview of the ambient agent architecture

This section briefly introduces the modelling approach used. To specify the model conceptually and formally, the agent-oriented perspective is a suitable choice.

2.1 Component-based ambient agent model

An ambient agent is assumed to maintain knowledge about certain aspects of human functioning, and information about the current state and history of the world and other agents. Based on this knowledge it is able to have some understanding of the human processes, and can behave accordingly. Based on the component-based Generic Agent Model (GAM) presented in Brazier et al. (2000), a model for ambient agents (AAM) was designed (Bosse et al 2008b) (see Fig. 1).

Fig. 1 Ambient Agent Architecture

Within AAM, as in GAM, the component World Interaction Management takes care of interaction with the world, and the component Agent Interaction Management takes care of communication with other agents. Moreover, the component Maintenance of World Information maintains information about the world, and the component Maintenance of Agent Information maintains information about other (for example, human) agents. In the component Agent Specific Task, specific tasks can be modelled. The component Own Process Control (OPC) initiates and coordinates the internal agent processes.

The ambient agent model AAM has been obtained as a refinement of this component-based agent model in the following manner. Maintenance of Agent Information has three subcomponents in AAM: Maintenance of a Dynamic Agent Model maintains a cognitive model, represented by causal and temporal relationships, of the human’s functioning. Note that in this way, within the agent model that specifies the ambient software/hardware agent, another model is included which describes the human; this is a form of recursive modelling. Maintenance of an Agent State Model maintains a snapshot of the (current) state of the human. As an example, this may model the human’s gaze focussing state, a belief, or an intention. Maintenance of an Agent History Model maintains the history of states of the human. This may, for instance, model intentions over time.

Similarly, Maintenance of World Information has three subcomponents for a dynamic world model, a world state model, and a world history model, respectively. Moreover, Agent Specific Task has the following two subcomponents: Process Analysis assesses the current state of the human, and Plan Determination determines whether action (intervention) has to be undertaken, and, if so, which ones. Finally, as in the model GAM, World Interaction Management and Agent Interaction Management prepare (based on internally generated information) and receive (and internally forward) interaction with the world and other agents (including the human).

2.2 State ontologies and temporal relations

To express the information involved in the agent’s internal processes, the sorted predicate logical ontology shown in Table 1 was specified. An ontology is a signature specified by a tuple \( \langle\mathsf{S}_\mathsf{1},\ldots,\mathsf{S}_\mathsf{n},\mathsf{C},\mathsf{f},\mathsf{P},\mathsf{arity}\rangle \), where \( \mathsf{S}_{\mathsf i} \) for i = 1, …, n is a sort (for a specific type of objects), C is a finite set of constant symbols, f is a finite set of function symbols, P is a finite set of predicate symbols, and arity is a mapping of function or predicate symbols to a natural number. Furthermore, for each component of an agent input, output and internal ontologies are defined. Information transmitted from output to input interfaces of components may be mapped using information links connecting the components. Furthermore, information obtained at the input of a component may be mapped (based on the component’s functionality specification) to information generated at the component’s output.

Table 1 A part of the ontology used within the Ambient Agent Model

In particular, the input ontology of the World Interaction Management component contains a predicate to specify the results of (passive and active) observation from the world, and its output ontology contains predicates to specify actions and active observations performed in the world (see Table 1). To store the results of observation, the World Interaction Management component maps observation_result(I, S) at its input to new_world_info(I’, S’) or new_agent_info(I’, S’) at its output, depending on whether the observation concerns the world or another agent. Here the sign S (or S’) denotes pos or neg, indicating true or false. The information new_world_info(I’, S’) is provided to the component Maintenance of World Information via a link. This link performs the mapping of new_world_info(I’, S’) to belief(I’, S’), a predicate that belongs to the input, output and internal ontologies of the component Maintenance of World Information. Similarly, the information new_agent_info(I’, S’) is provided to Maintenance of Agent Information by a link performing the mapping of new_agent_info(I’, S’) to belief(I’, S’), a predicate that belongs to the input, output and internal ontologies of the component Maintenance of Agent Information. Note that in most cases the distinction between world information and agent information is straightforward, but in some other cases it is basically a modelling choice. Stored beliefs may be further provided to other components of the agent (e.g., to the Agent Specific Task component).

Similarly, the component Agent Interaction Management receives at its input information communicated by other agents and may initiate communication at its output, using the corresponding communication predicates from Table 1. The communicated information is mapped from communicated_by(I, S, A) to new_world_info(I’, S’) (resp. new_agent_info(I’, S’)) at the output of the component Agent Interaction Management, depending on whether it concerns information about the world or about another agent. This information may be stored by transmitting it to the component Maintenance of World Information (resp. Maintenance of Agent Information) by a link. This link describes the mapping of new_world_info(I’, S’) (resp. new_agent_info(I’, S’)) to belief(I’, S’), a predicate that belongs to the input, output and internal ontologies of the component Maintenance of World Information (resp. Maintenance of Agent Information).
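To make the chain of information links described above concrete, the following minimal Python sketch routes an observation result to a stored belief. The data structures and function names (Atom, route_observation) and the example info element are assumptions made purely for illustration; they are not part of the AAM specification.

    # Minimal sketch (not the AAM implementation) of how an information link could map
    # World Interaction Management output to a belief stored by a maintenance component.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atom:
        predicate: str          # e.g. "observation_result", "belief"
        args: tuple             # e.g. (("at_position", "food", "p2"), "pos")

    def route_observation(obs: Atom, concerns_world: bool) -> list:
        """Map observation_result(I, S) to new_world_info / new_agent_info and then to
        belief(I, S), mimicking the two links described in the text."""
        info, sign = obs.args
        intermediate = "new_world_info" if concerns_world else "new_agent_info"
        return [Atom(intermediate, (info, sign)),   # output of World Interaction Management
                Atom("belief", (info, sign))]       # input of the maintenance component

    # Example: an observed world fact becomes a stored belief
    obs = Atom("observation_result", (("at_position", "food", "p2"), "pos"))
    for atom in route_observation(obs, concerns_world=True):
        print(atom)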

The subcomponent Maintenance of a Dynamic Agent Model contains cognitive models that are represented by sets of beliefs using a part of the ontology from Table 1. As an example

$$ {\mathsf{belief}}({\mathsf{leads}}\_{\mathsf{to}}\_{\mathsf{after}}({\mathsf{I}}{:}{\mathsf{INFO}}\_{\mathsf{ELEMENT}},{\mathsf{J}}{:}{\mathsf{INFO}}\_{\mathsf{ELEMENT}},{\mathsf{D}}{:}{\mathsf{REAL}}),{\mathsf{pos}}) $$

is an expression based on this ontology which represents that the agent has the knowledge that state property I leads to state property J with a certain time delay specified by D.

Cognitive model specifications are based on temporal relations. The approach to modelling temporal expressions within the agent is based on the Temporal Trace Language (TTL) for formal specification and verification of dynamic properties (Bosse et al 2009; Sharpanskykh and Treur 2006, 2010). This is a so-called reified temporal predicate logical language (cf. Galton 2006), which means that properties of states are used as constants and terms in the language. It supports formal specification and analysis of dynamic properties, covering both qualitative and quantitative aspects. The agent’s dynamics are represented in TTL as an evolution of states of the agent over time. A state of the agent (or of one of its components) is characterised by the set of state properties, expressed over a (state) ontology Ont, that hold. In TTL state properties are used as terms (denoting objects). To this end the state language is imported in TTL as follows. For every sort S from the state language the following sorts are introduced in TTL: the sort S_VARS, which contains all variable names of sort S, and the sort S_GTERMS, which contains the names of all ground terms constructed using sort S; the sorts S_GTERMS and S_VARS are subsorts of the sort S_TERMS. The sort STATPROP contains names for all state formulae. The set of function symbols of TTL includes ∧, ∨, →, ↔: STATPROP x STATPROP → STATPROP; not: STATPROP → STATPROP; and ∀, ∃: S_VARS x STATPROP → STATPROP, of which the counterparts in the state language are the Boolean propositional connectives and quantifiers. Further, ∧, ∨, →, ↔ are used in infix notation and ∀, ∃ in prefix notation for better readability. To represent the dynamics of a system, the sort TIME (a set of time points) and the ordering relation >: TIME x TIME are introduced in TTL. To indicate that some state property holds at some time point, the relation at: STATPROP x TIME is introduced. The terms of TTL are constructed by induction in the standard way from variables, constants and function symbols typed with all before-mentioned sorts. The set of atomic TTL formulae is defined as follows:

  1.

    If t is a term of sort TIME, and p is a term of sort STATPROP, then at(p, t) is an atomic TTL formula.

  2.

    If τ1, τ2 are terms of any TTL sort, then τ1 = τ2 is a TTL atom.

  3.

    If t1, t2 are terms of sort TIME, then t1 > t2 is a TTL atom.

The set of well-formed TTL formulae is defined inductively in a standard way using Boolean connectives and quantifiers over variables of TTL sorts. The language TTL has the semantics of many-sorted predicate logic. A special software environment has been developed for TTL, featuring a Property Editor for building TTL properties and a Checking Tool that enables automated formal verification of such properties against a set of traces.

To specify executable models (e.g., models that can be used for simulation), a sublanguage of TTL called LEADSTO (Bosse et al 2007) has been developed. This language enables the modelling of direct temporal dependencies between two state properties in successive states in the following format:

$$ \alpha \twoheadrightarrow_{{{\mathsf{e}},{\mathsf{f}},{\mathsf{g}},{\mathsf{h}}}} \beta $$

Here α and β are state properties in the form of a conjunction of atoms or negations of atoms, and e, f, g, h are non-negative real numbers. This format is interpreted as follows: if state property α holds for a certain time interval with duration g, then after some delay (between e and f) state property β will hold for a certain time interval of length h. Sometimes, when e = f = g = h = 1, a simpler format will be used: α ↠ β.
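The informal reading of this format can be illustrated by the following Python sketch, which executes a single LEADSTO rule over a discrete-time trace. This is only a simplified reading (integer time points, delay fixed at e), not the LEADSTO software environment itself, and the atom names are chosen purely for illustration.

    # Illustrative sketch of executing one LEADSTO rule "alpha ->>_{e,f,g,h} beta".
    def execute_leadsto(trace, alpha, beta, e, f, g, h, horizon):
        """trace: dict mapping a time point to the set of state atoms holding at that time."""
        for t in range(g, horizon):
            # the antecedent alpha must hold throughout an interval of length g ending at t
            if all(alpha in trace.get(t - i, set()) for i in range(1, g + 1)):
                # the consequent beta then holds for an interval of length h after delay e
                for d in range(h):
                    trace.setdefault(t + e + d, set()).add(beta)
        return trace

    # Example: if "observed(hungry)" holds during [0, 2), then after delay 1 "desire_food" holds
    trace = {0: {"observed(hungry)"}, 1: {"observed(hungry)"}}
    execute_leadsto(trace, "observed(hungry)", "desire_food", e=1, f=1, g=2, h=1, horizon=10)
    print(trace)   # time point 3 now contains "desire_food"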

3 The component Own Process Control

For an agent a set of goals is defined, which it strives to achieve. These goals are maintained within the component OPC. Some of these goals concern the human’s well-being (e.g., goal G1 ‘it is required to maintain a satisfactory health condition of the human’); others may be related to the quality of task execution by the human. Each goal is refined into more specific criteria that should hold for the human’s functioning (e.g., G1 can be refined to the criterion ‘the human’s heart rate should be maintained in the range of 60–100 beats per minute’). Based on the criteria expressions, a set of output states (called an output focus) and a set of internal states (called an internal focus) of the human are determined, which are used for establishing the satisfaction of the criteria. For example, for G1 the output focus may include such states as ‘heart rate’ and ‘movement’, and the internal focus may contain ‘pain’ and ‘discomfort’ states.

A cognitive model of the human stored in the agent defines relations between an output state and the internal states that cause the generation of that output state. The latter provide a more in-depth understanding of why certain behaviours (may) occur. In general, using a cognitive model one can determine a minimal specification comprising temporal relations to internal states, which provides necessary and sufficient conditions on internal states to ensure the generation of an output state. An automated procedure to generate such a specification is considered in Sect. 5. If more than one such minimal specification can be generated for an output state, one of them is chosen depending on the agent’s task and criteria. Such a specification is a useful means for the prediction of agent behaviour. That is, if an essential part of a specification becomes satisfied (e.g., when some important internal state(s) hold(s)), the possibility that the corresponding output state will be generated increases significantly. If such an output is not desired, the ambient agent should undertake appropriate actions in a knowledgeable manner, based on an in-depth understanding of the internal states causing the behaviour. Thus, the essential internal states from the specifications for the states in the output focus should be added to the internal focus of the agent. These states are called predictors for an output.

As discussed, for an ambient agent, information on the internal states of the human is important to obtain an in-depth understanding of the human’s behaviour, and for undertaking (intervention) actions in a knowledgeable manner. However, such states in an internal focus cannot be observed directly. Therefore, representation relations should be established between these states and externally observable states of the human (i.e., the representational content should be defined for each internal state in focus). Representation relations are derived from the cognitive model representation (see Sect. 6) and usually have the form of complex temporal expressions over externally observable states. Thus, to detect the occurrence of an internal state, the corresponding representational content should be verified constantly. To support the verification process, monitoring is needed. To this end it is useful to decompose a representational content expression into atomic subformulae that describe particular atomic interaction and world events. The subformulae are determined in a top-down manner, following the nested structure of the overall formula. This decomposition process is specified in the following manner:

$$ \begin{array}{lll} {\mathsf{monitor}}\_{\mathsf{focus}}( {\mathsf{F}} ) & \twoheadrightarrow & {\mathsf{in}}\_{\mathsf{focus}}( {\mathsf{F}} ) \\ {\mathsf{in}}\_{\mathsf{focus}}( {\mathsf{E}} ) \wedge\\ {\mathsf{is}}\_{\mathsf{composed}}\_{\mathsf{of}}( {{\mathsf{E}},{\mathsf{C}},{\mathsf{E}}1,{\mathsf{E}}2} ) & \twoheadrightarrow &{\mathsf{in}}\_{\mathsf{focus}}( {{\mathsf{E}}1} ) \wedge {\mathsf{in}}\_{\mathsf{focus}}( {{\mathsf{E}}2} ) \\ \end{array} $$

Here is_composed_of(E, C, E1, E2) indicates that E is an expression obtained from the subexpressions E1 and E2 by a logical operator C (i.e., and, or, implies, not, forall, exists). At each decomposition step, subexpressions representing events (i.e., belonging to the sort EVENT, which comprises the names of state properties corresponding to all possible interaction and world events) are added to the list of foci. The atomic expressions at the lowest level of this list, augmented by the foci on the atomic states from the output focus, are provided as atomic monitoring foci to World Interaction Management and Agent Interaction Management, which initiate monitoring. Furthermore, the obtained information on the states in the internal focus and their representation relations is provided to the subcomponent Process Analysis of Agent Specific Task, whereas the information on the states in the output focus and on the chosen predictors for these states is provided to the subcomponent Plan Determination of Agent Specific Task.
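The top-down decomposition into atomic monitoring foci can be illustrated by a small recursive sketch. The nested-tuple representation of expressions used here, and the example content, are assumptions made for illustration only.

    # Sketch of the focus decomposition: atoms (strings) are treated as events, composite
    # expressions are tuples such as ("and", e1, e2) or ("not", e).
    def atomic_foci(expression):
        """Recursively decompose an expression and collect its atomic event subformulae."""
        if isinstance(expression, str):          # an atomic interaction or world event
            return {expression}
        operator, *subexpressions = expression   # e.g. ("implies", A, B), ("not", A)
        foci = set()
        for sub in subexpressions:
            foci |= atomic_foci(sub)             # in_focus propagates to the subexpressions
        return foci

    # Example: the (simplified) content of a belief that food is present at position p2
    content = ("and",
               "observed(food_p2)",
               ("not", "observed(no_food_p2)"))
    print(atomic_foci(content))   # {'observed(food_p2)', 'observed(no_food_p2)'}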

4 The component Agent Specific Task

This section considers two subcomponents of Agent Specific Task: Process Analysis, which determines the human’s current (internal and externally observable) states and Plan Determination, which determines if (and which) an intervention in the human’s activities is required.

4.1 Process analysis

The subcomponent Process Analysis receives a stream of atomic information over time (obtained by observation or communication) from the components Maintenance of World Information and Maintenance of Agent Information. Every new piece of information obtained is labelled with a time point as follows:

  (a)

    information about the world:

    $$ {\mathsf{new}}\_{\mathsf{world}}\_{\mathsf{info}}( {{\mathsf{I}},\,{\mathsf{S}}} ) \wedge {\mathsf{current}}\_{\mathsf{time}}( {\mathsf{T}} ) \twoheadrightarrow {\mathsf{new}}\_{\mathsf{world}}\_{\mathsf{info}}( {{\mathsf{holds}}\_{\mathsf{at}}( {{\mathsf{I}},\,{\mathsf{T}}} ),\,{\mathsf{S}}} ) $$
  (b)

    information about another agent:

    $$ {\mathsf{new}}\_{\mathsf{agent}}\_{\mathsf{info}}( {{\mathsf{I}},\,{\mathsf{S}}} ) \wedge {\mathsf{current}}\_{\mathsf{time}}( {\mathsf{T}} ) \twoheadrightarrow {\mathsf{new}}\_{\mathsf{agent}}\_{\mathsf{info}}( {{\mathsf{holds}}\_{\mathsf{at}}( {{\mathsf{I}},\,{\mathsf{T}}} ),\,{\mathsf{S}}} ) $$

Every time new atomic information about an agent or the world is obtained, the representational content expressions for the internal states in focus, expressed as TTL formulae in which this information occurs, are verified automatically on execution histories (or traces) by the TTL Checker tool (Bosse et al 2009). In the following the verification algorithm of this tool is described briefly (for more details see Bosse et al 2009).

The verification algorithm is a backtracking algorithm that systematically considers all possible instantiations of the variables in the TTL formula under verification. However, the same backtracking procedure is not used for all quantified variables in the formula. Backtracking over variables occurring in at-formulae is replaced by backtracking over the values occurring in the corresponding at-atoms in the traces under consideration. Since there is a finite number of such state atoms in the traces, iterating over them will often be more efficient than iterating over the whole range of the variables occurring in the at-atoms. As time plays an important role in TTL formulae, special attention is given to continuous and discrete time range variables. Because of the finite variability property of traces (i.e., only a finite number of state changes occur between any two time points), it is possible to partition the time range into a minimal set of intervals within which all atoms occurring in the property are constant in all traces. Quantification over continuous or discrete time variables is then replaced by quantification over this finite set of time intervals.

The complexity of the algorithm has an upper bound in the order of the product of the sizes of the ranges of all quantified variables. However, if a variable occurs in an at-atom, the contribution of that variable is no longer its range size, but the number of times that the at-atom pattern occurs (with different instantiations) in the trace(s) under consideration. The contribution of an isolated time variable is the number of time intervals into which the traces under consideration are divided.
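The central optimisation, iterating over the occurrences of an at-atom in the traces instead of over the full range of the corresponding variable, can be illustrated by the following sketch. It omits the time-interval partitioning and is not the actual Checking Tool; the trace representation is an assumption for illustration.

    # Sketch: check  E t2 [ t1 > t2 & at(p, t2) ]  against a finite trace by iterating only
    # over the time points at which p actually occurs (the candidate bindings for t2).
    def holds_sometime_before(trace, p, t1):
        occurrences = [t for t, atoms in trace.items() if p in atoms]
        return any(t2 < t1 for t2 in occurrences)

    trace = {2: {"observed(own_low_energy)"}, 7: {"observed(food_p2)"}}
    print(holds_sometime_before(trace, "observed(own_low_energy)", t1=5))   # True
    print(holds_sometime_before(trace, "observed(food_p2)", t1=5))          # False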

If a representational content formula is evaluated to true, then the corresponding internal state holds. Information on internal states that hold is stored as beliefs in the Maintenance of Agent Information component.

4.2 Plan determination

The task of the subcomponent Plan Determination is to ensure that the criteria provided by the Own Process Control component hold. The satisfaction of the criteria is established by checking them against the data about the human’s functioning obtained from the Maintenance of Agent Information and Maintenance of World Information components. To this end, the verification algorithm and the tool described in Sect. 4.1 are used.

However, to prevent the violation of a criterion in a timely manner, information related to the prediction of agent behaviour (i.e., predictors for outputs) can be used. More specifically, if internal states that are predictors for a set of output states O hold, and some performance criterion is violated under the assumption that the outputs in O are generated, then an intervention in the human’s activities is required. The type of intervention may be defined separately for each criterion and for the context in which the criterion is violated.

5 Generating internal specifications for output states

One of the tasks of Own Process Control is the identification of (internal) predictors for outputs. Predictors for a particular output can be identified based on a specification of the human’s internal dynamics that ensures the generation of the output. In general, more than one specification can be identified that is minimal (in terms of the number of internal states and relations between them) yet sufficient for the generation of a particular output. Such specifications are defined based on the human’s cognitive model.

The approach presented in this paper adopts a rather general specification format for (internal) cognitive models that comprises past-present statements. A past-present statement (abbreviated as a pp-statement) is a statement φ of the form B ⇔ H, where the formula H, called the head and denoted by head(φ), is a statement of the form at(p, t) for some time point t and state property p, and B, called the body and denoted by body(φ), is a past statement for t. A past statement for a time point t over state ontology Ont is a temporal statement in the reified temporal predicate logic, such that each time variable s different from t is restricted to the time interval before t: for every time quantifier for a time variable s a restriction of the form t > s is required within the statement. Sometimes B is called the definition of H. Simple examples of bodies B are conjunctions of at(pi, ti) for a number of state properties pi and time points ti before t, for example ti = t − i. However, in other examples the past does not need to be prescribed in such a strict manner. An example of such a specification is

$$ {\mathsf{B}} = \exists {\mathsf{t}}1,\,{\mathsf{t}}2[ {{\mathsf{t}}1 < {\mathsf{t}}\,\& \,{\mathsf{t}}2 < {\mathsf{t}}\,\& \,{\mathsf{t}}1 \ge {\mathsf{t}} - 3\,\& \,{\mathsf{t}}2 \ge {\mathsf{t}} - 7\,\& \,{\mathsf{at}}( {{\mathsf{p}}1,\,{\mathsf{t}}1} )\& \,{\mathsf{at}}( {{\mathsf{p}}2,\,{\mathsf{t}}2} )} ] $$

In this case the exact time points where p1 and p2 should hold have some variability and also the order between them is not fixed.
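For illustration, the example body B can be checked against a finite trace as in the following sketch, assuming integer time points and a simple dictionary representation of traces (both are assumptions for this illustration only).

    # Sketch: evaluate B = E t1, t2 [ t1 < t & t2 < t & t1 >= t-3 & t2 >= t-7 & at(p1,t1) & at(p2,t2) ]
    def body_B_holds(trace, t):
        p1_times = [s for s, atoms in trace.items() if "p1" in atoms]
        p2_times = [s for s, atoms in trace.items() if "p2" in atoms]
        return (any(t - 3 <= t1 < t for t1 in p1_times) and
                any(t - 7 <= t2 < t for t2 in p2_times))

    trace = {4: {"p2"}, 9: {"p1"}}
    print(body_B_holds(trace, t=10))   # True: p1 at 9 (within the last 3), p2 at 4 (within the last 7)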

Below, an automated procedure for the identification of all possible minimal specifications for an output state based on a cognitive model is given. The rough idea underlying the procedure is the following. Suppose for a certain output state property p the pp-statement B ⇔ at(p, t) is available. Moreover, suppose that in B only two atoms of the form at(p1, t1) and at(p2, t2) with internal states p1 and p2 occur, whereas as part of the cognitive model the specifications B1 ⇔ at(p1, t1) and B2 ⇔ at(p2, t2) are also available. Then, within B these atoms can be replaced (by substitution) by the formulae B1 and B2. Thus, at(p, t) may be related by equivalence to four specifications:

$$ \begin{array}{ll}{{\mathsf{B}} \Leftrightarrow {\mathsf{at}}( {{\mathsf{p}},\,{\mathsf{t}}} )} & {{\mathsf{B}}[ {{\mathsf{B}}2/{\mathsf{at}}( {{\mathsf{p}}2,\,{\mathsf{t}}2} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{p}},\,{\mathsf{t}}} )} \\{{\mathsf{B}}[ {{\mathsf{B}}1/{\mathsf{at}}( {{\mathsf{p}}1,\,{\mathsf{t}}1} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{p}},\,{\mathsf{t}}} )} & {{\mathsf{B}}[ {{\mathsf{B}}1/{\mathsf{at}}( {{\mathsf{p}}1,\,{\mathsf{t}}1} ),\,{\mathsf{B}}2/{\mathsf{at}}( {{\mathsf{p}}2,\,{\mathsf{t}}2} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{p}},\,{\mathsf{t}}} )} \\\end{array} $$

Here for any formula C the expression C[x/y] denotes the formula C transformed by substituting x for y.
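The generation of these four candidate specifications by substitution can be sketched as follows. Representing formulae as plain strings is a simplification for illustration only and is not how the implemented procedure represents them.

    # Sketch: generate all candidate bodies obtained by replacing any subset of internal-state
    # atoms in the body B by their definitions from the cognitive model.
    from itertools import combinations

    def candidate_bodies(body, definitions):
        """definitions: dict mapping an internal atom (e.g. 'at(p1,t1)') to its defining body."""
        atoms = list(definitions)
        candidates = []
        for r in range(len(atoms) + 1):
            for chosen in combinations(atoms, r):      # every subset of atoms to replace
                b = body
                for atom in chosen:
                    b = b.replace(atom, "(" + definitions[atom] + ")")
                candidates.append(b)
        return candidates

    body = "at(p1,t1) & at(p2,t2)"
    definitions = {"at(p1,t1)": "B1", "at(p2,t2)": "B2"}
    for c in candidate_bodies(body, definitions):
        print(c)   # four candidate bodies: with neither, either, or both atoms replaced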

figure a

For each generated specification the following measures can be calculated:

  (1)

    The measure of desirability, indicating how desirable the human’s state described by the generated specification is at a given time point. The measure ranges from −1 (a highly undesirable state) to 1 (a highly desirable state).

  (2)

    The minimum and maximum time before the generation of the output state(s). This measure is critical for timely intervention in the human’s activities.

These measures serve as heuristics for choosing one of the generated specifications. To facilitate the choice, constraints on the measures may be defined, which ensure that an intervention occurs only when a considerable degree of (un)desirability of the human’s state is determined and the minimum time before the (un)desirable output(s) is above some acceptable threshold. To calculate measure (1), a degree of desirability is associated with each output state of the cognitive model. Then, it is determined which output states from the cognitive specification can potentially be generated, given that the bodies of the formulae from the generated specification are evaluated to TRUE. This is done by executing the cognitive specification with body(φi) = TRUE for all φi from the generated specification. Then, the desirability of a candidate specification is calculated as the average over the degrees of desirability of the identified output states that can potentially be generated. Measure (2) can be calculated when numerical timing relations are defined in the properties of a cognitive specification. After a specification is chosen, a set of predictor states from the specification for the output states in focus can be identified. When statistical information in the form of past traces of human behaviour is available, the set of predictors is determined by identifying for each candidate two sets: the set of traces S in which the outputs in focus were generated, and the set T ⊆ S in which the candidate set of predictors was also generated (a sketch of this computation is given below). The closer the ratio |T|/|S| is to 1, the more reliable the candidate set of predictors is for the output(s) in focus.
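The reliability ratio |T|/|S| can be computed from past traces as in the following sketch; the trace representation and the abstract state names are assumptions made for illustration.

    # Sketch: reliability of a candidate set of predictors for the outputs in focus.
    def predictor_reliability(traces, outputs_in_focus, candidate_predictors):
        S = [tr for tr in traces if outputs_in_focus <= set().union(*tr.values())]
        T = [tr for tr in S if candidate_predictors <= set().union(*tr.values())]
        return len(T) / len(S) if S else 0.0

    traces = [
        {1: {"p1"}, 2: {"o"}},   # predictor p1 and output o both occur
        {1: {"p2"}, 2: {"o"}},   # output o occurs without the candidate predictor
        {1: {"p1"}},             # output o does not occur at all
    ]
    print(predictor_reliability(traces, {"o"}, {"p1"}))   # 0.5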

6 Representation relations

The component Own Process Control is responsible for the identification of representation relations for cognitive states specified in the represented cognitive model for the human. A representation relation for an internal state property p relates the occurrence of p to a specification Φ that comprises a set of state properties and temporal (or causal) relations between them. In such a case it is said that p represents Φ, or Φ describes representational content of p. This section presents an automated approach to identify representation relations for cognitive states from a cognitive model representation.

The representational content considered backward in time is specified by a history (i.e., a specification that comprises temporal (or causal) relations on past states) that relates to the creation of the cognitive state in which p holds. In the literature on Philosophy of Mind different approaches to defining representation relations have been put forward; for example, see (Kim 1996; Bickhard 1993). For example, according to the classical causal/correlation approach (Kim 1996), the representational content of an internal state property is given by a one-to-one mapping to an external state property. The application of this approach is limited to simple types of behaviour (e.g., purely reactive behaviour). In cases when an internal property represents a more complex temporal combination of state properties, other approaches have to be used. For example, the temporal-interactivist approach (cf., Bickhard 1993; Jonker and Treur 2003) allows defining representation relations by referring to multiple (partially) temporally ordered interaction state properties; i.e., input (sensor) and output (effector) state properties over time.

An application of the temporal-interactivist approach is demonstrated in the context of the following example of animal behaviour. Initially, the animal observes that it has low energy (e.g., it is hungry). The animal is placed in front of a transparent screen, behind which a piece of food is put afterwards. The animal is able to observe the position of the food and of the screen, after which a cup is placed over the food. After some time the screen is raised and the animal chooses to go to the position at which food is present (but invisible). The graphical representation of the cognitive (Belief-Desire-Intention, BDI; see Rao and Georgeff 1991) model that produces such behaviour is given in Fig. 2. Here d is the desire to have food, b is the belief that food is present at some position p2, and i is the intention to go to that position.

Fig. 2 Belief-Desire-Intention (BDI) model for motivation-based behaviour

The cognitive model from the example is formalised by the following properties in past-present format:

  • IPA1: Desire d generation

    At any point in time the (persistent) internal state property d holds iff

    at some time point in the past the agent observed its low energy. Formally:

    $$ \exists {\mathsf{t}}2[ {{\mathsf{t}}1 > {\mathsf{t}}2\,\& \,{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{own}}\_{\mathsf{low}}\_{\mathsf{energy}}} ),{\mathsf{t}}2} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{d}},\,{\mathsf{t}}1} ) $$
  • IPA2: Intention i generation:

    At any point in time the (persistent) internal state property i holds iff

    at some time point in the past the internal state property d was true,

    and the internal state property b was true. Formally:

    $$ \exists {\mathsf{t}}6\,{\mathsf{t}}5 > {\mathsf{t}}6\,\& \,{\mathsf{at}}( {{\mathsf{d}},\,{\mathsf{t}}6} )\,\& \,{\mathsf{at}}( {{\mathsf{b}},\,{\mathsf{t}}6} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}},\,{\mathsf{t}}5} ) $$
  • IPA3: Action goto p2 generation:

    At any point in time the agent goes to p2 iff at some time point in the past the internal state property i was true and the agent observed the absence of the screen. Formally:

    $$ \exists {\mathsf{t}}8\,{\mathsf{t}}7 > {\mathsf{t}}8\,\& \,{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{no}}\_{\mathsf{screen}}} ),{\mathsf{t}}8} )\,\& \,{\mathsf{at}}( {{\mathsf{i}},\,{\mathsf{t}}8} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{performing}}\_{\mathsf{action}}( {{\mathsf{goto}}\_{\mathsf{p}}2} ),{\mathsf{t}}7} ) $$
  • IPA4: Belief b generation:

    At any point in time internal state property b holds iff at some time point in the past the agent observed that food is present at position p2, and since then did not observe the absence of food. Formally:

    $$ \begin{array}{l} \exists {\mathsf{t}}10\;{\mathsf{t}}9 > {\mathsf{t}}10\;\& \;{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{food}}\_{\mathsf{p}}2} ),\;{\mathsf{t}}10} )\\\& \;\forall {\mathsf{t}}11\;{\mathsf{t}}11 > {\mathsf{t}}10\;\& \;{\mathsf{t}}11 < {\mathsf{t}}9\;{\mathsf{not}} ( {{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{no}}\_{\mathsf{food}}\_{\mathsf{p}}2} ),\;{\mathsf{t}}11} )} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{b}},\;{\mathsf{t}}9} ) \\ \end{array} $$

Furthermore, a cognitive specification is assumed to be stratified (Apt et al. 1988), which means that there is a partition of the specification Π = Π1 ∪ … ∪ Πn into disjoint subsets such that the following condition holds for i > 1: if a subformula at(φ, t) occurs in the body of a statement in Πi, then it has a definition within ∪j<i Πj.
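A stratification of a pp-specification can be computed by repeatedly placing those statements whose body atoms are already defined in lower strata, as in the following sketch. The dependency representation used here (head atom plus the set of defined atoms used in its body) is an assumption made for illustration.

    # Sketch: assign each defined atom the lowest stratum consistent with its body dependencies.
    def stratify(statements):
        """statements: dict head_atom -> set of head atoms of other statements used in its body."""
        stratum, remaining = {}, dict(statements)
        level = 1
        while remaining:
            ready = [h for h, deps in remaining.items()
                     if all(d in stratum for d in deps)]   # all dependencies already placed
            if not ready:
                raise ValueError("specification is not stratified")
            for h in ready:
                stratum[h] = level
                del remaining[h]
            level += 1
        return stratum

    # Example: the BDI model above (IPA1 defines d, IPA4 defines b, IPA2 uses d and b, IPA3 uses i)
    print(stratify({"d": set(), "b": set(), "i": {"d", "b"}, "goto_p2": {"i"}}))
    # {'d': 1, 'b': 1, 'i': 2, 'goto_p2': 3}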

To automate the process of representation relation identification based on this idea, the following algorithm has been developed:

figure b

In Step 3, subformulae of each formula of the highest stratum n of X’ are replaced by their definitions, provided in lower strata. Then, the formulae of stratum n − 1 used for the replacement are eliminated from X’. As a result of such replacement and elimination, X’ contains n − 1 strata (Step 4). Steps 3 and 4 are performed until X’ contains only one stratum. In this case X’ consists of a formula φ defining the representational content for at(s, t), i.e., head(φ) is at(s, t) and body(φ) is a formula expressed over interaction states and (temporal) relations between them.

In the following it is shown how this algorithm is applied for identifying the representational content for state i from the example. By performing Step 1 the specification of the cognitive model given above is automatically stratified as follows: stratum 1: {IPA1, IPA4}; stratum 2: {IPA2}; stratum 3: {IPA3}.

By Step 2 the property IPA3 is eliminated as unnecessary for determining the representational content of i.

Further, in Step 3 we proceed with the property IPA2 of the highest stratum that defines the internal state i.

$$ \exists {\mathsf{t}}6\,{\mathsf{t}}5 > {\mathsf{t}}6\,\&\,{\mathsf{at}}( {{\mathsf{d}},\,{\mathsf{t}}6} )\,\&\,{\mathsf{ at}}( {{\mathsf{b}},\,{\mathsf{t}}6} )\Leftrightarrow {\mathsf{at}}( {{\mathsf{i}},\,{\mathsf{t}}5}) $$

By Step 3 both the d and b state properties are replaced by their definitions, with renaming of the temporal variables in stratum 1. The property IPA2 is replaced by the following formula:

$$ \exists {\mathsf{t}}6\,{\mathsf{t}}5 > {\mathsf{t}}6\,\& \,\exists {\mathsf{t}}2\,{\mathsf{t}}6 > {\mathsf{t}}2\,\& \,{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{own}}\_{\mathsf{low}}\_{\mathsf{energy}}} ),\,{\mathsf{t}}2} ) \, \& \,\exists {\mathsf{t}}10\,{\mathsf{t}}6 > {\mathsf{t}}10\,\& \,{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{food}}\_{\mathsf{p}}2} ),\,{\mathsf{t}}10} ) \, \& \,\forall {\mathsf{t}}11\,{\mathsf{t}}11 > {\mathsf{t}}10\,\& \,{\mathsf{t}}11 < {\mathsf{t}}6\,\& \,{\mathsf{not}}( {{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{no}}\_{\mathsf{food}}\_{\mathsf{p}}2} ),\,{\mathsf{t}}11} )} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}},\,{\mathsf{t}}5} ) $$

Further, both formulae IPA1 and IPA4 are removed and the property resulting from the replacement is added to stratum 1, which becomes the only stratum in the specification. The obtained formula is the representational content for the state i that occurs at any time point t5.

The algorithm has been implemented in Java. Worst case time and representation complexity of the algorithm are satisfactory, as will be briefly discussed. The worst case time complexity of the algorithm is estimated as follows. The worst case time complexity for Step 1 is O(|X|²/2). The time complexity of Step 2 is O(|X|). The worst case time complexity for Steps 3–5 is calculated as:

$$ {\mathsf{O}}(|{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}})| \cdot |{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}} - 1)|) + {\mathsf{O}}(|{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}})| \cdot |{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}} - 2)|) + \cdots + {\mathsf{O}}(|{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}})| \cdot |{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},1)|) = {\mathsf{O}}(|{\mathsf{STRATUM}}({\mathsf{X}}^{\prime},{\mathsf{n}})| \cdot |{\mathsf{X}}^{\prime}|) . $$

Thus, the overall time complexity of the algorithm for the worst case is O(|X|²).

As an example, consider a model based on the theory of consciousness by Damasio (2000). In particular, the notions of ‘emotion’, ‘feeling’, and ‘core consciousness’ or ‘feeling a feeling’ are addressed. Damasio (2000) describes an emotion as a neural object (or internal emotional state): an (unconscious) neural reaction to a certain stimulus, realised by a complex ensemble of neural activations in the brain. As the neural activations involved are often preparations for (body) actions, as a consequence of an internal emotional state the body will be modified into an externally observable state. Next, a feeling is described as the (still unconscious) sensing of this body state. Finally, core consciousness or feeling a feeling is what emerges when the organism detects that its representation of its own body state (the proto-self) has been changed by the occurrence of the stimulus: it becomes (consciously) aware of the feeling. In Fig. 3 a cognitive model for this process is depicted. Here s0 is an internal representation of the situation that no stimulus and no changed body state are sensed, s1 is an internal representation of the sensed stimulus without a sensed changed body state yet, and s2 is an internal representation of both the sensed stimulus and the sensed changed body state (which is the core consciousness state).

Fig. 3 Cognitive model based on the theory of core consciousness by Damasio (2000)

The cognitive model for this example comprises the following properties expressed in past-present format:

  • LP1: Generation of the sensory representation for music

    At any point in time the sensory representation for music occurs iff at some time point in the past the sensor state for music occurred. Formally:

    $$ \exists {\mathsf{t}}2\,{\mathsf{t}}1 > {\mathsf{t}}2\,\& \,{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{music}},\,{\mathsf{t}}2} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{music}},\,{\mathsf{t}}1} ) $$
  • LP2: Generation of the preparation

    At any point in time the preparation p occurs iff at some time point in the past the sensory representation for music occurred. Formally:

    $$ \exists {\mathsf{t}}4\,{\mathsf{t}}3 > {\mathsf{t}}4\,\& \,{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{music}},{\mathsf{ t}}4} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{p}},{\mathsf{ t}}3} ) $$
  • LP3: Generation of the body state

    At any point in time the body state S occurs iff at some time point in the past the preparation p occurred. Formally:

    $$ \exists {\mathsf{t}}6\,{\mathsf{t}}5 > {\mathsf{t}}6\,\& \,{\mathsf{at}}( {{\mathsf{p}},\,{\mathsf{t}}6} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{S}},\,{\mathsf{t}}5} ) $$
  • LP4: Generation of the sensor state

    At any point in time the sensor state for S occurs iff at some time point in the past the body state S occurred. Formally:

    $$ \exists {\mathsf{t}}8\,{\mathsf{t}}7 > {\mathsf{t}}8\,\& \,{\mathsf{at}}( {{\mathsf{S}},{\mathsf{ t}}8} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{for}}( {\mathsf{S}} ),{\mathsf{ t}}7} ) $$
  • LP5: Generation of the sensory representation for S

    At any point in time the sensory representation for S occurs iff at some time point in the past the sensor state for S occurred.

    Formally:

    $$ \exists {\mathsf{t}}10\,{\mathsf{t}}9 > {\mathsf{t}}10\,\& \,{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{for}}( {\mathsf{S}} ),\,{\mathsf{t}}10} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{S}},\,{\mathsf{t}}9} ) $$
  • LP6: Generation of s0

    At any point in time s0 occurs iff at some time point in the past no sensory representation for music and no sensory representation for S occurred. Formally:

    $$ \exists {\mathsf{t}}12\,{\mathsf{t}}11 > {\mathsf{t}}12\,\& \,{\mathsf{not}}( {{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{music}},{\mathsf{ t}}12} )} )\,\& \,{\mathsf{not}}( {{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{S}},{\mathsf{ t}}12} )} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{s}}0,\,{\mathsf{ t}}11} ) $$
  • LP7: Generation of s1

    At any point in time s1 occurs iff

    at some time point in the past the sensory representation for music and no sensory representation for S and s0 occurred. Formally:

    $$ \exists {\mathsf{t}}14\,{\mathsf{t}}13 > {\mathsf{t}}14\,\& \,{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{music}},\,{\mathsf{ t}}14} )\,\& \,{\mathsf{at}}( {{\mathsf{s}}0,\,{\mathsf{ t}}14} )\,\& \,{\mathsf{not}}( {{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{S}},\,{\mathsf{ t}}14} )} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{s}}1,\,{\mathsf{ t}}13} ) $$
  • LP8: Generation of s2

    At any point in time s2 occurs iff

    at some time point in the past the sensory representation for music and the sensory representation for S and s1 occurred. Formally:

    $$ \exists {\mathsf{t}}16\,{\mathsf{ t}}15 > {\mathsf{t}}16\,\& \,{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{music}},{\mathsf{ t}}16} )\,\& \,{\mathsf{at}}( {{\mathsf{s}}1,\,{\mathsf{ t}}16} )\,\& \,{\mathsf{at}}( {{\mathsf{sr}}\_{\mathsf{S}},\,{\mathsf{ t}}16} ) \Leftrightarrow {\mathsf{at}}( {{\mathsf{s}}2,\,{\mathsf{ t}}15} ) $$

The generated representation relation for state s2 is:

$$ \begin{aligned} &\exists {\mathsf{t}}16\,[{\mathsf{t}}15 > {\mathsf{t}}16 \\ &\quad \& \;\exists {\mathsf{t}}2\,[{\mathsf{t}}16 > {\mathsf{t}}2\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{music}},\,{\mathsf{t}}2} )] \\ &\quad \& \;\exists {\mathsf{t}}10\,[{\mathsf{t}}16 > {\mathsf{t}}10\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{for}}( {\mathsf{S}} ),\,{\mathsf{t}}10} )] \\ &\quad \& \;\exists {\mathsf{t}}14\,[{\mathsf{t}}16 > {\mathsf{t}}14 \\ &\qquad \& \;\exists {\mathsf{t}}2^{\prime}\,[{\mathsf{t}}14 > {\mathsf{t}}2^{\prime}\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{music}},\,{\mathsf{t}}2^{\prime}} )] \\ &\qquad \& \;\neg\exists {\mathsf{t}}10^{\prime}\,[{\mathsf{t}}14 > {\mathsf{t}}10^{\prime}\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{for}}( {\mathsf{S}} ),\,{\mathsf{t}}10^{\prime}} )] \\ &\qquad \& \;\exists {\mathsf{t}}12\,[{\mathsf{t}}14 > {\mathsf{t}}12 \\ &\qquad\quad \& \;\neg\exists {\mathsf{t}}2^{\prime\prime}\,[{\mathsf{t}}12 > {\mathsf{t}}2^{\prime\prime}\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{music}},\,{\mathsf{t}}2^{\prime\prime}} )] \\ &\qquad\quad \& \;\neg\exists {\mathsf{t}}10^{\prime\prime}\,[{\mathsf{t}}12 > {\mathsf{t}}10^{\prime\prime}\;\&\;{\mathsf{at}}( {{\mathsf{ss}}\_{\mathsf{for}}( {\mathsf{S}} ),\,{\mathsf{t}}10^{\prime\prime}} )]]]] \\ &\Leftrightarrow {\mathsf{at}}( {{\mathsf{s}}2,\,{\mathsf{t}}15} ) \end{aligned} $$

Note that to verify whether the body of this specification is true at some point in time, an agent would need to monitor information on the atomic monitoring foci (and the time points for which they hold) that occur in this body:

  • ss_music (occurring in the body at time points t2, t2′ and t2″)

  • ss_for(S) (occurring in the body at time points t10, t10′ and t10″)

This will be illustrated in more detail in the examples below.

7 Examples

In this section two examples are considered: In Sect. 7.1 it is described how the proposed ambient agent architecture was used in an example to support an elderly person in food and medicine intake. Then, an application of the proposed ambient agent architecture in the context of the instruction process is considered in Sect. 7.2.

7.1 Ambient agent model to support an elderly person

The following setting is considered. In normal circumstances the interval between two subsequent food intakes by the human during the day is known to be between 2 and 5 h. When the human is hungry, she goes to the refrigerator and gets and consumes the food she prefers. Sometimes the human feels internal discomfort, which can be soothed by taking medicine X. The box with the medicine lies in a cupboard. After the medicine is taken, food intake should be avoided for at least 2 h. The agent has the goal to maintain a satisfactory health condition of the human. This goal is refined in Own Process Control into two more specific criteria:

  1.

    food is consumed every 5 h (at the latest) during the day;

  2.

    after the medicine is taken, no food consumption occurs during the following 2 h.

As the first two criteria cannot both be satisfied when the human takes the medicine more than 3 h after the last food intake, a third criterion is formulated:

  3.

    after 3 h from the last food intake no medicine intake occurs.

Thus, the output focus determined by Own Process Control consists of the states performed(eat food) and performed(medicine intake). To determine internal predictors for these states, a cognitive model of the human based on the BDI architecture is used (see Fig. 4).

Fig. 4 BDI-based cognitive model for food and medicine intake by the human

In this model the beliefs are based on observations. For example, based on the observation that food is taken, the belief b1 that food is taken is created. The desire and intention to have food are denoted by d1 and i1, respectively, in the model. The desire and intention to take medicine are denoted by d2 and i2, respectively.

The cognitive model from the example was formalised by the following properties in past-present format:

  • IP1: General belief generation property

    At any point in time a (persistent) belief state b about c holds iff

    at some time point in the past the human observed c. Formally:

    $$ \exists {\mathsf{t}}2\,[ {{\mathsf{t}}1 > {\mathsf{t}}2\,\& \,{\mathsf{at}}({{\mathsf{observed}}( {\mathsf{c}}),\,{\mathsf{t}}2})}] \Leftrightarrow {\mathsf{at}}( {{\mathsf{b}},\,{\mathsf{ t}}1} ) $$
  • IP2: Desire d1 generation

    At any point in time the internal state property d1 holds iff

    at some time point in the past b1 held. Formally:

    $$ \exists {\mathsf{t}}4\,[ {{\mathsf{t}}3 > {\mathsf{t}}4\,\& \,{\mathsf{at}}( {{\mathsf{b}}1,\,{\mathsf{ t}}4} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{d}}1,\,{\mathsf{ t}}3} ) $$
  • IP3: Intention i1 generation

    At any point in time the internal state property i1 holds iff

    at some time point in the past b2 and d1 held. Formally:

    $$ \exists {\mathsf{t}}6\,[ {{\mathsf{t}}5 > {\mathsf{t}}6\,\& \,{\mathsf{at}}( {{\mathsf{d}}1,\,{\mathsf{t}}6} )\,\& \,{\mathsf{at}}( {{\mathsf{b}}2,\,{\mathsf{t}}6} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}}1,\,{\mathsf{t}}5} ) $$
  • IP4: Action eat food generation

    At any point in time the action eat food is performed iff

    at some time point in the past both b3 and i1 held. Formally:

    $$ \exists {\mathsf{t}}8\,[ {{\mathsf{t}}7 > {\mathsf{t}}8\,\& \,{\mathsf{at}}( {{\mathsf{i}}1,\,{\mathsf{t}}8} )\,\& \,{\mathsf{at}}( {{\mathsf{b}}3,\,{\mathsf{t}}8} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{performed}}( {{\mathsf{eat}}\,{\mathsf{food}}} ),{\mathsf{ t}}7} ) $$
  • IP5: Desire d2 generation

    At any point in time the internal state property d2 holds iff

    at some time point in the past b4 held. Formally:

    $$ \exists {\mathsf{t}}10[ {{\mathsf{t}}9 > {\mathsf{t}}10\,\& \,{\mathsf{at}}( {{\mathsf{b}}4,\,{\mathsf{ t}}10} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{d}}2,\,{\mathsf{ t}}9} ) $$
  • IP6: Intention i2 generation

    At any point in time the internal state property i2 holds iff

    at some time point in the past b5 and d2 held. Formally:

    $$ \exists {\mathsf{t}}12\,[ {{\mathsf{t}}11 > {\mathsf{t}}12\,\& \,{\mathsf{at}}( {{\mathsf{d}}2,\,{\mathsf{ t}}12} )\,\& \,{\mathsf{at}}( {{\mathsf{b}}5,\,{\mathsf{t}}12} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}}2,\,{\mathsf{ t}}11} ) $$
  • IP7: Action medicine intake generation

    At any point in time the action medicine intake is performed iff

    at some time point in the past both b6 and i2 held. Formally:

    $$ \exists {\mathsf{t}}14[ {{\mathsf{t}}13 > {\mathsf{t}}14\,\& \,{\mathsf{at}}( {{\mathsf{i}}2,\,{\mathsf{ t}}14} )\,\& \,{\mathsf{at}}( {{\mathsf{b}}6,\,{\mathsf{ t}}14} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{performed}}( {{\mathsf{medicine\, intake}}} ),\,{\mathsf{ t}}13} ) $$

From the automatically generated specifications that ensure the creation of the state performed(eat food), the one expressed by property IP4 is chosen. This specification has the highest confidence degree of producing the output, which equals the undesirability measure of the state performed(eat food) when this state is undesirable. It is assumed that the time interval between t8 and t7 in IP4 is sufficient for the agent’s intervention. The predictor state from the chosen specification is i1, as in most cases it is generated earlier than the state b3. Thus, i1 is included in the internal focus. By a similar line of reasoning, the specification expressed by property IP7 is chosen, in which i2 is the predictor state included in the internal focus. Thus, the internal focus is the set {i1, i2}. The identified predictors are provided by the Own Process Control component to the Agent Specific Task component.

To be able to monitor the states in the internal focus, the representational content is determined automatically for each of these states by the Own Process Control component:

$$ \begin{aligned}&\exists {\mathsf{t}}6\,[{\mathsf{t}}5 > {\mathsf{t}}6\;\& \;\exists {\mathsf{t}}4\,[{\mathsf{t}}6 > {\mathsf{t}}4\;\& \;\exists {\mathsf{t}}2\,[ {{\mathsf{t}}4 > {\mathsf{t}}2\;\& \;{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{food}}\_{\mathsf{not}}\_{\mathsf{eaten}}\_{\mathsf{more}}\_{\mathsf{than}}\_2\,{\mathsf{h}}} ),{\mathsf{ t}}2} )} ]] \\ & \& \;\exists {\mathsf{t}}16\,[ {{\mathsf{t}}6 > {\mathsf{t}}16\;\& \;{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{own}}\_{\mathsf{position}}\_{\mathsf{refrigerator}}} ),{\mathsf{ t}}16} )} ]] \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}}1,{\mathsf{ t}}5} ) \\ & \exists {\mathsf{t}}12\,[ {{\mathsf{t}}11 > {\mathsf{t}}12\;\& \;\exists {\mathsf{t}}20\,[ {{\mathsf{t}}12 > {\mathsf{t}}20\;\& \;{\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{own}}\_{\mathsf{position}}\_{\mathsf{cupboard}}} ),{\mathsf{ t}}20} )} ]] \Leftrightarrow {\mathsf{at}}( {{\mathsf{i}}2,{\mathsf{t}}11} ) \end{aligned} $$

The identified representation relations are provided by the Own Process Control component to the Agent Specific Task component. Furthermore, the Own Process Control component derives automatically from the identified representational content the atomic monitoring foci, which, together with the output focus, are provided to the World Interaction Management and Agent Interaction Management components:

  • observed(food_not_eaten_more_than_2 h)

  • observed(own_position_refrigerator)

  • observed(own_position_cupboard)

The observation results for these atomic monitoring foci are stored as beliefs of the agent by these interaction management components. The subcomponent Process Analysis of the Agent Specific Task component constantly monitors the belief base of the agent. As soon as a belief about an event that is among the atomic monitoring foci occurs, the subcomponent initiates automated verification of the corresponding representational content property on the history of the events in focus that have occurred so far (stored as beliefs on the atomic foci). For this case such a history (or trace) was created using the LEADSTO simulation tool (Bosse et al 2007). A part of the trace used is shown in Fig. 5. When the representational content property is established to hold, a belief of the agent is created that the human is in the internal state with this content, which is stored in Maintenance of Agent Information.
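The event-driven behaviour of Process Analysis described above can be sketched as follows. The function and belief names are assumptions made for illustration, and the content check for i2 is deliberately simplified to a single observation.

    # Sketch: when a belief on an atomic monitoring focus arrives, re-check the representational
    # content of the internal states in focus and store a belief for each state that holds.
    def process_new_belief(belief_atom, monitoring_foci, content_checks, trace, agent_beliefs):
        """content_checks: dict internal_state -> function(trace) -> bool."""
        if belief_atom not in monitoring_foci:
            return
        for state, check in content_checks.items():
            if check(trace):                                   # representational content verified
                agent_beliefs.add(f"internal_state({state})")  # stored in Maintenance of Agent Information

    # Example: i2's content is (simplified) "cupboard position observed at some past time point"
    trace = {10: {"observed(own_position_cupboard)"}}
    beliefs = set()
    process_new_belief("observed(own_position_cupboard)",
                       {"observed(own_position_cupboard)"},
                       {"i2": lambda tr: any("observed(own_position_cupboard)" in a for a in tr.values())},
                       trace, beliefs)
    print(beliefs)   # {'internal_state(i2)'}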

Fig. 5 A part of the history (trace) of events from the case study

The subcomponent Plan Determination of the Agent Specific Task component constantly monitors the belief base of the agent. As soon as the occurrence of the predictor states is established (i1 and i2 in this case), the violation of the criteria is determined under the condition that the predicted outputs hold. Furthermore, a necessary intervention is identified. To this end the following intervention rules are specified to prevent the violation of the considered criteria:

  (1)

    If no belief of the agent exists that the human consumed food during the last 5 h, then inform the human about the necessary food intake. Formally:

$$ \begin{aligned} &\forall {\mathsf{t}}1\,{\mathsf{ current}}\_{\mathsf{time(t}}1{\mathsf{)}}\,\&\, \neg \exists {\mathsf{t}}2\, {\mathsf{t}}1{-}300 \le {\mathsf{t}}2 < {\mathsf{t}}1\\\,{\mathsf{ belief}}( {{\mathsf{holds}}\_{\mathsf{at}}( {{\mathsf{performed}}( {{\mathsf{eat \, food}}} ),{\mathsf{ t}}2} ),{\mathsf{ pos}}} ) \\ &\Rightarrow {\mathsf{to}}\_{\mathsf{be}}\_{\mathsf{communicated}}\_{\mathsf{to}}( \textquoteleft{{\mathsf{Meal\, time}}\textquoteright,{\mathsf{ pos}},{\mathsf{ Human}}} )\end{aligned} $$
  (2)

    If a belief of the agent exists that the human took medicine X less than 2 h ago (at time point t2, in minutes) and the existence of the predictor i1 is established, then inform the human that she still needs to wait (120 − t2) more minutes after taking the medicine. Formally:

$$ \begin{aligned}&\forall {\mathsf{t}}1\,{\mathsf{ current}}\_{\mathsf{time}}({\mathsf{t}}1)\,\& \,\exists {\mathsf{t}}2\,{\mathsf{t}}1{-}120 < {\mathsf{t}}2\\{\mathsf{belief}}({\mathsf{holds}}\_{\mathsf{at}}({\mathsf{performed}}({\mathsf{medicine\, intake}}),\,{\mathsf{t}}2),\,{\mathsf{pos}})\,\& \,{\mathsf{ at}}( {{\mathsf{i}}1,{\mathsf{ t}}1} ) \\&\Rightarrow {\mathsf{to}}\_{\mathsf{be}}\_{\mathsf{communicated}}\_{\mathsf{to}}(\textquoteleft{\mathsf{Please\, wait}}\,120 - {\mathsf{t}}2\,{\mathsf{ min\, more}}\textquoteright,\,{\mathsf{pos}},\,{\mathsf{Human)}} \end{aligned} $$
  (3)

    If no belief exists that the human consumed food during the last 3 h and the existence of the predictor i2 is established, inform the human that she had better eat first. Formally:

$$ \begin{aligned} &\forall {\mathsf{t}}1\,{\mathsf{ current}}\_{\mathsf{time}}({\mathsf{t}}1)\,\& \,\neg\exists {\mathsf{t}}2\,{\mathsf{t}}1 - 180 \le {\mathsf{t}}2 < {\mathsf{t}}1 \\{\mathsf{ belief}}({\mathsf{holds}}\_{\mathsf{at}}({\mathsf{performed}}({\mathsf{eat\, food}}),\,{\mathsf{t}}2),{\mathsf{ pos}})\,\& \,{\mathsf{at}}( {{\mathsf{i}}2,{\mathsf{ t}}1} ) \hfill \\ &\Rightarrow {\mathsf{to}}\_{\mathsf{be}}\_{\mathsf{communicated}}\_{\mathsf{to}}(\textquoteleft{\mathsf{Please\,eat\ first}}\textquoteright ,\,{\mathsf{ pos}},\,{\mathsf{ Human)}} \end{aligned} $$

The simulation trace in Fig. 5 illustrates the situation in which criterion 3 is violated and intervention rule (3) is executed.
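
To illustrate how such an intervention rule can be operationalised against the agent's belief base, the following minimal Python sketch checks rule (1). The representation of beliefs as (state, time) pairs and the function names are assumptions made only for this illustration, not part of the presented architecture.

```python
# Minimal sketch of checking intervention rule (1) against a belief base.
# Beliefs are assumed to be stored as (state, time_in_minutes) tuples; this
# representation and the function names are illustrative assumptions.

def holds_eat_food(beliefs, t2):
    """True if the agent believes the human performed 'eat food' at time t2."""
    return ("performed(eat food)", t2) in beliefs

def rule_1(beliefs, current_time):
    """If no food intake is believed during the last 300 min, communicate 'Meal time'."""
    recently_ate = any(
        holds_eat_food(beliefs, t2)
        for (_, t2) in beliefs
        if current_time - 300 <= t2 < current_time
    )
    if not recently_ate:
        return ("to_be_communicated_to", "Meal time", "pos", "Human")
    return None

# Example usage with a toy belief base (times in minutes):
beliefs = {("performed(medicine intake)", 950), ("performed(eat food)", 600)}
print(rule_1(beliefs, current_time=1000))  # last meal 400 min ago -> communicate
```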

7.2 Ambient agent model to facilitate the instruction process

This example is inspired by the ambient intelligence application described in Marreiros et al. (2010). Normally an introvert child does not take the initiative to answer a teacher's question even when the child knows the correct answer; this behaviour stems from a suppressed desire to speak up in a social context. Because of this the child misses positive feedback from the teacher, which may lead to further social withdrawal and frustration. To improve this situation, the child may be provided with a personal ambient device that keeps track of the child's activities, learning progress, and quality of knowledge, and supports the child in overcoming the consequences of introversion. In particular, this device may provide external motivation to the child to answer the teacher's question when the device considers that the child may know the correct answer.

The ambient device has the goal to ensure that the child is happy and to prevent the answer-avoiding behaviour (i.e., the output focus consists of the states child_happy and action(avoid_answering)). Similarly to the first example from Sect. 7.1, to determine internal predictors for the output states a cognitive model of the human based on the BDI architecture is used (see Fig. 6).

Fig. 6 BDI-based cognitive model for addressing a question of the teacher by an introvert child

The beliefs are created based on observations. The observation that a question has been raised by the teacher creates belief b3. The observation that the material related to the question has been learned successfully, together with the belief that the question has been raised, leads to the belief that the correct answer to the question is known (b2). The observation that an external motivation is provided (either by the ambient agent or by the teacher) leads to belief b4. The observation that the opportunity to answer is provided generates b1, and the opportunity not to answer generates b5. The observation that approval for the correct answer is provided by the teacher leads to b6. The desire and intention to answer the question are denoted d1 and i1, respectively; the desire and intention to avoid answering are denoted d2 and i2, respectively.

The cognitive model for this example was formalised by the following properties in the past-present format (a small sketch illustrating how such a property can be checked against a trace is given after the list):

  • IP1: General belief generation property

    At any point in time a (persistent) belief state b about c holds iff

    at some time point in the past the human observed c. Formally:

    $$ \exists {\mathsf{t2}}\;[ {{\mathsf{ t1}}\; > \;{\mathsf{t2}}\;\& \;{\mathsf{at}}( {{\mathsf{observed}}( {\mathsf{c}} ),\;{\mathsf{t2}}} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{b}},\;{\mathsf{t1}}} ) $$
  • IP1*: Belief b2 generation

    At any point in time belief b2 that the correct answer to the question is known holds iff

    at some time point in the past both b3 and the observation that the material related to the question asked has been learned successfully held. Formally:

    $$ \begin{gathered} \exists {\mathsf{t2}}\;\left[ {{\mathsf{t1 }} > {\mathsf{ t2 }}\& {\mathsf{ at}}\left( {{\mathsf{observed}}\left( {{\mathsf{material}}\_{\mathsf{related}}\_{\mathsf{to}}\_{\mathsf{question}}\_{\mathsf{learned}}} \right),{\mathsf{ t2}}} \right)} \right. \hfill \\ \left. {\& {\mathsf{ at}}\left( {{\mathsf{b3}},{\mathsf{ t2}}} \right)} \right] \hfill \\ \Leftrightarrow {\mathsf{at}}\left( {{\mathsf{b2}},{\mathsf{ t1}}} \right) \hfill \\ \end{gathered} $$
  • IP2: Desire d1 generation

    At any point in time the internal state property d1 holds iff

    at some time point in the past b2 and b3 held. Formally:

    $$ \exists {\mathsf{t4 }}[ {{\mathsf{ t3 }} > {\mathsf{ t4 }}\& {\mathsf{ at}}( {{\mathsf{b2}},{\mathsf{ t4}}} ){\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{b3}},{\mathsf{ t4}}} ){\mathsf{ }}} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{d1}},{\mathsf{ t3}}} ) $$
  • IP3: Intention i1 generation

    At any point in time the internal state property i1 holds iff

    at some time point in the past b4 and d1 held. Formally:

    $$ \exists {\mathsf{t6 }}[ {{\mathsf{ t5 }} > {\mathsf{ t6 }}\& {\mathsf{ at}}( {{\mathsf{d1}},{\mathsf{ t6}}} ){\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{b4}},{\mathsf{ t6}}} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{i1}},{\mathsf{ t5}}} ) $$
  • IP4: Action answer question generation

    At any point in time the action answer question is performed iff

    at some time point in the past both b1 and i1 held. Formally:

    $$ \begin{gathered} \exists {\mathsf{t8 }}[ {{\mathsf{ t7 }} > {\mathsf{ t8 }}\& {\mathsf{ at}}( {{\mathsf{i1}},{\mathsf{ t8}}} ){\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{b1}},{\mathsf{ t8}}} )} ] \Leftrightarrow \hfill \\ {\mathsf{at}}( {{\mathsf{performed}}( {{\mathsf{answer}}\_{\mathsf{question}}} ),{\mathsf{ t7}}} ) \hfill \\ \end{gathered} $$
  • IP5: Desire d2 generation

    At any point in time the internal state property d2 holds iff

    at some time point in the past not b2 held and b3 held. Formally:

    $$ \exists {\mathsf{t1}}0{\mathsf{ }}[ {{\mathsf{ t9 }} > {\mathsf{ t1}}0{\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{not}}( {{\mathsf{b2}}} ),{\mathsf{ t1}}0} ){\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{b3}},{\mathsf{ t1}}0} ){\mathsf{ }}} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{d2}},{\mathsf{ t9}}} ) $$
  • IP6: Intention i2 generation

    At any point in time the internal state property i2 holds iff

    at some time point in the past either d2 held, or both not b4 and d1 held. Formally:

    $$ \begin{gathered} \exists {\mathsf{t12}}\;[ {{\mathsf{ t11}}\; > \;{\mathsf{t12}}\;\& \;{\mathsf{at}}( {{\mathsf{d2}},{\mathsf{ t12}}} )\;|\;( {{\mathsf{at}}( {{\mathsf{not}}( {{\mathsf{b4}}} ),{\mathsf{ t12}}} )\;\& \;{\mathsf{at}}( {{\mathsf{d1}},{\mathsf{ t12}}} )} )} ] \Leftrightarrow \hfill \\ {\mathsf{at}}( {{\mathsf{i2}},\;{\mathsf{t11}}} ) \hfill \\ \end{gathered} $$
  • IP7: Action avoid answering generation

    At any point in time the action avoid answering is performed iff

    at some time point in the past both b5 and i2 held. Formally:

    $$ \begin{gathered} \exists {\mathsf{t14}}\;[ {{\mathsf{ t13}}\; > \;{\mathsf{t14}}\;\& \;{\mathsf{at}}( {{\mathsf{i2}},\;{\mathsf{t14}}} )\;\& \;{\mathsf{at}}( {{\mathsf{b5}},{\mathsf{ t14}}} )} ] \Leftrightarrow \hfill \\ {\mathsf{at}}( {{\mathsf{performed}}( {{\mathsf{avoid}}\_{\mathsf{answering}}} ),\;{\mathsf{t13}}} ) \hfill \\ \end{gathered} $$
  • IP8: State happiness generation

    At any point in time the state happiness is generated iff

    at some time point in the past b6 held. Formally:

    $$ \exists \mathsf{t16}\;[\,\mathsf{t15} > \mathsf{t16}\;\&\;\mathsf{at}(\mathsf{b6},\;\mathsf{t16})\,] \Leftrightarrow \mathsf{at}(\mathsf{happiness},\;\mathsf{t15}) $$
  • IP9: External state child happy generation

    At any point in time the state child happy is generated iff

    at some time point in the past state happiness held. Formally:

    $$ \exists {\mathsf{t18 }}[ {{\mathsf{ t17 }} > {\mathsf{ t18 }}\& {\mathsf{ at}}( {{\mathsf{happiness}},{\mathsf{ t18}}} )} ] \Leftrightarrow {\mathsf{at}}( {{\mathsf{child}}\_{\mathsf{happy}},{\mathsf{ t17}}} ) $$
  • IP10: Observation state approval is provided generation

    At any point in time the state observed(approval is provided) is generated iff

    at some time point in the past state action(answer_question) held. Formally:

    $$ \begin{gathered} \exists {\mathsf{t2}}0{\mathsf{ }}[ {{\mathsf{ t19 }} > {\mathsf{ t2}}0{\mathsf{ }}\& {\mathsf{ at}}( {{\mathsf{action}}( {{\mathsf{answer}}\_{\mathsf{question}}} ),{\mathsf{ t2}}0} )} ] \Leftrightarrow \hfill \\ {\mathsf{at}}( {{\mathsf{observed}}( {{\mathsf{approval is provided}}} ),{\mathsf{ t19}}} ) \hfill \\ \end{gathered} $$
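
To make the past-present format more concrete, the following minimal Python sketch checks a property of the shape of IP2 against a finite trace. The encoding of a trace as a mapping from time points to sets of state names, and the function names, are assumptions made only for this illustration; they are not part of the agent architecture or of the LEADSTO/TTL tooling.

```python
# Illustrative check of a past-present property of the form of IP2:
# at(d1, t3) holds iff at some earlier time t4 both b2 and b3 held.
# The trace encoding (time point -> set of state names) is an assumption
# made for this sketch, not the notation used by the agent architecture.

def ip2_right_hand_side(trace, t3):
    """True iff there is a t4 < t3 at which both b2 and b3 hold in the trace."""
    return any(
        t4 < t3 and {"b2", "b3"} <= trace[t4]
        for t4 in trace
    )

def check_ip2(trace):
    """Verify the biconditional of IP2 at every time point of the trace."""
    return all(("d1" in trace[t3]) == ip2_right_hand_side(trace, t3) for t3 in trace)

# Toy trace: b3 at time 1, b2 and b3 at time 2, d1 from time 3 onwards.
trace = {1: {"b3"}, 2: {"b2", "b3"}, 3: {"d1"}, 4: {"d1"}}
print(check_ip2(trace))  # True for this toy trace
```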

From the automatically generated specifications that ensure the creation of the state child_happy, the following one is chosen:

$$ \begin{gathered} \exists \mathsf{t18}\,[\,\mathsf{t17} > \mathsf{t18}\ \&\ \exists \mathsf{t16}\,[\,\mathsf{t18} > \mathsf{t16}\ \&\ \exists \mathsf{t18}\,[\,\mathsf{t16} > \mathsf{t18}\ \&\ \exists \mathsf{t20}\,[\,\mathsf{t18} > \mathsf{t20}\ \& \\ \exists \mathsf{t8}\,[\,\mathsf{t20} > \mathsf{t8}\ \&\ \exists \mathsf{t6}\,[\,\mathsf{t8} > \mathsf{t6}\ \&\ \mathsf{at}(\mathsf{d1},\ \mathsf{t6})\ \&\ \mathsf{at}(\mathsf{b4},\ \mathsf{t6})\,]\ \&\ \mathsf{at}(\mathsf{b1},\ \mathsf{t8})\,]\,]\,]\,]\,] \\ \Leftrightarrow \mathsf{at}(\mathsf{child\_happy},\ \mathsf{t17}) \end{gathered} $$

This specification has the highest confidence degree of producing the output, equal to the desirability measure of the state child_happy. The predictor state from the chosen specification is d1, which is included in the internal focus. The representational content for this state is:

$$ \begin{gathered} \exists \mathsf{t4}\,[\,\mathsf{t3} > \mathsf{t4}\ \&\ \exists \mathsf{t2}\,[\,\mathsf{t4} > \mathsf{t2}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{material\_related\_to\_question\_learned}),\ \mathsf{t2}) \\ \&\ \exists \mathsf{t24}\,[\,\mathsf{t2} > \mathsf{t24}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t24})\,]\,] \\ \&\ \exists \mathsf{t26}\,[\,\mathsf{t4} > \mathsf{t26}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t26})\,]\,] \Leftrightarrow \mathsf{at}(\mathsf{d1},\ \mathsf{t3}) \end{gathered} $$

From the generated specifications that ensure the creation of the state performed(avoid_answering), the one expressed by property IP7 has been chosen. This specification has the highest confidence degree of producing the output, equal to the undesirability measure of the state performed(avoid_answering). The predictor state from the chosen specification, included in the internal focus, is i2. The representational content for i2 is:

$$ \begin{gathered} \exists \mathsf{t12}\,[\,\mathsf{t11} > \mathsf{t12}\ \&\ (\exists \mathsf{t10}\,[\,\mathsf{t12} > \mathsf{t10}\ \&\ \neg(\exists \mathsf{t2}\,[\,\mathsf{t10} > \mathsf{t2}\ \& \\ \mathsf{at}(\mathsf{observed}(\mathsf{material\_related\_to\_question\_learned}),\ \mathsf{t2})\ \&\ \exists \mathsf{t24}\,[\,\mathsf{t2} > \mathsf{t24}\ \& \\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t24})\,]\,])\ \&\ \exists \mathsf{t26}\,[\,\mathsf{t10} > \mathsf{t26}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t26})\,]\,] \\ \;|\;(\neg(\exists \mathsf{t32}\,[\,\mathsf{t12} > \mathsf{t32}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{external\_motivation\_provided}),\ \mathsf{t32})\,])\ \& \\ \exists \mathsf{t4}\,[\,\mathsf{t12} > \mathsf{t4}\ \&\ \exists \mathsf{t2}\,[\,\mathsf{t4} > \mathsf{t2}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{material\_related\_to\_question\_learned}),\ \mathsf{t2}) \\ \&\ \exists \mathsf{t28}\,[\,\mathsf{t2} > \mathsf{t28}\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t28})\,]\,]\ \&\ \exists \mathsf{t30}\,[\,\mathsf{t4} > \mathsf{t30} \\ \&\ \mathsf{at}(\mathsf{observed}(\mathsf{question\_raised}),\ \mathsf{t30})\,]\,])\,)\,] \Leftrightarrow \mathsf{at}(\mathsf{i2},\ \mathsf{t11}) \end{gathered} $$

Based on the representational content expressions the following atomic monitoring foci are determined:

[Figure: atomic monitoring foci determined from the representational content expressions]

The agent's goals are not achieved when the occurrence of i2 together with the occurrence of d1 is established. In this case the subcomponent Plan Determination of the Agent Specific Task component schedules an intervention by providing a motivating message to the child, or a notification to the teacher that the child may know the answer.

8 Related work

A wide range of existing ambient intelligence applications is formalised using production rules (cf. Christensen 2002) and if-then statements. Two important advantages of such rules are modelling simplicity and executability. However, such formalisms are not suitable for expressing more sophisticated forms of temporal relations, which can be specified using the TTL language. In particular, the references to multiple time points possible in TTL are necessary for modelling forms of behaviour more complex than stimulus-response (e.g., to refer to events that happened in the past). Furthermore, TTL allows one to represent temporal intervals and to refer to histories of states, for example, to express that a medicine improves the health condition of a patient.
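
For instance, in the notation used in this paper such a property could take a form like the following (an illustrative formula; the predicate health_condition_improved and the delay parameter d are assumptions made for this example, not taken from a specific application):

$$ \forall \mathsf{t1}\ [\,\mathsf{at}(\mathsf{performed}(\mathsf{medicine\_intake}),\ \mathsf{t1}) \Rightarrow \exists \mathsf{t2}\ [\,\mathsf{t1} < \mathsf{t2} \le \mathsf{t1} + \mathsf{d}\ \&\ \mathsf{at}(\mathsf{health\_condition\_improved},\ \mathsf{t2})\,]\,] $$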

The executable modelling language LEADSTO used in the proposed architecture has similarities with augmented state machines (Alur and Dill 1994), as well as with timed automata in general. In particular, similarly to augmented state machines, the cognitive dynamics of an agent are represented by a temporal evolution of the agent’s states based on the input of the agent. A more precise and extensive comparison of LEADSTO with other modelling languages is provided in Bosse et al. (2007). In contrast to the subsumption architecture (Brooks 1991) based on augmented state machines, the architecture proposed in this paper allows automated cognitive analysis based on cognitive agent models. Such an analysis is a form of data-driven meta-reasoning involving domain and support agent models, which is not addressed in other existing architectures in Ambient Intelligence.

A statistical approach to predict a change in the state of an elderly person is proposed in Doukas and Maglogiannis (2008). However, the precise type of the change is unknown. To address this issue, the knowledge about relations between internal dynamics of the human and the demonstrated behaviour could be used, as in the approach proposed in this paper.

Approaches based on neural networks can be used to predict human behaviour in ambient intelligence systems from empirical sensor data (e.g., Liang and Liang 2006). Neural networks are trained on these empirical data to establish instantaneous functional relations between the system's inputs and outputs. However, for systems with non-trivial internal dynamics (e.g., including accumulation over time or bifurcation), which can be observed only partially, such neural network-based predictions are often less accurate than cognitive model-based predictions.

As an example of such a case, consider a simplified description of the cognitive dynamics of a human performing a task (based on ideas from Bosse et al. 2008a and Treur 2011), involving a cognitive state of being exhausted to a certain extent. The human is provided with a demand curve D(t) over time that is to be met (e.g., a task of some complexity). Given the demand at time t, the human exerts effort E(t) to address the demand. It is assumed that each human is characterised by a critical point CP, the amount of effort that the human can exert without becoming (more) exhausted. If E(t) > CP, then exhaustion EX(t) accumulates while the human exerts his/her effort; if E(t) < CP, then the human recovers from the accumulated exhaustion; if E(t) = CP, then the human neither accumulates exhaustion nor recovers. Furthermore, if the exhaustion is maximal (equal to 1), the human's effort is limited to CP. A simplified formal cognitive model is provided below, followed by a small illustrative simulation sketch:

$$ \begin{aligned} &\mathsf{E}(\mathsf{t}) = \mathsf{D}(\mathsf{t}), \text{ when } \mathsf{EX}(\mathsf{t}) < 1 \\ &\mathsf{E}(\mathsf{t}) = \mathsf{CP}, \text{ when } \mathsf{EX}(\mathsf{t}) \ge 1 \\ &\mathsf{EX}(\mathsf{t} + \Delta\mathsf{t}) = \mathsf{EX}(\mathsf{t}) + \gamma\,(\mathsf{E}(\mathsf{t}) - \mathsf{CP})\,\Delta\mathsf{t} \end{aligned} $$
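
As a minimal illustration (not part of the architecture itself), the model above can be simulated by straightforward Euler-style iteration; the parameter values, the step size, and the clipping of EX to [0, 1] are assumptions made for this sketch only.

```python
# Minimal Euler-style simulation of the simplified exhaustion model.
# Parameter values (demand D, critical point CP, rate gamma, step dt)
# and the clipping of EX to [0, 1] are illustrative assumptions.

def simulate_exhaustion(demand, cp, gamma, dt, steps, ex0=0.0):
    """Iterate E(t) and EX(t) according to the model."""
    ex = ex0
    history = []
    for _ in range(steps):
        effort = demand if ex < 1.0 else cp        # effort limited to CP when exhausted
        ex = ex + gamma * (effort - cp) * dt       # accumulation / recovery of exhaustion
        ex = min(max(ex, 0.0), 1.0)                # keep EX within [0, 1]
        history.append((effort, ex))
    return history

# Constant demand above the critical point: exhaustion grows linearly until it saturates.
trace = simulate_exhaustion(demand=0.8, cp=0.5, gamma=0.1, dt=1.0, steps=40)
print(trace[0], trace[-1])
```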

Suppose D(t) is constant and above the critical point: D(t) = D > CP. From the model it follows that as long as EX(t) < 1, also E(t) is constant, E(t) = E, and it holds:

$$ \mathsf{EX}(\mathsf{t}) = \mathsf{a}\,\mathsf{t}, \quad \text{where } \mathsf{a} = \gamma\,(\mathsf{E} - \mathsf{CP}) $$
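
This can be seen by iterating the update rule with constant effort E and assuming EX(0) = 0 (an assumption of this derivation, corresponding to a human starting the task unexhausted):

$$ \mathsf{EX}(\mathsf{n}\,\Delta\mathsf{t}) = \mathsf{EX}(0) + \sum_{\mathsf{k}=0}^{\mathsf{n}-1} \gamma\,(\mathsf{E} - \mathsf{CP})\,\Delta\mathsf{t} = \gamma\,(\mathsf{E} - \mathsf{CP})\,\mathsf{n}\,\Delta\mathsf{t} = \mathsf{a}\,\mathsf{t} \quad \text{for } \mathsf{t} = \mathsf{n}\,\Delta\mathsf{t} $$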

This linear dynamics of exhaustion is supported by evidence from Treur (2011).

An ambient agent may be given the goal to ensure that the human is not exhausted while executing a task. Exhaustion is an internal state that cannot be observed or measured directly. Thus, neural network-based methods may derive information about this state only indirectly. Specifically, the exhaustion state may be estimated by using training data on relations between the input demand D and the human's effort E exerted for this demand. In the simple example scenario above, the acquired empirical data on D and E are constant over time. Therefore any instantaneous functional relation calculated, for example, by a neural network will also give a constant value for exhaustion over time: the exhaustion function may be estimated by EX*(t) = c, where c is a constant. At time t, the error of such an estimate compared with the accumulating dynamics of exhaustion is:

$$ err = EX(t) - EX^{*}(t) = a\,t - c $$

The average error over a time interval [0, t] can be calculated as follows (assuming 0 ≤ c/a ≤ t, so that the point u = c/a where au − c changes sign lies within the interval):

$$ \begin{aligned} averr & = \int\limits_{0}^{t} | EX(u) - EX^{*}(u) |\,du\,/\,t \\ & = \int\limits_{0}^{t} | au - c |\,du\,/\,t \\ & = \int\limits_{0}^{c/a} | au - c |\,du\,/\,t + \int\limits_{c/a}^{t} | au - c |\,du\,/\,t \\ & = -\int\limits_{0}^{c/a} ( au - c )\,du\,/\,t + \int\limits_{c/a}^{t} ( au - c )\,du\,/\,t \\ & = -\left[ \tfrac{1}{2}au^{2} - cu \right]_{0}^{c/a} /\,t + \left[ \tfrac{1}{2}au^{2} - cu \right]_{c/a}^{t} /\,t \\ & = \left[ -\tfrac{1}{2}a(c/a)^{2} + c(c/a) + \tfrac{1}{2}at^{2} - ct - \tfrac{1}{2}a(c/a)^{2} + c(c/a) \right] /\,t \\ & = \left[ \tfrac{1}{2}at^{2} - ct + c^{2}/a \right] /\,t \end{aligned} $$

The smallest average error would be given by the c minimizing this expression, which can be calculated by differentiation with respect to c:

$$ \begin{aligned} &-{\hbox{ t}} + 2{\hbox{c}}_{{{\hbox{opt}}}} /{\hbox{a}} = 0 \\ &{\hbox{c}}_{{{\hbox{opt}}}} = {\hbox{at}}/2 \ \end{aligned} $$

For this optimal constant value c_opt the average error is:

$$ [1/2\,{\hbox{at}}^{2} -{\hbox{c}}_{\rm opt} {\hbox{t}} + {\hbox{c}}_{\rm opt}^{2}/{\hbox{a}}]/{\hbox{t}} = [1/2\,{\hbox{at}}^{2} -( {\hbox{at}/2} ){\hbox{t}} +( {{\hbox{at}}/2})^{2} /{\hbox{a}}]/{\hbox{t}} = 1/4\,{\hbox{at}} $$
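
The closed-form results above can be checked numerically. The sketch below approximates the average error by a Riemann sum and compares it with the derived minimum ¼·a·t at c = a·t/2; the parameter values are arbitrary assumptions for this illustration.

```python
# Numerical check of the average-error analysis: approximate
# averr(c) = (1/t) * integral_0^t |a*u - c| du by a Riemann sum and
# compare with the closed forms derived above. Parameter values are
# arbitrary assumptions for this sketch.

def average_error(a, c, t, n=100_000):
    """Riemann-sum approximation of the average of |a*u - c| over [0, t]."""
    du = t / n
    return sum(abs(a * (k + 0.5) * du - c) for k in range(n)) * du / t

a, t = 0.02, 100.0
c_opt = a * t / 2.0                       # optimal constant estimate derived above
print(average_error(a, c_opt, t))         # approx. 0.25 * a * t
print(0.25 * a * t)                       # closed-form minimum average error
```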

From this analysis it is clear that over longer time periods the average error of any approach based on instantaneous functional relations becomes large. Because of this error, the support provided by the ambient agent to the human based on such a neural network model may not always be appropriate.

If, however, data on exhaustion were available, the cognitive model could be learned using, for example, time series analysis methods (Box et al 1994). The knowledge compilation approach described in this paper could then be applied to determine direct relations between inputs and the cognitive states. Such knowledge compilation allows for more efficient identification of particular cognitive states and of the need for intervention than reasoning with the original cognitive model. The computational gain depends on the number of operations that need to be executed to determine the cognitive state in focus using the original model.

Another popular approach to formalise recognition and prediction of human behaviour is by Hidden Markov Models (HMM) (e.g., Sanchez et al. 2007). In HMM-based approaches known to the authors, recognition of human activities is based on contextual information of the activity execution only; no cognitive or (gradual) preparation states that precede actual execution of activities are considered. As indicated in Sanchez et al. (2007), the choice of relevant contextual variables for HMMs is not simple and every additional variable causes a significant increase in the complexity of the recognition algorithm. Knowledge of the cognitive dynamics that cause particular behaviour would provide more justification and support for the choice of variables relevant to this behaviour. Furthermore, as pointed out in Brdiczka et al. (2009), for high-quality behaviour recognition a large corpus of training data is needed. The computational costs of the pre-processing (knowledge compilation) phase of our approach are much lower (polynomial in the size of the specification), and no model training is required. HMM-based approaches may be used in a complementary way to the approach proposed in this paper. As our approach relies heavily on the validity of cognitive models, HMM-based approaches could be applied to learn cognitive models based on behavioural patterns of individuals. Furthermore, in many cases the structure of a cognitive model can be determined based on theories and evidence from Cognitive Science, Psychology and Neurology, as is done e.g., in Bosse et al. (2008a). The model parameters may then be estimated using dedicated techniques, e.g., as shown in Both et al. (2009).

9 Discussion

In this paper an ambient agent model was presented that incorporates a more in-depth analysis based on a cognitive model of a human's functioning. Having such a cognitive model allows the ambient agent to relate the performance aspects under consideration to the underlying cognitive states causing them. Cognitive models are usually represented by causal or dynamical relationships between cognitive states. Often such models are used either by applying temporal reasoning methods or by logical and/or numerical simulation; e.g., Bosse et al. (2007) and Port and van Gelder (1995).

In the current paper a third way of using such a model is introduced, namely by deriving more indirect relations from the cognitive model. This approach can be viewed as a form of knowledge compilation (Cadoli and Donini 1997) in a pre-processing phase, so that the main processing phase is computationally less intensive. Such (automated) knowledge compilation occurs in two ways: first, to derive the relationships between the considered performance aspects and the relevant internal cognitive states, and next to relate such cognitive states to observable events in the form of monitoring foci. These monitoring foci are determined from the cognitive model by automatically deriving representation relations for cognitive states in the form of temporal predicate logical specifications. From these temporal expressions the (atomic) events are derived that are to be monitored, and from the atomic monitoring information on these events the more complex representation expressions are verified automatically.

The introduced approach can be applied in domains where a cognitive model of the human's functioning is available, or at least can be acquired. It has been illustrated for two case studies of modest complexity, which showed that the approach is indeed feasible and efficient. A next step is to evaluate the approach in a wider context by implementing it in a more complex real-life application.

As an extension of the approach introduced here, uncertainty can be incorporated within the hybrid predicate logical approach by adding arguments to predicates indicating probabilities or other indicators of imperfection. In the verification process such labelled information can also be propagated to the more complex formulae for the representation relations, when certain combination rules are adopted for the logical connectives. Such combination functions may be affected by the extent to which different uncertainties or probabilities are (conditionally) dependent, which is often highly domain-specific. For example, when two atoms a and b are fully independent, and a & b causes c with some certainty, then the certainty of c is calculated in a different manner than when a and b coincide almost 100% of the time. As a special case, for domains in which the assumptions underlying a Bayesian network approach are fulfilled, the corresponding propagation functions can be used.
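
As a simple illustration of how such combination rules might differ, the sketch below contrasts a product rule (appropriate under an independence assumption) with a minimum rule (a common choice when the atoms almost always coincide); both are given only as examples of possible combination functions, not as the combination rules of the proposed architecture.

```python
# Illustrative combination rules for the certainty of a conjunction a & b.
# The product rule assumes a and b are independent; the minimum rule is a
# common choice when a and b (almost) always coincide. Both are examples
# of possible combination functions, not prescriptions of the architecture.

def and_independent(ca, cb):
    """Certainty of a & b when a and b are assumed independent."""
    return ca * cb

def and_coinciding(ca, cb):
    """Certainty of a & b when a and b are assumed to (almost) coincide."""
    return min(ca, cb)

ca, cb = 0.8, 0.8
print(and_independent(ca, cb))  # 0.64
print(and_coinciding(ca, cb))   # 0.8
```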