Temporally extended goal recognition in fully observable non-deterministic domain models

Goal Recognition is the task of discerning the intended goal that an agent aims to achieve, given a set of goal hypotheses, a domain model, and a sequence of observations (i.e., a sample of the plan executed in the environment). Existing approaches assume that goal hypotheses comprise a single conjunctive formula over a single final state and that the environment dynamics are deterministic, preventing the recognition of temporally extended goals in more complex settings. In this paper, we expand goal recognition to temporally extended goals in Fully Observable Non-Deterministic (fond) planning domain models, focusing on goals on finite traces expressed in Linear Temporal Logic (ltl f) and Pure-Past Linear Temporal Logic (ppltl). We develop the first approach capable of recognizing goals in such settings and evaluate it using different ltl f and ppltl goals over six fond planning domain models. Empirical results show that our approach is accurate in recognizing temporally extended goals in different recognition settings.


Introduction
Goal Recognition is the task of recognizing the intentions of autonomous agents or humans by observing their interactions in an environment. Existing work on goal and plan recognition addresses this task over several different types of domain settings, such as plan libraries (Avrahami-Zilberbrand and Kaminka, 2005), plan tree grammars (Geib and Goldman, 2009), classical planning domain models (Ramírez and Geffner, 2009, 2010; Sohrabi et al, 2016; Pereira et al, 2020), stochastic environments (Ramírez and Geffner, 2011), continuous domain models (Kaminka et al, 2018), incomplete discrete domain models (Pereira et al, 2019a), and approximate control models (Pereira et al, 2019b). Despite the ample literature and recent advances, most existing approaches to Goal Recognition as Planning cannot recognize temporally extended goals, i.e., goals formalized in terms of time, e.g., the exact order in which a set of facts of a goal must be achieved in a plan. Recently, Aineto et al (2021) proposed a general formulation of a temporal inference problem in deterministic planning settings. However, most of these approaches also assume that the observed actions' outcomes are deterministic and do not deal with unpredictable, possibly adversarial, environmental conditions.
Research on planning for temporally extended goals in deterministic and non-deterministic domain settings has increased over the years, starting with the pioneering work on planning for temporally extended goals (Bacchus and Kabanza, 1998) and on planning via model checking (Cimatti et al, 1997). This continued with the work on integrating ltl goals into planning tools (Patrizi et al, 2011, 2013), and, most recently, the work of Bonassi et al (2023), introducing a novel Pure-Past Linear Temporal Logic encoding for planning in the Classical Planning setting. Other existing work relates ltl goals with synthesis for planning in non-deterministic domain models, often focused on the finite-trace variants of ltl (De Giacomo and Vardi, 2013, 2015; Camacho et al, 2017, 2018; De Giacomo and Rubin, 2018; Aminof et al, 2020).
In this paper, we introduce the task of goal recognition in discrete domains that are fully observable and in which the outcomes of actions are non-deterministic, possibly adversarial, i.e., Fully Observable Non-Deterministic (fond) domains, allowing the formalization of temporally extended goals using two types of temporal logic on finite traces: Linear-time Temporal Logic (ltl f) and Pure-Past Linear-time Temporal Logic (ppltl) (De Giacomo et al, 2020).
The main contribution of this paper is three-fold. First, based on the definition of Plan Recognition as Planning introduced in (Ramírez and Geffner, 2009), we formalize the problem of recognizing temporally extended goals (expressed in ltl f or ppltl) in fond planning domains, handling both stochastic (i.e., strong-cyclic plans) and adversarial (i.e., strong plans) environments (Aminof et al, 2020). Second, we extend the probabilistic framework for goal recognition proposed in (Ramírez and Geffner, 2010), and develop a novel probabilistic approach that reasons over executions of policies and returns a posterior probability distribution for the goal hypotheses. Third, we develop a compilation approach that generates an augmented fond planning problem by compiling temporally extended goals together with the original planning problem. This compilation allows us to use any off-the-shelf fond planner to perform the recognition task in fond planning models with temporally extended goals.
We focus on fond domain models with stochastic non-determinism, and conduct an extensive set of experiments with different complex planning problems. We empirically evaluate our approach using different ltl f and ppltl goals over six fond planning domain models, including a real-world non-deterministic domain model (Nebel et al, 2013). Our experiments show that our approach is accurate in recognizing temporally extended goals in two different recognition settings: offline recognition, in which the recognition task is performed in "one-shot", and the observations are given at once and may contain missing information; and online recognition, in which the observations are received incrementally, and the recognition task is performed gradually.

Preliminaries
In this section, we briefly recall the syntax and semantics of Linear-time Temporal Logics on finite traces (ltl f /ppltl) and review the concepts and terminology of fond planning.
Given a set AP of propositional symbols, ltl f formulas are defined by φ ::= a | ¬φ | φ1 ∧ φ2 | ○φ | φ1 U φ2, where a ∈ AP, ○ is the next operator, and U is the until operator; formulas are interpreted over finite traces τ of propositional interpretations over AP. An ltl f formula φ is true in τ, denoted by τ ⊧ φ, iff τ, 0 ⊧ φ. As advocated in (De Giacomo et al, 2020), we also use the pure-past version of ltl f, here denoted as ppltl, due to its compelling computational advantage compared to ltl f when goal specifications are naturally expressed in a past fashion. ppltl refers only to the past and has a natural interpretation on finite traces: formulas are satisfied if they hold in the current (i.e., last) position of the trace.
Given a set AP of propositional symbols, ppltl formulas are defined by φ ::= a | ¬φ | φ1 ∧ φ2 | ⊖φ | φ1 S φ2, where a ∈ AP, ⊖ is the before operator, and S is the since operator. Similarly to ltl f, common abbreviations are the once operator ⟐φ ≐ true S φ and the historically operator ⊟φ ≐ ¬⟐¬φ. Given a finite trace τ and a ppltl formula φ, we inductively define when φ holds in τ at position i (0 ≤ i < |τ|), written τ, i ⊧ φ, as follows. For atomic propositions and Boolean operators it is as for ltl f. For past operators: τ, i ⊧ ⊖φ iff i ≥ 1 and τ, i−1 ⊧ φ; and τ, i ⊧ φ1 S φ2 iff there exists k, with 0 ≤ k ≤ i, such that τ, k ⊧ φ2 and, for all j, k < j ≤ i, we have τ, j ⊧ φ1.
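The inductive semantics above translates directly into a recursive evaluator. The following is a minimal illustrative sketch (not part of the paper's implementation): formulas are nested tuples, and each position of the trace is the set of atomic propositions that hold there.

```python
def holds(trace, i, phi):
    """True iff the ppltl formula phi holds in trace at position i."""
    op = phi[0]
    if op == "atom":                    # a in AP
        return phi[1] in trace[i]
    if op == "not":
        return not holds(trace, i, phi[1])
    if op == "and":
        return holds(trace, i, phi[1]) and holds(trace, i, phi[2])
    if op == "before":                  # (before)phi: phi held one step earlier
        return i >= 1 and holds(trace, i - 1, phi[1])
    if op == "since":                   # phi1 S phi2
        return any(holds(trace, k, phi[2])
                   and all(holds(trace, j, phi[1]) for j in range(k + 1, i + 1))
                   for k in range(i + 1))
    if op == "once":                    # once phi = true S phi
        return any(holds(trace, k, phi[1]) for k in range(i + 1))
    raise ValueError("unknown operator: %s" % op)

def true_in(trace, phi):
    """A ppltl formula is true in a trace iff it holds at the last position."""
    return holds(trace, len(trace) - 1, phi)
```

For example, on the trace ({a}, {}, {b}), the formula "once a" is true at the last position, while "before b" is false there, matching the definitions above.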
Temporally Extended Goal Recognition in Fully Observable Non-Deterministic Domain Models

3
A ppltl formula φ is true in τ, denoted by τ ⊧ φ, if and only if τ, |τ| − 1 ⊧ φ. A key property of temporal logics that we exploit in this work is that, for every ltl f /ppltl formula φ, there exists a Deterministic Finite-state Automaton (DFA) A_φ accepting the traces τ satisfying φ (De Giacomo and Vardi, 2013; De Giacomo et al, 2020).
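This DFA property is what the rest of the approach rests on: checking whether a finite trace satisfies φ reduces to running the trace through A_φ. A minimal sketch (the transition table is hand-built here for the formula "eventually a"; the paper obtains DFAs automatically via MONA):

```python
# DFA for the ltl_f formula "eventually a": q0 is initial, q1 is final
# and absorbing. Each trace symbol is the set of propositions true at
# that instant.
DFA = {
    "initial": "q0",
    "final": {"q1"},
    # delta[(state, "does a hold?")] -> next state
    "delta": {("q0", False): "q0", ("q0", True): "q1",
              ("q1", False): "q1", ("q1", True): "q1"},
}

def accepts(dfa, trace, atom="a"):
    """Run the finite trace through the DFA; accept iff we end in a final state."""
    q = dfa["initial"]
    for symbol in trace:
        q = dfa["delta"][(q, atom in symbol)]
    return q in dfa["final"]
```

A trace in which `a` eventually holds is accepted, and one in which it never holds is rejected, mirroring τ ⊧ ◇a.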

FOND Planning
A Fully Observable Non-Deterministic planning domain model (fond) is a tuple D = ⟨2^F, A, α, tr⟩ (Geffner and Bonet, 2013), where 2^F is the set of possible states and F is a set of fluents (atomic propositions); A is the set of actions; α(s) ⊆ A is the set of applicable actions in a state s; and tr(s, a) is the non-empty set of successor states that follow action a in state s. A domain D is assumed to be compactly represented (e.g., in PDDL (McDermott et al, 1998)), hence its size is |F|. Given the set of literals of F as Literals(F) = F ∪ {¬f | f ∈ F}, every action a ∈ A is usually characterized by ⟨Pre_a, Eff_a⟩, where Pre_a ⊆ Literals(F) is the action's preconditions, and Eff_a is the action's effects. An action a can be applied in a state s if the set of fluents in Pre_a holds true in s. The result of applying a in s is a successor state s′ non-deterministically drawn from one of the Eff_a^i in Eff_a = {Eff_a^1, ..., Eff_a^n}. In fond planning, some actions have uncertain outcomes, i.e., non-deterministic effects (|tr(s, a)| > 1 in some states s in which a is applicable), whose outcome cannot be predicted in advance. PDDL expresses uncertain outcomes using the oneof keyword (Bryce and Buffet, 2008), as widely used by several fond planners (Mattmüller et al, 2010; Muise et al, 2012). We define fond planning problems as follows.
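A small illustrative sketch of these definitions (the data layout and fluent names are ours, not the paper's): a state is a set of fluents, an action carries its preconditions and a `oneof` list of outcomes, and tr(s, a) returns the non-empty set of successors.

```python
def applicable(state, action):
    """An action is applicable iff its preconditions hold in the state."""
    return action["pre"] <= state

def tr(state, action):
    """Successor states: one per outcome in Eff_a = {Eff_a^1, ..., Eff_a^n}."""
    assert applicable(state, action)
    succs = set()
    for add, delete in action["oneof"]:
        succs.add(frozenset((state - delete) | add))
    return succs

# A Triangle-Tireworld-style action (move 11 21): either the car simply
# arrives, or it arrives with a flat tire (the oneof outcome).
move_11_21 = {
    "pre": {"vAt_11", "road_11_21", "not_flattire"},
    "oneof": [
        ({"vAt_21"}, {"vAt_11"}),                                   # normal move
        ({"vAt_21", "flattire"}, {"vAt_11", "not_flattire"}),       # tire goes flat
    ],
}
```

Applying the action in the initial state yields two successor states, both at location 21, one of them with a flat tire, which is exactly the non-determinism depicted by the two arrows in Figure 1b.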
Definition 1 A fond planning problem is a tuple P = ⟨D, s 0 , G⟩, where D is a fond domain model, s 0 is an initial assignment to fluents in F (i.e., initial state), and G ⊆ F is the goal state.
Solutions to a fond planning problem P are policies. A policy is usually denoted as π, and formally defined as a partial function π : 2^F → A mapping non-goal states into applicable actions that eventually reach the goal state G from the initial state s_0. A policy π for P induces a set of possible executions ⃗E = {⃗e_1, ⃗e_2, ...}, that are state trajectories, possibly finite (i.e., histories) (s_0, ..., s_n), where s_{i+1} ∈ tr(s_i, a_i) and a_i ∈ α(s_i) for i = 0, ..., n − 1, or possibly infinite s_0, s_1, ..., obtained by choosing some possible outcome of the actions instructed by the policy. A policy π is a solution to P if every execution it generates is finite and satisfies the goal G in its last state, i.e., s_n ⊧ G. In this case, we say that π is winning. Cimatti et al (2003) define three kinds of solutions to fond planning problems: weak, strong-cyclic, and strong solutions. We formally define them in Definitions 2, 3, and 4, respectively.
Definition 2 A weak solution is a policy that achieves the goal state G from the initial state s_0 under at least one selection of action outcomes; namely, such a solution has some chance of achieving the goal state G.
Definition 3 A strong-cyclic solution is a policy that guarantees to achieve the goal state G from the initial state s 0 only under the assumption of fairness 1 .However, this type of solution may revisit states, so the solution cannot guarantee to achieve the goal state G in a fixed number of steps.
Definition 4 A strong solution is a policy that is guaranteed to achieve the goal state G from the initial state s 0 regardless of the environment's non-determinism.This type of solution guarantees to achieve the goal state G in a finite number of steps while never visiting the same state twice.
In this work, we focus on strong-cyclic solutions, where the environment acts in an unknown but stochastic way. Nevertheless, our recognition approach also applies to strong solutions, where the environment is purely adversarial (i.e., it may always choose effects against the agent).
As a running example, we use the well-known fond domain model called Triangle-Tireworld, where locations are connected by roads, and the agent can drive through them. The objective is to drive from one location to another. However, while driving between locations, a tire may go flat, and if there is a spare tire in the car's location, then the car can use it to fix the flat tire. Figure 1a illustrates a fond planning problem for the Triangle-Tireworld domain, where circles are locations, arrows represent roads, spare tires are depicted as tires, and the agent is depicted as a car. Figure 1b shows a policy π to achieve location 22. Note that, to move from location 11 to location 21, there are two arrows labeled with the action (move 11 21): (1) when moving does not cause the tire to go flat; (2) when moving causes the tire to go flat. The policy depicted in Figure 1b guarantees the success of achieving location 22 despite the environment's non-determinism.
In this work, we assume from Classical Planning that the cost is 1 for all non-deterministic instantiated actions a ∈ A; in this example, the cost of each execution of the policy π depicted in Figure 1b is, therefore, its number of actions.

FOND Planning for LTL f and PPLTL Goals

We base our approach to goal recognition in fond domains for temporally extended goals on fond planning for ltl f and ppltl goals (Camacho et al, 2017, 2018; De Giacomo and Rubin, 2018). We formally define a fond planning problem for ltl f /ppltl goals in Definition 5, as follows.
Definition 5 A fond planning problem for ltl f /ppltl goals is a tuple Γ = ⟨D, s 0 , φ⟩, where D is a standard fond domain model, s 0 is the initial state, and φ is a goal formula, formally represented either as an ltl f or a ppltl formula.
In fond planning for temporally extended goals, a policy π is a partial function π : (2^F)+ → A mapping histories, i.e., sequences of states, into applicable actions. A policy π for Γ achieves a temporal formula φ if and only if every sequence of states generated by π, despite the non-determinism of the environment, is accepted by A_φ.
Key to our recognition approach is the use of off-the-shelf fond planners for standard reachability goals to handle also temporally extended goals, through an encoding of the automaton for the goal into an extended planning domain expressed in PDDL. Compiling temporally extended goals into planning domain models has a long history in the Planning literature. In particular, Baier and McIlraith (2006) developed deterministic planning with special first-order quantified ltl goals on finite-state sequences.
Their technique encodes a Non-Deterministic Finite-state Automaton (NFA), resulting from the ltl formulas, into deterministic planning domains for which Classical Planning technology can be leveraged. Our parameterization of objects of interest is somewhat similar to their approach.
Starting from Baier and McIlraith (2006), still in the context of deterministic planning, Torres and Baier (2015) proposed a polynomial-time compilation of ltl goals on finite-state sequences into alternating automata, leaving non-deterministic choices to be decided at planning time. Finally, Camacho et al (2017, 2018) built upon Baier and McIlraith (2006) and Torres and Baier (2015), proposing a compilation in the context of fond domain models that simultaneously determinizes the NFA for ltl f on-the-fly and encodes it into PDDL. However, this encoding introduces a lot of bookkeeping machinery, due to the removal of any form of angelic non-determinism mismatching with the devilish non-determinism of PDDL for fond.
Although inspired by these works, our approach differs in several technical details. We encode the DFA directly into a non-deterministic PDDL planning domain, taking advantage of the parametric nature of PDDL domains, which are then instantiated into propositional problems when solving a specific task. Given a fond planning problem Γ represented in PDDL, we transform Γ as follows. First, we transform the temporally extended goal formula φ (formalized either in ltl f or ppltl) into its corresponding DFA A_φ through the highly-optimized MONA tool (Henriksen et al, 1995). Second, from A_φ, we build a parametric DFA (PDFA), representing the lifted version of the DFA. Finally, the encoding of such a PDFA into PDDL yields an augmented fond domain model Γ′. Thus, we reduce fond planning for ltl f /ppltl goals to a standard fond planning problem solvable by any off-the-shelf fond planner.

Translation to Parametric DFA
The use of parametric DFAs is based on the following observations. In temporal logic formulas and, hence, in the corresponding DFAs, propositions are represented by domain fluents grounded on specific objects of interest. We can replace these propositions with predicates using object variables and then have a mapping function m_obj that maps such variables into the objects of the problem instance. In this way, we get a lifted and parametric representation of the DFA, i.e., a PDFA, which is merged with the domain. Here, the objective is to capture the entire dynamics of the DFA within the planning domain model itself. To do so, starting from the DFA we build a PDFA whose states and symbols are the lifted versions of those in the DFA. Formally, to construct a PDFA we use the mapping function m_obj, which maps the set of objects of interest present in the DFA to a set of free variables. Given the mapping function m_obj, we can define a PDFA as follows.
Definition 6 Given a set of object symbols O and a set of free variables V, we define a mapping function m_obj that maps each object in O to a free variable in V.
Given a DFA and the objects of interest for Γ, we can construct a PDFA as follows.

Definition 7 A PDFA is a tuple A^p_φ = ⟨Σ^p, Q^p, q^p_0, δ^p, F^p⟩, where: Σ^p = {σ^p_0, ..., σ^p_n} = 2^F is the alphabet of fluents; Q^p is a nonempty set of parametric states; q^p_0 is the parametric initial state; δ^p : Q^p × Σ^p → Q^p is the parametric transition function; and F^p ⊆ Q^p is the set of parametric final states. Σ^p, Q^p, q^p_0, δ^p, and F^p are obtained by applying m_obj to all the components of the corresponding DFA.
Example 1 Given the ltl f formula ◇(vAt(51)), the object of interest 51 is replaced by the object variable x (i.e., m_obj(51) = x); the corresponding DFA and PDFA for this formula are depicted in Figure 2.
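The lifting of Definition 7 can be sketched in a few lines. The sketch below (data layout ours; the paper builds the DFA with MONA) applies m_obj to the transition symbols of the Example 1 DFA, turning `vAt(51)` into its parametric version `vAt(x)`.

```python
def lift(symbol, m_obj):
    """Replace every object of interest in a transition symbol with its
    free variable, e.g. 'vAt(51)' -> 'vAt(x)' when m_obj = {'51': 'x'}."""
    for obj, var in m_obj.items():
        symbol = symbol.replace(obj, var)
    return symbol

def to_pdfa(dfa, m_obj):
    """Apply m_obj to all components of the DFA (Definition 7)."""
    return {
        "initial": dfa["initial"],
        "final": set(dfa["final"]),
        "delta": {(q, lift(sym, m_obj)): q2
                  for (q, sym), q2 in dfa["delta"].items()},
    }

# DFA of Example 1 for "eventually vAt(51)"; the object of interest is 51.
dfa = {"initial": "q0", "final": {"q1"},
       "delta": {("q0", "not vAt(51)"): "q0",
                 ("q0", "vAt(51)"): "q1",
                 ("q1", "true"): "q1"}}
pdfa = to_pdfa(dfa, {"51": "x"})
```

Instantiating the resulting parametric transitions on concrete objects (the inverse of m_obj) recovers the original DFA, which is the property the encoding relies on.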
When the resulting new domain is instantiated, we implicitly get back the original DFA in the Cartesian product with the original instantiated domain. Note that this way of proceeding is similar to what is done in (Baier and McIlraith, 2006), where they handle ltl f goals expressed in a special fol syntax, with the resulting automata (non-deterministic Büchi automata) parameterized by the variables in the ltl f formulas.

PDFA Encoding in PDDL
Once the PDFA has been computed, we encode its components within the planning problem Γ specified in PDDL, thus producing an augmented fond planning problem Γ′. Intuitively, the additional parts of Γ′ are used to synchronize the dynamics between the domain and the automaton sequentially. Specifically, Γ′ is composed of the following components.

Fluents
F′ has the same fluents as F, plus fluents representing each state of the PDFA, and a fluent called turnDomain, which controls the alternation between the domain's actions and the PDFA's synchronization action. Formally, F′ = F ∪ {q | q ∈ Q^p} ∪ {turnDomain}.

Domain Actions
Actions in A are modified by adding turnDomain to their preconditions and ¬turnDomain to their effects: Pre′_a = Pre_a ∪ {turnDomain} and Eff′_a = Eff_a ∪ {¬turnDomain}, for all a ∈ A.
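This modification is purely mechanical, as the following sketch illustrates (the dictionary layout is ours, not the paper's encoding): every original action gains turnDomain as a precondition and deletes it in its effects, so that a domain action and the PDFA's trans action must alternate.

```python
def augment(action):
    """Pre'_a = Pre_a + {turnDomain}; Eff'_a additionally deletes turnDomain."""
    return {
        "pre": set(action["pre"]) | {"turnDomain"},
        "eff_add": set(action.get("eff_add", set())),
        "eff_del": set(action.get("eff_del", set())) | {"turnDomain"},
    }

# Original domain action (simplified from the running example).
move = {"pre": {"vAt_11", "road_11_21"},
        "eff_add": {"vAt_21"},
        "eff_del": {"vAt_11"}}

move_prime = augment(move)
```

After a domain action fires, turnDomain is false, so only the synchronization action (which restores turnDomain while updating the automaton-state fluents) is applicable next.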

Initial and Goal States
The new initial condition is specified as s′_0 = s_0 ∪ {q^p_0} ∪ {turnDomain}. This comprises the initial condition of the previous domain D (s_0), plus the initial state of the PDFA and the predicate turnDomain. Considering the example in Figure 1a and the PDFA in Figure 2b, the new initial condition is as follows in PDDL:

(:init (and (road 11 21) ...
            (spare-in 21) (spare-in 12) ...
            (q0 51) (turnDomain)))

Listing 2: PDDL initial condition for φ = ◇(vAt(51))

The new goal condition requires the PDFA to be in one of its accepting states, together with turnDomain, as follows:

(:goal (and (q1 51) (turnDomain)))

Listing 3: PDDL goal condition for φ = ◇(vAt(51))

We note that, both in the initial and goal conditions of the new planning problem, the PDFA states are grounded back on the objects of interest thanks to the inverse of the mapping m_obj.
Executions of a policy for our new fond planning problem Γ′ interleave real domain actions a′_i ∈ A′ with sequences t_1, ..., t_n of synchronization trans actions, which, at the end, can easily be removed to extract the desired execution of the original problem. In the remainder of the paper, we refer to the compilation just exposed as fond4ltl f.

Theoretical Property of the PDDL Encoding
We now study the theoretical properties of the encoding presented in this section. Theorem 1 states that solving fond planning for ltl f /ppltl goals amounts to solving standard fond planning problems for reachability goals. A policy for the former can be easily derived from a policy for the latter.
Theorem 1 Let Γ be a fond planning problem with an ltl f /ppltl goal φ, and let Γ′ be the compiled fond planning problem with a reachability goal state. Then, Γ has a winning policy π : (2^F)+ → A if and only if Γ′ has a winning policy π′ : (2^F′)+ → A′.

Proof (→). We start with a policy π of the original problem that is winning by assumption. Given π, we can always build a new policy, which we call π′, following the encoding presented in Section 3. The newly constructed policy modifies the histories of π by adding fluents and an auxiliary deterministic action trans, both related to the DFA associated with the ltl f /ppltl formula φ. Now, we show that π′ is an executable policy and that it is winning for Γ′. For executability, observe that, by construction of the new planning problem Γ′, all action effects of the original problem Γ are left unmodified, and the auxiliary action trans only changes the truth value of the additional fluents given by the DFA A^p_φ (i.e., the automaton states). Therefore, the newly constructed policy π′ can be executed. To see that π′ is winning and satisfies the ltl f /ppltl goal formula φ, we reason about all possible executions. Every time the policy π′ stops, we can extract an induced state trajectory of length n whose last state s′_n contains one of the final states F^p of the automaton A^p_φ. This means that the induced state trajectory is accepted by the automaton A^p_φ. Then, by the results of De Giacomo and Vardi (2013) and De Giacomo et al (2020), we have that τ ⊧ φ.
(←). From a winning policy π′ for the compiled problem, we can always project out all the auxiliary trans actions, obtaining a corresponding policy π. We need to show that the resulting policy π is winning, namely, that it can be successfully executed on the original problem Γ and satisfies the ltl f /ppltl goal formula φ. Executability follows from the fact that the deletion of trans actions and of the related auxiliary fluents from the state trajectories induced by π does not modify any precondition/effect of the original domain actions (i.e., a ∈ A). Hence, under the right preconditions, any domain action can be executed. Finally, the satisfaction of the ltl f /ppltl formula φ follows directly from the results of De Giacomo and Vardi (2013) and De Giacomo et al (2020). Indeed, every execution of the winning policy π′ stops when reaching one of the final states F^p of the automaton A^p_φ in the last state s_n; thus, every execution of π satisfies φ. Thus, the thesis holds.

Goal Recognition in FOND Planning Domains with LTL f and PPLTL Goals
We now introduce our recognition approach, which is able to recognize temporally extended (ltl f and ppltl) goals in fond planning domains. Our approach extends the probabilistic framework of Ramírez and Geffner (2010) to compute posterior probabilities over temporally extended goal hypotheses, by reasoning over the set of possible executions of policies π and the observations. Our goal recognition approach works in two stages: the compilation stage and the recognition stage. In the next sections, we describe in detail how these two stages work; Figure 3 illustrates them. Since we deal with non-deterministic domain models, an observation sequence Obs corresponds to a successful execution ⃗e in the set of all possible executions ⃗E of a strong-cyclic policy π that achieves the actual intended hidden goal φ*. In this work, we consider two recognition settings: Offline Keyhole Recognition and Online Recognition. In Offline Keyhole Recognition, the observed agent is completely unaware of the recognition process (Armentano and Amandi, 2007), the observation sequence Obs is given at once, and it can be either full or partial: in a full observation sequence, we observe all actions of an agent's plan, whereas in a partial observation sequence, we observe only a sub-sequence thereof. By contrast, in Online Recognition (Vered et al, 2016), the observed agent is also unaware of the recognition process, but the observation sequence is revealed incrementally instead of being given in advance and at once, as in Offline Recognition, thus making the recognition process a much harder task.
An "ideal" solution for a goal recognition problem comprises a selection of the goal hypotheses containing only the single actual intended hidden goal φ* ∈ G that the observation sequence Obs of a plan execution achieves (Ramírez and Geffner, 2009, 2010). Fundamentally, there is no exact solution for a goal recognition problem, but it is possible to produce a probability distribution over the goal hypotheses and the observations, so that the goals that "best" explain the observation sequence are the most probable ones. We formally define a solution to a goal recognition problem in fond planning with temporally extended goals in Definition 9.
Definition 9 Solving a goal recognition problem T_φ requires selecting a temporally extended goal hypothesis φ ∈ G_φ such that φ = φ*; the selection represents how well φ predicts or explains what the observation sequence Obs aims to achieve.
Existing recognition approaches often return either a probability distribution over the set of goals (Ramírez and Geffner, 2010; Sohrabi et al, 2016) or scores associated with each possible goal hypothesis (Pereira et al, 2020). Here, we return a probability distribution P over the set of temporally extended goals G_φ that "best" explains the observation sequence Obs.

Probabilistic Goal Recognition
We now recall the probabilistic framework for Plan Recognition as Planning proposed in Ramírez and Geffner (2010). The framework sets the probability distribution for every goal G in the set of goal hypotheses G, given the observation sequence Obs, to be the Bayesian posterior conditional probability

P(G | Obs) = η · P(Obs | G) · P(G),     (1)

where P(G) is the a priori probability assigned to goal G, η is a normalization factor inversely proportional to the probability of Obs, and P(Obs | G) = Σ_π P(Obs | π) · P(π | G), where P(Obs | π) is the probability of obtaining Obs by executing a policy π, and P(π | G) is the probability that an agent pursuing G selects π. Next, we extend the probabilistic framework above to recognize temporally extended goals in fond planning domain models.
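Equation 1 can be sketched directly. In the sketch below, the likelihood and prior values are placeholders (not taken from the paper); the point is only the shape of the computation: multiply likelihood by prior per goal, then normalize with η.

```python
def posterior(likelihood, prior):
    """Bayesian posterior P(G | Obs) = eta * P(Obs | G) * P(G).

    likelihood: dict goal -> P(Obs | G)
    prior:      dict goal -> P(G)
    """
    unnorm = {g: likelihood[g] * prior[g] for g in likelihood}
    eta = 1.0 / sum(unnorm.values())            # normalization factor
    return {g: eta * p for g, p in unnorm.items()}

# Placeholder numbers: goal g1 explains Obs much better than g2.
post = posterior({"g1": 0.8, "g2": 0.2}, {"g1": 0.5, "g2": 0.5})
```

With uniform priors, the posterior simply renormalizes the likelihoods, which is the behaviour our extension inherits.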

Compilation Stage
We perform a compilation stage that allows us to use any off-the-shelf fond planner to extract policies for temporally extended goals. To this end, we compile and generate new fond planning domain models Γ′ for the set of possible temporally extended goals G_φ using the compilation approach described in Section 3. Specifically, for every goal φ ∈ G_φ, our compilation takes as input a fond planning problem Γ, where Γ contains the fond planning domain D along with an initial state s_0 and a temporally extended goal φ. As a result, we obtain a new fond planning problem Γ′ associated with the new domain D′. Note that such a new fond planning problem Γ′ encodes new predicates and transitions that allow us to plan for temporally extended goals using off-the-shelf fond planners.
Corollary 1 Let T_φ be a goal recognition problem over a set of ltl f /ppltl goals G_φ, and let T′ be the compiled goal recognition problem over a set of propositional goals G. Then, if T′ has a set of winning policies that solve the set of propositional goals in G, then T_φ has a set of winning policies that solve its ltl f /ppltl goals.
Proof From Theorem 1 we have a bijective mapping between policies of fond planning for ltl f /ppltl goals and policies of standard fond planning.Therefore, the thesis holds.

Recognition Stage
The stage in which we perform the goal recognition task comprises extracting policies for every goal φ ∈ G_φ. From such policies, along with the observations Obs, we compute posterior probabilities for the goals in G_φ by matching the observations with all possible executions in the set of executions ⃗E of the policies. To ensure compatibility with the policies, we assume the recognizer knows the preference relation over actions for the observed agent when unrolling a policy during search.

Computing Policies and the Set of Executions ⃗ E for G φ
We extract policies for every goal φ ∈ G_φ using the new fond planning domain models Γ′, and, for each of these policies, we enumerate the set of possible executions ⃗E. The aim of enumerating the possible executions ⃗E for a policy π is to attempt to infer which execution ⃗e ∈ ⃗E the observed agent is performing in the environment. Environmental non-determinism prevents the recognizer from determining the specific execution ⃗e the observed agent goes through to achieve its goals. The recognizer considers as possible executions all paths to the goal with no repeated states. This assumption is partially justified by the fact that the probability of entering loops multiple times is low, and relaxing it is an important direction for future work.
After enumerating the set of possible executions ⃗E for a policy π, we compute the average distance of all actions in ⃗E to the goal state φ from the initial state s_0. We note that strong-cyclic solutions may have infinitely many possible executions. However, here we consider executions that do not enter loops, and, for those entering possible loops, we consider only the ones entering loops at most once. Indeed, the computation of the average distance is not affected by the occurrence of possibly repeated actions: if the observed agent executes the same action repeatedly, this does not change its distance to the goal. The average distance aims to estimate "how far" every observation o ∈ Obs is from a goal state φ. This average distance is computed because some executions ⃗e ∈ ⃗E may share the same action in their sequences but at different time steps. We refer to this average distance as d. For example, consider the policy π depicted in Figure 1b. This policy π has two possible executions for achieving the goal state from the initial state, and these two executions share some actions, such as (move 11 21). In particular, this action appears twice in Figure 1b due to its uncertain outcome. Therefore, this action has two different distances to the goal state (counting the number of remaining actions towards it): distance = 1, if the outcome of this action generates the state s_2; and distance = 2, if the outcome of this action generates the state s_3. Hence, since this policy π has two possible executions, and the sum of the distances is 3, the average distance of this action to the goal state is d = 1.5. The average distances for the other actions in this policy are: d = 1 for (changetire 21), because it appears in only one execution; and d = 0 for (move 21 22), because executing this action achieves the goal state.
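The worked example above can be sketched as follows (representing each execution simply as its action sequence; the layout is ours): an action's distance in an execution is the number of actions remaining after it, and d averages that distance over the executions in which the action occurs.

```python
from collections import defaultdict

def average_distances(executions):
    """Average distance d of each action to the goal, over all executions."""
    dist_sum, count = defaultdict(float), defaultdict(int)
    for execution in executions:
        n = len(execution)
        for i, action in enumerate(execution):
            dist_sum[action] += n - 1 - i   # actions remaining after this one
            count[action] += 1
    return {a: dist_sum[a] / count[a] for a in dist_sum}

# The two executions of the Figure 1b policy:
executions = [
    ["move_11_21", "move_21_22"],                    # tire does not go flat
    ["move_11_21", "changetire_21", "move_21_22"],   # flat tire, fix it first
]
d = average_distances(executions)   # d["move_11_21"] = (1 + 2) / 2 = 1.5
```

This reproduces the numbers of the example: 1.5 for (move 11 21), 1 for (changetire 21), and 0 for (move 21 22).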
We use d to compute an estimated score that expresses "how far" every observed action in the observation sequence Obs is from a temporally extended goal φ in comparison to the other goals in the set of goal hypotheses G_φ. This means that the goal(s) with the lowest score(s) along the execution of the observed actions o ∈ Obs is (are) the one(s) that, most likely, the observation sequence Obs aims to achieve. We note that the average distance d for those observations o ∈ Obs that are not in the set of executions ⃗E of a policy π is set to a large constant number, i.e., to d = e^5. As part of the computation of this estimated score, we compute a penalty value that directly affects the estimated score. This penalty value represents a penalization that aims to increase the estimated score for those goals in which a pair of subsequent observations ⟨o_{i−1}, o_i⟩ in Obs has no order relation in the set of executions ⃗E of these goals. We use the Euler constant e to compute this penalty value, formally defined as e^{p(o_{i−1}, o_i)}, in which we use R(⃗e) to denote the set of order relations of an execution ⃗e. For the goal recognition problem of Example 2, the resulting estimated scores after the second observation o_1 are 61.87 for φ_0, 2.5/(e^5 + 2.5) = 0.016 for φ_1, and 4/(e^5 + 4) = 0.026 for φ_2. Note that, for the observation o_1, the average distance d for φ_0 is e^5 = 148.4, because this observation is not an action of any of the executions in the set of executions for this goal (Obs aims to achieve the intended goal φ* = φ_1). Furthermore, the penalty value is applied to φ_0, i.e., e^1 = 2.71. We can see that the estimated score of the intended goal φ_1 is always the lowest for all observations in Obs, especially when we observe the second observation o_1. Note that our approach correctly infers the intended goal φ*, even after observing just a few actions.

Computing Posterior Probabilities for G φ
To compute the posterior probabilities over the set of possible temporally extended goals G φ, we start by computing the average estimated score for every goal φ ∈ G φ over the observations o ∈ Obs; we formally define this computation as E(φ, Obs, G φ), i.e., the estimated scores of φ for the observations in Obs averaged over |Obs|, as follows: The average estimated score E estimates "how far" a goal φ is from being achieved compared to the other goals (G φ ∖ {φ}) by averaging over all the observations in Obs. The lower the average estimated score E of a goal φ, the more likely this goal is to be the one that the observed agent aims to achieve. Consequently, E (Equation 5) has two important properties, as follows.
Proposition 1 Given that the sequence of observations Obs corresponds to an execution ⃗e ∈ ⃗E that aims to achieve the actual intended hidden goal φ* ∈ G φ, the average estimated score output by E will tend to be the lowest for φ* in comparison to the scores of the other goals (G φ ∖ {φ*}), as observations increase in length.
Proposition 2 If we restrict the recognition setting and define that the goal hypotheses G φ are not sub-goals of each other, and observe all observations in Obs (i.e., full observability), we will have the intended goal φ * with the lowest score among all goals, i.e., ∀φ ∈ G φ is the case that E(φ * , Obs, G φ ) ≤ E(φ, Obs, G φ ).
After defining how we compute the average estimated score E for the goals using Equation 5, we can define how our approach tries to maximize the probability of observing a sequence of observations Obs for a given goal φ, as follows: Thus, using the estimated score in Equation 6, we can infer that the goals φ ∈ G φ with the lowest estimated scores are the most likely to be achieved, according to the probability interpretation we propose in Equation 5. For instance, consider the goal recognition problem presented in Example 2 and the estimated scores we computed for the temporally extended goals φ0, φ1, and φ2 based on the observation sequence Obs. From this, we have the following probabilities P(Obs | φ) for the goals: After normalizing these computed probabilities using the normalization factor η, and assuming that the prior probability P(φ) is equal for every goal in the set of goals G φ, we can use Equation 6 to compute the posterior probabilities (Equation 1) for the temporally extended goals G φ. We define the solution to a recognition problem T φ (Definition 8) as the set of temporally extended goals G* φ with maximum probability, formally: G* φ = argmax_{φ ∈ G φ} P(φ | Obs). Hence, considering the normalizing factor η and the probabilities P(Obs | φ) computed before, we then have the following posterior probabilities for the goals in Example 2: P(φ0 | Obs) = 0.001; P(φ1 | Obs) = 0.524; and P(φ2 | Obs) = 0.475. Recall that in Example 2, φ* is φ1; according to the computed posterior probabilities, we then have G* φ = {φ1}, so our approach yields only the correct intended goal after observing just two observations.
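The posterior computation can be sketched as follows. The exact likelihood of Equation 6 is not reproduced here; this illustration assumes the common choice P(Obs | φ) ∝ e^{−E(φ, Obs, G φ)} with uniform priors, which preserves the intended ranking (lowest score, highest posterior) even though the resulting numbers need not match the paper's:

```python
import math

def posteriors(avg_scores, prior=None):
    """Posterior P(goal | Obs) from average estimated scores E.

    Assumed likelihood (not necessarily the paper's Equation 6):
    P(Obs | goal) is proportional to e^(-E), so lower scores yield
    higher posteriors; eta is the normalization factor.
    """
    goals = list(avg_scores)
    prior = prior or {g: 1.0 / len(goals) for g in goals}
    likelihood = {g: math.exp(-avg_scores[g]) for g in goals}
    eta = 1.0 / sum(likelihood[g] * prior[g] for g in goals)
    return {g: eta * likelihood[g] * prior[g] for g in goals}

# Average estimated scores for Example 2 (score values from the text above):
post = posteriors({"phi0": 61.87, "phi1": 0.016, "phi2": 0.026})
best = max(post, key=post.get)  # phi1: lowest score, highest posterior
```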
Using the average distance d and the penalty value p allows our approach to disambiguate similar goals during the recognition stage. For instance, consider the following possible temporally extended goals: φ0 = ϕ1 U ϕ2 and φ1 = ϕ2 U ϕ1. Here, both goals have the same formulas to be achieved, i.e., ϕ1 and ϕ2, but in a different order. Thus, even though they share the same formulas, the sequences of their policies' executions are different. Therefore, the average distances are also different, with a smaller value likely for the temporally extended goal that the agent actually aims to achieve, and the penalty value may also be applied to the other goal if two subsequent observations have no order relation in that goal's set of executions.

Computational Analysis
The most computationally expensive part of our recognition approach is computing the policies π for the goal hypotheses G φ. Our approach thus requires |G φ| calls to an off-the-shelf fond planner, so its computational cost is linear in the number of goal hypotheses |G φ|. In contrast, to recognize goals and plans in Classical Planning settings, the approach of Ramírez and Geffner (2010) requires 2 ∗ |G| calls to an off-the-shelf Classical planner. Concretely, to compute P(Obs | G), Ramírez and Geffner's approach computes two plans for every goal and, based on these two plans, computes a cost difference between them and plugs it into a Boltzmann equation. Computing these two plans requires a non-trivial transformation process that modifies both the domain and the problem: an augmented domain and problem to compute a plan that complies with the observations, and another augmented domain and problem to compute a plan that does not comply with the observations. Essentially, the intuition of Ramírez and Geffner's approach is that the lower the cost difference for a goal, the higher the probability of that goal, much like the intuition behind our estimated score E.

Experiments and Evaluation
We now present experiments and evaluations carried out to validate the effectiveness of our recognition approach.We empirically evaluate our approach over thousands of goal recognition problems using well-known fond planning domain models with different types of temporally extended goals expressed in ltl f and ppltl.
The source code of our PDDL encoding for ltl f and ppltl goals and of our temporally extended goal recognition approach, as well as the recognition datasets and results, are available on GitHub.

Domains, Recognition Datasets, and Setup
For experiments and evaluation, we use six fond planning domain models, most of which are commonly used in the AI Planning community to evaluate fond planners (Mattmüller et al, 2010; Muise et al, 2012): Blocks-World, Logistics, Tidy-up, Tireworld, Triangle-Tireworld, and Zeno-Travel. These domain models involve practical real-world tasks, such as stacking, picking up and putting down objects, and loading and unloading objects. Some of the domains combine more than one of these characteristics, namely Logistics, Tidy-up (Nebel et al, 2013), and Zeno-Travel, which involve navigating and manipulating objects in the environment. In practice, our recognition approach is capable of recognizing not only the set of facts of a goal that an observed agent aims to achieve from a sequence of observations, but also the temporal order (e.g., the exact order) in which the agent aims to achieve this set of facts. For instance, Tidy-up is a real-world application domain whose purpose is to define planning tasks for a household robot that assists elderly people in a smart-home setting; there, our approach would be able to monitor the household robot and assist it in achieving its goals in a specific order.
Based on these fond planning domain models, we build different recognition datasets: a baseline dataset using conjunctive goals (ϕ 1 ∧ϕ 2 ) and datasets with ltl f and ppltl goals.
For the ltl f datasets, we use three types of goals:
- ◇ϕ, where ϕ is a propositional formula, expressing that eventually ϕ will be achieved. This temporal formula is analogous to a conjunctive goal;
- ◇(ϕ1 ∧ ○(◇ϕ2)), expressing that ϕ1 must hold before ϕ2 holds. For instance, we can define a temporal goal that expresses the order in which a set of packages in the Logistics domain should be delivered;
- ϕ1 U ϕ2, expressing that ϕ1 must hold until ϕ2 is achieved. For the Tidy-up domain, we can define a temporal goal stating that no one can be in the kitchen until the robot cleans the kitchen.
For the ppltl datasets, we use two types of goals:
- ϕ1 ∧ ϕ2, expressing that ϕ1 holds and ϕ2 held once. For instance, in the Blocks-World domain, we can define a past temporal goal that only allows stacking a set of blocks (a, b, c) once another set of blocks (d, e) has been stacked;
- ϕ1 ∧ (¬ϕ2 S ϕ3), expressing that ϕ1 holds and, since ϕ3 held, ϕ2 has not been true. For instance, in Zeno-Travel, we can define a past temporal goal expressing that person1 is at city1 and, since person2 arrived at city1, the aircraft has not passed through city2.
Thus, in total, we have six different recognition datasets over the six fond planning domains and the temporal formulas presented above. Each of these datasets contains hundreds of recognition problems (≈ 390 recognition problems per dataset), such that each recognition problem T φ comprises a fond planning domain model D, an initial state s0, a set of possible goals G φ (expressed in either ltl f or ppltl), the actual intended hidden goal φ* ∈ G φ, and an observation sequence Obs. We note that the set of possible goals G φ contains very similar goals (e.g., φ0 = ϕ1 U ϕ2 and φ1 = ϕ2 U ϕ1), and all possible goals can be achieved from the initial state by a strong-cyclic policy. For instance, for the Tidy-up domain, we define the following ltl f goals as possible goals G φ: φ0 = ◇((wiped desk1) ∧ ○(◇(on book1 desk1))); φ1 = ◇((on book1 desk1) ∧ ○(◇(wiped desk1))); φ2 = ◇((on cup1 desk2) ∧ ○(◇(wiped desk2))); φ3 = ◇((wiped desk2) ∧ ○(◇(on cup1 desk2))). Note that some of the goals described above share the same formulas and fluents, but some of these formulas must be achieved in a different order, e.g., φ0 and φ1, and φ2 and φ3. The recognition approach we developed in this paper is very accurate in discerning (Table 1) the order in which the intended goal is to be achieved, based on only a few observations (executions of the agent in the environment).
As mentioned earlier in the paper, an observation sequence contains a sequence of actions representing an execution ⃗e in the set of possible executions ⃗E of a policy π that achieves the actual intended hidden goal φ*, and this observation sequence Obs can be full or partial. To generate the observations Obs for φ* and build the recognition problems, we extract strong-cyclic policies using different fond planners, such as PRP and MyND. A full observation sequence represents an execution (a sequence of executed actions) of a strong-cyclic policy that achieves the actual intended hidden goal φ*, i.e., 100% of the actions of ⃗e are observed. A partial observation sequence is a sub-sequence of actions of a full execution that achieves the actual intended hidden goal φ* (e.g., an execution with "missing" actions, due to a sensor malfunction). In our recognition datasets, we define four levels of observability for a partial observation sequence: 10%, 30%, 50%, or 70% of the actions are observed. For instance, for a full observation sequence Obs with 10 actions (100% observability), the corresponding partial observation sequence has one observed action at 10% observability, three observed actions at 30%, and so on for the other levels of observability.
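The construction of partial observation sequences can be sketched as follows; the function name and the sampling seed are illustrative assumptions, not details from the paper:

```python
import random

def partial_observations(full_obs, observability, seed=0):
    """Sample a partial observation sequence from a full execution.

    Keeps round(observability * len(full_obs)) actions (at least one),
    preserving their original relative order.
    """
    rng = random.Random(seed)
    k = max(1, round(observability * len(full_obs)))
    kept = sorted(rng.sample(range(len(full_obs)), k))
    return [full_obs[i] for i in kept]

full = [f"a{i}" for i in range(10)]  # a full 10-action execution
assert len(partial_observations(full, 0.10)) == 1
assert len(partial_observations(full, 0.30)) == 3
```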
We ran all experiments using the PRP planner (Muise et al, 2012) on a single core of a 12-core Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz with 16GB of RAM, with a maximum memory usage limit of 8GB and a 10-minute timeout per recognition problem. We note that we are unable to provide a direct comparison of our approach against existing recognition approaches in the literature because most of these approaches perform a non-trivial process that transforms a recognition problem into planning problems to be solved by a planner (Ramírez and Geffner, 2010; Sohrabi et al, 2016). Even if we adapted such a transformation to work in fond settings with temporally extended goals, we could not guarantee that it would work properly in the problem setting we propose in this paper.

Evaluation Metrics
We evaluate our goal recognition approach using metrics widely used in the Goal and Plan Recognition literature (Ramírez and Geffner, 2009; Vered et al, 2016; Pereira et al, 2020). To evaluate our approach in the Offline Keyhole Recognition setting, we use four metrics, as follows:
- True Positive Rate (TPR) measures the fraction of times the intended hidden goal φ* was correctly recognized, e.g., the percentage of recognition problems in which our approach correctly recognized the intended goal. A higher TPR indicates better accuracy, measuring how often the intended hidden goal had the highest probability P(φ | Obs) among the possible goals. TPR (Equation 7) is the ratio between true positive results and the sum of true positive and false negative results;
- False Positive Rate (FPR) measures how often goals other than the intended goal are (wrongly) recognized as the intended one. A lower FPR indicates better accuracy. FPR is the ratio between false positive results and the sum of false positive and true negative results;
- False Negative Rate (FNR) measures the fraction of times the correct intended goal was recognized incorrectly. A lower FNR indicates better accuracy. FNR (Equation 9) is the ratio between false negative results and the sum of false negative and true positive results;
- F1-Score (Equation 10) is the harmonic mean of precision and sensitivity (i.e., TPR), representing the trade-off between true positive and false positive results. The highest possible value of an F1-Score is 1.0, indicating perfect precision and sensitivity, and the lowest possible value is 0. Thus, higher F1-Score values indicate better accuracy.
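These four metrics can be computed from raw counts as in the following sketch (the counts in the usage line are made-up numbers for illustration, not results from the paper):

```python
def recognition_metrics(tp, fp, fn, tn):
    """TPR, FPR, FNR, and F1-Score (Equations 7-10) from raw counts."""
    tpr = tp / (tp + fn)          # true positive rate (sensitivity)
    fpr = fp / (fp + tn)          # false positive rate
    fnr = fn / (fn + tp)          # false negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)  # harmonic mean
    return tpr, fpr, fnr, f1

# Made-up counts for illustration: 74 of 100 problems correctly recognized.
tpr, fpr, fnr, f1 = recognition_metrics(tp=74, fp=10, fn=26, tn=90)
```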
In contrast, to evaluate our approach in the Online Recognition setting, we use the following metric:
- Ranked First measures the number of times the intended goal hypothesis φ* has been correctly ranked first as the most likely intended goal. Higher values for this metric indicate better accuracy for performing online recognition.
In addition to the metrics mentioned above, we also evaluate our recognition approach in terms of recognition time (Time), which is the average time in seconds to perform the recognition process (including the calls to a fond planner).

Offline Keyhole Recognition Results
We now assess how accurate our recognition approach is in the Keyhole Recognition setting. Table 1 shows three inner tables that summarize and aggregate the average results over all six datasets for four metrics: Time, TPR, FPR, and FNR. |G φ| represents the average number of goals in the datasets, and |Obs| the average number of observations. Each row in these inner tables represents an observation level, varying from 10% to 100%. Figure 5 compares the performance of our approach using the F1-Score for the six types of temporal formulas used in the evaluation. Table 2 shows the results for each of the six datasets in much more detail.

Offline Results for Conjunctive and Eventuality Goals
The first inner table compares the average performance of our approach on conjunctive goals and on temporally extended goals using the eventually temporal operator ◇. We refer to this comparison as the baseline, since these two types of goals have the same semantics. We can see that the results for these two types of goals are very similar across all levels of observability. Moreover, our recognition approach is very accurate and performs well at all levels of observability, yielding high TPR values and low FPR and FNR values for more than 10% observability. Note that for 10% observability and ltl f goals of the form ◇φ, the average TPR value is 0.74, meaning that for 74% of the recognition problems our approach correctly recognized the intended temporally extended goal while observing, on average, only 3.85 actions. Figure 5a shows that our approach yields higher F1-Score values (i.e., greater than 0.79) for these types of formulas when dealing with more than 50% observability.

Offline Results for ltl f Goals
Regarding the results for the two types of ltl f goals (second inner table), our approach is accurate for all metrics at all levels of observability, apart from the results at 10% observability for ltl f goals in which the formulas must be achieved in a certain order. Note that our approach is accurate even when observing just a few actions (2.1 actions at 10% and 5.4 at 30%), although not as accurate as with more than 30% observability. Figure 5b shows that our approach yields higher F1-Score values (i.e., greater than 0.75) when dealing with more than 30% observability.

Offline Results for ppltl Goals
Finally, as for the results for the two types of ppltl goals, the last inner table shows that the overall average number of observations |Obs| is smaller than for the other datasets, making the goal recognition task more difficult for the ppltl datasets. Yet, our recognition approach remains accurate when dealing with fewer observations. We can also see that the FNR values increase at low observability, but the FPR values are, on average, below ≈ 0.15. Figure 5c shows that the F1-Score values of our approach gradually increase as the percentage of observability increases.

Online Recognition Results
With the experiments and evaluation in the Keyhole Offline recognition setting in place, we now proceed to the experiments and evaluation in the Online recognition setting. As noted before, performing the recognition task in the Online setting is usually harder than in the offline setting: the recognition task has to be performed incrementally and gradually, seeing the observations step-by-step, rather than analyzing all observations at once, as in the offline recognition setting.
Figure 6 exemplifies how we evaluate our approach in the Online recognition setting. To do so, we use the Ranked First metric, which measures how many times over the observation sequence the correct intended goal φ* has been ranked first (i.e., as the top-1 goal) over the goal hypotheses G φ. The recognition problem example depicted in Figure 6 has five goal hypotheses (y-axis) and ten actions in the observation sequence (x-axis). As stated before, the recognition task in the Online setting is done gradually, step-by-step, so at every step our approach ranks the goals according to the probability distribution over the goal hypotheses G φ. In the example in Figure 6, the correct goal φ* is ranked first six times (at observation indexes 4, 6, 7, 8, 9, and 10) over an observation sequence with ten observations, which means that the correct intended goal φ* is Ranked First (i.e., as the top-1, with the highest probability among the goal hypotheses G φ) 60% of the time over the observation sequence for this recognition example.
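The Ranked First computation for this example can be sketched as follows; the per-step top-1 goals are hypothetical, chosen to match the 60% figure above:

```python
def ranked_first_pct(top1_per_step, intended):
    """Percentage of observation steps at which `intended` is ranked top-1."""
    hits = sum(1 for goal in top1_per_step if goal == intended)
    return 100.0 * hits / len(top1_per_step)

# Hypothetical top-1 goal after each of the ten observations in Figure 6;
# the intended goal g_star is first at steps 4, 6, 7, 8, 9, and 10:
top1 = ["g2", "g3", "g2", "g_star", "g2",
        "g_star", "g_star", "g_star", "g_star", "g_star"]
pct = ranked_first_pct(top1, "g_star")  # 60.0
```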
We aggregate the average recognition results of all six datasets for the Ranked First metric as a histogram, considering full observation sequences that represent executions (sequences of executed actions) of strong-cyclic policies that achieve the actual intended goal φ*, and we show these results in Figure 7. The results represent the overall percentage (including the standard deviation, shown as black bars) of the time that the correct intended goal φ* has been ranked first over the observations. The average results indicate that our approach is, in general, accurate in correctly recognizing the temporal order of the facts in the goals in the Online recognition setting, yielding Ranked First percentage values greater than 58%. Figures 8, 9, 10, 11, 12, and 13 show the Online recognition results separately for all six domain models and the different types of temporally extended goals. By analyzing the Online recognition results more closely, we see that our approach converges to ranking the correct goal as the top-1 mostly after a few observations. This means that it is commonly hard to disambiguate among the goals at the beginning of the execution, which, in turn, directly affects the overall Ranked First percentage values (as we can see in Figure 7). We can observe that our approach struggles to disambiguate and correctly recognize the intended goal for some recognition problems and some types of temporal formulas. Namely, our approach struggled to disambiguate when dealing with ltl f Eventuality goals in Blocks-World (see Figure 8a), most temporally extended goals in Tidy-up (see Figure 10), and ltl f Eventuality goals in Zeno-Travel (see Figure 13a).

Related Work and Discussion
To the best of our knowledge, existing approaches to Goal and Plan Recognition as Planning cannot explicitly recognize temporally extended goals in non-deterministic environments.Seminal and recent work on Goal Recognition as Planning relies on deterministic planning techniques (Ramírez and Geffner, 2009;Sohrabi et al, 2016;Pereira et al, 2020) for recognizing conjunctive goals.By contrast, we propose a novel problem formalization for goal recognition, addressing temporally extended goals (ltl f or ppltl goals) in fond planning domain models.While our probabilistic approach relies on the probabilistic framework of Ramírez and Geffner (2010), we address the challenge of computing P(Obs | G) in a completely different way.
There exist different techniques to Goal and Plan Recognition in the literature, including approaches that rely on plan libraries (Avrahami-Zilberbrand and Kaminka, 2005), context-free grammars (Geib and Goldman, 2009), and Hierarchical Task Network (HTN) (Höller et al, 2018).Such approaches rely on hierarchical structures that represent the knowledge of how to achieve the possible goals, and this knowledge can be seen as potential strategies for achieving the set of possible goals.
Note that the temporal constraints of temporally extended goals can be adapted and translated to such hierarchical knowledge. For instance, context-free grammars are expressive enough to encode temporally extended goals (Chiari et al, 2020). ltl f has the expressive power of the star-free fragment of regular expressions and is hence captured by context-free grammars. However, unlike regular expressions, ltl f uses negation and conjunction liberally, and the translation to regular expressions is computationally costly. Note that being equally expressive is not a meaningful indication of the complexity of transforming one formalism into another. De Giacomo et al (2020) show that, while ltl f and ppltl have the same expressive power, the best known translation techniques between them are worst-case 3EXPTIME.
As far as we know, there are no encodings of ltl f -like specification languages into HTN, and the difficulty of such an encoding is unclear. Nevertheless, combining HTN and ltl f could be an interesting direction for further study. HTN techniques focus on knowledge about the decomposition of traces, whereas ltl f -like solutions focus on knowledge about dynamic properties of traces, similar to what is done in verification settings.
Most recently, Bonassi et al (2023) developed a novel Pure-Past Linear Temporal Logic PDDL encoding for planning in the Classical Planning setting.

Conclusions
We have introduced a novel problem formalization for recognizing temporally extended goals, specified in either ltl f or ppltl, in fond planning domain models.We have also developed a novel probabilistic framework for goal recognition in such settings, and implemented a compilation of temporally extended goals that allows us to reduce the problem of fond planning for ltl f /ppltl goals to standard fond planning.We have shown that our recognition approach yields high accuracy for recognizing temporally extended goals (ltl f /ppltl) in different recognition settings (Keyhole Offline and Online recognition) at several levels of observability.
As future work, we intend to extend and adapt our recognition approach to deal with spurious (noisy) observations, and to recognize not only the temporally extended goals but also anticipate the policy that the agent is executing to achieve them.

(:action trans
  :parameters (?x - location)
  :precondition (not (turnDomain))
  :effect (and

Fig. 8: Online recognition ranking over the observations for Blocks-World.

Fig. 9: Online recognition ranking over the observations for Logistics.
ltl f Ordering.
ltl f Until.

Fig. 10: Online recognition ranking over the observations for Tidy-Up.

Fig. 13: Online Recognition ranking over the observations for Zeno-Travel.
We define the task of goal recognition in fond planning domains with ltl f and ppltl goals by extending the standard definition of Plan Recognition as Planning (Ramírez and Geffner, 2009), as follows.

Definition 8 A goal recognition problem in a fond planning setting with temporally extended goals (ltl f and/or ppltl) is a tuple T φ = ⟨D, s0, G φ, Obs⟩, where: D = ⟨2^F, A, α, tr⟩ is a fond planning domain; s0 is the initial state; G φ = {φ0, φ1, ..., φn} is the set of goal hypotheses formalized in ltl f or ppltl, including the intended goal φ* ∈ G φ; and Obs = ⟨o0, o1, ..., on⟩ is a sequence of successfully executed (non-deterministic) actions of a policy π φ* that achieves the intended goal φ*, s.t. oi ∈ A.
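For illustration, the tuple of Definition 8 maps naturally onto a simple container; the field types below are placeholder assumptions, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import FrozenSet, List

@dataclass
class RecognitionProblem:
    """The tuple T_phi = <D, s0, G_phi, Obs> of Definition 8.

    Field types are illustrative: the domain would be a full FOND model
    <2^F, A, alpha, tr>, and the goal hypotheses LTLf/PPLTL formulas.
    """
    domain: object                 # FOND planning domain D
    initial_state: FrozenSet[str]  # s0
    goal_hypotheses: List[str]     # G_phi, includes the intended goal phi*
    observations: List[str]        # Obs: executed actions o_i in A

problem = RecognitionProblem(
    domain=None,
    initial_state=frozenset({"(at r1 l11)"}),
    goal_hypotheses=["phi0", "phi1", "phi2"],
    observations=["(move 11 21)", "(move 21 22)"],
)
```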

Table 1: Offline Recognition results for Conjunctive, ltl f , and ppltl goals.