Verification and validation meet planning and scheduling

  • Saddek Bensalem
  • Klaus Havelund
  • Andrea Orlandini

Abstract

A planning and scheduling (P&S) system takes as input a domain model and a goal, and produces a plan of actions to be executed, which will achieve the goal. A P&S system typically also offers plan execution and monitoring engines. Due to the non-deterministic nature of planning problems, it is a challenge to construct correct and reliable P&S systems, including, for example, their declarative domain models. Verification and validation (V&V) techniques have been applied to address these issues. Furthermore, V&V systems have been applied to actually perform planning, and conversely, P&S systems have been applied to perform V&V of more traditional software. This article overviews some of the literature on the fruitful interaction between V&V and P&S.

Keywords

Verification and validation · Planning and scheduling · Model checking · Theorem proving · Testing · Monitoring

1 Introduction

This article introduces a special volume of the International Journal on Software Tools for Technology Transfer, containing extended versions of selected papers presented at the 3rd ICAPS workshop on verification and validation (V&V) of planning and scheduling (P&S) systems, abbreviated VVPS, held in Freiburg, Germany, 2011. The article provides an overview of literature on V&V of P&S systems, and more broadly on the intersection of V&V and P&S.

P&S systems are finding increased application in mission-critical systems that operate under high levels of unpredictability. Given a description of a desired goal, and a model of possible actions and their causal/temporal constraints, the planning problem consists of finding a plan: a sequence of actions whose execution is calculated to lead to the goal state under “normal” circumstances. Such technology can be used to generate plans to control a plant (for example a spacecraft or a rover), driven by goals often issued by humans. Such technology is occasionally referred to as model-based autonomy.

One of the first applied approaches to model-based autonomy in a real-world context was NASA’s Deep-Space 1 (DS-1) experiment (1998–2001) [48]. DS-1 was equipped with a “Remote Agent” (RA) software module capable of model-based goal-driven planning and scheduling, plan execution, monitoring and diagnosis, and recovery. The model-based diagnosis component of the RA was the Livingstone system [88]. Livingstone estimated the mode of the spacecraft by updating a diagnosis model, taking into account the commands issued and the observations received from the spacecraft. The RA monitoring and diagnosis system was an interesting and effective form of V&V technology, running in parallel with the executing system, and was in itself a contribution to the V&V research field.

However, broader scoped tools and methodologies for V&V [72] of P&S systems have until recently received relatively little attention, although this is changing, as documented in this article. In this regard, it is worth recalling that verification is the act of determining whether an artifact is correct with respect to a formalized specification, whereas validation is the act of determining whether an artifact is correct with respect to informal intentions. Another popular definition is that verification is concerned with ensuring that you are building the thing right, whereas validation is concerned with ensuring that you are building the right thing. The literature surveyed in this article does not in all cases conform to these definitions, and we therefore do not always conform either, in order to be faithful to the formulations chosen by the various authors. In truth, the majority of the papers surveyed are concerned with verification rather than validation. See [80] for a description of the knowledge-acquisition process for models and heuristics of a complex autonomous system based on P&S, and [61] for a review of V&V problems and methods suitable for AI-based systems.

P&S systems have unique architectural features that give rise to new V&V challenges. A planning system typically consists of a planner, see Fig. 1, that is largely stable across applications. A planner takes as input a declaratively specified domain model, stable for a particular application, and a problem model defining a given initial state and a goal to be achieved, varying within an application. From these two inputs, the planner produces, usually taking advantage of heuristic search, a plan of actions achieving the goal. The plan is subsequently executed by an exec, which controls a plant via actuators. The execution in turn produces an execution trace of the actions executed, which is fed to a monitor, which also reads the status of the controlled plant via sensors. Based on these observations, the monitor determines whether the execution is well behaved, and as a result provides input to generate new goals (updating the problem model) for the next planning step.
Fig. 1 Generic planning architecture
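To make the data flow of Fig. 1 concrete, the following minimal sketch shows one iteration of the plan/execute/monitor cycle in Python. All names and interfaces (Problem, planner, exec_engine, monitor) are illustrative assumptions, not the API of any particular P&S system.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

State = dict        # a state maps variable names to values
Plan = List[str]    # a plan is a sequence of action names

@dataclass
class Problem:
    initial: State                  # the initial situation
    goal: Callable[[State], bool]   # the goal, as a state predicate

def cycle(domain, problem: Problem, planner, exec_engine, monitor) -> None:
    """One iteration of the generic planning architecture of Fig. 1."""
    plan: Optional[Plan] = planner(domain, problem)
    if plan is None:        # no plan achieves the goal from this state
        return
    # The exec controls the plant and returns the trace of executed
    # actions together with the plant state sensed at the end.
    trace, state = exec_engine(plan)
    # The monitor judges the execution and may emit a new goal,
    # updating the problem model for the next planning step.
    well_behaved, new_goal = monitor(trace, state)
    if new_goal is not None:
        cycle(domain, Problem(state, new_goal), planner, exec_engine, monitor)
```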

Experience has shown that most errors are in domain models, which can be inconsistent, incomplete, or inaccurate models of the target domains. There are currently few tools to support the model construction process itself, and even fewer that can be used to verify and validate the models once they are constructed [76, 85]. Another challenge to V&V of P&S systems is to demonstrate that specific heuristic strategies have reliable and predictable behaviors.

A field closely related to planning is program synthesis. We shall only briefly mention this area of research, without going into details. A thorough survey of the topic can be found in [16]. The general aim of program synthesis is to derive low-level programs from high-level logical declarative specifications. Planning is likewise concerned with deriving “programs” (referred to as plans) from declarative models, namely the domain models. Two kinds of synthesis are mentioned in [16]: (a) synthesis of controllers for reactive systems as well as synthesis of hardware circuits, and (b) synthesis of data-oriented functional and imperative programs. The former category is of specific relevance to planning. A popular approach here is to synthesize finite state programs (automata) from linear temporal logic (LTL) [70] specifications. Examples include [54, 69, 71]. Such programs are typically non-terminating, i.e., programs that accept an infinite stream of requests. In contrast, plans typically do not contain loops. Previous work has been dedicated to the study of semantics/constraint-based workflow synthesis [36]. The basis here is semantic linear time logic, which is interpreted over regular languages (finite words) rather than omega languages (infinite words). Other work includes [84], which presents synthesis technology to automatically compose tool chains in a goal-oriented fashion, and [83], which uses this approach to automatically generate benchmark programs with known temporal properties. The reader may find further references in [16].

The work on NASA’s Remote Agent led to the creation of the VVPS workshop series in 2005 (see footnote 1), aiming at establishing a long-term forum focusing on the interaction between V&V and P&S. The original goal of the VVPS workshop series was to identify innovative V&V tools and methodologies that can be applied to ensure the correctness and reliability of P&S systems. However, the workshop series has also attracted papers with slightly different bents, such as using verification systems, for example model checkers, as planners (planning as model checking). Of course, whether a model checker is used to verify a domain model by exploring its state space, or is used as the planner, is only a subtle difference: similar ideas are used, but with different objectives (verification versus planning). Finally, work has emerged that goes in the complete opposite direction, namely focusing on the use of P&S systems to solve V&V problems for traditional software systems.

The three pieces of work selected and included in this volume of the International Journal on Software Tools for Technology Transfer are briefly described in the following.

Article [42], “A loop acceleration technique to speed up verification of automatically generated plans”, by Goldman, Pelican, and Musliner, presents the integration of an optimization technique (called loop acceleration) into the CIRCA planning system to address the state space explosion issue encountered while performing runtime verification of reactive plans. In particular, this problem arises when checking CIRCA controllers that execute quick reactions to meet environmental threats while simultaneously monitoring long-duration processes. The paper describes the technique and its implementation, showing that it radically speeds up the verification process.

Article [26], “Authorized workflow schemas: deciding realizability through LTL(F) model checking”, by Crampton, Huth, and Kuo, proposes the use of model checking of an NP-complete fragment of propositional LTL as an alternative solution to the workflow satisfiability problem, i.e., the problem of determining whether there exists a workflow plan that realizes a workflow specification. A suitable LTL encoding is presented for modeling business processes as workflows, and the effectiveness of the verification method is demonstrated by checking authorization plans against business rules, legal requirements, and authorization policies.

Article [75] “Generating effective tests for concurrent programs via AI automated planning techniques”, by Razavi, Farzan, and McIlraith, presents a general approach to concurrent program testing that is based on AI automated planning techniques. A framework is proposed for finding concurrent program runs that violate a collection of specifications. The problem of finding failing runs is characterized as a sequential planning problem, with the temporally extended goal of achieving a particular violating pattern.

The remaining part of this introductory article provides a brief survey of other articles published broadly within the intersection of V&V and P&S.

The paper is structured as follows: Sect. 2 surveys literature on V&V of P&S systems (the originally intended theme of the workshop series), specifically (see Fig. 1) V&V of domain models in Sect. 2.1, plans in Sect. 2.2, plan executions in Sect. 2.3, planners in Sect. 2.4, execution engines in Sect. 2.5, and monitors in Sect. 2.6. Section 3 surveys literature on the use of V&V technology for performing planning and scheduling. This includes planning as model checking in Sect. 3.1, and logic-based approaches to planning in Sect. 3.2.

In the other direction, Sect. 4 surveys literature on the use of P&S technology for verification of traditional software systems. Finally, Sect. 5 concludes the paper.

2 V&V of P&S systems

V&V can, as already mentioned, be applied to different artifacts of a P&S system, specifically the domain model, plans, plan executions, the planner, the exec, and the monitor. The following sub-sections cover selected V&V literature in these respective areas.

Most of the approaches mentioned solve specific problems. In [17], however, a more comprehensive approach to on-board autonomy is described, relying on model-based reasoning. This approach offers a uniform formal framework, including model validation, plan generation, plan validation, plan execution and monitoring, as well as fault detection, identification, and recovery. The approach is based on a symbolic representation of the system to control and uses model checking techniques (specifically the NuSMV model checker) to validate the symbolic representation of the system, in essence following a planning as model checking approach (see Sect. 3.1). Representing the formal model as a Kripke structure makes it possible to validate that the model captures the behaviors of the system. Plans contain assumptions that can be checked during execution. The work in [17] is similar in spirit to the Remote Agent experiment, but differs in using the same formal model in all phases.

2.1 V&V of domain models

In P&S systems, the domain model plays a crucial role. A domain model formalizes what actions are possible, and their constraints (for example orderings: action \(A\) must always precede action \(B\)). A domain model in part reflects the complex environment in which the plant is operating. The correctness of a domain model has a direct impact on plan correctness (e.g., safety, liveness) and performance. Due to modeling errors, a domain model can, however, be inconsistent, incomplete, or simply inaccurate. Domain model languages are typically declarative, such as PDDL [60], and models are usually small compared with industrial-sized software programs. Despite this modest size, it is the declarative nature of domain models that makes it a challenge to explore all possible planning scenarios up front. For these reasons, V&V of domain models is a critical task that has been considered by several authors, and which perhaps is the biggest V&V challenge to the P&S community.

Domain model verification aims at showing (a) that plans can, or cannot, be generated for various goals, and (b) that generated plans satisfy given properties. This can be done using formal methods, e.g., model checking, or using more traditional testing. Testing can only show the presence of errors (i.e., if no error is found, there is no guarantee that none exists), whereas model checking in theory can also demonstrate the absence of errors (i.e., if no error is found, we are guaranteed that none exists). Not surprisingly, model checking is computationally much more expensive than testing, since the former will look at all reachable states of the domain model. Because the number of such states is in general exponential in the domain size (state explosion), only moderate-size domains can typically be handled using model checking techniques for exhaustive verification. Note that the problem is different when using model checkers for planning (planning as model checking), since the goal there is to find a single plan (error trace), rather than to perform an exhaustive search. In a testing-based approach to domain model verification, a large number of plans are generated and then checked to verify that each of them satisfies the given properties. Testing-based domain verification rests on plan verification, which is discussed in Sect. 2.2.
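The testing-based approach can be sketched as a simple driver loop; the following Python fragment is a minimal illustration under assumed interfaces (planner, check_plan, and the property representation are placeholders, not any specific tool’s API).

```python
def test_domain_model(domain, sample_goals, planner, check_plan, properties):
    """Testing-based domain verification: generate plans for sampled goals
    and check each generated plan against the desired properties."""
    failures = []
    for goal in sample_goals:
        plan = planner(domain, goal)         # (a) can a plan be generated?
        if plan is None:
            failures.append((goal, "no plan found"))
            continue
        for prop in properties:              # (b) does the plan satisfy it?
            if not check_plan(domain, plan, prop):
                failures.append((goal, f"plan violates {prop}"))
    # An empty result means no error was found; it does not prove absence.
    return failures
```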

The problem of verifying and validating domain models is discussed in general terms in [56]. The paper identifies some examples of domain modeling errors and discusses how common they are, noting that domain models usually are much smaller than traditional software. One technique suggested is to cast a domain model into different representations, each focusing on different aspects of the model. This process enhances inspection by requiring the reader to go through a process of mental evaluation. The paper also discusses how plan validation is a process that indirectly validates domain models, comparing plan validation with unit testing.

2.1.1 V&V of domain models using model checking

In a model checking approach, a domain model is formulated as a model in the modeling language of a model checker. The model checker can then be used as a planner by formulating the planning problem as a temporal logic satisfaction problem, where the goal is transformed into a temporal logic formula representing the negation of the goal: that it cannot be reached from the initial state. The model checker will then, if the goal is reachable, produce an error trace leading to the goal state. The error trace represents the plan. Stated in a slightly more formal manner, although still in generic terms, a planning problem
$$\begin{aligned} \varPi = \langle D, P(i,g) \rangle \end{aligned}$$
consists of a domain model \(D\) and a problem model \(P(i,g)\), stating a particular initial situation \(i\) and a goal \(g\). Solving the planning problem consists of applying the planner to the problem to obtain a plan:
$$\begin{aligned} plan := planner(\varPi ) \end{aligned}$$
The planning problem can alternatively be reformulated as a model checking problem as follows. Let \(\varPi ^\mathrm{MC}\) be the corresponding model represented in the model checker’s input language. Using LTL [70] for writing properties, and assuming for simplicity that the goal \(g\) can be carried across unmodified, the planning problem can be formulated as the following satisfaction problem:
$$\begin{aligned} \varPi ^\mathrm{MC} \, \models \, \lnot \diamond g \end{aligned}$$
This represents the assertion, to be proved by the model checker, that there is no execution trace in \(\varPi ^\mathrm{MC}\) from the initial state for which it holds that eventually (\(\diamond \)) the goal state \(g\) is reached. If, however, there is such an execution trace from the initial state leading to a goal state, the model checker will consider this a violation of the property, and will return it as an error trace, effectively the sought-after plan.
Furthermore, checking that a temporal property \(\varPhi \) holds of all plans generated from \(\varPi \) to reach a goal \(g\) corresponds to formulating the following model checking problem, which states that if a trace eventually reaches the goal \(g\), then that trace also satisfies \(\varPhi \):
$$\begin{aligned} \varPi ^\mathrm{MC} \, \models \, (\diamond g) \Rightarrow \varPhi \end{aligned}$$
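For a finite-state domain, the essence of this reformulation can be illustrated with an explicit-state reachability search. The following Python sketch is a toy stand-in for a symbolic model checker: it refutes \(\lnot \diamond g\) by returning the “error trace”, i.e., the plan.

```python
from collections import deque

def plan_by_reachability(initial, successors, goal):
    """successors(state) yields (action, next_state) pairs; goal is a
    predicate on states; states must be hashable."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if goal(state):
            return path    # counterexample to "not eventually g" = the plan
        for action, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [action]))
    return None            # goal unreachable: the property holds
```

Breadth-first order additionally yields a shortest plan; a real model checker explores the same state space symbolically or with other search orders.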
The use of model checking for V&V of domain models was pioneered in [68], using three model checkers (Spin, SMV, Murphi). This work studied expressiveness, as well as efficiency and scalability, of verification of safety and liveness properties of simple planning domains for the HSTS planner [63]. The work described in [43, 81] also explores the use of model checking with Spin to guarantee that all plans enabled by a domain model meet certain desired properties. Real-time temporal properties and temporally flexible plans are not addressed in either of these works.

Formal methods applied to timeline-based temporal planning are considered within the ANML framework, a timeline-based domain modeling language proposed at NASA Ames Research Center. In [77] the authors present a translator from ANMLite (a simplified version of ANML) to the SAL model checker. Given this mapping, the authors illustrate preliminary results to assess the efficiency of model checking in plan synthesis. The main purpose of this work, however, was to support NASA Ames in the definition of the ANML language, by offering a verification technology for analyzing ANML domain models in an exhaustive manner.

Using a more expressive temporal model to represent time constraints, the authors of [52, 53] propose to map from interval-based temporal relation models (i.e., DDL models for HSTS) to timed automata models (UPPAAL). This mapping was in part introduced to understand the relationship between timed automata and P&S technology and in part to explore the application of V&V techniques in timeline-based temporal planning. Analogously, [86] presents a mapping from contingent temporal constraint networks to timed game automata (TGA).

2.1.2 V&V of domain models using testing

A methodology and tool (PDVer) for testing PDDL domain models, based on generating tests from LTL formulas, is presented in [74]. More specifically, from an LTL formula a set of test cases is generated, each consisting of a PDDL goal, and the planner itself is then used to “run the test”: generate a plan or fail. LTL coverage criteria drive the test case generation. The work is based on the observation that the alternative approach, translating a domain model into the language of a model checker and then applying model checking to explore the domain model, is not practical due to the size of the state space, and due to the fact that the PDDL model of the system may include features (such as durations and costs) that are hard to encode in the input language of a model checker.

An approach to regression testing of plan models, using a planner and a temporal property synthesizer, is described in [39]. The scenario is one where a plan model is constantly modified, and after each modification one needs to ensure that it still satisfies certain properties from a planning perspective. For a given goal (test input), the planner generates a plan while emitting planning operations to a log. From the same input, a temporal property is also generated, which this planner log must satisfy. Satisfaction reflects the fact that the plan model itself is correct wrt. certain criteria. By checking logs against temporal properties, higher flexibility is achieved than by comparing logs from different regression runs against each other.
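A minimal sketch of this regression flow, with every interface assumed for illustration, could look as follows.

```python
def regression_check(plan_model, goal, planner, synthesize_property):
    """Run the planner on (plan_model, goal), logging its planning
    operations, and check the log against a temporal property
    synthesized from the same test input."""
    log = []
    plan = planner(plan_model, goal, emit=log.append)  # planner emits operations
    monitor = synthesize_property(goal)                # property as a log monitor
    return plan is not None and monitor(log)
```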

2.2 V&V of plans

Plan verification consists of checking that a generated plan satisfies certain properties. A typical approach is to generate a limited number of sample plans, which are then checked by automated test oracles. For example, this method was employed in the testing of the Remote Agent [78, 79], where a few hundred plans were generated to validate the domain model. The effort in this case, however, was still long and expensive: although the automated test oracles pointed out violations, humans still had to investigate the error reports manually to identify the actual causes of the violations. This and similar efforts have indeed motivated further research on automatic tools for plan verification. Note that plan verification can also be used for automated testing of the planner itself, by showing that the planner’s output is correct with respect to given properties. Checking plans is considerably easier than showing correctness of the planner itself.
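An automated plan checker of this kind can be as simple as the following sketch, assuming a precondition/effect action model; the interfaces are illustrative, not those of the Remote Agent test oracles.

```python
def check_plan(initial_state, plan, actions, goal):
    """actions maps an action name to a (precondition, effect) pair of
    functions on states; returns None if the plan checks out, otherwise
    a human-readable verdict."""
    state = dict(initial_state)
    for step, name in enumerate(plan):
        precondition, effect = actions[name]
        if not precondition(state):
            return f"step {step}: precondition of '{name}' violated"
        state = effect(state)          # apply the action's effect
    return None if goal(state) else "plan does not end in a goal state"
```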

Verification of temporal plans expressed in PDDL with durative actions is enabled by the VAL plan verification tool [46] that has been used during international planning competitions since 2002. However, flexible temporal plans, complex temporal constraints, and other temporal features are still to be addressed [35].

The MURPHY system [40] analyzes a plan to identify ways in which uncontrolled (disturbance) actions could cause the plan to fail in execution, and produces counterexample traces showing how such failures could occur. MURPHY translates a plan into a counter-planning problem, combining a representation of the initial plan with the definition of a set of uncontrolled actions. These uncontrolled actions may be the actions of other agents in the environment, whether friendly, indifferent, or hostile, or they may be events that simply occur. The result of this translation is a disjunctive planning problem, which is further processed to play to the strengths of existing classical planners. Using this formulation, a classical planner can find counterexamples that illustrate ways a plan may go awry.

More recently, work has been performed on verifying flexible timeline-based plans, by translating them into TGA, which are then analyzed using model checking techniques, specifically UPPAAL-TIGA [12]. In this regard, a suitable TGA formalism has been proposed to verify flexible plans [21, 22].

An approach for finding conditional plans with loops and branches, for planning in situations with uncertainty in state properties as well as in object quantities, is described in [82]. A state abstraction technique from static analysis of programs is used to build such plans incrementally, using generalizations of input example plans generated by classical planners. Pre-conditions of the resulting plans with loops are computed. Although this work focuses on generating conditional and looping plans, by determining the plan pre-conditions it effectively addresses verification of plans with program-like structure, including branches and loops.

A problem related to plan verification is the problem of determining the distance between plans generated by a planner using different planning strategies, for example, in a dynamic environment where a plan has to be adapted and replaced with an alternative plan achieving the same goals. The work in [67] defines a notion of plan proximity that is more precise than a previously suggested notion of plan stability. Plan proximity considers actions missing from the reference plan, extra actions added in the new plan, the sequential ordering of the plans, and the expected outcome states of these plans. Robust plan validation during execution is considered in [34], where hybrid timed automata are deployed to handle plan validation with temporal uncertainty. The paper proposes a probing strategy, where plans around a selected original plan are generated, forming a tube around the original plan, to assess robustness in the face of uncertainties concerning the timing of selected actions. The width of the tube is a parameter that can be adjusted to vary the degree of robustness testing.
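As a toy illustration of the ingredients of such a distance (not the actual plan proximity metric of [67]), one can count missing actions, extra actions, and ordering disagreements between two plans, assuming for simplicity that each action occurs at most once per plan.

```python
def plan_distance(reference, new):
    """Toy distance between two plans, each a list of unique action names."""
    ref_set, new_set = set(reference), set(new)
    missing = len(ref_set - new_set)    # actions dropped from the reference
    extra = len(new_set - ref_set)      # actions added in the new plan
    shared = [a for a in reference if a in new_set]
    pos = {a: new.index(a) for a in shared}
    # shared action pairs whose relative order is swapped in the new plan
    swaps = sum(1
                for i in range(len(shared))
                for j in range(i + 1, len(shared))
                if pos[shared[i]] > pos[shared[j]])
    return missing + extra + swaps
```

The metric of [67] additionally takes the expected outcome states into account, which this toy version ignores.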

An approach, applied in a NASA mission, to the verification of command sequences against a set of flight rules before the sequences are sent to a satellite is described in [11]. A command sequence is usually created manually on the ground by scientists and engineers, but has the same characteristics as a plan: it is a sequence of actions (commands) to be executed on board the satellite. The flight rules are properties that command sequences must satisfy, and are in this approach formulated as monitors in the TraceContract tool, an API in the Scala programming language. The API offers classes and methods for writing linear temporal logic properties as well as data-parameterized state machines. TraceContract was originally designed for analyzing program execution traces, where an execution trace is a sequence of events that occur during execution of a program/system. However, from the tool’s perspective, a command sequence is just a sequence of events (commands).
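A flight rule is essentially a temporal property over the command sequence. The following sketch checks one hypothetical rule (“after POWER_ON, no TRANSMIT until CALIBRATE”) as a small state machine, written here in Python rather than in TraceContract’s Scala; the command names are invented for illustration.

```python
def check_flight_rule(commands):
    """Hypothetical rule: once powered on, TRANSMIT requires a prior CALIBRATE."""
    powered, calibrated = False, False
    for i, cmd in enumerate(commands):
        if cmd == "POWER_ON":
            powered, calibrated = True, False
        elif cmd == "CALIBRATE":
            calibrated = True
        elif cmd == "TRANSMIT" and powered and not calibrated:
            return f"command {i}: TRANSMIT before CALIBRATE"
    return None    # the command sequence satisfies the rule
```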

NASA operates manned spacecraft according to rigorously defined procedures. Procedures can be viewed as plans for crew and flight controllers. Procedure V&V is currently mostly done through human reviews. The paper [19] describes an approach to the verification of procedures for human space flight (the Space Shuttle and ISS) based on model checking, specifically with the JPF (Java PathFinder) Java model checker. Procedures formulated in procedure representation language (PRL) are translated into finite state machines (in Java), which, when coupled with a finite state machine representing the controlled system, can be verified.

Some procedures can be executed both automatically and manually, and in some cases manual procedures are defined as backup for automated procedures. An approach to demonstrating that procedures defined in the two different procedure description languages SCL and PRL are equivalent is described in [64]. This is accomplished by translating both procedures to the common verification language Promela and using the Spin model checker to confirm that the procedures behave identically when given identical inputs. The objective is to provide assurance for NASA engineers that if an automatic SCL program cannot be executed, a backup manual procedure in PRL will be equivalent and safe. The approach generalizes to comparisons between other procedure representation languages.

2.3 V&V of plan executions

V&V of plan executions can be categorized into runtime verification (checking executions against properties) and runtime enforcement (enforcing robust plan execution).

2.3.1 Verification of plan executions

V&V of the planner, the domain model, and even the generated plans themselves does not guarantee robustness of actual plan execution. Indeed, a valid plan can be brittle at execution time due to environment conditions that cannot be modeled in advance (e.g., disturbances). As a last line of defense, V&V techniques can be used for plan execution verification, also referred to as runtime verification. Several P&S systems include a monitor component, which observes plan executions and reacts accordingly in case expectations are violated. The Livingstone system [88] is an example of such a monitoring component; it was part of the Remote Agent P&S system controlling the NASA DS-1 spacecraft. Section 2.6 is specifically devoted to V&V of such monitors.

An approach to automatically testing the NASA K9 execution engine, based on checking plan executions against temporal properties, is described in [7]. A test case consists of a plan and an oracle expressed in temporal logic, which can be used to test that the execution of the generated plan conforms to the intended plan semantics. As a follow-up to the experiment described in [7], the work of Giannakopoulou et al. [37] describes a compositional approach to V&V applied to the K9 Rover executive system, using the same runtime verification techniques for checking plan executions, but in a compositional verification context. A study comparing different techniques to verify the K9 execution engine is described in [18]. The techniques include monitoring plan executions against handwritten temporal properties, reflecting the plan semantics, as well as execution trace based deadlock and data race analysis.

The K9 rover plan execution scenario is also considered in [15]. Here, a generated plan for the rover is transformed into a timed automaton. An observer is synthesized from the timed automaton to check whether the sequence of observations complies with the specification.

2.3.2 Robust plan execution

Robust plan execution in uncertain and dynamic environments is a critical issue for plan-based autonomous systems. Indeed, once a planner has generated a temporal plan, it is up to the exec to decide, at run-time, how and when to execute each planned activity, preserving both plan consistency and controllability. Such a capability is even more crucial when the generated plan is temporally flexible and partially specified. Such a plan captures an envelope of potential behaviors, to be instantiated during execution, taking into account temporal/causal constraints and controllable/uncontrollable activities and events. In this regard, several authors (e.g., [62]) have proposed a dynamically controllable execution approach, where a flexible temporal plan is used by an exec system that schedules activities on-line while guaranteeing constraint satisfaction. Given a plan, a plan controller can then be defined as a scheduling function that provides suitable timings for the execution of plan actions, and an exec system can be endowed with such a plan controller to guide plan executions. Several research initiatives integrate P&S and V&V techniques aiming to enforce robust execution and monitoring in this manner.
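In its simplest form, such a plan controller is a scheduling function over a flexible plan whose activities carry [earliest, latest] start windows. The sketch below uses assumed data structures and a toy fixed duration; real execs observe activity end events and re-schedule dynamically.

```python
def dispatch(flexible_plan, now, start_activity):
    """flexible_plan: list of (activity, earliest, latest) start windows in
    dispatch order; start_activity(activity, time) starts it on the plant."""
    for activity, earliest, latest in flexible_plan:
        if now < earliest:
            now = earliest     # wait for the window to open
        if now > latest:       # window already closed: controllability lost
            return f"'{activity}' missed its start window [{earliest}, {latest}]"
        start_activity(activity, now)
        now += 1               # toy fixed duration
    return None                # all activities dispatched within their windows
```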

The work in [66] presents a method to synthesize robust plan controllers for timeline-based flexible plans by solving a TGA model checking problem. In this work, flexible temporal plan evolutions are modeled as TGA, and a winning strategy generated by the UPPAAL-TIGA verification process is used to derive a flexible plan controller that achieves the planning goals while maintaining dynamic controllability during overall plan execution.

The VAL framework, coupled with a plan-execution architecture, is described in [35]; it has been applied to on-board plan verification and repair, which can be considered an element of robust plan execution. The observation made by the authors is that, while on-board planning technology has an important role to play, state-of-the-art technology does not make it practical for systems with limited resources (the success of the Remote Agent experiment notwithstanding). Their goal has been to provide an on-board “planning assistant”, performing adjustment and repair of plans on board when circumstances make it impossible to execute the plans as they were constructed on the ground by humans.

The CIRCA planning system [41] is an architecture for intelligent real-time control. It includes a real-time subsystem used to execute reactive control plans that are guaranteed to meet the domain’s real-time deadlines, keeping the system safe. CIRCA automatically creates reactive plans and uses formal verification techniques to prove that those plans will preserve system safety. In particular, CIRCA’s Controller Synthesis Module uses timed automata to generate reactive plans as time-discrete controllers, and uses a model-checking-based plan verifier to check reactive plans against safety requirements.

2.4 V&V of planners

V&V of a planner consists of ensuring that the planner itself works correctly. This task corresponds to the more traditional V&V task of ensuring that a large piece of complex software works correctly. As discussed above, formal methods have mostly been applied to V&V of domain models, plans, and plan executions since those artifacts appear somewhat more manageable than the planner software (and any of the other involved software systems, such as exec and monitor). Traditional testing is, therefore, still the most commonly applied V&V approach in practice to ensure correctness of planners. For example, the verification of the P&S system for the Remote Agent [48, 65, 79] is based on test cases to check for convergence and plan correctness. More specifically, the P&S system is verified by generating hundreds of plans for a variety of initial states and goals and by verifying that the generated plans meet a validated set of plan correctness requirements. Plan verification can be done using an automated plan checker.

A similar approach has been followed at JPL for validating the EO-1 science agent [23]. A key issue in empirical testing is achieving adequate coverage with a manageable number of tests. Test selection should be guided by a coverage metric. However, classical approaches used for testing traditional software systems are not suitable for planning systems because of the complex search engines and rich input/output space. Within the IDEA framework of the Remote Agent [48], model checking techniques are used to explore the space of input scenarios to generate tests for the planner [73].

A project to automate the scheduling process for NASA’s Deep Space Network (DSN) is described in [47]. The paper lays out an approach to verification and validation of the scheduling engine component of this system. The scheduling engine is responsible for interpreting user requests for communications and other services from the DSN, and then generating and checking schedules that achieve those requests. The verification process described involves several elements, including regression testing, performance testing, script-based test case generation, as well as a test GUI that makes it easy to experiment with the system and generate tests. Users are given access to the GUI and are, therefore, part of the process of defining test cases. Various static and dynamic program analysis tools are used.

In [33] it is noted that it is easier to check that a plan is correct with respect to a model than it is to produce a proof that the planner itself is correct. The authors load a plan resulting from an execution of the planner into a database and then check the database against constraints generated from the model.

2.5 V&V of plan execution engines (exec)

V&V of a plan execution engine (exec) consists of ensuring that plan execution works correctly for any input plan. As is the case for V&V of the planner, this task corresponds to the more traditional V&V task of ensuring that a piece of software works correctly. Also for the exec, traditional testing is the most commonly applied V&V approach in practice.

The work in [7, 18, 37], already mentioned in Sect. 2.3.1 on verification of plan executions, is, for obvious reasons, relevant for plan execution engine verification as well. The study in [18] compares different techniques to verify the K9 plan execution engine, consisting of 35,000 lines of C++ code, for which a downscaled 6,000-line Java version was used for part of the experiment. The techniques include monitoring plan executions against handwritten temporal properties, reflecting the plan semantics, deadlock and data race analysis, model checking, static analysis, and traditional testing.

The approach in [7] automatically tests the K9 execution engine, based on checking plan executions against temporal properties (using a different temporal logic than the one used in [18]). The test framework uses model checking and symbolic execution to automatically generate test cases, which are then applied. A test case consists of a plan and an oracle, expressed in temporal logic, that can be used to test that the execution of the generated plan conforms to the intended plan semantics. The plan language allows for branching based on conditions that need to be checked, and also for flexibility with respect to the starting time and ending time of an action.

As a follow-up to the K9 experiments described above, the work of Giannakopoulou et al. [37] describes a compositional approach to V&V applied to the K9 rover plan execution engine, deploying formal methods throughout the overall design and development lifecycle. The approach uses the same temporal logic monitoring framework as used in [7].

Another example of checking execution traces against specifications is the work described in [10]. The approach here is to verify the operation of a spacecraft software controller (NASA’s Mars Curiosity Rover [1]) by analyzing logs generated by the running software against temporal properties. The rover receives command sequences from ground, similar to plans, to be executed over a limited time period.

An application of model checking to verify the correctness of the plan execution engine for NASA’s DS-1 spacecraft is described in [45, 57]. An abstraction of the exec (programmed in LISP) was modeled in the Promela modeling language of the Spin model checker. A follow-up analysis (using the Java PathFinder Java model checker, based on Spin) of the same code is described in [44], performed after one of the errors identified in [45] as a data race actually occurred in flight, causing a deadlock.

A dynamic data race detection algorithm (the analysis is performed during execution of an instrumented program) is described in [8], detecting inconsistencies in the way groups of variables are protected by locks in a concurrent program. Such data races involving several variables are referred to as high-level data races. Lack of such consistency may reflect coding errors. The high-level data race problem was inspired by the actual data race in the DS-1 spacecraft, identified in [45] before flight using model checking, which caused the previously mentioned deadlock in space, as documented in [44] after flight, also using model checking. The dynamic analysis approach is a scalable alternative to model checking for detecting this kind of error.

2.6 V&V of plan execution monitors

A monitor analyzes the execution of a plan and initiates recovery actions in case the expected behavior is violated. As such, a monitor is itself part of the V&V solution. However, even the monitor can be incorrectly programmed. V&V of a monitoring system consists of verifying and validating that the monitor makes the right judgments about the correctness of the current execution, and that, in case of execution errors, the right reactions are triggered.

There is not a large amount of work on V&V of P&S monitoring systems. To the best of our knowledge, the only relevant work is related to the Livingstone PathFinder (LPF) [55], a system for testing Livingstone models. Livingstone is the model-based monitoring and diagnosis system for the Remote Agent. LPF consists of a test driver that generates a sequence consisting of either commands or injected faults, a simulator of the modeled device, and the Livingstone engine. The system checks whether the diagnosis system can detect the faults injected into the input stream.
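The LPF setup can be pictured with the following sketch, in which all interfaces (the simulator, the monitor, and the script format) are assumed for illustration: drive the simulator with a mix of commands and injected faults, and check that the monitor’s diagnosis reports every injected fault.

```python
def test_monitor(monitor, simulator, script):
    """script: a sequence of ("cmd", c) or ("fault", f) events."""
    undetected = []
    for kind, event in script:
        if kind == "fault":
            simulator.inject_fault(event)    # fault injected into the stream
        else:
            simulator.execute(event)         # ordinary command
        diagnosis = monitor.observe(simulator.observations())
        if kind == "fault" and event not in diagnosis:
            undetected.append(event)         # a fault the monitor missed
    return undetected                        # empty list: all faults detected
```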

3 V&V systems used for P&S

In the approaches mentioned so far, various methods, including formal methods, have been used to analyze planning artifacts. A slightly different bent is the use of formal methods to actually perform planning. Such approaches reflect the observation that both kinds of techniques (V&V and P&S) are based on search, which has created an interesting cross-fertilization between formal methods and artificial intelligence. Note that the mere fact that a formal methods based tool is used for planning does not necessarily mean that the generated plans are correct by construction. A model checker can contain errors, just as a planner can. However, some verification tools, such as some theorem provers, are based on a very small kernel, which can be assumed correct with very high probability.

3.1 Planning as model checking

The “planning as model checking” approach [24, 25, 38] considers the planning problem as a model checking problem, using a model checker to perform planning. This approach is based on the representation of a domain model as a finite state automaton, effectively a model in the modeling language of the model checker. Planning is done by verifying whether temporal formulas are true or not wrt. the model, along the lines illustrated in Sect. 2.1.1. In the above-mentioned work, symbolic representation and exploration techniques based on symbolic model checking, using binary decision diagrams, allow for efficient planning in non-deterministic domains. The later work in [17], discussed in Sect. 2, describes a comprehensive approach to on-board autonomy, relying on the NuSMV model checker to perform planning, using a symbolic representation of the system to control. Representing the formal model as a Kripke structure makes it possible to validate that the model captures the behaviors of the system.

An interesting line of work on real-time model checking [13] has been dedicated to extending and retargeting timed automata technology towards optimal planning and scheduling. Two applications have been studied, demonstrating the use of UPPAAL-CORA for the generation of optimal plans and schedules. In [14], task graph scheduling problems are modeled as networks of priced timed automata and then solved by means of a branch-and-bound algorithm for cost-optimal reachability problems. In [27], planning problems are defined by means of a variant of PDDL 2.1, i.e., considering duration-dependent and continuous effects. Planning problems are then translated into linearly priced automata, and UPPAAL-CORA is exploited to generate cost-optimal traces, which represent valid plans.

In [5], the authors investigate and compare constraint-based temporal planning techniques and timed game automata methods for representing and solving realistic temporal planning problems. In this direction, they propose a mapping from IxTeT planning problems to UPPAAL-TIGA game-reachability problems and present a comparison of the two planning approaches.

3.2 Logic-based approaches to planning

Traditionally, planning has been formalized as deduction: plans are generated by constructive proofs of so-called plan specification formulae, stating that there exists a plan leading from the initial state to a state satisfying the goal. The best-known logical formalization of planning in the deductive view is the situation calculus [59]. This seminal work has influenced many others. Related to the VVPS workshop series, [28] describes an approach that tackles the planning problem as a theorem proving task. The paper describes a formalization, in the Isabelle/HOL theorem proving system, of the planning problem in intuitionistic linear logic, specifying the initial state and the possible actions. The theorem proving task is then to show that some goal resources can be realized from the given resources, using actions as basic inference steps. Furthermore, such a proof can be mapped mechanically into a typed functional programming language, yielding an executable plan. The plans so found are provably correct by construction (assuming the correctness of the theorem prover).

Parallel to the planning as derivability approach, the planning as satisfiability paradigm was introduced by Kautz et al. [51], and continued in [49, 50]. According to this paradigm, a planning problem is encoded as a logical theory, modeling the rules governing the world’s evolution in such a way that any model of the theory corresponds to a valid plan. The SATPLAN planner is based on the above-cited works. In SATPLAN, the target logic of the encoding is classical propositional logic. The work presented in [20, 58] conforms to the planning as satisfiability paradigm, but, differently from [51], the logic used to encode planning problems is propositional LTL. The choice of LTL is mainly motivated by the fact that it allows a simple and natural representation of a world that changes over time. Moreover, domain-dependent knowledge can be expressed in LTL, as well as domain restrictions and intermediate tasks.
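The classical bounded-horizon encoding can be sketched as follows (a generic form, not the exact encoding of [51]): for a horizon \(n\), introduce a propositional variable \(f_t\) for each fluent \(f\) and \(a_t\) for each action \(a\) at each step \(t\), and assert
$$\begin{aligned}&I_0 \,\wedge \, g_n&\quad&\text {(initial state at step } 0\text {, goal at step } n)\\&a_t \Rightarrow \mathit{pre}(a)_t \wedge \mathit{eff}(a)_{t+1}&\quad&\text {(actions imply preconditions and effects)}\\&(f_t \wedge \lnot f_{t+1}) \Rightarrow \textstyle \bigvee _{a \,\text {deletes}\, f} a_t&\quad&\text {(frame axioms: every change is explained)} \end{aligned}$$
together with the symmetric frame axioms for fluents becoming true, and constraints excluding conflicting simultaneous actions. Any satisfying assignment then decodes into a valid plan: the actions \(a_t\) assigned true, ordered by \(t\).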

It is also worth underscoring that, as a side effect, the use of logic-based approaches enables the exploitation of well-known formal methods tools and techniques to perform V&V of planning systems that follow such approaches.

4 P&S systems used for V&V

Search heuristics, inspired by those used in planning, have been studied for verification systems, as described, for example, in [87]. However, one can go even further and apply planning technology directly as a verification technology for ensuring correctness of traditional software systems. For example, the effectiveness of translating model checking inputs into PDDL has been described in several papers, covering translations from communication protocol specification languages [29], Petri nets [30], \(\mu \)-calculus formulae [9], and graph transition systems [31].

In [6] the main idea is to formulate the system model to be analyzed, as well as the property it has to satisfy, as a planning problem. To illustrate the approach, models in NuSMV and Promela (Spin’s modeling language) are translated to planning domain models and goals in PDDL. Experimental results comparing the planning approach to NuSMV and Spin show that planners can provide significant time improvements when checking safety and liveness properties that are violated, compared with state-of-the-art model checkers, especially on large tasks. Results are less convincing in the case where properties are not violated, and the whole state space, therefore, must be explored. In other words, for error detection (in contrast to proof of correctness) the approach appears promising.

The work presented in [32] relies on the translation of concurrent C/C++ programs into PDDL domain models. The system then runs a heuristic search-based planner on the generated PDDL model to produce a plan (read: an error trace) locating a programming bug. This counterexample error trace is then used to provide an interactive debugging aid.

5 Conclusion

This paper introduces extended versions of papers selected from the 3rd workshop on verification and validation of planning and scheduling systems. The paper continues with an overview of work done in the intersection of V&V and P&S. This includes work on V&V of P&S systems for ensuring the correctness of the latter, work on using V&V systems to perform P&S, and finally the use of P&S systems to perform V&V of traditional software systems. The overview is not exhaustive by any means.

The original focus of the VVPS workshop series was the study of V&V techniques to ensure the correctness of P&S systems. The model-based nature of P&S systems should in principle make it easier to apply V&V techniques. However, as it turns out, P&S systems present features that make them hard to verify and validate, such as the non-deterministic nature of the domain models and planner heuristics. Thus, powerful tools/methods, such as model checkers originating from the formal methods/software engineering community, have been studied for this purpose.

However, the other interactions between V&V and P&S mentioned above are clearly interesting as well, including topics such as planning as model checking and model checking as planning. The common thread in these techniques is specification languages and search-based analysis techniques. Research in the intersection between V&V and P&S is important for both fields, since the impact appears to be bi-directional.

An interesting topic for future research is the relationship between model-based programming and model-based planning. The former relies on program synthesis techniques to derive programs from models, or verification techniques to prove a program correct wrt. the model. The latter relies on generating plans (programs) on the fly from models, incorporating models, programs, and fault protection within one framework. It would be desirable to formulate a unifying framework encompassing these different views.

Footnotes

  1. In 2004 Remote Agent P&S scientist Kanna Rajan suggested to V&V scientist Klaus Havelund (both at NASA Ames Research Center at the time) to organize a workshop on the topic: “V&V of P&S systems”. The series was started in 2005 [2] and continued in 2009 [3] and 2011 [4].

References

  1. 1.
    Mars Science Laboratory (MSL) mission website. http://mars.jpl.nasa.gov/msl. Accessed Nov 2013
  2. 2.
    VVPS 2005 workshop website. http://icaps05.uni-ulm.de/workshops.html
  3. 3.
    VVPS 2009 workshop website. http://www-vvps09.imag.fr
  4. 4.
  5. 5.
    Abdedaim, Y., Asarin, E., Gallien, M., Ingrand, F., Lesire, C., Sighireanu, M.: Planning robust temporal plans: a comparison between CBTP and TGA approaches. In: Proceedings of the Seventeenth International Conference on Automated Planning and Scheduling (ICAPS’07), pp. 2–10 (2007)Google Scholar
  6. 6.
    Albarghouthi, A., Baier, J.A., McIlraith, S.A.: On the use of planning technology for verification. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS’09) (2009)Google Scholar
  7. 7.
    Artho, C., Barringer, H., Goldberg, A., Havelund, K., Khurshid, S., Lowry, M., Pasareanu, C., Rosu, G., Sen, K., Visser, W., Washington, R.: Combining test-case generation and runtime verification. Theor. Comput. Sci. 336(2–3), 209–234 (2005)CrossRefMATHMathSciNetGoogle Scholar
  8. 8.
    Artho, C., Havelund, K., Biere. A.: High-level data races. Softw. Test. Verif. Reliab. 13(4), 207–227 (2004)Google Scholar
  9. 9.
    Bakera, M., Edelkamp, S., Kissmann, P., Renner, C.D.: Solving \(\mu \)-calculus parity games by symbolic planning. In: Peled. D., Wooldridge, M. (eds.) MoChArt. Lecture Notes in Computer Science, vol. 5348, pp. 15–33. Springer, New York (2008)Google Scholar
  10. 10.
    Barringer, H., Groce, A., Havelund, K., Smith, M.: Formal analysis of log files. J. Aerosp. Comput. Inf. Commun. 7(11), 365–390 (2010)CrossRefGoogle Scholar
  11. 11.
    Barringer, H., Havelund, K., Kurklu, E., Morris, R.: Checking flight rules with TraceContract: application of a Scala DSL for trace analysis. In: Scala Days 2011. Stanford University, California (2011)Google Scholar
  12. 12.
    Behrmann, G., Cougnard, A., David, A., Fleury, E., Larsen, K., Lime, D.: UPPAAL-TIGA: time for playing games! In: Proceedings of 19th International Conference on Computer Aided Verification (CAV’07), vol. 4590 in LNCS, pp. 121–125. Springer, New York (2007)Google Scholar
  13. 13.
    Behrmann, G., Fehnker, A., Hune, T., Larsen, K., Pettersson, P., Romijn, J.: Efficient Guiding Towards Cost-Optimality in UPPAAL. Springer, New York (2001)Google Scholar
  14. 14.
    Behrmann, G., Larsen, K.G., Rasmussen, J.I.: Optimal scheduling using priced timed automata. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS’05) (2005)Google Scholar
  15. 15.
    Bensalem, S., Bozga, M., Krichen, M., Tripakis, S.: Testing conformance of real-time applications: case of planetary rover controller. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS’05), pp. 23–32 (2005)Google Scholar
  16. 16.
    Bodik, R., Jobstmann, B.: Algorithmic program synthesis: introduction. Int. J. Softw. Tools Technol. Transf. STTT 15(5–6), 397–411 (October 2013)Google Scholar
  17. 17.
    Bozzano, M., Cimatti, A., Roveri, M., Tchaltsev, A.: A comprehensive approach to on-board autonomy verification and validation. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS’09) (2009)Google Scholar
  18. 18.
    Brat, G., Drusinsky, D., Giannakopoulou, D., Goldberg, A., Havelund, K., Lowry, M., Pasareanu, C., Visser, W., Washington, R.: Experimental evaluation of verification and validation tools on Martian rover software. Form. Methods Syst. Des. 25(2), 167–198 (2004)Google Scholar
  19. 19.
    Brat, G., Gannakopoulou, D., Izygon, M., Alex, E., Wang, L., Frank, J., Molin, A.: Model-based verification and validation for procedure authoring. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS’09) (2009)Google Scholar
  20. Cerrito, S., Mayer, M.C.: Using linear temporal logic to model and solve planning problems. In: Giunchiglia, F. (ed.) Artificial Intelligence: Methodology, Systems, and Applications. Lecture Notes in Computer Science, vol. 1480, pp. 141–152. Springer, New York (1998)
  21. Cesta, A., Finzi, A., Fratini, S., Orlandini, A., Tronci, E.: Verifying flexible timeline-based plans. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  22. Cesta, A., Finzi, A., Fratini, S., Orlandini, A., Tronci, E.: Flexible plan verification: feasibility results. Fundamenta Informaticae 107, 111–137 (2011)
  23. Cichy, B., Chien, S., Schaffer, S., Tran, D., Rabideau, G., Sherwood, R.: Validating the autonomous EO-1 science agent. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS'05), pp. 75–85 (2005)
  24. Cimatti, A., Giunchiglia, F., Giunchiglia, E., Traverso, P.: Planning via model checking: a decision procedure for AR. In: Steel, S., Alami, R. (eds.) ECP. Lecture Notes in Computer Science, vol. 1348, pp. 130–142. Springer, New York (1997)
  25. Cimatti, A., Roveri, M., Traverso, P.: Strong planning in non-deterministic domains via model checking. In: Simmons, R.G., Veloso, M.M., Smith, S.F. (eds.) AIPS, pp. 36–43. AAAI, Pittsburgh (1998)
  26. Crampton, J., Huth, M., Kuo, J.H.-P.: Authorized workflow schemas: deciding realizability through LTL(F) model checking. Int. J. Softw. Tools Technol. Transf. STTT (2014)
  27. Dierks, H.: Finding optimal plans for domains with restricted continuous effects with UPPAAL-CORA. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS'05) (2005)
  28. Dixon, L., Smaill, A., Bundy, A.: Verified planning by deductive synthesis in intuitionistic linear logic. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  29. Edelkamp, S.: Promela planning. In: Ball, T., Rajamani, S.K. (eds.) SPIN. Lecture Notes in Computer Science, vol. 2648, pp. 197–212. Springer, New York (2003)
  30. Edelkamp, S., Jabbar, S.: Action planning for directed model checking of Petri nets. Electr. Notes Theor. Comput. Sci. 149(2), 3–18 (2006)
  31. Edelkamp, S., Jabbar, S., Lluch-Lafuente, A.: Cost-algebraic heuristic search. In: Veloso, M.M., Kambhampati, S. (eds.) AAAI, pp. 1362–1367. AAAI Press/The MIT Press, Cambridge (2005)
  32. Edelkamp, S., Kellershoff, M., Sulewski, D.: Program model checking via action planning. In: van der Meyden, R., Smaus, J.-G. (eds.) MoChArt. Lecture Notes in Computer Science, vol. 6572, pp. 32–51. Springer, New York (2010)
  33. Feather, M., Smith, B.: Automatic generation of test oracles—from pilot studies to application. Autom. Softw. Eng. 8(1), 31–61 (2001)
  34. Fox, M., Howey, R., Long, D.: Exploration of the robustness of plans. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS'05), pp. 67–74 (2005)
  35. Fox, M., Long, D., Baldwin, L., Wilson, G., Woods, M., Jameux, D., Aylett, R.: On-board timeline validation and repair: a feasibility study. In: Proceedings of the 5th International Workshop on Planning and Scheduling for Space (IWPSS'06) (2006)
  36. Freitag, B., Margaria, T., Steffen, B.: A pragmatic approach to software synthesis. In: Wing, J.M., Wexelblat, R.L. (eds.) Workshop on Interface Definition Languages, Portland, Oregon, USA, pp. 46–58. ACM Press, New York (1994)
  37. Giannakopoulou, D., Pasareanu, C.S., Lowry, M., Washington, R.: Lifecycle verification of the NASA Ames K9 rover executive. In: Proceedings of the ICAPS Workshop on Verification and Validation of Model-Based Planning and Scheduling Systems (VVPS'05), pp. 75–85 (2005)
  38. Giunchiglia, F., Traverso, P.: Planning as model checking. In: Biundo, S., Fox, M. (eds.) ECP. Lecture Notes in Computer Science, vol. 1809, pp. 1–20. Springer, New York (1999)
  39. Goldberg, A., Havelund, K., McGann, C.: Runtime verification for autonomous spacecraft software. In: Proceedings of IEEE Aerospace Conference. IEEE Computer Society, USA (2005)
  40. Goldman, R.P., Kuter, U., Schneider, A.: Using classical planners for plan verification and counterexample generation. In: Proceedings of the AAAI Workshop on Problem Solving Using Classical Planning (2012)
  41. Goldman, R.P., Musliner, D.J., Pelican, M.J.: Exploiting implicit representations in timed automaton verification for controller synthesis. In: Proceedings of the Fifth International Workshop on Hybrid Systems: Computation and Control (HSCC'02) (2002)
  42. Goldman, R.P., Pelican, M.J., Musliner, D.J.: A loop acceleration technique to speed up verification of automatically-generated plans. Int. J. Softw. Tools Technol. Transf. STTT (2014)
  43. Havelund, K., Groce, A., Holzmann, G., Joshi, R., Smith, M.: Automated testing of planning models. In: Proceedings of the Fifth International Workshop on Model Checking and Artificial Intelligence, pp. 90–105 (2008)
  44. Havelund, K., Lowry, M., Park, S., Pecheur, C., Penix, J., Visser, W., White, J.L.: Formal analysis of the Remote Agent—before and after flight. In: The Fifth NASA Langley Formal Methods Workshop, Virginia (2001)
  45. Havelund, K., Lowry, M., Penix, J.: Formal analysis of a spacecraft controller using SPIN. IEEE Trans. Softw. Eng. 27(8), 749–765 (2001). An earlier version appeared in the proceedings of the 4th SPIN workshop, 1998
  46. Howey, R., Long, D.: VAL's progress: the automatic validation tool for PDDL2.1 used in the international planning competition. In: Proceedings of the ICAPS Workshop on The Competition: Impact, Organization, Evaluation, Benchmarks, pp. 28–37, Trento (2003)
  47. Johnston, M.D., Tran, D.: Verification and validation of a deep space network scheduling application. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  48. Jonsson, A., Morris, P., Muscettola, N., Rajan, K., Smith, B.: Planning in interplanetary space: theory and practice. In: Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS'00), pp. 177–186 (2000)
  49. Kautz, H., Selman, B.: BLACKBOX: a new approach to the application of theorem proving to problem solving. In: AIPS'98 Workshop on Planning as Combinatorial Search, pp. 58–60 (1998)
  50. Kautz, H., Selman, B.: Unifying SAT-based and graph-based planning. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-99), vol. 99, pp. 318–325 (1999)
  51. Kautz, H.A., Selman, B., et al.: Planning as satisfiability. In: Proceedings of the European Conference on Artificial Intelligence (ECAI-92), vol. 92, pp. 359–363 (1992)
  52. Khatib, L., Muscettola, N., Havelund, K.: Verification of plan models using UPPAAL. In: First International Workshop on Formal Approaches to Agent-Based Systems, NASA's Goddard Space Center, Maryland. Lecture Notes in Artificial Intelligence, vol. 1871. Springer, New York (2000)
  53. Khatib, L., Muscettola, N., Havelund, K.: Mapping temporal planning constraints into timed automata. In: The Eighth International Symposium on Temporal Representation and Reasoning (TIME'01), pp. 21–27 (2001)
  54. Kupferman, O., Vardi, M.Y.: Safraless decision procedures. In: 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), Pittsburgh, pp. 531–542 (2005)
  55. Lindsey, T., Pecheur, C.: Simulation-based verification of autonomous controllers with Livingstone PathFinder. In: Proceedings of the 10th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS'04), Barcelona, Spain. Lecture Notes in Computer Science, vol. 2988 (2004)
  56. Long, D., Fox, M., Howey, R.: Planning domains and plans: validation, verification and analysis. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  57. Lowry, M.R., Havelund, K., Penix, J.: Verification and validation of AI systems that control deep-space spacecraft. In: Foundations of Intelligent Systems, Proceedings of the 10th International Symposium, ISMIS'97, Charlotte. Lecture Notes in Computer Science, vol. 1325, pp. 35–47. Springer, New York (1997)
  58. Mayer, M.C., Limongelli, C., Orlandini, A., Poggioni, V.: Linear temporal logic as an executable semantics for planning languages. J. Log. Lang. Inf. 16(1), 63–89 (2007)
  59. McCarthy, J., Hayes, P.: Some Philosophical Problems from the Standpoint of Artificial Intelligence. Stanford University, USA (1968)
  60. McDermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., Wilkins, D.: PDDL: the planning domain definition language. Technical Report CVC TR98003/DCS TR1165, Yale Center for Computational Vision and Control, New Haven, CT (1998)
  61. Menzies, T., Pecheur, C.: Verification and validation and artificial intelligence. Adv. Comput. 65, 5–45 (2005)
  62. Morris, P.H., Muscettola, N.: Temporal dynamic controllability revisited. In: Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05) (2005)
  63. Muscettola, N.: HSTS: integrating planning and scheduling. In: Zweben, M., Fox, M.S. (eds.) Intelligent Scheduling. Morgan Kaufmann, Burlington (1994)
  64. Musliner, D.J., Pelican, M.J.S., Schlette, P.J.: Verifying equivalence of procedures in different languages: preliminary results. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  65. Nayak, P.P., Bernard, D.E., Dorais, G., Gamble, E.B., Kanefsky, B., Kurien, J., Millar, W., Muscettola, N., Rajan, K., Rouquette, N., Smith, B.D., Taylor, W.: Validating the DS1 Remote Agent experiment. In: Proceedings of the Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS'99) (1999)
  66. Orlandini, A., Finzi, A., Cesta, A., Fratini, S.: TGA-based controllers for flexible plan execution. In: Advances in Artificial Intelligence (KI 2011), 34th Annual German Conference on AI. Lecture Notes in Computer Science, vol. 7006, pp. 233–245. Springer, New York (2011)
  67. Patron, P., Birch, A.: Plan proximity: an enhanced metric for plan stability. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  68. Penix, J., Pecheur, C., Havelund, K.: Using model checking to validate AI planner domain models. In: Proceedings of the 23rd Annual Software Engineering Workshop (1998)
  69. Piterman, N., Pnueli, A., Sa'ar, Y.: Synthesis of Reactive(1) designs. In: 7th International Conference on Verification, Model Checking and Abstract Interpretation (VMCAI'06). Lecture Notes in Computer Science, vol. 3855, pp. 364–380. Springer, New York (2006)
  70. Pnueli, A.: The temporal logic of programs. In: 18th Annual Symposium on Foundations of Computer Science, pp. 46–57. IEEE Computer Society, USA (1977)
  71. Pnueli, A., Rosner, R.: On the synthesis of a reactive module. In: Symposium on Principles of Programming Languages (POPL'89), pp. 179–190 (1989)
  72. Preece, A.: Evaluating verification and validation methods in knowledge engineering. In: Roy, R. (ed.) Micro-Level Knowledge Management, pp. 123–145. Morgan Kaufmann, Burlington (2001)
  73. R-Moreno, M.D., Brat, G., Muscettola, N., Rijsman, D.: Validation of a multi-agent architecture for planning and execution. In: Proceedings of the 18th International Workshop on Principles of Diagnosis (DX'07) (2007)
  74. Raimondi, F., Pecheur, C., Brat, G.: PDVer, a tool to verify PDDL planning domains. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  75. Razavi, N., Farzan, A., McIlraith, S.A.: Generating effective tests for concurrent programs via AI automated planning techniques. Int. J. Softw. Tools Technol. Transf. STTT (2014)
  76. Shah, M., Chrpa, L., Jimoh, F., Kitchin, D., McCluskey, T., Parkinson, S., Vallati, M.: Knowledge engineering tools in planning: state-of-the-art and future challenges. In: Proceedings of the ICAPS Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2013) (2013)
  77. Siminiceanu, R.I., Butler, R.W., Munoz, C.A.: Experimental evaluation of a planning language suitable for formal verification. In: Proceedings of the Fifth International Workshop on Model Checking and Artificial Intelligence, pp. 18–34 (2008)
  78. Smith, B., Feather, M., Muscettola, N.: Challenges and methods in testing the Remote Agent planner. In: Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS'00), pp. 254–263 (2000)
  79. Smith, B., Millar, W., Dunphy, J., Tung, Y.-W., Nayak, P., Gamble, E., Clark, M.: Validation and verification of the Remote Agent for spacecraft autonomy. In: Proceedings of IEEE Aerospace Conference (1999)
  80. Smith, B., Rajan, K., Muscettola, N.: Knowledge acquisition for the onboard planner of an autonomous spacecraft. In: 10th European Workshop on Knowledge Acquisition, Modeling and Management (EKAW'97). Lecture Notes in Computer Science, vol. 1319, pp. 253–268 (1997)
  81. Smith, M.H., Holzmann, G.J., Cucullu, G.C., Smith, B.D.: Model checking autonomous planners: even the best laid plans must be verified. In: Proceedings of IEEE Aerospace Conference, pp. 1–11. IEEE Computer Society, USA (2005)
  82. Srivastava, S., Immerman, N., Zilberstein, S.: Finding plans with branches, loops and preconditions. In: Proceedings of the ICAPS Workshop on Verification and Validation of Planning and Scheduling Systems (VVPS'09) (2009)
  83. Steffen, B., Isberner, M., Naujokat, S., Margaria, T., Geske, M.: Property-driven benchmark generation. In: Bartocci, E., Ramakrishnan, C. (eds.) Model Checking Software. Lecture Notes in Computer Science, vol. 7976, pp. 341–357. Springer, Berlin (2013)
  84. Steffen, B., Margaria, T., Braun, V.: The electronic tool integration platform: concepts and design. Int. J. Softw. Tools Technol. Transf. STTT 1(1–2), 9–30 (1997)
  85. Vaquero, T., Silva, J., Beck, J.: A brief review of tools and methods for knowledge engineering for planning and scheduling. In: Proceedings of the ICAPS Workshop on Knowledge Engineering for Planning and Scheduling (KEPS 2011) (2011)
  86. Vidal, T.: A unified dynamic approach for dealing with temporal uncertainty and conditional planning. In: Proceedings of the Fifth International Conference on Artificial Intelligence Planning and Scheduling (AIPS'00) (2000)
  87. Wehrle, M., Helmert, M.: The causal graph revisited for directed model checking. In: Palsberg, J., Su, Z. (eds.) SAS. Lecture Notes in Computer Science, vol. 5673, pp. 86–101. Springer, New York (2009)
  88. Williams, B.C., Nayak, P.P.: A model-based approach to reactive self-configuring systems. AAAI/IAAI 2, 971–978 (1996)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Saddek Bensalem (1)
  • Klaus Havelund (2)
  • Andrea Orlandini (3)

  1. Verimag Laboratory, Grenoble, France
  2. Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA
  3. ISTC-CNR, National Research Council, Rome, Italy