Compositional schedulability analysis of real-time actor-based systems

We present an extension of the actor model with real-time, including deadlines associated with messages and explicit application-level scheduling policies, e.g., "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach, based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.


Introduction
Actors were originally introduced by Hewitt as autonomous reasoning objects [33]. Actor languages have since then evolved into a powerful tool for modeling distributed and concurrent systems [3,4]. Different extensions of actors have been proposed in several domains, and actors have been claimed to be the most suitable model of computation for many dominant applications [34]. Examples of these domains include designing embedded systems [50,51], wireless sensor networks [19], multi-core programming [43] and designing web services [16,17].
In an Actor-based model, actors are (re)active objects with encapsulated data and methods which represent their state and behavior, respectively. Actors are the units of concurrency, i.e., an actor conceptually has a dedicated processor. In the pure asynchronous setting [3,33], actors can only send asynchronous messages and have queues for receiving messages. An actor progresses by taking a message out of its queue and processing it by executing its corresponding method. A method is a piece of sequential code that may send messages. We model dynamic reconfiguration by deciding the message recipients based on actor states, but we do not consider dynamic creation of actors. We may use the terms actors and objects interchangeably.
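To make the computation model above concrete, the following Python sketch shows a minimal untimed actor along the lines described: a queue of incoming messages and one method per message name, where a step takes a message out of the queue and runs its method to completion. The class and method names are ours, purely for illustration; the paper's formal model (with deadlines and schedulers) is developed in Sect. 3.

```python
from collections import deque

class Actor:
    """A minimal untimed actor: a FIFO queue of messages and one method per message name."""
    def __init__(self, name, methods):
        self.name = name
        self.methods = methods      # message name -> handler(actor, payload)
        self.queue = deque()        # pending messages

    def send(self, msg, payload=None):
        self.queue.append((msg, payload))   # asynchronous: the sender never blocks

    def step(self):
        """Take one message out of the queue and run its method to completion."""
        if not self.queue:
            return False
        msg, payload = self.queue.popleft()
        self.methods[msg](self, payload)
        return True

# Tiny example: a counter actor that logs the payload of each 'inc' message.
log = []
counter = Actor("counter", {"inc": lambda a, n: log.append(n)})
counter.send("inc", 1)
counter.send("inc", 2)
while counter.step():
    pass
```

Note that methods run to completion before the next message is taken; this run-to-completion discipline is exactly the non-preemptive execution assumed throughout the paper.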
This model of concurrent computation forms the basis of the programming languages Erlang [9] and Scala [30], which have recently gained in popularity, in part due to their support for scalable concurrency. However, for optimal use of both hardware and software resources, we cannot avoid lifting scheduling and performance-related concerns from the underlying operating system to the application level, as argued for example in [8,11]. In general, the goal of resource-aware programming (RAP [54], SUMATRA [2], CAMELOT [52]) is to express policies for the management of resources.
In this paper, we focus on CPU time, introduce a new extension of the actor model with real-time, and present a methodology for compositional schedulability analysis. The key to compositionality is the actors' behavioral interfaces, where the communication protocol of each actor is defined in terms of how it is expected to send and receive messages. In the first step of the compositional analysis, as previously introduced in [38], every actor is individually analyzed for schedulability by taking its behavioral interface as an abstraction of the environment. However, at the next step of composing the actors, we need to make sure that each actor's environment is indeed compatible with the protocol specified in its behavioral interface. As discussed in Sect. 4, it does not suffice to check compatibility only at the level of behavioral interfaces, and we did not address this problem in our previous work [38] due to its complexity. This paper, as the extended version of our initial report in [40], proves that the notion of refinement, including quiescence and deadlines, is a sufficient condition for compositionality, even in the presence of dynamic reconfiguration. Nevertheless, since this approach still requires reasoning about the global system, we propose a testing method. By using the Uppaal model checker for testing, we aim at finding counter-examples to refinement, which may in turn be used as test cases for global schedulability. In the rest of this section, we present an intuitive introduction to our methodology for modeling and the schedulability analysis framework; at the end of this section, the extension points with respect to our initial report in [40] are identified. Figure 1 presents the two levels of abstraction we propose for modeling real-time actors. At the detailed level, an actor consists of its methods, scheduler and queue. This specification is given in timed automata [7] in order to support automated analysis techniques.
Fig. 1 An off-the-shelf actor component is guaranteed to be schedulable if it is used as expected in its behavioral interface. This correct usage in a system S can be tested, called the compatibility check

In this model, deadlines are assigned to messages and explicit application-level scheduling policies are associated with the individual actors (rather than, for instance, assuming "First Come First Served" (FCFS) by default). A scheduling policy determines the order in which the (methods corresponding to) queued messages should be executed. We restrict to schedulers in which a new message cannot preempt the currently running method; as we discussed in [37,38], preemptive scheduling is undecidable in this framework unless a minimum delay is enforced between every two rounds of preemption. Method automata mainly specify what messages are sent and when. Received messages are handled by the schedulers and can be seen as events generating tasks (in comparison to Task Automata [25]). A deadline is assigned to each message, specifying the time before which the intended job should be accomplished. Although we ignore communication delays, they can be added as we considered in [39]. This framework allows quality-of-service and deployment requirements to be analyzed and resolved at design time. For example, in a real-time setting, we must guarantee a maximum average response time (end-to-end deadlines) or a minimum level of system throughput. For our analysis, we have chosen to use Uppaal [49]; this choice is however not essential, and in principle other tools for timed automata can also be employed. This framework complements the work in [57], in which we describe how application-level scheduling policies can be implemented in a programming language on top of Java. To exemplify our approach, we model in Sect. 3.2 a peer-to-peer system with an architecture similar to that of Skype: a centralized broker is responsible for connecting the clients (aka peers).
The goal is to ensure that the system will function quickly without delays. The analysis is difficult because this is a large distributed system; furthermore, the network of the connected peers may change dynamically.
As seen in Fig. 1, a behavioral interface provides an abstract and high-level view of the actor by abstracting from the queue and method implementations. In fact, it shows the pattern of possible interactions the actor may have with its environment. It captures the valid sequences of provided and required services (received and sent messages). We model a behavioral interface as a deterministic timed automaton to further capture the timings and deadlines of the messages. For instance, the behavioral interface of a server that handles at most one request at a time can be defined as a loop of receiving a 'request' followed by a 'reply', i.e., no second request is allowed before providing a reply.
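Ignoring timing, the server interface above can be sketched as a two-state automaton. The following Python fragment (an informal illustration, not part of our formalism; the state and action names are ours) encodes its transition relation and checks whether a sequence of actions respects the protocol.

```python
# States and labeled edges of the server interface: a loop of receiving a
# 'request' (input, written "?") followed by sending a 'reply' (output, "!").
# Clock guards and deadlines are omitted in this untimed sketch.
interface = {
    ("idle", "request?"): "busy",
    ("busy", "reply!"): "idle",
}

def accepts(trace, start="idle"):
    """Check that a sequence of actions follows the interface protocol."""
    state = start
    for action in trace:
        nxt = interface.get((state, action))
        if nxt is None:
            return False          # action not allowed in the current state
        state = nxt
    return True
```

For example, `accepts(["request?", "reply!", "request?"])` holds, whereas a second request before a reply is rejected.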
To perform schedulability analysis of an actor-based system in a compositional manner, first the schedulability of every actor has to be analyzed individually with respect to its behavioral interface. This was elaborated in our previous work [38], which is based on the ideas of Task Automata [25]. As an actor is expected to be used as specified by its behavioral interface, we restrict the actor behavior accordingly when checking its schedulability in isolation. Our method allows us to statically find an upper bound on the length of schedulable queues; hence, the behavioral model of the actor has a finite number of states and the analysis is decidable. This way, one can analyze the actor with regard to different scheduling strategies, and find the best strategy.
Once an actor is proved schedulable, in order to use it as an off-the-shelf component we need to check additionally that its actual usage in a given distributed system follows the expected usage (as specified by the behavioral interface), called the compatibility check. In other words, checking compatibility for each actor A i involves ensuring that the rest of the system respects the requirements specified in its behavioral interface B i . Unfortunately, as we explain in Sect. 4, this cannot be checked simply at the level of behavioral interfaces, because they include only an approximation of when messages are sent.
In theory, compatibility can be checked by constructing the complete system behavior S of all actors together and showing that S is a refinement of the product of the automata for behavioral interfaces (call it B), i.e., the set of timed traces of S is a subset of that of B. Assuming that B is deterministic, one can prove that S is a refinement of B by model-checking the synchronous product of S and B (restricted by the computed queue bounds). Due to the message queues (one for each actor) in S, this would lead however to an unmanageable state-space explosion. In [36], we have investigated a compositional approach purely based on model checking for schedulability and compatibility analysis, but to preserve soundness, it becomes pessimistic. Instead of verifying the refinement relation, we introduce a novel method for counter-example oriented testing, which is more realistic. In this method we generate a test case from B as follows. We take a trace from B and complete it into a test case for S by adding transitions that capture all possible one-step deviations from the original trace. Among these transitions, those not allowed in B produce a counter-example, i.e., a trace of S which does not belong to B. This technique is much more effective than generating test cases from S to be checked against B, because it allows for automated generation of test cases from B (note that B does not involve queues) and a reduction of the overall system behavior S by the test case.
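The construction of a test case from a trace of B can be sketched as follows. This is an untimed, simplified Python illustration (the helper `make_test_case` and its verdict labels are ours): the trace is turned into a chain of states, and at every step we add an edge for each one-step deviation, labeled "fail" if B does not allow that action there (a counter-example to refinement) and "inconclusive" if B allows it but it leaves the chosen trace.

```python
def make_test_case(trace, allowed, alphabet):
    """Build test-case edges from a trace of B.
    `allowed(i)` returns the set of actions B permits after the first i trace steps."""
    edges = {}
    for i, action in enumerate(trace):
        edges[(i, action)] = i + 1  # continue along the chosen trace
        for a in alphabet - {action}:
            # A deviation allowed by B is another valid trace (inconclusive);
            # one not allowed by B is a counter-example to refinement.
            edges[(i, a)] = "inconclusive" if a in allowed(i) else "fail"
    return edges

# Example: the server interface trace 'request?; reply!'.
alphabet = {"request?", "reply!"}
allowed = lambda i: {"request?"} if i % 2 == 0 else {"reply!"}
edges = make_test_case(["request?", "reply!"], allowed, alphabet)
```

Here `edges[(0, "reply!")]` is "fail": a reply before any request is a one-step deviation that B forbids, so observing it in S yields a counter-example trace.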
Our proposed testing technique gives rise to some issues which are not common in standard frameworks. First, the system under test is a model and not a real implementation. We do not take advantage of our knowledge of the model, so it can be seen as black-box testing, but the execution of test cases will be simplified as we can apply tools that can systematically explore a model. Another difference from usual frameworks is that the system involves internal actions that are not specified in the behavioral interfaces. The consequence is that the test, built from the abstract specification only, will not be able to fully control the system under test during its execution. This leads to a lot of non-determinism but this can be solved by using a model checker to execute a test case. Last but not least, our main goal is to find a counter-example in the case of wrong refinement. Then test cases must be as "rigid" as possible to take any incorrect behavior into account.
The following items summarize the key points of the compositional schedulability analysis framework, and in which paper they were first introduced.
- Modeling real-time actors
  - Behavioral interfaces
    • Defined in [38] as drivers, i.e., only including inputs to the actor
    • Synthetic behavior of the actor (abstracting from queue and methods), including both inputs and outputs [40]
    • Deadlines as schedulability requirements [38]
  - Actors with explicit scheduling policies [38]
  - Formal semantics of a closed system of actors [this paper]
- Schedulability analysis of individual actors
  - Behavioral interface as a contract between an actor and its environment [38,40]
  - Decidability because of the queue size limit [38]
  - Modeling and analysis in Uppaal (introduced in [40], a complete presentation in this paper)
- Schedulability of the composition of individually schedulable actors
  - Compatibility defined in terms of refinement extended with deadlines and quiescence (introduced in [40], formally defined in this paper)
  - Counter-example oriented testing [40]
    • Soundness [proof in this paper]
    • Exhaustiveness [proof in this paper]
    • Rigidness [introduced and proved in this paper]
- A peer-to-peer case study with a dynamically reconfigurable network [this paper]

Paper structure Section 2 provides the grounds for the approach by explaining timed automata. In Sect. 3, we explain how we model real-time actors and exemplify this by means of a peer-to-peer case study. A review of our compositional schedulability analysis technique is given in Sect. 4. Section 5 describes our approach to testing refinement for timed automata, which is then applied to test compatibility in the context of schedulability analysis. Related work is presented in Sect. 6. Section 7 concludes the paper.

Preliminaries: timed automata
We base our techniques on timed automata [7] and thus can take advantage of the abundant tools available. As we choose Uppaal [49], we tailor the definitions accordingly.
Syntax Let Act be a finite set of actions. Let C be a finite set of real-valued clocks. We define B(C), the set of clock constraints, as the set of boolean formulas built over elementary constraints x ∼ n and x − y ∼ n, where x, y ∈ C, n ∈ N, and ∼ ∈ {<, ≤, =, ≥, >}, with boolean operators ∨, ∧ and ¬. A timed automaton A over Act and C is a tuple (L, l_0, E, I), where L is a finite set of locations, with l_0 ∈ L being the initial location; the edges E and the invariant assignment I are described below.
We write l --g,a,r--> l′ for an edge from location l to location l′ guarded by a clock constraint g ∈ B(C), labeled with the action a ∈ Act, and resetting the subset r of C. Finally, I : L → B(C) assigns an invariant to each location. Location invariants are restricted to conjunctions of constraints of the form x < n or x ≤ n for x ∈ C and n ∈ N.
Semantics A timed automaton defines an infinite labeled transition system whose states are pairs (l, u), where l ∈ L and u : C → R+ is a clock assignment. We denote by 0 the assignment mapping every clock in C to 0. The initial state is s_0 = (l_0, 0). There are two types of transitions: action transitions (l, u) --a--> (l′, u′), where a ∈ Act, if there exists an edge l --g,a,r--> l′ such that u satisfies the guard g, u′ is obtained by resetting to zero all clocks in r and leaving the others unchanged, and u′ satisfies the invariant of location l′; delay transitions (l, u) --d--> (l, u′), where d ∈ R+, if u′ is obtained by delaying every clock for d time units and, for each d′ with 0 ≤ d′ ≤ d, the assignment obtained by delaying every clock for d′ time units satisfies the invariant of location l.
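The two transition types can be illustrated by the following Python sketch (ours, for intuition only): states are pairs of a location and a clock assignment; a delay advances all clocks uniformly, an action transition checks the guard and resets the clocks in r.

```python
def delay(state, d, invariant):
    """Delay transition: advance every clock by d, provided the location invariant
    still holds (for upper-bound invariants x < n / x <= n, checking the end point suffices)."""
    loc, clocks = state
    advanced = {x: v + d for x, v in clocks.items()}
    assert invariant(loc, advanced), "invariant violated"
    return (loc, advanced)

def act(state, edge):
    """Action transition along an edge (guard, action, resets, target location)."""
    loc, clocks = state
    guard, action, resets, target = edge
    assert guard(clocks), "guard not satisfied"
    updated = {x: (0 if x in resets else v) for x, v in clocks.items()}
    return (target, updated)

# Example: one clock x; delay 2.5 under invariant x <= 5, then take an edge
# guarded by x >= 2 that resets x and moves to location l1.
s0 = ("l0", {"x": 0.0})
s1 = delay(s0, 2.5, lambda loc, u: u["x"] <= 5)
s2 = act(s1, (lambda u: u["x"] >= 2, "a", {"x"}, "l1"))
```

The sketch deliberately ignores invariant checking along the whole delay for general constraints; with the invariant restriction stated above, checking the endpoint is enough.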
For a sequence of labels w = w_1 w_2 . . . w_n, we write s_0 --w--> s_n to denote the corresponding sequence of transitions.

Deterministic timed automata We call a timed automaton deterministic if and only if, given any two edges l --g,a,r--> l′ and l --g′,a,r′--> l″ with the same action a, the guards g and g′ are disjoint (i.e., g ∧ g′ is unsatisfiable). Furthermore, there is at most one transition with the invisible action, l --g,τ,r--> l′, from any location l, in which case g is disjoint from the guards of all other transitions from l. Note that deterministic timed automata may still produce nondeterministic behavior in the sense that at a given state multiple transitions may be enabled, but only if they have different actions.
Variables As accepted in Uppaal, we allow variables of type boolean and bounded integer for each automaton. Variables can appear in guards and updates. The semantics of timed automata changes such that each state also includes the current values of the variables, i.e., (l, u, v) with v a variable assignment. An action transition (l, u, v) --a--> (l′, u′, v′) additionally requires v and v′ to be considered in the corresponding guard and update.
Timed automata with inputs and outputs In the following, we assume the set of actions Act is partitioned into two disjoints sets: a set Act I of input actions a? and a set Act O of output actions a!. A non-observable internal action τ is also assumed. A timed automaton with inputs and outputs is a timed automaton over Act τ = Act ∪ {τ }.
Network of timed automata A system may be described as a collection of timed automata with inputs and outputs A_i (1 ≤ i ≤ n) communicating with each other. The behavior of the system, referred to as the product or network of these automata, is then defined as the parallel composition A_1 ‖ · · · ‖ A_n. Semantically, the system can delay if all automata can delay, and it can perform an action if one of the automata can perform an internal action or if two automata can synchronize on complementary actions (inputs and outputs are complementary). Notice that a network of deterministic timed automata is not necessarily deterministic. In a network of timed automata, variables can be defined locally for one automaton, globally (shared between all automata), or as parameters to the automata.
Using terminology of Uppaal, a location can be marked urgent in an automaton to indicate that the automaton cannot spend any time in that location. This is equivalent to resetting a fresh clock x in all of its incoming edges and adding an invariant x ≤ 0 to the location. In a network of timed automata, the enabled transitions from an urgent location may be interleaved with the enabled transitions from other automata (while time is frozen). Like urgent locations, committed locations freeze time; furthermore, if any process is in a committed location, the next step must involve an edge from one of the committed locations.
Timed traces A timed sequence σ ∈ (Act_τ ∪ R+)* is a sequence of timed actions of the form σ = t_1 a_1 t_2 a_2 . . . a_n t_{n+1} such that for all i, 1 ≤ i ≤ n, t_i ≤ t_{i+1}. Given a timed sequence σ, π_obs(σ) denotes the timed sequence obtained by deleting all occurrences of t_i τ, i.e., τ actions together with their timestamps. The sequence π_obs(σ) is called the observable timed sequence associated to σ.
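The observable projection π_obs is straightforward to compute. In the following sketch (ours), a timed sequence is represented as a list of (timestamp, action) pairs, and the projection simply drops the pairs whose action is τ.

```python
TAU = "tau"

def pi_obs(sigma):
    """Observable projection: drop every (t_i, tau) pair from a timed sequence,
    represented here as a list of (timestamp, action) pairs."""
    return [(t, a) for (t, a) in sigma if a != TAU]

# Example: an internal step at time 2 is invisible to the observer.
observable = pi_obs([(1, "m!"), (2, TAU), (3, "n?")])
```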
A run of a timed automaton A from the initial state (l_0, 0) over a timed sequence σ = t_1 a_1 t_2 a_2 . . . a_n t_{n+1} is a sequence of transitions

(l_0, u_0) --d_1--> (l_0, u_0′) --a_1--> (l_1, u_1) --d_2--> (l_1, u_1′) --a_2--> · · · --a_n--> (l_n, u_n) --d_{n+1}--> (l_n, u_n′)

where d_1 = t_1 and for all i, 1 < i ≤ n + 1, t_i = t_{i−1} + d_i. The set Traces(A) of timed traces of A is the set of timed sequences σ for which there exists a run of A over σ. The set Traces_obs(A) of observable timed traces of A is the set {π_obs(σ) | σ ∈ Traces(A)}.

Actors as real-time asynchronous concurrent objects
We describe in this section how to use automata theory, along the lines of our previous work [38,40], to describe actors. Actors in our framework specify local scheduling strategies, e.g., based on fixed priorities, earliest deadline first, or a combination of such policies. Real-time actors may need certain customized scheduling strategies in order to meet their QoS requirements. Our approach can be easily adapted to any actor-based modeling platform, e.g., Rebeca [1,63], Creol [42], which in turn may provide abstractions of programs in actor-based languages like Scala or Erlang.

A formal model of actors
An actor must be modeled at two levels of abstraction (cf. Fig. 1). First a synthetic abstract behavior of the actor is given in one automaton called its behavioral interface. Second, a more detailed specification of the actor behavior is given in terms of its methods, each modeled as an automaton, plus a scheduling strategy. The behavioral interface presents the actor behavior in one place, contrasted to the detailed behavior specification which is scattered over the methods. At the end of this section, we will describe how composition of multiple actor instances comprises a closed system.
Behavioral interface model A behavioral interface specifies at a high level, and in the most general terms, how an actor behaves. It consists of the messages an actor may receive and send. A behavioral interface abstracts from specific method implementations, the message queue in the actor and the scheduling strategy. As explained later in this section, behavioral interfaces are key to compositional analysis of actors.
To formally define a behavioral interface, we assume a finite set M for method names. Since every message is handled by one corresponding method, we use the terms 'method name' and 'message name' interchangeably. We assume without loss of generality that for any given method m, there are unique actors a and b such that only a can send m and only b can receive m, i.e., messages communicated between actors are one-to-one and unidirectional. This assumption may seem restrictive as it disallows a method to be called by different actors. To overcome this, one may duplicate the method for each caller and give it a different name; alternatively, in our implementation in the next sub-section, we consider the sender and receiver of each message as part of the message name.

Definition 1 (Behavioral interface)
A behavioral interface B providing a set of method names M_B ⊆ M is a deterministic timed automaton over alphabet Act_B such that Act_B is partitioned into two finite sets of actions:

- input actions of the form m(d)?, where m ∈ M_B and d ∈ N;
- output actions of the form m!, where m ∈ M \ M_B.

The number d associated to input actions represents a deadline. Intuitively, this is a requirement on the implementation of an actor saying that the actor should be able to finish method m before d time units. We restrict to natural numbers for deadlines, because using real numbers makes the analysis of timed automata undecidable. Output actions are the methods called by this actor and should be handled by (other actors in) the environment. We choose to disallow τ transitions at the level of behavioral interfaces to guarantee determinism in their products (see the lemma below), although in theory τ transitions can be removed if clocks are not reset [10].
The semantics of a behavioral interface is defined simply as the timed traces on its action set. We define composition of behavioral interfaces as their synchronous product on complementary actions, where an output action m! synchronizes with input actions m(d)? and produces the action m(d) in the composed automaton.
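The synchronization rule in the product can be illustrated by the following small Python sketch (the helper `sync_action` and the string encoding of actions are ours): an output m! synchronizes with an input m(d)?, producing m(d) in the composed automaton, and mismatched names do not synchronize.

```python
def sync_action(out_action, in_action):
    """Synchronize complementary actions of two behavioral interfaces:
    an output "m!" with an input "m(d)?" yields "m(d)" in the product;
    returns None if the two actions do not synchronize."""
    if not (out_action.endswith("!") and in_action.endswith("?")):
        return None
    m = out_action[:-1]
    name, _, rest = in_action[:-1].partition("(")
    if name != m:
        return None
    # rest is "d)" when a deadline annotation is present, "" otherwise
    return f"{m}({rest}" if rest else m
```

For example, `sync_action("connect!", "connect(5)?")` yields "connect(5)", while `sync_action("connect!", "reply(3)?")` yields no synchronization.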

Lemma 1 Given a set of actors, the product of their behavioral interfaces is deterministic.
Proof Based on the definition of determinism (see Sect. 2) and due to absence of τ transitions, nondeterminism in the product of behavioral interfaces may arise only if there are two edges from the same location with the same action, say m(d). Since messages are assumed to be one-to-one, m! and m(d)? can each appear in only one behavioral interface. Therefore, nondeterminism in the product is only possible if there is a similar nondeterminism in the individual behavioral interfaces, which is by definition not the case.
Behavioral interfaces conceptually serve two purposes: (1) represent the actor to the environment, as explained above; (2) represent the environment to the actor. The latter function can be enabled by syntactically swapping the ! and ? signs in a behavioral interface. Thus one obtains an abstraction of the environments in which an actor may be used. In the following sections, this abstraction will be used for modeling and analysis of actors in Uppaal.
Actor definition An actor may implement a behavioral interface B by providing implementations for the methods in M_B. Additionally, it has an unbounded queue to store incoming messages and a scheduling policy. For each message m with deadline d in the queue, a clock c keeps track of its waiting time; this clock is reset to zero when the message is added to the queue. The message misses its deadline when c > d. In the sequel, we do not distinguish between a queue as a general data structure and a particular state of the queue.
As part of the actor state, a queue shows the messages pending to be processed, while the first message in the queue represents the currently running method. Messages are inserted into the queue by a scheduler in the order they should be executed, based on a scheduling strategy, e.g., FCFS (First Come First Served) or EDF (Earliest Deadline First). Typically the scheduler could dynamically examine the remaining time before the deadline of each message in the queue. However, to be able to statically write down the specification of a scheduler, we define a scheduler function that returns the set of all possibilities for putting the new message in the queue depending on different clock values. Examples of such functions are given in Fig. 2.
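As a concrete example of such a strategy, the following Python sketch (ours; Fig. 2 gives the formal guarded versions) shows non-preemptive EDF insertion: the running task at position 0 is never displaced, and the new task is placed among the waiting tasks by remaining deadline d − c.

```python
def edf_insert(queue, msg, deadline, now=0.0):
    """Non-preemptive EDF: insert the new task after the running task (position 0),
    ordered by remaining deadline d - c. Each queue entry is (name, d, c), with c
    the elapsed waiting time of that task."""
    entry = (msg, deadline, now)
    remaining = lambda t: t[1] - t[2]
    for i in range(1, len(queue) + 1):
        if i == len(queue) or remaining(queue[i]) > remaining(entry):
            return queue[:i] + [entry] + queue[i:]
    return queue + [entry]   # empty queue: the new task starts running

def missed_deadline(queue):
    """A task misses its deadline when its waiting clock exceeds its deadline (c > d)."""
    return any(c > d for (_, d, c) in queue)

# Example: 'initial' is running; 'b' (remaining 2) is placed before 'a' (remaining 3).
q = edf_insert([], "initial", 10)
q = edf_insert(q, "a", 3)
q = edf_insert(q, "b", 2)
```

Note that a preemptive scheduler would be one that may place the new task at position 0; as stated below, we exclude this case.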

Definition 3 (Scheduler function)
A scheduler function sched maps a queue q and a new message m with deadline d to a finite set of guarded queues of the form g ⇒ q′. We write g ⇒ q′ to mean that once guard g is satisfied the scheduler should produce the queue q′, where c is a fresh clock not used (i.e., not assigned to any message) in q, and q′ ∈ Q is the queue after inserting m(d, c) in a particular position as implied by the guard g.
An overloading of the scheduler function is defined as sched(q, m(d, c)), which inserts a task into the queue using a given clock c. By reusing the deadline and the clock already assigned to a task in the queue, we can model inheriting the deadline. A scheduler function is preemptive if it can place the new task in the first position. As discussed in [38], we only consider non-preemptive schedulers because preemption leads to undecidability. In our implementation, we will use a timed automaton to act as both the queue and the scheduler function (cf. Sect. 3.2). There is always an initial method initial_R ∈ M_R that is responsible for the initialization of the actor.

Definition 4 (Actor) An actor R implementing the behavioral interface B is a tuple ⟨M_R, A_R, sched⟩, where M_R ⊇ M_B is a set of method names, A_R is a set of method automata (one for each method in M_R), and sched is a scheduler function as in Definition 3.
Method automata A i ∈ A R only send messages while computations are abstracted into time delays. Sending a message m ∈ M R is called a self call. A self call with no explicit deadline inherits the (remaining) deadline of the method that triggers it (as described above using the overloaded scheduler definition); this mechanism is called delegation. Other send operations are to be given explicit deadlines. Finally note that unlike behavioral interfaces, actor automata can be nondeterministic possibly due to the τ transitions in the methods.
The semantics of an actor can be defined as a timed automaton, called actor automaton. Every location of the actor automaton is written as a pair (l, q) where q represents the contents of the queue and l refers to the current location of the currently executing method, i.e., the first method in the queue. The actor takes one transition if the currently running method takes a step. On the other hand, changes to the queue also cause a transition in the actor automaton. This may be receiving a new message or removing a message from the queue after it has been processed.
To concretely define semantics of an actor, one needs to characterize its environment. As mentioned earlier, the behavioral interface can act as an abstract representation of the environment. In [38], such semantics is defined and it forms the basis of individual actor analysis in Sect. 4.1. Alternatively, we define below the semantics of actors in a closed system.

Definition 5 (Schedulable actor)
An actor is schedulable if it never reaches a state in which the queue contains a triple m(d, c) such that d < c.
In this paper, an actor is said to be schedulable if and only if it finishes all of the tasks within their deadlines, assuming the specific scheduling policy that is given for the actor. In principle, actors have infinite queues, but we have shown in [38] that in a schedulable system they do not put more than d max /b min messages in their queues, where d max is the longest deadline for the messages and b min is the shortest termination time of its method automata.

Lemma 2 (Queue length) An actor with an unbounded queue is schedulable if and only if the actor is schedulable with a queue length of d max /b min .
Proof The "if" part is trivial, so we prove the "only if" part. Assume an actor with an unbounded queue has n = ⌊d_max/b_min⌋ + 1 messages in its queue. Processing them takes at least n · b_min time units, which is longer than d_max. In other words, the last message finishes more than d_max time units after its creation and therefore misses its deadline.
One can calculate the best case runtime for timed automata as shown by Courcoubetis and Yannakakis [22]. This is important because finite queues make it possible to use model checking techniques for schedulability analysis (see Sect. 4).
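The queue bound of Lemma 2 is a one-line computation. The sketch below (ours) assumes integer d_max and b_min; the bound is ⌊d_max/b_min⌋, since ⌊d_max/b_min⌋ + 1 pending messages would take more than d_max time units to process when each method needs at least b_min time units.

```python
def queue_bound(d_max, b_min):
    """Upper bound on the queue length of a schedulable actor (Lemma 2), for
    integer d_max (longest deadline) and b_min (shortest method termination time):
    with floor(d_max/b_min) + 1 messages, (floor(d_max/b_min) + 1) * b_min > d_max,
    so the last message necessarily misses its deadline."""
    return d_max // b_min
```

For instance, with d_max = 10 and b_min = 3, any schedulable actor holds at most 3 messages.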
System composition We define a system as a number of actors that run concurrently, each maintaining a queue of the messages it has to process. The system must be closed, i.e., any output action of an actor must be either a self call or included in the behavioral interface of another actor in the system.

Definition 6 (Closed system) A set of actors R_1, . . . , R_n (as in Definition 4) with behavioral interfaces B_1, . . . , B_n (as in Definition 1) comprise a closed system if, for all R_i, every output action m! of R_i is either a self call (m ∈ M_{R_i}) or an input of some other actor (m ∈ M_{B_j} for some j ≠ i).

The semantics of a system is defined by a timed automaton, called the system automaton.
The system automaton for a given system with method names M_S = ∪_{1≤i≤n} M_{R_i} and a scheduler function sched is a timed automaton S = (L_S, l_S, E_S, I_S) over the alphabet Act_S = M_S and the clocks C_S:

- The set of clocks C_S is the union of all sets of clocks of the method automata plus the queue clocks of each actor.
- The locations of the system are the products of the locations of the actors together with their queues, i.e., of the form ⟨(l_1, q_1), (l_2, q_2), . . . , (l_n, q_n)⟩.
- The initial location l_S is the location in which every actor's queue contains only its initial message and the corresponding method automaton is at its start location.

In Fig. 3, the function start(m) returns the initial location of the automaton for method m. Locations in method automata with no outgoing transitions are called final. The first three rules in this figure take care of enqueuing sent messages. The first two are for self calls and therefore the queue of the same actor is used. In the case of delegation, the clock of the current task is reused and thus the deadline is inherited. Whenever the execution of the current task finishes, the context-switch rule makes sure the next method in the queue is executed, if there is any. If in a location ⟨(l, q), . . .⟩, l is final and q has more than one element, the location is marked urgent. This forces the context switch to happen as soon as possible.

Definition 7 (Schedulable system) A closed system is schedulable if none of its constituent actors reaches a state in which the queue contains a triple m(d, c) such that d < c.
Lemma 2 still applies in the context of a closed system. Therefore, we can put a maximum on the queue length and ensure the rules in Fig. 3 are used only if the queue bound is not exceeded. Going beyond the maximum queue length for any actor or missing a deadline by a message in a queue must also lead to an error state. The extra rules in Fig. 4 depict cases of nonschedulability by moving to an explicit error state.
In the following, we explain how to use Uppaal to implement actors and behavioral interfaces using an example. Peer-to-peer systems are a commonly used way of sharing data as well as chat-based communication. Contrasted to a client-server architecture, these systems are called peer-to-peer because all nodes can act both as a server and as a client; simply put, they are all peers. We model and analyze a hybrid peer-to-peer architecture (like in Skype or BitTorrent), where a central server (called the broker or tracker) keeps track of all active nodes in the system. To start communication, a node acts as a client and asks the broker to connect it to another node. In the case of Skype, the client provides the Skype ID to the broker. In a file-sharing system, a keyword is provided in order to search for some data, for example, the name of a song. The broker connects the client to a proper server, e.g., with the given Skype ID, or having the song with the given name. The two nodes then communicate directly by sending requests and replies.
Each node, upon creation, registers its ID/data with the broker. In the case of a file-sharing system, the nodes may obtain new data after every round of communication with other nodes. In this case, they need to update their information registered at the broker.
In Uppaal, we model for each actor three parts: the behavioral interface, the methods and the scheduler (which in turn includes a queue). These automata are parameterized on the identity of the actor itself (written as self), and the identifiers of the actors communicating with it (called its known actors). In this case, the known actor of a peer is a broker, and a broker has some Peers as its known actors. To have more than one instance of an actor, we instantiate the scheduler and method automata and provide different identity values (i.e., self) to different actor instances.
Communication In Uppaal, communication between automata is done via channels. We use the channels invoke and delegate for sending messages. The channel invoke has three dimensions (parameters), the message name, the sender and the receiver, e.g., invoke[connect][Peer][self]!. This way, actors instantiated from the same automata will have disjoint method names by assigning different identities to their self parameter. By setting both sender and receiver as self (in method automata), one can invoke a self call (when a deadline is to be given, as explained next). The delegate channel is used for delegation. The self call made using the delegate channel inherits the deadline of the currently running method (it is taken care of by the scheduler automaton). Since a delegation is used only for self calls, no sender is specified (it has only two parameters).
Deadlines and parameters We take advantage of the fact that when two edges synchronize, Uppaal performs the updates on the emitter before the receiver. Hence we can use global variables for passing information. In this case study, we use variables deadline and srv to pass deadlines and the parameter to SReq message (cf. Sect. 3.2.2), respectively. The emitter sets the desired value into the corresponding variable which is read by the receiver. The receiver, however, cannot use this value in its guard, as guards are evaluated before updates. We define these variables as meta, i.e., they are not kept in the state, which implies that their values must be stored properly by the receiver.

Behavioral interfaces
The first thing to model for an actor is its behavioral interface. Following the explanation in the previous subsection, we model a behavioral interface to represent the environment to the actor. To enable synchronization between outputs of method automata and output actions of the behavioral interface (and similarly between inputs of the scheduler and inputs of the behavioral interface), we use the ! sign for inputs and ? for outputs of the behavioral interface.
Both the broker and peers have relatively independent behavior on their server side and client side. Therefore, we model these two sides using independent automata in Fig. 5, which need to be interleaved in order to produce the complete behavior of the actor. Since the messages sent by different peers to the broker are also independent, the client and server side automata of the broker are defined per peer. Figure 5 shows the messages that a broker object may use to communicate with one peer.
On the server side, the broker specifies only that a peer must register its local information once. However, on the client side, the broker expects to receive requests (CReq messages) from a peer repeatedly. For each request, the broker connects it to a server, i.e., the ID of the server is sent as a parameter of the connect message to the client; outgoing parameters are not captured in the behavioral interface. For simplicity, we assume that a request by a client is always successful, i.e., every data item searched for is available. The connections between peers are transparent to the broker. Assuming that the client peer has obtained new data after this connection, it should update its registry at the broker (because it can now provide more data on its server side). The clock x is used to ensure a delay of at least 5 time units between sending the update message and the subsequent request.
Similarly, the server-side behavioral interface of a peer starts by registering its data with the broker to initialize its operation. Then it can receive requests (SReq messages) and send replies to other peers. We opt for a simple scenario, i.e., each server or client handles only one request at a time. The peer may accept an SReq message from any peer excluding itself, expressed by a select expression over the sender with the constraint s != self. It may only send a reply message to the same peer; this is ensured by means of the srv variable.
The behavioral interface of the peer is similar to that of the broker on the client side, too, except that it additionally models the communication with a server after a connection has been established. The client can send any number of requests per connection, although only one at a time. Furthermore, the incoming parameter of the connect message is also captured with a select expression (s:int[1,...] in Uppaal), which means that it may receive the ID of any peer. The global variable server is used for communicating this parameter (like the deadline variable).

Broker and peer actors

Figure 6 shows the method automata of the broker and peer. In this implementation, each method is modeled as a separate automaton. A method may start its behavior when it receives a signal on the start channel from the scheduler. After accomplishing its tasks, it sends a signal on the finish channel to the scheduler, which will select the next method for execution (see next subsection).
The initial, register and update methods take one time unit to execute. These methods do not perform any computation as we abstract from the data. The CReq method nondeterministically selects a server and sends a connect message back to the sender. The variable sender is set by the scheduler to refer to the sender of a message. The ID of the selected server is sent using the server variable.
In addition to initial, a peer implements the connect and reply methods as a client, and the SReq method as a server. Furthermore, the method userReq simulates a user who initiates a search request by sending a CReq message to the broker. The userReq message is sent first by the initial method and then by the reply method in order to create a loop. Notice that this implementation of a peer sends exactly one request per connection, while the behavioral interface allows for any number of requests.

Modeling the scheduler
A scheduler function, as described in the previous section, can be implemented as a scheduler automaton. This automaton also contains a queue. Figure 7 shows the general structure of a scheduler automaton; this general picture does not specify any particular scheduling strategy. The scheduler automaton applies the scheduling strategy at dispatch time (instead of at insertion time as in Definition 3), but since we only deal with non-preemptive schedulers, the resulting behavior, i.e., the order of processing messages, is the same. The reason for this choice is to enable using deadlines in the strategy: as explained earlier, the deadline value cannot be used (in a guard) on the same transition where a message is received.
Queue The queue is modeled using arrays in Uppaal and thus it can be modeled compactly, i.e., without different locations for different queue states. Tasks in the queue are modeled using the following arrays: q holds the message names, d holds their initial deadline values and clk consists of clocks that keep track of the time a task has been in the queue. The sender of every message is stored in the s array. If messages can have parameters, a p array is added for each parameter. We assume a maximum length of MAX for these arrays. As described in Sect. 4, a counter keeps track of how many tasks use each clock: a clock is free if its counter is zero, and when delegation is used, the counter becomes greater than one.
Initialization The initialization of a queue takes place in the initialize function. This transition is taken before any method in any actor is started; its start location enforces this (cf. Fig. 8). The synchronization between this transition and the method automata corresponds to the invocation rules in Fig. 3.
A similar transition accepts messages on the delegate channel (top-right in the picture). In this case, the clock already assigned to the currently running task (parent task) is assigned to the internal task (ca[tail] = ca[run]); this is handled in the function insertDelegate shown in Fig. 8. In a delegated task, no sender is specified (it is always self). The variable run shows the index of the currently running task in the queue (which is not necessarily the first task). This handles the rule delegation in Fig. 3.
Error The scheduler automaton moves to the Error state if a deadline is missed (clk[i] > d[i]). The guard counter[i] > 0 checks whether the corresponding clock is currently in use, i.e., assigned to a message in the queue. Furthermore, to make sure no queue overflow occurs, the property to check should include tail ≤ MAX.
Scheduling strategy When a message is added to an empty queue, the corresponding method is immediately started. When a method finishes (synchronizing on the finish channel), it is taken out of the queue (by shift(), given in Fig. 8). If the currently running method is the last in the queue, nothing needs to be selected (i.e., if tail == 1 we only need to shift). Otherwise, the next method to execute is chosen based on a specific scheduling strategy (by assigning the right value to run). For a concrete scheduler, the guard and update of run should be well defined. If run is always assigned 0 during context switch, the automaton serves as a First Come First Served (FCFS) scheduler. In an FCFS scheduler, the two transitions on the finish channel can be combined.
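The FCFS context switch can be sketched as follows (a Python illustration with names of our own choosing; the actual model uses the Uppaal function shift() and the run variable shown in Fig. 8):

```python
def shift(queue, run):
    """Remove the finished task at index `run`, compacting the queue."""
    return queue[:run] + queue[run + 1:]

def fcfs_context_switch(queue, run):
    """FCFS: after finishing, always run the head of the shifted queue;
    nothing is selected if the queue has emptied."""
    queue = shift(queue, run)
    run = 0 if queue else None
    return queue, run

queue, run = ["CReq", "update", "SReq"], 0
queue, run = fcfs_context_switch(queue, run)
assert (queue, run) == (["update", "SReq"], 0)
```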
A Fixed Priority Scheduler (FPS) can be implemented by associating a constant priority value with each method/task type. Suppose the array p represents the static priority of all methods, such that for a message q[i] in the queue, its priority can be obtained by p[q[i]]. We can then formulate the FPS strategy with a guard that selects a task a such that p[q[a]] ≥ p[q[i]] for every task i in the queue.

An Earliest Deadline First (EDF) scheduler always selects the task with the smallest remaining deadline. This is an example of dynamic priority scheduling, because the remaining deadline of a task gets smaller as time passes. The remaining deadline of message i is given by d[i] − clk[i], and the selected task a is the one for which, for all m in the queue, clk[a] − clk[m] ≥ d[a] − d[m]. Notice that clk[a] − clk[m] ≥ d[a] − d[m] is equivalent to d[m] − clk[m] ≥ d[a] − clk[a], i.e., a has the smallest remaining deadline. The rest of the guard ensures that an empty queue cell (i < tail) or the currently finished method (run) is not selected.
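The EDF selection rule can be sketched as follows (a Python illustration under our own naming; in the Uppaal model this is a guard over clock differences in the clk and d arrays):

```python
def edf_select(d, clk, tail, run):
    """Pick the task with the smallest remaining deadline d[i] - clk[i],
    skipping empty cells (i >= tail) and the finished task (run)."""
    candidates = [i for i in range(tail) if i != run]
    # clk[a] - clk[m] >= d[a] - d[m]  <=>  d[a] - clk[a] <= d[m] - clk[m]
    return min(candidates, key=lambda i: d[i] - clk[i])

d   = [10, 4, 9]
clk = [ 3, 1, 2]     # remaining deadlines: 7, 3, 7
assert edf_select(d, clk, tail=3, run=0) == 1
```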
After the next method to execute is selected, the context switch happens by starting the selected method. Having defined start as an urgent channel, the next method is immediately scheduled (if the queue is not empty) by taking the bottom-right transition in Fig. 7.

Compositional schedulability analysis
We can use the value d max /b min for bounding the queues of schedulers as explained in the previous section. These automata have a special location Error such that a missed deadline results in an error. The system is then schedulable if the Error location is not reachable and no queue overflow occurs. In other words, schedulability analysis is reduced to reachability analysis in a tool like Uppaal and thus it is decidable. However, the intrinsic asynchrony of actors and their message buffers will lead to state space explosion for larger systems. This can be avoided by compositional analysis of the actors. To this end, we use the behavioral interface of every actor as a contract between the actor and its environment. Below, we describe how to check whether, firstly the actor itself, and secondly the environment in which it is used, respect this contract. We refer to the latter as compatibility check.
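The queue bound can be illustrated with a hypothetical numeric example (the function and its inputs are ours, not part of the model; d_max is the largest deadline occurring in the system and b_min the shortest best-case execution time of any method):

```python
import math

def queue_bound(deadlines, best_case_times):
    """Upper bound on the number of pending tasks: with more than
    d_max / b_min queued tasks, some deadline is necessarily missed."""
    d_max = max(deadlines)
    b_min = min(best_case_times)
    return math.ceil(d_max / b_min)

# Hypothetical deadlines and best-case method execution times:
assert queue_bound([10, 25, 40], [5, 8, 12]) == 8   # ceil(40 / 5)
```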

Individual actor analysis
Analyzing actors in isolation is hindered by the fact that the methods of an actor can, in principle, be called in infinitely many ways. However, taking the behavioral interface as the contract to which the actor should adhere, it is reasonable to restrict attention to the incoming method calls specified in its behavioral interface. In other words, we use the behavioral interface as a driver whose input actions correspond to the incoming messages. Incoming messages are buffered in the actor; this can be interpreted as creating a new task for handling that message. The behavioral interface does not capture the internal tasks triggered by self calls. Therefore, one needs to consider both the internal tasks and the tasks triggered by the behavioral interface, which abstractly models the acceptable environments.
The semantics of an isolated actor can be defined as an isolated actor automaton. The states of this automaton are written as (l i , q i , b i ) where l i shows the current location of the currently running method, q i reflects the contents of the actor queue, and b i is the current location of the behavioral interface; a full characterization is given in [38].

Peers and broker
To analyze the broker, we need to know from how many peers it may receive requests. The automata representing the behavioral interfaces of the broker (cf. Fig. 5) need to be replicated for every peer, and these instances should be interleaved. For more efficient analysis, queue sizes smaller than d_max/b_min can be tried first; to handle three peers, it turns out that the broker can manage with a queue of size 7. For bigger systems, the model checking becomes very time-consuming and intractable; to improve efficiency one can follow the guidelines in the Credo methodology [29]. The analysis of each peer can be performed with one instance of its client-side and server-side behavioral interfaces. Table 1 summarizes the schedulability analysis times of different configurations. The table on the left shows the analysis of individual actors. The high level of asynchrony introduced by increasing the number of known actors for the broker results in an exponential increase in the analysis time. For comparison, the table on the right shows how long it would take to analyze a complete system. We can see that the analysis of a complete system takes much longer and becomes intractable faster than that of individual actors. The combination of model checking and testing for compositional schedulability analysis proposed in this paper is an effective way to overcome this problem.

Compatibility check
Once an actor is proved to be schedulable with respect to its behavioral interface, it can be used as an off-the-shelf component. A system composed of individually schedulable actors is itself schedulable if the actual use of the actors in this system is compatible with their behavioral interfaces. For each actor, the behavioral interface abstractly models its observable behavior in terms of the messages it may receive and the messages it sends. Ideally, it would be enough to check compatibility considering only the behavioral interfaces, for example, by checking deadlock freedom in the composition of the behavioral interfaces [58]. Unfortunately, this is not possible in a real-time system. We explain this using three sample pairs of behavioral interfaces shown in Fig. 9, which illustrate why compatibility cannot be checked at the level of behavioral interfaces. On the one hand, behavioral interfaces cannot be used to disprove compatibility. Consider the two automata in Fig. 9a. If both automata take their left transitions, i.e., communicate by message c, there will be a deadlock because of the mismatching provided and required messages. However, since these behavioral interfaces are abstractions of object behaviors, such a mismatch could be due to a spurious behavior that is not possible in the real system model. In other words, the implementations of these behavioral interfaces could happily communicate only a and b messages.
On the other hand, behavioral interfaces cannot be used to prove compatibility, either. For example, the automata in Fig. 9b can be composed with no problem, e.g., no deadlock occurs. Such a compatibility means that compatible implementations exist; but this does not guarantee compatibility of every possible implementation. An actor implementing the top interface may be too fast and send a outside the time constraint required by the bottom interface. In general, a behavioral interface does not reflect the precise timing of the send action by the real system model. In [23], this problem is avoided by requiring the specifications to be input-enabled, as in Fig. 9c where unacceptable inputs lead to an error location. This is, however, too restrictive because, for example, it makes the example in Fig. 9c incompatible, whereas we know already that compatible implementations exist.
The only solution to this problem is to take both the system composition and the behavioral interfaces into account. Intuitively, a running system of actors is compatible with their behavioral interfaces if its observable behavior is captured by the composition of the behavioral interfaces of the participating actors. Checking compatibility is prone to state-space explosion due to the size of the system; we avoid this by means of the testing technique described in Sect. 5. We first formally define compatibility in terms of a refinement relation.
Quiescence in refinement In the context of timed automata, an observable behavior is either an observable action (any action except τ ), the passage of time (a delay) or blocking (also called quiescence, i.e., the absence of observable actions). Actions and delays are already taken into account in timed traces, which describe the possible distribution of observable actions in time. To be able to take blocking into account, we need to represent them explicitly in the specification. There are three different scenarios for blocking a real-time system:

- Deadlock: no action is possible but time can go on.
- Timelock: time is stopped; no action and no delay is possible (on the left-hand side of Fig. 10, the synchronization cannot happen due to the mismatching invariant and guard).
- Zeno-timelock: infinitely many actions can occur in finite time.
Deadlocks and timelocks can occur as a result of composing the behavioral interfaces, and are therefore allowed in our refinement definition; the specifications, however, should not include zeno-timelocks (Fig. 10 gives examples of building suspension automata; we do not allow zeno-timelocks in our models). Approaches like [65] can be employed to make sure zeno-timelocks do not appear in the specification. For a correct refinement, the system may deadlock (resp. timelock) only if the composition of behavioral interfaces deadlocks (resp. timelocks). Locations with a deadlock or timelock are called quiescent. To explicitly specify quiescence in the specification, we add a loop on each blocking location labeled by a new action δ, which is considered an observable action [64] (see the right-hand side of Fig. 10). The automaton obtained by adding δ actions is called a suspension automaton.

Definition 8 (Suspension automaton) Let A = (L, l_0, E, I) be a timed automaton over clocks C and actions Act. The suspension automaton of A is the timed automaton Δ(A) = (L, l_0, E′, I) over C and Act ∪ {δ}, where E′ extends E with a δ-labeled self-loop on every quiescent location.

The traces of the suspension automaton, called suspension traces, represent all the observable behaviors of this automaton. The set of all suspension traces of a timed automaton A is the set Traces(Δ(A)), denoted by STraces(A).
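The construction of a suspension automaton can be sketched as follows (a minimal Python illustration with our own encoding of locations and edges as tuples; the set of quiescent locations is assumed to be given):

```python
def suspension_automaton(locations, edges, quiescent):
    """Add a delta-labeled self-loop on every quiescent (blocking) location,
    making the absence of observable actions itself observable."""
    delta_loops = [(l, "delta", l) for l in locations if l in quiescent]
    return edges + delta_loops

locs = ["l0", "l1"]
edges = [("l0", "a", "l1")]
susp = suspension_automaton(locs, edges, quiescent={"l1"})
assert ("l1", "delta", "l1") in susp
assert ("l0", "delta", "l0") not in susp
```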
In our models, S represents the system composition, and B is the synchronous product of the behavioral interfaces. Thus B and S represent two timed automata on the same set of actions, while S also contains τ actions. In the definition of refinement below, we consider observable suspension traces of S and all suspension traces of B.
Further we need to consider deadlines in refinement. Recall that in the behavioral interfaces, only input actions are assigned deadlines, whereas in the system of actors, the output actions in method automata have deadlines. Input actions in B provide the guaranteed deadlines (checked during individual actor analysis) whereas output actions in methods specify the required deadlines. In the definition of refinement below, we define that a deadline required by an action in S may not be smaller than the deadline guaranteed by a matching action in B. The actors used in a system are proved individually schedulable with respect to their behavioral interfaces (as explained in Sect. 4.1). The following theorem states that compatibility implies schedulability of the whole system provided that individual behavioral object models are schedulable. Intuitively, this means that every message in the system will be finished within the designated deadline.

Theorem 1 (System schedulability) A system composed of a set of actors O 1 , . . . , O n is schedulable, if every actor O i is individually schedulable and the system is compatible with the behavioral interfaces of the actors.
Proof Assume, towards a contradiction, that the system is compatible but not schedulable, while all actors are individually schedulable. This means that there is a trace σ = t_1 a_1 t_2 . . . a_k t_{k+1} of the system automaton (cf. Sect. 3.1) in which one of the actors, say O_j, drives the system to the Error state, i.e., either the queue of O_j overflows or a task in its queue misses its deadline. We show that this requires the existence of a trace in B_j that drives O_j to the Error state, which contradicts the schedulability assumption.
Due to compatibility, σ_obs exists in the product of behavioral interfaces. This trace can be projected onto the behavioral interface of O_j alone by removing the delay-actions t_i a_i for every a_i that is not in the action set of the behavioral interface of O_j; call the resulting trace σ_obs|_j. We can compute the set of traces of the 'isolated actor automaton' of O_j (cf. Sect. 3) as T = {ϕ_i : ϕ_i^obs = σ_obs|_j}. Since the actor is individually schedulable, these traces do not lead to the Error state, i.e., given this sequence of inputs and outputs, none of the tasks in the queue of O_j misses its deadline, nor does a queue overflow occur.
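The projection used in this proof can be sketched as follows (a Python illustration under our own encoding of timed traces as delay/action pairs; delays preceding removed actions are folded into the next retained delay):

```python
def project(trace, actions_j):
    """Project a timed trace t1 a1 t2 a2 ... onto one actor's action set:
    delays of removed steps accumulate into the next retained step."""
    out, pending_delay = [], 0
    for delay, action in trace:
        pending_delay += delay
        if action in actions_j:
            out.append((pending_delay, action))
            pending_delay = 0
    return out

trace = [(1, "register"), (2, "CReq"), (3, "connect")]
# Keep only the actions belonging to one actor's behavioral interface:
assert project(trace, {"register", "connect"}) == [(1, "register"), (5, "connect")]
```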
On the other hand, the trace σ corresponds to a run of the system in which l_i^j denotes the location of actor j after i steps; formally, l_i^j = (s, Q), where s is the location of its currently running task and Q is the current task queue (for brevity, the delay transitions are not shown). We project the trace σ = t_1 a_1 t_2 . . . a_k t_{k+1} onto the actions of O_j by removing the delay-actions t_i a_i such that l_{i−1}^j = l_i^j, and represent the resulting trace as σ|_j. By the definition of the system automaton, we can show that σ|_j ∈ T. But σ|_j drives O_j to the Error state, which is in contradiction with the schedulability assumption.
Since B is deterministic, checking trace inclusion becomes decidable [7,62], but due to the size of S, it may be susceptible to state-space explosion. To avoid this, we propose a method for testing trace inclusion in the next section. In particular, we want to be able to exhibit a counter-example if some incompatibility is found.

Counter-example oriented testing: compatibility check
We show in this section how compatibility, defined as a refinement relation based on trace inclusion, can be tested. Trace inclusion is a usual notion of correctness (or conformance) between a system and its abstract specification B in formal testing frameworks [32]. A naive approach would take a trace from the system model and check whether it is a valid trace of the abstract specification. This is not practical, because the system model is very big and is not a suitable source for generating test cases. Our idea is to generate test cases based on traces from B and use them to restrict the system behavior. As long as the system can follow such a trace, the test case looks for possible violations of the refinement relation. We formally define rigidness in this section as a characteristic of counter-example oriented testing. First, we formally define a test case.
We are given two timed automata: one is the system of actors that we call the model under test MUT, and the other is the product of the behavioral interfaces of these actors. To test compatibility, we take the suspension automaton of the latter as the abstract specification. We denote this by B, which is then a deterministic timed automaton over the action set Act B .
MUT is a timed automaton over the set of actions Act MUT = Act B ∪ {τ }. A test case is a deterministic timed automaton without loops whose leaves are labeled with verdicts.
Definition 11 (Test case) Let B be a timed automaton over Act B . A test case for B is a deterministic acyclic timed automaton TC = (L , l 0 , E, I ) over Act B ∪ {τ }, in which all leaf locations (i.e., those with no outgoing transitions) are labeled with a verdict Pass or Fail. We refer to a set of test cases as a test set.
A verdict labeling a location allows us to evaluate an execution of the test case terminating on this location. The Pass verdict is reachable via only one path, which covers exactly the intended behavior we are testing for. This means that the system fulfilled the test case requirements. To find a counter-example to refinement, we need to search for locations marked Fail. These are the locations that are reachable with forbidden behaviors of the system (a non-specified action or an action happening outside its time constraints in the specification of B, for example). If the system deviates from the behavior aimed at by the test case without violating refinement, the test may terminate prematurely resulting in an inconclusive verdict. This is similar to the 'timed failures' used in [59] as a semantic model for timed CSP.
Recall that proving compatibility implies schedulability of the system. However, violating refinement and thus compatibility does not per se imply the violation of schedulability. Nevertheless, considering the assume-guarantee approach, it does violate the assumptions on schedulability of individual actors specified in the behavioral interfaces. Therefore, by means of testing one can find and remove counter-examples to compatibility and, as a result, attain more confidence in schedulability of the system.
The fact that a test case is deterministic means that from any location l, for any action a ∈ Act_B, all transitions from l labeled by a, as well as the transition labeled by τ if any, have disjoint guards: given an action a ∈ Act_B and a location l ∈ L_B, any two guards g_1 and g_2 of distinct a-labeled transitions from l satisfy that g_1 ∧ g_2 is unsatisfiable.

Generating a test case
The idea is to take a timed trace from the abstract specification B and turn it into a test case (cf. Fig. 11). Since the exact timing of actions in a timed trace would make the test case too restrictive, we instead take a linear timed automaton T. This consists of a sequence of transitions from B, representing a set of timed traces but corresponding to exactly one untimed trace. As shown in Fig. 11, T contains a sequence of transitions from l_{i−1} to l_i; such a sequence abstractly represents a desired system behavior (the test purpose).
The sequence of transitions of T corresponds to the behavior we want to test, so the last location must be labeled Pass. All other locations are completed as follows, such that any forbidden behavior makes the test fail. If a location has an invariant h_i in B, violating this invariant must make the test fail; thus, a transition labeled with τ and guarded by ¬h_i leading to Fail is added. Furthermore, no other transition may be taken if the invariant is violated; this is ensured by taking the conjunction of the guards of all other transitions with h_i. Additionally, every behavior which is not allowed in B is forbidden: for every action, a transition labeled by this action, whose guard (denoted g_f) is the complement of all the existing guards for this action, leads to a Fail location. Any trace leading to Fail is an example of behavior not allowed in the abstract specification.
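The completion step can be sketched as follows (a Python illustration restricted to interval guards over a single clock x; all names are ours, and the real algorithm of Fig. 11 handles arbitrary clock constraints):

```python
def complete_location(allowed, actions, invariant=None):
    """Complete one test-case location: for each action, a Fail transition
    guarded by the complement of the allowed interval; if the location has
    an invariant x <= invariant, a tau transition to Fail when violated."""
    fail_edges = []
    for a in actions:
        lo, hi = allowed.get(a, (None, None))
        if lo is None:                       # action never allowed here
            fail_edges.append((a, "true", "Fail"))
        else:                                # complement of [lo, hi]
            fail_edges.append((a, f"x < {lo} || x > {hi}", "Fail"))
    if invariant is not None:
        fail_edges.append(("tau", f"x > {invariant}", "Fail"))
    return fail_edges

edges = complete_location({"a": (2, 5)}, actions=["a", "b"], invariant=10)
assert ("b", "true", "Fail") in edges
assert ("a", "x < 2 || x > 5", "Fail") in edges
assert ("tau", "x > 10", "Fail") in edges
```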
Example 1 In Fig. 12, B shows the suspension automaton of an abstract specification, T is a selected sequence of transitions from B and finally TC is the test case generated from T by the algorithm. The location invariant x ≤ 10 is kept in bold face in the test case only to show its effect on guards. The transition labeled by Act stands for three transitions labeled by a, b and c with guard true.
The theorem below states that the timed automaton we obtain by this construction is a test case in the sense of Definition 11; the proof is straightforward by following the steps in the algorithm.

Theorem 2 The timed automaton TC generated from T and B by the algorithm in Fig. 11 is a test case.

Remark The starting point for test case generation is a sequence of transitions in the specification. Given a desired reachability property ϕ, we can generate such a sequence of transitions automatically. We start by model checking ϕ on the suspension automaton of the specification B. The diagnostic trace produced by the model-checking tool gives the sequence of moves that have to be made by this automaton and the required clock constraints needed to reach the targeted location. This method is in part similar to [32]. Instead of checking a reachability property, one can also use the simulation feature of a model checker to generate specific hand-made traces. Another interesting property is a deadlock or timelock in the abstract specification B. Although a correct refinement is theoretically allowed to deadlock or timelock in such cases too, such situations are in practice undesired; therefore, such traces can also contribute good test cases for checking system correctness. Note that a timelock in our models does not violate schedulability, because when time stops no deadline is missed, but such a scenario is in fact unrealistic.

Properties of the generated test cases
A test case must drive the execution of the system such that actions happen in the specified order. Usually, this happens by feeding inputs to the system and observing the outputs. In our case, we deal with a closed system which has no inputs to be controlled. Instead, since we deal with a system model, we drive the system execution by making it synchronize with the test case. Formally, the execution of a test case on the system is defined as the parallel composition of the test case automaton and the system automaton, synchronizing on common actions. We denote the product automaton by TC ∥ MUT.
The model under test passes the test, denoted MUT passes TC, if and only if the Fail location is not reachable in the product TC ∥ MUT. Given a test set T (a set of test cases), the model under test passes T, denoted MUT passes T, if and only if MUT passes every test case TC in T.

Soundness
The soundness requirement for a test set states that it must not reject a correct refinement. In other words, any counter-example reported by a test case (a trace leading to the Fail verdict) should indeed violate the refinement. A test case is formally defined to be sound (or unbiased) for the refinement relation if and only if

MUT ⊑ B ⇒ MUT passes TC
A test set T is sound if and only if all test cases in T are sound.

Theorem 3 (Soundness) Let B be a deterministic timed automaton and T be a linear timed automaton built from a sequence of transitions in B. The test case TC generated from T and B by the algorithm in Fig. 11 is sound for ⊑.
Proof We assume that MUT does not pass TC, i.e., there is a trace in TC ∥ MUT that leads to the Fail location l_f = (l, f). This trace can be decomposed into its MUT and TC components. Every location in TC, except for f, can be mapped to a location in B; since B is deterministic, this mapping is unique. The diagram below shows the decomposition of this trace and the mapping to B.
First, suppose location li has an invariant h that is violated by ui + di+1. Let T be a sequence of i+1 transitions from B such that t1 a1 ... ti ai ∈ STraces(T), and let TC be the test case generated from T by the algorithm. Since location li has the invariant h, there is a transition in TC from li to location f labeled with τ whose guard is ¬h. As ui + di+1 does not satisfy h, location f is reachable in the product TC ∥ MUT with the trace t1 a1 ... ti ai ti+1 τ.
Otherwise, the action ai+1 is not allowed in B at time ti+1, i.e., there is no transition in B from location li labeled with ai+1 whose guard is satisfied by ui + di+1. Let T be a sequence of i+1 transitions from B such that t1 a1 ... ti ai ti+1 ∈ STraces(T), and let TC be the test case generated from T by the algorithm. By construction of TC, there is a transition from location li to location f labeled with ai+1 whose guard, call it g, is the complement of all other guards of transitions from li labeled with ai+1. Since ui + di+1 does not satisfy any of these guards, it satisfies g. Then location f is reachable in the product TC ∥ MUT with the trace t1 a1 ... ti ai ti+1 ai+1. Therefore, the set of all test cases generated by the algorithm is exhaustive for ⊑.
A sound and exhaustive test set is called complete. Completeness is in general impossible to achieve, since it usually requires an infinite test set; we therefore know that we cannot in practice find all counterexamples to refinement. However, we still want to ensure a certain quality of test cases. For instance, we want to avoid useless sound test cases in which all paths lead to Pass. Below, we introduce a more practical property for test cases.

Rigidness
We are interested in test cases that reject models which behave in a wrong way along the test case: the test case should not say Pass if it is possible to detect something wrong during the test case execution. We show that any test case generated by our algorithm can detect every wrong behavior occurring along it. In fact, we can provide a counterexample for any incorrect refinement occurring along the sequence of transitions the test case is built from. Given a trace σ ∈ STraces(B), an incorrect refinement is formally characterized as an action or delay e ∈ Act ∪ {δ} ∪ R+ that is allowed after σ in MUT but not in B, i.e., σ.e ∈ STracesobs(MUT) \ STraces(B). A test case TC is rigid for the refinement relation ⊑ if and only if it rejects any incorrect refinement along the traces of the test case. Intuitively, if σ ends in a non-leaf location in TC, the test case TC will observe any one-step divergence after σ. This notion is close to the notions of non-laxness in the untimed setting [41] and of strictness in the timed setting [46], but it is stronger. Those notions state that if the system behaves in a non-conforming way during the execution of the test case, it must be rejected. In our framework, too, every detected divergence leads to the rejection of the system, but in addition every divergence is actually detected. This result follows directly from the construction of the test case. The test case in Fig. 11 is rigid for ⊑.

Theorem 5 (Rigidness) Let B be a deterministic timed automaton and T be a linear timed automaton built from a sequence of transitions in B. The test case for B generated from T by the algorithm is rigid for ⊑.
Proof We show that for every trace σ of the test case TC ending in a non-leaf location, σ.e is a trace of TC leading to Fail if σ.e is a trace of MUT and not of B.
If e ∈ Act ∪ {δ}, let σ = t1 a1 ... tk ∈ Traces(TC), where k − 1 = n. Since σ.e is not a trace of B, the action e is not allowed in B at location lk−1 after a delay of dk. Then, by construction of the test case TC, there is a transition from lk−1 to the Fail location labeled with action e whose guard is satisfied by uk−1. Then σ.e is a trace of TC and l0 −σ.e→ Fail. If e ∈ R+, let σ = t1 a1 ... ak ∈ Traces(TC), where k = n. Since σ.e is not a trace of B, a delay of e is not allowed in B at location lk, due to an invariant h at this location in B. Then, by construction of the test case TC, there is a transition from lk to the Fail location labeled with τ whose guard ¬h is satisfied by uk. Then σ.e is a trace of TC and l0 −σ.e→ Fail.

Executing test cases in Uppaal
Recall that we are testing the inclusion of the observable traces of a system S in the traces of a specification B. Test cases generated from B are used to restrict the system behavior and, at the same time, detect any violation of refinement along the test case. When submitting a test case, we require that any communication between two actors also synchronizes with the test case. Practically, this means that the sender actor (in one of its methods), the receiver actor (in its scheduler) and the test case should synchronize. Since we do not want to change the specification of the model under test, we solve the problem of three-way synchronization by splitting every action in the test case into two steps. In the first step, the sender actor synchronizes with the test case, and immediately afterwards the test case synchronizes with the receiver actor. The urgency between these two steps is modeled by a 'committed' location in the test case. For the test case to be able to intercept the messages, we bind all known actors to refer to the test case (see Fig. 13).
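Stepping outside the automata setting, the two-step interception can be sketched in ordinary code. The sketch below is hypothetical (the class and method names are ours, not part of the model): a relay stands in for the test case, and a lock makes the two steps indivisible, playing the role of the committed location:

```python
import queue
import threading

class TestCaseRelay:
    """Hypothetical sketch of two-step interception: the sender first
    synchronizes with the test case (step 1), which immediately forwards
    the message to the receiver's scheduler queue (step 2). The lock
    makes the two steps atomic, like a 'committed' location in Uppaal."""

    def __init__(self):
        self.lock = threading.Lock()  # enforces urgency between the two steps
        self.observed = []            # trace observed by the test case
        self.inboxes = {}             # actor id -> scheduler queue

    def bind(self, actor_id):
        # known actors are bound to refer to the test case
        self.inboxes[actor_id] = queue.Queue()

    def send(self, sender, receiver, msg):
        with self.lock:
            self.observed.append((sender, receiver, msg))  # step 1: sync with test case
            self.inboxes[receiver].put((sender, msg))      # step 2: sync with receiver
```

Because every message passes through the relay, the test case both observes the communication order and can refuse to forward messages that fall outside the scenario it encodes.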
Although a test case is deterministic and its synchronization with the system resolves part of the non-determinism, the final model is not yet completely deterministic. The actor implementations may also contain some internal choices that are not controlled by the test case. In principle, this would mean that a test case needs to be repeated several times and a coverage of different nondeterministic choices requires extra control over the system behavior. To avoid this problem, we take advantage of the model checking capability of Uppaal as explained in the sequel.
What is important in the execution of a test case is the final verdict. Reaching a Pass or Fail verdict can in fact be formulated as a reachability property in Uppaal. This way, each test case needs to be submitted only once. The Uppaal model checker provides a diagnostic trace whenever the sought verdict is reachable. A trace leading to the Fail verdict shows exactly how and when the system is not compatible.

Fig. 13: In this test case, the broker actor is assigned 0 while the peer actors are assigned 1, 2 and 3. In order to intercept the messages between actors, the test automaton represents each actor by adding 4 to its ID (when binding the known actors). For the sake of simplicity, this test case does not include all possible violations of refinement that lead to the Fail verdict.

Using the model checker in this scenario is plausible because the system behavior is controlled by the test case, while model checking the whole system may not be tractable. We thus avoid state-space explosion by restricting verification to the part of the system behavior that follows the main line of the test case.
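Assuming the test-case template is instantiated under the name TC (the name is illustrative), the verdicts can be phrased in Uppaal's requirement language as reachability queries:

```
E<> TC.Fail  // is some run reaching the Fail verdict? (counter-example to refinement)
E<> TC.Pass  // is some run reaching the Pass verdict?
```

If the first query is satisfied, the diagnostic trace produced by the verifier is exactly the counter-example to compatibility discussed above.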

Testing compatibility for the peer-to-peer system
In this section, we give a sample test case generated for a system consisting of 3 peers and a broker. We have already demonstrated in Table 1 that model checking the whole system runs out of system resources and is not feasible for 3 or more peers. The test case in Fig. 13 is generated from the composition of the behavioral interfaces of peer and broker (cf. Fig. 5) considering three peers. Execution of this test case takes less than a second.
As mentioned earlier, the server side behavioral interface of a peer allows only one request at a time. This means that two SReq messages may not be sent to the same peer before it has replied to the first one. The scenario captured in this test case is designed to check this property for peer number 2 (in Step 5).
This test case starts with registering the servers at the broker, followed by requests from clients to the broker. Then the broker replies to these requests, passing the selected server on via the server variable. Since server is defined as a meta variable, the test case uses a temporary variable ps in order to pass this value on to the clients. If a behavioral interface specification required special conditions on the values of this parameter, the test case would also check those conditions. Finally, if two SReq messages are sent to peer number 2, the test fails, i.e., there is a counter-example to compatibility. The test passes if server 2 replies to the first request.
Given the nondeterministic selection of a server in the CReq method (cf. Fig. 6), the test case in Fig. 13 will fail. The reason is that the broker may assign the same server to multiple clients which may independently send a request to this server. One simple solution to this incompatibility would be using a round-robin assignment of the servers by the broker to the incoming requests. With this strategy, the test case in Fig. 13 does not lead to the Fail verdict anymore.

Related work
There has been a great deal of work on scheduling in real-time systems [13]. The main characteristic of our work is that we address schedulability at the modeling level, as in [6,25,27,56], whereas [21,45] apply to programming languages. This results in a major methodological difference. In the latter case, analysis is performed only after the software has been developed: a given application is augmented with real-time requirements (like deadlines) and automata are derived from code. This approach can be useful, specifically for legacy software. In contrast, we use automata for platform-independent modeling of actors and their behavioral interfaces at the design stage. One can thus boost application performance by fine-tuning the scheduling in the early steps of development, at a much lower cost. With this in mind, we compare our schedulability analysis and testing methods with the most relevant related work, focusing on different aspects of our approach.
Task specifications Schedulability analysis results depend on the assumptions on task generation. Many approaches are based on simple task generation patterns like periodic tasks and rate-monotonic analysis [27,61]. Although useful in many cases, such techniques are too coarse for many distributed systems and produce pessimistic outcomes. In our work, behavioral interfaces specify how tasks may be generated in an actor. Since our interfaces build on task automata [25], we can describe non-uniformly recurring tasks, which enables much more accurate analysis. We also extend the decidability results by Fersman et al. [25] and show that our analysis (based on non-preemptive scheduling) can be reduced to checking reachability in timed automata, and is therefore decidable [31]. Although [21,45] also use automata, they do not discuss decidability.
The main difference between our work and task automata is that in our framework tasks are specified as timed automata. Therefore tasks can trigger other (internal) tasks during execution, which may inherit the (remaining) deadline of the task generating them (called delegation). In task automata, a task is completely abstracted away into an execution time, and generation of all tasks is captured in the task automaton. In our approach, internal tasks cannot be captured in the behavioral interfaces, because their arrival depends on the scheduling of the parent tasks, which in turn depends on the selected scheduling strategy. Our approach is therefore strictly more expressive than task automata.
Compositionality Schedulability has usually been analyzed for a whole system running on a single processor, whether at the modeling [6,25] or programming level [21,45]. We address distributed systems where each actor has a dedicated processor and scheduling policy. In our approach, behavioral interfaces are key to compositionality. They model the most general message arrival pattern for actors. They can be viewed as a contract as in 'design by contract' [53] or as a most general assumption in modular model checking [48] (based on assume-guarantee reasoning); schedulability is guaranteed if the actual use of the actor (i.e., at the method definition level) satisfies the assumption in the behavioral interface.
The approach in [56] is modular in the sense that the untimed specification of the actors, and the timing constraints (specified separately) can be reused. However, they still analyze a complete system, rather than individual actors. Furthermore, a deadline in their framework includes only the time until an event is received. Hence, their approach cannot address complications like delegation of a task to subtasks. Another related work is TAXYS [21], where an abstract model of the environment is used for schedulability analysis. However, it is used to analyze a complete program and is not used compositionally.
Interface design Our behavioral interfaces are similar to Timed Interface Automata [24], but the notion of compatibility is different. De Alfaro et al. [24] take an optimistic approach in which two interfaces are compatible if there is some way for them to work properly. This leads to a simpler theory, but to implement these interfaces one needs to adhere to those possibilities in order to end up with a working system. David et al. [23] suggest making specifications input-enabled by adding an Error state and directing every undesired behavior to that state. They define two specifications to be compatible if their composition does not reach the Error state. This is unfortunately too restrictive for high-level specifications; abstract behavioral interfaces easily fall into spurious incompatibilities, whereas their implementations may still work together. Our approach bridges the gap between these two methods by considering the actual implementation of actors. We check whether the implementations at hand, when composed, indeed follow the behavior that makes their interfaces compatible (w.r.t. the optimistic approach of [24]). Finally, timed actor interfaces in [28] are defined to accept early actions. This is an orthogonal feature and can be combined with our notion of behavioral interfaces if desired.
Analyzing the composition of concurrent objects is subject to state-space explosion because of their asynchronous nature and their queues. We discussed in [36] necessary conditions for compositional model checking of refinement; however, these conditions are not sufficient to prove refinement in all cases. In this paper we proposed a sound and complete testing technique for compatibility, based on finding counter-examples to refinement, as positioned below.
Testing real-time systems In real-time systems, conformance (or refinement) is tested in terms of both allowed actions and correct timings. Different conformance relations have been investigated: timed bisimulation [14], may and must preorders [55], timed trace inclusion [15,32,44], and timed extensions of Tretmans' conformance relation ioco [47,60,64]. The main difference here is that our notion of refinement takes deadlines on actions into account. Similarly, timelocks (and sometimes also deadlocks) are generally considered errors in a specification. When testing, the specification and the implementation are usually assumed to be non-blocking, meaning that they will never block time in any environment. However, since our specification is obtained as the synchronous product of behavioral interfaces, such cases can happen. It therefore makes sense to allow the same "errors" in the system as in its specification: our conformance relation takes into account the presence of deadlocks and timelocks, and allows them in the system whenever they exist in the specification.
A naive approach to testing refinement is to take a trace from the concrete system and check whether it also exists in the abstract specification. This approach is impractical because the concrete system is too complicated. Therefore, as is usual in the literature, e.g., in ioco testing, we generate test cases from the abstract specification. A main difference is that these techniques are designed for testing an open system, i.e., by feeding inputs and observing outputs, whereas we deal with a closed system that has no inputs and outputs. In this respect, we take advantage of the fact that the system under test is itself a model in our case (and not an implementation in, say, Java or C), so we can use a tool like Uppaal and drive the system execution by synchronizing on the actions of the test case. This greatly restricts the system behavior, although it remains nondeterministic. The restricted system behavior is then amenable to model checking in order to address the remaining nondeterminism during test submission.
A similar approach of using testing techniques to avoid state-space explosion in the analysis of real-time system models has been followed by Clarke and Lee [20], in the setting of a real-time process-algebraic formalism called ACSR (the Algebra of Communicating Shared Resources). They focus on testing timing constraints of real-time systems, deriving time-efficient test cases from a graphical representation of those constraints and defining coverage criteria over time domains.
The soundness of our testing method depends on the determinism of the behavioral interfaces. The problem of determinizability of arbitrary timed automata is undecidable [26,66], so in this paper we require the behavioral interfaces to be given as deterministic automata. To relax this requirement, one can consider the class of determinizable timed automata as in [44], or use digital test cases [47], where time is discrete to answer the implementability of test cases. Alternatively, automated over-approximation techniques as in [12] can be employed.

Conclusions
The main contribution of this work is the integration of the abstract formalism of timed automata into a high-level object-based modeling paradigm (along the same lines as typestate-oriented programming [5]). On the one hand, the abstraction level of automata theory enables us to provide powerful analysis techniques and, specifically, less pessimistic schedulability analysis compared to traditional approaches. On the other hand, we augment the successful actor-based approach to object orientation (as in Scala and Erlang) with application-level scheduling, as also motivated by resource-aware programming techniques.
We presented a complete framework for compositional schedulability analysis of distributed systems. Schedulability of each actor is analyzed individually with respect to its behavioral interface. This is made feasible by putting a finite bound on the task queue such that the schedulability results hold for any queue length. We can then test a system of communicating objects to make sure the objects are used as expected. This compatibility in turn implies the schedulability of the whole system. In this paper, we specifically gave a detailed account of a novel counter-example-oriented technique for testing refinement as the basis for the compatibility check. To this end, we gave an algorithm to generate sound, complete and rigid test cases. Overall, we envisage such maximal use of verification combined with testing as a promising approach to deal with the state-space explosion problem.
As future work, we are planning to integrate this high-level analysis framework into our implementation of application-level scheduling on top of Java [57]. The integrated tool suite will span a complete software development cycle and will be a basis for developing safety-critical real-time distributed and embedded systems. The main advantage of such a tool suite is that the designer/programmer gains direct control over scheduling throughout the development cycle, which is in turn the key to efficiency of software running on future multi-core and cloud infrastructures. Nonetheless, it will in practice operate on a best-effort basis, as true runtime guarantees depend on the exact operating system and hardware.
A specific and interesting case where we can apply our approach is in modeling TinyOS [18,35]. TinyOS is an actor-based open-source runtime environment designed for sensor networks and has a large user base of over 500 research groups and companies [18]. The event-driven execution model of TinyOS enables fine-grained power management, yet allows the scheduling flexibility made necessary by the unpredictable nature of wireless communication and physical-world interfaces. Modeling a TinyOS instance as an actor, we can define its scheduling policy explicitly; the designer is thus able to introduce and analyze different scheduling policies.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.