Learning One-Clock Timed Automata

We present an algorithm for active learning of deterministic timed automata with a single clock. The algorithm is within the framework of Angluin's L* algorithm and inspired by existing work on the active learning of symbolic automata. Due to the need to guess, for each transition, whether it resets the clock, the algorithm is of exponential complexity in the size of the learned automata. Before presenting this algorithm, we propose a simpler version where the teacher is assumed to be smart in the sense of being able to provide the reset information. We show that this simpler setting yields a polynomial complexity of the learning process. Both algorithms are implemented and evaluated on a collection of randomly generated examples. We furthermore demonstrate the simpler algorithm on the functional specification of the TCP protocol.


Introduction
In her seminal work [9], Angluin introduced the L* algorithm for learning a regular language from queries and counterexamples within a query-answering framework. Angluin-style learning is therefore also termed active learning or query learning, which is distinguished from passive learning, i.e., generating a model from a given data set. Following this line of research, an increasing number of efficient active learning methods (cf. [37]) have been proposed to learn, e.g., Mealy machines [33,29], I/O automata [2], register automata [24,1,14], nondeterministic finite automata [11], Büchi automata [18,27], symbolic automata [28,17,10] and Markov decision processes [35], to name just a few. Full-fledged libraries, tools and applications are also available for automata-learning tasks [12,26,19,20].
For real-time systems, where timing constraints play a key role, however, learning a formal model is much more complicated. As a classical model for real-time systems, timed automata [4] have an infinite set of timed actions. This yields a fundamental difference to finite automata featuring finite alphabets. Moreover, it is difficult to detect resets of clock variables from the observable behaviors of the system. This makes learning formal models of timed systems a challenging yet interesting problem.
Various attempts have been carried out in the literature on learning timed models, which can be classified into two tracks. The first track pursues active learning methods, e.g., [21] for learning event-recording automata (ERA) [5] and [8] for learning real-time automata (RTA) [16]. ERA are timed automata where, for every untimed action a, a clock is used to record the time of the last occurrence of a. The underlying learning algorithm [5], however, is prohibitively complex due to too many degrees of freedom and multiple clocks for recording events. RTA are a special class of timed automata with one clock that records the execution time of each action, by being reset at the start of the action. The other track pursues passive learning. In [41,40], an algorithm was proposed to learn deterministic RTA. The basic idea is that the learner organizes a tree sketching traces of the data set while merging nodes of the tree following a certain heuristic function. A passive learning algorithm for timed automata with one clock was further proposed in [38,39]. A common weakness of passive learning methods is that the generated model merely accepts all positive traces and rejects all negative ones of the given set of traces, without guaranteeing that it is a correct model of the target system. A theoretical result was established in [39] showing that it is possible to obtain the target system by continuously enriching the data set; however, the number of iterations needed is unknown. In addition, the passive learning methods cited above concern only discrete-time semantics of the underlying timed models, i.e., the clock takes values from the non-negative integers. We furthermore refer the readers to [13,31] for learning specialized forms of practical timed systems in a passive manner, [36] for passively learning timed automata using genetic programming, which scales to automata of large sizes, [32] for learning probabilistic real-time automata incorporating clustering techniques from machine learning, and [35] for L*-based learning of Markov decision processes with testing and sampling.
In this paper, we present the first active learning method for deterministic one-clock timed automata (DOTAs) under continuous-time semantics. Such timed automata provide simple models while preserving adequate expressiveness, and therefore have been widely used for practical real-time systems [34,3,15]. We present our approach in two steps. First, we describe a simpler algorithm, under the assumption that the teacher is smart in the sense of being able to provide information about clock resets in membership and equivalence queries. The basic idea is as follows. We define the reset-logical-timed language of a DOTA and show that the timed languages of two DOTAs are equivalent if their reset-logical-timed languages are equivalent, which reduces the learning problem to that of learning a reset-logical-timed language. Then we show how to learn the reset-logical-timed language following Maler and D'Antoni's learning algorithms for symbolic automata [28,17]. We claim the correctness, termination and polynomial complexity of this learning algorithm. Next, we extend this algorithm to the case of a normal teacher. The main difference is that the learner now needs to guess the reset information on transitions discovered in the observation table. Due to these guesses, the latter algorithm features exponential complexity in the size of the learned automata. The proposed learning methods are implemented and evaluated on randomly generated examples. We also demonstrate the simpler, polynomial algorithm on a practical case study concerning the functional specification of the TCP protocol. Detailed proofs of the theorems and lemmas in this paper can be found in Appendix A of the full version [7].
In what follows, Sect. 2 provides preliminary definitions on one-clock timed automata. The learning algorithm with a smart teacher is presented and analyzed in Sect. 3. We then present the situation with a normal teacher in Sect. 4. The experimental results are reported in Sect. 5. Finally, Sect. 6 concludes this paper.

Preliminaries
Let R≥0 and N be the set of non-negative reals and natural numbers, respectively, and B the Boolean set. We use ⊤ to stand for true and ⊥ for false. The projection of an n-tuple x onto its first two components is denoted by Π_{1,2} x, which extends to a sequence of tuples as Π_{1,2}(x_1, . . ., x_k) = (Π_{1,2} x_1, . . ., Π_{1,2} x_k). Timed automata [4], a kind of finite automata extended with a finite set of real-valued clocks, are widely used to model real-time systems. In this paper, we consider a subclass of timed automata with a single clock, termed one-clock timed automata (OTAs). Let c be the clock variable, and denote by Φ_c the set of clock constraints of the form ϕ ::= ⊤ | c ▷◁ m | ϕ ∧ ϕ, where m ∈ N and ▷◁ ∈ {=, <, >, ≤, ≥}.
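The projection Π_{1,2} and its elementwise extension can be sketched in a few lines of Python (a minimal illustration of ours; tuples stand for the paper's timed actions):

```python
def project12(word):
    """Projection Pi_{1,2}: keep the first two components of each tuple,
    applied elementwise over a sequence of tuples."""
    return [(x[0], x[1]) for x in word]

# Applied to a reset-logical-timed word, it yields the logical-timed word.
```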

Definition 1 (One-clock timed automata). A one-clock timed automaton is a 6-tuple A = (Σ, Q, q_0, F, c, ∆), where Σ is a finite alphabet, Q a finite set of locations, q_0 ∈ Q the initial location, F ⊆ Q the set of accepting locations, c the unique clock, and ∆ ⊆ Q × Σ × Φ_c × B × Q a finite set of transitions.
A transition δ = (q, σ, ϕ, b, q′) allows a jump from the source location q to the target location q′ by performing the action σ ∈ Σ if the constraint ϕ ∈ Φ_c is satisfied. Meanwhile, clock c is reset to zero if b = ⊤, and remains unchanged otherwise.
A clock valuation is a function ν : c → R≥0 that assigns a non-negative real number to the clock. For t ∈ R≥0, let ν + t be the clock valuation with (ν + t)(c) = ν(c) + t. According to the definitions of clock valuation and clock constraint, a transition guard can be represented as an interval whose endpoints are in N ∪ {∞}. For example, ϕ_1 : c < 5 ∧ c ≥ 3 is represented as [3, 5), ϕ_2 : c = 6 as [6, 6], and ϕ_3 : ⊤ as [0, ∞). We will use the inequality and interval representations interchangeably in this paper.
A state s of A is a pair (q, ν), where q ∈ Q and ν is a clock valuation. A run ρ of A is of the form ρ = (q_0, ν_0) --(t_1, σ_1)--> (q_1, ν_1) --(t_2, σ_2)--> · · · --(t_n, σ_n)--> (q_n, ν_n), where ν_0(c) = 0, t_i ∈ R≥0 stands for the time delay spent at q_{i−1} before taking the transition δ_i = (q_{i−1}, σ_i, ϕ_i, b_i, q_i), which requires ν_{i−1} + t_i to satisfy ϕ_i, and ν_i = 0 if b_i = ⊤ while ν_i = ν_{i−1} + t_i otherwise. The trace of a run ρ is a timed word, denoted by trace(ρ): trace(ρ) = ϵ if ρ = (q_0, ν_0), and trace(ρ) = (σ_1, t_1)(σ_2, t_2) . . . (σ_n, t_n) otherwise; such a timed word is also called a delay-timed word. The corresponding reset-delay-timed word is defined as trace_r(ρ) = (σ_1, t_1, b_1) . . . (σ_n, t_n, b_n), where b_i is the reset indicator for δ_i, for 1 ≤ i ≤ n. If ρ is an accepting run of A, i.e., q_n ∈ F, trace(ρ) is called an accepting timed word. The recognized timed language of A is the set of accepting delay-timed words, i.e., L(A) = {trace(ρ) | ρ is an accepting run of A}. The recognized reset-timed language L_r(A) is defined as {trace_r(ρ) | ρ is an accepting run of A}.
The delay-timed word ω = (σ_1, t_1)(σ_2, t_2) . . . (σ_n, t_n) is observed from the outside, i.e., from the view of a global clock. On the other hand, the behavior can also be observed from the inside, from the view of the local clock. This results in a logical-timed word of the form γ = (σ_1, µ_1)(σ_2, µ_2) . . . (σ_n, µ_n), where µ_i = ν_{i−1}(c) + t_i is the value of the local clock at the moment the i-th action is performed. We will denote the mapping from delay-timed words to logical-timed words above by Γ.
Similarly, we introduce the reset-logical-timed word γ_r = (σ_1, µ_1, b_1) . . . (σ_n, µ_n, b_n) in terms of the local clock. Without any substantial change, we can extend the mapping Γ to map reset-delay-timed words to reset-logical-timed words. The recognized logical-timed language of A is given as L(A) = {Γ(trace(ρ)) | ρ is an accepting run of A}, and the recognized reset-logical-timed language of A as L_r(A) = {Γ(trace_r(ρ)) | ρ is an accepting run of A}; we overload the notation L and L_r here, as the delay-timed and logical-timed languages of a deterministic automaton determine each other via Γ.
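For reset-delay-timed words, the mapping Γ is directly computable, since the reset bits indicate when the local clock returns to zero. A minimal sketch (the tuple encoding of the triples is our own assumption):

```python
def delay_to_logical(reset_delay_word):
    """Map a reset-delay-timed word to its reset-logical-timed counterpart.

    A sketch of the mapping Gamma: each triple is (action, t_i, reset_i);
    the logical clock value mu_i equals nu_{i-1}(c) + t_i, where the local
    clock starts at 0 and returns to 0 after every resetting transition.
    """
    logical, clock = [], 0.0
    for action, delay, reset in reset_delay_word:
        mu = clock + delay            # local clock value at the action
        logical.append((action, mu, reset))
        clock = 0.0 if reset else mu  # apply this transition's reset
    return logical
```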
An OTA is a deterministic one-clock timed automaton (DOTA) if there is at most one run for a given delay-timed word. In other words, for any location q ∈ Q and action σ ∈ Σ, the guards of the transitions outgoing from q labelled with σ are disjoint subsets of R≥0. We say a DOTA is complete if for any of its locations q ∈ Q and actions σ ∈ Σ, the corresponding guards form a partition of R≥0. This means any given delay-timed word has exactly one run. Any DOTA A can be transformed into a complete DOTA (referred to as a COTA) accepting the same timed language as follows: (1) augment Q with a "sink" location q_s which is not an accepting location; (2) for every q ∈ Q and σ ∈ Σ, if there is no outgoing transition from q labelled with σ, introduce a (resetting) transition from q to q_s with label σ and guard [0, ∞); (3) otherwise, let S be the subset of R≥0 not covered by the guards of the transitions from q with label σ, write S as a union of intervals I_1, . . ., I_k in a minimal way, and introduce a (resetting) transition from q to q_s with label σ and guard I_j for each 1 ≤ j ≤ k.
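Step (3) of the completion amounts to computing the uncovered gaps of R≥0, which become the guards of the fresh transitions to q_s. A sketch under our interval encoding (lo, lo_closed, hi, hi_closed), assuming the given guards are disjoint:

```python
from math import inf

def sink_guards(guards):
    """Gaps of [0, inf) not covered by the given disjoint guard intervals.

    A guard is (lo, lo_closed, hi, hi_closed); the result lists the guards
    of the fresh transitions to the sink location q_s, as in step (3) of
    the completion construction (a sketch; guards are assumed disjoint).
    """
    gaps = []
    cur, cur_closed = 0, True          # left endpoint of a potential gap
    for lo, lo_closed, hi, hi_closed in sorted(guards):
        if (cur, cur_closed) != (lo, lo_closed):
            # close the gap just before the next guard begins
            gaps.append((cur, cur_closed, lo, not lo_closed))
        cur, cur_closed = hi, not hi_closed
        if cur == inf:
            return gaps
    gaps.append((cur, cur_closed, inf, False))
    return gaps
```

For instance, if the guards on (q, a) cover exactly the interval (1, 3), the fresh guards are [0, 1] and [3, ∞), matching Example 1.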
From now on, we therefore assume that we are working with COTAs.
Example 1. Fig. 1 depicts the transformation of a DOTA A (left part) into a COTA A (right part). First, a non-accepting "sink" location q_s is introduced. Second, we introduce three fresh transitions (marked in blue) from q_1 to q_s, as well as transitions from q_s to itself. Finally, for location q_0 and label a, the existing guards cover only the interval (1, 3). Hence, we introduce the transitions (q_0, a, [0, 1], ⊤, q_s) and (q_0, a, [3, ∞), ⊤, q_s). Two fresh transitions from q_1 to q_s are introduced similarly.

Learning from a Smart Teacher
In this section, we consider the case of learning a COTA A with a smart teacher.
Our learning algorithm relies on the following reduction of the equivalence of timed languages to that of reset-logical-timed languages.

Theorem 1. Given two DOTAs A_1 and A_2, if L_r(A_1) = L_r(A_2), then L(A_1) = L(A_2).
By Theorem 1, to construct a COTA that recognizes a target timed language L = L(A), it suffices to learn a hypothesis H which recognizes the same reset-logical-timed language as A. For equivalence queries, instead of checking directly whether L_r(H) = L_r(A), the contraposition of Theorem 1 guarantees that we can perform equivalence queries over their timed counterparts: if L(H) = L(A), then H recognizes the target language already; otherwise, a counterexample witnessing L(H) ≠ L(A) also yields evidence for L_r(H) ≠ L_r(A).
We now describe the behavior of the teacher, who keeps an automaton A to be learnt while providing knowledge about the automaton by answering membership and equivalence queries through an oracle she maintains. For a membership query, the teacher receives a logical-timed word γ and returns whether γ is in L(A). In addition, she is smart enough to return the reset-logical-timed word γ_r that corresponds to γ (the exact correspondence is described in Sect. 3.1). For an equivalence query, the teacher is given a hypothesis H and decides whether L(H) = L(A). If not, she is smart enough to return a reset-delay-timed word ω_r as a counterexample. The usual case where a teacher can deal with only standard delay-timed words will be discussed in Sect. 4.

Remark 1. The assumption that the teacher can respond with timed words coupled with reset information is reasonable, in the sense that the learner can always infer and detect the resets of the logical clock by referring to a global clock on the wall, as long as he can observe the running states of A, i.e., observe the clock valuation of the system whenever an event happens therein. This conforms with the idea of combining automata learning with white-box techniques, as exploited in [23], provided that, as in many application scenarios, source code is available for analysis.
In what follows, we elaborate on the learning procedure, including membership queries, hypothesis construction, equivalence queries and counterexample processing.

Membership query
In our setting, the oracle maintained by the smart teacher can be regarded as a COTA A that recognizes the target timed language L, and thereby its logical-timed language L(A) and reset-logical-timed counterpart L_r(A). In order to collect enough information for constructing a hypothesis, the learner poses membership queries of the form "Is the logical-timed word γ in L(A)?". If there does not exist a run ρ such that Γ(trace(ρ)) = γ, meaning that there is some k such that the run is blocked after the k-th action (i.e., γ is invalid), then the teacher gives a negative answer, associated with a reset-logical-timed word γ_r in which all b_i with i > k are set to ⊤. If there exists a run ρ (which is unique due to the determinism of A) that admits γ (i.e., γ is valid), the teacher answers "Yes" if ρ is accepting, and "No" otherwise, in both cases providing the corresponding reset-logical-timed word γ_r with Π_{1,2} γ_r = γ.
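The teacher's answer to a membership query can be sketched as simulating the unique run on the logical-timed word; a word is invalid exactly when the logical clock would have to decrease without the run providing a reset. The representation of the COTA below (dictionaries, closed-interval guards) is our own simplification:

```python
from math import inf

def membership_query(cota, gamma):
    """A sketch of the smart teacher's membership query on a COTA.

    cota = (initial, accepting, delta): delta maps (location, action) to a
    list of (guard, reset, target), with guards simplified to closed pairs
    (lo, hi), hi possibly inf, assumed to partition [0, inf).  Returns
    (answer, gamma_r): if gamma is invalid (the logical clock would have
    to run backwards), all resets from the blocking action onwards are
    reported as True, following the convention in the text.
    """
    q, accepting, delta = cota
    clock, gamma_r, blocked = 0.0, [], False
    for sigma, mu in gamma:
        if blocked or mu < clock:       # invalid: local time cannot decrease
            blocked = True
            gamma_r.append((sigma, mu, True))
            continue
        for (lo, hi), reset, target in delta[(q, sigma)]:
            if lo <= mu <= hi:          # unique match, by determinism
                gamma_r.append((sigma, mu, reset))
                q, clock = target, (0.0 if reset else mu)
                break
    return (not blocked) and q in accepting, gamma_r
```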
For the sake of simplicity, we define a function π that maps a logical-timed word to its unique reset-logical-timed counterpart in membership queries. Information gathered from the membership queries is stored in a timed observation table, defined as follows.

Definition 2 (Timed observation table). A timed observation table for a COTA A is a 7-tuple T = (Σ, Σ_l, Σ_{l,r}, S, R, E, f), where Σ is the finite alphabet, Σ_l = Σ × R≥0 the set of logical-timed actions, and Σ_{l,r} = Σ × R≥0 × B the set of reset-logical-timed actions; S, R ⊂ (Σ_{l,r})* and E ⊂ (Σ_l)* are finite sets of words, where S is called the set of prefixes, R the boundary, and E the set of suffixes; and f maps each word γ_r · e, for γ_r ∈ S ∪ R and e ∈ E, to + or −. Specifically:
- S and R are disjoint, i.e., S ∪ R = S ⊎ R;
- the empty word is by default both a prefix and a suffix, i.e., ϵ ∈ S and ϵ ∈ E.
Given a table T, we define row : S ∪ R → (E → {+, −}) as a function mapping each γ_r ∈ S ∪ R to a vector indexed by e ∈ E, each of whose components is defined as f(γ_r · e), denoting a potential location.
Before constructing a hypothesis H based on the timed observation table T, the learner has to ensure that T satisfies the following conditions:
- reduced: the rows of the elements of S are pairwise distinct;
- closed: for every r ∈ R, there exists s ∈ S with row(s) = row(r);
- consistent: any two words in S ∪ R with equal rows still have equal rows after being extended by a common reset-logical-timed action;
- evidence-closed: for every s ∈ S and e ∈ E, all prefixes of π(Π_{1,2} s · e) are in S ∪ R;
- prefix-closed: S ∪ R is prefix-closed.
A timed observation table T is prepared if it satisfies the above five conditions. To get the table prepared, the learner can perform the following operations.

Making T closed. If T is not closed, there exists r ∈ R such that row(r) ≠ row(s) for all s ∈ S. The learner thus moves such an r from R to S. Moreover, each reset-logical-timed word π(Π_{1,2} r · σ̃), where σ̃ = (σ, 0) for each σ ∈ Σ, is added to R. This operation is important since it guarantees that at every location all actions in Σ are enabled, while specifying a clock valuation for these actions, even though some invalid logical-timed words might be involved. In particular, taking the bottom value 0 as the clock valuation satisfies the precondition of the partition function that will be described in Sect. 3.2.
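The closedness check mirrors the classical L* one, with rows indexed by the suffixes in E. A sketch of ours (words as tuples; f supplied by membership queries):

```python
def row(f, E, w):
    """row(w): the +/- vector of the word w over the suffix list E."""
    return tuple(f(w + e) for e in E)

def find_unclosed(S, R, E, f):
    """Return some r in R whose row differs from every s in S, else None.

    A sketch of the closedness check: if such an r exists, the learner
    moves it from R to S and adds the extensions of r by (sigma, 0), for
    every action sigma, to R.
    """
    s_rows = {row(f, E, s) for s in S}
    for r in R:
        if row(f, E, r) not in s_rows:
            return r
    return None
```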
Making T consistent. If T is not consistent, one inconsistency is resolved by adding σ · e to E, where σ and e are determined as follows. T being inconsistent implies that there exist two reset-logical-timed words γ_r, γ′_r ∈ S ∪ R with row(γ_r) = row(γ′_r), together with a reset-logical-timed action σ̃ and a suffix e ∈ E, such that f(γ_r · σ̃ · e) ≠ f(γ′_r · σ̃ · e); here σ = Π_{1,2} σ̃ is the logical-timed action underlying σ̃. Thereafter, the learner fills the table by making membership queries. Note that this operation keeps the set E of suffixes a set of logical-timed words.
Making T evidence-closed.If T is not evidence-closed, then the learner needs to add all prefixes of π(Π {1,2} s • e) to R for every s ∈ S and e ∈ E, except those already in S ∪ R. Similarly, the learner needs to fill the table through membership queries.
The condition that a timed observation table T is reduced and prefix-closed is inherently preserved by the aforementioned operations, together with the counterexample processing described later in Sect. 3.3. Furthermore, a table may need several rounds of these operations before being prepared (cf. Algorithm 1), since certain conditions may be violated by different, interleaved operations.

Hypothesis construction
As soon as the timed observation table T is prepared, a hypothesis can be constructed in two steps: the learner first builds a DFA M based on the information in T, and then transforms M into a hypothesis H, which will later be shown to be a COTA.
Given a prepared timed observation table T, a DFA M = (Q_M, q_0^M, F_M, ∆_M) over reset-logical-timed actions can be built as follows:
- the finite set of locations Q_M = {q_row(s) | s ∈ S};
- the initial location q_0^M = q_row(ϵ) for ϵ ∈ S;
- the set of accepting locations F_M = {q_row(s) | s ∈ S, f(s) = +};
- the set of transitions ∆_M = {(q_row(γ_r), (σ, µ, b), q_row(γ_r · (σ, µ, b))) | γ_r, γ_r · (σ, µ, b) ∈ S ∪ R}.
The constructed DFA M is compatible with the timed observation table T in the sense captured by the following lemma.
Lemma 1. For a prepared timed observation table T and the DFA M constructed from it, reading any γ_r ∈ S ∪ R from the initial location ends in the location q_row(γ_r); in particular, M accepts γ_r iff f(γ_r) = +.

The learner then transforms the DFA M into a hypothesis H = (Σ, Q_M, q_0^M, F_M, c, ∆), with c being the clock and Σ the given alphabet as in T. The set of transitions ∆ of H is constructed as follows: for any q ∈ Q_M and σ ∈ Σ, let Ψ_{q,σ} = {µ | (q, (σ, µ, b), q′) ∈ ∆_M}; applying the partition function P_c(·) (defined below) to Ψ_{q,σ} returns k intervals I_1, . . ., I_k, where k = |Ψ_{q,σ}|; consequently, for every (q, (σ, µ_i, b_i), q′) ∈ ∆_M, a fresh transition δ_i = (q, σ, I_i, b_i, q′) is added to ∆.

Definition 3 (Partition function). Given a list of clock valuations ℓ = µ_0, µ_1, . . ., µ_n with µ_0 = 0 and µ_i < µ_{i+1} for 0 ≤ i < n, a partition function P_c(·) maps ℓ to a set of intervals {I_0, I_1, . . ., I_n} forming a partition of R≥0: the left endpoint of I_i is [µ_i if µ_i ∈ N and (⌊µ_i⌋ otherwise, the right endpoint of I_i meets the left endpoint of I_{i+1}, and I_n extends to ∞.

Remark 2. Definition 3 is adapted from that in [17] by imposing additional assumptions on the list of clock valuations in order to guarantee µ_i ∈ I_i, for any 0 ≤ i ≤ n, due to the underlying continuous-time semantics. By T being prepared and the normalization function described in Sect. 3.3, the set of clock valuations Ψ_{q,σ} can be arranged into a list ℓ_{q,σ} = µ_0, µ_1, . . ., µ_n satisfying the assumptions given in Definition 3, for any q ∈ Q_M and σ ∈ Σ.
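Under the reading of Definition 3 sketched above (an integer clock value opens a left-closed interval, a non-integer one a left-open interval starting at its integer part), the partition function can be implemented as follows; the encoding (lo, lo_closed, hi, hi_closed) is our own assumption:

```python
from math import floor, inf

def partition(mus):
    """Partition function P_c: map clock values mu_0 = 0 < mu_1 < ... < mu_n
    to intervals I_0, ..., I_n partitioning [0, inf) with mu_i in I_i.

    A sketch: I_i starts at [mu_i if mu_i is an integer and at (floor(mu_i)
    otherwise, and ends where I_{i+1} begins (open end before a closed
    start, closed end before an open start); I_n extends to infinity.
    """
    starts = [(m, True) if m == int(m) else (floor(m), False) for m in mus]
    intervals = []
    for i, (lo, lo_closed) in enumerate(starts):
        if i + 1 < len(starts):
            nxt, nxt_closed = starts[i + 1]
            intervals.append((lo, lo_closed, nxt, not nxt_closed))
        else:
            intervals.append((lo, lo_closed, inf, False))
    return intervals
```

For example, the clock values 0, 1.1, 3 yield the partition [0, 1], (1, 3), [3, ∞), with each µ_i in its own interval.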
Fig. 2: The prepared timed observation table T5, the corresponding DFA M5 and hypothesis H5.
Example 2. Suppose A in Fig. 1 recognizes the target timed language. Then the prepared table T5, the corresponding DFA M5 and hypothesis H5 are depicted in Fig. 2, where the subscript 5 indicates the fifth iteration of T (details concerning the constructions and the entire learning process are enclosed in Appendix B of [7]).
Theorem 2. The hypothesis H is a COTA.
Theorem 3. The hypothesis H is consistent with the timed observation table T: for any γ_r ∈ S ∪ R and e ∈ E, let µ_1, . . ., µ_k be the logical clock values along γ_r · e and ⟨µ_i⟩ the region of µ_i. Then for every µ′_i ∈ ⟨µ_i⟩, the hypothesis H accepts the reset-logical-timed word obtained by substituting each µ_i with µ′_i if f(γ_r · e) = +, and cannot accept it if f(γ_r · e) = −.

Equivalence query and counterexample processing
Suppose that the teacher knows a COTA A which recognizes the target timed language L. Answering an equivalence query then amounts to determining whether L(H) = L(A), which can be divided into two timed language inclusion problems, i.e., whether L(H) ⊆ L(A) and L(A) ⊆ L(H). Most decision procedures for language inclusion proceed by complementation and emptiness checking of the intersection [22]: L(A) ⊆ L(B) iff L(A) ∩ L(B)^c = ∅.

The fact that deterministic timed automata can be complemented [6] enables solving the inclusion problem by checking the emptiness of the resulting product automata H × A^c and H^c × A. The complementation technique, however, does not apply to nondeterministic timed automata, even those with a single clock [4]; we plan to incorporate nondeterminism into our learning framework in future work. We therefore opt for the alternative method presented by Ouaknine and Worrell in [30], showing that the language inclusion problem of timed automata with one clock (regardless of their determinism) is decidable by reduction to a reachability problem on an infinite graph. That is, there exists a delay-timed word ω that leads to a bad configuration if L(H) ⊈ L(A); in detail, the corresponding run ρ of ω ends in an accepting location of H while the counterpart run ρ′ of ω in A is not accepting. Consequently, the teacher can provide the reset-delay-timed word ω_r resulting from ω as a negative counterexample ctx− = (ω_r, −). Similarly, a positive counterexample ctx+ = (ω_r, +) can be generated if L(A) ⊈ L(H). An algorithm elaborating the equivalence query is provided in Appendix C of the full version [7].
When receiving a counterexample ctx = (ω_r, +/−), the learner first converts it to a reset-logical-timed word γ_r = Γ(ω_r). By definition, γ_r and ω_r share the same sequence of transitions in A. Furthermore, by the contraposition of Theorem 1, γ_r is an evidence for L_r(H) ≠ L_r(A). Additionally, by the definition of clock constraints Φ_c, at any location, if an action σ is enabled, i.e., its guard is satisfied, w.r.t. a clock value µ ∈ R≥0 \ N, then σ is also enabled w.r.t. any clock value ⌊µ⌋ + θ at that location, where θ ∈ (0, 1). Specifically, only one transition is available for σ at the location over the interval (⌊µ⌋, ⌊µ⌋ + 1), because the target automaton is deterministic. Therefore, in order to avoid unnecessarily distinguishing timed words and violating the assumptions on the list ℓ for the partition function, the learner needs to apply a normalization function g to normalize γ_r.

Definition 4 (Normalization). A normalization function g maps a reset-logical-timed word γ_r = (σ_1, µ_1, b_1) . . . (σ_n, µ_n, b_n) to another reset-logical-timed word by resetting every logical clock value to its integer part plus a constant fractional part, i.e., g(γ_r) = (σ_1, µ′_1, b_1) . . . (σ_n, µ′_n, b_n), where µ′_i = µ_i if µ_i ∈ N, and µ′_i = ⌊µ_i⌋ + θ for some fixed constant θ ∈ (0, 1) otherwise. We will instantiate θ = 0.1 in what follows; clearly, our approach works for any other θ valued in (0, 1). This normalization process guarantees the assumptions needed for Definition 3.

Example 3. Consider the prepared table T5 in Fig. 3 (as in Fig. 2). When the learner asks an equivalence query with hypothesis H5, the teacher answers that L(H5) ≠ L(A), where A in Fig. 1 is the target automaton, and provides a counterexample (ω_r, −) with ω_r = (a, 0, ⊤)(a, 1.3, ⊤), which can be transformed to the reset-logical-timed word γ_r = (a, 0, ⊤)(a, 1.3, ⊤). If he adds the prefixes of γ_r to the table directly, the learner will get a prepared table T6 and thus construct a DFA M6; unfortunately, the partition function defined in Definition 3 is then no longer applicable to (a, 1.3, ⊤) and (a, 1.1, ⊥). On the other hand, if he adds the prefixes of the normalized reset-logical-timed word, i.e., γ′_r = (a, 0, ⊤)(a, 1.1, ⊤), to T5, the learner will get an inconsistent table whose consistency can be recovered by the operation of "making T consistent", as expected.
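The normalization function g is straightforward to implement; a sketch of ours with triples (action, mu, reset) and the default θ = 0.1 (int() truncates, which agrees with the floor for non-negative clock values):

```python
def normalize(gamma_r, theta=0.1):
    """Normalization g: keep integral logical clock values, and map each
    non-integral value mu to floor(mu) + theta, for a fixed theta in (0, 1).

    A sketch of Definition 4; triples are (action, mu, reset).
    """
    return [(a, mu if mu == int(mu) else int(mu) + theta, b)
            for a, mu, b in gamma_r]
```

On the word of Example 3, g maps (a, 0, ⊤)(a, 1.3, ⊤) to (a, 0, ⊤)(a, 1.1, ⊤).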
The following theorem guarantees that the normalized reset-logical-timed word γ′_r is also an evidence for L_r(H) ≠ L_r(A). Therefore, the learner can use it as a counterexample and add all the prefixes of γ′_r to R, except those already in S ∪ R.

Theorem 4. Given a valid reset-logical-timed word γ_r of A, its normalization γ′_r = g(γ_r) shares the same sequence of transitions in A.

Learning algorithm
We present in Algorithm 1 the learning procedure integrating all the previously stated ingredients, including preparing the table, membership and equivalence queries, hypothesis construction and counterexample processing. The learner first initializes the timed observation table T = (Σ, Σ_l, Σ_{l,r}, S, R, E, f) with S = {ϵ} and E = {ϵ}, while for every σ ∈ Σ, he builds a logical-timed word γ = (σ, 0) and then obtains its reset counterpart π(γ) = (σ, 0, b) by triggering a membership query to the teacher, which is then added to R. Thereafter, the learner can fill the table by additional membership queries. Before constructing a hypothesis, the learner performs several rounds of the operations described in Sect. 3.1 until T is prepared. Then, a hypothesis H is constructed, leveraging an intermediate DFA M, and submitted to the teacher for an equivalence query. If the answer is positive, H recognizes the target language. Otherwise, the learner receives a counterexample ctx and then conducts the counterexample processing to update T as described in Sect. 3.3. The whole procedure repeats until the teacher gives a positive answer to an equivalence query.
To facilitate the analysis of correctness, termination and complexity of Algorithm 1, we introduce the notion of symbolic state, which combines each location with a clock region: a symbolic state of a COTA A = (Σ, Q, q_0, F, c, ∆) is a pair (q, ⟨µ⟩), where q ∈ Q and ⟨µ⟩ is a region containing µ. If κ is the maximal constant appearing in the clock constraints of A, then there exist 2κ + 2 such regions, namely [n, n] with 0 ≤ n ≤ κ, (n, n + 1) with 0 ≤ n < κ, and (κ, ∞), for each location, so there are in total |Q| × (2κ + 2) symbolic states. The correctness and termination of Algorithm 1 is then stated in the following theorem, based on the fact that there is an injection from S (or equivalently, the locations of H) to the symbolic states of A.
Theorem 5. Algorithm 1 terminates and returns a COTA H which recognizes the target timed language L.

Complexity
Given a target timed language L which is recognized by a COTA A, let n = |Q| be the number of locations of A, m = |Σ| the size of the alphabet, and κ the maximal constant appearing in the clock constraints of A. In what follows, we derive the complexity of Algorithm 1 in terms of the number of queries.
By the proof of Theorem 5, H has at most n(2κ + 2) locations (the size of S), distinguished by E; thus |E| is at most n(2κ + 2) in order to distinguish these locations. Therefore, the number of transitions of H is bounded by mn^2(2κ + 2)^3. Furthermore, as every counterexample adds at least one fresh transition to the hypothesis H, where we consider each interval of a partition as corresponding to a transition, the number of counterexamples, and hence of equivalence queries, is at most mn^2(2κ + 2)^3. Now we consider the number of membership queries, that is, we compute (|S| + |R|) × |E|. Let h be the maximal length of counterexamples returned by the teacher, which is polynomial in the size of A according to Theorem 5 in [39], bounded by n^2. There are three cases of extending R by adding fresh rows, namely during the processing of counterexamples, making T closed, and making T evidence-closed. The first case adds at most hmn^2(2κ + 2)^3 rows to R, while the latter two add at most n(2κ + 2) × m and n^2(2κ + 2)^2 rows, respectively, yielding that the size of R is bounded by O(hmn^2 κ^3), where O(·) denotes the big-O notation. As a consequence, the number of membership queries is bounded by O(mn^5 κ^4). So, the total complexity is O(mn^5 κ^4).
It is worth noting that the above analysis addresses the worst case, where all partitions need to be fully refined. In practice, however, the automaton can often be learned without refining most partitions, and therefore the number of equivalence and membership queries, as well as the number of locations in the learned automaton, is much smaller than the corresponding worst-case bounds. This will be demonstrated by the examples in Sect. 5.

Accelerating Trick
In the timed observation table, the function f maps invalid reset-logical-timed words, as well as certain valid ones, to "−" when the teacher maintains a COTA A as the oracle. The learner thus needs multiple rounds of queries to distinguish the "sink" location from other non-accepting locations. If, instead, a DOTA A serves as the oracle and the function f is extended to map invalid reset-logical-timed words to a distinct symbol, say "×", then the learner needs far fewer queries. We will show in the experiments that this trick significantly accelerates the learning process.

Learning from a Normal Teacher
In this section, we consider the problem of learning timed automata with a normal teacher. As before, we assume the timed language to be learned comes from a complete DOTA. For the normal teacher, inputs to membership queries are delay-timed words, and the teacher returns whether the word is in the language (without giving any additional information). Inputs to equivalence queries are candidate DOTAs, and the teacher either answers that they are equivalent or provides a delay-timed word as a counterexample.
The algorithm here is based on the procedure in the previous section. We still maintain observation tables where the elements of S ∪ R are reset-logical-timed words and the elements of E are logical-timed words. In order to obtain delay-timed words for the membership queries, we need to guess the clock reset information for transitions in the table. More precisely, in order to convert a logical-timed word to a delay-timed word, it is necessary to know the clock reset information for all but the last transition. Hence, it is necessary to guess the reset information for each word in S ∪ R (since S ∪ R is prefix-closed, this is equivalent to guessing the reset information for the last transition of each word). Also, for each entry in (S ∪ R) × E, it is necessary to guess all but the last transition in E. The algorithm can be thought of as exploring a search tree, where branching is caused by guesses, and successor nodes are constructed by the usual operations of preparing a table and dealing with a counterexample.
The detailed process is given in Algorithm 2. The learner maintains a set of table instances, named ToExplore, which contains all table instances that need to be explored.
The initial tables in ToExplore are as follows. Each table has S = E = {ϵ}. For each σ ∈ Σ, there is one row in R corresponding to the logical-timed word ω = (σ, 0). It is necessary to guess a reset b for each such ω, thereby transforming it into a reset-logical-timed word γ_r = (σ, 0, b); there are 2^|Σ| possible combinations of guesses. These tables are filled by making membership queries (in this case, converting each logical-timed word into a delay-timed word using the guessed resets). Likewise, when making a table closed or consistent, the resets of freshly added rows must be guessed, so several successor table instances will be generated and inserted into ToExplore. The operation for making tables evidence-closed is analogous. Once the current table is prepared, the learner builds a hypothesis H and makes an equivalence query to the teacher. If the answer is positive, then H is a COTA which recognizes the target timed language L; otherwise, the teacher gives a delay-timed word ω as a counterexample. The learner first finds the longest reset-logical-timed word in R which, when converted to a delay-timed word, agrees with a prefix of ω. The remainder of ω, however, needs to be converted to a reset-logical-timed word by guessing reset information. The corresponding prefixes are then added to R. Hence, at most 2^|ω| unfilled table instances are generated. For each unfilled table instance, at most 2^((Σ_{e ∈ E\{ϵ}} (|e|−1)) × |ω|) filled tables are produced and inserted into ToExplore.
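The 2^|Σ| initial guesses can be enumerated directly; a sketch of ours encoding rows as triples (action, 0, guessed_reset):

```python
from itertools import product

def initial_guesses(alphabet):
    """Enumerate the 2^|Sigma| initial table instances of the
    normal-teacher algorithm: one guessed reset bit per row (sigma, 0).

    Yields, for each combination of guesses, the list of rows
    (sigma, 0, b) that would populate R in that table instance.
    """
    sigmas = sorted(alphabet)
    for bits in product([False, True], repeat=len(sigmas)):
        yield [(s, 0, b) for s, b in zip(sigmas, bits)]
```

The same enumeration pattern applies wherever the algorithm branches on guessed resets, e.g., when processing a counterexample of length |ω|.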
Throughout the learning process, the learner adds a finite number of table instances to ToExplore at every iteration; hence, the search tree is finitely branching. Moreover, if all guesses are correct, the resulting table instance will be identical to the observation table in the learning process with a smart teacher (apart from the guessing, the basic table operations are the same as those in Sect. 3.1). This means that, with an appropriate search order, for example taking at every iteration the table instance in ToExplore that requires the least number of guesses, the algorithm terminates and returns the same table as in the learning process with a smart teacher, yielding a COTA that recognizes the target language L. In conformity with Theorem 1, the algorithm may terminate even if the corresponding reset-logical-timed languages are not equivalent, while still yielding a correct COTA recognizing the same delay-timed language.

Theorem 6. Algorithm 2 terminates and returns a COTA H which recognizes the target timed language L.

Implementation and Experimental Results
To investigate the efficiency and scalability of the proposed methods, we implemented a prototype in Python for learning deterministic one-clock timed automata. The examples include a practical case concerning the functional specification of the TCP protocol [25] and a set of randomly generated DOTAs to be learnt. All of the evaluations were carried out on a 3.6 GHz Intel Core i7 processor with 8 GB RAM running 64-bit Ubuntu 16.04.
Functional specification of the TCP protocol. In [25], a state diagram on page 23 specifies the state changes during a TCP connection, triggered by causing events and leading to resulting actions. As observed by Ouaknine and Worrell in [30], such a functional specification of the protocol can be represented as a one-clock timed automaton. In our setting, the corresponding DOTA A to be learnt is configured to have |Q| = 11 states (with the two CLOSED states collapsed), |Σ| = 10 after abstracting the causing events and the resulting actions, |F| = 2, and |∆| = 19 with appropriately specified timing constraints, including guards and resets. Using the algorithm with the smart teacher, a correct DOTA H is learned in 155 seconds after 2600 membership queries and 28 equivalence queries. Specifically, H has 15 locations (excluding a sink location) connected by 28 transitions. The 4 additional locations stem from the splitting of guards along transitions; they can, however, be trivially merged back with other locations. The figures depicting A and H can be found in Appendix D of [7].
Random examples for a smart teacher. We randomly generated 80 DOTAs in eight groups, with each group having a different number of locations, size of alphabet, and maximum constant appearing in the clock constraints. As shown in Table 1, the proposed learning method succeeds in all cases in identifying a DOTA that recognizes the same timed language. In particular, the numbers of membership and equivalence queries appear to grow polynomially with the size of the problem, and are much smaller than the worst-case bounds estimated in Sect. 3.5. Moreover, the learned DOTAs do not exhibit a prominent increase in the number of locations (compare n_mean with the first component of the Case IDs). The average wall-clock time, including the time taken by both the learner and the teacher, is recorded in the last column t_mean; of this time, often over 90% is spent by the teacher on checking equivalences for small T's, while around 50% is spent by the learner on checking the preparedness condition for large T's.
It is worth noting that all of the results reported above were obtained with an implementation equipped with the accelerating trick discussed in Sect. 3.6. When dropping this trick, the average number of membership queries blows up by a factor of 0.83 (min) to 15.02 (max), with 2.16 on average over all 8 groups, and the average number of equivalence queries by a factor of 0.84 (min) to 1.71 (max), with 1.04 on average, leading to dramatic increases also in the computation time (including that for operating the tables). The alternative implementation and experimental results without the accelerating trick can also be found on the tool page (under the dev branch).
Random examples for a normal teacher. Due to its high, exponential complexity, the algorithm with a normal teacher failed (out of memory) to identify DOTAs for almost all of the above examples, except for 6 out of the 10 cases in group 4_4_20.
We therefore randomly generated 40 extra DOTAs of smaller size, classified into 4 groups. With the accelerating trick, the learner need not guess the resets in elements of E for an entry in S ∪ R if the querying result of the entry is the sink location. We also omitted the check of the evidence-closed condition, since it may add redundant rows to R, leading to more guesses and thereby a larger search space; this omission does not affect the correctness of the learnt DOTAs. Moreover, as different table instances may generate repeated queries, we cached the results of membership queries and counterexamples, such that the numbers of membership and equivalence queries to the teacher can be significantly reduced. Table 2 shows the performance of the algorithm in this setting. Results without caching are available on the tool page (under the normal branch).
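The caching of membership-query results described above can be sketched as a memoizing wrapper around the teacher's oracle. The class and the toy oracle below are illustrative assumptions, not the prototype's actual interface:

```python
class CachingTeacher:
    """Memoize membership queries so repeated queries from different table
    instances do not reach the teacher again."""

    def __init__(self, membership_oracle):
        self._oracle = membership_oracle
        self._cache = {}
        self.queries_to_teacher = 0  # counts only cache misses

    def membership(self, word):
        if word not in self._cache:
            self.queries_to_teacher += 1
            self._cache[word] = self._oracle(word)
        return self._cache[word]

# Toy oracle standing in for the real teacher; words are hashable tuples.
teacher = CachingTeacher(lambda w: len(w) % 2 == 0)
for w in [("a", 1.0), ("a", 1.0), ("a", 1.0, "b", 0.5)]:
    teacher.membership(w)
# only the 2 distinct words actually reach the teacher
```

Since distinct table instances in the normal-teacher search tree share many prefixes, this kind of cache is what keeps the reported query counts far below the number of internally generated queries.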

Conclusion
We have presented a polynomial active learning method for deterministic one-clock timed automata from a smart teacher who can provide information about clock resets in membership and equivalence queries. Our technique is based on converting the problem to that of learning reset-logical-timed languages. We then extended the method to learning DOTAs from a normal teacher who receives delay-timed words in membership queries, while the learner guesses the reset information in the observation table. We evaluated both algorithms on randomly generated examples and, for the former case, on the functional specification of the TCP protocol.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Fig. 1: A DOTA A on the left and the corresponding COTA A on the right. The initial location is indicated by 'start' and an accepting location is doubly circled.

Fig. 3: An illustration of the necessity of normalization via the normalization function.
Suppose (Σ, Σ_r, S, R, E, f) is the final observation table for the correct candidate COTA. Then the number of guessed resets in S ∪ R is |S| + |R|, and the number of guessed resets for the entries in each row of the table is ∑_{e_i ∈ E\{ϵ}} (|e_i| − 1). Hence, the total number of guessed resets is (|S| + |R|) × (1 + ∑_{e_i ∈ E\{ϵ}} (|e_i| − 1)). Assuming an appropriate search order (for example, according to the number of guesses in each table), this yields O(2^{(|S|+|R|)×(1+∑_{e_i ∈ E\{ϵ}} (|e_i| − 1))}) as the number of table instances considered before termination.
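The counting argument above can be checked numerically. A small sketch, assuming |S|, |R|, and the lengths |e_i| of the suffixes in E \ {ϵ} are given as plain integers:

```python
def total_guessed_resets(size_S, size_R, suffix_lengths):
    """(|S| + |R|) * (1 + sum over E \\ {eps} of (|e_i| - 1))."""
    return (size_S + size_R) * (1 + sum(l - 1 for l in suffix_lengths))

def table_instance_bound(size_S, size_R, suffix_lengths):
    """2^g table instances, where g is the total number of guessed resets."""
    return 2 ** total_guessed_resets(size_S, size_R, suffix_lengths)

# e.g. |S| = 2, |R| = 3, and two non-empty suffixes of lengths 2 and 3:
g = total_guessed_resets(2, 3, [2, 3])      # (2 + 3) * (1 + 1 + 2) = 20
bound = table_instance_bound(2, 3, [2, 3])  # 2**20 = 1048576
```

Even for such a tiny table, the bound already exceeds a million instances, which illustrates why the normal-teacher algorithm runs out of memory on the larger random examples.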

Table 1: Experimental results on random examples for the smart teacher situation. Case ID: n_m_κ, consisting of the number of locations, the size of the alphabet, and the maximum constant appearing in the clock constraints, respectively, of the corresponding group of A's. |∆|_mean: the average number of transitions in the corresponding group. #Membership & #Equivalence: the numbers of conducted membership and equivalence queries, respectively. N_min: the minimum, N_mean: the mean, N_max: the maximum. n_mean: the average number of locations of the learned automata in the corresponding group. t_mean: the average wall-clock time in seconds, including that taken by the learner and the teacher.