Hybrid cosimulation: it’s about time
Abstract
Model-based design methodologies are commonly used in industry for the development of complex cyber-physical systems (CPSs). There are many different languages, tools, and formalisms for model-based design, each with its strengths and weaknesses. Instead of accepting some weaknesses of a particular tool, an alternative is to embrace heterogeneity, and to develop tool integration platforms and protocols to leverage the strengths from different environments. A fairly recent attempt in this direction is the functional mock-up interface (FMI) standard that includes support for cosimulation. Although this standard has reached acceptance in industry, it provides only limited support for simulating systems that mix continuous and discrete behavior, which are typical of CPS. This paper identifies the representation of time as a key problem, because the FMI representation does not support well the discrete events that typically occur at the cyber-physical boundary. We analyze alternatives for representing time in hybrid cosimulation and conclude that a superdense model of time using only integers solves many of these problems. We show how an execution engine can pick an adequate time resolution, and how disparities between time representations internal to cosimulated components and the resulting effects of time quantization can be managed. We propose a concrete extension to the FMI standard for supporting hybrid cosimulation that includes integer time, automatic choice of time resolution, and the use of absent signals. We explain how these extensions can be implemented modularly within the frameworks of existing simulation environments.
Keywords
Cosimulation · Functional mock-up interface · Time

1 Introduction
Model-based design of cyber-physical systems (CPS) requires modeling techniques that embrace both the cyber and the physical parts of a system [24]. There is a long history of modeling languages and tools that integrate techniques that were originally developed independently, and on different sides of the border that separates the cyber and the physical. Modelica [20, 39], for example, integrates object-oriented design (a cyber modeling technique) with differential-algebraic equations (DAEs, a physical modeling technique). Languages and tools for hybrid systems design [11] integrate finite state automata (cyber) with ordinary differential equations (ODEs, physical). Discrete-event (DE) modeling tools [12, 18, 29, 49] integrate a model of a time continuum (physical) with discrete, instantaneous events (cyber). Such simulation tools are capable, in principle, of simulating both cyber components (software and networks) and physical components (mechanical, electrical, fluid flows, etc.).
In spite of the power and utility of existing tools, we should not be sanguine about CPS modeling. All of the above integrations have pitfalls, limitations, and corner cases where a model that can be easily handled in one tool cannot be easily handled in another. Modelica-based tools, for example, have difficulty with some discrete phenomena, even purely physical ones, forcing model builders to sometimes model discrete behaviors as rapid continuous dynamics [41]. Conversely, tools that handle discrete events well, such as DE tools, may have difficulty with continuous dynamics [36], forcing model builders into brute-force methods such as sampled-data models with high sampling frequencies.
One possible solution is to embrace the heterogeneity of tools and to provide tool integration platforms and protocols that enable cosimulation using a multiplicity of tools [23]. There is a long history of tool integration platforms (sometimes called “simulation backplanes”) for DE modeling and a well-established standard called the high-level architecture (HLA) for tool interoperability [27]. A more recent development is the functional mock-up interface (FMI), a standard initiated by Daimler AG within the ITEA2 MODELISAR project [5, 40], now maintained by the Modelica Association. It has been designed to enable the exchange or cosimulation of model components, functional mock-up units (FMUs), designed with different modeling tools. The standard consists of a C application program interface (API) for simulation components and an XML schema for describing components. Largely unspecified is the algorithm that coordinates the execution of a collection of FMUs, the master algorithm (MA). The idea is that the standard should be flexible enough to accommodate the inevitable differences between execution engines in different tools. FMI provides two distinct mechanisms for interaction between an FMU and a host simulator: (i) model exchange (FMI-ME), where the host simulator is responsible for all numerical integration methods, and (ii) cosimulation (FMI-CS), where the FMU implements its own mechanisms for advancing the values of state variables. FMI for cosimulation is more focused on tool interoperability; the host simulator provides input values to the FMU, requests that the FMU advance its state variables and output values in time, and then queries for the updated output values.
The current standard for cosimulation (version 2.0 [38]), however, is unable to correctly simulate many mixed discrete and continuous behaviors, limiting its utility in current form for model-based design for CPS [8]. As a consequence, the community-driven standardization process is considering another mechanism called hybrid cosimulation that strives for the loose coupling of cosimulation, but with support for discrete and discontinuous signals and instantaneous events. The intent of this mechanism is to support hybrid systems [1, 11, 34, 42, 46], where continuous dynamics are combined with discrete mode changes and discrete events. Hybrid cosimulation promises better interoperability between models of the cyber and the physical sides of the CPS problem.
In this article, we focus on a particular issue with hybrid cosimulation that has proved central to the problem, namely the modeling of time. Time is a central concept in reasoning about the physical world, but is largely abstracted away when reasoning about the cyber world. As a result, the engineering methods that CPS builds on have misaligned abstractions between the physics domain, the mathematical domain used to model physics, the computational domain used to implement these mathematical abstractions for simulation, and the computational domain used on the cyber side of CPS. The most egregious misaligned abstractions concern time, where all four domains routinely use mutually incompatible models of time.
Although the approach presented in this paper is general and could potentially be applicable to many different hybrid cosimulation environments, we have chosen to illustrate the concept concretely, by applying it to FMI and showing that only modest extensions to the FMI standard are needed to follow our recommendations. Within this framework, we show how to perform hybrid cosimulation with heterogeneous time models and accommodate mixtures of components that may internally represent time differently. Specifically, our solution supports hybrid cosimulation of FMUs that use floating-point time together with integer-time FMUs, even if those integer-time FMUs internally use different resolutions.

We analyze and compare alternatives for representing time, including floating-point numbers, rational numbers, and integers. We discuss superdense time and the concept of time resolution. We also propose a model of time that supports a multiplicity of time resolutions, differing even within the same simulation, that supports discrete events with an exact notion of simultaneity, that is invulnerable to quantization errors, and that is efficiently converted to and from legacy floating-point representations of time to accommodate legacy simulators within a cosimulation environment. It also supports abstractions of time such as sequences of events where time does not elapse, enabling better integration of cyber models with physical ones (Sect. 2).

We present a concrete proposal for a new FMI standard for hybrid cosimulation, FMI-HC. The three main parts of the proposal are: (i) the use of integer time, (ii) the capability of FMUs to negotiate the resolution of time, and (iii) the use of absent signals for handling discrete events (Sect. 3).

We describe how a master algorithm can use the FMI-HC extensions and support cosimulation of components that operate at different time resolutions. The algorithm finds a suitable global time resolution for the simulation based on the FMUs’ preferences and is able to handle disparities between the time resolutions of cosimulated FMUs. Our modular implementation using wrappers demonstrates that it is easy to add support for hybrid cosimulation to existing master algorithms (Sect. 3).

We give a detailed analysis and a solution to the time conversion and quantization problem, an unavoidable consequence when different components operate at different time resolutions (Sect. 4).
1.1 A motivating example
Figure 2 shows the output of the Integrator assuming the constant values 1 and \(-0.001\) shown in Fig. 1. An essential feature of this output is that discontinuities occur at precise times, that they take zero time to transition, and that between the discontinuities, signals are continuous.
Our goal in this paper is to provide a model of time for the interactions between these components that is semantically well defined and exact. If the interactions between the components are well defined, then the components themselves can be made much more complex, with predictable results. The integrator FMU could be replaced with a sophisticated ordinary differential equation (ODE) solver with a much more complicated internal model, the adder could be replaced with some continuous-time simulation engine, the zero-crossing detector could be replaced with more elaborate discrete-event processing, and the microstep delay could be replaced with some software engineering model of the control strategies.
1.2 Related work
This paper follows the line of research on FMI initiated in [7], where the FMI standard was formalized, and two cosimulation algorithms were proposed and proven to be determinate. In that same paper, small extensions to the standard were proposed with the goal of enhancing the standard’s ability to handle mixed discrete and continuous behaviors. Some of these extensions are reminiscent of functions included in the actor interface in the modular formal semantics of the Ptolemy tool [48]. For instance, the “getMaxStepSize” function of [7] is similar to the D (“deadline”) function of [48].
Follow-up work includes [8], which proposes a collection of test cases together with acceptance criteria that can be used to determine whether a hybrid cosimulation technique is acceptable. Tripakis [47] investigates techniques to bridge the semantic gap between various formalisms (state machines, discrete-event models, dataflow, etc.) and FMI. Cremona et al. [14] propose a new master algorithm that uses step size refinement to enable state event detection with FMI. An implementation of the FMI extensions on top of Ptolemy II is described in [15]. This implementation has been used in [6] to connect via cosimulation the model checkers Uppaal [28] and SpaceEx [19]. Several authors have also described approaches to implement FMI master algorithms [2, 45], and ways to implement FMUs in the currently available FMI standard [17, 43], without considering the time aspects for hybrid cosimulation.
The topics addressed in this paper are relevant to modeling of hybrid and cyber-physical systems at large, but we choose to present our ideas on the basis of a concrete framework, namely the FMI standard. They could equally well be applied to other frameworks, such as HLA. Several papers exist in the literature that address the problem of formal modeling of cyber-physical systems, in particular, using hybrid automata with an emphasis on verification [1, 22, 26, 46]. The focus of this paper, however, is not formal verification, but rather (co)simulation, with an emphasis on the practicalities, and in particular the representation of time.
The list of modeling languages and tools for CPS and hybrid systems design is long, and it is beyond the scope of this paper to cover it exhaustively. A survey dating from 2006 can be found in [11], and a description of the mapping between formalisms, languages, and tools can be found in [9]. There have been numerous developments in the field since then, e.g., [3, 4, 10, 50, 51].
A side benefit of our proposal in this paper is that it potentially enables cosimulation between classical ODE simulators and a relatively newer way of modeling continuous dynamics called “quantized-state systems” (QSS) [25]. QSS simulators model continuous dynamics using discrete events, and sometimes the resulting simulations are more accurate (because of the use of symbolic computation) and more efficient than classical ODE solvers [31]. Since our proposed technique facilitates interoperability between continuous-time solvers and discrete-event systems, and QSS is based on discrete-event systems, it potentially enables interesting hybrid simulation techniques, where QSS can be used where it is most beneficial.
2 Representing time
A major challenge in the design of cyber-physical systems is that time is almost completely absent from models used on the cyber side, while time is central on the physical side. In order for hybrid simulators to interact in predictable and controllable ways, we will need a semantic notion of time that can be used to model both continuous physical dynamics and discrete events. It is naive to assume that we can just use the Newtonian ideal, where time is absolute, a real number t, visible everywhere, and advancing uniformly. We begin in this section by reviewing a set of requirements that any useful model of time must satisfy. We then elaborate with an analysis of practical, realizable alternatives that meet these requirements, at least partially.
2.1 Models of time
Like the Newtonian ideal, any useful semantic notion of time has to provide a clear ordering of events. Specifically, each component in a system must be able to distinguish past, present, and future. The state of a component at a “present” is a summary of the past, and it contains everything the component needs to react to further stimulus in the future. A component changes state as time advances, and every observer of this component should see state changes in the same order.
We also require a semantic notion of time to respect an intuitive notion of causality. If one event A causes another B, then every observer should see A ordered before B.
In order to cleanly support discrete events, we also require a semantic notion of simultaneity. Under such a notion, two events are simultaneous if all observers see them occurring at the same time. We need to avoid models where one observer deems two events to be simultaneous and another does not.
We could easily now digress into philosophy or modern physics. For example, how could a notion of simultaneity be justifiable, given relativity and the uncertainty principles of quantum mechanics? We resist the temptation to digress, and appeal instead to practicality. We need models that are useful for cosimulation. The goal is to be able to design and build better simulators, not to unlock the secrets of the universe. Even after the development of relativity and quantum mechanics, Newtonian ideal time is a practical choice for studying many macroscopic systems.
But ironically, Newtonian time proves not so practical for hybrid cosimulation. The most obvious reason is that digital computers do not work with real numbers. Computer programs typically approximate real numbers using floating-point numbers, which can create problems. While real numbers have infinite precision, their floating-point representation does not. This discrepancy leads to quantization errors. Quantization errors may accumulate. Although real numbers can be compared for equality (e.g. to define “simultaneity”), it rarely makes sense to do so for floating-point numbers. In fact, some software bug finders, such as Coverity, report equality tests of floating-point numbers as potential bugs.
Consider a model where two components produce periodic events with the same period starting at the same time. The modeling paradigm should assure that those events will appear simultaneously to any other component that observes them. Without such a notion of simultaneity, the order of these events will be arbitrary, and changing the order of discrete events can have a much bigger effect than perturbing their timing, and a much bigger effect than perturbing samples of a continuous signal. Periods that are simple multiples of one another should also yield simultaneous events. Quantization errors should not be permitted to weaken this property. To this end, Broman et al. [8] formulate three requirements for a model of time:
 1.
The precision with which time is represented is finite and should be the same for all observers in a model. Infinite precision (as provided by real numbers) is not practically realizable in computers, and if precision differs between observers, then they will not agree on which events are simultaneous.
 2.
The precision with which time is represented should be independent of the absolute magnitude of the time. In other words, the time origin (the choice for the meaning of time zero) should not affect the precision.
 3.
Addition of time should be associative. That is, for any three time intervals \(t_1\), \(t_2\), and \(t_3\),$$\begin{aligned} (t_1 + t_2) + t_3 = t_1 + (t_2 + t_3) . \end{aligned}$$
In contrast to the above quote from Broman et al. [8], and to avoid confusion with the term precision in measurement theory, henceforth in this article we will use the term resolution instead of precision to denote the grain at which we can tell two distinct time stamps apart.
Definition 1
(Time resolution) Time resolution is the smallest representable time difference between two time stamps.
For instance, if we state that “a model has a time resolution of one millisecond,” or for short, “the time resolution is milliseconds,” it means that the time points 0.001, 0.002, 0.003 s, \(\ldots \) are representable, but the time points in between are not. No values can be defined at unrepresentable time points.
The output of this program is 0.800000,0.800000,0. Both r and k appear to have the value 0.800000, but due to rounding errors, the test for equality r==k evaluates to false, which is represented as a zero-valued integer in C. Hence, floating-point numbers should not be used as the primary representation for time if there is to be a clean notion of simultaneity. Unfortunately, in FMI 2.0 and many other simulation frameworks, this is exactly the representation that is used. This is problematic.
2.2 Superdense time
A model of time that is particularly useful for hybrid cosimulation is superdense time [13, 33, 34, 35]. Superdense time is supported by FMI-ME, but not by FMI-CS. Fundamentally, superdense time allows two distinct ordered events to occur in the same signal without time elapsing between them.
A superdense time value can be represented as a pair (t, n), called a time stamp, where t is the model time and n is a microstep (also called an index). The model time represents the time at which some event occurs, and the microstep represents the sequencing of events that occur at the same model time. Two time stamps \((t, n_1)\) and \((t,n_2)\) can be interpreted as being simultaneous (in a weak sense) even if \(n_1 \ne n_2\). A stronger notion of simultaneity would require the time stamps to be equal (both in model time and microstep).
Superdense time is ordered lexicographically (like a dictionary), which means that \((t_1, n_1) < (t_2, n_2)\) if either \(t_1 < t_2\), or \(t_1 = t_2\) and \(n_1 < n_2\). Thus, an event is considered to occur before another if its model time is less or, if the model times are the same, if its microstep is lower.
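This lexicographic ordering is straightforward to implement; the following C sketch illustrates it (the struct and function names are ours, not part of any standard):

```c
#include <stdint.h>

/* A superdense time stamp: a model time t (here an integer tick count,
   anticipating Sect. 2.3) and a bounded microstep n. */
typedef struct {
    uint64_t t;  /* model time, in ticks */
    uint32_t n;  /* microstep (index) */
} timestamp_t;

/* Lexicographic comparison: returns -1 if a < b, 1 if a > b, 0 if equal.
   Model time is compared first; microsteps break ties. */
int ts_compare(timestamp_t a, timestamp_t b) {
    if (a.t < b.t) return -1;
    if (a.t > b.t) return 1;
    if (a.n < b.n) return -1;
    if (a.n > b.n) return 1;
    return 0;  /* strongly simultaneous: same model time and microstep */
}
```

Two stamps with equal t but unequal n compare as ordered (weakly simultaneous), while a return value of 0 corresponds to the stronger notion of simultaneity.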
An event is a value with a time stamp. Time stamps are a particular realization of tags in the tagged-signal model of [32]. They provide a semantic ordering relationship between events that can be used in software simulations of physical phenomena and also in the programming logic on the cyber side of a cyber-physical system. But computers cannot perfectly represent real numbers, so a time stamp of the form \((t,n) \in \mathbb {R}\times \mathbb {N}\) is not realizable in software. Many software systems approximate a time t using a double-precision floating-point number. But as we noted above, this is not a good choice. We examine alternatives below.
The microstep can also be problematic for software, because in theory, it has no bound. But computers can represent unbounded integers (assuming that memory is unbounded), although the implementation cost of doing so may be high, and the benefit may not justify the cost. The microstep, therefore, should either be represented using a bounded integer (such as a 32-bit integer), or not represented at all. With some care in simulator design, it may be possible to never construct an explicit representation of the microstep, and instead rely only on a well-defined ordering of time-stamped values. Microsteps can implicitly begin at zero and increment until a signal stabilizes. This is the approach used in FMI-ME, where there is no explicit microstep, and yet, superdense time is supported. By contrast, in FMI-CS version 2.0, microsteps are explicitly disallowed [40].
2.3 Integer time
Given that floating-point numbers are a problematic representation of time, what should we use? An obvious alternative is integers. We postulate that a hybrid cosimulation extension must use integer numbers in some way to represent the progress of time for coordinating FMUs. But how, exactly? And at what cost?
Integers are typically represented in a computer using a fixed number of bits. For example, a C int32_t is a 32-bit, two’s-complement integer, and a uint32_t is a 32-bit unsigned integer. Such integer values can be interpreted as representing time with some arbitrary unit. For example, we might interpret an integer value as having units of microseconds, in which case the value 100 represents 0.0001 s.
Integers can be added and subtracted without quantization errors, a key property enabling a clean semantic notion of simultaneity. For example, suppose that one discreteevent signal has regularly spaced events with a period of \(p_1 = 3\), and another has regularly spaced events with a period of \(p_2 = 1\), both beginning at time 0. The times of the events in the first signal are \(0, p_1, p_1 + p_1, p_1 + p_1 + p_1, \ldots \), and the times of the events in the second signal are \(0, p_2, p_2 + p_2, p_2 + p_2 + p_2, \ldots \). Then no matter how these additions are performed, every third event in the second signal will be simultaneous with an event in the first signal.
Again, we have no such assurance with floating-point numbers. For example, suppose that we are using the IEEE 754 double-precision floating-point standard, and we let \(p_1 = 0.000003\) (3 µs). If we add \(p_1\) to itself 12 times, performing \((\cdots ((p_1 + p_1) + p_1) + p_1 \cdots )\), then the result is 0.000035999999999999994. On the other hand, if we let \(q = ((p_1 + p_1) + p_1)\), then \((((q + q)+q)+q)\) yields 0.000036. The results are not equal.
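This discrepancy is easy to reproduce; a minimal C check (the function name is ours) sums 12 copies of the period in the two groupings and compares the results bit-for-bit:

```c
#include <stdbool.h>

/* Sum 12 copies of p1 in two different groupings and report whether the
   results agree bit-for-bit. For p1 = 3e-6 they do not: floating-point
   addition is commutative but not associative. */
bool groupings_agree(double p1) {
    double s = 0.0;
    for (int i = 0; i < 12; i++) {
        s += p1;                   /* ((p1 + p1) + p1) + p1 + ... */
    }
    double q = (p1 + p1) + p1;     /* 3 * p1 */
    double r = ((q + q) + q) + q;  /* also 12 * p1, grouped differently */
    return s == r;
}
```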
If signals are continuous, then such small differences in time have very little effect on system behavior. But if signals are discrete, then any difference in time can change the order in which events occur, and the potential effects on system behavior are not bounded.
One possible solution is to explicitly use an error tolerance when comparing two floating-point numbers. For example, suppose we assume an error tolerance of 100 ns. That is, we consider two times to be simultaneous if their difference is less than 100 ns. Then the above two times are simultaneous. But now consider three times, \(t_1 = 0.0000036\), \(t_2 = 0.00000367\), and \(t_3 = 0.00000374\). Then \(t_1\) is simultaneous with \(t_2\), and \(t_2\) is simultaneous with \(t_3\), but \(t_1\) is not simultaneous with \(t_3\). Surely we would want simultaneity to be a transitive property!
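A C sketch of such a tolerance-based comparison makes the intransitivity concrete (the function name is ours):

```c
#include <math.h>
#include <stdbool.h>

/* "Simultaneous within tolerance": two times (in seconds) are deemed
   simultaneous if they differ by less than 100 ns. This relation is
   reflexive and symmetric, but not transitive. */
bool simultaneous(double t_a, double t_b) {
    return fabs(t_a - t_b) < 100e-9;
}
```

With the three times from the text, simultaneous(t1, t2) and simultaneous(t2, t3) both hold, yet simultaneous(t1, t3) does not.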
An alternative to floating-point numbers is rational numbers. A time value could be given by two unsigned integers, a numerator and denominator. Addition of two such numbers will require first finding the least common multiple M of the denominators, then scaling all four numbers so that the two denominators equal M. Then the numerators can be added, and the denominator of the result will be M. However, this makes addition a relatively expensive operation, unless measures are taken to ensure that denominators are equal. Such measures, however, are equivalent to reaching agreement across a model on a time resolution, so we believe that a simpler solution uses an integer representation of time with an agreed resolution. It is also much more difficult to determine when overflow will occur with rational numbers. For example, if denominators are represented using 32-bit unsigned integers, and two times with denominators 100,000 and 100,001 are added, will overflow occur?
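A C sketch of rational-time addition illustrates both the cost and the overflow hazard (the types and names are ours; for simplicity, results are not reduced to lowest terms):

```c
#include <stdint.h>
#include <stdbool.h>

/* A rational time value: num/den seconds, both 32-bit unsigned. */
typedef struct { uint32_t num, den; } rat_time_t;

static uint32_t gcd_u32(uint32_t a, uint32_t b) {
    while (b != 0) { uint32_t t = a % b; a = b; b = t; }
    return a;
}

/* Add two rational times over their least common multiple denominator.
   Returns false if the result does not fit in 32 bits. */
bool rat_add(rat_time_t a, rat_time_t b, rat_time_t *out) {
    uint32_t g = gcd_u32(a.den, b.den);
    uint64_t den = (uint64_t)(a.den / g) * b.den;  /* lcm(a.den, b.den) */
    uint64_t num = (uint64_t)a.num * (b.den / g)
                 + (uint64_t)b.num * (a.den / g);
    if (den > UINT32_MAX || num > UINT32_MAX) return false;
    out->num = (uint32_t)num;
    out->den = (uint32_t)den;
    return true;
}
```

For the example in the text, the answer is yes: the denominators 100,000 and 100,001 are coprime, so the result's denominator is their product, 10,000,100,000, which exceeds the 32-bit range.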
Suppose we adopt an integer representation of time. What units should we choose? We could start by considering existing integer representations of time. For example, VHDL, a widely used hardware simulation language, uses integer time with units of femtoseconds. Another example is the network time protocol (NTP) [37], a widely used clock synchronization protocol that sets the current time of day on most computers today. NTP represents time using two 32-bit integers, one counting seconds, one counting fractions of a second (with units of \(2^{-32}\) s). This can be treated as an ordinary 64-bit integer with units of \(2^{-32}\) s (about 0.23 ns). IEEE 1588 [16], a more recent clock synchronization protocol, is designed to deliver higher-precision clock synchronization on local area networks. A time value in IEEE 1588 is represented using two integers, a 32-bit integer that counts nanoseconds, and a 48-bit integer that counts seconds.
NTP and IEEE 1588 are designed to coordinate notions of time across a network. All participants in such a network agree to a time resolution (a resolution of \(2^{-32}\) s for NTP, 1 ns for IEEE 1588). The first requirement in Broman et al. [8] stipulates simply that the time resolution should be the same for all observers in a model. It need not be the same across models. In fact, simulation models tend to have very different time scales; high-speed circuits require femtoseconds while astronomy may only require years. Cosimulation involves the coupling of independent models that are coordinated in a black-box manner, each of which can operate at a different time resolution. From the perspective of the master that coordinates the exchange of data between components, however, all components must be understood to progress in increments that are multiples of the time resolution used by the master.
In Ptolemy II [44], the time resolution is a single global property of a simulation model, shared by all components. The resolution is given as a floating-point number, but the time itself is given as an integer, representing a multiple of that resolution. All arithmetic is done on the integer representation, and the unit is only used when rendering the resulting times for human observation. For example, if the resolution is given by the floating-point number 1E-10, which in units of seconds denotes 0.1 ns, then the integer 10,000,000 will be presented to the user as 0.001 s.
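The same scheme is easy to sketch in C (the function name is ours): all arithmetic stays in the integer domain, and the resolution enters only at display time.

```c
#include <stdio.h>
#include <stdint.h>

/* Model time is kept as an integer tick count. The resolution (a
   floating-point number of seconds per tick) is used only when
   rendering a time for human observation. */
void render_time(char *buf, size_t len, uint64_t ticks, double resolution_s) {
    snprintf(buf, len, "%g s", (double)ticks * resolution_s);
}
```

With a resolution of 1E-10 and 10,000,000 ticks, this renders "0.001 s", matching the example above.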
Integers are, of course, vulnerable to overflow. Adding two integers can result in an integer that is no longer representable in the same bit format. Subtracting two unsigned integers can result in a negative number, which is not representable using an unsigned integer.
Whether and when an overflow occurs depends on the resolution, but also on the origin (what time is zero time). NTP and IEEE 1588 both set time relative to a fixed zero time, which in the case of NTP is 0h January 1, 1900, and in the case of 1588 is 0h January 1, 1970, TAI (international atomic time). Sometime in the year 2036, \(2^{32}\) s will have elapsed since January 1, 1900, and all NTP clocks will overflow. IEEE 1588 uses 48 bits, so the first overflow will not occur for approximately 9.1 million years. If we define the time origin to be, say, the start time of a simulation, then the NTP representation will be able to simulate approximately 62 years before its representation of time overflows. A VHDL simulator using a 64-bit integer representation of time with units of femtoseconds can simulate approximately 2.56 h of operation before overflow occurs, so clearly choosing the origin to be January 1, 1900, would not be reasonable. In Ptolemy II, overflow cannot occur, because the integer representation of time uses an unbounded data structure to represent an arbitrarily large integer.^{1} And the resolution is a parameter of the model, so Ptolemy II simulations can handle high-speed circuits as well as astronomical simulations.
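These overflow horizons follow from a one-line computation; the helper below is an illustrative sketch (not part of any standard) that returns the span of simulated time representable by an n-bit counter at a given resolution:

```c
#include <math.h>

/* Seconds of simulated time representable by an n-bit counter at the
   given resolution (in seconds per tick), i.e., the span before the
   counter overflows. */
double overflow_horizon_s(int bits, double resolution_s) {
    return ldexp(resolution_s, bits);  /* resolution_s * 2^bits */
}
```

For example, a 63-bit (signed 64-bit) femtosecond counter yields about 9223 s, roughly the 2.56 h quoted above, while 32 bits of whole seconds yield about 136 years.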
In computers, addition and subtraction of integers is extremely efficient. In the IEEE 1588 representation, however, the two numbers cannot be conjoined into a single number, and arithmetic on the numbers must account for carried digits from the nanoseconds representation (32 bits) to the seconds representation (48 bits). Since computers do not have hardware support for such arithmetic, such a representation will be more computationally expensive to support. Adding two IEEE 1588 times takes quite a few steps in software. For the Ptolemy II unbounded integers, addition and subtraction are also potentially more expensive than addition on ordinary 32- or 64-bit integers, but the cost is not as high as for IEEE 1588 because overflow is more easily detected in the hardware.
In modern computers, addition and subtraction of 32- and 64-bit integers is at least as fast as addition and subtraction of floating-point numbers, and it requires significantly less energy. Multiplication, however, is a more complicated story. The problem with multiplication of integers lies in the units. Consider two integers with units of microseconds. If we multiply the two times, the units of the result will be microseconds squared. First, this is not a time, and hence there is no reason to insist that this result be represented the same way times are represented. Second, whether the result is representable in a 32- or 64-bit integer will depend on the origin and resolution of the times.
Multiplication of two times, however, is a relatively rare operation. A more common operation is multiplication of a time by a unitless scalar. For the example above, instead of adding \(p_1\) to itself 12 times, we might have multiplied \(12*p_1\). As long as there is no overflow, such multiplication will typically be performed without quantization error and reasonably efficiently in a computer. However, not all processors have hardware support for integer division. And multiplication by a non-integer, as in \(0.1 * p_1\), will yield a floating-point representation of the result, not an integer representation. Hence, it will be vulnerable to quantization errors.
We claim that for the purposes of coordinating FMUs, addition, subtraction, and multiplication by integers are mostly sufficient, and hence an integer representation of time can be very efficient. Within an FMU, however, there may be more complex operations involving time, and the FMU may include legacy software or ODE solvers that use floating-point representations of time. Such FMUs will suffer a (hopefully small) cost of conversion of time values at the interface. Presumably, since such FMUs already tolerate quantization errors inherent in a floating-point representation, any errors that are introduced in the conversion process will also be tolerated. For example, such FMUs should never compare two times for equality, because if they are using a floating-point representation of time, such a comparison is meaningless. They should also not have behavior that depends strongly on the relative ordering of two time values.
We choose to represent time with a 64-bit unsigned integer with arbitrary resolution, where the resolution is a parameter of the model and the origin is the simulation start time. This representation is computationally efficient on modern machines, and for well-chosen resolutions it will tolerate very long simulations without overflow. It is also easily converted to and from floating-point representations (with losses, of course). Also, given the enormous range of time scales that might be encountered in different simulation models, choosing a fixed universal resolution that applies to all models probably does not make sense. We believe further that all the acceptance criteria of [8] can be met without an explicit representation of the microstep.
2.4 The choice of resolution
The only remaining issue is how to choose the resolution. There are two questions here. First, what data type should be used to represent the resolution? Second, should an FMU be able to constrain the selected resolution?
The latter question seems the easier to answer. In hybrid cosimulation, an FMU may encapsulate considerable expertise about the system that it models, and the FMU’s model may be valid only over a range of time scales. It seems reasonable, therefore, that an FMU should be able to insist on a resolution. On the other hand, to be composable with other FMUs, the FMU should be capable of adapting to a finer resolution than the one it requests. If two FMUs provide different resolutions, or if their resolutions differ from the default resolution of the simulation, then how should the differences be reconciled?
 (i)
The selected resolution for the model is the finest of all specified resolutions.
 (ii)
The selected resolution for the model is the greatest common divisor (GCD) of all specified resolutions.
 (a)
 Double. In Ptolemy II, all components share a single double-precision floating-point number, the unit, which specifies the resolution of the model. All timestamps are interpreted as an integer multiple of this value.
 (b)
 Rational. Alternatively, the resolution can be specified as a pair of integers, a numerator and a denominator. In this case, it is always possible in theory to find a GCD, although there is a risk of overflow if the numerator and denominator are represented with a bounded number of bits. In addition, conversion to and from a floating-point representation, which is often needed internally by an FMU, may be costly.
 (c)
 Decimal. Finally, the resolution can also be specified using an integer exponent n that stipulates a resolution of \(10^{n}\) s. For instance, IEEE 1588 resolution is achieved with \(n = -9\), and VHDL resolution is achieved with \(n = -12\). Using a decimal resolution, the finest resolution is always the same as the GCD and is always precise.
Also, since parameters are often related to one another, the ability to, for example, calculate the difference between two parameter values without quantization error can be important. For instance, if a component specifies using parameter values that it produces events at times \(t_1\) and \(t_2\), given in decimal, then the time interval \(t_2t_1\) can be calculated without error.
If parameter values are not given in decimal, for example “1/3,” then decimal resolution is not sufficient to avoid quantization errors. Such errors can be avoided by selecting rational resolution instead. A rational resolution has the advantage that if an FMU internally performs computation according to its specified resolution, then simultaneity of any events it generates compared to events generated by other similar FMUs is well defined. The simultaneity of such events will not be subject to quantization errors. However, this choice comes at a possibly considerable cost in converting time values to and from floatingpoint numbers. And, as we have noted, this choice has a more complicated overflow risk.
3 Hybrid cosimulation with integer time
In order to examine the effects of using integer time in hybrid cosimulation, we need a practical framework for our analysis. Instead of defining our own, we leverage the existing FMI-CS 2.0 standard and extend it to use integer time. In addition, in order to support discrete events, we enrich the interface to encode the absence of an event and allow FMUs to react instantaneously, i.e., without moving forward in time. We call this framework FMIHC (FMI for hybrid cosimulation).
3.1 Extensions to the FMI standard
As a consequence of using integer time, the FMUs and MA need to agree on a resolution before the simulation starts. Two new functions are introduced for this: getPreferredResolution and setResolution. In addition, we introduce a hybrid step function doStepHybrid that uses integer time instead of doubles, a function getMaxStepSizeHybrid that returns the maximal allowed communication step size, and the functions getHybrid and setHybrid that, in addition to exchanging regular signal values, can also communicate “absent,” to indicate that there is no value present at the corresponding time instant.
3.1.1 Advancing time
The first parameter points to a particular FMU. The second parameter states the current time, using a double-precision floating-point value (named fmi2Real in the standard). The third parameter states the communication step size, which is the time interval over which the master requests the FMU to advance. Finally, the fourth parameter provides information as to whether any rollbacks can occur, which allows the FMU to abandon any kept state if this parameter is true.
Instead of using the fmi2Real data type, we use the type definition fmiXIntegerTime, a 64-bit unsigned integer type. The additional parameter performedStepSize is used for communicating back to the master the size of the performed step, which could be smaller than the requested step, communicationStepSize. If the performed step size is equal to the requested step, then the FMU has accepted the requested step. If the performed step is smaller than the requested step, then the FMU has rejected the requested step, but nevertheless advanced to time currentCommunicationPoint \(+\) performedStepSize.
This function returns an upper bound of the step size that the FMU will accept on the next invocation of doStepHybrid. The master algorithm should query this function before calling doStepHybrid.
3.1.2 Negotiating the resolution of time
The preferred resolution is returned as an integer using the second parameter. As described in the previous section, the resolution is represented by an integer n that stipulates a resolution of \(10^{n}\) s.
3.1.3 Discrete events
The aforementioned functions are introduced to offer support for integer time. In order to support discrete signals, an FMU must be able to output or take in discrete events, which are present only for a duration of zero time (one microstep in superdense time) and absent otherwise.
The FMI standard defines two kinds of functions for setting and getting input and output signal values: fmi2SetXXX and fmi2GetXXX. There are different functions for different variable types. The substring XXX is a placeholder for the type.^{3} For instance, fmi2SetReal is the function that is used to set input signal values of type fmi2Real, which is implemented using double-precision floating-point numbers.
If flag[i] == present, the signal is considered to be present and the value of the variable vr[i] is value[i]. In case flag[i] == absent, the signal is not present and the value[i] of variable vr[i] should be ignored. Note that there are many alternative ways of extending the standard with capabilities of expressing absent and present signals. For instance, instead of creating new get and set functions, separate functions can be used for indicating if a signal is present or absent. However, these implementation details are outside the scope of this paper and would be a decision for the FMI steering committee.
3.2 Categories of FMUs in FMIHC

Category 0: An FMU in this category internally uses floating-point numbers to represent time (see the top of Fig. 3). This category can be further refined into two subcategories. Category \(0_A\) denotes an FMU that is compatible with existing FMI 2.0 cosimulation FMUs. Such FMUs are not suitable for hybrid cosimulation because the standard disallows zero step sizes and insists on always calling \(\texttt {doStep}\) (advance time) in between \(\texttt {set}\) (providing inputs) and \(\texttt {get}\) (retrieving outputs). The former rules out the use of superdense time, while the latter prohibits the handling of direct feedthrough loops. Therefore, we do not further discuss category \(0_A\) FMUs in this paper. In contrast, category \(0_B\) follows the assumptions in [7], which allow a zero step size and getting and setting values (multiple times) without having to advance time.

Category 1: This is the first of four categories of FMUs that use integers to represent time. Note that categories 1–4 represent the exhaustive combinations of support for getPreferredResolution and setResolution. In category 1, neither of these functions is supported. The operation of an FMU in this category is time-invariant; it does not use time to determine its outputs or state updates. Such a component can implement, for example, a time-invariant memoryless function such as addition.

Category 2: In category 2, the function getPreferredResolution is supported, but setResolution is not. This means that the FMU states which resolution it will use, but does not allow the master to change its resolution. That is, the resolution is actually required, not just preferred. A composition of multiple category 2 FMUs may result in a heterogeneous model with respect to the resolution of time. Category 2 FMUs are natural to use in cases where the FMU should output data at periodic time intervals, e.g., periodic samplers or signal generators. Tools that have a fixed time resolution, such as Rhapsody from IBM or VHDL programs, would produce FMUs of this category.

Category 3: FMUs in this category support setResolution, but do not support getPreferredResolution. This means that the FMU uses the integer representation of time (in contrast to category 1), but any resolution is acceptable. For instance, a zero-crossing detector would fall into this category.

Category 4: An FMU in this category supports both getPreferredResolution and setResolution. This means that the FMU may first communicate to the master the resolution that it prefers, followed by the master telling the FMU which resolution it should use. An ODE solver FMU would belong to this category.
3.3 Modular support for FMIHC
Figure 4 depicts an architectural view of how an FMI simulation tool can interact with different category FMUs modularly, without drastic changes to its simulation engine.
In the left part of the figure, the dashed line represents an FMI simulation tool that takes a set of connected FMUs as input and produces a simulation result as output. The basic idea is to separate the concerns of the master algorithm from the logic that handles the translation between different resolutions for different categories of FMUs. This logic is instead encoded into wrappers, that is, software components that translate function calls from the master algorithm into calls to the FMUs. Both the implementation of the master algorithm component and the wrapper components are internal to a specific tool implementation. The design discussed in this article does not assume any specific implementation language. Hence, the wrapper interface functions (with prefix “_”) are arbitrary; tool vendors can choose the specifics of their wrapper interface as they see fit and are not tied to a specific programming language for the implementation of their execution engine.
3.4 An implementation of FMIHC
The interface extensions described in this paper can be used by making relatively small adaptations to existing master algorithms. In a nutshell, it requires adopting our integer-based representation of time, using specific wrappers for FMUs based on their category, and letting the master negotiate a time resolution as part of its initialization procedure.
In the following, we outline how to make these adjustments based on our own reference implementation called FIDE [15], an FMI Integrated Development Environment. FIDE implements a master algorithm based on the work of Broman et al. [7]. It is capable of deterministically simulating mixtures of continuous-time and discrete-event dynamics, has a superdense model of time [33, 34], and features extended data types to support an explicit notion of absent. Superdense time is modeled by allowing the master to take zero-size steps, allowing the simulation to iterate over a number of lexicographically ordered indexes before it advances to the next Newtonian time instant. The absence of events in signals is enabled through the FMIHC functions getHybrid and setHybrid described in Sect. 3.1.3.
Figure 5 provides a schematic description of the MA implemented in FIDE. Any tool that supports FMI will feature an execution engine much like the one in FIDE. The simulation tool reads a model description that describes how a set of FMUs is connected. It loads each FMU by reading the FMI XML file and dynamically linking the required C libraries as it normally would. However, in order to accommodate FMIHC, each FMU is now identified by category, and a matching wrapper object is instantiated for every FMU. The wrapper is specifically designed to interface with FMUs of a particular category. Since all wrappers use the same interface (all use integer time), the logic of the execution engine is not complicated by the different ways that FMUs may interpret time: all the necessary conversions are performed by the wrappers. For instance, if a category \(0_B\) FMU is used, the wrapper handles the correct conversion between integer time and floating-point time.
During initialization, the master calls the function _ determineResolution() that determines the time resolution for the simulation. This function iterates over all the wrappers and queries them for the time resolution exponent using \(\texttt { \_getPreferredResolution}\). The time resolution exponent of the simulation is computed as the minimum among a default value and the resolution exponents obtained from all the FMUs that partake in the simulation. The chosen resolution exponent is then communicated to the wrappers using \(\texttt { \_setResolution}\). The wrappers will eventually use it to convert the integer time stamps used by the master to whichever model of time is used internally by the wrapped FMU, if necessary.
FIDE must keep track of the global time of the master and the current step size using integers. Hence, it cannot use the fmi2Real data type that is prescribed by FMI-CS 2.0 for this purpose. We use a new data type, fmiXIntegerTime, instead. Finally, all direct calls from the master to the FMU functions \(\texttt {fmi2DoStep}\), \(\texttt {getMaxStepSize}\), get, and set have to be removed and replaced by calls to the corresponding intermediate functions provided by the wrappers.
4 Time conversion and quantization
Using integers for representing time does not completely remove time quantization errors: an FMU may still use a floating-point number internally for time keeping. In such cases, conversion from the floating-point representation of time kept inside the FMU to the integer time used by the master comes with a loss of precision. Specifically, the effects of time quantization come into play when a category \(0_B\) FMU rejects a proposed step size and makes partial progress over an interval whose length (a floating-point number) cannot be losslessly converted into a corresponding fixed-resolution integer time for the master to interpret. Similar problems arise when there is a mismatch in time resolution between the master and a category 2 FMU. Quantization errors also result when a higher-resolution integer time is converted to a lower-resolution integer time. For instance, the master may instruct a category 2 FMU to take a step that is too small to represent with the resolution that the FMU uses internally.
The key insight here is that in cosimulation, participants are treated as black boxes; each component has its own isolated understanding of time that is based on the characteristics of its local clock. Each level of hierarchy in a cosimulation gives rise to a different clock domain in which the passing of time may register differently from another clock domain. The degree to which two components can be synchronized therefore depends on the compatibility of their clock domains. The issue of translating time across different clock domains is complicated by corner cases, which, if not handled appropriately, may lead to Zeno behavior or may cause discrete events emitted by one component to be missed by another. These kinds of issues play a role only in the interaction with category \(0_B\) and category 2 FMUs. The former does not admit integer time and hence requires a conversion from and to integer time, and the latter cannot adapt its resolution to its environment, which requires a conversion between integer times. On the other hand, category 1 FMUs have no time resolution at all, and therefore their behavior must be time-invariant, while categories 3 and 4 can adapt their resolution and therefore synchronize perfectly with their environment. Hence, category \(0_B\) and category 2 FMUs are the main focus of the remainder of this section. A full C implementation of wrappers sufficient to cosimulate any combination of FMUs of any of the aforementioned categories is given in “Appendix”.
4.1 Converting from integer to real-valued time
4.2 Converting from real-valued to integer time
No matter how fine a time resolution we choose, an arbitrary real-valued time instant is unlikely to align perfectly with some integer-time instant. As a result, conversion from a real-valued time to integer time can introduce a time quantization error of up to one unit of the integer time resolution. Notice that this quantization error is controlled by the user, modeler, or tool integrator through the simulation parameter r, the time resolution. The finer the resolution, the smaller the quantization error.
For simplicity, we assume the master adopted a time resolution of 1 second (\(n = 0\)). At time \(t=0\) the master calls \(\texttt { \_getMaxStepSize}\) of the FMU A wrapper (FMU B does not implement \(\texttt { \_getMaxStepSize}\), and hence gives no indication as to what step size it will accept). The FMU A wrapper then calls \(\texttt {getMaxStepSize}\), which returns \(\varDelta t = 0.4\). Because of the master’s time resolution \(r=1\), \(\varDelta t\) cannot be represented exactly in terms of multiples of r. Therefore, using the ceiling operator as its quantization method, the wrapper reports back 1, indicating to the master that the FMU will accept a step of size 1. Notice that had we used the floor operator instead, the wrapper would have returned 0, and the simulation would have gotten stuck forever at \(t=0\), because the master proceeds with the smallest of the step sizes that the wrappers return through \(\texttt { \_getMaxStepSize}\). Similarly, if we had used rounding, and rounding of 0.4 returns 0, then again the simulation would get stuck.
The master will next invoke _doStep with a proposed step size of 1. It can invoke this function in either order, first for FMU A, or first for FMU B. Assume FMU B goes first. It will reject the step and indicate that it has made progress up to time 0.8. But that time is not representable in integer time either, so the wrapper rounds it up using the ceiling function. Since \(\lceil 0.8 \rceil = 1\), the wrapper accepts the step, but makes an internal annotation that the FMU only progressed to time 0.8. The next invocations of get and set will provide inputs and retrieve outputs that for the master will appear to occur at time 1, but will look to FMU B as if they occur at time 0.8.
The procedure for FMU A is similar, but the wrapper has a bit more information to work with, since the FMU has previously indicated that it would accept a maximum step of 0.4. Hence, when the master proposes a step of size 1, the wrapper can propose a step of 0.4 to the FMU. The next invocations of get and set will provide inputs and retrieve outputs that, again, for the master appear to occur at time 1, but will look to FMU A as if they occur at time 0.4.
Assume further that the output of FMU A has a discontinuity at time 0.4. This means that the FMU requires that the next invocation of doStep has a step size of 0. It indicates this by returning 0 when \(\texttt {getMaxStepSize}\) is called. The wrapper passes this on to the master by returning 0 in \(\texttt { \_getMaxStepSize}\). The master now has no choice but to propose a zero step size. Upon invocation of this zero step, FMU A advances in superdense time because of the discontinuity, so its local time remains at 0.4. Since FMU B produces a discrete event at this time, its local time will remain at 0.8. The outputs of both FMUs will appear to the master to occur at time 1.
Suppose that after this neither FMU has any anticipated events and therefore will accept any step size. Suppose the master proposes a step of size 10 to the wrappers. The wrappers will need to compensate for the lag of their FMUs, and instead propose a step of 10.6 and 10.2, respectively, to FMU A and FMU B.
It may be possible to design other wrappers with a different API that use the floor or other rounding functions instead of the ceiling function, but our solution appears to be simple, to work well, to preserve causality, and to ensure that time continues to advance. We observe that using the floor in \(\texttt { \_getMaxStepSize}\) would always make the FMU lag behind the master. For \(0_B\) FMUs, the quantization effects due to the use of integer time only play a part in the conversion of time steps in the FMU’s clock domain to time steps in the master’s clock domain, not vice versa. Conversion from master time to FMU time suffers only from ordinary rounding that is a consequence of the floating-point representation of time inside the FMU.
4.3 Converting between differentresolution integer times
It should be noted that we can compute the floor or ceiling of \(\varDelta j\) using solely integer arithmetic; there is no need for floating-point arithmetic for either of them. The floor can be implemented using an integer division that truncates toward zero, which is standard in C99 and most other contemporary programming languages. The ceiling can also be implemented using integer division: if the division truncates toward zero, then \(\lceil \frac{x}{y} \rceil \) can be computed using the expression (x+y-1)/y.
It is important to emphasize that time quantization plays a different role for category 2 FMUs than it does for category \(0_B\) FMUs. The former experience quantization only in the conversion from master time to FMU time, while the latter experience quantization only in the conversion from FMU time to master time; the directionality of time quantization is opposite between the two. This observation also explains why it is unproblematic to use ceiling quantization for category 2 FMUs: the Zeno condition described in Sect. 4.1 is due to loss of precision in the conversion from FMU time to master time, which for category 2 FMUs is lossless. A detailed description of the application of Eqs. (7) and (8) in the wrapper for category 2 FMUs can be found in “Appendix”.
Finally, let us examine the effects of time quantization using an example. Consider the model in Fig. 7, which depicts two category 2 FMUs. The master, along with FMU A, uses a resolution of 1 s, while FMU B uses a resolution of 10 s. In other words, a step of size 1 in the clock domain of FMU B represents a step of size 10 in the clock domain of FMU A. Conversion in the other direction, dividing by \(10^1\), may not yield a whole number and therefore incurs a quantization error.
Interestingly, the event emitted by FMU A at internal time index \(j_A=15\), which corresponds to time \(t=15\), will appear on the input of FMU B (due to the use of the floor function) when it is at internal time index \(j_B=1\), which corresponds to time \(t=10\). Superficially, this may look like a causality violation, but it is not, because the two internal clock domains are completely isolated from each other. They are analogous to two people having a phone conversation, but where one is looking at a clock that is ahead compared to a clock the other is looking at. They cannot see each other’s clocks. An outside observer (the master) has its own clock, which may differ from both the internal clocks (although in this particular example it is perfectly synchronized with FMU A because they use the same resolution). In all three clock domains, causality is preserved.
5 Conclusions
Although we all harbor a simple intuitive notion of time, how it is measured, how it progresses, and what it means for two events to be simultaneous, a deeper examination of the notion, both in models and in physics, reveals considerable subtleties. Cyberphysical systems pose particularly interesting challenges, because they marry a world, the cyber side, where time is largely irrelevant and is replaced by sequences and precedence relations, with a physical world, where even the classical Newtonian idealization of time stumbles on discrete, instantaneous behaviors and notions of causality and simultaneity. Since CPS entails both the smooth continuous dynamics of classical Newtonian physics, and the discrete, algorithmic dynamics of computation, it becomes impossible to ignore these subtleties.
We have shown that the approach taken in FMI (and many other modeling frameworks) that embraces a naive Newtonian physical model of time, and a cyberapproximation of this model using floatingpoint numbers, is inadequate for CPS. It is suitable only for modeling continuous dynamics without discrete behaviors. Using this unfortunate choice for CPS results in models with unnecessarily inexplicable, nondeterministic, and complex behaviors. Moreover, we have shown that these problems are solvable in a very practical way, resulting in CPS models with clear semantics that are invulnerable to the pragmatics of limitedprecision arithmetic in computers. To accomplish this, our solution requires an explicit choice of time resolution that quantizes time so that arithmetic on time values is performed on integers only, something that modern computers can do exactly, without quantization errors. Moreover, we have shown that such an integer model of time can be used in a practical cosimulation environment, and that this environment can even embrace components that internally use floatingpoint representations of Newtonian time, for example to model continuous dynamics without discrete behaviors.
We have gone to considerable effort in this paper to show that choosing a better model of time does not complicate a cosimulation framework such as FMI by much. A small number of very simple extensions to the existing standard are sufficient, and these extensions can be realized in a way that efficiently supports legacy simulation environments that use floatingpoint Newtonian time. But while supporting such legacy simulators, it also admits integration of a new class of simulators, including discreteevent simulators, software engineering models, hybrid systems modelers, and even the new QSS classes of simulators for continuous dynamics. Such a cosimulation framework has the potential for offering a clean and universal modeling framework for CPS. And although we have only worked out the details for FMI, we are convinced that the same principles can be applied to other cosimulation frameworks such as HLA and to simulators that directly embrace mixed discrete and continuous behaviors such as Simulink/Stateflow. We hope that our readers include the people who can make this happen.
Footnotes
 1.
Strictly speaking, overflow can occur in the sense that the machine may run out of memory to represent the integer times. But this would occur at such absurdly large times, beyond the age of the universe with any imaginable resolution, that it is simply not worth worrying about.
 2.
Our proposed extension does not target a specific version of FMI. For this reason (and for brevity), we have removed the prefix “\(\texttt {fmi2}\)” from all newly proposed functions. Newly introduced datatypes honor the naming convention but have the FMI version number replaced by a wildcard, “X.”
 3.
For simplicity, we omit this implementation detail from the remainder of the discussion and refer to these functions without the “XXX” wildcard suffix.
References
 1.Alur, R., Courcoubetis, C., Halbwachs, N., Henzinger, T., Ho, P., Nicollin, X., Olivero, A., Sifakis, J., Yovine, S.: The algorithmic analysis of hybrid systems. Theoret. Comput. Sci. 138, 3–34 (1995)MathSciNetCrossRefzbMATHGoogle Scholar
 2.Bastian, J., Clauss, C., Wolf, S., Schneider, P.: Master for cosimulation using FMI. In: 8th Modelica Conference, pp. 115–120 (2011)Google Scholar
 3.Benveniste, A., Bourke, T., Caillaud, B., Pouzet, M.: Nonstandard semantics of hybrid systems modelers. J. Comput. Syst. Sci. 78, 877–910 (2012)MathSciNetCrossRefzbMATHGoogle Scholar
 4.Bliudze, S., Krob, D.: Modelling of complex systems: systems as dataflow machines. Fundam. Inform. 91(2), 251–274 (2009)MathSciNetzbMATHGoogle Scholar
 5.Blochwitz, T., Otter, M., Arnold, M., Bausch, C., Clauß, C., Elmqvist, H., Junghanns, A., Mauss, J., Monteiro, M., Neidhold, T., Neumerkel, D., Olsson, H., Peetz, J.V., Wolf, S.: The functional mockup interface for tool independent exchange of simulation models. In: Proceedings of the 8th International Modelica Conference, Dresden, Germany. Modelica Association (2011)Google Scholar
 6.Bogomolov, S., Greitschus, M., Jensen, P. G., Larsen, K. G., Mikucionis, M., Strump, T., Tripakis, S.: Cosimulation of hybrid systems with SpaceEx and Uppaal. In: Proceedings of the 11th International Modelica Conference. Linkoping University Electronic Press (2015)Google Scholar
 7.Broman, D., Brooks, C., Greenberg, L., Lee, E.A., Masin, M., Tripakis, S., Wetter, M.: Determinate composition of FMUs for cosimulation. In: Proceedings of the International Conference on Embedded Software (EMSOFT 2013). IEEE (2013)Google Scholar
 8.Broman, D., Greenberg, L., Lee, E.A., Masin, M., Tripakis, S., Wetter, M.: Requirements for hybrid cosimulation standards. In: Proceedings of 18th ACM International Conference on Hybrid Systems: Computation and Control (HSCC), pp. 179–188. ACM (2015)Google Scholar
 9.Broman, D., Lee, E.A., Tripakis, S., Törngren, M.: Viewpoints, formalisms, languages, and tools for cyberphysical systems. In: Proceedings of the 6th International Workshop on MultiParadigm Modeling, pp. 49–54. ACM (2012)Google Scholar
 10.Broman, D., Siek, J.G.: Modelyze: a gradually typed host language for embedding equationbased modeling languages. Technical report UCB/EECS2012173, EECS Department, University of California, Berkeley (2012)Google Scholar
 11.Carloni, L.P., Passerone, R., Pinto, A., SangiovanniVincentelli, A.: Languages and tools for hybrid systems design. Found. Trends Electron. Des. Autom. 1(1/2), 1–193 (2006)CrossRefzbMATHGoogle Scholar
 12.Cassandras, C.G.: Discrete Event Systems, Modeling and Performance Analysis. CRC Press, Boca Raton (1993)Google Scholar
 13.Cataldo, A., Lee, E., Liu, X., Matsikoudis, E., Zheng, H: A constructive fixedpoint theorem and the feedback semantics of timed systems. Michigan. In: Workshop on Discrete Event Systems (WODES), Ann Arbor (2006)Google Scholar
14. Cremona, F., Lohstroh, M., Broman, D., Di Natale, M., Lee, E.A., Tripakis, S.: Step revision in hybrid co-simulation with FMI. In: International Conference on Formal Methods and Models for System Design (MEMOCODE) (2016)
15. Cremona, F., Lohstroh, M., Tripakis, S., Brooks, C., Lee, E.A.: FIDE—an FMI integrated development environment. In: Symposium on Applied Computing (SAC) (2016)
16. Eidson, J.C.: Measurement, Control, and Communication Using IEEE 1588. Springer, Berlin (2006)
17. Feldman, Y.A., Greenberg, L., Palachi, E.: Simulating Rhapsody SysML blocks in hybrid models with FMI. In: 10th Modelica Conference, pp. 43–52 (2014)
18. Fishman, G.S.: Discrete-Event Simulation: Modeling, Programming, and Analysis. Springer, Berlin (2001)
19. Frehse, G., Le Guernic, C., Donzé, A., Cotton, S., Ray, R., Lebeltel, O., Ripado, R., Girard, A., Dang, T., Maler, O.: SpaceEx: scalable verification of hybrid systems. In: International Conference on Computer Aided Verification, pp. 379–395. Springer (2011)
20. Fritzson, P.: Principles of Object-Oriented Modeling and Simulation with Modelica 2.1. Wiley, Hoboken (2003)
21. Goldberg, D.: What every computer scientist should know about floating-point arithmetic. ACM Comput. Surv. 23(1), 5–48 (1991)
22. Henzinger, T.A.: The theory of hybrid automata. In: Inan, M.K., Kurshan, R.P. (eds.) Verification of Digital and Hybrid Systems, Volume 170 of NATO ASI Series F: Computer and Systems Sciences, pp. 265–292. Springer, Berlin (2000)
23. Karsai, G., Lang, A., Neema, S.: Design patterns for open tool integration. Softw. Syst. Model. 4(2), 157–170 (2005)
24. Karsai, G., Sztipanovits, J., Ledeczi, A., Bapty, T.: Model-integrated development of embedded software. Proc. IEEE 91(1), 145–164 (2003)
25. Kofman, E., Junco, S.: Quantized-state systems: a DEVS approach for continuous system simulation. Trans. Soc. Model. Simul. Int. 18(1), 2–8 (2001)
26. Kopke, P., Henzinger, T., Puri, A., Varaiya, P.: What's decidable about hybrid automata? In: 27th Annual ACM Symposium on Theory of Computing (STOC), pp. 372–382 (1995)
27. Kuhl, F., Weatherly, R., Dahmann, J.: Creating Computer Simulation Systems: An Introduction to the High Level Architecture. Prentice Hall PTR, Upper Saddle River (1999)
28. Larsen, K.G., Pettersson, P., Yi, W.: Uppaal in a nutshell. Int. J. Softw. Tools Technol. Transf. (STTT) 1(1), 134–152 (1997)
29. Lee, E.A.: Modeling concurrent real-time processes using discrete events. Ann. Softw. Eng. 7, 25–45 (1999)
30. Lee, E.A.: Fundamental limits of cyber-physical systems modeling. ACM Trans. Cyber Phys. Syst. 1(1), 26 (2016)
31. Lee, E.A., Niknami, M., Nouidui, T.S., Wetter, M.: Modeling and simulating cyber-physical systems using CyPhySim. In: International Conference on Embedded Software (EMSOFT) (2015)
32. Lee, E.A., Sangiovanni-Vincentelli, A.: A framework for comparing models of computation. IEEE Trans. Comput. Aided Des. Circuits Syst. 17(12), 1217–1229 (1998)
33. Lee, E.A., Zheng, H.: Operational semantics of hybrid systems. In: Morari, M., Thiele, L. (eds.) Hybrid Systems: Computation and Control (HSCC), Volume LNCS 3414, pp. 25–53. Springer, Zurich (2005)
34. Maler, O., Manna, Z., Pnueli, A.: From timed to hybrid systems. In: Real-Time: Theory and Practice, REX Workshop, pp. 447–484. Springer (1992)
35. Manna, Z., Pnueli, A.: Verifying hybrid systems. In: Hybrid Systems, LNCS, vol. 736, pp. 4–35. Springer, Berlin (1993)
36. Migoni, G., Bortolotto, M., Kofman, E., Cellier, F.E.: Linearly implicit quantization-based integration methods for stiff ordinary differential equations. Simul. Model. Pract. Theory 35, 118–136 (2013)
37. Mills, D.L.: A brief history of NTP time: confessions of an internet timekeeper. ACM Comput. Commun. Rev. 33, 9–21 (2003)
38. Modelica Association: Functional mock-up interface for model exchange and co-simulation. Report 2.0 (2014)
39. Modelica Association: Modelica—A Unified Object-Oriented Language for Physical Systems Modeling—Language Specification Version 3.3 Revision 1. http://www.modelica.org (2014)
40. Modelisar Consortium and the Modelica Association: Functional mock-up interface for model exchange and co-simulation. Report Version 2.0. https://www.fmi-standard.org/downloads (2014)
41. Otter, M., Elmqvist, H., López, J.: Collision handling for the Modelica multibody library. In: Modelica Conference, pp. 45–53 (2005). Describes three approaches: impulsive, spring-damper ignoring contact area, and spring-damper including contact area
42. Otter, M., Malmheden, M., Elmqvist, H., Mattsson, S.E., Johnsson, C.: A new formalism for modeling of reactive and hybrid systems. In: Modelica Conference. The Modelica Association (2009)
43. Pohlmann, U., Schäfer, W., Reddehase, H., Röckemann, J., Wagner, R.: Generating functional mock-up units from software specifications. In: 9th Modelica Conference, pp. 765–774 (2012)
44. Ptolemaeus, C. (ed.): System Design, Modeling, and Simulation using Ptolemy II. Ptolemy.org, Berkeley, CA (2014)
45. Schierz, T., Arnold, M., Clauss, C.: Co-simulation with communication step size control in an FMI compatible master algorithm. In: 9th Modelica Conference, pp. 205–214 (2012)
46. Tabuada, P.: Verification and Control of Hybrid Systems: A Symbolic Approach. Springer, Berlin (2009)
47. Tripakis, S.: Bridging the semantic gap between heterogeneous modeling formalisms and FMI. In: International Conference on Embedded Computer Systems: Architectures, Modeling and Simulation—SAMOS XV (2015)
48. Tripakis, S., Stergiou, C., Shaver, C., Lee, E.A.: A modular formal semantics for Ptolemy. Math. Struct. Comput. Sci. 23, 834–881 (2013)
49. Zeigler, B.P., Praehofer, H., Kim, T.G.: Theory of Modeling and Simulation, 2nd edn. Academic Press, Cambridge (2000)
50. Zhu, Y., Westbrook, E., Inoue, J., Chapoutot, A., Salama, C., Peralta, M., Martin, T., Taha, W., O'Malley, M., Cartwright, R., Ames, A., Bhattacharya, R.: Mathematical equations as executable models of mechanical systems. In: Proceedings of the 1st ACM/IEEE International Conference on Cyber-Physical Systems, ICCPS '10, pp. 1–11. ACM, New York, NY, USA (2010)
51. Zimmer, D.: Equation-Based Modeling of Variable-Structure Systems. PhD thesis, Swiss Federal Institute of Technology, Zurich, Switzerland (2010)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.