Introduction

Modern-day manufacturing is undergoing one of the biggest changes in its history, with computers tightly integrated into industry products. The number of silicon chips in industry products is increasing dramatically; thanks to those chips, industry products have acquired not only enhanced performance and efficiency but also new functionalities that were previously unforeseen. The most prominent examples of this phenomenon are in the automotive industry, where engine control units (ECUs) control ignition timing with precision for enhanced fuel efficiency, and where we have seen the rise of autonomous driving, a paradigm that was almost unimaginable a decade ago. The term cyber-physical systems (CPS) refers to those systems that combine physical components with digital control by computers; modern industry products such as cars are thus representative of CPS.

Computers in industry products pose one of the greatest challenges to manufacturing, too. Discrete dynamics governed by computers exhibits a kind of complexity that is essentially different from that of continuous dynamics: lack of continuity means lack of uniformity, which makes it hard for us to grasp simple governing principles. As a consequence it is harder than ever to reason about industry products, e.g. to guarantee their safety. For example, nowadays most cars are equipped with electronic throttle control (drive-by-wire); such systems not behaving in the expected manner can lead to severe consequences, including the loss of human lives. In the modern world, industry products play increasingly important roles and, therefore, their safety and correct behavior are a pressing issue.

It is in this context that the author was awarded an ERATO grant from the Japan Science and Technology Agency (JST), leading to the commencement of the ERATO HASUO Metamathematics for Systems Design Project (ERATO MMSD), October 2016–March 2022. In this project, which will eventually employ more than ten researchers, we will use formal methods—mathematical techniques for quality assurance originally developed for software—for quality assurance of CPS such as modern-day industry products.

This goal of extending formal methods from (homogeneous) software systems to (heterogeneous) cyber-physical systems is in fact a standard one: it has been pursued by many researchers in a number of organized research efforts. However, in the extension of formal methods techniques T by additional concerns e relevant to CPS, the construction of the extended techniques \(T^{e}\) has conventionally been done in an individual and one-by-one manner. As an example of such individual extended techniques \(T^{e}\), hybrid automata take finite automata as their basis T and add the additional concern e of continuous dynamics.

What makes our ERATO MMSD project stand out is its unique research strategy that we call metatheoretical transfer: we formalize the process of extension \(T+e\rightarrow T^{e}\) itself as a theory that is parametrized over an arbitrary T and e. This way we obtain a meta-level theory on how an object-level theory T is extended by an additional concern e. Here the word theory refers to a body of definitions and theorems; the object-level theory T thus consists of definitions and theorems about the formal methods technique in question. Such a metatheory on the extension \(T+e\rightarrow T^{e}\) can be thought of as a general recipe that works for various Ts and es in a uniform manner; it yields many concrete extensions \(T^{e}\) as instances. Moreover, via this identification of mathematical essences, which goes deeper than superficial application of known algorithms, we would be able to apply a known technique from formal methods to a problem that is not the original goal of that technique.

We have so far obtained a couple of examples of such metatheoretical transfer. In the nonstandard transfer methodology, for example, the original technique T can in principle be an arbitrary formal methods technique, as long as it is formalized in the language of first-order logic for real numbers. We fix the additional concern e to be continuous dynamics. Our methodology consists of adding a new constant \(\mathtt {dt}\) to T, and constructing a denotational model of T that accommodates infinitesimals, relying on Robinson’s nonstandard analysis. This way we obtain the extended technique \(T^{e}\), which is almost the same as the original discrete technique T (except for the addition of \(\mathtt {dt}\)) but applies to continuous/hybrid dynamics. Moreover, the correctness of the technique \(T^{e}\) is guaranteed by the nonstandard denotational model. We have obtained a few concrete techniques by following the methodology, taking the following as base techniques T: the Floyd–Hoare program logic; a stream-processing language and a refinement type system for it; and abstract interpretation in the style of Cousot and Cousot.

In our project we aim to let this distinctively (meta-)theoretical approach drive concrete applications to real-world problems in industry. Doing so requires careful planning: we need to match concrete industry problems with our theoretical techniques. There are many “no-go” results (such as the undecidability of reachability in hybrid automata) suggesting that conventional formal methods goals such as verification are unlikely to be feasible for CPS. We are thus forced to aim at more lightweight goals like testing and monitoring; techniques towards these lightweight goals can still make a huge difference in reality, given the big roles played by CPS in society. The flexibility we gain from the metatheoretical approach is especially useful here: it allows us to adapt ideas from formal methods research to those lightweight methods; and those ideas strengthen the methods significantly, thus contributing to quality assurance measures for CPS with rigorous mathematical bases.

The current position paper lays out the goals, the strategy and the tactics of the ERATO MMSD project. The organization of the paper is as follows. In Sect. 2 we describe the project’s background and context in further detail, and identify the challenges that the project shall address. In Sect. 3 our metatheoretical transfer strategy is elaborated on. We shall do so by revisiting the academic field of metamathematics from which we draw inspiration, and by illustrating two instances of metatheoretical transfer. These examples are nonstandard transfer from discrete to continuous/hybrid, and coalgebraic unfolding. In Sect. 4 we describe the tactics with which we align the strategy with the practical goal of helping design processes of industry products. Here we describe some concrete research topics: they span category theory, logic, automata, control theory, software engineering, machine learning, artificial intelligence, mathematical optimization, and many other research fields. Our tactics are also concerned with how our results are put to real-world use. Our philosophy here is to be incremental (finding small portions of design processes in which we can help) rather than monolithic (like requiring a comprehensive formal specification to start with). We also describe how we seek to contribute to forming new software-centric design processes of CPS—this is in collaboration with Autonomoose, an autonomous vehicle project at the University of Waterloo. In Sect. 5 we discuss the project organization that mirrors this strategy and these tactics; in Sect. 6 we conclude.

Backgrounds, Contexts and Challenges

Computers in Industry Products and the Challenges Imposed Thereby

Most modern industry products incorporate computers as their vital components. Digital control realized by those computers has brought drastic changes in the landscape of manufacturing.

On the one hand, digital control has enabled radically enhanced capabilities of industry products. Examples include engine control units (ECUs) that achieve a level of fuel efficiency that would otherwise be impossible, and autonomous driving, a game-changing paradigm in automotive industry that had been almost unimaginable a decade ago.

On the other hand, greater complexity is inherent in the digital dynamics governed by computers. A computer program is a recipe for dynamics in the form of a number of lines of code. In such a program it is hard to find the kind of uniformity and continuity that one would often find in physical dynamics: a simple governing equation is unlikely to exist, and a small mutation of a program can have drastic effects. This complexity of digital control, and the complexity of modern industry products under digital control, pose a big challenge in the design of safe, reliable and efficient industry products. The challenge is being taken even more seriously now that computer-controlled artifacts play increasingly important roles in society: their failure can result in huge economic and social damage, and even in the loss of human lives.

This complexity challenge in manufacturing can be discerned in various concrete forms. Let us take the V-model (Fig. 1) as an example: it is one of the representative schemes of product development processes, originally devised for software development but nowadays widely applied also to industry products [28]. Here one would start with devising a high-level design (top-left), then refine the high-level design in a step-by-step manner, implement the design (bottom), and validate the implementation against the requirements of each design step (on the right). Validation is typically done by extensive testing.

Fig. 1 The V-model

The following is a (far from comprehensive) list of concrete challenges that one is likely to encounter.

  1. Cost of hardware testing/simulation In designing complex industry products following the V-model, one big challenge lies in each validation step. A product (or a component of it) has a large number of digital operation modes, and the behaviors in different modes differ widely from one another; this makes extensive testing that covers all the different operation modes prohibitively expensive. In the case of manufacturing (unlike the software industry), the fact that an implementation is given by hardware is an additional major burden: testing (or simulation, as it is commonly called) with a hardware prototype is time consuming and expensive.

  2. Correctness of designs and requirements Extensive validation efforts (the dashed horizontal lines in Fig. 1) would be in vain if the designs and requirements (on the left in the figure) are faulty. Those designs and requirements sometimes fail to comply with laws and regulations; they can also fail to adequately address customers’ needs; and they can simply contain internal inconsistencies. It is in fact widely recognized in industry that a majority of faults originate in the phase of deciding designs and requirements. Given the increasing complexity of industry products, together with increasingly diverse demands from society, it is a big challenge to come up with a “correct” set of desirable properties such as safety, reliability, efficiency, accountability and so on. Even if we succeed in doing so, coming up with a design that satisfies all those desirable properties is an even bigger challenge.

  3. Management of designs and requirements In Fig. 1 and other schemes for design processes, design is conducted not in one shot but in an incremental manner, starting with a rough design and refining it into detailed ones step by step. Communicating designs and requirements along this workflow, from one step to another, is another major challenge. In many design processes a design is communicated in the form of a document written in a natural language. Such documents can be thousands of pages long; they are, therefore, hard to understand, communicate, get right, process and manage. A similar difficulty emerges in derivational development, where one aims at deriving a new design from an existing one.

  4. Optimization of complex systems Modern industry products are complex systems whose performance—such as the torque, fuel efficiency and rattle noise of a car—depends on a number of diverse parameters. Tuning those parameters for optimal performance is often a highly nonconvex problem and hence extremely challenging. In industry products such as cyber-physical systems, the coexistence of discrete and continuous parameters poses an additional burden.

Formal Methods for Computer Systems

Many of the challenges that we have just described—arising from the complexity of digital control—are already present in computer systems, that is, systems that solely consist of digital components. In this context a body of techniques called formal methods has been introduced and developed.

Formal methods is a broad term, but it generally refers to mathematically based techniques for getting systems right. Their emphasis on symbolic formalisms brings both rigor and the possibility of automation with the help of computers that operate on symbolic representations. One can also see formal methods as means for reasoning formally about the correctness and efficiency of systems, where logic, as a branch (or a foundation) of mathematics, naturally plays an important role.

Typical goals of formal methods are verification, synthesis and specification.

  • Verification Verification means to give a mathematical proof of correctness.Footnote 1 The idea comes naturally when one deals with software systems. There, programs and their behaviors are mathematical objects with rigorous definitions; therefore, it makes sense to state their correctness as theorems and to prove them much like in mathematics. More specifically, a verification procedure would take a system model \(\mathcal {M}\) and a desired property \(\varphi \) of \(\mathcal {M}\)—called a specification—as input. It would return either a proof for \(\mathcal {M}\models \varphi \) (“\(\mathcal {M}\) satisfies \(\varphi \)”) or a counterexample that shows \(\mathcal {M}\not \models \varphi \). Two major approaches to verification are model checking by exhaustive search in finite state spaces and (automatic or interactive) theorem proving. An example of the latter is in Fig. 2.

  • Synthesis Synthesis has been another major goal in software science. It aims at generating a suitable parameter set of a given system—or its scheduler, its controller or the system itself—that satisfies a given desirable property (i.e. specification). A fully automatic example (by graph algorithms) is in Fig. 3.

  • Specification Specification, i.e. expressing a desirable property in a formal and symbolic way that is mathematically rigorous and operable by a computer, is vital in the above goals. It is an interesting topic on its own too, e.g. for requirement management towards consistency among different stages of development.

Fig. 2 Program verification by program logic. Given a program as a system model \(\mathcal {M}\) and a specification \(\varphi \), one aims at constructing a proof for \(\mathcal {M}\models \varphi \), given in the form of a symbolic proof tree in program logic (such as the Floyd–Hoare logic)

Fig. 3 Controller synthesis by automata. Given a finite-state automaton \(\mathcal {M}\) as a system model, and a specification \(\varphi \), we synthesize input to \(\mathcal {M}\) that steers \(\mathcal {M}\) desirably so that it realizes \(\varphi \). This is done in the following steps: (1) combining \(\mathcal {M}\) and \(\varphi \) into a new automaton \(\mathcal {M}_{\varphi }\) and (2) conducting a suitable search in \(\mathcal {M}_{\varphi }\). The latter is often a well-known graph-theoretic problem for which we can exploit efficient algorithms

Formal methods have the potential to bring a number of breakthroughs. In verification, for example, one could prove “the system \(\mathcal {M}\) satisfies \(\varphi \) under arbitrary input i”, in a computer-assisted way (sometimes fully automatically). Dealing with infinitely many potential input values and giving a mathematical proof as a certificate, verification provides a level of guarantee that is out of the reach of testing or simulation (i.e. trying finitely many test cases and obtaining empirical assurance).

Formal methods for computer systems have a long history. Some foundational observations can be found in early papers like [14, 27, 47]. For some time, formal methods were considered by industry practitioners to be “leisure activities” of researchers in academia, with most of their techniques not quite scaling up to the complexity of real-world software. The trend changed around the turn of the twenty-first century, however. Thanks to increasingly efficient algorithms, faster computing hardware and the identification of application domains where one can make a difference, a number of tools have been successfully applied to real-world systems. Examples include various theorem provers and SAT/SMT solvers employed in verification of the design of microprocessors (see, e.g. the recent [89]); sophisticated model checkers such as Uppaal [11], PRISM [61] and Spin [48]; the tool Astrée employed in verification of flight control code of Airbus aircraft [77]; the SLAM tool for verifying Windows device drivers [10]; a separation logic-based verification engine integrated in Facebook’s software development cycle [16]; and many more.

Formal Methods for Cyber-Physical Systems

It is, therefore, natural that the application of formal methods is pursued beyond computer systems. Two main fields in which this is happening are industry products and biological systems; together they form the broad discipline of cyber-physical systems (CPS). Extensive research efforts have been made there in the last decade or so, with different emphases that reflect the heterogeneous nature of cyber-physical systems. For example, CPS Week—a major event in the CPS research community—consists of several conferences, each of which covers a major scope in CPS research. These major scopes include hybrid dynamics that combine digital and discrete dynamics with physical and continuous dynamics, real-time properties, and so on.

The formal methods community today is heading toward cyber-physical systems, taking up the challenge of the heterogeneity therein and trying to embrace new features such as continuous dynamics and quantitative performance measures. In the United States, partly thanks to initiatives organized by the NSF and other government agencies, rapid integration is going on between the formal methods community and the control engineering community—a discipline with a long history that conventionally studies continuous dynamics. In Europe, formal methods techniques have been vigorously made quantitative, so that they are sensitive to quantitative performance measures such as probability, elapsed time and energy. This is in contrast with the conventional view of formal methods, in which algorithms would give plain Boolean yes/no answers.

This trend of formal methods towards heterogeneity is hard to miss: in recent editions of major conferences in formal methods (like CAV, POPL and TACAS), typically one of a few keynote talks is given by a researcher pursuing heterogeneity in formal methods. Often this speaker is from the control engineering community, a “continuous counterpart” of formal methods.

The CPS community recognizes the potential role of formal methods, too, of bringing the same breakthroughs as they did to computer systems through specification, verification and synthesis. For example, application of formal methods is recommended in the international standard ISO 26262 for automotive functional safety.

Let us elaborate on typical challenges we encounter when formal methods try to embrace heterogeneity in cyber-physical systems.

Continuous and Hybrid Dynamics

The dynamics of computer systems are governed by clocks; consequently, the mathematical models on which conventional formal methods techniques are based employ a discrete notion of time. This is in stark contrast with physical dynamics, such as the position, velocity, and yaw/pitch/roll angles of a car, which are typically modeled with ordinary differential equations (ODEs) based on a continuous notion of time. Most cyber-physical systems combine discrete and continuous dynamics because of their digital and physical components. This combination is called hybrid dynamics and has been a major scope in CPS research.

Fig. 4 A hybrid automaton, with ODEs (for continuous dynamics) in states (for discrete modes)

To extend formal methods techniques so that their scope encompasses hybrid dynamics, a natural idea is to incorporate ODEs in an (originally discrete) framework. This idea is indeed adopted in many successful extended frameworks. In hybrid automata [6] (see Fig. 4) one augments automata (a standard formalism for algorithmic analysis of discrete dynamics) with ODEs inside each state. Here the transitions between states represent discrete dynamics, while the ODEs in a state represent continuous dynamics within the discrete mode. Another well-known example that follows the same idea is differential dynamic logic [72], where dynamic logic (a variant of the Floyd–Hoare logic, cf. Fig. 2) is extended with ODEs.
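
To make the style of Fig. 4 concrete, the following is a minimal sketch in Python (a made-up two-mode thermostat with forward-Euler integration, for illustration only and not tied to any particular tool): each discrete mode carries an ODE for the continuous state, and guard conditions trigger the discrete transitions between modes.

    # A made-up two-mode thermostat as a hybrid automaton: modes "heat"/"off",
    # continuous state temp, an ODE per mode, and guards for mode switching.

    def flow(mode: str, temp: float) -> float:
        # Right-hand side of the ODE in each discrete mode (assumed dynamics)
        return 2.0 if mode == "heat" else -1.5

    def guard(mode: str, temp: float) -> str:
        # Discrete transitions: heater off above 22 degrees, on below 18
        if mode == "heat" and temp >= 22.0:
            return "off"
        if mode == "off" and temp <= 18.0:
            return "heat"
        return mode

    def simulate(mode: str, temp: float, horizon: float, dt: float = 0.01):
        trace, t = [(0.0, mode, temp)], 0.0
        while t < horizon:
            temp += flow(mode, temp) * dt   # continuous evolution within a mode
            mode = guard(mode, temp)        # possible discrete jump
            t += dt
            trace.append((t, mode, temp))
        return trace

    print(simulate("heat", 19.0, horizon=10.0)[-1])   # stays near the 18-22 band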

These extended frameworks have produced remarkable successes in modeling and analysis of hybrid dynamics. Theoretical studies of them, at the same time, have revealed fundamental gaps of complexity between hybrid dynamics and purely discrete ones. For example, it is known that the reachability problem in hybrid automata is undecidable [6]; this “no-go” theorem implies that most automata-based verification/synthesis algorithms for discrete systems (cf. Fig. 3) do not extend to hybrid systems.

  • Those discrete algorithms achieve scalability via reducing a verification/synthesis problem to reachability, which is in turn efficiently determined by various graph algorithms.

  • However, in the case of hybrid dynamics one cannot rely on this workflow because of the aforementioned undecidability result.

More generally, the presence of continuous dynamics makes many problems significantly more difficult—so much so that we should not expect that the same push-button-style algorithms (like automata-based ones) will solve those problems.

Quantitative Performance Measures

Conventional verification problems ask for a yes/no answer to the question of whether a system model \(\mathcal {M}\) satisfies a specification \(\varphi \). This qualitative view on systems, specifications and correctness is not necessarily suitable for cyber-physical systems.

For example, a CPS often involves probabilistic behaviors—they can come from physical components that operate in a stochastic manner (e.g. due to mechanical wear), human behaviors that we cannot precisely predict, randomized algorithms therein e.g. for fairness or security, OEM components whose behaviors are not totally known, and so on. For CPS real-time properties are also important: for a specification that asks a certain desirable event to eventually occur, one would be interested in how soon the event occurs, or if the occurrence meets a certain deadline. Various notions of cost like energy are critical in many CPS scenarios, too. For example, in synthesis, one would be interested not only in any controller that steers a system \(\mathcal {M}\) so that a specification \(\varphi \) is satisfied—one’s true interest would be in the optimal controller that achieves \(\varphi \) at the smallest possible cost.

These quantitative performance measures (as opposed to qualitative, Boolean, yes/no ones) have been energetically integrated into various formal methods techniques, with a number of theoretical successes and practical tools. This is probably because of the general tendency that a shift from qualitative to quantitative does not necessarily add significant complexity—which is in sharp contrast with the situation for continuous and hybrid dynamics (Sect. 2.3.1). For example, for Markov chains,Footnote 2 reachability probabilities can be efficiently computed via linear programming (LP). This fact opens up possibilities for various efficient verification/synthesis algorithms, as shown in [9]. For weighted automata, where transitions are labeled with weights (or costs) taken from a certain semiring, the situation is similar: many problems reduce to solving linear inequalities in the semiring, and this can often be done rather efficiently [25]. For timed automata [7], too, several variants of the zone construction are known to yield finite-state abstractions of timed automata (see, e.g. [45]).
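
As a small illustration of the LP route for Markov chains, here is a minimal sketch (a made-up four-state chain; the LP encodes the standard least-fixed-point characterization of reachability probabilities, and scipy is used merely as a convenient LP solver):

    from scipy.optimize import linprog

    # Made-up Markov chain: s0 -0.5-> s1, s0 -0.5-> s3 (fail),
    #                       s1 -0.5-> s2 (target), s1 -0.5-> s0.
    # Unknowns x0, x1 = probabilities of reaching s2 from s0, s1
    # (x2 = 1 and x3 = 0 are fixed). The reachability probabilities are the
    # least solution of x0 >= 0.5*x1 and x1 >= 0.5 + 0.5*x0, so minimizing
    # x0 + x1 over these constraints yields them.
    c = [1.0, 1.0]                       # objective: minimize x0 + x1
    A_ub = [[-1.0, 0.5],                 # -x0 + 0.5*x1 <= 0
            [0.5, -1.0]]                 #  0.5*x0 - x1 <= -0.5
    b_ub = [0.0, -0.5]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 2)
    print(res.x)                         # approximately [1/3, 2/3]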

Our Unique Strategy: Metatheoretical Transfer

So far we have argued the following:

  • manufacturing in modern days faces a big challenge as a result of computers in products;

  • formal methods have potential to help the situation; and

  • application of formal methods to industry products (as CPS) calls for significant research efforts, to cope with heterogeneity like continuous/hybrid dynamics and quantitative performance measures.

In the ERATO MMSD project that the current position paper is about, our mission is to explore formal methods’ potential to help deal with problems in design processes of real-world industry products.

There have already been quite a few large-scale organized efforts towards formal methods applied to CPS, and these programs have successfully brought notable research outcomes. So it is natural to ask the following question: what makes our endeavor special and interesting? Is it not just a latecomer after many similar projects? What unique scientific values can we expect from our project? What about practical and industrial values? In this section we shall describe our answers to these questions, by laying out a scientific strategy that we believe makes our project stand out. We shall call the strategy metatheoretical transfer.

Metamathematics

Our strategy of metatheoretical transfer draws its inspiration from metamathematics, a branch of mathematics that studies mathematical activities themselves as its subjects.Footnote 3 That is to say:

  • in mathematical physics one would model and study natural phenomena in mathematical languages (such as that of differential equations);

  • in metamathematics, similarly, one studies mathematicians’ activities (like setting axioms and writing proofs) in rigorous mathematical terms. Mathematical logic is the typical language used here.

To get a feeling, see Fig. 5; it is meant to be an excerpt from a textbook in metamathematics (or mathematical logic). We notice that terms such as “definition”, “theorem” and “proof” occur multiple times in the figure, and that they designate two different notions that lie at two different levels. The object-level notions—such as proofs that are trees consisting of symbols—designate the activities of mathematicians as our study subject; the meta-level notions—such as proofs written in a natural language—designate the activities of us metamathematicians who observe those mathematicians. Metamathematics has brought about some striking breakthroughs, theoretical and practical—one of them being computers themselves, which arose from the efforts of Turing, von Neumann and others to formalize mathematicians’ capabilities.

Fig. 5 Metamathematics. This is meant as an excerpt from a textbook in mathematical logic that covers Gentzen’s sequent calculus (see, e.g. [15]). Distinguish object-level notions (which are symbolic, i.e. syntactic constructs) from meta-level notions

Metamathematical Transfer

Our research strategy of metamathematical transfer is then explained as follows.

In most of the existing attempts towards heterogeneity and quantity, an individual technique in formal methods is extended, accounting for a single new concern at a time. See Table 1 for examples.

Table 1 Existing approaches to heterogeneity in formal methods. Heterogeneity is achieved in a one-by-one manner, each time demanding substantial theoretical work

In contrast:

  • we parametrize both a theory T, and a new concern e (time, probability, continuous dynamics, etc.), and

  • establish a theory, from a meta-level point of view, on how one can incorporate e in the (object-level) theory T and obtain a new extended (object-level) theory \(T^{e}\).

Here a theory (either on the object-level or the meta-level) refers to a body of definitions, lemmas and theorems.

A schematic overview is in Fig. 6. We first analyze individual examples of extensions of formal methods theories/techniques \(T_{i}\) with additional concerns \(e_{i}\). By identifying some “mathematical essence”—that is, arguments and structures that are common to several of those individual extensions—we formulate a metatheory for such extensions \(T+e\rightarrow T^{e}\). Concretely, the metatheory consists of a general construction \(T+e\rightarrow T^{e}\) that takes an original theory T and an additional concern e as input and yields the extended theory \(T^{e}\) as output. It is also desired that the metatheory ensures some correctness properties of the resulting theory \(T^{e}\), such as soundness of the verification techniques in \(T^{e}\). The general construction \(T+e\rightarrow T^{e}\) would apply to various inputs T and e that are subject to certain conditions, yielding many concrete extensions \(T_j+e_j\rightarrow T_j[e_j]\) for new inputs \(T_j, e_j\).

The resulting theory/technique \(T_j[e_j]\) may look different from any known theory/technique; however, brought by the general construction \(T+e\rightarrow T^{e}\), the extension \(T_j[e_j]\) shares some mathematical essence with the known extensions \(T_i[e_i]\) that inspired the general construction. We may wonder at this stage if we could not have directly come up with the new extension \(T_j[e_j]\) without going through the general construction \(T+e\rightarrow T^{e}\). This is indeed possible, but the general construction often serves as a valuable guide. Another advantage is that the “correctness” of \(T_j[e_j]\) (such as soundness of a verification technique therein) is most of the time automatically guaranteed, since it is already established at the abstract level of \(T+e\rightarrow T^{e}\).

Fig. 6 Metamathematical transfer, where we aim at a general construction (a meta-level theory) that takes an original theory T and an additional concern e as input, and yields the extended theory \(T^{e}\) as output

This metatheoretical approach to the heterogenization of formal methods techniques is made possible by the mathematical languages of category theory and logic—quintessences of modern mathematics’ aspiration for structure, rigor, genericity and abstraction. These languages allow us to “define” T and e as mathematical entities, and to mathematically “construct” the extension \(T^{e}\). This construction \(T+e\rightarrow T^{e}\) (or \((T,e)\mapsto T^{e}\) to be more precise) is nothing but a (meta-level) theory of extending an (object-level) theory T with a new concern e, and the relationship between the last two “theories” is precisely as in metamathematics (Fig. 5).

A general construction \(T+e\rightarrow T^{e}\) has practical benefits: it allows us to adapt existing formal methods techniques to heterogeneous application domains more quickly and comprehensively. This adaptability is all the more important now that the environments in which computers operate are rapidly diversifying. We will see many new concerns e, and a general recipe \(T+e\rightarrow T^{e}\) makes us prepared for them.

Below in Sects. 3.3–3.4 we shall exhibit two examples of such uniform extensions from T and e to \(T^{e}\).

Metamathematical Transfer, Example I: Nonstandard Transfer from Discrete to Continuous/Hybrid

Table 2 A general extension methodology of nonstandard transfer, and its instances

In our papers [44, 56, 78, 79] we introduced and elaborated a scheme called nonstandard transfer (NST in Table 2). It takes an arbitrary discrete formal technique T, and extends it to a technique \(T^\mathtt{dt}\) for continuous and hybrid dynamics, a new concern that is significant in CPS. This extension is syntactically easy: one simply adds to (the logically formalized theory of) T a constant \(\mathtt dt\) that designates an infinitesimal (infinitely small) value.

However, ensuring that the theory \(T^\mathtt{dt}\) is consistent and meaningful is nontrivial. The use of infinitesimals is intuitive not only in our framework but also in formulating various notions in calculus—this was indeed the case in Leibniz’s formulation. Unfortunately, the existence of infinitesimals in the set \(\mathbb {R}\) of reals causes a contradiction. Assume a positive real \(\partial \) is infinitesimal, that is, smaller than any positive real; then \(\partial /2\) is a positive real that is even smaller than \(\partial \), contradicting the assumption. This is why modern calculus employs arguments by \(\varepsilon \) and \(\delta \), and avoids infinitesimals.

It was Robinson’s nonstandard analysis [73] that gave rigorous meaning to infinitesimals. The theory builds on formal logic, and in particular on results from model theory such as ultrafilters and ultraproducts; it extends the set \(\mathbb {R}\) of reals to the set \({}^*\mathbb {R}\) of hyperreals, which additionally contains infinitesimals, infinite numbers, and so on. In Robinson’s nonstandard analysis a positive infinitesimal is a hyperreal that is smaller than any positive standard real. The construction of hyperreals hinges on an ultrafilter, whose existence is usually shown via the axiom of choice.

This way nonstandard analysis offers a consistent theory in which reals and infinitesimals coexist. Even better, Robinson’s nonstandard analysis offers the powerful transfer principle, which states the following: a formula \(\varphi \) is true for reals (\(\mathbb {R}\models \varphi \)) if and only if it is true for hyperreals (\({}^*\mathbb {R}\models \varphi \)). This means that a theorem obtained by usual deductive reasoning (for \(\mathbb {R}\)) is also a theorem in the extended setting of hyperreals (which include infinitesimals). In Robinson’s original theory, the formula \(\varphi \) in the transfer principle is taken from the first-order language of real arithmetic; our series of work [44, 56, 78, 79] can be understood as an attempt to accommodate program logics in nonstandard analysis.

Fig. 7 Continuous dynamics as a program. Here t increases continuously to 1

In [44, 56, 78, 79] we have pursued uniform “hybridization” of discrete deductive verification frameworks (such as the Floyd–Hoare logic [27, 47] and abstract interpretation [20]) so that they also apply to continuous and hybrid dynamics. In these works we add an infinitesimal constant \(\mathtt dt\) to a programming language so that it serves as a modeling language for continuous/hybrid dynamics. See Fig. 7, where the value of t continuously increases towards 1. We remark that the loop is executed infinitely many times—if it terminates within N iterations then this implies \(\mathtt dt\) is not smaller than 1 / N, a contradiction. Therefore, the language \(\mathbf {While}^\mathtt{dt}\)—an imperative language with while loops augmented with \(\mathtt dt\)—is a modeling language rather than a programming language. Using nonstandard analysis, however, we can give rigorous denotational semantics for such programs (specifically the loop is iterated \(1+1/\mathtt{dt}\) times and the final value of t is \(1+\mathtt{dt}\)). Moreover, much like the transfer principle, we show that the same Floyd–Hoare logic works as well as in the discrete setting (cf. Fig. 2). The logic allows us to conclude, for example, that the value of t never exceeds 1.001 for the dynamics in Fig. 7. This theoretical framework of \(\mathbf {While}^\mathtt{dt}\) and the Floyd–Hoare logic is developed in [78], and in [44] we present an automatic theorem prover based on the theoretical framework.
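
For intuition, the following Python sketch runs such a loop with a small but finite stand-in for \(\mathtt{dt}\) (the exact program text of Fig. 7 is our assumption; the guard t <= 1 reproduces the iteration count and final value quoted above):

    # Stand-in for the infinitesimal dt: a small positive float. With an actual
    # infinitesimal the loop runs 1 + 1/dt times and ends with t = 1 + dt, as
    # stated above; with a finite dt we observe the discrete analogue.
    dt = 1e-4
    t = 0.0
    iterations = 0
    while t <= 1:
        t += dt
        iterations += 1
    print(iterations, t)   # 10001 and roughly 1.0001 (up to rounding errors)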

The same workflow of adding \(\mathtt{dt}\) to a programming language and using the same deductive verification technique (which is sound thanks to the transfer principle) has been pursued for other discrete frameworks, as we summarize in Table 2. This way we have established our nonstandard transfer workflow, a general extension scheme \(T+e\rightarrow T^{e}\) where e is fixed to be continuous/hybrid dynamics (encoded by the infinitesimal constant \(\mathtt dt\)) and T is arbitrary, as long as T is formalized in some first-order language. The resulting extension \(T^{e}=T^\mathtt{dt}\) looks quite different from other attempts of accommodating continuous/hybrid dynamics (see Sect. 2.3.1): ODEs do not occur in it, and a discrete deductive framework can be used literally as it is.

Metamathematical Transfer, Example II: from Automata to Coalgebras

In another example of general extension schemes \(T+e\rightarrow T^{e}\), automata—a well-studied model of computation—are generalized to F-coalgebras for various choices of a signature functor F; see Table 3. Specifically, we fix T to be the body of automata-theoretic techniques and leave e arbitrary. Some choices of e allow encoding as a parameter \(F=F_{e}\), and the resulting theory of \(F_{e}\)-coalgebras gives us “the theory of automata extended by e.”

Table 3 A general extension methodology from automata to coalgebras

The mathematical language that allows us to do so is that of category theory, an abstract formalism that originated in algebraic topology but is nowadays used in various places in mathematics [8, 63, 66]. Category theory is centered around arrows instead of equations; see Table 4 for how notions in system theory are categorically formulated as coalgebras.

We shall briefly introduce the basics of coalgebraic modeling now, leaving further details to [50, 74]. Let \(\mathbb {C}\) be a category and \(F:\mathbb {C}\rightarrow \mathbb {C}\) be a functor. An F-coalgebra is a pair \(\left(X,X\xrightarrow {c}FX\right)\) of an object X of \(\mathbb {C}\) and an arrow \(c:X\rightarrow FX\) in \(\mathbb {C}\). For an example we can fix the base category \(\mathbb {C}\) to be \(\mathbb {C}=\mathbf {Sets}\), the category in which objects are sets and arrows are functions between them. We can further fix the signature functor F to be \(F=2\times (\underline{\phantom {n}}\,)^{\Sigma }\) where \(2=\{\mathrm {t{}t},\mathrm {f{}f}\}\) is the set of Boolean truth values and \(\Sigma \) is a fixed alphabet. Then an F-coalgebra is nothing but a pair (Xc) of a set X and a function \(c:X\rightarrow 2\times X^{\Sigma }\). This is a coalgebraic modeling of deterministic automata (whose state spaces are not necessarily finite): X is the set of states and for each state \(x\in X\), the value \(c(x)=(b,\,f)\) tells us if the state x is accepting or not (whether \(b\in 2\) is \(\mathrm {t{}t}\) or \(\mathrm {f{}f}\)), and its successors (\(f(a)\in X\) is the a-successor for each character \(a\in \Sigma \)).
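
As a down-to-earth rendering of this modeling, here is a minimal Python sketch (the concrete automaton, accepting words over {a, b} with an even number of a’s, is a made-up example; the point is that the structure map c packages acceptance and successors exactly as in the definition above):

    from typing import Callable, Tuple

    Sigma = ["a", "b"]
    X = ["even", "odd"]                       # state space X

    # The coalgebra structure map c : X -> 2 x X^Sigma
    def c(x: str) -> Tuple[bool, Callable[[str], str]]:
        accepting = (x == "even")
        def successor(a: str) -> str:
            if a == "a":
                return "odd" if x == "even" else "even"
            return x                          # 'b' leaves the state unchanged
        return accepting, successor

    # The behavior of a state is obtained by repeatedly "unfolding" c.
    def accepts(x0: str, word: str) -> bool:
        x = x0
        for a in word:
            _, successor = c(x)
            x = successor(a)
        return c(x)[0]

    print(accepts("even", "abba"))   # True: two a's
    print(accepts("even", "ab"))     # False: one a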

F-coalgebras, by changing the parameter F, instantiate to various systems. This enables many results for (conventional) automata—which are identified with F-coalgebras for the specific choice \(F=2\times (\underline{\phantom {n}}\,)^{\Sigma }\), as we have seen—to be transferred smoothly to a variety of systems, leading to uniform verification methods that work alike for systems that are quantitative, probabilistic, timed, etc. One of the notable successes is a uniform categorical definition of bisimulation by spans of coalgebra homomorphisms; see [50] for a comprehensive treatment.

Table 4 Notions around system dynamics formulated in categorical terms

This coalgebraic methodology has been actively pursued since the mid-1990s [2, 50, 51, 74]. In this context our unique standpoint is the emphasis on automatic verification algorithms. For example, in [84, 86] we introduced matrix simulations, a probabilistic notion we obtained via a coalgebraic generalization of the conventional simulations of [64]. It is demonstrated in [84, 86] that they allow effective algorithmic search by linear programming. Through this work and others [85, 86] we are now convinced of the following coalgebraic unfolding scheme (Fig. 8):

  • For many formal methods problems, the key is to unfold complex structures and reduce them to (structurally) simpler, well-known problems such as satisfiability (SAT), linear programming (LP) and semidefinite programming (SDP, a well-known class of nonlinear convex optimization problems). For those well-known problems we can utilize known efficient algorithms.

  • Different original problems have different target problems (Boolean to SAT, probabilistic to LP, etc.).

  • However, the unfolding part exhibits structures of the same mathematical essence, and can, therefore, be unified, using coalgebraic abstraction (Fig. 8, the bottom row).

Fig. 8 Coalgebraic unfolding, using [86] as an example. Target problems (SAT, LP, etc.) are different, but the “unfolding” part is coalgebraically unified. Here our view on the decision problem SAT is that it is an optimization problem on the discrete domain \(\{0,1\}\)

Our Tactics

We have described our strategy from a theoretical point of view. Another important point of our project is, on the practical side, to make a real difference in manufacturing, at present and in the future. In this section we thus describe the tactics with which we align our theoretical research with the industrial activity of designing high-quality products.

Technical Research Topics in Formal Methods Towards Heterogeneity

We shall first discuss some research topics towards heterogeneity in formal methods techniques. These topics naturally follow from our previous research activities and from our metamathematical transfer strategy (Sect. 3). Theoretical results in these topics are also expected to play pivotal roles in achieving the project’s goal.

Coalgebraic Model Checking that Unifies Qualitative and Quantitative

In the conventional theory of (discrete and qualitative) model checking, the following automata-theoretic workflow is widely employed. See, e.g. [88] and also Fig. 3.

  • A target system is expressed by a finite-state automaton \(\mathcal {M}\).

  • A specification is expressed by a formula \(\varphi \) in temporal logic like LTL, CTL*, the modal \(\mu \)-calculus, etc.

  • To check if \(\mathcal {M}\) satisfies \(\varphi \) (denoted by \(\mathcal {M}\models \varphi \)) we take the following steps.

    • One first translates the negated formula \(\lnot \varphi \) into an automaton \(\mathcal {A}_{\lnot \varphi }\). The translation is such that an execution trace \(\sigma \) satisfies \(\lnot \varphi \) if and only if \(\sigma \) is accepted by the automaton \(\mathcal {A}_{\lnot \varphi }\).

    • By a suitable product construction one obtains \(\mathcal {M}\otimes \mathcal {A}_{\lnot \varphi }\), an automaton that accepts those execution traces which can arise from \(\mathcal {M}\) and at the same time are accepted by \(\mathcal {A}_{\lnot \varphi }\).

    • Now one conducts the emptiness check for \(\mathcal {M}\otimes \mathcal {A}_{\lnot \varphi }\). If it succeeds (i.e. no traces are accepted by \(\mathcal {M}\otimes \mathcal {A}_{\lnot \varphi }\)), we conclude that there are no execution traces of \(\mathcal {M}\) that satisfy \(\lnot \varphi \), that is, \(\mathcal {M}\models \varphi \).

Notable in the above workflow are the following facts. First, it heavily relies on automata—finite-state representation of dynamics—as one can exploit many known algorithms for their analysis and many known constructions for operating on them (e.g. Boolean operations). Second, to accommodate reactive and nonterminating behaviors and fixed-point specifications on them (such as safety, liveness, recurrence, persistence), the automata in the workflow must operate on infinite words (as opposed to finite words, e.g. in [49]). They consequently use seemingly complicated acceptance conditions like the Büchi and parity ones (see e.g. [37, 88, 92]). Lastly, in this verification workflow and in others (e.g. for synthesis, Fig. 3) the original problem is eventually reduced to emptiness check, a problem that allows an efficient algorithmic solution.
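
As a toy illustration, the following Python sketch carries out this workflow for finite traces, where plain reachability suffices (the Büchi/parity acceptance conditions mentioned above are deliberately left out; the system \(\mathcal {M}\), the property “no ack occurs before the first req” and its negation automaton are all made up):

    from collections import deque

    # System model M: a transition system whose finite traces are its behaviors.
    M_transitions = {"s0": [("req", "s1")], "s1": [("ack", "s0")]}
    M_init = "s0"

    # A_neg_phi: accepts exactly the bad traces (an ack before any req).
    A_transitions = {"q0": {"req": "q_ok", "ack": "q_bad"},
                     "q_ok": {"req": "q_ok", "ack": "q_ok"},
                     "q_bad": {"req": "q_bad", "ack": "q_bad"}}
    A_init, A_accepting = "q0", {"q_bad"}

    def model_check() -> bool:
        # Product construction M (x) A_neg_phi, explored on the fly,
        # followed by the emptiness check.
        queue, visited = deque([(M_init, A_init)]), {(M_init, A_init)}
        while queue:
            s, q = queue.popleft()
            if q in A_accepting:
                return False          # a counterexample trace exists
            for label, s2 in M_transitions.get(s, []):
                q2 = A_transitions[q][label]
                if (s2, q2) not in visited:
                    visited.add((s2, q2))
                    queue.append((s2, q2))
        return True                   # product language empty: M satisfies phi

    print(model_check())   # True for this toy M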

Automata-theoretic techniques in formal methods (such as the above example) have been energetically extended to quantitative settings—especially probabilistic ones. This is as we discussed in Sect. 2.3.2. In the probabilistic version of the above automata-theoretic workflow, the greatest difference is that the emptiness check, to which the original problem is reduced, is replaced by the calculation of acceptance probabilities. The latter is further reduced to the calculation of reachability probabilities via notions like bottom strongly connected components, and reachability probabilities are typically computed by linear programming (LP), for which many efficient solvers are available (see, e.g. [9]).

From this trend in formal methods, and also from our metamathematical transfer strategy (especially its instance of coalgebraic unfolding, see Sect. 3.4), a natural research question arises: to extract the essence of automata-theoretic model checking that underlies both the qualitative and the quantitative settings. We aim at formulating such common structures in categorical (especially coalgebraic) terms.

In this direction we have obtained some promising preliminary observations. In [43] we presented the notion of lattice-theoretic progress measure, a general notion of witness for nested and alternating fixed-point specifications, and we applied it to coalgebraic model checking. The notion generalizes invariants for greatest fixed-point specifications (like safety) and ranking functions for least fixed-point specifications (like liveness and termination). It also generalizes Jurdzinski’s combinatorial notion of progress measure [53] for algorithmic solution of parity games; based on general complete lattices, our generalized notion can also be employed in quantitative settings. In [19] we present preliminary results towards coalgebraic linear-time model checking that encompasses quantitative settings.

Quantitative Semantics for Enhanced Expressivity

Another research direction on quantitative logics that we wish to pursue is quantitative semantics for enhanced expressivity.

One example of such semantics is temporal logics with discounting [4, 5, 70], where events in the farther future carry smaller significance. For example, for the LTL specification \(\mathsf {F}p\) (“p eventually becomes true”) we have the following difference:

  • in the usual Boolean semantics the formula is true no matter how long it takes before p is true for the first time;

  • in contrast, in the quantitative discounted semantics of [5], the formula’s truth value is \(1/2^{i}\) in case p is true for the first time at time i.

One can argue that the latter semantics is more expressive and potentially more faithful to a designer’s intention.
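
A minimal sketch of this discounted semantics on finite traces (the discount factor 1/2 follows the example above; the trace and the atomic predicate are made up):

    # Discounted semantics of F p on a finite trace: (1/2)^i where i is the
    # first position at which p holds, and 0 if p never holds.
    def discounted_eventually(trace, p, discount: float = 0.5) -> float:
        for i, state in enumerate(trace):
            if p(state):
                return discount ** i
        return 0.0

    trace = ["idle", "idle", "idle", "done", "idle"]
    print(discounted_eventually(trace, lambda s: s == "done"))   # 0.125 = (1/2)**3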

Another example of such quantitative semantics is the robustness semantics of temporal logics for continuous-time signals [3, 24, 26]. There the truth value of a formula \(\varphi \) is a real number that designates how robustly the formula is satisfied. This theoretical idea has opened up the field of falsification by optimization, a scalable methodology that attracts attention in quality assurance of CPS (see Fig. 10 and the discussion later in Sect. 4.2.1). Specifically, the original work [26] addressed space/vertical robustness (e.g. the bigger the value of x, the more robustly the formula \(x>0\) is satisfied); the work [24] introduced time/horizontal robustness (e.g. a deadline specification is more robustly satisfied if the desired event occurs well in advance). The work [3] proposes to combine space and time robustness by suitable integration.
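
To fix intuition, here is a minimal Python sketch of space robustness on discretely sampled signals (only the always/eventually operators over atomic predicates x > c; the signal is made up, and this is an illustration rather than the full semantics of [26]):

    import numpy as np

    # Space robustness: the robustness of "x > c" at a sample is x - c;
    # "always" takes the minimum over time, "eventually" takes the maximum.
    def rob_atomic(signal: np.ndarray, c: float) -> np.ndarray:
        return signal - c

    def rob_always(rob: np.ndarray) -> float:
        return float(np.min(rob))

    def rob_eventually(rob: np.ndarray) -> float:
        return float(np.max(rob))

    t = np.linspace(0.0, 10.0, 1001)
    x = 2.0 + np.sin(t)                        # a made-up signal

    print(rob_always(rob_atomic(x, 0.0)))      # about 1.0: "always x > 0" holds robustly
    print(rob_eventually(rob_atomic(x, 2.5)))  # about 0.5: "eventually x > 2.5" holds
    print(rob_always(rob_atomic(x, 1.5)))      # about -0.5: "always x > 1.5" is violated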

We wish to further pursue the study of quantitative semantics of specification logics. We have to say the role of metatheories in this direction is less clear than in coalgebraic model checking—perhaps the flexibility in the choice of truth value domain in coalgebraic logics can be exploited (see e.g. [40, 43, 46]). In any case specification logics and their expressivity will be central in our work towards quality assurance of CPS.

Simulation and Bisimulation Notions

The notions of bisimulation and bisimilarity were originally introduced as a definition of equivalence between concurrent processes [68, 71]; they require two processes to be able to mimic each other’s actions and, moreover, to be able to continue doing so. Their one-sided variations are the notions of simulation and similarity, in which one process is required to mimic the other, but not conversely.

These notions of (bi)simulation have been extensively studied and used over years. A notable use of them is as a (sound but not necessarily complete) proof method. Consider nondeterministic finite automata (NFA), for example; there are many different possible definitions of their “behavior”, as organized in the so-called linear time–branching time spectrum [35]. A standard definition (language equivalence) is given by accepted languages; another standard one (bisimilarity), given by bisimulation, is strictly finer than language equivalence since it also takes internal branching structure into account.

Here one possible way to look at the situation is that one can use bisimulation as a “proof method” for language equivalence. A bisimulation is a binary relation between states subject to suitable “mimicking” conditions; therefore, finding a bisimulation witnesses language equivalence. The method is sound (since bisimilarity is finer than language equivalence), although it is not complete (since there are processes that are language equivalent but not bisimilar). Bisimulation as a proof method is often more computationally tractable, too, since its mimicking conditions are formulated in a local, one-step manner. This is in contrast to language equivalence, a global notion that deals with unboundedly long words and thus arbitrarily many steps.
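
The local, one-step nature of these mimicking conditions is easy to see in code. The following minimal sketch checks whether a given relation is a simulation (the one-sided notion, for brevity) between two made-up labelled transition systems:

    # R is a simulation if whenever (x, y) in R and x --a--> x2, there is some
    # y --a--> y2 with (x2, y2) in R.
    def is_simulation(R, trans1, trans2) -> bool:
        for (x, y) in R:
            for (a, x2) in trans1.get(x, []):
                if not any(a == b and (x2, y2) in R
                           for (b, y2) in trans2.get(y, [])):
                    return False
        return True

    # Toy systems: both alternate "work"/"rest"; the second has an extra branch.
    T1 = {"p0": [("work", "p1")], "p1": [("rest", "p0")]}
    T2 = {"q0": [("work", "q1"), ("work", "q2")],
          "q1": [("rest", "q0")], "q2": [("rest", "q0")]}

    R = {("p0", "q0"), ("p1", "q1")}
    print(is_simulation(R, T1, T2))   # True: T2 can mimic every step of T1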

Bisimilarity as a proof method for language equivalence may not sound too appealing, since language equivalence for NFA is decidable.Footnote 4 The situation is different when one moves to quantitative settings. For a probabilistic variant of NFA, language inclusion (i.e. the one-way variant of language equivalence) is known to be undecidable [12]; therefore, one needs to rely on incomplete proof methods, among which is simulation (i.e. one-way bisimulation). Our contribution in [86] is to exploit our coalgebraic characterization of simulation [39] and formulate quantitative simulation in terms of matrices. The latter notion of quantitative simulation allows efficient search by linear programming; see Fig. 8.

Use of (bi)simulation has been advocated in the context of CPS and more specifically in hybrid systems, too. In [33] the notion of approximate bisimulation—in which behavior discrepancy is tolerated up to a prescribed threshold \(\varepsilon \)—is introduced, and it has been extensively studied and applied since. One of the main applications is discrete abstraction of continuous dynamics: via an approximate bisimulation one establishes that a certain discrete finite-state system is an adequate abstraction of an original continuous dynamics. The finite-state abstraction can then be used for various purposes such as verification and controller synthesis (see, e.g. [34]).

In our ERATO MMSD project we will pursue both theoretical studies and practical applications of various (bi)simulation notions. For one, in the theoretical direction, notions for hybrid dynamics like approximate (bi)simulation have not yet been subject to categorical/coalgebraic studiesFootnote 5 that would help us organize different (bi)simulation notions and also formulate their relationship to specification logics. For another, in the practical direction, we believe the role of (bi)simulation as an incomplete but tractable proof method is even more important for CPS and hybrid dynamics, where many problems are far more difficult than in conventional discrete settings. For hybrid automata even the basic problem of reachability is undecidable [6]; this means the automata-theoretic model checking workflow in Sect. 4.1.1 is unlikely to work. Therefore, we will be relying more often on incomplete proof methods by various (bi)simulation notions.Footnote 6

Compositionality

One of the principles advocated in formal methods is compositionality: it is desirable that the analysis of a large system can be conducted in a bottom-up way, based on the analysis of its constituent parts. Such a divide-and-conquer approach not only makes the whole analysis task tractable, but also allows modular analysis in which one can reuse part of the analysis when the system is partially updated. This modularity is particularly important in our project, because we aim at assisting existing design processes in which many components are black boxes (see Sect. 4.2 later).

Let us look at a simple example. In the Floyd–Hoare logic one derives Hoare triples \(\{A\}\,c\,\{B\}\): it means “after any successful execution of the program c under the precondition A, the postcondition B is guaranteed”. In Fig. 9 we find two rules of our current interest. The (Seq) rule is about the sequential composition \(c_{1};c_{2}\) of two programs \(c_{1}\) and \(c_{2}\): if \(c_{1}\) guarantees C assuming A and \(c_{2}\) guarantees B assuming C, then \(c_{1};c_{2}\) guarantees B assuming A. The (Seq) rule is thus an exemplar of compositional reasoning. The (Conseq) rule says that one can strengthen preconditions and weaken postconditions.
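
In standard textbook notation the two rules read as follows (a rendering that we assume coincides with Fig. 9):

    \[
      \frac{\{A\}\,c_{1}\,\{C\}\qquad \{C\}\,c_{2}\,\{B\}}
           {\{A\}\,c_{1};c_{2}\,\{B\}}\;(\mathrm{Seq})
      \qquad
      \frac{A\supset A'\qquad \{A'\}\,c\,\{B'\}\qquad B'\supset B}
           {\{A\}\,c\,\{B\}}\;(\mathrm{Conseq})
    \]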

Fig. 9 Two rules from the Floyd–Hoare logic

For example, the following is known (see, e.g. [93]): in the setting of the usual imperative programming language and pre- and postconditions expressed in first-order logic, the weakest precondition \({wp}\llbracket c,B \rrbracket \) for a given program c and a postcondition B can be expressed in first-order logic. The weakest precondition \( wp \llbracket c,B \rrbracket \) is defined to be such a formula that (1) the Hoare triple \(\bigl \{ wp \llbracket c,B \rrbracket \bigr \}\,c\,\{B\}\) is valid; (2) it is the weakest among such, that is, \(\{A\}\,c\,\{B\}\) implies that the formula \(A\supset wp \llbracket c,B \rrbracket \) is valid.
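
For the purely discrete fragment, computing weakest preconditions is indeed mechanical. The following minimal sketch handles assignments and sequencing only (no loops, no ODEs), using sympy merely for symbolic substitution; the program and postcondition are made up:

    import sympy as sp

    # wp[[x := e, B]] = B[e/x]        wp[[c1; c2, B]] = wp[[c1, wp[[c2, B]]]]
    x, y = sp.symbols("x y")

    def wp(program, post):
        """program: a list of (variable, expression) assignments, run in order."""
        pre = post
        for var, expr in reversed(program):
            pre = pre.subs(var, expr)   # substitution implements wp of ':='
        return pre

    # Toy program: x := x + 1; y := 2 * x, with postcondition y > 4.
    prog = [(x, x + 1), (y, 2 * x)]
    print(wp(prog, sp.Gt(y, 4)))        # 2*(x + 1) > 4, i.e. x > 1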

Thus, one may wonder if it is ever necessary to invoke the (Seq) rule in Fig. 9 for a C other than \( wp \llbracket c_{2},B \rrbracket \). We need the second assumption \(\{C\}\,c_{2}\,\{B\}\) to be valid, which means \(C\supset wp \llbracket c_{2},B \rrbracket \) is valid. Replacing C with the weaker assertion \( wp \llbracket c_{2},B \rrbracket \), there is a better chance of establishing \(\{A\}\,c_{1}\,\{ wp \llbracket c_{2},B \rrbracket \}\) than the original first assumption \(\{A\}\,c_{1}\,\{C\}\).Footnote 7

In the case of hybrid dynamics there is little hope of symbolically expressing weakest preconditions. For example, assume that the program c contains continuous dynamics governed by an ODE. Then to symbolically express the weakest precondition \( wp \llbracket c,B \rrbracket \) in general, we would need a closed-form solution for the ODE, which is not available most of the time. This means, in the application of the (Seq) rule, we need careful “negotiation” between the verification task of \(\{A\}\,c_{1}\,\{\underline{\phantom {n}}\,\}\) and that of \(\{\underline{\phantom {n}}\,\}\,c_{2}\,\{B\}\), so that we come up with a good “contract” C. The choice of C should balance the difficulty of establishing the two assumptions.

In the study of CPS and hybrid systems, pursuit of compositionality via a certain form of “contract” is nowadays widespread [29, 57, 90, 94]. One use of contracts typical of CPS is for separating safety from efficiency: the system architecture is hierarchical with the higher, symbolic level addressing safety and the lower, numeric level addressing efficiency; the higher level determines high-level policies such as discrete actions (as in maneuver automata [29]) or so-called safety envelopes (as in [90]); and the lower level synthesizes a controller that complies with the high-level policies and at the same time aims at efficiency. One possible view is that the high-level policies are contracts through which low-level controllers collectively achieve safety of the whole system.

The study of such hierarchical system architectures from the computer science viewpoint is a topic of our interest. Mathematically speaking, system composition is algebraic—the field of process algebra in theoretical computer science [1] is devoted to algebraic composition of state-based dynamics. The relationship to modal logics as specification languages is well studied, too. Some of these results there have seen their categorical generalization [42, 58, 81], where system dynamics and their algebraic composition together form a categorical notion of bialgebra, and modal logics are nicely accommodated in the categorical picture via Stone-like dualities. We shall start with adapting these existing results to CPS examples, also using our results in the topics we discussed in Sects. 4.1.14.1.3.

Later in the project we shall also study the security of CPS. Security properties occupy a special status in formal methods since they are notoriously non-compositional. We shall base ourselves on recent results on compositional security, including [22].

Collaboration and Integration with Control Theory and Robotics

The existing efforts towards extension of formal methods (FM) to CPS have been centered around their integration with control theory (CT) and control engineering—as we described in Sect. 2.3. A number of techniques from the two fields have proved to share the same mathematical essence, and a number of techniques have been exported from one field to the other. The following examples are known: the correspondence between ranking functions (FM) and Lyapunov functions (CT) (see, e.g. [17]); that between invariants (FM) and barrier certificates (CT); notions of robustness exported from CT to FM (see, e.g. [26]); and the use of (bi)simulation (as a witness for behavioral inclusion/equivalence) and modal logics (as specification languages) exported from FM to CT (see, e.g. [67]). Another notable trend is the use of numeric optimization algorithms in FM: numeric optimization has long been one of the basic tools in CT (see, e.g. [80]), and nowadays many problems in FM are solved by reduction to continuous optimization problems, too.
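
To illustrate the first correspondence, here is a small sketch (ours, built on standard textbook examples): the Lyapunov function \(V(x)=x^{2}\) certifies stability of \(\dot{x}=-x\) in the same way a ranking function certifies termination of a loop—both are quantities that decrease along the dynamics.

import sympy as sp

x = sp.symbols('x', real=True)

# Continuous side (CT): V(x) = x^2 for the ODE  dx/dt = -x.
V, f = x**2, -x
lie_derivative = sp.diff(V, x) * f        # dV/dt along the flow, here -2*x^2
print(lie_derivative.is_nonpositive)      # True: V never increases along the flow

# Discrete side (FM): r(n) = n for the loop  "while n > 0: n := n - 1".
n = sp.symbols('n', integer=True, positive=True)
print(sp.simplify((n - 1) < n))           # True: the rank strictly decreases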

We will naturally follow the same strategy of close integration between formal methods and control theory/control engineering. Here our strategy of metatheoretical transfer (Sect. 3) has the potential to be advantageous. Very often, the identification of commonalities between FM and CT techniques is done at the level of intuition; this means that, in the actual transfer of a theory from one field to the other, one needs to build a new theory from scratch, contemplating definitions and statements so that the new theory is consistent and useful. Having the existing theory in the other field as a guide is a big advantage, but the actual task of transfer is far from mechanical or trivial. This is in contrast with our metatheoretical methodology in which the (object-level) theory is formalized as a mathematical entity. Once we establish a (meta-level) theory for transfer, the task of transfer becomes mechanical, and the “correctness” of the resulting theory is trivially guaranteed. We have seen examples of such transfer in Sects. 3.3–3.4.

Identification of Suitable Target Problems: We Start Small, Think Big

After going through the technical research topics in Sect. 4.1, we are now convinced of the following point: with CPS, we should not expect that many familiar problems (such as verification and synthesis) can be solved as efficiently as in the conventional discrete settings. Once we have continuous dynamics in a system, the following happen: (1) automata-theoretic techniques hardly work because reachability in hybrid automata is undecidable; (2) deductive techniques typically call for closed-form solutions of ODEs, which are a rarity. In the shift from qualitative to quantitative analysis (probability, time, etc.), linear programming (LP) serves as an efficient counterpart of graph algorithms from the discrete, qualitative setting; this looks like a fortunate exception.
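
As a concrete illustration of this fortunate exception, the following toy sketch (ours; the two-state Markov chain is made up for the example) computes reachability probabilities by LP, the quantitative analogue of graph reachability: from s0 the chain moves to s1 with probability 0.5 and to an absorbing sink otherwise; from s1 it reaches the target with probability 0.5 and returns to s0 otherwise.

# The reachability probabilities are the least solution of
#   p(s0) = 0.5 * p(s1)
#   p(s1) = 0.5 * p(s0) + 0.5
# and can be obtained by minimizing p(s0) + p(s1) subject to p >= (right-hand sides).
from scipy.optimize import linprog

c = [1.0, 1.0]                       # minimize p(s0) + p(s1)
A_ub = [[-1.0, 0.5],                 # -p(s0) + 0.5*p(s1) <= 0
        [0.5, -1.0]]                 #  0.5*p(s0) - p(s1) <= -0.5
b_ub = [0.0, -0.5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print(res.x)                         # approx. [0.333..., 0.666...]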

Another challenge that we face in the heterogenization of formal methods techniques is the scarcity of formal specifications. This has already been identified as a major challenge for software systems, and various techniques for aiding formal specification have been developed in the software engineering community (see, e.g. [59]). Unfortunately the situation seems to be far worse in CPS. Formulating requirements and specifications for CPS calls for a lot of domain expertise in their physical components such as car engines; reflecting such domain expertise in a logical specification formalism is a much harder task than formalizing software designers’ intentions. In addition, in view of the complexity of modern industry products and the size of their design processes, it seems impossible to have a comprehensive formal specification of a product that would underlie a totally formalized design process.

Our ERATO MMSD project emphasizes application of its research results in industry—we wish to make a real difference in real-world applications, which in turn would stimulate further theoretical work too. This forces us to take a pragmatic approach to formal methods applied to CPS. We do not take an abrupt approach in which we would insist that every step in CPS design be formal, starting from a comprehensive formal specification. Instead we start small: we first identify small portions of existing design processes of industry products in which ideas from formal methods can be helpful. The resulting practical benefits might look small, say the cost of design is reduced by one percent. This seemingly marginal cost cut can add up to a big total, however, if one imagines how many cars are built in a year. Moreover, accumulation of small “success stories” will give further momentum to formal methods applied to CPS, and hopefully lead to more systematic efforts towards formal specification in CPS design.

In our pragmatic approach to application of formal methods to CPS, there are two challenges as we discussed in earlier paragraphs, namely infeasibility of traditional formal methods goals (like verification and synthesis), and scarcity of formal specification. We shall cope with them by the following two principles: testing rather than verification (described below) and compositionality (Sect. 4.1.4).

Testing rather than Verification

Testing is currently a principal quality assurance means for software, and there has been a lot of work towards enhanced efficiency and confidence (including [65]). For CPS, testing is a principal means, too. Since formal verification (i.e. mathematical proofs of correctness under every possible input) is hard for CPS, as we discussed, we naturally turn to improving testing for CPS. At the same time, testing for CPS seems to be an area that is still under development, compared to testing for software, which is a mature area with a large body of literature and efficient tools.

Another advantage of testing rather than verification, at the current early stage of deployment of formal methods in CPS design, is that a bug is more “useful” than a correctness proof. Appreciating a correctness proof calls for a background in mathematics and logic, as well as an understanding of all the assumptions that the proof is based on. For this reason it is often hard to convince real-world engineers of the benefits of formal verification. In contrast, a bug speaks for itself—it is something which a system designer can immediately work on.

We believe that the study of formal methods can bring many benefits to testing too: even from a technique for verification (not for testing), we can extract ideas that can be used to make testing more efficient. For example, in [91] we successfully employed a few constructions for timed automata—originally devised for verification—for the purpose of monitoring real-time behaviors (monitoring can be understood as an oracle for testing). Our orientation towards abstraction (Sect. 3) helps, too: we always aim to identify the mathematical essence in various formal methods techniques; such essence can then be used for other purposes (like testing), independently of its original goal.

In this direction of testing we shall discuss three concrete research topics. One is monitoring of real-time behaviors, which we already mentioned. In monitoring, given a log of events and a specification, one seeks all the segments of the log that match the specification. This may sound like a much simpler task than verification and synthesis—the latter deal with all possible inputs while monitoring deals with a single log. However, efficient monitoring of a number of big logs in embedded applications is a nontrivial matter. Monitoring is an important building block of testing frameworks, too, since we need to judge whether an execution of a system is erroneous or not. Recent works on real-time monitoring include [55, 82, 91].
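
As an illustration of the task—not of the algorithms of [55, 82, 91], which are far more sophisticated—here is a naive sketch (ours) that monitors the simple timed pattern “the value stays above a threshold for at least d time units” over a timestamped log.

def monitor(log, threshold, d):
    """log: a list of (time, value) pairs, sorted by time."""
    matches, start = [], None
    for t, v in log:
        if v > threshold and start is None:
            start = t                          # a candidate segment opens
        elif v <= threshold and start is not None:
            if t - start >= d:
                matches.append((start, t))     # long enough: report it
            start = None
    if start is not None and log[-1][0] - start >= d:
        matches.append((start, log[-1][0]))    # segment still open at the end
    return matches

log = [(0, 80), (1, 120), (2, 130), (3, 90), (4, 110), (5, 115), (6, 118), (7, 95)]
print(monitor(log, threshold=100, d=2))        # [(1, 3), (4, 7)]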

Another concrete topic in testing is search-based testing, and more specifically falsification by optimization. The work [26] pioneered this topic: its quantitative robustness semantics (Sect. 4.1.2) allows us to find an error input (a “bug”) by hill-climbing-style stochastic optimization (Fig. 10). With the recent surge of advances in machine learning and stochastic optimization techniques, falsification by optimization is attracting much attention as a scalable method for CPS quality assurance. We shall continue working on this topic, exploiting our expertise in logic (as in [3]) and also collaborating with machine learning and optimization techniques. A work related in essence is [31], on satisfiability checking of assertions that involve floating-point operations.

Fig. 10 Robustness semantics and falsification by hill-climbing
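
The following toy sketch (ours) conveys the idea of Fig. 10: a made-up one-parameter vehicle model is simulated, the robustness of the specification “the speed stays below 120” is computed as the margin to violation, and a stochastic optimizer (here SciPy’s differential evolution, standing in for the hill-climbing optimizers used in falsification tools) searches for an input with negative robustness, i.e. a counterexample.

import numpy as np
from scipy.optimize import differential_evolution

def simulate(throttle, steps=200, dt=0.1):
    """A toy discrete-time vehicle model: the speed trace under constant throttle."""
    speed, trace = 0.0, []
    for _ in range(steps):
        speed += dt * (50.0 * throttle - 0.3 * speed)   # acceleration vs. drag
        trace.append(speed)
    return np.array(trace)

def robustness(params):
    # Quantitative semantics of  G(speed < 120): the distance to violation.
    trace = simulate(params[0])
    return 120.0 - trace.max()

result = differential_evolution(robustness, bounds=[(0.0, 1.0)], seed=0)
print(result.x, result.fun)      # robustness < 0 means a falsifying input was found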

Yet another concrete topic related to testing is statistical model checking. It is similar to testing—one samples some input values and observes whether the system operates correctly—but in statistical model checking a greater emphasis is placed on estimating the likelihood of errors and on the statistical confidence of that estimate. There one relies on statistical arguments such as Chernoff-like bounds and Bayesian inference (see, e.g. [62]).
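
For a flavor of such arguments, the following sketch (ours) estimates an error probability by sampling, with the sample size chosen from the Chernoff–Hoeffding bound \(N \ge \ln (2/\delta )/(2\varepsilon ^{2})\), which guarantees that the estimate is within \(\varepsilon \) of the true probability with confidence at least \(1-\delta \); the “system under test” here is a stand-in.

import math, random

def sample_size(eps, delta):
    # Hoeffding: P(|estimate - p| >= eps) <= 2*exp(-2*N*eps^2) <= delta.
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_error_probability(run_once, eps=0.01, delta=0.01):
    n = sample_size(eps, delta)
    failures = sum(1 for _ in range(n) if not run_once())
    return failures / n, n

# A stand-in "system under test" that misbehaves with probability 0.03.
random.seed(0)
print(estimate_error_probability(lambda: random.random() > 0.03))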

Collaboration with Machine/Human Intelligence

One rough way of looking at various problems in quality assurance of systems is that they are search problems: in testing one searches for bugs, in verification one searches for proofs and in synthesis one searches for a suitable parameter or controller. Other search problems that we have discussed include the following: search for (bi)simulations as witnesses for behavior inclusion/equivalence (Sect. 4.1.3) and search for a mediating condition (“contract”) in compositional reasoning (Sect. 4.1.4).

Automata-theoretic approaches (sketched in Sect. 4.1.1) can be understood as reduction of various problems to the reachability problem, i.e. searching for a path in a graph to a certain set of states. Being one of the structurally simplest search problems, the reachability problem allows for efficient algorithmic solutions by exhaustive search. Nevertheless, sometimes the state space is too large for exhaustive search already in discrete settings where we deal with software systems. This is the case when a state of the system in question is determined by a number of variables—the number of states is exponential in the number of variables. This is the phenomenon called the curse of dimensionality.
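
For concreteness, the following minimal sketch (ours) does exactly this kind of exhaustive search—breadth-first reachability over the \(2^{n}\) states of n boolean variables—and thereby also shows where it stops scaling.

from collections import deque

def reachable(step, init, target):
    """step: state -> iterable of successor states; states are n-bit tuples."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if s == target:
            return True
        for t in step(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Toy system: any single bit may be flipped at each step.  The search space has
# 2**n states, which is the exponential blow-up (curse of dimensionality) above.
n = 16
step = lambda s: (s[:i] + (1 - s[i],) + s[i+1:] for i in range(n))
print(reachable(step, (0,) * n, (1,) * n))   # True, after exploring up to 2**16 states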

Continuous dynamics in CPS poses an obvious challenge: a state space is typically (uncountably) infinite, thus exhaustive search is no longer an option.

All these call for efficient search methods in a space that is too big for exhaustive search. We identify this as one of the key challenges affecting the efficiency of the various techniques that we shall be developing. In our project we will in particular try exploiting the following: (1) numeric and statistical methods from mathematical optimization and machine learning and (2) human expertise, such as that of engineers who have worked on car engines for years. These two shall collectively be called machine/human intelligence.

Numeric and Statistical Methods

Numeric methods from mathematical optimization (such as the interior point method for convex optimization) and statistical methods from machine learning (such as simulated annealing, Monte Carlo sampling, Gaussian process learning, and so on) are widely known countermeasures for the curse of dimensionality. Their use for formal methods for CPS has been pursued, too: falsification by optimization (Sect. 4.2.1) employs statistical optimization, as we already saw. Reduction of problems to convex optimization is increasingly popular too [17, 21, 76]. This reflects a common workflow in control engineering, where various control problems are reduced to mixed integer optimization problems (see, e.g. [80]).

In our project we will aim at systematic integration of these methods in formal methods techniques. There are two issues here: numeric errors and statistical confidence.

Numeric Errors Numeric errors are inherent in numeric and stochastic methods. In a typical workflow we reduce the original problem P to an optimization problem \(P'\), and the latter is solved, say, by the interior point method. The numeric solution x for \(P'\) is subject to numeric errors, and it can happen that x is not a valid solution for P. In many scenarios P is a symbolic problem (like verification) in which such errors are not tolerated.

Even if x is a valid solution of P, there is another question, namely the “quality” of a solution. Assume P is a problem whose solution is a polynomial inequality, and that both of the inequalities \(1.0013 x^{2} + 2.208 \times 10^{-9} xy > 0\) and \(x^{2} >0\) are solutions. One can say that the latter solution \(x^{2} >0\) is more desirable—in terms of human understanding and also for the purpose of symbolic reasoning that uses a solution of P. However, when we rely on numeric solvers, what we get more often looks like the former, complicated solution. This challenge—which our colleague Takamasa Okudono calls the problem of interpretability, borrowing the term from machine learning—was identified in [75] and has been raised several times since, but without a systematic solution to the best of our knowledge. We shall strive for a systematic solution that combines (1) checking the validity of numeric solutions and (2) perturbing numeric solutions, if necessary, for validity and better interpretability.
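
The following sketch (ours) illustrates the “perturb, then re-check” workflow on the example above: the coefficients of a numeric candidate are rounded to nearby simple rationals, small terms are dropped, and the cleaned-up candidate is then to be re-verified exactly (the check is_solution is hypothetical and stands for whatever exact criterion the original problem P comes with).

import sympy as sp

x, y = sp.symbols('x y', real=True)
numeric_candidate = 1.0013 * x**2 + 2.208e-9 * x * y    # as returned by a numeric solver

def simplify_coefficients(expr, tol=1e-2):
    """Replace each coefficient by a simple rational within tolerance tol;
    coefficients below tol are dropped altogether."""
    cleaned = sp.S.Zero
    for monomial, coeff in expr.as_poly(x, y).terms():
        c = float(coeff)
        if abs(c) >= tol:
            cleaned += sp.nsimplify(c, tolerance=tol, rational=True) \
                       * x**monomial[0] * y**monomial[1]
    return cleaned

candidate = simplify_coefficients(numeric_candidate)
print(candidate)            # x**2: the tiny cross term is dropped, 1.0013 rounds to 1
# If is_solution(candidate) holds (exact check), accept it; otherwise keep the numeric one.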

Statistical confidence This is a problem inherent in heuristic methods like stochastic sampling: if a solution is found then it is indeed a solution; if no solution is found, however, this does not guarantee the absence of solutions. One thing we can do here is to polish the method so that it finds solutions more often; the other is statistical inference that tells us, based on the observed absence of solutions, the confidence with which we can conclude that no solution exists. See [23] for an example of work in this direction. We shall combine results and observations from statistical model checking (Sect. 4.2.1) to systematically study confidence bounds for statistical methods in formal methods for CPS.
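
For the simplest case of plain random sampling (not hill-climbing), the kind of statement we mean is the following small sketch (ours): if N independent random samples found no solution, then with confidence \(1-\delta \) the probability that a single random sample hits a solution is at most \(\ln (1/\delta )/N\), since otherwise the probability of observing no hit would be \((1-p)^{N}\le e^{-pN}<\delta \).

import math

def upper_bound_on_hit_probability(num_samples, delta=0.05):
    # With confidence 1 - delta, a single random sample hits a solution with
    # probability at most ln(1/delta) / num_samples.
    return math.log(1.0 / delta) / num_samples

print(upper_bound_on_hit_probability(10_000))   # about 3.0e-4, with 95% confidence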

Human Expertise

Numeric and statistical methods can be understood as putting a suitable bias on otherwise exhaustive and blind search. The same role can be played by human specialists, exploiting their domain expertise. This makes even more sense in the CPS context, where systems are currently designed by experts with ample experience and deep knowledge (although these are stored and communicated informally). We shall, therefore, pursue formal methods frameworks that have humans in the loop—not only in the sense that their target systems encompass humans (drivers and pedestrians in autonomous driving, for example), but also in the sense that, for analyses such as testing, specification, verification and synthesis, the frameworks interact with human specialists and exploit their expertise. In doing so, a key issue is the user interface (UI); see [69] for an example where spreadsheets—a formalism familiar to a wide audience including CPS engineers—are used as a modeling formalism.

Formal Methods “for” Intelligence

So far we have discussed how we will be using “intelligent bias” in search problems in formal methods; its source can be numeric and statistical algorithms, or human specialists. Here we discuss the other direction, that is, how formal methods can be used to help intelligence, especially machine learning (ML) and artificial intelligence (AI).

In the recent rapid advancement of autonomous driving technology, a notable trend is to let ML/AI learn how to drive, using, e.g. deep neural networks. An obvious challenge here is the safety guarantee of such learned driving algorithms: learning algorithms like neural networks are often black boxes, in which case there is no telling what a car will do in safety-critical corner cases.

In our ERATO MMSD project we will pursue formal methods techniques that address correctness guarantees of ML/AI algorithms. One direction of doing so is to analyze the ML/AI algorithms themselves, for which the theory of probabilistic programming languages (see, e.g. [36]) and their categorical abstraction (see, e.g. [40, 46]) can be useful. Another possible direction is to regard ML/AI algorithms as black-box components of a whole system, and to try to guarantee that the whole system is safe no matter how those black-box components behave. This hierarchical view on systems is much like what we discussed in Sect. 4.1.4.
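
A minimal sketch (ours, with made-up names and a deliberately crude one-step safety check) of the second direction: whatever a black-box controller proposes, a simple guard overrides it whenever the proposed action could leave a safety envelope, so that the overall safety argument does not depend on the black box at all.

def shielded_controller(blackbox, state, dt=0.1, min_gap=5.0, max_brake=8.0):
    proposed = blackbox(state)                       # opaque ML/AI suggestion
    gap, speed = state['gap'], state['speed']
    # Crude worst-case one-step check: would the gap still allow stopping in time?
    next_gap = gap - dt * speed
    stopping_distance = speed ** 2 / (2 * max_brake)
    if next_gap - stopping_distance < min_gap:
        return -max_brake                            # safe fallback: full braking
    return proposed

# Usage with an arbitrary (even adversarial) black box:
reckless = lambda state: +5.0                        # always accelerates
print(shielded_controller(reckless, {'gap': 12.0, 'speed': 10.0}))   # -8.0: overridden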

Putting to Real-World Use

Here we describe how we put our research results to real-world use.

Incremental Support of CPS Design Processes

As we described in Sect. 4.2, our goal is to make real differences, even if small, in real-world CPS design. These differences will provide useful feedback that drives further theoretical developments; given the scale of the CPS market, even a fractional improvement can yield a big difference in total; and accumulating small success stories will pave the way to more comprehensive and systematic attempts to integrate formal methods in the design processes of industry products.

For such an incremental approach to formal methods in CPS design, it is important to work closely with actual practitioners, so that the collaboration leads to the identification of actual issues in which formal methods can be of help. In software engineering there are a lot of mature techniques and tools; therefore, for many industrial issues there already exist tools that readily address them. This is not the case with CPS, however: conventional problems in formal methods are much harder with CPS (Sect. 4.2) and, therefore, it is unlikely that an efficient tool is available for an obvious problem in industry. Consequently, the issues for which we can be of help will be small and subtle; to identify them we theoreticians need to listen carefully to practitioners.

We believe our orientation to abstraction and genericity is an advantage in doing so. To focus on the mathematical essence of a formal methods technique is to understand how and why the technique works; such an understanding of the working mechanism then allows us to apply the technique to a seemingly different but essentially similar problem.

Supporting Software-Centric CPS Design Processes

In the above incremental approach—formal methods in support of existing CPS design processes—we observe the existing design processes and find small portions of them in which formal methods can be of help. Besides that, in our ERATO MMSD project we aim to contribute to the CPS design processes to come, too.

Recent industry trends have turned many industry products into commodities—these trends are notable, e.g. with smartphones and television sets. In this process, design processes become more software-centric. It is natural to expect the same to happen to many other industry products, too. An example is cars, whose functionalities are changing dramatically due to the advancement of autonomous driving. For safety-critical applications such as cars, software-centric design processes will require a high level of safety guarantee—one that is only realizable with formal methods.

To explore the shape of such new CPS design processes, and the roles of formal methods in them, our project collaborates closely with the Autonomoose projectFootnote 8 at the University of Waterloo. Unlike the incremental support of already established CPS design processes (Sect. 4.4.1), with the Autonomoose project we collaborate at an early stage, where an autonomous driving system is being built from scratch. This will allow us to try more exploratory techniques.

The ERATO MMSD Project

The strategy of metatheoretical transfer (Sect. 3) will be implemented in the ERATO MMSD Project through the tactics described in Sect. 4. The organizational structure of the project—consisting of four research groups—reflects the strategy and tactics.

  • Group 0, called Metatheoretical Integration Group and led by Shin-ya Katsumata, aims to formulate the extension processes \(T+e\rightarrow T^{e}\) themselves in rigorous mathematical terms (Sect. 3). Relevant technical fields are mathematical logic and category theory, as languages for describing mathematical theories. A remote site of Group 0, led by Masahito “Hassei” Hasegawa, is at RIMS, Kyoto University.

  • Group 1, called Heterogeneous Formal Methods Group and led by the author, aims to conduct the extension \(T+e\rightarrow T^{e}\) for a variety of Ts and es. The extension will be guided by real-world applications as well as by our metatheories (i.e. general extension schemes); conversely, developments in Group 1 will also provide concrete extensions \(T+e\rightarrow T^{e}\) that will then inspire their metatheoretical unification conducted by Group 0. Collaboration with control theory—a discipline that is complementary to computer science in the study of CPS—is a key aspect of the activities of Group 1. A remote site of Group 1, led by Toshimitsu Ushio, is at Osaka University.

  • Group 2, called Formal Methods in Industry Group and led by Krzysztof Czarnecki, is based at the University of Waterloo, where a number of strategic initiatives are taking place towards collaboration with the automotive industry (such as WatCAR and Autonomoose). The group’s mission is to pursue the effectiveness of the theories and techniques developed in the project (especially by Groups 1 and 3). It applies the techniques to real-world applications and at the same time identifies key theoretical challenges that are then fed back to the other groups. Of the two directions of real-world application in Sect. 4.4, Group 2 is more focused on software-centric CPS design processes (Sect. 4.4.2), while the other direction (incremental support of existing CPS design processes, Sect. 4.4.1) is pursued by all the groups together.

  • Group 3, called Formal Methods and Intelligence Group and led by Fuyuki Ishikawa, pursues collaboration between formal methods and machine/human intelligence (Sect. 4.3). The group’s goal is similar to that of Group 1—namely concrete extensions \(T+e\rightarrow T^{e}\)—but Group 3 approaches it principally via fields such as software engineering, ML/AI and user interfaces (while Group 1 approaches it via software science and control theory).

This project organization allows researchers with different backgrounds—from formal methods to control theory, software engineering, logic and category theory—to collaborate closely with each other. This helps Group 0’s efforts to identify theoretical essence that is common to different academic fields. This heterogeneous project organization also helps to provide multiple views on concrete industry problems, so that we can come up with a greater variety of solutions to them.

Conclusions and Future Perspectives

We have described the ERATO MMSD Project: its contexts (complexity of industry products under computer control) and the challenges it aims to address (formal methods applied to CPS), our strategy of metatheoretical transfer, and how we implement the strategy so that our distinctively theoretical (or metatheoretical) approach also brings real differences to real-world problems. We emphasize that our approaches are incremental: our results will be put to use not only in verification but also in testing; in doing so, our theoretical approach is beneficial since it allows transfer of ideas to seemingly different problems. Exploitation of intelligence—such as ML/AI and human expertise—is featured, too.

In industry application, we start small, identifying small portions of existing design processes in which our techniques can be of help. We hope that by the end of the project (March 2022) we will have seen several such “success stories”—they will give further momentum to formal methods applied to CPS, hopefully leading to formal methods in CPS design as industry practice.

From an academic viewpoint, the project is a unique venue where the integration of software science and control theory—two major disciplines for the analysis of dynamics—is pursued on the solid basis of logical and categorical metatheories. We believe our attempt is special, too, when seen as an instance of applied mathematics. Modern mathematics, with its orientation towards abstraction and genericity, is not very often associated with concrete applications. In our project, through our use of logic and category theory, we wish to make a case that abstraction is power in application: by abstraction we obtain a theory that generalizes a known, more specific one; and this general theory makes us better prepared for challenges yet to come. Such power of abstraction is all the more important in the modern world, where the technological, industrial and social situations around us are changing so rapidly.