Abstract
We release the first tool suite implementing MoXI (Model eXchange Interlingua), an intermediate language for symbolic model checking designed to be an international research-community standard and developed by a widespread collaboration under a National Science Foundation (NSF) CISE Community Research Infrastructure initiative. Although we focus here on hardware verification, the MoXI language is useful for software model checking and verification of infinite-state systems in general. MoXI builds on elements of SMT-LIB 2; it is easy to add new theories and operators. Our contributions include: (1) introducing the first tool suite of automated translators into and out of the new model-checking intermediate language; (2) composing an initial example benchmark set enabling the model-checking research community to build future translations; (3) compiling details for utilizing, extending, and improving upon our tool suite, including usage characteristics and initial performance data. Experimental evaluations demonstrate that compiling SMV-language models through MoXI to perform symbolic model checking with the tools from the last Hardware Model Checking Competition performs competitively with model checking directly via nuXmv.
This work was funded by NSF: CCRI Awards #2016592, #2016597, and #2016656.
1 Overview
As model checking becomes more integrated into the standard design and verification process for safety-critical systems, the platforms for model-checking research have become more limited (e.g., for the SMV language [47], neither CadenceSMV [46] nor NuSMV [24] is actively maintained; only the closed-source nuXmv [15] remains). Continuing advances in the field require higher-level languages that offer sufficient expressive power to describe modern, complex systems and enable validation by industrial system designers. At the same time, contributing advances to back-end model-checking algorithms requires the ability to compare against the full range of state-of-the-art algorithms without regard for which open- or closed-source model checkers implement them or what input languages those tools accept; today, such comparisons require re-implementing entire model checkers, e.g., [30]. We need a sustainable tool flow that can model the system in the most domain-appropriate high-level modeling language, analyze it with the full range of state-of-the-art model-checking algorithms, and return counterexamples or certificates in the original modeling language.
Our tool suite represents an initial step in unifying model-checking research platforms. We seed an extensible framework designed around a model-checking intermediate language, MoXI (Model eXchange Interlingua). MoXI aims to serve as a common language for the international research community that can connect popular front-end modeling languages with the state of the art in back-end model-checking algorithms. Our vision is that MoXI will enable researchers to model-check a new or extended modeling language simply by writing translators to and from MoXI. Similarly, developing a new backend model-checking algorithm will only require writing a translator to and from MoXI to enable comparisons with existing algorithms and evaluations on every benchmark model, regardless of its original modeling language.
Our initial tool suite accepts models in the higher-level language SMV [47] and efficiently interfaces with the back-end model checkers that competed in the last Hardware Model Checking Competition (HWMCC) [13]. We choose SMV because it is a popular, expressive modeling language successfully used in a wide range of industrial verification efforts [14, 17, 23, 29, 30, 33, 34, 36, 42, 45, 48, 49, 54, 61, 63, 64, 65]. SMV is important because, unlike other model-checking input languages, it includes high-level constructs critical for modeling and validating safety-critical systems, such as many aerospace operational systems from Boeing's Wheel Braking System [14] to NASA's Automated Airspace Concept [34, 45, 64, 65] to a variety of Unmanned Aerial Systems [55, 59]. SMV has also been used extensively by the hardware model-checking community (e.g., at FMCAD [38]) and has appealing qualities that could further the integration of formal methods with the embedded-systems community. Two freely available model checkers, CadenceSMV [46] and NuSMV [24] (which is integrated into today's nuXmv [52]), previously provided viable research platforms. Today, however, CadenceSMV's 32-bit pre-compiled binary and nuXmv's closed-source releases are no longer suitable for research, e.g., into improved model-checking algorithms. We provide an open-source research platform that continues the progression of high-level-language model checking in SMV while allowing the use of new algorithms under the hood.
Pushing the state of the art are several open-source, award-winning model-checking tools, including AVR [35], Pono [44], BtorMC [51], and ABC [18]. These tools support a hardware-oriented bit-level input language like Aiger or a bit-precise, word-level format like Btor2. Unfortunately, such languages do not enable the direct modeling of modern complex systems as SMV does, hindering validation efforts. For instance, it is challenging to convince industrial system designers that Aiger models correctly capture their higher-level systems. Perhaps driven by HWMCC, most systems for translating from high-level models to Aiger currently focus on hardware designs, without providing a natural way to describe other computational systems, e.g., embedded systems. Also, the problem of translating counterexamples produced by low-level model-checking algorithms back into meaningful counterexamples for a non-hardware-centric higher-level language model, such as one in SMV, remains a challenge.
Section 2 provides a basic introduction to MoXI, sufficient to enable understanding of the tool-suite functionality; a description of the full language and its semantics appears in [57, 58]. Section 3 details the extensible research and verification suite of tools, including translators between the languages SMV, MoXI (in concrete and JSON dialects), and Btor2; utilities for validation; and a full model-checking implementation. Here, we provide a detailed example of behaviorally equivalent models in SMV, MoXI, and Btor2. Our efforts to validate their correctness appear in Sect. 4. Section 5 demonstrates the efficiency of model checking SMV-language models with a tool portfolio including nuXmv and via translation through MoXI, which performs better than checking with nuXmv alone. The tool (Footnote 1) and all of the benchmarks (Footnote 2) used in this experiment are available online for others to utilize in building additional translators to extend our tool suite and the use of MoXI as an intermediate language for symbolic model checking. Section 6 concludes with a discussion of future work.
2 Intermediate Language
MoXI (detailed in [57]) is an intermediate language designed to serve as a common input and output standard for model checkers for finite- and infinite-state systems. It is general enough to encode high-level modeling languages like SMV yet simple enough to enable efficient model checking, including through low-level languages such as Btor2 or SAT/SMT-based engines. Key features include a simple and easily parsable syntax, a rich set of data types, minimal syntactic sugar (at least for now), well-understood formal semantics, and a small but comprehensive set of commands.
MoXI maximizes machine-readability. Therefore, it does not support several human-interface features found in high-level languages such as SMV, TLA+ [43], PROMELA [37], Simulink [27], SCADE [28], and Lustre [19]; nor does it directly support the full features of hardware modeling languages such as VHDL [40] or Verilog [39]. However, many models and queries expressed in these languages can be reduced to MoXI representations. MoXI development was directly informed by previous intermediate formats for formal verification, their successful applications, and their limitations. The eventual form of MoXI stems from a combination of previous work as well as direct conversations with model-checking and SMT researchers, including the developers of Aiger [2,3,4], Btor2 [51], Kind 2 [22], NuSMV [21], nuXmv [16, 20], SAL/SALLY [9, 32, 50], VMT [26, 41], and SMT-LIB (the standard I/O language for SMT solvers) [6, 7]. MoXI also benefited from feedback from a technical advisory board of prominent researchers and practitioners in academia and industry [58].
MoXI's base logic is the same as that of SMT-LIB Version 2: many-sorted first-order logic with equality, quantifiers, let binders, and algebraic datatypes. MoXI extends this logic to (first-order) temporal logic while adopting a discrete and linear notion of time with standard finite and infinite trace-based semantics. MoXI also extends the SMT-LIB language with new commands for defining and verifying multi-component reactive systems. For the latter, it focuses on the specification and checking of reachability conditions (or, indirectly, state and transition invariants) and deadlocks, possibly under fairness conditions on system inputs. Each system definition command defines a transition system by specifying an initial state condition, a transition relation, and system invariants. These are provided as SMT formulas, with minimal syntactic restrictions, for flexibility and future extensibility. Each defined system is parameterized by a state signature, provided as a sequence of typed variables, and can be expressed as the synchronous composition of other systems (Footnote 3). The signature partitions state variables into input, output, and local variables. Each system verification command expresses one or more reachability queries over a previously defined system. The queries can be conditional on environmental assumptions on the system's inputs and fairness conditions on its executions. Together with the ability to write observer systems, this allows the expression of arbitrary LTL specifications via standard encodings [56]. Responses to a system verification command can contain (finite or lasso) witness traces for reachable properties or proof certificates for unreachable ones.
Figure 1 contains an example (adapted from [5]) of a three-bit counter and its modular definition in MoXI, together with a reachability query and a sample response to the query. Figure 2 contains an extension of that model with an observer system and a query for checking the observational equivalence of the three-bit counter with a bit-vector counter of matching width. The various components of each system definition or check command are provided as attribute-value pairs, following the syntax of SMT-LIB annotations. Transition predicates use primed variables to denote next-state values.
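To make the intended semantics concrete, the three-bit counter and its reachability query can be simulated explicitly. The following Python sketch is our own illustrative encoding, not MoXI syntax or tool output; the functions mirror the roles of the :init, :trans, and :query attributes, and the bounded search mirrors how a reachability query is answered with a witness trace.

```python
from itertools import product

STATES = list(product([False, True], repeat=3))  # all 3-bit valuations (b2, b1, b0)

def value(s):
    return s[0] * 4 + s[1] * 2 + s[2]

def init(s):
    return value(s) == 0                 # :init — counter starts at zero

def trans(s, t):
    return value(t) == (value(s) + 1) % 8  # :trans — increment modulo 8

def reach(s):
    return value(s) == 7                 # :query target — counter reaches 7

def bmc(bound):
    """Unroll the transition relation up to `bound` steps and return the
    first trace (as counter values) whose final state satisfies `reach`."""
    frontier = [[s] for s in STATES if init(s)]
    for _ in range(bound + 1):
        for trace in frontier:
            if reach(trace[-1]):
                return [value(s) for s in trace]
        frontier = [tr + [t] for tr in frontier for t in STATES if trans(tr[-1], t)]
    return None

print(bmc(10))  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The returned list corresponds to a finite witness trace in a check-system response: each element is the counter value in one step of the execution.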
3 Tool Suite
We provide a suite of tools for translating into and out of MoXI and validating MoXI scripts. The tools are implemented in type-annotated Python with a focus on finite-state systems (for now). Figure 3 illustrates the end-to-end toolchain for model checking using MoXI, including relationships between the various tools.
3.1 Translators
The tool suite provides four translators that take as input a model, query, or witness specified in a source language and output a behaviorally equivalent model, query, or witness in the configured target language.
(1) smv2moxi translates specifications written in (a common subset of) the SMV language into MoXI. Broadly, this tool supports Finite State Machine (FSM) definitions (nuXmv manual, Sect. 2.3 [16]). It currently supports only statically typed expressions; for example, all module instantiations of the same defined module must share the same signature. (For a module M with parameters p1 and p2, the types of p1 and p2 must be the same across all instantiations of M.) Figure 4 shows that the translation preserves the hierarchy between the SMV modules and submodule instantiations.
The MoXI encoding captures SMV macro and function declarations (DEFINE, FUN), variable declarations (VAR, IVAR, FROZENVAR), state machine declarations (INIT, TRANS, INVAR, ASSIGN), invariant specifications (AG [property], INVARSPEC) and fairness constraints (FAIRNESS, JUSTICE, COMPASSION). To support LTL specifications (LTLSPEC), smv2moxi runs PANDA [56], an open-source tool offering a portfolio of LTL-to-symbolic automaton translations in SMV format.
The smv2moxi tool consists of (1) preprocessing that renames identifiers deviating from the SMV grammar (discussed in Sect. 4); (2) running the C preprocessor (SMV supports C-style macros) and PANDA [56] (for LTL specifications); (3) parsing via a SLY-generated [8] parser; (4) running an SMV type checker; and (5) translating to MoXI. We emphasize that the tool's guarantees apply to well-formed SMV models as determined by nuXmv.
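As an illustration of the preprocessing step, a simplified identifier sanitizer might look as follows. The identifier grammar and renaming scheme here are hypothetical simplifications for exposition, not smv2moxi's actual rules.

```python
import re

# Simplified stand-in for the SMV identifier grammar (the real grammar,
# defined in the nuXmv manual, differs): a letter or underscore followed
# by letters, digits, and a few punctuation characters.
LEGAL = re.compile(r"^[A-Za-z_][A-Za-z0-9_$#-]*$")

def sanitize(names):
    """Map each identifier to itself if legal, else to a fresh legal name."""
    mapping = {}
    for i, name in enumerate(names):
        mapping[name] = name if LEGAL.match(name) else f"_id{i}"
    return mapping

print(sanitize(["ok_name", "1bad", "also.bad"]))
# {'ok_name': 'ok_name', '1bad': '_id1', 'also.bad': '_id2'}
```

A renaming map of this shape is also what allows witnesses to be reported later in terms of the original identifiers.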
(2) moxi2btor translates MoXI to Btor2 by creating a Btor2 file for each :query attribute in each check-system command. Some crucial differences between MoXI and Btor2 present non-trivial challenges. First, Btor2 does not support hierarchical models; as a result, moxi2btor flattens the system hierarchy during translation. Second, MoXI allows declarative-style initial, transition, and invariant conditions, while Btor2 allows only assignment-style ones. Figure 4 shows how moxi2btor encodes each system's conditions using three variants of each variable. Third, a MoXI query with multiple reachability properties asks for a trace that eventually satisfies each property, whereas in Btor2 multiple bad properties in a file ask for a trace that eventually satisfies at least one of them. Figure 4 again shows how the translation resolves this difference. The moxi2btor tool's workflow consists of (1) parsing via a SLY-generated parser [8]; (2) running sortcheck (Sect. 3.2); (3) translating to a set of Btor2 files, each behaviorally equivalent to its corresponding :query.
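The third difference can be bridged with auxiliary "sticky" bits that latch once their property has held, so that a single bad property (the conjunction of the latches) fires exactly when every query property has been satisfied at some step. The following Python sketch is our illustration of this latching idea; moxi2btor's actual Btor2 encoding may differ.

```python
def latch_monitor(steps):
    """Given per-step truth values for each property, maintain one sticky
    flag per property that stays true once its property has held; the
    combined 'bad' condition fires only when every flag has latched."""
    n = len(steps[0])
    seen = [False] * n
    for step in steps:                    # each step: tuple of property values
        seen = [s or p for s, p in zip(seen, step)]
        if all(seen):
            return True                   # single bad property: all latched
    return False

# p0 holds at step 0, p1 only at step 2: the combined condition fires
# at step 2, matching the MoXI multi-property query semantics.
print(latch_monitor([(True, False), (False, False), (False, True)]))  # True
print(latch_monitor([(True, False), (False, False)]))                 # False
```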
(3) btorwit2moxiwit translates Btor2 witnesses to MoXI witnesses using the check-system-response syntax. It assumes moxi2btor created the Btor2 input files used to generate the witness and uses information that moxi2btor encodes in the comments of each Btor2 file, e.g., to map bit vectors to enumeration values for variables of such sorts.
(4) moxiwit2smvwit translates MoXI witnesses to SMV-language witnesses.
3.2 Utilities
sortcheck. We provide a sort-checker for MoXI that supports the following SMT-LIB logics: QF_BV, QF_ABV, QF_LIA, QF_NIA, QF_LRA, and QF_NRA.
validate. We define a JSON Schema for MoXI and support a JSON dialect of MoXI in our tools. Given the evolving nature of new languages and their standards, tool writers often incur unnecessary overhead keeping front-end tools up to date. By supporting the representation of MoXI constructs in the JSON dialect, we expect to facilitate tool development, improve tool interoperability, and ensure conformance to the language standard. Tool writers can use off-the-shelf JSON parsers (e.g., simdjson, RapidJSON) to obtain industrial-strength MoXI parsers in the language of their choice "for free." We plan to include a JSON schema with each MoXI release, enabling seamless front-end compatibility with the latest MoXI standard along with language/platform independence. The validate utility invokes a JSON validator from Python's jsonschema package to validate a MoXI script (in the JSON dialect) against the MoXI JSON schema.
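As a minimal stand-in for schema validation, the following sketch checks one structural constraint using only the standard library. The field names and constraints here are our invention, not the actual MoXI JSON schema; the real validate utility delegates the full check to the jsonschema package.

```python
import json

# Hypothetical structural check: each command object must carry a string
# "command" field and an "attributes" object. These field names are
# illustrative only.
def check_command(obj):
    return (isinstance(obj, dict)
            and isinstance(obj.get("command"), str)
            and isinstance(obj.get("attributes"), dict))

script = json.loads('[{"command": "define-system", "attributes": {"input": []}}]')
print(all(check_command(c) for c in script))  # True
```

The appeal of the JSON dialect is precisely that checks of this kind, and full schema validation, come from off-the-shelf libraries in any host language.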
4 Tool Suite Validation
We validate our tools using a combination of manual inspection, sort checking of translated output, and comparison of witnesses generated by nuXmv against those generated by our end-to-end tool suite. We use catbtor [51] for sort checking and BtorMC, AVR, and Pono for bounded model checking (BMC) of Btor2 files. For benchmark generation, we use the set of nuXmv input files provided in the most recent release of nuXmv (Fig. 5).
Manual Inspection. We provide an initial set of hand-written MoXI benchmarks to perform manual validation. Each benchmark is well-sorted according to sortcheck, generates well-sorted Btor2 via moxi2btor according to catbtor, and generates correct, manually inspected witnesses via BtorMC and btorwit2moxiwit (Footnote 4).
Sort Checked Translations. Using the benchmarks distributed with nuXmv as input, we check that the output of smv2moxi and moxi2btor are well-sorted according to sortcheck and catbtor. We discovered discrepancies in benchmarks distributed with nuXmv while developing these utilities, where the benchmarks did not conform to the grammar defined in Chap. 2 of the nuXmv User Manual [16] but were accepted by nuXmv nonetheless, particularly concerning identifiers. The preprocessor of smv2moxi transforms these identifiers into valid ones. There were also numerous ill-typed benchmarks that smv2moxi ’s type checker correctly rejects.
Output Comparison. Using the nuXmv benchmarks again as input, we run nuXmv and our tool suite to generate witnesses for each specification. Both nuXmv and our tool suite agree on the result of every model-checking query. Section 5 describes how our toolchain (using BtorMC, AVR, or Pono as its back end) shows a similar number of timeouts compared with nuXmv when the latter is set to use BMC or k-induction.
5 Benchmarks
We provide an initial set of MoXI benchmarks for the model-checking community, generated from the set of SMV input files provided in the most recent release of nuXmv. Noting that many of these SMV benchmarks are themselves the result of a Btor2-to-nuXmv translation, we stress that this set is intended only as a starting point. We expect to achieve greater benchmark diversity with continued toolchain development and increased adoption of MoXI by other researchers.
Experimental Evaluation. We compare the end-to-end performance of model-checking SMV-language models with a portfolio comprising nuXmv and Btor2 model checkers (AVR, Pono, and BtorMC) on a set of 960 QF_ABV-compatible SMV benchmarks, i.e., SMV models with Boolean, word, or array types. We use the HWMCC 2020 versions of AVR and Pono, the version of BtorMC from the latest version of Boolector [51], and the latest public release of nuXmv (version 2.0.0). Each checker is configured with a 1-hour time limit and an 8 GB memory limit and runs BMC [12] and k-induction [60] with a maximum bound of 1000. (We do not run BtorMC with k-induction due to a bug in its implementation.)
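For readers unfamiliar with the two algorithms, the following explicit-state Python sketch contrasts them on a toy modulo-8 counter: the base case is exactly BMC, and the inductive step attempts to prove the property for all bounds at once. This is a didactic illustration only; the evaluated tools work symbolically over SMT formulas rather than by path enumeration.

```python
def kinduction(states, init, trans, prop, k_max):
    """Explicit-state k-induction: prove `prop` invariant, or report the
    bound at which the base case (BMC) finds a counterexample."""
    def paths(length, from_init):
        ps = [[s] for s in states if (init(s) if from_init else True)]
        for _ in range(length):
            ps = [p + [t] for p in ps for t in states if trans(p[-1], t)]
        return ps
    for k in range(k_max + 1):
        # Base case (BMC): prop must hold at the end of every initial path.
        if any(not prop(p[-1]) for p in paths(k, True)):
            return ("cex", k)
        # Inductive step: k+1 consecutive prop-states imply prop in the next.
        if all(prop(p[-1]) for p in paths(k + 1, False)
               if all(prop(s) for s in p[:-1])):
            return ("proved", k)
    return ("unknown", k_max)

# Toy system: an integer counter modulo 8.
states = range(8)
init = lambda s: s == 0
trans = lambda s, t: t == (s + 1) % 8
print(kinduction(states, init, trans, lambda s: s != 5, 10))  # ('cex', 5)
print(kinduction(states, init, trans, lambda s: s < 8, 10))   # ('proved', 0)
```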
Figure 6 shows our evaluation, with portfolio performance depicted as virtual-best (vb). While we consider this a proof-of-concept evaluation, we observe that SMV-language model checking using Btor2 model checkers, enabled via a translation through MoXI, delivers superior performance on unsafe queries compared to model checking with nuXmv alone: vb-bmc solves 57% more benchmarks than nuXmv-bmc while ensuring all Btor2 witnesses are correctly translated to SMV traces. We measure competitive performance with vb-kind solving 6% more benchmarks than nuXmv-kind for safe queries. The vb performance gains are due to its ability to use a variety of model checkers with different SMT solver backends of varying strengths, e.g., nuXmv uses MathSAT [25], AVR uses Yices [31], and Pono uses Boolector [51], while ensuring correct model and witness translation through MoXI. Section 4 of Rozier et al. [57] includes experimental data using each tool’s IC3-based algorithms.
6 Conclusion and Future Work
The presented tool suite provides the foundational step in developing an open-source, state-of-the-art symbolic model-checking framework for the research community. It constitutes the first tool support for the new intermediate language MoXI, the first experimental evidence of the potential for efficient translation through MoXI, and a basis upon which the hardware and software model-checking communities can build. Adding support for checking models in a high-level modeling language is now as easy as adding a translator between that language and MoXI to this tool suite. Similarly, experimenting with a novel back-end model-checking algorithm to check all supported input modeling languages only requires writing a new MoXI translator interfacing with that algorithm. Benchmarking against other model-checking algorithms no longer requires re-implementing existing tools to achieve an apples-to-apples comparison.
Connecting this toolchain to existing tools enables the immediate application of verification techniques for Btor2 to MoXI beyond just hardware model checkers. For example, a software model checker can verify a MoXI model via Btor2C [11], making at least 59 other backend verifiers for MoXI available [10].
This release enables future instantiations of HWMCC [13] to add competition tracks centered around MoXI, with extensions from the model-checking research community. Specifying, proving correct, and extracting efficient C code for our translation using a theorem prover such as PVS [53] would provide an additional trusted translation between languages beyond the validation techniques in Sect. 4. We are writing a back end to Yosys [62], the open-source RTL synthesis framework, to generate files directly from Verilog designs and facilitate a more extensive set of realistic benchmarks to add to the initial set in Sect. 5. Additionally, once MoXI certificates are fully defined, we can translate Btor2-Cert [1] certificates from Btor2-Cert-supported verifiers back to MoXI. Finally, we expect that developers of model checkers for modeling languages at a higher level than Btor2 may choose to support MoXI directly; we have work in this direction underway for the Kind 2 checker [22].
Notes
- 1.
- 2.
- 3.
We plan to include asynchronous composition in a later release.
- 4.
Many thanks to Daniel Larraz for writing many of the MoXI examples.
References
Ádám, Z., Beyer, D., Chien, P.C., Lee, N.Z., Sirrenberg, N.: Btor2-Cert: a certifying hardware-verification framework using software analyzers. In: Finkbeiner, B., Kovács, L. (eds.) TACAS 2024. LNCS, vol. 14572, pp. 129–149. Springer, Cham (2024). https://doi.org/10.1007/978-3-031-57256-2_7
The AIGER and-inverter graph (AIG) format version 20071012. http://fmv.jku.at/aiger/FORMAT. Accessed 25 July 2016
AIGER 1.9 and beyond. http://fmv.jku.at/hwmcc11/beyond1.pdf. Accessed 25 July 2016
AIGER website. http://fmv.jku.at/aiger/. Accessed 25 July 2016
Alur, R.: Principles of Cyber-physical Systems. MIT Press, Cambridge (2015)
Barrett, C., Fontaine, P., Tinelli, C.: The Satisfiability Modulo Theories Library (SMT-LIB). https://smt-lib.org
Barrett, C., Stump, A., Tinelli, C.: The SMT-LIB standard: version 2.0. In: Gupta, A., Kroening, D. (eds.) Proceedings of the 8th International Workshop on Satisfiability Modulo Theories (Edinburgh, UK) (2010)
Beazley, D.: SLY (sly lex yacc) (2018). https://sly.readthedocs.io/en/latest/
Bensalem, S., et al.: An overview of SAL. In: Holloway, C.M. (ed.) LFM 2000: Fifth NASA Langley Formal Methods Workshop, pp. 187–196. NASA Langley Research Center, Hampton, June 2000. http://www.csl.sri.com/papers/lfm2000/
Beyer, D.: State of the art in software verification and witness validation: SV-COMP 2024. In: Finkbeiner, B., Kovács, L. (eds) TACAS 2024. LNCS, vol. 14572, pp. 299–329. Springer, Cham (2024). https://doi.org/10.1007/978-3-031-57256-2_15
Beyer, D., Chien, P.C., Lee, N.Z.: Bridging hardware and software analysis with BTOR2C: a word-level-circuit-to-C translator. In: Sankaranarayanan, S., Sharygina, N. (eds.) TACAS 2023. LNCS, vol. 13994, pp. 152–172. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30820-8_12
Biere, A., Cimatti, A., Clarke, E., Zhu, Y.: Symbolic model checking without BDDs. In: Cleaveland, W.R. (ed.) TACAS 1999. LNCS, vol. 1579, pp. 193–207. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-49059-0_14
Biere, A., Froleyks, N., Preiner, M.: Hardware Model Checking Competition (HWMCC) (2020). https://fmv.jku.at/hwmcc20/index.html
Bozzano, M., et al.: Formal design and safety analysis of AIR6110 wheel brake system. In: Kroening, D., Păsăreanu, C.S. (eds.) CAV 2015. LNCS, vol. 9206, pp. 518–535. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21690-4_36
Bozzano, M., et al.: nuXmv 1.0 User Manual. Technical report, FBK - Via Sommarive 18, 38055 Povo (Trento) - Italy (2014)
Bozzano, M., et al.: nuXmv 2.0.0 User Manual. Technical report, Fondazione Bruno Kessler, Trento, Italy (2019)
Bozzano, M., Cimatti, A., Katoen, J.-P., Nguyen, V.Y., Noll, T., Roveri, M.: The COMPASS approach: correctness, modelling and performability of aerospace systems. In: Buth, B., Rabe, G., Seyfarth, T. (eds.) SAFECOMP 2009. LNCS, vol. 5775, pp. 173–186. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-04468-7_15
Brayton, R., Mishchenko, A.: ABC: an academic industrial-strength verification tool. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 24–40. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_5
Caspi, P., Pilaud, D., Halbwachs, N., Plaice, J.: LUSTRE: a declarative language for programming synchronous systems. In: Proceedings of the 14th Annual ACM Symposium on Principles of Programming Languages, pp. 178–188 (1987)
Cavada, R., et al.: The nuXmv symbolic model checker. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 334–342. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08867-9_22
Cavada, R., et al.: NuSMV 2.6 user manual (2016)
Champion, A., Mebsout, A., Sticksel, C., Tinelli, C.: The Kind 2 model checker. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 510–517. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_29
Choi, Y., Heimdahl, M.: Model checking software requirement specifications using domain reduction abstraction. In: IEEE ASE, pp. 314–317 (2003)
Cimatti, A., et al.: NuSMV 2: an opensource tool for symbolic model checking. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 359–364. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45657-0_29
Cimatti, A., Griggio, A., Schaafsma, B.J., Sebastiani, R.: The MathSAT5 SMT solver. In: Piterman, N., Smolka, S.A. (eds.) TACAS 2013. LNCS, vol. 7795, pp. 93–107. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36742-7_7
Cimatti, A., Griggio, A., Tonetta, S., et al.: The VMT-LIB language and tools. In: Proceedings of the 20th International Workshop on Satisfiability Modulo Theories, co-located with the 11th International Joint Conference on Automated Reasoning (IJCAR 2022), part of the 8th Federated Logic Conference (FLoC 2022), Haifa, Israel, 11–12 August 2022, vol. 3185, pp. 80–89. CEUR-WS.org (2022)
Simulink Documentation: Simulation and model-based design (2020). https://www.mathworks.com/products/simulink.html
SCADE Documentation: Ansys SCADE Suite (2023). https://www.ansys.com/products/embedded-software/ansys-scade-suite
Dureja, R., Rozier, E.W.D., Rozier, K.Y.: A case study in safety, security, and availability of wireless-enabled aircraft communication networks. In: Proceedings of the 17th AIAA Aviation Technology, Integration, and Operations Conference (AVIATION). American Institute of Aeronautics and Astronautics, June 2017. https://doi.org/10.2514/6.2017-3112
Dureja, R., Rozier, K.Y.: FuseIC3: an algorithm for checking large design spaces. In: Proceedings of Formal Methods in Computer-Aided Design (FMCAD), Vienna, Austria. IEEE/ACM, October 2017
Dutertre, B.: Yices 2.2. In: Biere, A., Bloem, R. (eds.) CAV 2014. LNCS, vol. 8559, pp. 737–744. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-08867-9_49
Dutertre, B., Jovanović, D., Navas, J.A.: Verification of fault-tolerant protocols with sally. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NFM 2018. LNCS, vol. 10811, pp. 113–120. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_8
Gan, X., Dubrovin, J., Heljanko, K.: A symbolic model checking approach to verifying satellite onboard software. Sci. Comput. Programm. (2013). http://dx.doi.org/10.1016/j.scico.2013.03.005
Gario, M., Cimatti, A., Mattarei, C., Tonetta, S., Rozier, K.Y.: Model checking at scale: automated air traffic control design space exploration. In: Chaudhuri, S., Farzan, A. (eds.) CAV 2016. LNCS, vol. 9780, pp. 3–22. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-41540-6_1
Goel, A., Sakallah, K.: AVR: abstractly verifying reachability. In: TACAS 2020. LNCS, vol. 12078, pp. 413–422. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-45190-5_23
Gribaudo, M., Horváth, A., Bobbio, A., Tronci, E., Ciancamerla, E., Minichino, M.: Model-checking based on fluid petri nets for the temperature control system of the ICARO co-generative plant. In: Anderson, S., Felici, M., Bologna, S. (eds.) SAFECOMP 2002. LNCS, vol. 2434, pp. 273–283. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45732-1_27
Holzmann, G.: Design and Validation of Computer Protocols. Prentice-Hall International Editions (1991)
Hunt, W.: FMCAD organization home page. http://www.cs.utexas.edu/users/hunt/FMCAD/
IEEE: IEEE standard for Verilog hardware description language (2005)
IEEE: IEEE standard for VHDL language reference manual (2019)
Fondazione Bruno Kessler: Verification modulo theories. https://vmt-lib.fbk.eu/. Accessed 30 Sept 2017
Lahtinen, J., Valkonen, J., Björkman, K., Frits, J., Niemelä, I., Heljanko, K.: Model checking of safety-critical software in the nuclear engineering domain. Reliab. Eng. Syst. Safety 105(0), 104–113 (2012). http://www.sciencedirect.com/science/article/pii/S0951832012000555
Lamport, L.: Specifying Systems: The TLA+ Language and Tools for Hardware and Software Engineers. Addison-Wesley, Reading (2002)
Mann, M., et al.: Pono: a flexible and extensible SMT-based model checker. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021, Part II. LNCS, vol. 12760, pp. 461–474. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81688-9_22
Mattarei, C., Cimatti, A., Gario, M., Tonetta, S., Rozier, K.Y.: Comparing different functional allocations in automated air traffic control design. In: Proceedings of Formal Methods in Computer-Aided Design (FMCAD 2015). IEEE/ACM, Austin, Texas, U.S.A, September 2015
McMillan, K.: The SMV language. Technical report, Cadence Berkeley Lab (1999)
McMillan, K.L.: Symbolic Model Checking, chap. The SMV System, pp. 61–85. Springer, Boston (1993). https://doi.org/10.1007/978-1-4615-3190-6_4
Miller, S.P.: Will this be formal? In: Mohamed, O.A., Muñoz, C., Tahar, S. (eds.) TPHOLs 2008. LNCS, vol. 5170, pp. 6–11. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-71067-7_2
Miller, S.P., Tribble, A.C., Whalen, M.W., Heimdahl, M.P.E.: Proving the shalls. STTT 8(4–5), 303–319 (2006)
de Moura, L., Owre, S., Shankar, N.: The SAL language manual. CSL Technical report SRI-CSL-01-02 (Rev. 2), SRI Int’l, 333 Ravenswood Ave., Menlo Park, CA 94025, August 2003
Niemetz, A., Preiner, M., Wolf, C., Biere, A.: Btor2, BtorMC and Boolector 3.0. In: Chockler, H., Weissenbacher, G. (eds.) CAV 2018. LNCS, vol. 10981, pp. 587–595. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-96145-3_32
The nuXmv model checker (2015). https://nuxmv.fbk.eu/
Owre, S., Rushby, J.M., Shankar, N.: PVS: a prototype verification system. In: Kapur, D. (ed.) CADE 1992. LNCS, vol. 607, pp. 748–752. Springer, Heidelberg (1992). https://doi.org/10.1007/3-540-55602-8_217
Lomuscio, A., Łasica, T., Penczek, W.: Bounded model checking for interpreted systems: preliminary experimental results. In: Hinchey, M.G., Rash, J.L., Truszkowski, W.F., Rouff, C., Gordon-Spears, D. (eds.) FAABS 2002. LNCS (LNAI), vol. 2699, pp. 115–125. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45133-4_10
Reinbacher, T., Rozier, K.Y., Schumann, J.: Temporal-logic based runtime observer pairs for system health management of real-time systems. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014. LNCS, vol. 8413, pp. 357–372. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54862-8_24
Rozier, K.Y., Vardi, M.Y.: A multi-encoding approach for LTL symbolic satisfiability checking. In: Butler, M., Schulte, W. (eds.) FM 2011. LNCS, vol. 6664, pp. 417–431. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21437-0_31
Rozier, K.Y., et al.: MoXI: an intermediate language for symbolic model checking. In: Proceedings of the 30th International Symposium on Model Checking Software (SPIN). LNCS, Springer (2024)
Rozier, K.Y., Shankar, N., Tinelli, C., Vardi, M.Y.: Developing an open-source, state-of-the-art symbolic model-checking framework for the model-checking research community (2019). https://modelchecker.github.io
Schumann, J., Rozier, K.Y., Reinbacher, T., Mengshoel, O.J., Mbaya, T., Ippolito, C.: Towards real-time, on-board, hardware-supported sensor and software health management for unmanned aerial systems. In: Proceedings of the 2013 Annual Conference of the Prognostics and Health Management Society (PHM2013), pp. 381–401, October 2013
Sheeran, M., Singh, S., Stålmarck, G.: Checking safety properties using induction and a SAT-solver. In: Hunt, W.A., Johnson, S.D. (eds.) FMCAD 2000. LNCS, vol. 1954, pp. 127–144. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-40922-X_8
Tribble, A., Miller, S.: Software safety analysis of a flight management system vertical navigation function-a status report. In: DASC, pp. 1.B.1–1.1–9 v1 (2003)
Wolf, C.: Yosys open synthesis suite (2016)
Yoo, J., Jee, E., Cha, S.: Formal modeling and verification of safety-critical software. Softw. IEEE 26(3), 42–49 (2009)
Zhao, Y., Rozier, K.Y.: Formal specification and verification of a coordination protocol for an automated air traffic control system. In: Proceedings of the 12th International Workshop on Automated Verification of Critical Systems (AVoCS 2012). Electronic Communications of the EASST, vol. 53, pp. 337–353. European Association of Software Science and Technology (2012)
Zhao, Y., Rozier, K.Y.: Formal specification and verification of a coordination protocol for an automated air traffic control system. Sci. Comput. Programm. J. 96(3), 337–353 (2014)
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2024 The Author(s)
Cite this paper
Johannsen, C. et al. (2024). The MoXI Model Exchange Tool Suite. In: Gurfinkel, A., Ganesh, V. (eds) Computer Aided Verification. CAV 2024. Lecture Notes in Computer Science, vol 14681. Springer, Cham. https://doi.org/10.1007/978-3-031-65627-9_10
DOI: https://doi.org/10.1007/978-3-031-65627-9_10
Print ISBN: 978-3-031-65626-2
Online ISBN: 978-3-031-65627-9