Introduction

A blockchain is a data structure for implementing a distributed ledger in a trustless yet secure way. The idea of blockchains was originally devised for the Bitcoin cryptocurrency platform [13]. Many cryptocurrencies are implemented using blockchains, in which value equivalent to a significant amount of money is exchanged.

Recently, many cryptocurrency platforms allow programs to be executed on a blockchain. Such programs are called smart contracts [20] (or simply contracts in this article) since they work as a device to enable automated execution of a contract. Technically speaking, a smart contract is a program \(P_a\) associated with an account a on a blockchain. (The word contract is also used to denote the account with which a smart contract is associated.) When the account a receives money from another account b with a parameter v, the computation defined in \(P_a\) is conducted, during which the state of the account a (e.g., the balance of the account and values stored by previous invocations of \(P_a\)) recorded on the blockchain may be updated. The contract \(P_a\) may issue money transfers to other accounts (say, c), which result in invocations of other contracts (say, \(P_c\)) during or after the computation; therefore, contract invocations may be chained.

Although the original motivation of smart contracts was handling simple transactions (e.g., money transfers) among the accounts on a blockchain, recent contracts are being used for more complicated purposes (e.g., establishing a fund involving multiple accounts). Following this trend, the languages for writing smart contracts have also evolved from those that allow a contract to execute relatively simple transactions (e.g., Script for Bitcoin) to those that allow programs as complex as ones written in standard programming languages (e.g., EVM for Ethereum and Michelson [14] for Tezos [6]).

Due to the large amount of money they deal with, verification of smart contracts is imperative. Static verification is especially needed since a smart contract cannot be fixed once deployed on a blockchain. Attacks on vulnerable contracts have indeed happened. For example, the DAO attack, in which the vulnerability of a fundraising contract was exploited, resulted in the loss of cryptocurrency equivalent to approximately 150M USD [19].

In this article, we describe our type-based static verifier HelmholtzFootnote 1 for smart contracts written in Michelson. The Michelson language is a statically and simply typed stack-based language equipped with rich data types (e.g., lists, maps, and higher-order functions) and primitives to manipulate them. Although several high-level languages that compile to Michelson are being developed, Michelson is the most widely used language for writing Tezos smart contracts as of this writing.

A Michelson program expresses the above computation in a purely functional style, in which the Michelson program corresponding to \(P_a\) is defined as a function. The function takes a pair of the parameter v and a value s that represents the current state of the account (called storage) and returns a pair of a list of operations and the updated storage \(s'\). Here, an operation is a Michelson value that expresses the computation (e.g., transferring money to an account and invoking the contract associated with the account) that is to be conducted after the current computation (i.e., \(P_a\)) terminates. After the computation specified by \(P_a\) finishes with a pair of an operation list and a storage value, a blockchain system invokes the computation specified in the operation list. This purely functional style admits static verification methods for Michelson programs similar to those for standard functional languages.
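
For example, the following do-nothing contract (a minimal sketch of ours illustrating this style) ignores its parameter, issues no operations, and returns its storage s unchanged:

    parameter unit ;
    storage unit ;
    code { CDR ;            # keep only the storage value s
           NIL operation ;  # push the empty list of operations
           PAIR }           # return the pair ([], s)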

As the theoretical foundation of Helmholtz, we design a refinement type system for Michelson as an extension of the original simple type system. In contrast to standard refinement types, which refine the types of values, our type system refines the types of stacks.

We show that our tool can verify several practical smart contracts. In addition to the contracts we wrote ourselves, we apply our tool to the sample Michelson programs used in Mi-cho-coq [3], a formalization of Michelson in the Coq proof assistant [22]. These include practical contracts such as one that checks a digital signature and one that transfers money.

We note that Helmholtz currently supports approximately 80% of the instructions of the Michelson language. Another limitation of the current Helmholtz is that it can verify only a single contract, although an application often consists of multiple contracts, in which one contract may call another through a money transfer operation, and their behavior as a whole is of interest. We are currently extending Helmholtz so that it can deal with more programs.

Our contribution is summarized as follows: (1) definition of the core calculus Mini-Michelson and its refinement type system; (2) the automated verification tool Helmholtz for Michelson contracts, implemented based on the type system of Mini-Michelson; the interface to the implementation can be found at https://www.fos.kuis.kyoto-u.ac.jp/trylang/Helmholtz; and (3) evaluation of Helmholtz with various Michelson contracts, including practical ones. A preliminary version of this article was presented at the International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS) in 2021. This article adds detailed proofs of the properties of Mini-Michelson and a more detailed description of the verifier implementation, in addition to a revision of the text.

The rest of this article is organized as follows. Before introducing the technical details, we present an overview of the verifier Helmholtz in the next section using a simple example of a Michelson contract. The following section introduces the core calculus Mini-Michelson with its refinement type system and states the soundness of the refinement type system. (Detailed proofs are deferred to Appendix A.) We also discuss a few extensions implemented in the verifier. The next section describes the verifier Helmholtz, a case study, and experimental results. After discussing related work in the next section, we conclude in the last section.

Overview of Helmholtz and Michelson

We give an overview of our tool Helmholtz in this section before presenting its technical details. We also explain Michelson by example (Section ‘An Example Contract in Michelson’) and the user-written annotations added to a Michelson program for verification purposes (Section ‘Specification’).

Helmholtz

As input, Helmholtz takes a Michelson program annotated with (1) its specification expressed in a refinement type and (2) additional user annotations such as loop invariants. It typechecks the annotated program against the specification using our refinement type system; the verification conditions generated during typechecking are discharged by the SMT solver Z3 [4]. If the code successfully typechecks, then the program is guaranteed to satisfy the specification.

Helmholtz is implemented as a subcommand of tezos-client, the client program of the Tezos blockchain. For example, to verify boomerang.tz in Fig. 1, we run tezos-client refinement boomerang.tz. If the verification succeeds, the command outputs VERIFIED to the terminal screen (with a few log messages); otherwise, it outputs UNVERIFIED.

Fig. 1: boomerang.tz. The comment inside \(\text{/* */}\) describes the stack at the program point

An Example Contract in Michelson

Figure 1 shows an example of a Michelson program called \(\texttt {boomerang}\). A Michelson program is associated with an account on the Tezos blockchain; the program is invoked by transferring money to this account. This artificial program, when invoked, is supposed to transfer the received money back to the account that initiated the transaction.

A Michelson program starts with type declarations of its \(\texttt {parameter}\), whose value is given by contract invocation, and \(\texttt {storage}\), which is the state that the contract account stores. Lines 1–2 declare that the types of both are \(\texttt {unit}\), the type inhabited by the only value \(\texttt {Unit}\). Lines 3–8 surrounded by \(\texttt {<<}\) and \(\texttt {>>}\) are a user-written annotation used by Helmholtz for verification; we will explain this annotation later. The \(\texttt {code}\) section in Lines 10–29 is the body of this program.

Let us take a look at the \(\texttt {code}\) section of the program. In the following explanation of each instruction, we describe the state of the stack after each instruction as comments; stack elements are delimited by \(\triangleright \).

  • Execution of a Michelson program starts with a stack with one value, which is a pair \(\texttt {(param, st)}\) of a parameter \(\texttt {param}\) and a storage value \(\texttt {st}\).

  • \(\texttt {CDR}\) pops the pair at the top of the stack and pushes the second value of the popped pair; thus, after executing the instruction, the stack contains the single value \(\texttt {st}\).

  • \(\texttt {NIL}\) pushes the empty list \(\texttt {[]}\) to the stack; the instruction is accompanied by the type \(\texttt {operation}\) of the list elements for typechecking purposes.

  • \(\texttt {AMOUNT}\) pushes the nonnegative \(\texttt {amount}\) of the money sent to the account with which this program is associated.

  • \(\texttt {PUSH mutez 0}\) pushes the value 0. The type \(\texttt {mutez}\) represents a unit of money used in Tezos.

  • \(\texttt {IFCMPEQ b1 b2}\), if the state of the stack before executing the instruction is \(\texttt {v1}\) \(\triangleright \) \(\texttt {v2}\) \(\triangleright \) \(\texttt {tl}\), (1) pops \(\texttt {v1}\) and \(\texttt {v2}\) and (2) executes the then-branch \(\texttt {b1}\) (resp., the else-branch \(\texttt {b2}\)) if \(\texttt {v2}\) \(=\) \(\texttt {v1}\) (resp., \(\texttt {v2}\) \(\ne \) \(\texttt {v1}\)). In \(\texttt {boomerang}\), this instruction does nothing if \(\texttt {amount}\) \(=\) 0; otherwise, the instructions in the else-branch are executed.

  • \(\texttt {SOURCE}\) at the beginning of the else-branch pushes the address \(\texttt {src}\) of the source account, which initiated the chain of contract invocations that the current contract belongs to, resulting in the stack \(\texttt {src}\) \(\triangleright \) \(\texttt {[]}\) \(\triangleright \) \(\texttt {st}\).

  • \(\texttt {CONTRACT}\) T pops an address \(\texttt {addr}\) from the stack and typechecks whether the contract associated with \(\texttt {addr}\) takes an argument of type T. If the typechecking succeeds, then \(\texttt {Some (Contract addr)}\) is pushed; otherwise, \(\texttt {None}\) is pushed. The constructor \(\texttt {Contract}\) creates an object that represents a typechecked contract at the given address. In Tezos, the source account is always a contract that takes the value \(\texttt {Unit}\) as a parameter; thus, \(\texttt {Some (Contract src)}\) will always be pushed onto the stack.

  • \(\texttt {ASSERT\_SOME}\) pops a value \(\texttt {v}\) from the stack and pushes \(\texttt {v'}\) if \(\texttt {v}\) is \(\texttt {Some v'}\); otherwise, it raises an exception.

  • \(\texttt {UNIT}\) pushes the unit value \(\texttt {Unit}\) to the stack.

  • \(\texttt {TRANSFER\_TOKENS}\), if the stack is of the shape \(\texttt {varg}\) \(\triangleright \) \(\texttt {vamt}\) \(\triangleright \) \(\texttt {vcontr}\) \(\triangleright \) \(\texttt {tl}\), pops \(\texttt {varg}\), \(\texttt {vamt}\), and \(\texttt {vcontr}\) from the stack and pushes \(\texttt {Transfer varg vamt vcontr}\) onto \(\texttt {tl}\). The value \(\texttt {Transfer varg vamt vcontr}\) is an operation object expressing that money (of amount \(\texttt {vamt}\)) shall be sent to the account \(\texttt {vcontr}\) with the argument \(\texttt {varg}\) after this program finishes without raising an exception. Therefore, the program associated with \(\texttt {vcontr}\) is invoked after this program finishes. Note that an operation object is otherwise an opaque tuple: no instruction can extract its elements.

  • \(\texttt {CONS}\) with the stack \(\texttt {v1}\) \(\triangleright \) \(\texttt {v2}\) \(\triangleright \) \(\texttt {tl}\) pops \(\texttt {v1}\) and \(\texttt {v2}\), and pushes a cons list \(\texttt {v1{:}{:}v2}\) onto the stack. (We use the list notation in OCaml here.)

  • After executing one of the branches associated with \(\texttt {IFCMPEQ}\) in this program, the shape of the stack should be \(\texttt {ops}\) \(\triangleright \) \(\texttt {storage}\), where \(\texttt {ops}\) is \(\texttt {[]}\) if \(\texttt {amount}\) \(= 0\) or \(\texttt {[Transfer Unit amount (Contract src)]}\) if \(\texttt {amount}\) \(> 0\). The instruction \(\texttt {PAIR}\) pops \(\texttt {ops}\) and \(\texttt {storage}\), and pushes \(\texttt {(ops,storage)}\).

A Michelson program is supposed to finish its execution with a singleton stack whose unique element is a pair of (1) a list of operations to be executed after the current execution of the contract finishes and (2) the new value for the storage.
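
Putting the walkthrough together, the \(\texttt {code}\) section of \(\texttt {boomerang}\) can be sketched as follows. This is a reconstruction of ours from the instruction-by-instruction description above, with the stack after each instruction shown as comments; the actual file in Fig. 1 also carries the annotation in Lines 3–8 and may differ in layout.

    parameter unit ;
    storage unit ;
    code { CDR ;               # st
           NIL operation ;     # [] ▷ st
           AMOUNT ;            # amount ▷ [] ▷ st
           PUSH mutez 0 ;      # 0 ▷ amount ▷ [] ▷ st
           IFCMPEQ
             { }               # amount = 0: [] ▷ st
             { SOURCE ;        # src ▷ [] ▷ st
               CONTRACT unit ;
               ASSERT_SOME ;   # (Contract src) ▷ [] ▷ st
               AMOUNT ;
               UNIT ;          # Unit ▷ amount ▷ (Contract src) ▷ [] ▷ st
               TRANSFER_TOKENS ;
               CONS } ;        # [Transfer Unit amount (Contract src)] ▷ st
           PAIR }              # (ops, st)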

Michelson is a statically typed language. Each instruction is associated with a typing rule that specifies the shapes of the stacks before and after it by a sequence of simple types such as \(\texttt {int}\) and \(\texttt {int list}\). For example, \(\texttt {CONS}\) requires the type of the top element to be T and that of the second element to be T \(\texttt { list}\) (for any T); it ensures that the top element afterwards has type T \(\texttt { list}\).
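
For instance, the following snippet (a small illustration of ours, with the stack types after each instruction shown as comments) applies \(\texttt {CONS}\) at type \(\texttt {int}\):

    NIL int ;     # int list ▷ ...
    PUSH int 1 ;  # int ▷ int list ▷ ...
    CONS          # int list ▷ ...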

Other notable features of Michelson include first-class functions, hashing, instructions related to cryptography such as signature verification, and manipulation of a blockchain using operations.

Specification

A user can specify the behavior of a program by a \(\texttt {ContractAnnot}\) annotation, which is a part of the augmented syntax of our verification tool. A \(\texttt {ContractAnnot}\) annotation gives a specification of a Michelson program in the following notation inspired by refinement types: {(param,st) | pre} -> {(ops,st’) | post} & {exc | abpost}, where \(\texttt {pre}\), \(\texttt {post}\), and \(\texttt {abpost}\) are predicates.

This specification reads as follows: if this program is invoked with a parameter \(\texttt {param}\) and storage \(\texttt {st}\) that satisfy the property \(\texttt {pre}\), then (1) if the execution of this program succeeds, then it returns a list of operations \(\texttt {ops}\) and new storage \(\texttt {st'}\) that satisfy the property \(\texttt {post}\); (2) if this program raises an exception with value \(\texttt {exc}\), then \(\texttt {exc}\) satisfies \(\texttt {abpost}\). The specification language, which is ML-like, is expressive enough to cover the specifications for practical contracts, including the ones we used in the experiments in 'Experiments'. In the predicates, one can use several keywords such as \(\texttt {amount}\) for the amount of the money sent to this program when it is invoked and \(\texttt {source}\) for the source account’s address.

The \(\texttt {ContractAnnot}\) annotation in Fig. 1 (Lines 3–8) formalizes this program’s specification as follows. This program can take any parameter and storage (Line 3). Successful execution of this program results in a pair \(\texttt {(ops,st')}\) that satisfies the condition in Lines 4–7, which expresses that (1) if \(\texttt {amount}=0\), then \(\texttt {ops}\) is empty, that is, no operation will be issued; and (2) if \(\texttt {amount}>0\), then \(\texttt {ops}\) is a list of a single element \(\texttt {Transfer Unit amount c}\), where \(\texttt {c}\) is bound to \(\texttt {Contract source}\)Footnote 2, which expresses the transfer of money of the amount \(\texttt {amount}\) to the account at \(\texttt {source}\) with the unit argument.Footnote 3 In the specification language, \(\texttt {source}\) and \(\texttt {amount}\) are keywords that stand for the source account and the amount of money sent to this program, respectively. The part \( \texttt { \& \; \{\; \_ \;| \; False \}}\) expresses that this program does not raise an exception. This specification correctly formalizes the intended behavior of this program.
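
In the notation introduced above, the specification can be paraphrased roughly as follows (writing ==> for implication and && for conjunction; this is a sketch of the intended predicates, not necessarily the exact concrete syntax of the annotation in Fig. 1):

    { (param, st) | True }
    -> { (ops, st') |
           (amount = 0 ==> ops = []) &&
           (amount > 0 ==> ops = [ Transfer Unit amount (Contract source) ]) }
    &  { _ | False }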

Refinement Type System for Mini-Michelson

In this section, we formalize Mini-Michelson, a core subset of Michelson, with its syntax, operational semantics, and refinement type system. We omit many features of the full language in favor of conciseness but include the language constructs, such as higher-order functions and iteration, that make verification difficult.

Syntax

Fig. 2: Syntax of Mini-Michelson

Figure 2 shows the syntax of Mini-Michelson. Values, ranged over by \(V\), consist of integers \(i\); addresses \(a\); operation objects \( \texttt {Transfer} ( V , i , a ) \) to invoke a contract at a by sending money of amount i and an argument V; pairs \(( V_{{\mathrm {1}}} , V_{{\mathrm {2}}} )\) of values; the empty list \([ ]\); cons \(V_{{\mathrm {1}}} {:}{:} V_{{\mathrm {2}}}\); and code \( \langle IS \rangle \) of first-class functions.Footnote 4 Unlike Michelson, which has primitive Boolean literals \(\texttt {True}\) and \(\texttt {False}\), we use integers as a substitute for Boolean values, so that 0 means \(\texttt {False}\) and any other integer means \(\texttt {True}\). As we have mentioned, there is no instruction to extract elements from an operation object, but the elements can be referenced in refinement types to state what kind of operation object is constructed by a smart contract. Simple types, ranged over by \(T\), consist of base types (\(\mathtt {int}\), \(\mathtt {address}\), and \(\mathtt {operation}\), which are self-explanatory), pair types \(T_{{\mathrm {1}}} \times T_{{\mathrm {2}}}\), list types \(T \, \mathtt {list}\), and function types \(T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}}\). Instruction sequences, ranged over by \( IS \), are sequences of instructions, ranged over by \(I\), enclosed in curly braces. A Mini-Michelson program is an instruction sequence.

Instructions include those for operand stack manipulation (to \(\mathtt {DROP}\), \(\mathtt {DUP}\)licate, \(\mathtt {SWAP}\), and \(\mathtt {PUSH}\) values); \(\mathtt {NOT}\) and \(\mathtt {ADD}\) for manipulating integers; \(\mathtt {PAIR}\), \(\mathtt {CAR}\), and \(\mathtt {CDR}\) for pairs; \(\mathtt {NIL}\) and \(\mathtt {CONS}\) for constructing lists; \(\mathtt {LAMBDA}\) for a first-class function; \(\mathtt {EXEC}\) for calling a function; and \(\texttt {TRANSFER\_TOKENS} \) to create an operation. Instructions for control structures are \(\mathtt {IF}\) and \( \texttt {IF\_CONS} \), which are for branching on integers (whether the stack top is \(\texttt {True}\) or not) and lists (whether the stack top is a cons or not), respectively, and \(\mathtt {LOOP}\) and \(\mathtt {ITER}\), which are for iteration on integers and lists, respectively. \(\mathtt {LAMBDA}\) pushes a function (described by its operand \( IS \)) onto the stack and \(\mathtt {EXEC}\) calls a function. Perhaps unfamiliar is \(\mathtt {DIP} \, IS \), which pops and saves the stack top somewhere else, executes \( IS \), and then pushes back the saved value.

We also use a few kinds of stacks in the following definitions: operand stacks, ranged over by \(S\), type stacks, ranged over by \({\bar{T}}\), and type binding stacks, ranged over by \(\varUpsilon \). The empty stack is denoted by \( \ddagger \) and push is by \( \triangleright \). We often omit the empty stack and write, for example, \(V_{{\mathrm {1}}} \triangleright V_{{\mathrm {2}}}\) for \(V_{{\mathrm {1}}} \triangleright V_{{\mathrm {2}}} \triangleright \ddagger \). Intuitively, \(T_{{\mathrm {1}}} \triangleright \, .. \, \triangleright T_{n}\) and \( x_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright \, .. \, \triangleright x_{n} : T_{n} \) describe stacks \(V_{{\mathrm {1}}} \triangleright \, .. \, \triangleright V_{n}\), where each value \(V_i\) is of type \(T_i\). We will use variables to name stack elements in the refinement type system.

We summarize the main differences from Michelson proper:

  • Michelson has the notion of type attributes, which classify types according to whether generic operations such as \(\mathtt {PUSH}\) can be applied to them. For example, values of pushable types can be put on the stack by \(\mathtt {PUSH}\). Since the type \(\mathtt {operation}\) is not pushable, an instruction such as \(\mathtt {PUSH} \, \mathtt {operation} \, \texttt {Transfer} ( V , i , a ) \) is not valid in Michelson: all operations have to be created by designated instructions. We ignore type attributes for simplicity here, but the implementation of Helmholtz, which calls the typechecker of Michelson, does not.

  • As we saw in ‘Overview of HELMHOLTZ and Michelson’, an operation is created from an address in two steps via a contract value. Since we model only one kind of operation, i.e., \( \texttt {Transfer} ( V , i , a ) \), we simplify the process and let the instruction \( \texttt {TRANSFER\_TOKENS} \) directly create an operation from an address in one step. We also omit the typecheck of the contract associated with an address.

  • In Michelson, each execution of a smart contract is assigned an amount of gas to prevent the contract from running too long. The current Helmholtz, however, does not take gas consumption into account because it depends on the sizes of the values manipulated. Incorporating gas consumption into our framework is left as future work.

  • We do not formally model exceptions for simplicity and, thus, the refinement type system does not (have to) capture exceptional behavior. Our verifier, however, does handle exceptions; we will informally discuss how we extend the type system with exceptions in Section ‘Extension with Exceptions’.

Operational Semantics

Fig. 3: Operational semantics of Mini-Michelson

Figure 3 defines the operational semantics of Mini-Michelson . A judgment of the form \(S \vdash I \Downarrow S'\) (or \(S \vdash IS \Downarrow S'\), resp.) means that evaluating the instruction \(I\) (or the instruction sequence \( IS \), resp.) under the stack \(S\) results in the stack \(S'\). Although the defining rules are straightforward, we will make a few remarks about them.

The rule (E-Dip) means that \(\mathtt {DIP} \, IS \) pops and saves the stack top somewhere else, executes \( IS \), and then pushes back the saved value, as explained above. This instruction implicitly gives Mini-Michelson (and Michelson) a secondary stack. (E-Push) means that \(\mathtt {PUSH} \, T \, V\) does not check whether the pushed value is well formed at run time: the check is the job of the simple type system, discussed below.
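
For example, since \(\mathtt {ADD}\) replaces the two topmost integers with their sum, the following evaluations hold (a small worked instance of (E-Dip)):

$$\begin{aligned} 5 \triangleright 7 \triangleright \ddagger \vdash \{ \mathtt {ADD} \} \Downarrow 12 \triangleright \ddagger \qquad \text {and hence} \qquad 2 \triangleright 5 \triangleright 7 \triangleright \ddagger \vdash \mathtt {DIP} \, \{ \mathtt {ADD} \} \Downarrow 2 \triangleright 12 \triangleright \ddagger . \end{aligned}$$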

The rules (E-IfT) and (E-IfF) define the behavior of the branching instruction \(\mathtt {IF} \, IS _{{\mathrm {1}}} \, IS _{{\mathrm {2}}}\), which executes \( IS _{{\mathrm {1}}}\) or \( IS _{{\mathrm {2}}}\), depending on the top of the operand stack. As we have mentioned, nonzero integers mean \(\texttt {True}\). Thus, (E-IfT) is used for the case in which \( IS _{{\mathrm {1}}}\) is executed, and otherwise, (E-IfF) is used. There is another branching instruction \(\texttt {IF\_CONS} \, IS _{{\mathrm {1}}} \, IS _{{\mathrm {2}}}\), which executes either instruction sequence depending on whether the list at the top of the stack is empty or not (cf. (E-IfConsT) and (E-IfConsF)).

The rules (E-LoopT) and (E-LoopF) define the behavior of the looping instruction \(\mathtt {LOOP} \, IS \). This instruction executes \( IS \) repeatedly until the top of the stack becomes \(\texttt {False}\). (E-LoopT) means that, if the condition is \(\texttt {True}\), \( IS \) is executed, and then \(\mathtt {LOOP} \, IS \) is executed again. (E-LoopF) means that, if the condition is \(\texttt {False}\), the loop is finished after dropping the stack top. A similar looping instruction is \(\mathtt {ITER} \, IS \), which iterates over a list (see (E-IterNil) and (E-IterCons)).
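
As a small worked instance of (E-LoopT) and (E-LoopF), consider a loop whose body simply pushes 0, so that the body runs at most once:

$$\begin{aligned} 1 \triangleright 7 \triangleright \ddagger \vdash \mathtt {LOOP} \, \{ \mathtt {PUSH} \, \mathtt {int} \, 0 \} \Downarrow 7 \triangleright \ddagger . \end{aligned}$$

The top element 1 is nonzero, so the body is executed on \(7 \triangleright \ddagger \) and pushes 0; in the next round, the top element 0 is dropped and the loop terminates.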

The rule (E-Lambda) means that \(\mathtt {LAMBDA} \, T_{{\mathrm {1}}} \, T_{{\mathrm {2}}} \, IS \) pushes the instruction sequence \( \langle IS \rangle \) onto the stack, and (E-Exec) means that \(\mathtt {EXEC}\) pops the argument \(V\) at the stack top and the instruction sequence \( \langle IS \rangle \) below it, saves the rest of the stack \(S\) elsewhere, runs \( IS \) with \(V\) as the sole value in the stack, and pushes the result \(V'\) back onto the restored stack \(S\).

The rule (E-TransferTokens) means that \(\texttt {TRANSFER\_TOKENS} \, T\) creates an operation object and pushes it onto the stack. (As we have discussed, we omit the run-time check to see whether \(T\) is really the argument type of the contract associated with the address \(a\).)

Simple Type System

Fig. 4: Simple typing

Mini-Michelson (as well as Michelson) is equipped with a simple type system. The type judgment for instructions is written \({\bar{T}} \vdash I \Rightarrow {\bar{T}}'\), which means that instruction \(I\) transforms a stack of type \({\bar{T}}\) into another stack of type \({\bar{T}}'\). The type judgment for values is written \( V : T \), which means that \(V\) is given simple type \(T\). The typing rules, which are shown in Fig. 4, are fairly straightforward. Note that these two judgment forms depend on each other—see (RTV-Fun) and (T-Push).
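
As a concrete illustration (assuming the expected rules for \(\mathtt {PUSH}\) and \(\mathtt {ADD}\) in Fig. 4), one expects the following judgments to be derivable:

$$\begin{aligned} \mathtt {int} \triangleright \ddagger \vdash \mathtt {PUSH} \, \mathtt {int} \, 1 \Rightarrow \mathtt {int} \triangleright \mathtt {int} \triangleright \ddagger \qquad \mathtt {int} \triangleright \mathtt {int} \triangleright \ddagger \vdash \mathtt {ADD} \Rightarrow \mathtt {int} \triangleright \ddagger . \end{aligned}$$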

Refinement Type System

Now, we extend the simple type system to a refinement type system. In the refinement type system, a simple stack type \(T_{{\mathrm {1}}} \triangleright \, .. \, \triangleright T_{n}\) is augmented with a formula \(\varphi \) in an assertion language to describe the relationship among the stack elements. More concretely, we introduce refinement stack types, ranged over by \(\varPhi \), of the form \(\{ x_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright \, ... \, \triangleright x_{n} : T_{n} \,|\,\varphi ( x_{{\mathrm {1}}} , \, ... \, , x_{n} ) \}\), which denotes a stack \(V_{{\mathrm {1}}} \triangleright \, .. \, \triangleright V_{n}\) such that \( V_{{\mathrm {1}}} : T_{{\mathrm {1}}} \), ..., \( V_{n} : T_{n} \) and \(\varphi ( V_{{\mathrm {1}}} , \, ... \, , V_{n} )\) hold, and we refine the type judgment forms accordingly. We start with the assertion language, which is a many-sorted first-order logic, and then proceed to the refinement type system.
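
For example, the refinement stack type

$$\begin{aligned} \{ x : \mathtt {int} \triangleright y : \mathtt {int} \triangleright \ddagger \,|\, x = y + 1 \} \end{aligned}$$

denotes two-element integer stacks such as \(5 \triangleright 4 \triangleright \ddagger \) but not \(3 \triangleright 3 \triangleright \ddagger \); the assertion language in which such formulae are written is defined next.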

Assertion Language

Fig. 5: Syntax of Assertion Language

Fig. 6: Well-Sorted Terms and Formulae

The assertion language is many-sorted first-order logic, where sorts are simple types. We show the syntax of terms, ranged over by \(t\), and formulae, ranged over by \(\varphi \), in Fig. 5. As usual, \(x\) is bound in \(\exists \, x : T . \varphi \). Most of the constructs are straightforward, but some are worth explaining. A formula of the form \( \mathtt {call} ( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} ) = t_{{\mathrm {3}}} \) means that, if the instruction sequence denoted by \(t_{{\mathrm {1}}}\) is called with (a singleton stack that stores) the value denoted by \(t_{{\mathrm {2}}}\) (and terminates), it yields the value denoted by \(t_{{\mathrm {3}}}\). The term constructor \( \texttt {Transfer} ( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} , t_{{\mathrm {3}}} ) \) allows us to refer to the elements in an operation object, which is otherwise opaque. Conjunction \(\varphi _{{\mathrm {1}}} \wedge \varphi _{{\mathrm {2}}}\), implication \(\varphi _{{\mathrm {1}}} \implies \varphi _{{\mathrm {2}}}\), and universal quantification \(\forall \, x : T . \varphi \) are defined as abbreviations as usual. We use several common abbreviations such as \(t_{{\mathrm {1}}} \ne t_{{\mathrm {2}}}\) for \(\lnot \, ( t_{{\mathrm {1}}} = t_{{\mathrm {2}}} )\), \(\exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} , \, .. \, , x_{n} : T_{n} . \varphi \) for \(\exists x_{{\mathrm {1}}} : T_{{\mathrm {1}}} . \dots \exists x_{n} : T_{n} . \varphi \), etc. A typing environment, ranged over by \(\varGamma \), is a sequence of type bindings. We assume all variables in \(\varGamma \) are distinct. We write a comma for the concatenation of typing environments, e.g., \(\varGamma _{{\mathrm {1}}} , \varGamma _{{\mathrm {2}}}\). We also use a type binding stack \(\varUpsilon \) as a typing environment, explicitly denoted by \( {\widehat{\varUpsilon }} \), which is defined as \( {\widehat{\ddagger }} = \mathtt {empty}\) and \( \widehat{ x : T \triangleright \varUpsilon } = x : T , {\widehat{\varUpsilon }} \). To save space, we (ab)use V to denote a subset of terms and the value typing (see (WT-Val)). Strictly speaking, a term like \(( 0 , 0 )\) has two type derivations, but it is easy to show that a term has at most one type under a given type environment. A similar abuse of V will be found elsewhere (such as Definition 2 below).

Well-sorted terms and formulae are defined by the judgments \(\varGamma \vdash t : T\) and \( \varGamma \vdash \varphi : {*} \), respectively. The former means that the term \(t\) is a well-sorted term of the sort \(T\) under the typing environment \(\varGamma \), and the latter means that the formula \(\varphi \) is well sorted under the typing environment \(\varGamma \). The derivation rules for each judgment, shown in Fig. 6, are straightforward. Note that well-sortedness depends on the simple type system via (WT-Val).
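
For instance, assuming the expected sorting rules for \(\mathtt {call}\) and \(+\) in Fig. 6, the judgment

$$\begin{aligned} f : \mathtt {int} \rightarrow \mathtt {int} , x : \mathtt {int} \vdash \mathtt {call} ( f , x ) = x + 1 : {*} \end{aligned}$$

holds; the formula states that calling the code bound to \(f\) with the argument \(x\) yields \(x + 1\).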

Let a value assignment \(\sigma \) be a mapping from variables to values. We write \( \sigma [ x \mapsto V ] \) to denote the value assignment which maps \(x\) to \(V\) and otherwise is identical to \(\sigma \). As we are interested in well-sorted formulae, we consider a value assignment that respects a typing environment, as follows.

Definition 1

A value assignment \(\sigma \) is typed under a typing environment \(\varGamma \), denoted by \(\sigma : \varGamma \), iff \( \sigma ( x ) : T \) for every \( x : T \in \varGamma \).

Now, we define the semantics of the well-sorted terms and formulae in a standard manner as follows.

Definition 2

(Semantics of terms) The semantics \( {}[\![t ]\!]_{ \sigma : \varGamma } \) of term \(t\) under typed value assignment \(\sigma : \varGamma \) is defined as follows:

$$\begin{aligned} {}[\![x ]\!]_{ \sigma : \varGamma } = \sigma ( x )&\qquad {}[\![V ]\!]_{ \sigma : \varGamma } = V \\ {}[\![\texttt {Transfer} ( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} , t_{{\mathrm {3}}} ) ]\!]_{ \sigma : \varGamma }&= \texttt {Transfer} ( {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } , {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } , {}[\![t_{{\mathrm {3}}} ]\!]_{ \sigma : \varGamma } ) \\ {}[\![( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} ) ]\!]_{ \sigma : \varGamma }&= ( {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } , {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } ) \\ {}[\![t_{{\mathrm {1}}} {:}{:} t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma }&= {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } {:}{:} {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } \\ {}[\![t_{{\mathrm {1}}} + t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma }&= {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } + {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } . \end{aligned}$$

Definition 3

(Semantics of formulae). For a typed value assignment \(\sigma : \varGamma \), a valid well-sorted formula \(\varphi \) under \(\varGamma \) is denoted by \(\sigma : \varGamma \models \varphi \) and defined as follows.

  • \(\sigma : \varGamma \models \top \).

  • \(\sigma : \varGamma \models t_{{\mathrm {1}}} = t_{{\mathrm {2}}}\) iff \( {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } = {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } \).

  • \(\sigma : \varGamma \models \mathtt {call} ( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} ) = t_{{\mathrm {3}}} \) iff \( {}[\![t_{{\mathrm {1}}} ]\!]_{ \sigma : \varGamma } = \langle IS \rangle \) and \( {}[\![t_{{\mathrm {2}}} ]\!]_{ \sigma : \varGamma } \triangleright \ddagger \vdash IS \Downarrow {}[\![t_{{\mathrm {3}}} ]\!]_{ \sigma : \varGamma } \triangleright \ddagger \).

  • \(\sigma : \varGamma \models \lnot \, \varphi \) iff \(\sigma : \varGamma \not \models \varphi \).

  • \(\sigma : \varGamma \models \varphi _{{\mathrm {1}}} \vee \varphi _{{\mathrm {2}}}\) iff \(\sigma : \varGamma \models \varphi _{{\mathrm {1}}}\) or \(\sigma : \varGamma \models \varphi _{{\mathrm {2}}}\).

  • \(\sigma : \varGamma \models \exists \, x : T . \varphi \) iff \( \sigma [ x \mapsto V ] : \varGamma , x : T \models \varphi \) for some \(V\).

We write \(\varGamma \models \varphi \) iff \(\sigma : \varGamma \models \varphi \) for any \(\sigma \).

Typing Rules

Fig. 7: Typing rules (I)

Fig. 8: Typing rules (II)

The type system is defined by subtyping and typing: a subtyping judgment is of the form \(\varGamma \vdash \varPhi _{{\mathrm {1}}} <: \varPhi _{{\mathrm {2}}}\), which means that the stack type \(\varPhi _{{\mathrm {1}}}\) is a subtype of \(\varPhi _{{\mathrm {2}}}\) under \(\varGamma \), and a type judgment for instructions (resp. instruction sequences) is of the form \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; I \; \varPhi _{{\mathrm {2}}} \) (resp. \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; IS \; \varPhi _{{\mathrm {2}}} \)), which means that if \(I\) (resp. \( IS \)) is executed under a stack satisfying \(\varPhi _{{\mathrm {1}}}\), the resulting stack (if the execution terminates) satisfies \(\varPhi _{{\mathrm {2}}}\). We often call \(\varPhi _{{\mathrm {1}}}\) the pre-condition and \(\varPhi _{{\mathrm {2}}}\) the post-condition, following the terminology of Hoare logic. Note that the scopes of the variables declared in \(\varGamma \) include \(\varPhi _{{\mathrm {1}}}\) and \(\varPhi _{{\mathrm {2}}}\), but the scopes of the variables bound in \(\varPhi _{{\mathrm {1}}}\) do not include \(\varPhi _{{\mathrm {2}}}\). To express the relationship between the initial and final stacks, we use the type environment \(\varGamma \). In writing down concrete specifications, it is sometimes convenient to allow the scopes of variables bound in \(\varPhi _{{\mathrm {1}}}\) to include \(\varPhi _{{\mathrm {2}}}\), but we found that it would clutter the presentation of the typing rules.
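
For example, assuming the expected rule for \(\mathtt {ADD}\) together with (RT-Sub), a judgment such as

$$\begin{aligned} \mathtt {empty} \vdash \{ x : \mathtt {int} \triangleright y : \mathtt {int} \triangleright \ddagger \,|\, x = 1 \wedge y = 2 \} \; \mathtt {ADD} \; \{ z : \mathtt {int} \triangleright \ddagger \,|\, z = 3 \} \end{aligned}$$

should be derivable: starting from a stack whose two topmost integers are 1 and 2, \(\mathtt {ADD}\) produces a stack whose top element is 3.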

In our type system, subtyping is defined semantically as follows.

Definition 4

(Subtyping relation). A refinement stack type \(\{ \varUpsilon \,|\,\varphi _{{\mathrm {1}}} \}\) is called a subtype of a refinement stack type \(\{ \varUpsilon \,|\,\varphi _{{\mathrm {2}}} \}\) under a typing environment \(\varGamma \), denoted by \(\varGamma \vdash \{ \varUpsilon \,|\,\varphi _{{\mathrm {1}}} \} <: \{ \varUpsilon \,|\,\varphi _{{\mathrm {2}}} \}\), iff \(\varGamma , {\widehat{\varUpsilon }} \models \varphi _{{\mathrm {1}}} \implies \varphi _{{\mathrm {2}}}\).
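
For instance, for any \(\varGamma \),

$$\begin{aligned} \varGamma \vdash \{ x : \mathtt {int} \triangleright \ddagger \,|\, x = 0 \} <: \{ x : \mathtt {int} \triangleright \ddagger \,|\, x + 1 = 1 \} \end{aligned}$$

holds because \(\varGamma , x : \mathtt {int} \models x = 0 \implies x + 1 = 1\).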

We show the typing rules in Figs. 7 and 8. It is easy to observe that the type binding stack parts in the pre- and post-conditions follow the simple type system. We focus on the predicate parts below.

  • (RT-Dip) means that \(\mathtt {DIP} \, IS \) is well typed if the body \( IS \) is typed under the stack type obtained by removing the top element. Since the property \(\varphi \) for the initial stack may refer to the popped value \(x\), we keep its binding in the typing environment.

  • (RT-If) means that the instruction is well typed if both branches have the same post-condition; the pre-conditions of the branches are strengthened by the assumptions that the top of the input stack is \(\texttt {True}\) (\(x \ne 0\)) and \(\texttt {False}\) (\(x = 0\)). The variable \(x\) is existentially quantified because the top element will be removed before the execution of either branch.

  • (RT-Loop) is similar to the proof rule for while-loops in Hoare logic. The formula \(\varphi \) is a loop invariant. Since the body of \(\mathtt {LOOP}\) is executed while the stack top is nonzero, the pre-condition for the body \( IS \) is strengthened by \(x \ne 0\), whereas the post-condition of \(\mathtt {LOOP} \, IS \) is strengthened by \(x = 0\).

  • (RT-Iter) can be understood as a variant of (RT-Loop). In the premise, \( x_{{\mathrm {1}}} {:}{:} x_{{\mathrm {2}}} = x \) represents the condition under which the iteration goes on, that is, the list on top of the stack is non-nil. In addition, \( x_{{\mathrm {2}}} = x \) guarantees that the loop invariant \(\varphi \) holds for the tail of the list, since the stack top in the next iteration becomes the tail of the list.

  • (RT-Lambda) is for the instruction to push a first-class function onto the operand stack. The premise of the rule means that the body \( IS \) takes a value (named \(y_{{\mathrm {1}}}\)) of type \(T_{{\mathrm {1}}}\) that satisfies \(\varphi _{{\mathrm {1}}}\) and outputs a value (named \(y_{{\mathrm {2}}}\)) of type \(T_{{\mathrm {2}}}\) that satisfies \(\varphi _{{\mathrm {2}}}\) (if it terminates). The post-condition in the conclusion expresses, by using call, that the function \(x\) has the property above. The extra variable \(y'_{{\mathrm {1}}}\) in the type environment of the premise is an alias of \(y_{{\mathrm {1}}}\); being a variable declared in the type environment, \(y'_{{\mathrm {1}}}\) can appear in both \(\varphi _{{\mathrm {1}}}\) and \(\varphi _{{\mathrm {2}}}\)Footnote 5 and thus can describe the relationship between the input and output of the function.

  • (RT-Exec) just adds \( \mathtt {call} ( x_{{\mathrm {2}}} , x_{{\mathrm {1}}} ) = x_{{\mathrm {3}}} \) to the post-condition, meaning that a call to the function \(x_{{\mathrm {2}}}\) with \(x_{{\mathrm {1}}}\) as an argument yields \(x_{{\mathrm {3}}}\). It may look simpler than expected; the crux here is that \(\varphi \) is expected to imply \(\forall \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} , x_{{\mathrm {3}}} : T_{{\mathrm {2}}} . \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( x_{{\mathrm {2}}} , x_{{\mathrm {1}}} ) = x_{{\mathrm {3}}} \implies \varphi _{{\mathrm {2}}}\), where \(\varphi _{{\mathrm {1}}}\) and \(\varphi _{{\mathrm {2}}}\) represent the pre- and post-conditions, respectively, of the function \(x_{{\mathrm {2}}}\). If \(x_{{\mathrm {1}}}\) satisfies \(\varphi _{{\mathrm {1}}}\), then we can derive that \(\varphi _{{\mathrm {2}}}\) holds.

  • (RT-Sub) is the subsumption rule, which allows strengthening the pre-condition and weakening the post-condition.

Properties

In this section, we show the soundness of our type system. Informally, what we show is that, for a well-typed program, if we execute it under a stack that satisfies the pre-condition of the typing, then (if the evaluation halts) the resulting stack satisfies the post-condition of the typing. We only sketch the proofs, stating the important lemmas. The detailed proofs are found in Appendix A.

To state the soundness formally, we give additional definitions.

Definition 5

(Free variables). The set of free variables in \(\varphi \) is denoted by \( \text {fvars}( \varphi ) \).

Definition 6

(Erasure) We define \( \lfloor \varPhi \rfloor \), which is the simple stack type obtained by erasing predicates from \(\varPhi \), as follows.

$$\begin{aligned} \lfloor \{ \ddagger \,|\,\varphi \} \rfloor = \ddagger \qquad \lfloor \{ x : T \triangleright \varUpsilon \,|\,\varphi \} \rfloor = T \triangleright \lfloor \{ \varUpsilon \,|\,\varphi \} \rfloor \end{aligned}$$
Fig. 9: Simple and refinement stack typing

Definition 7

(Stack typing). Stack typing \( S : {\bar{T}} \) and refinement stack typing \(\sigma : \varGamma \models S : \varPhi \) are defined by the rules in Fig. 9.

Note that the definition of refinement stack typing follows the informal explanation of the refinement stack types in Section ‘Refinement Type System’.

We start with the soundness of the simple type system: for a simply well-typed instruction sequence \({\bar{T}}_{{\mathrm {1}}} \vdash IS \Rightarrow {\bar{T}}_{{\mathrm {2}}}\), if evaluation starts from a stack \(S_{{\mathrm {1}}}\) that respects \({\bar{T}}_{{\mathrm {1}}}\), that is \( S_{{\mathrm {1}}} : {\bar{T}}_{{\mathrm {1}}} \), and results in a stack \(S_{{\mathrm {2}}}\), then \(S_{{\mathrm {2}}}\) respects \({\bar{T}}_{{\mathrm {2}}}\), that is \( S_{{\mathrm {2}}} : {\bar{T}}_{{\mathrm {2}}} \). This lemma is not only a desirable property in itself but also one we use for proving the soundness of the refinement type system in the case of \(\mathtt {EXEC}\).

Lemma 8

(Soundness of the simple type system) If \({\bar{T}}_{{\mathrm {1}}} \vdash IS \Rightarrow {\bar{T}}_{{\mathrm {2}}}\), \(S_{{\mathrm {1}}} \vdash IS \Downarrow S_{{\mathrm {2}}}\), and \( S_{{\mathrm {1}}} : {\bar{T}}_{{\mathrm {1}}} \), then \( S_{{\mathrm {2}}} : {\bar{T}}_{{\mathrm {2}}} \).

Proof

Proved with a similar statement

If \({\bar{T}}_{{\mathrm {1}}} \vdash I \Rightarrow {\bar{T}}_{{\mathrm {2}}}\), \(S_{{\mathrm {1}}} \vdash I \Downarrow S_{{\mathrm {2}}}\), and \( S_{{\mathrm {1}}} : {\bar{T}}_{{\mathrm {1}}} \), then \( S_{{\mathrm {2}}} : {\bar{T}}_{{\mathrm {2}}} \)

for a single instruction by simultaneous induction on \({\bar{T}}_{{\mathrm {1}}} \vdash IS \Rightarrow {\bar{T}}_{{\mathrm {2}}}\) and \({\bar{T}}_{{\mathrm {1}}} \vdash I \Rightarrow {\bar{T}}_{{\mathrm {2}}}\). \(\square \)

We state the main theorem as follows.

Theorem 9

(Soundness of the refinement type system) If \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; IS \; \varPhi _{{\mathrm {2}}} \), \(S_{{\mathrm {1}}} \vdash IS \Downarrow S_{{\mathrm {2}}}\), and \(\sigma : \varGamma \models S_{{\mathrm {1}}} : \varPhi _{{\mathrm {1}}}\), then \(\sigma : \varGamma \models S_{{\mathrm {2}}} : \varPhi _{{\mathrm {2}}}\).

The proof is close to a soundness proof of Hoare logic, with a few extra complications due to the presence of first-class functions. One of the key lemmas is the following, which states that a value assignment can equivalently be represented by a logical formula or by a stack element:

Lemma 10

The following statements are equivalent:

  (1) \(\sigma : \varGamma \models S : \{ \varUpsilon \,|\,\exists \, x : T . \varphi \wedge x = V \}\);

  (2) \( \sigma [ x \mapsto V ] : \varGamma , x : T \models S : \{ \varUpsilon \,|\,\varphi \}\); and

  (3) \(\sigma : \varGamma \models V \triangleright S : \{ x : T \triangleright \varUpsilon \,|\,\varphi \}\).

Then, we prove a few lemmas related to \(\mathtt {LOOP}\) (Lemma 11), \(\mathtt {ITER}\) (Lemma 12), predicate call (Lemmas 13 and 14), and subtyping (Lemma 15).

Lemma 11

Suppose \( IS \) satisfies that \(S_{{\mathrm {1}}} \vdash IS \Downarrow S_{{\mathrm {2}}}\) and \(\sigma : \varGamma \models S_{{\mathrm {1}}} : \{ \varUpsilon \,|\,\exists \, x : \mathtt {int} . \varphi \wedge x \ne 0 \}\) imply \(\sigma : \varGamma \models S_{{\mathrm {2}}} : \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,\varphi \}\) for any \(S_{{\mathrm {1}}}\) and \(S_{{\mathrm {2}}}\). If \(S_{{\mathrm {1}}} \vdash \mathtt {LOOP} \, IS \Downarrow S_{{\mathrm {2}}}\) and \(\sigma : \varGamma \models S_{{\mathrm {1}}} : \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,\varphi \}\), then \(\sigma : \varGamma \models S_{{\mathrm {2}}} : \{ \varUpsilon \,|\,\exists \, x : \mathtt {int} . \varphi \wedge x = 0 \}\).

Proof

By induction on the derivation of \(S_{{\mathrm {1}}} \vdash \mathtt {LOOP} \, IS \Downarrow S_{{\mathrm {2}}}\). \(\square \)

Lemma 12

Suppose \( x_{{\mathrm {1}}} \notin \text {fvars}( \varphi ) \), \( x_{{\mathrm {2}}} \notin \text {fvars}( \varphi ) \), and that \(S'_{{\mathrm {1}}} \vdash IS \Downarrow S'_{{\mathrm {2}}}\) and \(\sigma ' : \varGamma , x_{{\mathrm {2}}} : T \, \mathtt {list} \models S'_{{\mathrm {1}}} : \{ x_{{\mathrm {1}}} : T \triangleright \varUpsilon \,|\,\exists \, x : T \, \mathtt {list} . \varphi \wedge x_{{\mathrm {1}}} {:}{:} x_{{\mathrm {2}}} = x \}\) imply \(\sigma ' : \varGamma , x_{{\mathrm {2}}} : T \, \mathtt {list} \models S'_{{\mathrm {2}}} : \{ \varUpsilon \,|\,\exists \, x : T \, \mathtt {list} . \varphi \wedge x_{{\mathrm {2}}} = x \}\) for any \(S'_{{\mathrm {1}}}\), \(S'_{{\mathrm {2}}}\), and \(\sigma '\). If \(S_{{\mathrm {1}}} \vdash \mathtt {ITER} \, IS \Downarrow S_{{\mathrm {2}}}\) and \(\sigma : \varGamma \models S_{{\mathrm {1}}} : \{ x : T \, \mathtt {list} \triangleright \varUpsilon \,|\,\varphi \}\), then \(\sigma : \varGamma \models S_{{\mathrm {2}}} : \{ \varUpsilon \,|\,\exists \, x : T \, \mathtt {list} . \varphi \wedge x = [ ] \}\).

Proof

By induction on the derivation of \(S_{{\mathrm {1}}} \vdash \mathtt {ITER} \, IS \Downarrow S_{{\mathrm {2}}}\). \(\square \)

Lemma 13

If \(y_{{\mathrm {1}}} \ne y_{{\mathrm {2}}}\), \( y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} \vdash \varphi _{{\mathrm {1}}} : {*} \), \( y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} \vdash \varphi _{{\mathrm {2}}} : {*} \), \( \langle IS \rangle : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \), and

$$\begin{aligned} \begin{aligned} \text {for any } V_{{\mathrm {1}}}, V_{{\mathrm {2}}}, \sigma , \text { if }&V_{{\mathrm {1}}} \triangleright \ddagger \vdash IS \Downarrow V_{{\mathrm {2}}} \triangleright \ddagger \text { and } \\&\sigma : y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} \models V_{{\mathrm {1}}} \triangleright \ddagger : \{ y_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright \ddagger \,|\,y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \} \\ \text { then }&\sigma : y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} \models V_{{\mathrm {2}}} \triangleright \ddagger : \{ y_{{\mathrm {2}}} : T_{{\mathrm {2}}} \triangleright \ddagger \,|\,\varphi _{{\mathrm {2}}} \}, \end{aligned} \end{aligned}$$

then \(\varGamma \models \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( \langle IS \rangle , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}}\) for any \(\varGamma \).

Proof

By the definition of the semantics of \(\mathtt {call}\). \(\square \)

Lemma 14

If \(V_{{\mathrm {1}}} \triangleright \ddagger \vdash IS \Downarrow V_{{\mathrm {2}}} \triangleright \ddagger \), \( V_{{\mathrm {1}}} : T_{{\mathrm {1}}} \), \( V_{{\mathrm {2}}} : T_{{\mathrm {2}}} \), and \( \langle IS \rangle : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \), then \(\varGamma \models \mathtt {call} ( \langle IS \rangle , V_{{\mathrm {1}}} ) = V_{{\mathrm {2}}} \) for any \(\varGamma \).

Proof

By the definition of the semantics of \(\mathtt {call}\). \(\square \)

Lemma 15

If \(\varGamma \vdash \varPhi _{{\mathrm {1}}} <: \varPhi _{{\mathrm {2}}}\) and \(\sigma : \varGamma \models S : \varPhi _{{\mathrm {1}}}\), then \(\sigma : \varGamma \models S : \varPhi _{{\mathrm {2}}}\).

Proof

Straightforward from Definition 4. \(\square \)

Proof of Theorem 9

It is proved together with a similar statement

If \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; I \; \varPhi _{{\mathrm {2}}} \), \(S_{{\mathrm {1}}} \vdash I \Downarrow S_{{\mathrm {2}}}\), and \(\sigma : \varGamma \models S_{{\mathrm {1}}} : \varPhi _{{\mathrm {1}}}\), then \(\sigma : \varGamma \models S_{{\mathrm {2}}} : \varPhi _{{\mathrm {2}}}\).

for a single instruction by simultaneous induction on \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; IS \; \varPhi _{{\mathrm {2}}} \) and \( \varGamma \vdash \varPhi _{{\mathrm {1}}} \; I \; \varPhi _{{\mathrm {2}}} \) with case analysis on the last typing rule used. We show a few representative cases.

Case (RT-Dip)::

We have \( I = \mathtt {DIP} \, IS \) and \( \varPhi _{{\mathrm {1}}} = \{ x : T \triangleright \varUpsilon \,|\,\varphi \} \) and \( \varPhi _{{\mathrm {2}}} = \{ x : T \triangleright \varUpsilon ' \,|\,\varphi ' \} \) and \( \varGamma , x : T \vdash \{ \varUpsilon \,|\,\varphi \} \; IS \; \{ \varUpsilon ' \,|\,\varphi ' \} \) for some \( IS \), \(x\), \(T\), \(\varUpsilon \), \(\varUpsilon '\), \(\varphi \), and \(\varphi '\). By (E-Dip), we have \( S_{{\mathrm {1}}} = V \triangleright S'_{{\mathrm {1}}} \) and \( S_{{\mathrm {2}}} = V \triangleright S'_{{\mathrm {2}}} \) and \(S'_{{\mathrm {1}}} \vdash IS \Downarrow S'_{{\mathrm {2}}}\) for some \(V\), \(S'_{{\mathrm {1}}}\), and \(S'_{{\mathrm {2}}}\). By Lemma 10, we have \( \sigma [ x \mapsto V ] : \varGamma , x : T \models S'_{{\mathrm {1}}} : \{ \varUpsilon \,|\,\varphi \} \). By applying IH, we have \( \sigma [ x \mapsto V ] : \varGamma , x : T \models S'_{{\mathrm {2}}} : \{ \varUpsilon ' \,|\,\varphi ' \} \) from which \(\sigma : \varGamma \models V \triangleright S'_{{\mathrm {2}}} : \{ x : T \triangleright \varUpsilon ' \,|\,\varphi ' \}\) follows.

Case (RT-Loop)::

We have \( I = \mathtt {LOOP} \, IS \) and \( \varPhi _{{\mathrm {1}}} = \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,\varphi \} \) and \( \varPhi _{{\mathrm {2}}} = \{ \varUpsilon \,|\,\exists \, x : \mathtt {int} . \varphi \wedge x = 0 \} \) and \( \varGamma \vdash \{ \varUpsilon \,|\,\exists \, x : \mathtt {int} . \varphi \wedge x \ne 0 \} \; IS \; \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,\varphi \} \) and \( S_{{\mathrm {1}}} = i \triangleright S \) for some \( IS \), \(x\), \(\varUpsilon \), \(i\), \(S\), and \(\varphi \). By IH, we have that, \( \text {for any } S'_{{\mathrm {1}}}, S'_{{\mathrm {2}}}, \text { if } S'_{{\mathrm {1}}} \vdash IS \Downarrow S'_{{\mathrm {2}}} \text { and } \sigma : \varGamma \models S'_{{\mathrm {1}}} : \{ \varUpsilon \,|\,\exists \, x : \mathtt {int} . \varphi \wedge x \ne 0 \}, \text { then } \sigma : \varGamma \models S'_{{\mathrm {2}}} : \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,\varphi \}.\) The goal easily follows from Lemma 11.

Case (RT-Lambda)::

We have \( I = \mathtt {LAMBDA} \, T_{{\mathrm {1}}} \, T_{{\mathrm {2}}} \, IS \) and \( \varPhi _{{\mathrm {1}}} = \{ \varUpsilon \,|\,\varphi \} \) and \( \varPhi _{{\mathrm {2}}} = \{ x : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \triangleright \varUpsilon \,|\,\varphi \wedge \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( x , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}} \} \) and \( y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} \vdash \{ y_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright \ddagger \,|\,y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \} \; IS \; \{ y_{{\mathrm {2}}} : T_{{\mathrm {2}}} \triangleright \ddagger \,|\,\varphi _{{\mathrm {2}}} \} \) and \( x \notin \text {dom}( \varGamma , {\widehat{\varUpsilon }} ) \cup \{ y_{{\mathrm {1}}} , y'_{{\mathrm {1}}} , y_{{\mathrm {2}}} \} \) and \( y_{{\mathrm {1}}} \ne y'_{{\mathrm {1}}} \) and \( y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} \vdash \varphi _{{\mathrm {1}}} : {*} \) for some \( IS \), \(x\), \(y_{{\mathrm {1}}}\), \(y'_{{\mathrm {1}}}\), \(y_{{\mathrm {2}}}\), \(T_{{\mathrm {1}}}\), \(T_{{\mathrm {2}}}\), \(\varUpsilon \), \(\varphi \), \(\varphi _{{\mathrm {1}}}\), and \(\varphi _{{\mathrm {2}}}\). By (E-Lambda), we also have \( S_{{\mathrm {2}}} = \langle IS \rangle \triangleright S_{{\mathrm {1}}} \). By IH, we have that, \( \text {for any } V_{{\mathrm {1}}}, V_{{\mathrm {2}}}, \sigma , \text { if } V_{{\mathrm {1}}} \triangleright \ddagger \vdash IS \Downarrow V_{{\mathrm {2}}} \triangleright \ddagger \text { and } \sigma : y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} \models V_{{\mathrm {1}}} \triangleright \ddagger : \{ y_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright \ddagger \,|\,y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \}, \text { then } \sigma : y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} \models V_{{\mathrm {2}}} \triangleright \ddagger : \{ y_{{\mathrm {2}}} : T_{{\mathrm {2}}} \triangleright \ddagger \,|\,\varphi _{{\mathrm {2}}} \}.\) By Lemma 13, \( \varGamma , {\widehat{\varUpsilon }} \models \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( \langle IS \rangle , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}}. \) Then, it is easy to show \( \sigma : \varGamma \models S_{{\mathrm {1}}} : \{ \varUpsilon \,|\,\varphi \wedge \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( \langle IS \rangle , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}} \} . \) Then, we have

$$\begin{aligned}& \sigma : \varGamma \models S_{{\mathrm {1}}} : \{ \varUpsilon \,|\,\exists \, x : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . ( \varphi \wedge {} \\ &\quad \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( x , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}} ) \wedge x = \langle IS \rangle \}. \end{aligned}$$

Therefore, by Lemma 10, we have

$$\begin{aligned} &\sigma : \varGamma \models \langle IS \rangle \triangleright S_{{\mathrm {1}}} : \{ x : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \triangleright \varUpsilon \,|\,\varphi \wedge {} \\ &\quad \forall \, y'_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {1}}} : T_{{\mathrm {1}}} , y_{{\mathrm {2}}} : T_{{\mathrm {2}}} . y'_{{\mathrm {1}}} = y_{{\mathrm {1}}} \wedge \varphi _{{\mathrm {1}}} \wedge \mathtt {call} ( x , y'_{{\mathrm {1}}} ) = y_{{\mathrm {2}}} \implies \varphi _{{\mathrm {2}}} \} \end{aligned}$$

as required.

Case (RT-Exec)::

We have \( I = \mathtt {EXEC} \) and, for some \(x_{{\mathrm {1}}}\), \(x_{{\mathrm {2}}}\), \(x_{{\mathrm {3}}}\), \(T_{{\mathrm {1}}}\), \(T_{{\mathrm {2}}}\), \(\varUpsilon \), and \(\varphi \), \( \varPhi _{{\mathrm {1}}} = \{ x_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \triangleright \varUpsilon \,|\,\varphi \} \) and \( \varPhi _{{\mathrm {2}}} = \{ x_{{\mathrm {3}}} : T_{{\mathrm {2}}} \triangleright \varUpsilon \,|\,\exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} , x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . \varphi \wedge \mathtt {call} ( x_{{\mathrm {2}}} , x_{{\mathrm {1}}} ) = x_{{\mathrm {3}}} \} \) and \( x_{{\mathrm {3}}} \notin \text {dom}( \varGamma , \widehat{ x_{{\mathrm {1}}} : T_{{\mathrm {1}}} \triangleright x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} \triangleright \varUpsilon } ) \). By (E-Exec), we have \( S_{{\mathrm {1}}} = V_{{\mathrm {1}}} \triangleright \langle IS \rangle \triangleright S \) and \( S_{{\mathrm {2}}} = V_{{\mathrm {2}}} \triangleright S \) and \( V_{{\mathrm {1}}} \triangleright \ddagger \vdash IS \Downarrow V_{{\mathrm {2}}} \triangleright \ddagger \) for some \(V_{{\mathrm {1}}}\), \(V_{{\mathrm {2}}}\), \( IS \), and \(S\). By the assumption \( \sigma : \varGamma \models S_{{\mathrm {1}}} : \varPhi _{{\mathrm {1}}} \), we have \( \sigma : \varGamma \models S : \{ \varUpsilon \,|\,\exists \, x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . ( \exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} . \varphi \wedge x_{{\mathrm {1}}} = V_{{\mathrm {1}}} ) \wedge x_{{\mathrm {2}}} = \langle IS \rangle \}. \) By Lemma 14, we have \( \varGamma , {\widehat{\varUpsilon }} \models \mathtt {call} ( \langle IS \rangle , V_{{\mathrm {1}}} ) = V_{{\mathrm {2}}}. \) Therefore, we have \( \sigma : \varGamma \models S : \{ \varUpsilon \,|\,( \exists \, x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . ( \exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} . \varphi \wedge x_{{\mathrm {1}}} = V_{{\mathrm {1}}} ) \wedge x_{{\mathrm {2}}} = \langle IS \rangle ) \wedge \mathtt {call} ( \langle IS \rangle , V_{{\mathrm {1}}} ) = V_{{\mathrm {2}}} \}. \) Finally, we have \(\sigma : \varGamma \models S : \{ \varUpsilon \,|\,\exists \, x_{{\mathrm {3}}} : T_{{\mathrm {2}}} . ( \exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} , x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . \varphi \wedge \mathtt {call} ( x_{{\mathrm {2}}} , x_{{\mathrm {1}}} ) = x_{{\mathrm {3}}} ) \wedge x_{{\mathrm {3}}} = V_{{\mathrm {2}}} \}\) and, thus, \(\sigma : \varGamma \models V_{{\mathrm {2}}} \triangleright S : \{ x_{{\mathrm {3}}} : T_{{\mathrm {2}}} \triangleright \varUpsilon \,|\,( \exists \, x_{{\mathrm {1}}} : T_{{\mathrm {1}}} , x_{{\mathrm {2}}} : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}} . \varphi \wedge \mathtt {call} ( x_{{\mathrm {2}}} , x_{{\mathrm {1}}} ) = x_{{\mathrm {3}}} ) \}\) as required.

Case (RT-Sub):

We have \( \varGamma \vdash \varPhi _{{\mathrm {1}}} <: \varPhi '_{{\mathrm {1}}} \) and \( \varGamma \vdash \varPhi '_{{\mathrm {2}}} <: \varPhi _{{\mathrm {2}}} \) and \( \varGamma \vdash \varPhi '_{{\mathrm {1}}} \; I \; \varPhi '_{{\mathrm {2}}} \) for some \(\varPhi '_{{\mathrm {1}}}\) and \(\varPhi '_{{\mathrm {2}}}\). By Lemma 15, we have \( \sigma : \varGamma \models S_{{\mathrm {1}}} : \varPhi '_{{\mathrm {1}}}. \) By IH, we have \( \sigma : \varGamma \models S_{{\mathrm {2}}} : \varPhi '_{{\mathrm {2}}}. \) Then, the goal follows from Lemma 15. \(\square \)

Extension with Exceptions

The type system implemented in Helmholtz is extended to handle instruction FAILWITH, which immediately aborts the execution, discarding all the stack elements but the top element. The type judgment form is extended to

$$ \begin{aligned} \varGamma \vdash \varPhi _{{\mathrm {1}}} \; IS \; \varPhi _{{\mathrm {2}}} \mathrel { \& } \varPhi _{{\mathrm {3}}}, \end{aligned}$$

which means that, if \( IS \) is executed under a stack satisfying \(\varPhi _{{\mathrm {1}}}\), then the resulting stack satisfies \(\varPhi _{{\mathrm {2}}}\) (if the execution terminates normally) or \(\varPhi _{{\mathrm {3}}}\) (if it is aborted by FAILWITH). The typing rule for instruction FAILWITH, which raises an exception with the value at the stack top, is given as follows:

$$ \begin{aligned} \varGamma \vdash \{ x : T \triangleright \varUpsilon \,|\,\varphi \}\; \texttt {FAILWITH} \; \{ \varUpsilon \,|\,\bot \} \mathrel { \& } \{ \, \mathtt {err} \, \,|\,\exists \, x : T , {\widehat{\varUpsilon }} . \varphi \wedge x = \mathtt {err} \}. \end{aligned}$$

The rule expresses that, if FAILWITH is executed under a non-empty stack that satisfies \(\varphi \), then the program point just after the instruction is not reachable (hence, \(\{ \varUpsilon \,|\,\bot \}\)). The refinement \(\exists \, x : T , {\widehat{\varUpsilon }} . \varphi \wedge x = \mathtt {err}\) for the exception case states that \(\varphi \) in the pre-condition holds with the top element \(x\) equal to the raised value \(\mathtt {err}\); since \(x\) is not in scope in the exception refinement, \(x\) is bound by an existential quantifier. Most of the other typing rules can be extended with the “&” part easily.
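For instance, instantiating this rule with \(T = \mathtt {int}\) and \(\varphi = (x = 3)\) specializes it to

$$ \begin{aligned} \varGamma \vdash \{ x : \mathtt {int} \triangleright \varUpsilon \,|\,x = 3 \}\; \texttt {FAILWITH} \; \{ \varUpsilon \,|\,\bot \} \mathrel { \& } \{ \, \mathtt {err} \, \,|\,\exists \, x : \mathtt {int} , {\widehat{\varUpsilon }} . \, x = 3 \wedge x = \mathtt {err} \}, \end{aligned}$$

so, in the exception case, the raised value \(\mathtt {err}\) is known to equal 3.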

For (RT-Lambda) and (RT-Exec), we first extend the assertion language with a new predicate \( \texttt {call\_err}( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} ) = t_{{\mathrm {3}}} \), meaning that the call of \(t_{{\mathrm {1}}}\) with \(t_{{\mathrm {2}}}\) aborts with the value \(t_{{\mathrm {3}}}\). (The semantics of call is unchanged.) Using the new predicate, (RT-Lambda) and (RT-Exec) are modified as in Fig. 10.

Fig. 10
Modified typing rules for \(\texttt {LAMBDA}\) and \(\texttt {EXEC}\)

Tool Implementation

In this section, we present \(\textsc {Helmholtz} \), the verification tool based on the refinement type system. We first discuss how Michelson code can be annotated. Then, we give an overview of the verification algorithm, which reduces the verification problem to SMT solving, and discuss how Michelson-specific features are encoded. Finally, we show a case study of contract verification and present verification experiments.

Annotations

Helmholtz supports several forms of annotations (surrounded by \(\texttt {<<}\) and \(\texttt {>>}\) in the source code) other than \(\texttt {ContractAnnot}\), which was explained in Section ‘Overview of HELMHOLTZ and Michelson’. As we have already seen, the syntax of refinement stack types used in the implementation is slightly different from the formal definition: we use a colon-separated list of ML-like patterns to bind stack values and ML-like expressions to describe the predicates, which have to be quantifier free (mainly because state-of-the-art SMT solvers do not handle quantifiers very well). We explain the main annotation constructs in the following.

Fig. 11
lambda.tz, which uses higher-order functions, and length.tz, which uses a measure function in the contract annotation

Assert \(\varPhi \) and Assume \(\varPhi \) can appear before or after an instruction. The former asserts that the stack at the annotated program location satisfies the type \(\varPhi \); the assertion is verified by Helmholtz. The latter makes Helmholtz assume that the stack satisfies the type \(\varPhi \) at the annotated program location; a user can thus give a hint to Helmholtz by using Assume \(\varPhi \). The user is responsible for the correctness of such a hint; if an Assume annotation is incorrect, the verification result may also be incorrect.

An annotation LoopInv \(\varPhi \) may appear before a loop instruction (e.g., \(\texttt {LOOP}\) and \(\texttt {ITER}\)). It asserts that \(\varPhi \) is a loop invariant of the loop instruction. In the current implementation, annotating a loop invariant using LoopInv \(\varPhi \) is mandatory for a loop instruction. Helmholtz checks that \(\varPhi \) is indeed a loop invariant and uses it to verify the rest of the program.

In the current implementation, a \(\texttt {LAMBDA}\) instruction, which pushes a function on the top of the stack, must be accompanied by a \(\texttt {LambdaAnnot}\) annotation. \(\texttt {LambdaAnnot}\) comes with a specification of the pushed function written in the same way as \(\texttt {ContractAnnot}\). Concretely, a specification of the form \( \varPhi _\text {pre} \rightarrow \varPhi _\text {post} \mathrel { \& } \varPhi _\text {abpost} (x_{{\mathrm {1}}}:T_{{\mathrm {1}}},\dots ,x_{n}:T_{n})\) specifies the precondition \(\varPhi _\text {pre}\), the (normal) postcondition \(\varPhi _\text {post}\), and the (abnormal) postcondition \(\varPhi _\text {abpost}\) as refinement stack types. The binding \((x_{{\mathrm {1}}}:T_{{\mathrm {1}}},\dots ,x_{n}:T_{n})\) introduces ghost variables that can be used in the annotations in the body of the annotated \(\texttt {LAMBDA}\) instruction;Footnote 6 it can be omitted if it is empty.

The first contract in Fig. 11, which pushes a function that takes a pair of integers and returns the sum of them, exemplifies \(\texttt {LambdaAnnot}\). The annotated type of the function (Line 6) expresses that it returns 4 if it is fed with a pair (3, 1). The ghost variables a and b are used in the annotations \(\texttt {Assume}\) (Line 10) and \(\texttt {Assert}\) (Line 12) in the body to denote the first and the second arguments of the pair passed to this function.

To describe properties of recursive data structures, Helmholtz supports measure functions, introduced by Kawaguchi et al. [11] and also supported in Liquid Haskell [24]. Measure functions are a potent technique for handling such properties without universal quantifiers. (Recall that we should avoid quantifiers as much as possible.) A measure function is a (recursive) function over a recursive data structure that can be used in assertions. The annotation \(\texttt {Measure}\) \(x : T_{{\mathrm {1}}} \rightarrow T_{{\mathrm {2}}}\ \mathtt {where}\ p_1 = e_1 \,|\,\dots \,|\,p_n = e_n\) defines a measure function x over the type \(T_{{\mathrm {1}}}\). The measure function \(x\) takes a value of type \(T_{{\mathrm {1}}}\), destructs it by pattern matching, and returns a value of type \(T_{{\mathrm {2}}}\); the metavariables p and e represent ML-like patterns and expressions. The second contract in Fig. 11, which computes the length of the list passed as a parameter, exemplifies the usage of the \(\texttt {Measure}\) annotation. This contract defines a measure function \(\texttt {len}\) that takes a list of integers and returns its length; it is used in \(\texttt {ContractAnnot}\) and \(\texttt {LoopInv}\).

Overview of the Verification Algorithm

Fig. 12
Operational semantics and typing rules for MAP

Helmholtz takes an annotated Michelson program and conducts typechecking based on the refinement type system in Section ‘Refinement Type System’. All the instructions except MAP can be dealt with by a straightforward extension of the typing rules presented so far. The exception, MAP, is a Michelson instruction that applies a function to each element of the list or associative array at the top of the stack. Figure 12 shows the operational semantics and typing rules for MAP on lists, where @ denotes the list concatenation operator. They are similar to those for ITER, but the fact that MAP manipulates and pushes back each element of the listFootnote 7 (cf. \(V_{{\mathrm {1}}}\) to \(V'_{{\mathrm {1}}}\) and \(V_{{\mathrm {2}}}\) to \(V'_{{\mathrm {2}}}\) in (E-MapCons)) makes a difference. In the typing rule (RT-Map), \(z'\) stands for the list of elements manipulated so far; hence, it is assumed that \(z' = [ ]\) at the beginning of the execution of MAP. One non-trivial issue is that, unlike (RT-Loop) or (RT-Iter), the precondition itself is not a loop invariant because, in the middle of the manipulation, \(z' = [ ]\) does not hold. Although we do not show a proof, the typing rule (RT-Map) is sound. Intuitively, this is because MAP can be simulated by ITER, DIP, and other instructions, and the rule (RT-Map) can be obtained from the typing derivation of the simulating instruction sequence.

The typechecking procedure (1) computes the verification conditions (VCs) for the program to be well typed and (2) discharges them using an SMT solver. The latter step is standard: we decide the validity of the generated VCs using an SMT solver (Z3 in the current implementation). We explain the VC-generation step in detail.

For an annotated contract, Helmholtz conducts forward reasoning starting from the precondition and generates VCs if necessary. During the forward reasoning, Helmholtz keeps track of the \(\varGamma \)-and-\(\varUpsilon \) part of the type judgment.

The typing rules are designed so that they enable forward reasoning if the program is simply typed. For example, consider the rule (RT-Add) in Fig. 7. This rule can be read as a rule that computes the postcondition \(\exists \, x_{{\mathrm {1}}} : \mathtt {int} , x_{{\mathrm {2}}} : \mathtt {int} . \varphi \wedge x_{{\mathrm {1}}} + x_{{\mathrm {2}}} = x_{{\mathrm {3}}}\) from a precondition \(\varphi \) if the first two elements in \(\varUpsilon \) are \(x_{{\mathrm {1}}}\) and \(x_{{\mathrm {2}}}\). The other rules can also be read as postcondition-generation rules in the same way.
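For instance, if \(\varphi \) is \(x_{{\mathrm {1}}} = 3 \wedge x_{{\mathrm {2}}} = 1\), forward reasoning through ADD yields the postcondition

$$\begin{aligned} \exists \, x_{{\mathrm {1}}} : \mathtt {int} , x_{{\mathrm {2}}} : \mathtt {int} . \, x_{{\mathrm {1}}} = 3 \wedge x_{{\mathrm {2}}} = 1 \wedge x_{{\mathrm {1}}} + x_{{\mathrm {2}}} = x_{{\mathrm {3}}}, \end{aligned}$$

which entails \(x_{{\mathrm {3}}} = 4\).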

There are three places where Helmholtz generates a verification condition.

  • At the end of the program: Helmholtz generates a condition that ensures that the computed postcondition of the entire program implies the postcondition annotated to the program.

  • Before and after instruction LAMBDA: Helmholtz generates conditions that ensure that the pre- and post-conditions of the instruction LAMBDA are as annotated in LambdaAnnot.

  • At a loop instruction: Helmholtz generates verification conditions that ensure the condition annotated by LoopInv is indeed a loop invariant of this instruction.

A VC generated by Helmholtz at these places is of the form \(\forall \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}. \varphi _1 \Longrightarrow \varphi _2\), where \(\mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}\) is a sequence of bindings.

To discharge each VC, as many verification-condition discharging procedures do, Helmholtz checks whether its negation, \(\exists \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}. \varphi _1 \wedge \lnot \varphi _2\), is satisfiable; if it is unsatisfiable, then the original VC is successfully discharged. We remark that our type system is designed so that this satisfiability check is essentially quantifier-free for a program that does not use \(\texttt {LAMBDA}\) and \(\texttt {EXEC}\). Indeed, \(\varphi _2\) comes only from the annotations, which are quantifier-free, and \(\varphi _1\) comes from the postcondition-computation procedure, which, for the instructions other than \(\texttt {LAMBDA}\) and \(\texttt {EXEC}\), produces a formula of the form \(\exists \mathbf {x'}{{\,\mathrm{:}\,}}\mathbf {T'}. \varphi _1'\) with \(\varphi _1'\) quantifier-free; hence, the formula \(\exists \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}. \varphi _1 \wedge \lnot \varphi _2\) is equivalent to \(\exists \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T},\mathbf {x'}{{\,\mathrm{:}\,}}\mathbf {T'}. \varphi _1' \wedge \lnot \varphi _2\).
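As a minimal illustration of this step (the VC below is a made-up example, not one generated by Helmholtz), a VC such as \(\forall x{{\,\mathrm{:}\,}}\mathtt {int} . \, x \ge 1 \implies x + 1 \ge 2\) would be discharged by asserting its negation and checking satisfiability with Z3:

; A made-up VC: forall x:int. x >= 1 ==> x + 1 >= 2.
; Assert its negation and check satisfiability.
(declare-const x Int)
(assert (>= x 1))              ; phi_1
(assert (not (>= (+ x 1) 2)))  ; not phi_2
(check-sat)                    ; unsat, so the original VC is valid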

Encoding Michelson-Specific Features

Since our assertion language includes several features that are specific to Michelson or to the assertion language itself, Helmholtz needs to encode them when discharging VCs so that Z3 can handle them. We explain how this encoding is conducted.

Michelson-Specific Functions and Predicates

We encode several Michelson-specific functions using uninterpreted functions. For example, Helmholtz assumes the following typing rule for instruction SHA256, which converts the top element to its SHA256 hash.

$$\begin{aligned} \varGamma \vdash \{ x:\mathtt {bytes} \rhd \varUpsilon \,|\,\phi \} \; \mathtt {SHA256} \; \{ y:\mathtt {bytes} \rhd \varUpsilon \,|\,\exists x. \phi \wedge y = \text {sha256}(x) \}. \end{aligned}$$

In the post condition, we use an uninterpreted function sha256 to express the SHA256 hash of x. In Z3, this uninterpreted function is declared as follows.

figure a

The first line declares the signature of sha256. The second line asserts the axiom for sha256 that the length of a hash value is always 32.
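A plausible rendering of such a declaration is sketched below; the sort name Bytes and the length function len_bytes are illustrative assumptions on our part, not necessarily the names used by Helmholtz.

; Sketch: bytes values modeled by an uninterpreted sort Bytes with a length function.
(declare-sort Bytes 0)
(declare-fun len_bytes (Bytes) Int)
; Signature of the uninterpreted hash function.
(declare-fun sha256 (Bytes) Bytes)
; Axiom: the length of a hash value is always 32.
(assert (forall ((x Bytes)) (= (len_bytes (sha256 x)) 32)))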

Notice that Helmholtz cannot prove that a calculated hash is equal to a specific constant since the uninterpreted function is only loosely axiomatized. For instance, Helmholtz cannot prove:

$$\begin{aligned}&\small \varGamma \vdash \{x:\mathtt {bytes} \rhd \varUpsilon \,|\,x = 0 \} \; \mathtt {SHA256} \; \{y:\mathtt {bytes} \rhd \varUpsilon \,|\,\exists x. \, x = 0 \wedge y \\ &\quad =\text {``6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d''} \}. \end{aligned}$$

Nevertheless, practical contracts can still be verified with this light-weight axiomatization. For example, in Fig. 13, we use the sig uninterpreted function, which is given no specific axiom. Despite this, the contract can be verified: the typing rule for \(\texttt {CHECK\_SIGNATURE}\) in Line 15 ensures that the Boolean value at the stack top after the instruction is equal to \(\texttt {sig pubkey sign (pack data)}\) in Line 8, and the \(\texttt {ASSERT}\) just after the instruction ensures that this value is \(\texttt {True}\) upon normal termination. That is why Helmholtz can verify the contract without any axiom about the behavior of the sig function itself.

Implementation of Measure Functions

A measure function is encoded as an uninterpreted function accompanied by axioms that specify the behavior of the function as defined by the Measure annotation. Consider, for example, the following measure function for lists.

figure b

Theoretically, one could insert the following declarations and assertions when generating the input to Z3 to encode this definition:

figure c
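For concreteness, assuming that \(\texttt {f}\) is the list-length measure over integer lists (the actual definition in the figure above may differ), such a naive encoding could look as follows; the datatype declaration and the names are illustrative:

; Sketch of a naive encoding, assuming for concreteness that f computes the
; length of an integer list.
(declare-datatypes ((IntList 0)) (((nil) (cons (head Int) (tail IntList)))))
(declare-fun f (IntList) Int)
; Axioms mirroring the two branches of the measure definition.
(assert (= (f nil) 0))
(assert (forall ((h Int) (t IntList))
  (= (f (cons h t)) (+ 1 (f t)))))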

However, Z3 tends to time out if we naively insert such axioms into the Z3 input, because the encoded definition contains a universal quantifier.

To address this problem, Helmholtz rewrites VCs so that heuristically instantiated conditions on a measure function are available where necessary. Consider the above definition of \(\texttt {f}\) as an example. Suppose Helmholtz obtains a VC of the form \(\exists \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}. \varphi _1 \wedge \lnot \varphi _2\) mentioned in Section ‘Overview of the Verification Algorithm’ and \(\varphi _1\) and \(\varphi _2\) contain (cons \(e_{h,i}\) \(e_{t,i}\)) for \(i \in \{1,\dots ,N\}\). Then, Helmholtz constructs the formula \(\varphi _{ meas } {:}{=} \bigwedge _{i \in \{1,\dots ,N\}} \texttt {(= (f (cons}\ e_{h,i}\ e_{t,i}{} \texttt {)) e2)}\), where the pattern variables in \(\texttt {e2}\) are instantiated with \(e_{h,i}\) and \(e_{t,i}\), and rewrites the VC to \(\exists \mathbf {x}{{\,\mathrm{:}\,}}\mathbf {T}. \varphi _{ meas } \wedge \varphi _1 \wedge \lnot \varphi _2\).
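Continuing the same illustrative assumption that \(\texttt {f}\) computes the list length, if \(\varphi _1\) contains the term (cons 5 nil), the corresponding conjunct of \(\varphi _{ meas }\) is

(= (f (cons 5 nil)) (+ 1 (f nil)))

which lets Z3 reason about this particular application of \(\texttt {f}\) without any universally quantified axiom.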

We remark that, in Liquid Haskell, measure functions are treated as a part of the type system [11]: the asserted axioms are systematically (instantiated and) embedded into the typing rules. In Helmholtz, measure functions are treated as an ingredient that is orthogonal to the type system; the type system is oblivious to measure functions until their definitions are inserted into the Z3 input.

Overloaded Functions

Due to the polymorphically typed instructions in Michelson, our assertion language incorporates polymorphic uninterpreted functions. For example, Michelson has an instruction \(\texttt {PACK}\), which pops a value (of any type) from the stack, serializes it to a binary representation, and pushes the serialized value. Helmholtz typechecks this instruction based on the following rule.

$$\begin{aligned} \varGamma \vdash \{x:T \rhd \varUpsilon \,|\,\varphi \} \; \mathtt {PACK} \; \{y:\mathtt {bytes} \rhd \varUpsilon \,|\,\exists x:T. \varphi \wedge y = \mathrm {pack}(x) \} \end{aligned}$$

The term \(\mathrm {pack}(x)\) in the postcondition represents a serialized value created from x. Since x may be of any simple type T, \(\mathrm {pack}\) must be polymorphic.

Having a polymorphic uninterpreted function in assertions is tricky because Z3 does not support polymorphic values. Helmholtz encodes a polymorphic uninterpreted function into monomorphic functions whose names are generated by mangling the name of the instantiated parameter type. For example, if x has type \(\mathtt {int}\), the above \(\mathrm {pack}(x)\) is encoded as a Z3 function \(\mathtt {pack!int}(x)\) whose type is \(\mathtt {int} \rightarrow \mathtt {bytes}\). Although there are infinitely many types, the number of the encoded functions is finite since only finitely many types appear in a single contract.
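For instance, the monomorphic instance of pack for the parameter type \(\mathtt {int}\) could be declared along the following lines; the sort name Bytes is again an illustrative assumption.

; Monomorphic instance of pack for the parameter type int.
(declare-sort Bytes 0)
(declare-fun pack!int (Int) Bytes)
; Further instances, e.g. pack!string or pack!bool, would be declared only for
; the parameter types that actually occur in the contract being verified.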

Michelson-Specific Types

In encoding a VC as Z3 constraints, Helmholtz maps types in Michelson into sorts in Z3, e.g., the Michelson type nat for nonnegative integers to the Z3 sort Int. A naive mapping from Michelson types to Z3 sorts is problematic; for example, \(\forall x:\mathtt {nat}. x \ge 0\) is valid in Helmholtz, but the naively encoded formula (forall ((x Int)) (>= x 0)) is invalid in Z3. The naive encoding ignores the fact that a value of type nat is nonnegative.

To address the problem, we adapt the method of encoding a many-sorted logic formula into a single-sorted one [5]. Concretely, we define a sort predicate \(P_T(x)\) for each sort T, which characterizes the values of the sort T; for example, \(P_\mathtt {nat}(x) {:}{=} x \ge 0\). We also define sort predicates for compound data types.

Using the sort predicates, we can encode a VC into a Z3 constraint as follows: \(\forall x:T. \phi \) is encoded into \(\forall x:[\![ T ]\!]. P_T(x) \Rightarrow \phi \) and \(\exists x:T. \phi \) is encoded into \(\exists x:[\![ T ]\!]. P_T(x) \wedge \phi \), where \([\![ T ]\!]\) denotes the target sort of T (e.g., \([\![\mathtt {nat} ]\!] = \mathtt {Int}\)). Furthermore, we also add an axiom about the codomain of each uninterpreted function: \(\forall \mathbf {x_i}:\mathbf {T_i}. P_T(f(\mathbf {x_i}))\) for a function f of type \(\mathbf {T_i} \rightarrow T\).
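As a small sketch of this encoding for the nat example above (checked, as before, by asserting the negation):

; Sort predicate for nat: P_nat(x) := x >= 0, with nat mapped to the Z3 sort Int.
; The Helmholtz-level formula  forall x:nat. x >= 0  is encoded with the guard P_nat:
;   (forall ((x Int)) (=> (>= x 0) (>= x 0)))
; which is valid, unlike the naive (forall ((x Int)) (>= x 0)).
(assert (not (forall ((x Int)) (=> (>= x 0) (>= x 0)))))
(check-sat) ; unsat, i.e., the guarded encoding is valid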

Case Study: Contract with Signature Verification

Fig. 13
checksig.tz, which involves signature verification

Figure 13 presents the code of the contract \(\texttt {checksig.tz}\), which verifies that a sender indeed signed certain data using her private key. This contract uses instruction \(\texttt {CHECK\_SIGNATURE}\), which is supposed to be executed under a stack of the form \(\texttt {key}\) \(\triangleright \) \(\texttt {sig}\) \(\triangleright \) \(\texttt {bytes}\) \(\triangleright \) \(\texttt {tl}\), where \(\texttt {key}\) is a public key, \(\texttt {sig}\) is a signature, and \(\texttt {bytes}\) is some data. \(\texttt {CHECK\_SIGNATURE}\) pops these three values from the stack and pushes \(\texttt {True}\) if \(\texttt {sig}\) is a valid signature for \(\texttt {bytes}\) created with the private key corresponding to \(\texttt {key}\). The instruction \(\texttt {ASSERT}\) after \(\texttt {CHECK\_SIGNATURE}\) checks whether the signature verification has succeeded; it aborts the execution of the contract if the stack top is \(\texttt {False}\); otherwise, it pops the stack top (\(\texttt {True}\)) and proceeds to the next instruction.

The intended behavior of checksig.tz is as follows. It stores a pair of an address \(\texttt {addr}\), which is the address of a contract that takes a \(\texttt {string}\) parameter, and a public key \(\texttt {pubkey}\) in its storage. It takes a pair (sign,data) of type (pair signature string) as a parameter; here, \(\texttt {signature}\) is the primitive Michelson type for signatures. This contract terminates without exception if \(\texttt {sign}\) is created from the serialized (packed) representation of \(\texttt {data}\) and signed by the private key corresponding to \(\texttt {pubkey}\). In a normal termination, this contract transfers 1 \(\texttt {mutez}\) to the contract with address \(\texttt {addr}\). If this signature verification fails, then an exception is raised.

This behavior is expressed as a specification in the \(\texttt {ContractAnnot}\) annotation in \(\texttt {checksig.tz}\) as follows.

  • The refinement of the pre-condition part expresses that the first element \(\texttt {addr}\) of the storage is the address of a contract that takes a value of type \(\texttt {string}\) as a parameter. This is expressed by pattern matching on \(\texttt {contract\_opt addr}\), which checks whether a contract with the intended parameter type is stored at the address \(\texttt {addr}\) and returns that contract (wrapped by \(\texttt {Some}\)) if there is one. The intended parameter type is given by the pattern expression \(\texttt {Contract<string>\_}\), which matches a contract that takes a \(\texttt {string}\).

  • The refinement of the post-condition enforces the following three conditions: (1) the storage is not updated by this contract ((addr, pubkey) = new_store); (2) sign is the signature created from the packed bytes pack data of the string in the second element of the parameter and signed with the private key corresponding to the second element pubkey of the storage (sig pubkey sign (pack data)); and (3) the operation list ops returned by this contract is [Transfer data 1 c], which represents an operation that transfers 1 mutez to the contract c, bound to Contract addr, with the parameter data. The predicate sig and the function pack are primitives of the assertion language of Helmholtz.

  • The refinement in the exception part expresses that if an exception is raised, then the signature verification should have failed (not (sig pubkey sign (pack data))).

Helmholtz successfully verifies checksig.tz without any additional annotation in the \(\texttt {code}\) section. If we change the instruction \(\texttt {ASSERT}\) in Line 15 to \(\texttt {DROP}\) to let the contract drop the result of the signature verification (hence, an exception is not raised even if the signature verification fails), the verification fails as intended.

Experiments

We applied Helmholtz to various contracts; Table 1 is an excerpt of the results, in which we show (1) the number of instructions in each contract (column #instr.) and (2) the time (ms) spent to verify each contract. The experiments were conducted on macOS Big Sur 11.4 with a quad-core Intel Core i7 (2.3 GHz) and 32 GB RAM. We used Z3 version 4.8.10. The contracts boomerang.tz, deposit.tz, manager.tz, vote.tz, and reservoir.tz are taken from the benchmark of Mi-Cho-Coq [3]. checksig.tz, discussed above, is derived from weather_insurance.tz of the official Tezos test suite.Footnote 8 vote_for_delegate.tz and xcat.tz are taken from the official test suite; xcat.tz is simplified from the original. tzip.tz is taken from the Tezos Improvement Proposals.Footnote 9 triangular_num.tz is a simple test case that we made as an example of using \(\texttt {LOOP}\). The source code of these contracts can be found at the Web interface of Helmholtz. Each contract is supposed to work as follows.

  • boomerang.tz: Transfers the received amount of money to the source account.

  • deposit.tz: Transfers money to the sender if the address of the sender is identical to the one stored in the storage.

  • manager.tz: Calls the passed function if the address of the caller matches the address stored in the storage.

  • vote.tz: Accepts a vote for a candidate if the voter transfers a sufficient voting fee, and stores the tally.

  • tzip.tz: One of the components implementing the Tezos smart contract API. We verify one entrypoint of the contract.

  • checksig.tz: The one explained in Section ‘Case Study: Contract with Signature Verification’.

  • vote_for_delegate.tz: Delegates one's ballot in voting by stakeholders, which is one of the fundamental features of Tezos, to another account using a primitive operation of Tezos.

  • xcat.tz: Transfers all stored money to one of the two accounts specified beforehand if called with the correct password. The account that gets money is decided based on whether the contract is called before or after a deadline.

  • reservoir.tz: Sends a certain amount of money to one of two contracts depending on whether it is executed before or after a deadline.

  • triangular_num.tz: Calculates the sum from 1 to n, which is the passed parameter.

In the experiments, we verified that each contract indeed works according to the intention explained above. triangular_num.tz was the only contract that required a manual annotation for verification in the \(\texttt {code}\) section; we needed to specify a loop invariant in this contract.

Table 1 Benchmark result

Although the numbers of instructions in these contracts are not large, they capture essential features of smart contracts; every contract except triangular_num.tz executes transactions; deposit.tz and manager.tz check the identity of the caller; and checksig.tz conducts signature verification. The time spent on verification is small.

Related Work

There are several publications on the formalization of programming languages for writing smart contracts. Hirai [9] formalizes EVM, a low-level smart contract language of Ethereum and its implementation, using Lem [15], a language to specify semantic definitions; definitions written in Lem can be compiled into definitions in Coq, HOL4, and Isabelle/HOL. Based on the generated definition, he verifies several properties of Ethereum smart contracts using Isabelle/HOL. Bernardo et al. [3] implemented Mi-Cho-Coq, a formalization of the semantics of Michelson using the Coq proof assistant. They also verified several Michelson contracts. Compared to their approach, we aim to develop an automated verification tool for smart contracts. Park et al. [16] developed a formal verification tool for EVM by using the K-framework [18], which can be used to derive a symbolic model checker from a formally specified language semantics (in this case, formalized EVM semantics [8]), and successfully applied the derived model checker to a few EVM contracts. It would be interesting to formalize the semantics of Michelson in the K-framework to compare Helmholtz with the derived model checker.

The DAO attack [19], mentioned in Section ‘Introduction’, is one of the most notorious attacks on a smart contract. It exploits a vulnerability of a smart contract related to callbacks. Grossman et al. [7] proposed a type-based technique to verify that an execution of a smart contract that may contain callbacks is equivalent to another execution without any callback. This property, being effectively callback free (ECF), can be seen as one of the criteria for an execution of a smart contract not to be vulnerable to DAO-like attacks. Their type system focuses on verifying the ECF property of the execution of a smart contract, whereas ours concerns the verification of generic functional properties of a smart contract.

Benton proposes a program logic for a minimal stack-based programming language [2]. His program logic can give an assertion to a stack as our stack refinement types do. However, his language supports neither first-class functions nor instructions for dealing with smart contracts (e.g., signature verification).

Our type system is an extension of the Michelson type system with refinement types, which have been successfully applied to various programming languages [1, 11, 12, 17, 21, 23,24,25,26,27,28]. DTAL [27] is a notable example of an application of refinement types to an assembly language, a low-level language like Michelson. A DTAL program defines a computation using registers; we are not aware of refinement types for stack-based languages like Michelson.

We notice the resemblance between our type system and a program logic for PCF proposed by Honda and Yoshida [10], although the targets of verification are different.

Their logic supports a judgment of the form \(A \vdash e {{\,\mathrm{:}\,}}_u B\), where e is a PCF program, A is a pre-condition assertion, B is a post-condition assertion, and u represents the value that e evaluates to and can be used in B; this judgment resembles our type judgment in the formalization in Section ‘Refinement Type System for Mini-Michelson’. Their assertion language also incorporates a term expression \(f \bullet x\), which expresses the value resulting from the application of f to x; this expression resembles the formula \( \mathtt {call} ( t_{{\mathrm {1}}} , t_{{\mathrm {2}}} ) = t_{{\mathrm {3}}} \) used in a refinement predicate. We are not aware of an automated verifier implemented on top of their logic. Further comparison is interesting future work.

Conclusion

We described our automated verification tool Helmholtz for the smart contract language Michelson based on the refinement type system for Mini-Michelson . Helmholtz verifies whether a Michelson program follows a specification given in the form of a refinement type. We also demonstrated that Helmholtz successfully verifies various practical Michelson contracts.

Currently, Helmholtz supports approximately 80% of the instructions of the Michelson language. The definition of measure functions is also limited; for example, a measure function can take only one argument. We are currently extending Helmholtz so that it can deal with more programs.

Helmholtz currently verifies the behavior of a single contract, although a blockchain application often consists of multiple contracts whose calls are chained. To verify such an application as a whole, we plan to extend Helmholtz so that it can verify inter-contract behavior compositionally by combining the verification results of the individual contracts.