1 Introduction

Suppose you want to prove a conjecture such as:

or to find replacements for the ?s that would allow a dependent type such as the following to be inhabited:

$$\begin{aligned} \varPi {u{:}{(\varPi {x{:}a}.\, {\varPi {y{:}(b\,x)}.\, c\,x\,y})}}.\, \varPi {v{:}{(\varPi {x{:}a}.\, b\,x)}}.\, \varPi {w{:}a}.\, (c\,?\,?). \end{aligned}$$

In a mainstream interactive theorem proving system you would attempt this by giving instructions to a carefully constructed proof verification engine using a formal proof language, often with a read-eval-print loop for immediate feedback. Your instructions would guide the verifier through the twists and turns of a formal derivation until it is satisfied that all formal obligations have been established. Your language of instructions could be tactics-based (as in Coq), it could be a programming language itself (as in HOL-Light or Agda), or it could have a formal structure or be declarative (as in Isabelle/Isar). Despite these superficial differences, all such systems can broadly be called linguistic because the internal state of the verifier can only be modified by means of the formal proof language (and the whims—or semantics, if you prefer—of the interpreter of the language).

An alternative to such a linguistic system would be a system of direct manipulation, wherein there is a tangible representation of the state of the verifier that one can modify directly using such tools as one’s fingers, pointing devices, or eye movements. The verifier’s job is then to make sure that the direct manipulation attempts are allowed when they are logically permissible and prevented when they are not. A prominent example of such a direct manipulation system is the proof by pointing technique [3], where mouse clicks on the representation of a proof state (in a version of Coq) are given a meaning: a click on a connective deep in a formula is interpreted as a sequence of Coq tactics that bring the connective to the top, at which point it could be made to interact with the other hypotheses or the conclusion in the usual manner.

A generalization of this idea, called proof by linking, was proposed in [4]. It allows the user not only to point but also to link different subformulas, say with a multi-touch input device or with a drag-and-drop metaphor. There are two immediate benefits of linking over pointing: (1) the surrounding context of a formula is not destroyed because the linked subformulas are not brought to the top, and (2) the interaction mode is easier to describe to complete novices. For instance, a novice could be instructed to “match the atoms” for the first example above, in which case they might start by attempting the following link:

The linking procedure would interpret this link as a desire to “bring” the source atom to the destination atom. Without touching any other part of the conjecture except the smallest subformula containing both the source and the destination of the link, the conjecture would be rewritten to a different one:

The surrounding context of the link is preserved as nothing is brought to the top; instead, the source moves through the formula tree to meet the destination. The rewrites that underlie the transformation are provability preserving: if the rewritten conjecture is provable, then so is the original conjecture. Eventually, the conjecture (if true) would be reduced to a trivial theorem such as \(\top \). Note that the novice user does not need to know any proof language to draw these links, not even a conceptual proof system such as the sequent calculus.

The original proof by linking technique was proposed for classical linear logic and freely exploited the calculus of structures [17]. In this paper we show how to adapt the technique to intuitionistic logics and intuitionistic type theories, where the calculus of structures is not so well behaved [8, 18] (or, in the case of dependent type theory, entirely missing), and where preserving the context of the rewrites is a more delicate task. We do this by first defining the technique for intuitionistic first-order logic over \(\lambda \)-terms, and then we use an existing complete (shallow) embedding of dependent type theory in this logic [6, 15]. A secondary contribution is to give some insight into what a deep inference formalism might look like for dependent type theory.

2 Subformula Linking for Intuitionistic First-Order Logic

This section will serve both as an introduction to the subformula linking procedure, and as evidence that the technique can be applied to intuitionistic logics. Let us do this in two phases: first for the propositional fragment, and then for its extension with first-order quantification.

2.1 The Propositional Fragment

We will use the following grammar of formulas (written \(A, B, \dotsc \)), where atomic formulas are written in lowercase (\(a, b, \dotsc \)):

$$\begin{aligned} A, B \,{:}{:}{=}\, a \mid \top \mid \bot \mid A \wedge B \mid A \vee B \mid A \mathbin {\supset }B. \end{aligned}$$

Following usual conventions, the connectives \(\wedge \) and \(\vee \) are left-associative, while \(\mathbin {\supset }\) is right-associative; the binding priority from strongest to weakest is \(\wedge \), \(\vee \), \(\mathbin {\supset }\).

The true formulas of this calculus can be defined in terms of derivability in a variety of formal systems, such as the sequent calculi LJ or G3ip [11]. In this paper the precise sequent calculus is not of primary concern; we will simply write \({\varGamma \vdash C}\), where \(\varGamma \) is a multiset of formulas, to denote that the formula C is derivable from the assumptions \(\varGamma \) in any such calculus.

A positively signed formula context (written \({\mathcal C}\{ \}\)) is a formula with a single occurrence of a hole \(\{\}\) in the place where a positively signed subformula may occur; it is defined mutually recursively with a negatively signed formula context (written \({\mathcal A}\{ \}\)) by the following grammar.

The replacement of the hole in \({\mathcal C}\{ \}\) (resp. \({\mathcal A}\{ \}\)) with a formula A yields a new formula, which we write as \({\mathcal C}\{A\}\) (resp. \({\mathcal A}\{A\}\)). For instance, if \({\mathcal C}\{ \}\) is \(a \mathbin {\supset }(b \vee \{\})\), then \({\mathcal C}\{c \mathbin {\supset }\bot \}\) is \(a \mathbin {\supset }(b \vee (c \mathbin {\supset }\bot ))\).
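These definitions are directly implementable. The following Python sketch uses a representation of our own devising (nested tuples for formulas, a root-to-hole path for contexts); the sign of the hole flips exactly when the path descends into the left argument of an implication.

```python
# Formulas as nested tuples: ("atom", name), ("top",), ("bot",),
# ("and", l, r), ("or", l, r), ("imp", l, r).  A context is a path of
# "l"/"r" steps from the root to the hole.  (This representation and
# these names are ours, not the paper's.)

def hole_sign(formula, path, sign=+1):
    """Sign of the hole reached by `path`: +1 positive, -1 negative."""
    if not path:
        return sign
    step, rest = path[0], path[1:]
    if step == "l":
        # descending into the left of an implication flips the sign
        flip = -1 if formula[0] == "imp" else +1
        return hole_sign(formula[1], rest, sign * flip)
    return hole_sign(formula[2], rest, sign)

def plug(formula, path, repl):
    """C{A}: replace the subformula at `path` with `repl`."""
    if not path:
        return repl
    step, rest = path[0], path[1:]
    if step == "l":
        return (formula[0], plug(formula[1], rest, repl), formula[2])
    return (formula[0], formula[1], plug(formula[2], rest, repl))
```

For example, a hole on the left of an implication is negatively signed, and nesting a second implication-left flips it back to positive, matching the mutual recursion of the two context grammars.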

Theorem 1

Suppose that \({A \vdash B}\). Then:

  • for any positively signed context \({\mathcal C}\{ \}\), it is the case that \({{\mathcal C}\{A\} \vdash {\mathcal C}\{B\}}\); and

  • for any negatively signed context \({\mathcal A}\{ \}\), it is the case that \({{\mathcal A}\{B\} \vdash {\mathcal A}\{A\}}\).

Proof

Induction on the structure of the contexts \({\mathcal C}\{ \}\) or \({\mathcal A}\{ \}\).    \(\square \)

In order to define the subformula linking procedure for this calculus, we work with interaction formulas; an interaction formula is a formula where:

  • either a single occurrence of \(\mathbin {\supset }\) is replaced with the implication interaction connective,

  • or a single occurrence of \(\wedge \) is replaced with the conjunction interaction connective.

Fig. 1. Inference rules for interaction formulas

We will define an inference system for interaction formulas that consists of inference rules with a single conclusion and a single premise, both of which are either formulas or interaction formulas. Each inference rule represents an admissible rule of intuitionistic logic: if the premise is a theorem, then so is the conclusion. The full collection of rules is shown in fig. 1. There are three kinds of rules, explained below in an upwards (conclusion to premise) reading.

  • Terminal rules are used to terminate an implication interaction in a positively signed context. In the case where the interaction links two occurrences of the same atom, the result is \(\top \); otherwise the interaction connective turns back into \(\mathbin {\supset }\). These are the only rules that can transition out of interaction formulas.

  • Positively signed rules operate on an interaction in a positively signed context. The rules are written in fig. 1 in such a way that the subformulas A and B are brought together in the premise, while any occurrences of F are side formulas.

  • Negatively signed rules operate on an interaction in a negatively signed context. Fig. 1 only shows one of the two symmetric variants for each case; the other variant is built by permuting A with B and transposing the operands of the interaction connective. For instance, one of these rules has the following symmetric variant.

    We will use primes to systematically name the symmetric variants of rules.

Proposition 2

(Soundness). Interpreting each interaction connective as the ordinary connective it replaces, each rule of fig. 1 with premise P and conclusion Q has the property that \({P \vdash Q}\).

Proof

Straightforward consequence of theorem 1.    \(\square \)

Fig. 2. Link creation, contraction, and simplification. The conclusion in each case must not be an interaction formula.

Two further administrative steps remain to complete the technique. First, since the rules of fig. 1 always contain an interaction formula in the conclusion, we need to add some rules that can conclude ordinary (non-interaction) formulas. Since we read each inference rule from conclusion to premise, we will call these the interaction creation rules, which are shown in the first part of fig. 2. To incorporate non-linearity, we add a separate contraction rule; this keeps the interaction creation rules simple, but it needs to be explicitly invoked. These interaction creation rules are obviously sound under the interpretation of proposition 2.

The final step is to detect when a proof is complete. Since every inference rule presented so far has a single premise, we will say that a proof is complete when the final (again reading bottom to top) premise is, effectively, \(\top \). What do we mean by “effectively”? One candidate definition is that a purely algorithmic procedure can detect in linear time when a proof is finished. For instance, we can say that a proof is complete if its premise can be established using only the simplification rules shown in the second part of fig. 2. These rules may be applied in any order and at any time. An implementation of the technique may choose to apply them on the fly.
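This completeness check can be made concrete with a single linear-time bottom-up pass. Since fig. 2 itself is not shown above, the specific rule set below (standard unit laws for \(\top \) and \(\bot \)) is our assumption about its contents, not the paper's exact list.

```python
# Formulas as nested tuples, as in the earlier sketch:
# ("atom", a), ("top",), ("bot",), ("and"/"or"/"imp", l, r).
# simplify applies unit-law rewrites bottom-up in one pass (linear time);
# "effectively top" then means: simplifies to ("top",).

def simplify(f):
    if f[0] in ("atom", "top", "bot"):
        return f
    op, l, r = f[0], simplify(f[1]), simplify(f[2])
    if op == "and":
        if l == ("top",): return r
        if r == ("top",): return l
    if op == "or":
        if l == ("bot",): return r
        if r == ("bot",): return l
    if op == "imp":
        if l == ("top",): return r          # top |- A  collapses to  A
        if r == ("top",): return ("top",)   # anything implies top
        if l == ("bot",): return ("top",)   # ex falso
    return (op, l, r)

def proof_complete(f):
    """True when the remaining premise is 'effectively' top."""
    return simplify(f) == ("top",)
```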

Definition 3

The collection of rules in figures 1 and 2 will be known as the proof system Lnip. If A and B are formulas or interaction formulas, we say that A derives B in Lnip to mean that either \(A = B\) or there is an Lnip derivation whose topmost rule has premise A and whose bottom-most rule has conclusion B.    \(\square \)

Theorem 4

(Completeness of Lnip). If \({\vdash F}\), then \(\top \) derives F in Lnip.

Proof (Sketch)

There are many ways to prove this, both syntactic and semantic. An instructive syntactic proof goes as follows. For a small variant of the G3ip sequent calculus [11], we show that every inference rule is admissible in Lnip under a suitable formula interpretation of sequents. Thus, any sequent proof is recoverable in terms of Lnip inferences. We then just appeal to completeness of the sequent calculus.    \(\square \)

Fig. 3. Lnip derivation fragment for the S-combinator

Example 5

A Lnip derivation of the S-combinator formula, \((a \mathbin {\supset }b \mathbin {\supset }c) \mathbin {\supset }(a \mathbin {\supset }b) \mathbin {\supset }a \mathbin {\supset }c\), is shown in fig. 3. The interaction connectives take the precedence and associativity of the connectives they replace. The locus where a Lnip rule is applied is depicted with a highlight. Of course, the S-combinator formula cannot be proved without appealing to contraction at least once, as seen in the appeal to cont in the derivation.

An extremely interesting aspect of this example Lnip derivation is that it begins by considering the first two assumptions, \(a \mathbin {\supset }b \mathbin {\supset }c\) and \(a \mathbin {\supset }b\), of the S-combinator formula. The user might have indicated this consideration by drawing a link between the two occurrences of b, highlighted in fig. 3. The effect of this consideration is to perform a “composition” of the two assumptions into a single stronger assumption, which could of course have been simplified immediately. In shallow proof systems such as the sequent calculus or natural deduction this kind of compositional step cannot be taken as such, and would require cuts or lemmas.

As explained in the introduction, this kind of composition might have been discovered in the process of exploration by the simple strategy of drawing a link between the two occurrences of b. Such a link is legal because in the smallest common context that contains both occurrences of b, their ancestral connective is \(\mathbin {\supset }\), which can be turned into an interaction using the corresponding creation rule. Once these two occurrences are linked, we can interpret the interaction rules (fig. 1) as trying to bring the two ends of the link closer. Indeed, in each of the rules of fig. 1, one end of the link is in the formula A and the other is in the formula B. We are therefore ready to formulate the linking procedure.

Definition 6

(Subformula Linking Procedure). Repeat the following sequence of steps until the conjecture formula (i.e., end-formula) F is transformed to \(\top \) (success), no fruitful progress can be made (failure), or the proof attempt is aborted by the user.

  1. (Optional) Ask the user to indicate negatively signed subformulas of F that need to be contracted using the cont rule.

  2. Ask the user to indicate two different subformulas of F; this is the link.

  3. If the first common ancestor connective of the two linked subformulas is a \(\mathbin {\supset }\) that occurs in a positively signed context, use the corresponding creation rule to turn it into an implication interaction; likewise, if the ancestor is a \(\wedge \) in a negatively signed context, use the corresponding creation rule to turn it into a conjunction interaction. If neither case applies, then the user indicated an invalid link, so we return immediately to step 2.

  4. Use the interaction rules (fig. 1) in such a way that the endpoints of the link stay in the same interaction from conclusion to premise.

  5. Eventually, one of the terminal rules in or rel will be applicable to remove the interaction; at this point we say that the link is resolved.

  6. After resolving a link, the simplification rules may be applied eagerly in an arbitrary order.

The most important step in the inner loop of the procedure is step 4. The rules for interaction are ambiguous because the conclusions of different rules can overlap. Let us start by examining the positively signed rules; as an example, consider an interaction whose two linked endpoints (from step 2) lie in the subformulas A and B. There are two possible ways to resolve this link:

Does the choice matter? Yes, because the two resulting premises are not intuitionistically equivalent; indeed, one strictly entails the other. Hence, one of the two alternatives produces a strictly stronger—and potentially unprovable!—premise. Which one should the procedure pick?

This ambiguity also existed in the original formulation of the formula linking procedure for classical linear logic [4], and we can use the same answer given in that work. The key insight is that many of the ambiguous cases can be resolved by a simple analysis of polarities. A detailed discussion of polarity (and the oft-associated focusing discipline [1]) is not relevant to this work, however. We will instead just use the observation that some of the interaction rules of fig. 1 are asynchronous, meaning that the premise of the rule is equiderivable with the conclusion (assuming we replace the interaction connectives with their ordinary counterparts), while other rules are synchronous, meaning that the premise strictly entails the conclusion. For the specific example above, one of the two rules is asynchronous, because the order of assumptions in an implication is immaterial (at least in intuitionistic logic), while the other is synchronous since its conclusion cannot justify the premise. We can draw up this table for all the positively signed rules.


Whenever there is a choice between a synchronous and an asynchronous rule to apply first (reading from bottom to top), we should pick the asynchronous rule, since that does not destroy derivability. If we have a choice of two asynchronous rules, then the choice is immaterial, as derivability is preserved regardless; the procedure can pick arbitrarily. Different choices would just lead to associative-commutative variants of the same ultimate premise. Finally, for a choice between two synchronous rules, we can consider all such pairs from the table above to see that the choice is immaterial: all choices have the same result.
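The asynchronous-first policy above can be stated as a one-line choice function. Both the rule names and their synchronous/asynchronous tags below are illustrative placeholders, since the paper's actual table did not survive into this text.

```python
# Hypothetical classification of positively signed interaction rules.
# "imp_hyp": move across an implication hypothesis (order of assumptions
# is immaterial, hence asynchronous); the conjunct-commitment rules are
# synchronous.  These names and tags are our own illustration.
SYNCHRONICITY = {
    "imp_hyp": "async",
    "and_left": "sync",
    "and_right": "sync",
}

def pick_rule(candidates):
    """Prefer an asynchronous rule (derivability-preserving); among
    synchronous rules any choice is as good as any other."""
    return sorted(candidates, key=lambda r: SYNCHRONICITY[r] != "async")[0]
```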

The story is not quite as simple for the negatively signed rules of fig. 1, where every single rule is synchronous by our definition. Unlike in the positively signed case, here we have a critical pair.

As before, the premises are not equiderivable. Resolving this ambiguity is as hard as fully automated proof search, and is therefore not recursively solvable once we introduce quantifiers. The subformula linking procedure needs further guidance from the user to resolve the ambiguity. A variant of this ambiguity can also be found in the original subformula linking work for classical linear logic [4]; there, the solution was to make the links directed. Whenever there is a choice to be made—which will necessarily be a choice between one subformula containing the source of the link and the other containing the destination—the procedure performs the rule corresponding to the destination first. In the critical pair above, for instance, if A contained the source and B the destination, then we would perform the rule for B first (i.e., follow the left derivation). This choice is made to evoke the intuition that the source is brought to the destination; the context of the destination swallows the context of the source.

Definition 7

(Directed Subformula Linking Procedure). We modify the procedure of definition 6 by making the links in step 2 directed, and in the resolution step 4 we break synchronous/synchronous ties for negatively signed rules by performing the rule for the destination first.

2.2 Quantifiers

Extending Lnip with first-order quantifiers can be done in a number of ways. Here we present a parsimonious extension that avoids any up-front commitments with regard to the strength of the term language. Our terms (written \(s, t, \dotsc \)) have the following grammar:

$$\begin{aligned} s, t \,{:}{:}{=}\, x \mid {\mathsf {f}}{\cdot }{\vec s} \end{aligned}$$

where we write \(\vec s\) to stand for a list of terms \([s_1, s_2, \dotsc , s_n]\). We use \(x, y, \dotsc \) to range over variables and \(\mathsf {f}, \mathsf {g}, \dotsc \) to range over function symbols, and we abbreviate \({\mathsf {f}}{\cdot }{[]}\) to \(\mathsf {f}\). We also extend atomic formulas: they are now written \({a}{\cdot }{\vec s}\) where a is a predicate symbol, and we again abbreviate \({a}{\cdot }{[]}\) to a. To formulas and contexts we now add the two quantifiers, \(\forall \) and \(\exists \), to give the following extended grammars, where \(Q \in \left\{ {\forall ,\exists }\right\} \).

We write \({\mathcal C}\{t\text { term}\}\) to assert that the term t is well-formed for the hole in \({\mathcal C}\{ \}\), i.e., all the (free) variables of t are bound by some quantifier that the hole in \({\mathcal C}\{ \}\) is in the scope of. We also write \({x}{\#}{t}\) or \({x}{\#}{A}\) to indicate that the variable x is not free in t or A respectively. Finally, the capture-avoiding substitution of t for x in a term u or a formula A is written \([t/x]u\) or \([t/x]A\) respectively. The replacement of formulas in contexts, \({\mathcal C}\{A\}\), on the other hand, is not capture-avoiding; instead, this replacement is considered well-formed whenever every free variable x of A has the property that \({\mathcal C}\{x\text { term}\}\).
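For concreteness, here is a Python sketch of capture-avoiding substitution over this syntax; the tuple representation and the priming scheme for fresh variables are our own choices.

```python
import itertools

# Terms: ("var", x) or ("fun", f, [terms]).  Formulas extend the earlier
# propositional tuples with ("forall", x, F) and ("exists", x, F), and
# atoms now carry arguments: ("atom", a, [terms]).

def fv_term(t):
    return {t[1]} if t[0] == "var" else set().union(set(), *map(fv_term, t[2]))

def fv(f):
    tag = f[0]
    if tag == "atom":
        return set().union(set(), *map(fv_term, f[2]))
    if tag in ("top", "bot"):
        return set()
    if tag in ("and", "or", "imp"):
        return fv(f[1]) | fv(f[2])
    return fv(f[2]) - {f[1]}                 # forall / exists

def subst_term(t, x, s):
    if t[0] == "var":
        return s if t[1] == x else t
    return ("fun", t[1], [subst_term(u, x, s) for u in t[2]])

def subst(f, x, s):
    """Capture-avoiding [s/x]f."""
    tag = f[0]
    if tag == "atom":
        return ("atom", f[1], [subst_term(u, x, s) for u in f[2]])
    if tag in ("top", "bot"):
        return f
    if tag in ("and", "or", "imp"):
        return (tag, subst(f[1], x, s), subst(f[2], x, s))
    q, y, body = f
    if y == x:
        return f                             # x is shadowed under this binder
    if y in fv_term(s) and x in fv(body):
        # the binder would capture a free variable of s: rename y first
        avoid = fv_term(s) | fv(body)
        fresh = next(y + "'" * i for i in itertools.count(1)
                     if y + "'" * i not in avoid)
        body, y = subst(body, y, ("var", fresh)), fresh
    return (q, y, subst(body, x, s))
```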

In order to give ourselves maximum freedom in the definition of the first-order extension, we will use an additional binary predicate symbol to denote equality of terms. Given two lists of terms \(\vec s = [s_1, \dotsc , s_n]\) and \(\vec t = [t_1, \dotsc , t_n]\) of equal length, we will write the equation of \(\vec s\) with \(\vec t\) to stand for the conjunction of the pointwise equations of \(s_i\) with \(t_i\) if \(n > 0\), and for \(\top \) otherwise. Using this additional predicate, the terminal rule in of Lnip is modified to account for the term arguments.

Definition 8

(System Lni). The system Lni is the extension of Lnip obtained by removing the in rule of Lnip and adding the rules of fig. 4.

Fig. 4. System Lni: rules for quantifiers and terms

Theorem 9

(Completeness of Lni). If \({\vdash F}\) in a complete sequent calculus for first-order intuitionistic logic (e.g., G3i [11]), then \(\top \) derives F in Lni.

Proof (Sketch)

We can follow the same strategy as for theorem 4. Note that for any term t, the rules refl and cong suffice to reduce an equation of t with itself in any context \({\mathcal C}\{ \}\) to \({\mathcal C}\{\top \}\). A transitivity rule for equality is not needed: no equation is created in a negatively signed context.    \(\square \)

Fig. 5. Two example Lni derivations

Example 10

Two example Lni derivations are shown in fig. 5.

  (a)

    This is a derivation for a provable formula where the user may have linked the two occurrences of a. Observe that the simplification rules {cong, inst, refl} help to implement first-order unification under a mixed quantifier prefix. Moreover, since Lni simplification rules can be applied at any time, we can solve unification problems incrementally, in tandem with logical reasoning.

  (b)

    This is a derivation for an unprovable formula containing an illegal quantifier exchange, where once again the indicated link is between the two occurrences of a. This derivation cannot be completed because there is no instantiation for x that makes the resulting equation true.
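To make concrete the claim in (a) that {cong, inst, refl} implement first-order unification, here is a standard syntactic unifier in Python; the driver is our own, not the paper's rule-by-rule procedure. Note that the real inst rule must additionally respect the quantifier prefix, which is exactly what blocks the illegal exchange in (b).

```python
# Terms: ("var", x) or ("fun", f, [args]).  Substitutions are dicts
# mapping variable names to terms, applied lazily via walk.

def walk(t, sub):
    while t[0] == "var" and t[1] in sub:
        t = sub[t[1]]
    return t

def occurs(x, t, sub):
    t = walk(t, sub)
    if t[0] == "var":
        return t[1] == x
    return any(occurs(x, u, sub) for u in t[2])

def unify(t1, t2, sub=None):
    """Most general unifier of t1 and t2 as a dict, or None on failure."""
    sub = {} if sub is None else sub
    t1, t2 = walk(t1, sub), walk(t2, sub)
    if t1 == t2:
        return sub
    if t1[0] == "var":
        return None if occurs(t1[1], t2, sub) else {**sub, t1[1]: t2}
    if t2[0] == "var":
        return unify(t2, t1, sub)
    if t1[1] != t2[1] or len(t1[2]) != len(t2[2]):
        return None                         # clashing heads or arities
    for u, v in zip(t1[2], t2[2]):
        sub = unify(u, v, sub)
        if sub is None:
            return None
    return sub
```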

3 Incorporating Arity-Typed \(\lambda \)-Terms

To make the calculus Lni of the previous section suitable to host a type theory as an object language, we need to generalize from first-order terms to general \(\lambda \)-terms. We will follow a standard technique, known variously as higher-order abstract syntax (HOAS) [12] or \(\lambda \)-tree syntax [7], that uses the pure \(\lambda \)-calculus—together with \(\alpha \beta \eta \)-equality as its equational theory—to represent object languages. To keep things computable, we will use simply typed \(\lambda \)-terms with only one basic type, which is sometimes known as arity typing. Arity types (\(\alpha , \beta , \dotsc \)) and terms (\(s, t, \dotsc \)) have the following grammar.

where \(x, y, \dotsc \) range over variables, and sans-serif identifiers such as \(\mathsf {k}\) range over term constants. For formulas, we also change the quantifiers \(Q{x}.\, F\) to their arity typed forms \(Q{x{:}\alpha }.\, F\), where \(Q \in \left\{ {\forall , \exists }\right\} \).

We keep \(\lambda \)-terms in canonical spine form, where the head (h) of an application is identified and separated; in more usual notation, \({h}{\cdot }{[s_1, \dotsc , s_n]}\) would be written as the iterated application \((\cdots (h\ s_1)\ \cdots \ s_n)\). The definition of substitution must be modified to retain spine forms, which is usually done by removing redexes on the fly with an auxiliary reduction operation; for example:
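Here is a Python sketch of spine-form substitution with redexes removed on the fly; the helper `reduce_spine` plays the role of the auxiliary operation whose notation is elided above, and we assume all bound names are distinct (Barendregt convention) to avoid capture.

```python
# Spine-form lambda-terms: ("lam", x, body) or ("app", head, [args]),
# where head is ("var", x) or ("const", k).  Representation is ours.

def spine_subst(t, x, s):
    """[s/x]t, keeping the result in spine form."""
    if t[0] == "lam":
        # assumes bound names are distinct from x and from free vars of s
        return ("lam", t[1], spine_subst(t[2], x, s))
    head, spine = t[1], [spine_subst(u, x, s) for u in t[2]]
    if head == ("var", x):
        # the head is replaced: apply s to the spine, contracting redexes
        return reduce_spine(s, spine)
    return ("app", head, spine)

def reduce_spine(t, spine):
    """Apply t to the arguments in spine, removing redexes on the fly."""
    if not spine:
        return t
    if t[0] == "lam":
        return reduce_spine(spine_subst(t[2], t[1], spine[0]), spine[1:])
    return ("app", t[1], t[2] + spine)      # t is an application: extend it
```

For instance, substituting the identity \(\lambda y.\,y\) for f in \({f}{\cdot }{[a]}\) contracts the created redex immediately, yielding the spine term \({a}{\cdot }{[]}\) rather than an explicit application of a \(\lambda \)-abstraction.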

Most of the inference rules of system Lni generalize easily to this setting. The immediate differences are with respect to the simplification rules. For the inst rule, we use a variant judgement to mean that the \(\lambda \)-term t is well-typed at type \(\alpha \) given the type assumptions for its free variables that are bound in the scope of the hole in \({\mathcal C}\{ \}\). It is possible to view this judgement as being defined by inference rules; for instance (for \(Q \in \left\{ {\forall , \exists }\right\} \)):

The rules refl and cong of Lni are replaced with:

Definition 11

(System Lni\(\lambda \)). The system Lni\(\lambda \) is the modification of Lni with the rules cong, abs, and in given above.

Theorem 12

(Completeness of Lni\(\lambda \)). For any formula F in the language of first-order logic over \(\lambda \)-terms but without any occurrence of the equality predicate, if \({\vdash F}\) in a complete sequent calculus, then \(\top \) derives F in Lni\(\lambda \).

Proof (Sketch)

Once again, this is a straightforward extension of the proof of theorem 9. Since there are no occurrences of the equality predicate in F, and in particular no occurrence of it in a negatively signed context, the rules cong and abs (together with the other simplification rules above) are sufficient to implement \(\alpha \beta \eta \)-equivalence.    \(\square \)

4 Application: Embedding Intuitionistic Type Theories

The first-order language over arity-typed \(\lambda \)-terms of the previous section has enough expressive power for a complete encoding of any pure type system [6, 15]. To keep things simple in this paper, we will demonstrate the case for LF (aka \(\lambda \varPi \)) using the simple embedding from [15]. Expressions in LF belong to one of the following three syntactic categories: kinds, types, or terms.

The LF type system is formally specified using inference rules in [9] and will not be repeated here. Instead, we will directly present a complete encoding of LF expressions using the language of Lni\(\lambda \).

The encoding proceeds in two steps. First, we transform the dependently typed terms of LF into their simply typed forms, normalizing them as necessary. However, since LF terms can mention their types, we simultaneously transform LF types into simple types. This transformation erases not just the type dependencies but also the identities of the types by collapsing all of them to the same base type \(\star \).

Definition 13

The forgetful map \(\phi \) specified below transforms LF terms into Lni\(\lambda \) \(\lambda \)-terms and LF types and kinds into Lni\(\lambda \) types.
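Since the displayed clauses of definition 13 are not shown above, the following Python sketch is a plausible reconstruction of \(\phi \) in the style of the standard erasure from [15]: every atomic type or kind collapses to the single base arity type \(\star \) (written `"*"` here), \(\varPi \) becomes the simple arrow, and type ascriptions on \(\lambda \) are erased. The tuple representation is our own.

```python
# LF types/kinds: ("pi", x, A, B) or ("atomic", a, [term args]);
# the kind Type is treated like an atomic.  LF terms: ("lam", x, A, body)
# or ("app", head, [args]).  All constructors here are our encoding.

def phi_ty(T):
    """Erase an LF type or kind to an arity type over the base '*'."""
    if T[0] == "pi":
        return ("arr", phi_ty(T[2]), phi_ty(T[3]))
    return "*"                    # atomic types (and Type) collapse to *

def phi_tm(M):
    """Erase an LF term to a simply typed spine term."""
    if M[0] == "lam":
        return ("lam", M[1], phi_tm(M[3]))   # drop the ascription M[2]
    head, spine = M[1], M[2]
    return ("app", head, [phi_tm(u) for u in spine])
```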

The second stage of the transformation recovers the information that was lost by the \(\phi \) map by means of a single atomic predicate, \(\mathsf {has}\). Using it, we define a mapping that transforms types and kinds to formulas in such a way that provability of the resulting formula reflects the corresponding LF typing judgement.

Definition 14

The mapping transforms an LF type/kind and an Lni\(\lambda \) \(\lambda \)-term into an Lni\(\lambda \) formula, and is specified recursively as follows.

(where J can be an LF type or kind).
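Again, since the displayed clauses of definition 14 are not shown above, the sketch below is our reconstruction in the shape used in [15]: an atomic type or kind becomes a \(\mathsf {has}\) atom, and a \(\varPi \) becomes a universal quantifier guarded by the encoding of its domain.

```python
# J is an LF type/kind: ("pi", x, A, B) or ("atomic", ...).  The result
# is a formula tuple using ("forall", x, F), ("imp", F, G), and
# ("atom", "has", [term, type]).  This shape is our reconstruction.

def encode(J, t):
    """The formula asserting that the erased term t inhabits J."""
    if J[0] == "pi":
        x, A, B = J[1], J[2], J[3]
        body = encode(B, ("app", t, [("var", x)]))   # t applied to x
        return ("forall", x, ("imp", encode(A, ("var", x)), body))
    return ("atom", "has", [t, J])
```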

Proposition 15

(Completeness [15]). If the judgement is derivable in LF [9], then its encoding as a formula is provable in Lni\(\lambda \).    \(\square \)

The converse of proposition 15 does not necessarily hold, since the forgetful map \(\phi \) is not injective. In particular, since the encoding of atomic types forgets the term arguments, we have that \(\phi (\lambda {x{:}A_1}.\, s) = \phi (\lambda {x{:}A_2}.\, s)\) whenever \(\phi (A_1) = \phi (A_2)\); however, the latter does not guarantee that \(A_1 = A_2\). Thus, the encoded formula may hold even when \(A_1 \ne A_2\). To recover injectivity, we must use the canonical LF variant of the LF type theory, where the type ascription on \(\lambda \) is omitted and the type system is made bidirectional [19]; this guarantees that only \(\varPi \)-types ascribe types to bound variables, removing the issue highlighted above.

Example 16

Consider the following LF type \(A \triangleq \varPi {u{:}\mathsf {a}}.\, {\varPi {z{:}{(\varPi {x{:}\mathsf {a}}.\, \mathsf {b}\,x)}}.\, \mathsf {b}\,u} \). By definition 14, we have:

Fig. 6 has an example Lni\(\lambda \) derivation of this formula where k is existentially quantified. As usual, highlights are used to indicate the two links drawn by the user. The derivation can be completed with the instantiation \(k := {z}{\cdot }{[u]}\); this means that the LF type A is inhabited by some LF term M for which \(\phi (M) = {z}{\cdot }{[u]}\).

Fig. 6. A Lni\(\lambda \) derivation of an embedded LF type (example 16). Some type ascriptions are elided, and doubled lines denote simplifications.

Note that the fact that we have not discovered an LF term for k using the Lni\(\lambda \) derivation is not a problem. Given an Lni\(\lambda \) term k for which the encoded formula is derivable, it is possible to find a term M for which \(\phi (M) = k\) and M : A holds in LF. One way to do this is to use bidirectional type checking [14, 19] to recreate—deterministically—the missing LF types.

While the encoding of LF in Lni\(\lambda \) suffices to implement the proof by linking technique, it is a leaky encoding. As the derivation in fig. 6 proceeds, the conjecture resembles the image of the encoding less and less; in particular, the conjecture starts to accumulate structures that are not fundamentally present in the LF type system, such as term equations, conjunctions, and existential quantifiers. The purported novice user mentioned in the introduction thus needs to be familiar with at least two languages: LF and (a somewhat esoteric variant of) first-order logic. One way to improve matters would be to define the linking procedure directly on the LF type system, but this example seems to indicate that the LF language is not expressive enough to capture all the structures that occur when resolving a link. At the very least, some kind of pairing construct—i.e., \(\varSigma \)-types—seems essential. Moreover, to capture free-floating \(\mathsf {has}\) assumptions, the language of LF might need to be extended further with judgemental expressions of the form \(\langle {M:A}\rangle \).

5 Conclusion and Future Directions

We have presented a formal system of proof by linking for intuitionistic logic and a derived system for the dependent type theory LF. We are currently in the process of implementing this system as a variant of the Profound tool, which was initially developed for classical linear logic in [4].

In order for this system to be usable in a general purpose interactive theorem prover based on first-order logic (such as Abella [2]) or dependent type theory (such as Twelf [13]), the most important missing ingredient is support for inductive definitions and reasoning by induction. The first step in a proof by structural induction is to indicate which assumption(s) will drive the analysis, which is closer to a pointing than a linking. Thus, proof by linking and pointing will need to co-exist.

A further improvement that would be made as a matter of course in an implementation is the use of a unification engine to remove the clutter of equational formulas. It is worth investigating (in future work) whether the linking metaphor can also be used for algebraic operations on terms based on equality. In many systems, equational assumptions can be used to rewrite terms, which is readily incorporated into the linking scheme: just link a term to one side of an equation. We can in fact see this as a variant of the inst rule:

It is worth investigating whether such variants of inst can make the embedding of LF into Lni\(\lambda \) less leaky.

Note that proof by linking, like proof by pointing, can easily be incorporated as a tactic in an existing proof system. After all, each of the inference rules of Lni\(\lambda \) is logically motivated, and can therefore be established as a certifying tactic. The quality of the formal proof terms produced in this way will be poor, since most proof term languages are not designed for deep rewriting; indeed, the proof term for each Lni\(\lambda \) inference rule may have a size that is exponential in that of the conjecture. It is perhaps better to see proof by linking as a proof exploration tool for quickly testing the logical properties of a conjecture before attempting a traditional structured proof. In the hands of an expert user, this exploration mode can also help to discover useful lemmas that bridge the gap between an existing collection of proved theorems and a desired target theorem.