1 Introduction

Inductive-inductive types allow for the mutual inductive definition of a type and a family over that type. As an example, we can simultaneously define contexts and types defined in a context, with dependently typed context extension:

$$\begin{aligned} \text {Ctx}&: \text {Type,}\qquad&\text {Ty}&: \text {Ctx} \rightarrow \text {Type,}\\ \epsilon&: \text {Ctx},\qquad&\text {U}&: (\varGamma : \text {Ctx}) \rightarrow \text {Ty}\;\varGamma ,\\ \texttt {ext}&: (\varGamma : \text {Ctx}) \rightarrow \text {Ty}\; \varGamma \rightarrow \text {Ctx},\qquad&\text {El}&: (\varGamma : \text {Ctx}) \rightarrow \text {Ty}\;(\texttt {ext}\;\varGamma \;(\text {U}\;\varGamma )). \end{aligned}$$

Such definitions have been used for example by Danielsson [9] and Chapman [5] to define intrinsically typed syntax of a dependent type theory, and Agda supports such definitions natively.
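Indeed, the definition above can be written directly in Agda as a mutual declaration. The following is a sketch using forward declarations; names follow the display above:

```agda
-- The Ctx/Ty example as an Agda inductive-inductive definition.
-- Ty is indexed by Ctx while both are being defined.
data Ctx : Set
data Ty : Ctx → Set

data Ctx where
  ε   : Ctx
  ext : (Γ : Ctx) → Ty Γ → Ctx

data Ty where
  U  : (Γ : Ctx) → Ty Γ
  El : (Γ : Ctx) → Ty (ext Γ (U Γ))
```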

These types have been studied extensively in Nordvall Forsberg [15]. There, in §5.3, inductive-inductive types with simple elimination rules (defined in op. cit. §3.2.5) are constructed from indexed inductive types in extensional type theory, and in §5.4 this is conjectured to work in intensional type theory as well.

In this paper, we first show that this construction does not work in intensional type theory without assuming Uniqueness of Identity Proofs (UIP), which is incompatible with the Univalence axiom of Homotopy Type Theory [18]. We then give an alternative construction in cubical type theory [6], which is compatible with Univalence. Specifically, this paper makes the following contributions:

  1. In Sect. 2, we show that, in intensional type theory, if the types constructed by Nordvall Forsberg satisfy the simple elimination rules, then UIP holds (formalized in both Coq and Agda).

  2. In Sect. 3, we give the construction of a particular inductive-inductive type with simple elimination rules in cubical type theory (formalized in cubical Agda).

1.1 Syntax and Conventions

We mostly mimic Agda syntax. The symbol \(=\) is used for definitions, directly and by pattern matching, and for equality of terms up to conversion. We write \((a : A) \rightarrow B\) for the dependent product type, and \(A \rightarrow B\) for the non-dependent version. Functions are given by pattern matching \(f\; x = y\) or by lambda expressions \(f = \lambda x.y\). Similarly \((a : A) \times B\) is the dependent pair type, and \(A \times B\) the non-dependent version. Pairs are \((a, b)\), and projections are \(p.1\) and \(p.2\). The unit type is \(\top \), with unique inhabitant \(\star \). Identity types are \({x}\equiv _{X}{y}\) for the type of identifications of x with y in type X, and we write \(\texttt {refl}\) for a proof of reflexivity. We do not assume that axiom K holds for identity types. We write Type for a universe of types (where Agda uses \(\text {Set}\)). In Sect. 3 we work in cubical type theory, which will be explained there.

1.2 Running Example of an Inductive-Inductive Definition

For the purposes of this paper, we will focus on one relatively simple inductive-inductive definition (with only 5 clauses), parametrized by a type X, which is given in Fig. 1. We will use this definition to prove that Nordvall Forsberg’s construction implies UIP in Sect. 2 and as a running example to demonstrate our construction in cubical type theory in Sect. 3.

Our example starts with the simplest inductive-inductive sorts, taking \(A :\,\text {Type}\) and \(B : A \rightarrow \,\text {Type}\), and then populates A and B with simple constructors which suffice for our proof of UIP. We have \(\texttt {inj}\), which is supposed to give exactly one element of each \(B\;a\), while \(\texttt {ext}\) lets us mix Bs back into the As (mirroring the type of context extension), and \(\eta \) gives us something to start with: one element of A for each element of X (following the use of \(\eta \) in [15, Example 3.3]). The proof of UIP in Sect. 2 proceeds by considering the type \(B\;(\texttt {ext}\;(\eta \;x)\;(\texttt {inj}\;(\eta \;x)))\) for some x : X, and noticing that, while the simple elimination rules tell us that there should only be one element of this type (given by \(\texttt {inj}\)), in Nordvall Forsberg’s construction there are actually as many as there are proofs of \({x}\equiv _{X}{x}\).

Fig. 1. Running example
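Since the figure is not reproduced here, the following Agda sketch reconstructs the five clauses of the running example from the surrounding description (the module parameter X is the type the definition is parametrized by):

```agda
-- Sketch of the running example (Fig. 1), reconstructed from the
-- description in the text; parametrized by a type X.
module Example (X : Set) where
  data A : Set
  data B : A → Set

  data A where
    η   : X → A
    ext : (a : A) → B a → A

  data B where
    inj : (a : A) → B a
```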

Our goal in this paper is to construct \((A,B,\eta ,\texttt {ext},\texttt {inj})\) of the types given in Fig. 1 such that the simple elimination rules hold without using UIP. But first, we will show why Nordvall Forsberg’s approach is not sufficient.

2 Deriving UIP

Uniqueness of Identity Proofs (UIP) for a type X is the principle that, for all x : X, y : X, \(p : {x}\equiv _{X}{y}\), \(q : {x}\equiv _{X}{y}\), the type \({p}\equiv _{{x}\equiv _{X}{y}}{q}\) is inhabited. Equivalently, for all x : X, \(p : {x}\equiv _{X}{x}\), the type \({p}\equiv _{{x}\equiv _{X}{x}}{\texttt {refl}}\) is inhabited. It expresses that there is at most one proof of any equality. UIP is independent of standard intensional type theory [13], and is inconsistent with Homotopy Type Theory [18].

Nordvall Forsberg’s construction of inductive-inductive types is described in [15, §5.3]. In this section, we show that if the simple elimination rules hold for this construction of the inductive-inductive type in Fig. 1, then UIP holds for the type X (Theorem 1). This argument has been formalized in both Coq version 8.8.0 [8] (see UIP_from_Forsberg_II.v) and Agda using the --without-K flag (see UIP_from_Forsberg_II.agda).

To recap, Nordvall Forsberg [15, §5.3] constructs an inductive-inductive type by first defining an approximation (the pre-syntax) which drops the A index from B, leaving a mutual inductive definition. Concretely, we have \({A}_\text {pre}\) and \({B}_\text {pre}\) defined as in Fig. 2. Then a mutual indexed inductive definition is used to define the index relationship between \({A}_\text {pre}\) and \({B}_\text {pre}\); these are the goodness predicates \({A}_\text {good}\) and \({B}_\text {good}\). Finally, the inductive object \((A, B, \eta , \texttt {ext}, \texttt {inj})\) is defined by pairing the pre-syntax with goodness proofs (see Fig. 3).

Fig. 2. Pre-syntax for the running example
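The pre-syntax of Fig. 2 can be sketched in Agda as a plain mutual inductive definition (assuming a parameter X : Set as before):

```agda
-- Sketch of the pre-syntax (Fig. 2): the A-index of B is dropped,
-- leaving a standard mutual inductive definition.
data A-pre : Set
data B-pre : Set

data A-pre where
  η-pre   : X → A-pre
  ext-pre : A-pre → B-pre → A-pre

data B-pre where
  inj-pre : A-pre → B-pre
```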

Fig. 3. Construction given by Nordvall Forsberg
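The construction of Fig. 3 can be sketched in Agda as follows (a sketch reconstructed from the text; Σ and pairing are assumed from the standard library):

```agda
-- Sketch of Fig. 3: goodness predicates as a mutual indexed
-- inductive definition over the pre-syntax, then pairing.
data A-good : A-pre → Set
data B-good : A-pre → B-pre → Set

data A-good where
  η-good   : (x : X) → A-good (η-pre x)
  ext-good : (a : A-pre) → A-good a →
             (b : B-pre) → B-good a b → A-good (ext-pre a b)

data B-good where
  inj-good : (a : A-pre) → A-good a → B-good a (inj-pre a)

A : Set
A = Σ A-pre A-good

B : A → Set
B (a , a-good) = Σ B-pre (B-good a)  -- the proof a-good is ignored
```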

In extensional type theory, Nordvall Forsberg proved that \({A}_\text {good}\;a\) is a mere proposition (all inhabitants are equal) [15, Lemma 5.37(ii)]. In intensional type theory as well, if function extensionality and UIP hold then \({A}_\text {good}\) is a mere proposition. This uniqueness of goodness proofs justifies having the definition of B ignore the goodness proof \({a}_\text {good}\), since \({a}_\text {good}\) can have at most one value.

In the next two subsections, we prove that:

  1. If \({A}_\text {good}\;a\) is a mere proposition then UIP holds for the type X (Lemma 2).

  2. If the simple elimination rules from Fig. 1 hold for the \((A, B, \eta , \texttt {inj}, \texttt {ext})\) constructed above then \({A}_\text {good}\;a\) is a mere proposition (Lemma 5).

Combining these results, we conclude that Nordvall Forsberg’s construction satisfies the simple elimination rules in intensional type theory only if UIP holds (Theorem 1).

2.1 Unique Goodness Implies UIP

We define notation \((x == y)\) to mean the term

$$\begin{aligned} {\texttt {ext}}_\text {pre}\;({\eta }_\text {pre}\;x)\;({\texttt {inj}}_\text {pre}\;({\eta }_\text {pre}\;y)) : {A}_\text {pre}. \end{aligned}$$

We first prove that there are at least as many proofs of \({A}_\text {good}\;(x == y)\) as there are of \({x}\equiv _{X}{y}\).

Lemma 1

(\({x}\equiv _{X}{y}\) is a retract of \({A}_\text {good}\)). For all x : X and y : X, there are functions

$$\begin{aligned} f : {x}\equiv _{X}{y} \rightarrow {A}_\text {good}\;(x == y),\qquad g : {A}_\text {good}\;(x == y) \rightarrow {x}\equiv _{X}{y}, \end{aligned}$$

such that for all \(e : {x}\equiv _{X}{y}\), \({g\;(f\;e)}\equiv {e}\).


Proof. To define f, we let \(f\;\texttt {refl}=\)

$$\begin{aligned} {\texttt {ext}}_\text {good}\;({\eta }_\text {pre}\;x)\;({\eta }_\text {good}\;x)\;({\texttt {inj}}_\text {pre}\;({\eta }_\text {pre}\;x))\;({\texttt {inj}}_\text {good}\;({\eta }_\text {pre}\;x)\;({\eta }_\text {good}\;x)). \end{aligned}$$

To define g, pattern matching on \({a}_\text {good}\) has only one possibility: \({a}_\text {good} = \)

$$\begin{aligned} {\texttt {ext}}_\text {good}\;({\eta }_\text {pre}\;x)\;({\eta }_\text {good}\;x)\;({\texttt {inj}}_\text {pre}\;({\eta }_\text {pre}\;x))\;({\texttt {inj}}_\text {good}\;({\eta }_\text {pre}\;x)\;({\eta }_\text {good}\;x)), \end{aligned}$$

forcing y to be x, and in this case \({x}\equiv _{X}{y}\) holds by reflexivity. Then when \(e = \texttt {refl}\), \(f\;e\) returns a proof in the format matched by g, so \({g\;(f\;\texttt {refl})}\equiv {\texttt {refl}}\), and thus \({g\;(f\;e)}\equiv {e}\).

Lemma 2

(Unique goodness implies UIP). If \({A}_\text {good}\;t\) is a mere proposition for all \(t : {A}_\text {pre}\), then UIP holds for the type X.


Proof. Assume goodness proofs are unique, and take x : X, y : X, with \(p : {x}\equiv {y}\), \(q : {x}\equiv {y}\). We want to show that \({p}\equiv {q}\). Using the f and g from Lemma 1, we have \({f\;p}\equiv {f\;q}\), since both are elements of the mere proposition \({A}_\text {good}\;(x == y)\), and therefore \(p \equiv g\;(f\;p) \equiv g\;(f\;q) \equiv q\).

2.2 Simple Elimination Rules Imply Unique Goodness

Now we prove that there are at least as many proofs of \(B\;({t}_\text {pre},{t}_\text {good})\) as there are of \({A}_\text {good}\;{t}_\text {pre}\).

Lemma 3

(\({A}_\text {good}\) is a retract of B). For all \({t}_\text {pre} : {A}_\text {pre}\) and \({t}_\text {good} : {A}_\text {good}\;{t}_\text {pre}\), there are functions

$$\begin{aligned} f : {A}_\text {good}\;{t}_\text {pre} \rightarrow B\;({t}_\text {pre},{t}_\text {good}),\qquad g : B\;({t}_\text {pre},{t}_\text {good})\rightarrow {A}_\text {good}\;{t}_\text {pre} \end{aligned}$$

such that for all \({a}_\text {good} : {A}_\text {good}\;{t}_\text {pre}\), \({g\;(f\;{a}_\text {good})}\equiv {{a}_\text {good}}\).


Proof. We define \(f\;{a}_\text {good} = ({\texttt {inj}}_\text {pre}\;{t}_\text {pre}, {\texttt {inj}}_\text {good}\;{t}_\text {pre}\;{a}_\text {good})\). By induction on \({B}_\text {good}\), we define a function

$$\begin{aligned} g' : ({a}_\text {pre} : {A}_\text {pre})\rightarrow ({b}_\text {pre}: {B}_\text {pre})\rightarrow {B}_\text {good}\;{a}_\text {pre}\;{b}_\text {pre}\rightarrow {A}_\text {good}\;{a}_\text {pre} \end{aligned}$$

with the single defining clause

$$\begin{aligned} g'\;{a}_\text {pre}\;({\texttt {inj}}_\text {pre}\;{a}_\text {pre})\;({\texttt {inj}}_\text {good}\;{a}_\text {pre}\;{a}_\text {good}) = {a}_\text {good}. \end{aligned}$$

Then we can define \(g\;({b}_\text {pre},{b}_\text {good}) = g'\;{t}_\text {pre}\;{b}_\text {pre}\;{b}_\text {good}\), and \({g\;(f\;{a}_\text {good})}\equiv {{a}_\text {good}}\) holds by reflexivity.

Lemma 4

(\(B\;a\) is contractible). Assuming the simple elimination rules from Fig. 1 hold for the \((A, B, \eta , \texttt {inj}, \texttt {ext})\) constructed above, for all a : A and \(b : B\;a\), \({\texttt {inj}\;a}\equiv _{B\;a}{b}\).


Proof. Referring to the simple elimination rules given in Fig. 1, we pattern match on B by giving motives \(( PA , PB )\) and methods \((P\eta , P\texttt {ext}, P\texttt {inj})\), and then use the resulting \( EB \).

We set \( PA \;a = \top \), and take \( PB \;a\;b = {\texttt {inj}\;a}\equiv _{B\;a}{b}\). Then we have \(P\eta \;x = \star \), and \(P\texttt {ext}\;a\;\star \;b\;H = \star \), and we take \(P\texttt {inj}\;a\;\star = \texttt {refl}: {\texttt {inj}\;a}\equiv _{B\;a}{\texttt {inj}\;a}\). The conclusion follows by \( EB : (a : A) \rightarrow (b : B\;a) \rightarrow {\texttt {inj}\;a}\equiv _{B\;a}{b}\).

Lemma 5

(Simple elimination rules imply unique goodness). If the simple eliminators hold for the \((A,B,\eta ,\texttt {inj},\texttt {ext})\) constructed above, then for all \(t : {A}_\text {pre}\), \({A}_\text {good}\;t\) is a mere proposition.


Proof. Assume that the simple elimination rules hold, and take \(t : {A}_\text {pre}\), and \(a_1\) and \(a_2\) in \({A}_\text {good}\;t\). We use the definition of f and g from Lemma 3 with \({t}_\text {pre} = t\) and \({t}_\text {good} = a_1\).

By Lemma 4, we know that

$$\begin{aligned} {\texttt {inj}\;(t, a_1)}\equiv _{B\;(t, a_1)}{f\;a_2}. \end{aligned}$$

Applying g to both sides, and recognizing that \(g\;(\texttt {inj}\;(t,a_1))\) computes to \(a_1\), while \(g\;(f\;a_2)\) computes to \(a_2\), we find that

$$\begin{aligned} a_1 = g\;(\texttt {inj}\;(t,a_1)) \equiv _{{A}_\text {good}\;t} g\;(f\;a_2) = a_2. \end{aligned}$$

2.3 Simple Elimination Rules for Nordvall Forsberg’s Construction only if UIP

Theorem 1

If the simple elimination rules hold for Nordvall Forsberg’s construction, then UIP holds for the type X.


Proof. Compose the results of Lemmas 2 and 5.

Therefore Nordvall Forsberg’s approach to constructing inductive-inductive types requires UIP. Since UIP is inconsistent with the Univalence axiom at the center of Homotopy Type Theory (HoTT) [18], this motivates a different construction, one which is consistent with HoTT.

3 Constructing an Inductive-Inductive Type in Cubical Type Theory

Cubical type theory [6] is a recently developed type theory which gives a constructive interpretation of the Univalence axiom of Homotopy Type Theory. It has an implementation as a mode for Agda [19], which we use to formalize the construction given in this section of the running example from Fig. 1.

The most important difference between cubical type theory and standard intensional type theory as implemented by Coq or vanilla Agda is that the identity type \({x}\equiv _{X}{y}\) is represented (loosely speaking) by the type of functions p from an interval type \(\mathbb {I}\) with two endpoints \(i_0\) and \(i_1\) to X such that \(p\;i_0\) reduces to x and \(p\;i_1\) reduces to y. This allows, for example, a simple proof of function extensionality: if we have \(A :\,\text {Type}\), \(B : A\rightarrow \,\text {Type}\), f and g functions of type \((a : A) \rightarrow B\;a\), and \(h : (a : A) \rightarrow {f\;a}\equiv {g\;a}\), then we have \((\lambda i.\lambda a.h\;a\;i) : {f}\equiv {g}\). Taking \(\text {cong}\; f = \lambda p. \lambda i. f\; (p\;i) : {x}\equiv {y} \rightarrow {f\;x}\equiv {f\;y}\) and \(\circ \) for function composition, we also have nice properties such as \((\text {cong}\; f) \circ (\text {cong}\; g) = \text {cong}\; (f \circ g)\).
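The proof of function extensionality described above is literally a one-line program in cubical Agda. The following is a sketch; the builtin module name and the primed names (chosen to avoid clashing with the library's own funExt and cong) are assumptions:

```agda
{-# OPTIONS --cubical #-}
-- Paths as functions out of the interval: function extensionality
-- just swaps the order of the interval and function arguments.
open import Agda.Builtin.Cubical.Path using (_≡_)

funExt′ : {A : Set} {B : A → Set} {f g : (a : A) → B a}
        → ((a : A) → f a ≡ g a) → f ≡ g
funExt′ h = λ i a → h a i

cong′ : {A B : Set} (f : A → B) {x y : A} → x ≡ y → f x ≡ f y
cong′ f p = λ i → f (p i)
```

With this definition, \((\text {cong}\; f) \circ (\text {cong}\; g) = \text {cong}\; (f \circ g)\) holds by mere computation, since both sides reduce to \(\lambda p. \lambda i. f\;(g\;(p\;i))\).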

In this section, we construct the running example from Fig. 1, along with the simple elimination rules, in cubical type theory. Our construction proceeds in several steps:

  • In Sect. 3.1, we approximate by dropping the indices, leaving a standard mutual inductive definition called the pre-syntax. This is the same as the pre-syntax given in Fig. 2.

  • In Sect. 3.2, we define goodness algebras, collections of predicates over the pre-syntax which define the index relationship (analogously to \({A}_\text {good}\) and \({B}_\text {good}\) from Sect. 2). We also show that a goodness algebra exists, and call it \(\mathbb {O}\).

  • In Sect. 3.3, we define a predicate nice on goodness algebras, such that if we have a nice goodness algebra, then we can construct the simple elimination rules. Being nice is similar to having proofs of goodness be unique as in Sect. 2.

  • In Sect. 3.4, we use pattern matching over the pre-syntax to define a function S from goodness algebras to goodness algebras.

  • In Sect. 3.5, we define the limit of the sequence

    $$\begin{aligned} \mathbb {O}, S\;\mathbb {O}, S\;(S\;\mathbb {O}), \dots , S^n\;\mathbb {O},\dots \end{aligned}$$

    and show that it is nice. This is the only section that utilizes the differences between cubical type theory and standard intensional type theory.

Given the nice goodness algebra in Sect. 3.5 we can then construct the simple elimination rules by Sect. 3.3. This construction has been formalized in Agda using the --cubical flag, which implies --without-K (see RunningExample.agda).

The intuition for our construction is that Nordvall Forsberg’s approach of pairing an approximation with goodness predicates can be repeated, and each time the approximation gets better. Using HoTT terminology, we showed in Sect. 2 that one iteration suffices only if X has homotopy level 0 (is a homotopy set, satisfies UIP). In general, \(n+1\) iterations are sufficient if and only if X has homotopy level n. The successor goodness algebra defined in Sect. 3.4 is a slightly simplified version of Nordvall Forsberg’s construction, and taking the limit (in Sect. 3.5) gives a construction which works for arbitrary homotopy levels.

3.1 Pre-syntax

The pre-syntax is the same as that used in Sect. 2, defined as a mutually inductive type in Fig. 2. The constructors of the pre-syntax have the same types as the constructors of the full inductive-inductive definition (given in Fig. 1), except we replace \(B\;a\) with \({B}_\text {pre}\) everywhere, ignoring the dependence of B on A.

Consider this as the closest approximation of the target inductive-inductive type by a standard inductive type; the dependence of B on A is the only new element that inductive-inductive definitions add. Of course, this is only an approximation. We can form elements of the pre-syntax, such as

$$\begin{aligned} {\texttt {ext}}_\text {pre}\;({\eta }_\text {pre}\;x)\;({\texttt {inj}}_\text {pre}\;({\eta }_\text {pre}\;y)) \end{aligned}$$

for \(x \ne y\) that should be excluded from the inductive-inductive formulation, since \(\texttt {inj}\;(\eta \;y) : B\;(\eta \;y)\) while \(\texttt {ext}\;(\eta \;x) : B\;(\eta \;x) \rightarrow A\).

We will use definitions by induction and by pattern matching on the pre-syntax in Sects. 3.3 and 3.4 respectively.

3.2 Goodness Algebras

As we saw in Sect. 3.1, the pre-syntax is too lenient, and contains terms we want to exclude from the inductive-inductive object. In this section, we define a notion of sub-algebra of the pre-syntax, which we will call a goodness algebra, and explain how to combine a goodness algebra with the pre-syntax to get an inductive-inductive object \((A, B, \eta , \texttt {ext}, \texttt {inj})\). We also define a goodness algebra \(\mathbb {O}\).

Fig. 4. Goodness algebras

In Fig. 4, for each clause of the inductive-inductive specification, we define 3 things:

  1. For each sort X, a type \(\text {Ix}\,X\) giving the data X depends on; and for each operation F constructing an element of sort X, a family \(\text {Arg}\,F : Y \rightarrow \text {Ix}\,X \rightarrow \,\text {Type}\), where Y collects the arguments of the operation in the pre-syntax, such that \(\text {Arg}\,F\;y\;\phi \) gives the data needed to justify that the pre-syntax constructed by \({F}_\text {pre}\) from y has index \(\phi \). In later sections we will also write \(\text {Ix}\,X\;\delta ^G\) and \(\text {Arg}\,F\;\delta ^G\) to specify which goodness algebra we are working in.

  2. The type of the corresponding component in the goodness algebra. For sorts, this is a predicate relating \(\text {Ix}\) and the pre-syntax, while for operations, it is a function witnessing that each element of \(\text {Arg}\) gives a goodness proof relating the index \(\phi \) to the pre-syntax.

  3. A way to combine the goodness algebra with the pre-syntax to form an inductive-inductive object. For sorts, we pair the pre-syntax with a goodness proof, while for operations we apply the operation given by the goodness algebra, mimicking the construction in Fig. 3.

Comparing this definition to the construction in Sect. 2, the mutual inductive definition of \({A}_\text {good}\) and \({B}_\text {good}\) (in Fig. 3) has types equivalent to the result of dropping the dependence of \(\delta ^G.B\) on \(\delta ^G.A\) (defined in Fig. 4), going from

$$\begin{aligned} \delta ^G.B : (a : {A}_\text {pre}) \times \delta ^G.A\;\star \;a \rightarrow {B}_\text {pre} \rightarrow \,\text {Type} \text { to } {B}_\text {good} : {A}_\text {pre} \rightarrow {B}_\text {pre} \rightarrow \,\text {Type}. \end{aligned}$$

The other difference is that we replace the inductive index (call it s) in the conclusion by a fresh variable \(\phi \), with the condition \(s = \phi \) included in \(\text {Arg}\).

3.3 Niceness

In this section, we identify a property niceness that is sufficient for a goodness algebra to produce an inductive-inductive object \((A, B, \eta , \texttt {ext}, \texttt {inj})\) which satisfies the simple elimination rules, as given in Fig. 1.

To define niceness, we use the concept of equivalence, as defined by the Univalent Foundations Program [18, §4.4] (contractible fibers). Given a function \(f : A \rightarrow B\), we write \(\text {isEquiv}\,f\) (leaving A and B implicit) to denote that f is an equivalence between A and B. We will also write \(A \simeq B\) for the type of pairs of a function f with a proof that f is an equivalence.

We will say that a goodness algebra is nice if we have equivalence proofs \((\delta ^N.\eta , \delta ^N.\texttt {ext}, \delta ^N.\texttt {inj})\), with types

$$\begin{aligned} \delta ^N.\eta \;x\;\phi&: \text {isEquiv}\; (\delta ^G.\eta \;x\;\phi ),\\ \delta ^N.\texttt {ext}\;(a, b)\;\phi&: \text {isEquiv}\; (\delta ^G.\texttt {ext}\;(a, b)\;\phi ),\\ \delta ^N.\texttt {inj}\;a\;\phi&: \text {isEquiv}\; (\delta ^G.\texttt {inj}\;a\;\phi ). \end{aligned}$$

Equivalences between types are very close to equalities between types (the Univalence axiom makes this precise). If we have a nice goodness algebra, the combined data looks similar to a recursive definition:

$$\begin{aligned} \delta ^G.A&: \top \rightarrow {A}_\text {pre} \rightarrow \text {Type,}\\ \delta ^G.B&: ((a : {A}_\text {pre}) \times \delta ^G.A\;\star \;a) \rightarrow {B}_\text {pre} \rightarrow \text {Type,}\\ \delta ^G.A\;\phi \;({\eta }_\text {pre}\;x)&\simeq \text {Arg}\,\eta \;x\;\phi ,\\ \delta ^G.A\;\phi \;({\texttt {ext}}_\text {pre}\;a\;b)&\simeq \text {Arg}\,\texttt {ext}\;(a, b)\;\phi ,\\ \delta ^G.B\;\phi \;({\texttt {inj}}_\text {pre}\;a)&\simeq \text {Arg}\,\texttt {inj}\;a\;\phi . \end{aligned}$$

However, the dependence of \(\delta ^G.B\) on \(\delta ^G.A\) makes this what Nordvall Forsberg calls a “recursive-recursive” definition, and so we cannot use the standard eliminator of the pre-syntax. In Sect. 3.5, we will expend much effort to construct a solution to this system. Once we have done so, the inductive-inductive object produced by the goodness algebra will satisfy the simple elimination rules, as we show in the following lemma.

Lemma 6

(Nice goodness algebras give simple elimination rules). Given a goodness algebra \(\delta ^G\) with proof of niceness \(\delta ^N\), the inductive-inductive object \((A, B, \eta , \texttt {ext}, \texttt {inj})\) produced from \(\delta ^G\) as specified in Sect. 3.2 satisfies the simple elimination rules given in Fig. 1.


Proof. The proof is formalized in RunningExample.agda. The main idea is to induct on the pre-syntax, and exploit the equivalences provided by niceness \(\delta ^N\). In the \(\texttt {inj}\) case for example, we have a proof of \(\delta ^G.B\;\phi \;({\texttt {inj}}_\text {pre}\;a)\). But without loss of generality, we can replace that goodness proof with \(\delta ^G.\texttt {inj}\) applied to an element of \(\text {Arg}\,\texttt {inj}\;a\;\phi \), which contains both a proof \({a}_\text {good} : \delta ^G.A\;\star \;a\) and a proof that \({(a, {a}_\text {good})}\equiv {\phi }\). Using J to eliminate that equality leaves a goal to which the provided simple induction step for \(\texttt {inj}\) applies. This proof does not use cubical type theory in any essential way.

3.4 Successor Goodness Algebra

We are trying to create a nice goodness algebra by taking the limit of successive approximations, so we need a step function, which we will call S, that takes a goodness algebra \(\delta ^G\) and returns a new goodness algebra \(S\;\delta ^G\), which is closer in some sense to being nice. We do so by pattern matching on the pre-syntax to unroll one level of the recurrence equations niceness encodes.

We define by pattern matching

$$\begin{aligned} (E\;\delta ^G).A : (a : {A}_\text {pre})&\rightarrow (\phi : \text {Ix}\,A\;\delta ^G) \rightarrow (Y :\,\text {Type}) \times (Y \rightarrow \delta ^G.A\;\phi \;a),\\ (E\;\delta ^G).B : (b : {B}_\text {pre})&\rightarrow (\phi : \text {Ix}\,B\;\delta ^G) \rightarrow (Y :\,\text {Type}) \times (Y \rightarrow \delta ^G.B\;\phi \;b),\\ (E\;\delta ^G).A\;({\eta }_\text {pre}\;x)&= \lambda \phi .\, (\text {Arg}\,\eta \;\delta ^G\;x\;\phi , \delta ^G.\eta \;x\;\phi ),\\ (E\;\delta ^G).A\;({\texttt {ext}}_\text {pre}\;a\;b)&= \lambda \phi .\, (\text {Arg}\,\texttt {ext}\;\delta ^G\;(a, b)\;\phi , \delta ^G.\texttt {ext}\;(a, b)\;\phi ),\\ (E\;\delta ^G).B\;({\texttt {inj}}_\text {pre}\;a)&= \lambda \phi .\, (\text {Arg}\,\texttt {inj}\;\delta ^G\;a\;\phi , \delta ^G.\texttt {inj}\;a\;\phi ), \end{aligned}$$

which, for each b and \(\phi \), gives a new type Y together with a map back to \(\delta ^G.B\;\phi \;b\) (and similarly for A).

Then, in Fig. 5, we define the new goodness algebra \((S\;\delta ^G)\) along with projection functions \((\delta ^\pi \;\delta ^G)\) which take \(\text {Ix}\) and \(\text {Arg}\) from \((S\;\delta ^G)\) to \(\delta ^G\).

Fig. 5. Successor goodness algebra

The projection functions \((\delta ^\pi \;\delta ^G)\) consist of applying the map given by the second component of \((E\;\delta ^G)\) everywhere in sight. The sorts are then defined by the first component of \((E\;\delta ^G)\), while the operations can be defined to be the corresponding projection function itself.

Concretely, for the sort B, we define \((\delta ^\pi \;\delta ^G).B\) to map between \(\text {Ix}\,B\; (S \delta ^G)\) and \(\text {Ix}\,B\;\delta ^G\). This consists of applying the function \(((E\;\delta ^G).A\;{a}_\text {pre}\;\star \;.2)\) which we defined by pattern matching above to \({a}_\text {good}\). Then, since \((S\;\delta ^G).B\) gets an inductive index \(\phi \) in \((S\;\delta ^G)\) but \((E\;\delta ^G).B\;b\) expects an inductive index in \(\delta ^G\), we bridge the gap with the projection function \((\delta ^\pi \;\delta ^G).B\) just defined. The definition of A follows the same pattern, but \((\delta ^\pi \;\delta ^G).A\) is even simpler because \(\text {Ix}\,A\;\delta ^G = \top \) regardless of what goodness algebra we are working in.

For the operations, consider \(\texttt {inj}\). As with the sorts, we first define a projection function \((\delta ^\pi \;\delta ^G).\texttt {inj}\;a\;\phi \), which maps from \(\text {Arg}\,\texttt {inj}\;(S\;\delta ^G)\) to \(\text {Arg}\,\texttt {inj}\;\delta ^G\), and we fix up the inductive index \(\phi \) with \((\delta ^\pi \;\delta ^G).B\). For the first component of \(\text {Arg}\), we use the function given by the second component of \((E\;\delta ^G).A\) to fix up \({a}_\text {good}\). For the second component, applying the projection \((\delta ^\pi \;\delta ^G).B\) to the equality proof works out on the left-hand side because all these projection functions are doing the same thing: applying the function given by the second component of \((E\;\delta ^G)\) everywhere. Finally, we can define \((S\;\delta ^G).\texttt {inj}= (\delta ^\pi \;\delta ^G).\texttt {inj}\), because \((S\;\delta ^G).\texttt {inj}\;a\;\phi \) is supposed to have codomain

$$\begin{aligned} (S\;\delta ^G).B\;\phi \;({\texttt {inj}}_\text {pre}\;a), \end{aligned}$$

which is defined to be

$$\begin{aligned} (E\;\delta ^G).B\;({\texttt {inj}}_\text {pre}\;a)\;((\delta ^\pi \;\delta ^G).B\;\phi )\;.1, \end{aligned}$$

which reduces on \(({\texttt {inj}}_\text {pre}\;a)\) to

$$\begin{aligned} \text {Arg}\,\texttt {inj}\;\delta ^G\;a\;((\delta ^\pi \;\delta ^G).B\;\phi ), \end{aligned}$$

which is exactly the codomain of \((\delta ^\pi \;\delta ^G).\texttt {inj}\;a\;\phi \).

3.5 Limit of Goodness Algebras

We will now construct a nice goodness algebra by taking the limit of the sequence \(S^n\;\mathbb {O}\) and showing that it is nice, where \(S^n\;\mathbb {O}\) is defined by recursion on n with \(S^0\mathbb {O}= \mathbb {O}, S^{1+n}\mathbb {O}= S(S^n\;\mathbb {O})\). But first, we consider the limit of a chain of types.

Limit of Types. This subsection is formalized in Chain.agda.

In order to take the limit of successive goodness algebras, we need to know how to work with chains of types. Specifically, given \((X : \mathbb {N}\rightarrow \mathbb {I}\rightarrow \,\text {Type})\) and \(\pi : (n : \mathbb {N}) \rightarrow X\;(n+1)\;i_0 \rightarrow X\;n\;i_1\), we consider the limit given by the type

$$\begin{aligned} \text {chain.t}\;X\;\pi = (f : (n : \mathbb {N}) \rightarrow X\;n\;i_0)\times ((n : \mathbb {N}) \rightarrow {f\;n}\equiv _{X\;n}{\pi \;n\;(f\;(n+1))}). \end{aligned}$$

If we have \(x : \text {chain.t}\;X\;\pi \), then let x.p denote the second projection.
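This definition can be sketched in cubical Agda. The following is a sketch; the interval primitives \(I, i_0, i_1\) and the heterogeneous path type PathP are assumed to come from Agda's cubical builtin modules, and Σ and ℕ from the standard library:

```agda
-- Sketch of chain.t: X assigns to each n a line of types over the
-- interval, and π connects the i1-endpoint of level n to the
-- i0-endpoint of level n+1.
chain-t : (X : ℕ → I → Set) (π : (n : ℕ) → X (suc n) i0 → X n i1) → Set
chain-t X π =
  Σ ((n : ℕ) → X n i0) λ f →
    (n : ℕ) → PathP (X n) (f n) (π n (f (suc n)))
```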

This definition is designed to work well in cubical type theory, and uses the interval \(\mathbb {I}\) and native heterogeneous equality \({x}\equiv _{X}{y}\) for \(X : \mathbb {I}\rightarrow \,\text {Type}\) (we can form \(p = \lambda i.w : {x}\equiv _{X}{y}\) when \(p\;i_0 = x\), \(p\;i_1 = y\), and \(p\;i : X\;i\)). In particular, this definition allows for dependent chains without transporting over the base equality, which is problematic in cubical type theory because transport gets stuck on neutral types; instead, given

$$\begin{aligned} A&: \mathbb {N}\rightarrow \text {Type,}\qquad&f_A&: (n : \mathbb {N}) \rightarrow A\;(1+n) \rightarrow A\;n,\\ B&: (n : \mathbb {N}) \rightarrow A\;n \rightarrow \text {Type,}\qquad&f_B&: (n : \mathbb {N}) \rightarrow (a : A\;(1+n)) \rightarrow B\;(1+n)\;a \rightarrow B\;n\;(f_A\;n\;a), \end{aligned}$$

we can form

$$\begin{aligned} LA&= \text {chain.t}\;(\lambda n.\lambda i. A\;n)\;f_A&:&\text {Type,}\\ LB&= \lambda a.\text {chain.t}(\lambda n.\text {cong}(B\;n)(a.p\;n))(\lambda n. f_B\;n\;(a.p\;(1+n)\;i_0))&:&LA \rightarrow \,\text {Type} \end{aligned}$$

using \(\text {cong}(B\;n)(a.p\;n)\) which is particularly well behaved in cubical type theory.

This construction commutes with most type formers: dependent function types, dependent pair types, identity types, and constants. We also note a dependent version of the fact that the limit of a chain is equivalent to the limit of a shifted chain, substituting for Ahrens et al. [1, Lemma 12].

Lemma 7

(Dependent chain equivalent to shifted chain). Given

$$\begin{aligned} X&: \mathbb {N}\rightarrow \text {Type,}\qquad \pi _X : (n : \mathbb {N}) \rightarrow X\;(1+n) \rightarrow X\;n,\\ Y_0&: (n : \mathbb {N}) \rightarrow X\;n \rightarrow \text {Type,}\qquad Y_1 : (n : \mathbb {N}) \rightarrow X\;n \rightarrow \text {Type,}\\ f&: (n : \mathbb {N}) \rightarrow (x : X\;n) \rightarrow Y_1\;n\;x \rightarrow Y_0\;n\;x,\\ g&: (n : \mathbb {N}) \rightarrow (x : X\;(1+n)) \rightarrow Y_0\;(1+n)\;x \rightarrow Y_1\;n\;(\pi _X\;n\;x),\\ x&: \text {chain.t}\;(\lambda n.\lambda i.X\;n)\;\pi _X, \end{aligned}$$

and letting the X arguments to f and g be implicit, we can define the types

$$\begin{aligned}t&= \text {chain.t}\;(\lambda n.\text {cong}\;(Y_0\;n)\;(x.p\;n))\;(\lambda n.\lambda y.f\;n\;(g\;n\;y)),\\t^+&= \text {chain.t}\;(\lambda n.\text {cong}\;(Y_1\;n)\;(x.p\;n))\;(\lambda n.\lambda y.g\;n\;(f\;(1+n)\;y)).\end{aligned}$$

Applying f component-wise gives a function from \(t^+\) to t. This function is an equivalence.

We only use Lemma 7 when \(Y_1\;n\;(\pi _X\;n\;x) = Y_0\;(1+n)\;x\), so we may take g to be the identity, leaving \(t^+\) the shifted chain of t up to X arguments.

Limit of Goodness Algebras. Now we use the lemmas about chains to construct a nice goodness algebra, and then conclude by constructing an inductive-inductive object \((A,B,\eta ,\texttt {ext},\texttt {inj})\) that satisfies the simple elimination rules.

Lemma 8

A nice goodness algebra exists.


Proof. The sorts of the limit goodness algebra are defined as a chain, and operations act pointwise on each component of the chain. To prove that the operations are equivalences, we compose a proof that \(\text {Arg}\) commutes with chains (given by combining the lemmas about chains commuting with type formers) with a proof that for each sort, the chain given by the \((E\;(S^n\;\mathbb {O}))\) is equivalent to the chain given by \((S^n\;\mathbb {O})\) (given by Lemma 7). Since \((E\;(S^n\;\mathbb {O}))\) is defined by pattern matching to reduce to \(\text {Arg}\), the right and left sides of these equivalences agree, and we find that the operations are indeed nice. See the formalization for details.

Theorem 2

There exists an inductive-inductive object \((A, B, \eta , \texttt {ext}, \texttt {inj})\) that satisfies the simple elimination rules as defined in Fig. 1.


A nice goodness algebra exists by Lemma 8; therefore, by Lemma 6, we can construct \((A, B, \eta , \texttt {ext}, \texttt {inj})\) satisfying the simple elimination rules.

We have therefore achieved our goal: in cubical type theory, the inductive-inductive definition from Fig. 1 is constructible.

4 Related Work

The principle of simultaneously defining a type and a family over that type has been used many times before. Danielsson [9] used an inductive-inductive-recursive definition to define the syntax of dependent type theory, and Chapman [5] used an inductive-inductive definition for the same purpose. Conway’s surreal numbers [7] are given (up to a defined equivalence relation) by the inductive-inductive definition of number and less than, where less than is a relation indexed by two numbers [15, §7.1]. The HoTT book §11.3 gives a definition of the Cauchy reals as a higher inductive-inductive definition [18].

In his thesis and previous papers [15,16,17], Nordvall Forsberg studies the general theory of inductive-inductive types, axiomatizing a limited class of such definitions, and giving a set-theoretic model showing that they are consistent. He also considers various extensions such as allowing a third type indexed by the first two, allowing the second type to be indexed by two elements of the first, or combining inductive-inductive definitions with inductive-recursive definitions from Dybjer and Setzer [10].

There have been several attempts to define a general class of inductive-inductive types larger than that in Nordvall Forsberg’s thesis. Kaposi and Kovács [14] give an external syntactic description of a class which includes higher inductive-inductive types, and Altenkirch et al. [2] give a semantic description of a class including quotient inductive-inductive types, but neither gives a type of codes that can be reasoned about internally. Working with UIP, Altenkirch et al. [4] propose a class of quotient inductive-inductive types.

Nordvall Forsberg’s thesis [15] appears to give the best previously known reduction of inductive-inductive types to regular inductive types. As we have shown, Nordvall Forsberg’s approach can only be applied in intensional type theory if UIP holds. Furthermore, the equations for both Nordvall Forsberg’s approach and our approach hold only propositionally.

Many other structures have been reduced to plain inductive types. Our construction of inductive-inductive types can be seen as an adaptation of the technique of Ahrens et al. [1], where coinductive types are constructed from \(\mathbb {N}\) by taking a limit. Indexed inductive types (which are used in Nordvall Forsberg’s construction) are constructed from plain inductive types by Altenkirch et al. [3], with good computational properties (provided an identity type that satisfies J strictly). Finally, small induction-recursion is reduced to plain indexed inductive types by Hancock et al. [11].

5 Conclusions and Future Work

In this paper, we have:

  1. Shown that the construction of inductive-inductive types given by Nordvall Forsberg implies UIP.

  2. Given an alternative construction of one particular inductive-inductive type in cubical type theory, which is compatible with Homotopy Type Theory.

We claim that the construction of our specific running example generalizes straightforwardly to other inductive-inductive types. To support this claim, we have formalized the construction of a number of other examples, including types with non-finitary constructors and indices (see the GitHub repository referenced in the introduction).

Going forward, we would like to investigate

  • An internal definition of inductive-inductive specifications in HoTT. Early experiments suggest that this requires surmounting difficulties with increasingly complex coherence conditions, similar to those encountered when defining semi-simplicial sets; cf. Herbelin [12].

  • Extending the proof given here to construct the general elimination rules. The general elimination rules were defined in Nordvall Forsberg [15], but that formulation relies on either strict computation rules or extensional type theory to be well typed. Kaposi and Kovács [14] give equivalent rules which are well typed in intensional type theory.

  • Identifying what needs to be added for the simple elimination rules to have the expected computational behavior. Given the similarity of the construction methods, this would hopefully also allow the construction of coinductive types with nice computational behavior; cf. Ahrens et al. [1].

  • In the opposite direction from the previous point, rewriting the construction given here in Coq + Function Extensionality. Although the elimination rules would have poor computational behavior, this would make inductive-inductive types usable in Coq without requiring any change to Coq itself, while remaining compatible with HoTT. In particular, cubical type theory makes the proofs in Sect. 3.5 simpler, but we speculate that axiomatic function extensionality is sufficient.