First-order logic in the Medvedev lattice

Kolmogorov introduced an informal calculus of problems in an attempt to provide a classical semantics for intuitionistic logic. This was later formalised by Medvedev and Muchnik as what are now called the Medvedev and Muchnik lattices. However, they only formalised this for propositional logic, while Kolmogorov also discussed the universal quantifier. We extend the work of Medvedev to first-order logic, using the notion of a first-order hyperdoctrine from categorical logic to construct a structure which we will call the Medvedev hyperdoctrine. We also study the intermediate logic that the Medvedev hyperdoctrine gives us, focusing in particular on subintervals of the Medvedev hyperdoctrine in an attempt to obtain an analogue of Skvortsova's result that there is a factor of the Medvedev lattice characterising intuitionistic propositional logic. Finally, we consider Heyting arithmetic in the Medvedev hyperdoctrine and prove an analogue of Tennenbaum's theorem on computable models of arithmetic.


Introduction
In [10], Kolmogorov introduced an interpretation of intuitionistic logic through the use of problems (or Aufgaben). In this paper, he argued that proving a formula in intuitionistic logic is very much like solving a problem. The exact definition of a problem is kept informal, but he does define the necessary structure on problems corresponding to the logical connectives. His ideas were later formalised by Medvedev [14] as the Medvedev lattice, and a variation of this was introduced by Muchnik [15].
However, Medvedev and Muchnik only studied propositional logic, while Kolmogorov also briefly discussed the universal quantifier in his paper: "Im allgemeinen bedeutet, wenn x eine Variable (von beliebiger Art) ist und a(x) eine Aufgabe bezeichnet, deren Sinn von dem Werte von x abhängt, (x)a(x) die Aufgabe "eine allgemeine Methode für die Lösung von a(x) bei jedem einzelnen Wert von x anzugeben". Man soll dies so verstehen: Die Aufgabe (x)a(x) zu lösen, bedeutet, imstande sein, für jeden gegebenen Einzelwert x_0 von x die Aufgabe a(x_0) nach einer endlichen Reihe von im voraus (schon vor der Wahl von x_0) bekannten Schritten zu lösen." In the English translation [11] this reads as follows: "In the general case, if x is a variable (of any kind) and a(x) denotes a problem whose meaning depends on the values of x, then (x)a(x) denotes the problem "find a general method for solving the problem a(x) for each specific value of x". This should be understood as follows: the problem (x)a(x) is solved if the problem a(x_0) can be solved for each given specific value x_0 of the variable x by means of a finite number of steps which are fixed in advance (before x_0 is set)." It is important to note that, when Kolmogorov says that the steps should be fixed before x_0 is set, he probably does not mean that we should have one solution that works for every x_0; instead, the solution is allowed to depend on x_0, but it should do so uniformly. This belief is supported by one of the informal examples of a problem he gives: "given one solution of ax^2 + bx + c = 0, give the other solution". Of course there is no procedure to transform one solution to the other one which does not depend on the parameters a, b and c; however, there is one which does so uniformly.
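Kolmogorov's quadratic example can be made concrete; the following sketch (the function name and sample values are ours, not from the paper) exhibits one fixed method which is uniform in the parameters a, b and c:

```python
# Kolmogorov's example made concrete: there is no fixed number that is always
# "the other solution" of a*x^2 + b*x + c = 0, but there is a single method,
# uniform in a, b and c, transforming one solution into the other.

def other_root(a, b, c, x1):
    """Given one root x1 of a*x^2 + b*x + c = 0 (with a != 0), return the
    other root, using the fact that the two roots sum to -b/a."""
    return -b / a - x1

print(other_root(1, -3, 2, 1.0))  # x^2 - 3x + 2 has roots 1 and 2 -> 2.0
```

The returned value depends on the parameters, but the transformation itself is one procedure fixed in advance, which is exactly the uniformity Kolmogorov asks of the universal quantifier.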
More evidence can be found in Kolmogorov's discussion of the law of the excluded middle, where he says that a solution of the problem ∀a(a ∨ ¬a), where a quantifies over all problems, should be "a general method which for any problem a allows one either to find its solution or to derive a contradiction from the existence of such a solution" and that "unless the reader considers himself omniscient, he will perhaps agree that [this formula] cannot be in the list of problems that he has solved". In other words, a solution of ∀a(a ∨ ¬a) should be a solution of a ∨ ¬a for every problem a which is allowed to depend on a, and it should be uniform because we are not omniscient.
In this paper, we will formalise this idea in the spirit of Medvedev. To do this, we will use the notion of a first-order hyperdoctrine from categorical logic, which naturally extends the notion of Brouwer algebras used to give algebraic semantics for propositional intuitionistic logic, to first-order intuitionistic logic. We will give a short overview of the necessary definitions and properties in section 2. After that, in section 3 we will introduce the ω-Medvedev lattice, which combines the idea of Medvedev that 'solving' should be interpreted as 'computing' with the idea of Kolmogorov that 'solving' should be uniform in the variables. Using this ω-Medvedev lattice, we will introduce the Medvedev hyperdoctrine in section 4. Next, in section 5 we study the intermediate logic which this Medvedev hyperdoctrine gives us, and we start looking at subintervals of it to try and obtain analogous results to Skvortsova's [20] remarkable result that intuitionistic propositional logic can be obtained from a factor of the Medvedev lattice. In section 6 we show that even in these intervals we cannot get every intuitionistic theory, by showing that there is an analogue of Tennenbaum's theorem [22] that every computable model of Peano arithmetic is the standard model. Finally, in section 7 we prove a partial positive result on which theories can be obtained in subintervals of the Medvedev lattice, through a characterisation using Kripke models.
Recently, Basu and Simpson [2] have independently studied an interpretation of higher-order intuitionistic logic based on the Muchnik lattice. One of the main differences between our approach and their approach is that our approach follows Kolmogorov's philosophy that the interpretation of the universal quantifier should depend uniformly on the variable. On the other hand, in their approach, depending on the view taken either the interpretation does not depend on the quantified variable at all or does so non-uniformly (as we will discuss below in Remark 2.4). Of course, an important advantage of their approach is that it is suitable for higher-order logic, while we can only deal with first-order logic. Another important difference between our work and theirs is that we start from the Medvedev lattice, while they take the Muchnik lattice as their starting point.
Our notation is mostly standard. We let ω denote the natural numbers and ω^ω the Baire space of functions from ω to ω. We denote concatenation of strings σ and τ by σ⌢τ. For functions f, g ∈ ω^ω we denote by f ⊕ g the join of the functions f and g, i.e. (f ⊕ g)(2n) = f(n) and (f ⊕ g)(2n + 1) = g(n). We let ⟨a_1, . . . , a_n⟩ denote a fixed computable bijection between ω^n and ω. For any set A ⊆ ω^ω we denote by A̅ its complement in ω^ω. When we say that a set is countable, we include the possibility that it is finite. We denote the join operation in lattices by ⊕ and the meet operation in lattices by ⊗. A Brouwer algebra is a bounded distributive lattice together with an implication operation → such that x ⊕ y ≥ z if and only if y ≥ x → z. For unexplained notions from computability theory we refer to Odifreddi [16]; for the Muchnik and Medvedev lattices, to the surveys of Sorbi [21] and Hinman [6]; for lattice theory, to Balbes and Dwinger [1]; and finally, for unexplained notions about Kripke semantics, to Chagrov and Zakharyaschev [3] and Troelstra and van Dalen [23].
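The join operation on Baire space just defined can be sketched as follows (elements of ω^ω are modelled as Python functions; the helper names are ours):

```python
# A sketch of the join f ⊕ g on Baire space: (f ⊕ g)(2n) = f(n) and
# (f ⊕ g)(2n + 1) = g(n), interleaving the two functions.

def join(f, g):
    """Return the join f ⊕ g, with f on even and g on odd arguments."""
    def h(n):
        return f(n // 2) if n % 2 == 0 else g(n // 2)
    return h

f = lambda n: 10 * n      # f(n) = 10n
g = lambda n: 10 * n + 1  # g(n) = 10n + 1
h = join(f, g)
print([h(n) for n in range(6)])  # -> [0, 1, 10, 11, 20, 21]
```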

Categorical semantics for IQC
In this section we will discuss the notion of first-order hyperdoctrine, as formulated by Pitts [17], based on the important notion of hyperdoctrine introduced by Lawvere [13]. These first-order hyperdoctrines can be used to give sound and complete categorical semantics for IQC. Our notion of first-order logic in the Medvedev lattice will be based on this, so we will discuss the basic definitions and the basic properties before we proceed with our construction. We use the formulation from Pitts [19] (but we use Brouwer algebras instead of Heyting algebras, because the Medvedev lattice is normally presented as a Brouwer algebra).
Let us first give the definition of a first-order hyperdoctrine. After that we will discuss an easy example and discuss how first-order hyperdoctrines interpret first-order intuitionistic logic. We will not discuss all details and the full motivation behind this definition, instead referring the reader to the works by Pitts [17,19]. However, we will discuss some of the motivation behind this definition in Remark 2.9 below.

Definition 2.1. ([19, Definition 2.1]) Let C be a category such that for every object X ∈ C and every n ∈ ω, the n-fold product X^n of X exists. A first-order hyperdoctrine P over C is a contravariant functor P : C^op → Poset from C into the category Poset of partially ordered sets and order homomorphisms, satisfying:
(i) For each object X ∈ C, the partially ordered set P(X) is a Brouwer algebra;
(ii) For each morphism f : X → Y in C, the order homomorphism P(f) : P(Y) → P(X) is a homomorphism of Brouwer algebras;
(iii) For each diagonal morphism ∆_X : X → X × X in C (i.e. a morphism such that π_1 ∘ ∆_X = π_2 ∘ ∆_X = 1_X), the upper adjoint to P(∆_X) at the bottom element 0 ∈ P(X) exists. In other words, there is an element =_X ∈ P(X × X) such that for all A ∈ P(X × X) we have P(∆_X)(A) = 0 if and only if A ≤ =_X;
(iv) For each product projection π : Γ × X → Γ in C, the order homomorphism P(π) : P(Γ) → P(Γ × X) has both an upper adjoint (∃x)_Γ and a lower adjoint (∀x)_Γ, i.e. for all A ∈ P(Γ × X) and B ∈ P(Γ):

P(π)(B) ≤ A if and only if B ≤ (∃x)_Γ(A);
(∀x)_Γ(A) ≤ B if and only if A ≤ P(π)(B).

Moreover, these adjoints are natural in Γ, i.e. given s : Γ′ → Γ in C we have (∃x)_{Γ′} ∘ P(s × 1_X) = P(s) ∘ (∃x)_Γ and (∀x)_{Γ′} ∘ P(s × 1_X) = P(s) ∘ (∀x)_Γ. This condition is called the Beck-Chevalley condition. We will also denote P(f) by f*.
Remark 2.2. We emphasise that the adjoints (∃x)_Γ and (∀x)_Γ only need to be order homomorphisms, and that they do not need to preserve the lattice structure. This should not come as a surprise: after all, the universal quantifier does not distribute over logical disjunction, and neither does the existential quantifier distribute over conjunction.
Example 2.3. Let B be a complete Brouwer algebra. Then B induces a first-order hyperdoctrine P over the category Set of sets and functions as follows. We let P(X) be B^X, which is again a Brouwer algebra under coordinate-wise operations. Furthermore, for each function f : X → Y we let P(f) be the function which sends (B_y)_{y∈Y} to the sequence (A_x)_{x∈X} given by A_x = B_{f(x)}. The equality predicates =_X are given by (=_X)_{(x,y)} = 0 if x = y and (=_X)_{(x,y)} = 1 otherwise. For the adjoints we use the fact that B is complete: given B ∈ P(Γ × X) we let ((∃x)_Γ(B))_γ be the infimum of the B_{(γ,x)} over all x ∈ X, and ((∀x)_Γ(B))_γ their supremum. Then P is directly verified to be a first-order hyperdoctrine.
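Assuming the adjoints in this example are given pointwise by infima and suprema, i.e. ((∃x)_Γ(C))_γ = inf_{x∈X} C_{(γ,x)} and ((∀x)_Γ(C))_γ = sup_{x∈X} C_{(γ,x)}, the adjunction conditions of Definition 2.1(iv) can be checked in one line each:

```latex
% Sketch of the adjunction checks, with the adjoints given pointwise.
\begin{align*}
  B \leq (\exists x)_\Gamma(C)
    &\iff B_\gamma \leq \inf_{x \in X} C_{(\gamma,x)} \ \text{for all } \gamma \\
    &\iff B_\gamma \leq C_{(\gamma,x)} \ \text{for all } \gamma, x
     \iff P(\pi)(B) \leq C,\\
  (\forall x)_\Gamma(C) \leq B
    &\iff \sup_{x \in X} C_{(\gamma,x)} \leq B_\gamma \ \text{for all } \gamma \\
    &\iff C_{(\gamma,x)} \leq B_\gamma \ \text{for all } \gamma, x
     \iff C \leq P(\pi)(B).
\end{align*}
```

Completeness of B is used precisely to guarantee that these infima and suprema exist.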
Remark 2.4. A special case of Example 2.3 is when we take B to be the Muchnik lattice. In that case we obtain a fragment of the first-order part of the structure studied by Basu and Simpson [2] mentioned in the introduction. Thus, if we have a sequence of problems B_0, B_1, . . . , then ∀x(B(x)) is interpreted as the supremum of the B_i in the Muchnik lattice, which is represented by the mass problem {f ∈ ω^ω | f computes an element of every B_i}; in other words, a solution of the problem ∀x(B(x)) computes a solution of every B_i, but does so non-uniformly. If, as in [2], we take each B_i to be a canonical representative of its Muchnik degree, i.e. we take B_i to be upwards closed under Turing reducibility, then we have that ∀x(B(x)) is interpreted as ⋂_{i∈ω} B_i, i.e. a solution of the problem ∀x(B(x)) is a single solution that solves every B_i. Thus, depending on the view one has on the Muchnik lattice, either the solution is allowed to depend on x but non-uniformly, or it is not allowed to depend on x at all.
Next, let us discuss how first-order intuitionistic logic can be interpreted in firstorder hyperdoctrines. Most of the literature on this subject deals with multi-sorted first-order logic; however, to keep the notation easy and because we do not intend to discuss multi-sorted logic in the Medvedev case, we will give the definition only for single-sorted first-order logic.
Definition 2.5. (Pitts [17, p. B2]) Let P be a first-order hyperdoctrine over C and let Σ be a first-order language. Then a structure M for Σ in P consists of: (i) an object M ∈ C (the universe), (ii) a morphism f M : M n → M in C for every n-ary function symbol f in Σ, (iii) an element R M ∈ P(M n ) for every n-ary relation in Σ.
Case (iii) is probably the most interesting part of this definition, since it says that elements of P(M n ) should be seen as generalised n-ary predicates on M .
Definition 2.6. ([17, Table 6.4]) Let t be a first-order term in a language Σ and let M be a structure in a first-order hyperdoctrine P. Let x = (x_1, . . . , x_n) be a context (i.e. an ordered list of distinct variables) containing all free variables in t. Then we define the interpretation ⟦t(x)⟧_M : M^n → M inductively in the usual way. Thus, we identify terms with the function mapping valuations of the variables occurring in the term to the value of the term when evaluated at that valuation.

Definition 2.7. ([17, Table 8.2]) Let ϕ be a first-order formula in a language Σ and let M be a structure in a first-order hyperdoctrine P. Let x = (x_1, . . . , x_n) be a context containing all free variables in ϕ. Then we define the interpretation ⟦ϕ(x)⟧_M ∈ P(M^n) inductively; in particular: (ix) If ϕ is ∀y.ψ, then ⟦ϕ(x)⟧_M is defined as (∀y)_{M^n}(⟦ψ(x, y)⟧_M).

Definition 2.8. Let ϕ be a first-order formula in a context x = (x_1, . . . , x_n), and let M be a structure in a first-order hyperdoctrine P. Then we say that ϕ(x) is satisfied if ⟦ϕ(x)⟧_M = 0 in P(M^n). We let the theory of M be the set of sentences which are satisfied (in the empty context), and we denote this by Th(M). Given a language Σ, we let the theory of P be the intersection of the theories of all structures M for Σ in P, and we denote this theory by Th(P).
Remark 2.9. Let us make some remarks on the definitions given above.
• As mentioned above, we identify terms t(x) with functions ⟦t(x)⟧_M, and m-ary predicates R(y_1, . . . , y_m) are elements of P(M^m). Since we required our category C to contain n-fold products, if we have terms t_1, . . . , t_m, then ⟦R(t_1(x), . . . , t_m(x))⟧_M = P((⟦t_1(x)⟧_M, . . . , ⟦t_m(x)⟧_M))(R^M). This should be seen as the substitution of t_1(x), . . . , t_m(x) for y_1, . . . , y_m, which explains cases (i) and (ii). • Quantifiers are interpreted as adjoints, which is an idea due to Lawvere. For example, for the universal quantifier this says that ⟦∀x.ϕ⟧ ≤ ⟦ψ⟧ if and only if ⟦ϕ⟧ ≤ P(π)(⟦ψ⟧), where we assume x does not occur freely in ψ. Reading ≥ as ⊢, the two implications are essentially the introduction and elimination rules for the universal quantifier. • The Beck-Chevalley condition is necessary to ensure that substitutions commute with the quantifiers (modulo restrictions on bound variables).
Let us introduce a notational convention: when the structure is clear from the context, we will omit the subscript M in ⟦−⟧_M. Having finished giving the definition of first-order hyperdoctrines, let us just mention that they are sound and complete for intuitionistic first-order logic IQC.

The ω-Medvedev lattice
In this section, we will introduce an extension of the Medvedev lattice, which we will need to define our first-order hyperdoctrine based on the Medvedev lattice. As mentioned in the introduction, Kolmogorov mentioned in his paper that solving the problem ∀xϕ(x) is the same as solving the problem ϕ(x) for all x, uniformly in x. We formalise this in the spirit of Medvedev and Muchnik in the following way.
Definition 3.1. An ω-mass problem is an element (A_i)_{i∈ω} ∈ (P(ω^ω))^ω. Given two ω-mass problems (A_i)_{i∈ω}, (B_i)_{i∈ω}, we say that (A_i)_{i∈ω} (ω-)Medvedev reduces to (B_i)_{i∈ω} (notation: (A_i)_{i∈ω} ≤_{Mω} (B_i)_{i∈ω}) if there is a partial Turing functional Φ such that for every n ∈ ω and every g ∈ B_n we have Φ(n⌢g) ∈ A_n. If both (A_i)_{i∈ω} ≤_{Mω} (B_i)_{i∈ω} and (B_i)_{i∈ω} ≤_{Mω} (A_i)_{i∈ω} we say that (A_i)_{i∈ω} and (B_i)_{i∈ω} are (ω-)Medvedev equivalent (notation: (A_i)_{i∈ω} ≡_{Mω} (B_i)_{i∈ω}). We call the equivalence classes of ω-Medvedev equivalence the ω-Medvedev degrees. We call this set of ω-Medvedev degrees the ω-Medvedev lattice.

Definition 3.2. Let (A_i)_{i∈ω}, (B_i)_{i∈ω} be ω-mass problems. We say that (A_i)_{i∈ω} (ω-)Muchnik reduces to (B_i)_{i∈ω} (notation: (A_i)_{i∈ω} ≤_{Mwω} (B_i)_{i∈ω}) if for every sequence (g_i)_{i∈ω} with g_i ∈ B_i there exists a partial Turing functional Φ such that for every n ∈ ω we have Φ(n⌢g_n) ∈ A_n. If both (A_i)_{i∈ω} ≤_{Mwω} (B_i)_{i∈ω} and (B_i)_{i∈ω} ≤_{Mwω} (A_i)_{i∈ω} we say that (A_i)_{i∈ω} and (B_i)_{i∈ω} are (ω-)Muchnik equivalent (notation: (A_i)_{i∈ω} ≡_{Mwω} (B_i)_{i∈ω}). We call the equivalence classes of ω-Muchnik equivalence the ω-Muchnik degrees. We call this set of ω-Muchnik degrees the ω-Muchnik lattice.
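To illustrate the uniformity requirement of ω-Medvedev reducibility, here is a toy (hypothetical, not from the paper) reduction, modelling the input n⌢g as a pair (n, g): take B_n to be the problem "produce a function whose value at 0 is n" and A_n the problem "produce a function whose value at 0 is 2n". One fixed functional works for every n simultaneously:

```python
# A toy ω-Medvedev reduction: the SAME functional phi must work for every
# index n, which is the uniformity requirement of Definition 3.1.
# B_n = "produce g with g(0) = n"; A_n = "produce f with f(0) = 2n".

def phi(n, g):
    """A single fixed functional: from n and a solution g of B_n,
    uniformly compute a solution of A_n."""
    def f(k):
        # Double the value at 0; copy all other values.
        return 2 * g(0) if k == 0 else g(k)
    return f

g3 = lambda k: 3 if k == 0 else 0  # a solution of B_3
print(phi(3, g3)(0))  # -> 6, so phi(3, g3) solves A_3
```

In the ω-Muchnik case of Definition 3.2 the functional may instead be chosen after the whole sequence of solutions is fixed, which is exactly the loss of uniformity.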
The next proposition tells us that the ω-Medvedev lattice is a Brouwer algebra, like the Medvedev lattice.

Proposition 3.3. The ω-Medvedev lattice M_ω is a Brouwer algebra.

Proof. We claim that M_ω is a Brouwer algebra under the component-wise operations on M, i.e. the operations induced by

((A_i)_{i∈ω} ⊕ (B_i)_{i∈ω})_n = A_n ⊕ B_n, ((A_i)_{i∈ω} ⊗ (B_i)_{i∈ω})_n = A_n ⊗ B_n and ((A_i)_{i∈ω} → (B_i)_{i∈ω})_n = A_n →_M B_n.

The proof of this is mostly analogous to the proof for the Medvedev lattice, so we will only give the proof for the implication. Let us first show that (A_i)_{i∈ω} ⊕ ((A_i)_{i∈ω} → (B_i)_{i∈ω}) ≥_{Mω} (B_i)_{i∈ω}; this holds because the corresponding reduction for M is uniform in the index coded into the oracle. Conversely, let (C_i)_{i∈ω} be such that (A_i)_{i∈ω} ⊕ (C_i)_{i∈ω} ≥_{Mω} (B_i)_{i∈ω}. Let e ∈ ω be such that Φ_e witnesses this fact. Let ϕ be a computable function sending n to an index for the functional mapping h to Φ_e(n⌢h). Let Ψ be the functional sending n⌢f to ϕ(n)⌢f. Then Ψ witnesses (C_i)_{i∈ω} ≥_{Mω} (A_i)_{i∈ω} → (B_i)_{i∈ω}.

However, it turns out that this fails for the ω-Muchnik lattice: it is still a distributive lattice, but it is not a Brouwer algebra.
Proposition 3.4. The ω-Muchnik lattice is a distributive lattice, but not a Brouwer algebra. In particular, it is not a complete lattice.
Proof. It is easy to see that M_wω is a distributive lattice under the same operations as M_ω. Towards a contradiction, assume M_wω is a Brouwer algebra under some implication →. Let f, g ∈ ω^ω be two functions of incomparable Turing degree. Let (A_i)_{i∈ω} be given by [...]. Then, for every j ∈ ω we have [...]. So, since we assumed → makes M_wω into a Brouwer algebra, we know that every [...].

Finally, let us show that the ω-Medvedev and ω-Muchnik lattices are extensions of the Medvedev and Muchnik lattices, in the sense that the latter embed into the former. Furthermore, we show that the countable products of M and M_w are quotients of M_ω and M_wω.

Proof. Direct, using the fact that the diagonal of M_ω, i.e. {(A_i)_{i∈ω} ∈ M_ω | ∀n, m(A_n = A_m)}, is directly seen to be isomorphic to M. The same holds for M_wω and M_w.

The Medvedev hyperdoctrine
In this section, we will introduce our first-order hyperdoctrine based on M and M_ω, which we will call the Medvedev hyperdoctrine P_M. We will take the category C to be the category with objects {1}, {1, 2}, . . . and ω, and with the computable functions between them as morphisms. We will define P_M(ω) to be M_ω. Now, let us look at how to define P_M(α) = α* for functions α : ω → ω: we let α* be given by (α*((A_i)_{i∈ω}))_n = A_{α(n)}.
That α * is a Brouwer algebra homomorphism follows easily from the fact that the operations on M ω are component-wise.
Next, we will show that for every computable α we have that α* has both upper and lower adjoints, which will certainly suffice to satisfy condition (iv) of Definition 2.1.

Proposition 4.3. For every computable α : ω → ω, the homomorphism α* : M_ω → M_ω has both an upper adjoint ∃_α and a lower adjoint ∀_α.

Proof. Let us first consider the upper adjoint. We define

(∃_α((B_i)_{i∈ω}))_j = {n⌢f | α(n) = j and f ∈ B_n}.

We claim that α*((A_i)_{i∈ω}) ≤_{Mω} (B_i)_{i∈ω} if and only if (A_i)_{i∈ω} ≤_{Mω} ∃_α((B_i)_{i∈ω}). First, assume α*((A_i)_{i∈ω}) ≤_{Mω} (B_i)_{i∈ω}, say through Φ. Then the functional sending j⌢n⌢f to Φ(n⌢f) witnesses (A_i)_{i∈ω} ≤_{Mω} ∃_α((B_i)_{i∈ω}), since Φ(n⌢f) ∈ (α*((A_i)_{i∈ω}))_n = A_{α(n)} = A_j for every n⌢f ∈ (∃_α((B_i)_{i∈ω}))_j. Conversely, assume (A_i)_{i∈ω} ≤_{Mω} ∃_α((B_i)_{i∈ω}); say through Ψ. Let Φ be the functional sending i⌢h to Ψ(α(i)⌢i⌢h). Let n ∈ ω. We claim that Φ(n⌢f) ∈ (α*((A_i)_{i∈ω}))_n for every f ∈ B_n. Indeed, let f ∈ B_n. Then n⌢f ∈ (∃_α((B_i)_{i∈ω}))_{α(n)}. Thus Ψ(α(n)⌢n⌢f) ∈ A_{α(n)} = (α*((A_i)_{i∈ω}))_n, as desired.

Next, we consider the lower adjoint. We define

(∀_α((A_i)_{i∈ω}))_j = {h ∈ ω^ω | h^[n] ∈ A_n for every n with α(n) = j}.

Then ∀_α is a well-defined function on M_ω, as can be proven in a similar way as for ∃_α. We claim that ∀_α((A_i)_{i∈ω}) ≤_{Mω} (B_i)_{i∈ω} if and only if (A_i)_{i∈ω} ≤_{Mω} α*((B_i)_{i∈ω}). First, assume (A_i)_{i∈ω} ≤_{Mω} α*((B_i)_{i∈ω}), say through Φ. Then the functional sending j⌢g to the function h with h^[n] = Φ(n⌢g) for every n with α(n) = j (and h^[n] constantly 0 otherwise) witnesses ∀_α((A_i)_{i∈ω}) ≤_{Mω} (B_i)_{i∈ω}. Conversely, assume ∀_α((A_i)_{i∈ω}) ≤_{Mω} (B_i)_{i∈ω}, say through Ψ. Let n ∈ ω and let g ∈ (α*((B_i)_{i∈ω}))_n = B_{α(n)}. Then Ψ(α(n)⌢g) ∈ (∀_α((A_i)_{i∈ω}))_{α(n)}. Since clearly α(n) = α(n), it follows that (Ψ(α(n)⌢g))^[n] ∈ A_n. Again this reduction is uniform in n and g, so (A_i)_{i∈ω} ≤_{Mω} α*((B_i)_{i∈ω}).

Remark 4.4. Note that, if α : ω → ω is the projection to the first coordinate (i.e. the function mapping ⟨n, m⟩ to n), then ∃_α and ∀_α are, up to the identification of ω × ω with ω, exactly the adjoints (∃x)_Γ and (∀x)_Γ required by condition (iv) of Definition 2.1. We will tacitly identify these two.

We now generalise this notion to include all the functions in our category C. We will define P_M({1, . . . , n}) to be the n-fold product M^n. [...] Thus, everything we have done above leads us to the following definition.

Definition 4.8. The Medvedev hyperdoctrine is the functor P_M over the category C given by P_M({1, . . . , n}) = M^n, P_M(ω) = M_ω and P_M(α) = α*.

Theorem 4.9. The Medvedev hyperdoctrine P_M is a first-order hyperdoctrine.

Proof. First note that C is closed under all n-fold products, because ω^n is isomorphic to ω through some fixed computable bijection ⟨a_1, . . . , a_n⟩, and similarly {1, . . . , m}^n is isomorphic to {1, . . . , m^n}.
We now verify the conditions from Definition 2.1. Condition (i) follows from Proposition 3.3. Condition (ii) follows from Proposition 4.6. For condition (iii), use the fact that diagonal morphisms are computable together with Proposition 4.7. From the same proposition we know that the projections have lower and upper adjoints. Thus, to verify condition (iv) we only need to check that the Beck-Chevalley condition holds for them. First, consider the diagram for the existential quantifier; we need to show that it commutes.
We have one inequality through the functional sending i⌢k⌢⟨n, m⟩⌢f to ⟨s(n), m⟩⌢m⌢f, and the opposite inequality holds through the functional sending n⌢⟨l, m⟩⌢k⌢f to m⌢⟨n, m⟩⌢f.
Next, consider the diagram for the universal quantifier; we need to show that this also commutes.
Again by Remark 4.4 we obtain the desired equality.
For future reference, we state the following lemma which directly follows from the formula for the upper adjoint given in the proof of Proposition 4.3.
Lemma 4.10. For any X, the equality =_X in P_M is given by (=_X)_{(x,y)} = {x⌢f | f ∈ ω^ω} if x = y, and (=_X)_{(x,y)} = ∅ (the top element) if x ≠ y. Proof. From the formula given for the upper adjoint in the proof of Proposition 4.3, and the definition of =_X in a first-order hyperdoctrine in Definition 2.1.
Finally, let us give an easy example of a structure in P_M; more examples will follow in the next sections. [...] Thus, the Medvedev hyperdoctrine can be seen as an extension of Kleene's second realisability model with computable realisers. There is also a topos which can be seen as an extension of this model, namely the Kleene-Vesley topos, see e.g. van Oosten [24]. However, this topos does not follow Kolmogorov's philosophy that the interpretation of the universal quantifier should be uniform in the variable. On the other hand, a topos can interpret much more than just first-order logic.
Note that our category C only contains countable sets. On one hand this could be seen as a restriction, but on the other hand this should not come as a surprise since we are dealing with computability. That it is not that much of a restriction is illustrated by the rich literature on computable model theory dealing with computable, countable models.

Theory of the Medvedev hyperdoctrine
Given a first-order language Σ, we wonder what the theory of P_M is. In particular, we want to know: is the theory of P_M equal to first-order intuitionistic logic IQC? To this, the answer is 'no' in general: it is well-known that the weak law of the excluded middle ¬ϕ ∨ ¬¬ϕ holds in the Medvedev lattice; therefore ¬ϕ ∨ ¬¬ϕ holds for sentences in P_M. However, for the Medvedev lattice we have the following remarkable result by Skvortsova: there is an A ∈ M such that the propositional theory of the factor M/A is exactly intuitionistic propositional logic IPC. Thus, Skvortsova's result tells us that there is a principal factor of the Medvedev lattice which captures exactly intuitionistic propositional logic. There is a natural way to extend principal factors to the Medvedev hyperdoctrine: given A in M, let P_M/A be as in Definition 4.8, but with M replaced by M/A, and M_ω replaced by M_ω/(A, A, . . . ). It is directly verified that P_M/A is also a first-order hyperdoctrine.
Thus, there is a first-order analogue to the problem studied by Skvortsova in the propositional case: is there an A ∈ M such that the sentences that hold in P_M/A are exactly those that are deducible in IQC?
First, note that equality is always decidable (i.e. ∀x, y(x = y ∨ ¬x = y) holds) by the analogue of Lemma 4.10 (with ω^ω replaced by A). So, can we get the theory to equal IQC with decidable equality? Surprisingly, the answer turns out to be 'no' in general. Recall that for a poset X and x, y ∈ X with x ≤ y we have that the interval [x, y]_X denotes the set of elements z ∈ X with x ≤ z ≤ y. If B is a Brouwer algebra then so is [x, y]_B, with lattice operations as in B and implication given by a →_{[x,y]} b = (a →_B b) ⊕ x. If x = 0, this gives us exactly the factor B/y. We can use this to introduce a specific kind of intervals in the Medvedev hyperdoctrine: given B ≤_M A in M, let [B, A]_{P_M} be defined as P_M, but with each copy of M replaced by [B, A]_M and M_ω replaced by the corresponding interval of M_ω. It can be directly verified that this is a first-order hyperdoctrine; if one is not convinced, this also follows from the more general Theorem 5.6 below.
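Taking the interval implication to be a →_{[x,y]} b = (a →_B b) ⊕ x (our reading of the construction above), the Brouwer adjunction for [x, y]_B is a two-line check, using that every c ∈ [x, y] satisfies c ≥ x:

```latex
% Sketch of the adjunction check for the interval Brouwer algebra [x, y]_B.
\begin{align*}
  a \oplus c \geq b &\iff c \geq a \to_B b
      && \text{(adjunction in } B\text{)}\\
                    &\iff c \geq (a \to_B b) \oplus x
      && \text{(since } c \geq x\text{)}.
\end{align*}
```

Note that (a →_B b) ⊕ x indeed lies in [x, y], because a →_B b ≤ b ≤ y.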
The axiom schema CD, consisting of all formulas of the form ∀z(ϕ(z) ∨ ψ) → ∀z(ϕ(z)) ∨ ψ (where z is not free in ψ), has been studied because it characterises the Kripke frames with constant domain. Our first counterexample is based on the fact that a specific instance of this schema holds in every structure in an interval of M with finite universe. Note that the schema CD can be refuted in P_M, as long as we allow models over infinite structures: namely, let ϕ(z) be S(z) and ψ be R. We build a structure M with ω as universe. Let A ∈ ω^ω be such that its columns A^[n] are computably independent. Let S_n = {A^[n+1]} and let R = {A^[0]}. Towards a contradiction, assume CD holds in this structure and let Φ witness ⟦∀z(S(z) ∨ R)⟧ ≥_M ⟦∀z(S(z)) ∨ R⟧. Now let f be the element of ⟦∀z(S(z) ∨ R)⟧ obtained by choosing, for every n, the solution A^[n+1] of S(n); since f does not compute A^[0], we have Φ(f)(0) = 0. Let u be the use of this computation and let g be the function such that g^[n] = f^[n] for n < u and g^[n] = A^[0] for n ≥ u. Then Φ(g)(0) = 0, so g computes A^[u+1], contradicting the computable independence of the columns of A.
Thus, one might object to our counterexample for being too unnatural by restricting the universe to be finite. However, the next example shows that even without this restriction we can find a counterexample.

Proposition 5.4. In every structure in an interval [B, A]_{P_M}, the formula ¬(∀x(S(x) ∨ ¬S(x)) ∧ ¬∀x(¬S(x)) ∧ ¬∃x(¬¬S(x))) holds. However, this formula is not in IQC.
Proof. Towards a contradiction, assume M is some structure satisfying ∀x(S(x) ∨ ¬S(x)) ∧ ¬∀x(¬S(x)) ∧ ¬∃x(¬¬S(x)). Let f ∈ ⟦∀x(S(x) ∨ ¬S(x))⟧ and let g ∈ ⟦¬∀x(¬S(x))⟧. If for every n ∈ M we have f [...], so e⌢f̃ ∈ ⟦¬¬S(x)⟧_n. Therefore n⌢e⌢f̃ ∈ ⟦∃x(¬¬S(x))⟧. So this contradicts the assumption that ¬∃x(¬¬S(x)) is satisfied in M.

To show that the formula is not in IQC, consider the following Kripke frame: [...]

What the last theorem really says is not that our approach is hopeless, but that instead of looking at intervals [B, A]_{P_M}, we should look at more general intervals. Right now we are taking the bottom element B to be the same for each i ∈ ω.
Compare this with what happens if in a Kripke model we take the domain at each point to be the same: then CD holds in the Kripke model. Proposition 5.3 should therefore not come as a surprise (although it is surprising that the full schema can be refuted). Instead, we should allow B_i to vary (subject to some constraints); roughly speaking, B_i then expresses the problem of 'showing that i exists' or 'constructing i'. This motivates the next definition. [...]

Theorem 5.6. [(B_i)_{i≥−1}, A]_{P_M} is a first-order hyperdoctrine.

Proof. First, note that the base category C is closed under n-fold products: indeed, the n-fold product of Y is just Y^n, and the projections are computable functions satisfying the extra requirement. Furthermore, if α_1, . . . , α_n : Y → Z are in C, then (α_1, . . . , α_n) : Y → Z^n is in C because, setting B_{(z_1,...,z_n)} = B_{z_1} ⊕ · · · ⊕ B_{z_n}, we have for all y ∈ Y that B_y ≥_M B_{α_1(y)} ⊕ · · · ⊕ B_{α_n(y)} = B_{(α_1,...,α_n)(y)}, with reductions uniform in y. Finally, for each α : X → Y in C we have that the map sending (C_i)_{i∈Y} to P_M(α)((C_i)_{i∈Y}) ⊕ (B_i)_{i∈X} is a Brouwer algebra homomorphism: that joins and meets are preserved follows by distributivity, that the top element is preserved follows directly from (A, A, . . .) ≥_{Mω} (B_i)_{i∈ω} ≥_{Mω} (B_{−1}, B_{−1}, . . .), and that the bottom element is preserved follows from the assumption that B_y ≥_M B_{α(y)} for all y ∈ dom(α), uniformly in y. That implication is preserved is more work: let α : X → Y. Throughout the remainder of the proof we will implicitly identify ω^n with ω and {1, . . . , m}^n with {1, . . . , m^n} through some fixed bijection ⟨a_1, . . . , a_n⟩. Now: [...] with uniform reductions. Thus, we need to verify that the product projections have adjoints; in fact, we will show that every morphism α in the base category C has adjoints. Let α : X → Y. We claim: the map (C_i)_{i∈Y} ↦ P_M(α)((C_i)_{i∈Y}) ⊕ (B_i)_{i∈X} has as an upper adjoint ∃_α and as a lower adjoint the map sending (C_i)_{i∈X} to ∀_α((B_i →_M C_i)_{i∈X}) ⊕ (B_i)_{i∈Y}, where ∃_α and ∀_α are as in Proposition 4.3.
Indeed, we have: [...] Similarly, for ∀ we have: [...] Finally, we need to verify that [(B_i)_{i≥−1}, A]_{P_M} satisfies the Beck-Chevalley condition. We have (writing α* for the image of the morphism α under the functor [(B_i)_{i≥−1}, A]_{P_M}): [...] As in the proof of Theorem 4.9 we have [...]. The opposite inequality is also almost the same as in the proof of Theorem 4.9, except that we now need to use that C_{(s(n),m)} uniformly computes an element of B_{(s(n),m)} and hence of B_m.
For the other part of the Beck-Chevalley condition we have: [...] Now, using the fact that B_{s(n)} uniformly reduces to B_n, we obtain the desired equality.
Finally, let us rephrase Lemma 4.10 for our intervals.

Lemma 5.7. For any X, the equality =_X in [(B_i)_{i≥−1}, A]_{P_M} is given by (=_X)_{(x,y)} ≡_M B_{−1} ⊕ B_x ⊕ B_y if x = y, and (=_X)_{(x,y)} ≡_M A otherwise.

Proof. From the formula given for the upper adjoint in the proof of Theorem 5.6, and the definition of =_X in a first-order hyperdoctrine in Definition 2.1.
As a final remark, note that we cannot vary A (i.e. make intervals of the form [(B i ) i≥−1 , (A i ) i≥−1 ] P M ): if we did, then to make α * into a homomorphism we would need to meet with A i . While joining with B i was fine, if we meet with A i the implication will in general not be preserved.

Heyting arithmetic in intervals of the Medvedev hyperdoctrine
In the previous section we introduced the general intervals [(B_i)_{i≥−1}, A]_{P_M}. However, it turns out that even these intervals cannot capture every theory in IQC, which we will show by looking at models of Heyting arithmetic. Our approach is based on the following classical result about computable classical models of Peano arithmetic.

Theorem 6.1. (Tennenbaum [22]) Every computable model of Peano arithmetic is the standard model.

Proof. Let A and B be computably inseparable c.e. sets, and let ϕ′(e, s) and ψ′(e, s) be ∆_0^0-formulas expressing that e is enumerated into A, respectively into B, within s steps; we write A[s] for {e | ϕ′(e, s)}. Consider the following formulas:

∀e, s∀s′ ≥ s((ϕ′(e, s) → ϕ′(e, s′)) ∧ (ψ′(e, s) → ψ′(e, s′)))
∀n, s¬(ϕ′(n, s) ∧ ψ′(n, s))
∀n, p∃a, b(b < p ∧ ap + b = n)
∀n∃m∀e < n(ϕ′(e, n) ↔ ∃a(ap_e = m))

where p_e denotes the e-th prime.
These are all provable in PA. The first formula tells us that ϕ ′ and ψ ′ are monotone in s. The second formula expresses that A and B are disjoint. The third formula says that the Euclidean algorithm holds. The last formula tells us that for every n, we can code the elements of A[n] ∩ [0, n) as a single number. We can prove this inductively, by letting m be the product of those p e such that e ∈ A[n] ∩ [0, n).
Thus, every non-standard model of Peano arithmetic also satisfies these formulas. Towards a contradiction, let M be a computable non-standard model of PA. Let n ∈ M be a non-standard element, i.e. n > k for every standard k. Let m ∈ M be such that M |= ∀e < n(ϕ′(e, n) ↔ ∃a(ap_e = m)).
If e ∈ A, then ϕ′(e, s) holds in the standard model for large enough standard s, and since M is a model of Q and ϕ′ is ∆_0^0 we see that also M |= ϕ′(e, s) for large enough standard s. By monotonicity, we therefore have M |= ϕ′(e, n). Thus, M |= ∃a(ap_e = m).
Similarly, if e ∈ B then M |= ¬∃a(ap_e = m). Let C = {e ∈ ω | M |= ∃a(ap_e = m)}; by the above, C separates A and B. However, C is also computable: because the Euclidean algorithm holds in M, we know that there exist unique a, b ∈ M with b < p_e such that ap_e + b = m. Since M is computable we can find these a and b computably. Now e is in C if and only if b = 0. This contradicts A and B being computably inseparable.
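The coding used in the proof can be sketched computationally (a minimal illustration in our own notation): a finite set is coded as the product m of the corresponding primes, and membership is decoded from m by division with remainder, exactly as in the Euclidean-algorithm step above:

```python
# Sketch of the prime coding in the proof of Tennenbaum's theorem: a finite
# set F ⊆ ω is coded as m = product of the primes p_e with e ∈ F, and
# "e ∈ F" is decoded by checking that the remainder of m modulo p_e is 0.

def nth_prime(e):
    """Return p_e, the e-th prime (p_0 = 2, p_1 = 3, ...), by trial division."""
    count, k = 0, 1
    while True:
        k += 1
        if all(k % d != 0 for d in range(2, int(k ** 0.5) + 1)):
            count += 1
            if count == e + 1:
                return k

def code(finite_set):
    """Code a finite set of naturals as the product of the corresponding primes."""
    m = 1
    for e in finite_set:
        m *= nth_prime(e)
    return m

def decode(e, m):
    """e is in the coded set iff p_e divides m, i.e. the remainder b is 0."""
    return m % nth_prime(e) == 0

m = code({0, 2, 3})  # 2 * 5 * 7 = 70
print([decode(e, m) for e in range(5)])  # -> [True, False, True, True, False]
```

In the proof, the non-standard number m plays the role of this product for the non-standard stage n, and the computability of M makes the division step effective.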
When looking at models of arithmetic, we often use the fact that fairly basic systems (like Robinson's Q) already represent the computable functions (a fact which we used in the proof of Tennenbaum's theorem above). In other words, this tells us that there is not much leeway to change the truth of ∆_1^0-statements. The next two lemmas show that in a language without any relations except equality (like arithmetic), as long as our formulas are ∆_1^0, their truth value in the Medvedev hyperdoctrine is essentially classical; in other words, there is also no leeway to make their truth non-classical.

Lemma 6.2. Let M be a structure in [(B_i)_{i≥−1}, A]_{P_M} for a language without relation symbols other than equality, and let ϕ(x_1, . . . , x_n) be a ∆_0^0-formula. Then for all a_1, . . . , a_n ∈ M we have that ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n} is equivalent to B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n} if ϕ(a_1, . . . , a_n) holds classically, and to A otherwise. Furthermore, it is decidable which of the two cases holds, and the reduction is uniform in a_1, . . . , a_n.
Proof. We prove this by induction on the structure of ϕ.
• ϕ is of the form t(x_1, . . . , x_n) = s(x_1, . . . , x_n): by Lemma 5.7 we know that ⟦t(x_1, . . . , x_n) = s(x_1, . . . , x_n)⟧_{a_1,...,a_n} is either B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n} or A, with the first holding if and only if t(a_1, . . . , a_n) = s(a_1, . . . , a_n) holds classically. Since all functions are computable and equality is true equality, it is decidable which of the two cases holds. • ϕ is of the form ψ(x_1, . . . , x_n) ∧ χ(x_1, . . . , x_n): if both ⟦ψ(x_1, . . . , x_n)⟧_{a_1,...,a_n} and ⟦χ(x_1, . . . , x_n)⟧_{a_1,...,a_n} are equivalent to B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n}, then so is ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n}; otherwise ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n} is equivalent to A. This case distinction is decidable because the induction hypothesis tells us that the truth of ψ and χ is decidable. The other cases are similar, with all the reductions uniform in a_1, . . . , a_n.

Next, we slightly extend this to Π_1^0-formulas and Σ_1^0-formulas, although at the cost of dropping the uniformity.

Lemma 6.4. Let M be as in Lemma 6.2 and let ϕ(x_1, . . . , x_n) be a Π_1^0-formula or a Σ_1^0-formula. Then for all a_1, . . . , a_n ∈ M we have that ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n} is equivalent to B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n} if ϕ(a_1, . . . , a_n) holds classically, and ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n} ≥_M A otherwise.

Proof. Let ϕ(x_1, . . . , x_n) = ∀y_1, . . . , y_m ψ(x_1, . . . , x_n, y_1, . . . , y_m) with ψ a ∆_0^0-formula. First, let us assume ϕ(a_1, . . . , a_n) holds classically. Thus, for all b_1, . . . , b_m ∈ M we know that ψ(a_1, . . . , a_n, b_1, . . . , b_m) holds classically. By Lemma 6.2 we then know that ψ(a_1, . . . , a_n, b_1, . . . , b_m) gets interpreted as B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n} ⊕ B_{b_1} ⊕ · · · ⊕ B_{b_m} (by a reduction uniform in b_1, . . . , b_m). Now note that

⟦ϕ⟧_{a_1,...,a_n} ≡_M ∀_π(((B_{a_1} ⊕ · · · ⊕ B_{a_n} ⊕ B_{b_1} ⊕ · · · ⊕ B_{b_m}) →_M ⟦ψ(x_1, . . . , x_n, y_1, . . . , y_m)⟧_{a_1,...,a_n,b_1,...,b_m})_{b_1,...,b_m ∈ ω}) ⊕ (B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n}),

where π is the appropriate projection; since the reductions above are uniform in b_1, . . . , b_m, the right-hand side is equivalent to B_{−1} ⊕ B_{a_1} ⊕ · · · ⊕ B_{a_n}, as desired. Now, let us assume ϕ(a_1, . . . , a_n) does not hold classically. Let b_1, . . . , b_m ∈ M be such that ψ(a_1, . . . , a_n, b_1, . . . , b_m) does not hold classically. By Lemma 6.2 we know that ψ(a_1, . . . , a_n, b_1, . . . , b_m) gets interpreted as A. Then it is directly checked that in fact ⟦ϕ(x_1, . . . , x_n)⟧_{a_1,...,a_n} ≥_M A, as desired. The proof for Σ_1^0-formulas ϕ is similar.
Now, we will prove an analogue of Theorem 6.1 for the Medvedev hyperdoctrine.

Proof. Our proof is inspired by the proof of Theorem 6.1 given above. Let A, B, ϕ′ and ψ′ be as in that proof. We first define a theory T′ which consists of Q together with the formulas

∀e, s ∀s′ ≥ s ((ϕ′(e, s) → ϕ′(e, s′)) ∧ (ψ′(e, s) → ψ′(e, s′)))
∀n, s ¬(ϕ′(n, s) ∧ ψ′(n, s))

Then T′ is deducible in Peano arithmetic; in particular it holds in the standard model. Note that T′ is equivalent to a Π^0_2-formula. Furthermore, note that there are computable Skolem functions (for example, take the function mapping n to the least witness). Thus, we can get rid of the existential quantifiers; for example, we can replace ∀n, p ∃a, b (b < p ∧ ap + b = n) by ∀n, p (g(n, p) < p ∧ f(n, p)p + g(n, p) = n), where f is the symbol representing the primitive recursive function sending (n, p) to n divided by p, and g is the symbol representing the primitive recursive function sending (n, p) to the remainder of the division of n by p. We can also turn Q into a Π^0_1-theory using the predecessor function. So, let T consist of a Π^0_1-formula which is equivalent to T′, together with Π^0_1 defining axioms for the finitely many computable functions we used. Then T is certainly deducible in PA, but it is also deducible in HA, because every Π^0_2-sentence which is provable in PA is also provable in HA; see e.g. Troelstra and van Dalen [23, Proposition 3.5]. Now, if T ≡_M A, we are done. We may therefore assume this is not the case. Then, by Lemma 6.4 we see that T holds classically in M. Therefore T′ also holds classically in M, and by the proof of Theorem 6.1 we see that M is classically the standard model. Therefore χ holds classically in M, so we see by Lemma 6.4 that χ ≡_M B_{-1}.
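The Skolemisation step in the proof can be illustrated concretely. Below, f and g implement the quotient and remainder functions the proof names; the Python rendering itself is ours:

```python
# The existential axiom ∀n,p ∃a,b (b < p ∧ ap + b = n) of division with
# remainder is replaced by its computable Skolem functions: a = f(n, p) is
# the quotient and b = g(n, p) the remainder. The Skolemised axiom
# ∀n,p (g(n,p) < p ∧ f(n,p)·p + g(n,p) = n) is then Π^0_1.

def f(n, p):
    return n // p  # quotient of n by p

def g(n, p):
    return n % p   # remainder of n modulo p

def skolemised_instance(n, p):
    """One universal instance of the Skolemised, now quantifier-free, matrix."""
    return g(n, p) < p and f(n, p) * p + g(n, p) == n
```

Every instance with p > 0 evaluates to True, which is exactly why the Skolemised axiom holds in the standard model.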

Decidable frames
In the last section we saw that even in our intervals [(B_i)_{i≥−1}, A]_{PM} we cannot generally obtain IQC. However, note that Heyting arithmetic, like Peano arithmetic, is undecidable. We therefore wonder: what happens if we look at decidable theories? In the classical case, we know that every decidable theory has a decidable model. The intuitionistic case was studied by Gabbay [4] and by Ishihara, Khoussainov and Nerode [8, 7], culminating in the following result.
Definition 7.1. A Kripke model is decidable if the underlying Kripke frame is computable, the universe at every node is computable, the forcing relation w ⊩ ϕ(a_1, …, a_n) is computable, and equality is decidable, i.e. ∀x, y (x = y ∨ ¬x = y) holds.

Definition 7.2. A theory is decidable if its deductive closure is computable and equality is decidable.

Our next result shows how to encode such decidable Kripke models in intervals of the Medvedev hyperdoctrine. Unfortunately, we do not know how to deal with arbitrary decidable Kripke frames; instead, we have to restrict to those without infinite chains. As we will see later in this section, this nonetheless still proves to be useful. We will use the mass problems C({f_i | i ≠ j}) ∪ D to represent the points t_j of the Kripke frame T. If T were finite, we would only have to consider a finite sub-upper semilattice of V, and by Skvortsova [20, Lemma 2] the meet-closure of this would be exactly the Brouwer algebra of upwards closed subsets of T. However, since in our case T might be infinite, we need to suitably generalise this to arbitrary 'meets'.
Let us now describe how to do this. First, we define A: if T is not a chain, we let A = {k_1⌢k_2⌢f | t_{k_1} and t_{k_2} are incomparable in T and f ∈ C({f_i | i ∉ {k_1, k_2}}) ∪ D}, and A = D otherwise. The idea behind A is that if t_{k_1} and t_{k_2} are incomparable in T, then there should be no mass problem representing a point above their representations. Now, let U be the collection of upwards closed subsets of T. We then define the map α : U → M by α(Y) = {j⌢f | t_j ∈ Y and f ∈ (C({f_i | i ≠ j}) ∪ D) ⊗ A} for non-empty Y, and α(∅) = A. Now let B_{-1} = α(T) and let B_i = α(Z_i), where Z_i is the set of nodes where i is in the domain of K. Then α : U → [B_{-1}, A] as a function; we are not yet claiming that it preserves the Brouwer algebra structure. We will prove a stronger result for a suitable sub-collection of U below. First, let us show that α is injective. Indeed, assume α(Y) ≤_M α(Z). We will show that Y ⊇ Z. By applying Lemma 7.5 below twice, we then have that for every j with t_j ∈ Z there exists a k with t_k ∈ Y such that either C({f_i | i ≠ k}) ∪ D ≤_M C({f_i | i ≠ j}) ∪ D or A ≤_M C({f_i | i ≠ j}) ∪ D. In the first case, towards a contradiction let us assume that k ≠ j. Then f_k computes an element of C({f_i | i ≠ k}) ∪ D and therefore f_k ∈ C({f_i | i ≠ k}) ∪ D, since the latter is upwards closed. However, this contradicts the fact that the f_i form an antichain in the Turing degrees. Thus, k = j and therefore t_j ∈ Y.
In the latter case, we have that C({f_i | i ∉ {k_1, k_2}}) ∪ D ≤_M C({f_i | i ≠ j}) ∪ D for some k_1, k_2 ∈ ω for which t_{k_1} and t_{k_2} are incomparable. Without loss of generality, let us assume that k_1 ≠ j. Then, reasoning as above, we see that f_{k_1} ∈ C({f_i | i ∉ {k_1, k_2}}) ∪ D, a contradiction.
For ease of notation, let us assume that the union of the universes of K is ω; the general case follows in the same way. Let M be the structure with functions as in K, and let the interpretation R(x_1, …, x_n)^{a_1,…,a_n} of a relation R be α(Y), where Y is exactly the set of nodes where R(a_1, …, a_n) holds in K.
We show that M is as desired. To this end, we claim: for every formula ϕ(x_1, …, x_n) and every sequence a_1, …, a_n, we have ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M α(Y), where Y is exactly the set of nodes where a_1, …, a_n are all in the domain and ϕ(a_1, …, a_n) holds in the Kripke model K. Furthermore, we claim that this reduction is uniform in a_1, …, a_n and in ϕ. We prove this by induction on the structure of ϕ. First, if ϕ is atomic, this follows directly from the choice of the valuations, from the fact that K is decidable and from Lemma 5.7.
Next, let us consider ϕ(x_1, …, x_n) = ψ(x_1, …, x_n) ∨ χ(x_1, …, x_n). Let U be the set of nodes where ψ(a_1, …, a_n) holds in K, and similarly let V be the set of nodes where χ(a_1, …, a_n) holds. By the induction hypothesis and by the definition of the interpretation of ∨, we have ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M α(U) ⊗ α(V). We need to show that this is equivalent to α(Y), where Y is the set of nodes where ϕ(a_1, …, a_n) holds. First, let j⌢f ∈ α(Y). Then ϕ(a_1, …, a_n) holds at t_j. Thus, by the definition of truth in Kripke frames, we know that at least one of ψ(a_1, …, a_n) and χ(a_1, …, a_n) holds at t_j, and because our frame is decidable we can compute which of them holds. So, send j⌢f to 0⌢j⌢f if ψ(a_1, …, a_n) holds, and to 1⌢j⌢f otherwise. Thus, α(U) ⊗ α(V) ≤_M α(Y). Conversely, if either ψ(a_1, …, a_n) or χ(a_1, …, a_n) holds at a node, then ϕ(a_1, …, a_n) holds there, so the functional sending i⌢j⌢f to j⌢f witnesses that α(Y) ≤_M α(U) ⊗ α(V). The proof for conjunction is similar.

Next, let us consider implication. So, let ϕ(x_1, …, x_n) = ψ(x_1, …, x_n) → χ(x_1, …, x_n). Let U be the set of nodes where ψ(a_1, …, a_n) holds in K, let V be the set of nodes where χ(a_1, …, a_n) holds, and let Y be the set of nodes where ϕ(a_1, …, a_n) holds. By the induction hypothesis, we know that ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M (α(U) →_M α(V)) ⊕ B^{(a_1,…,a_n)}. We first show that α(Y) ≥_M ϕ(x_1, …, x_n)^{a_1,…,a_n}: given an element k⌢h of α(Y) and an element j⌢g of α(U), we need to uniformly compute some m ∈ ω with t_m ∈ V together with an element of C({f_i | i ≠ m}) ∪ D, or an element of A. First, if either the first bit of h or that of g is 1, then h respectively g computes an element of A. So, we may assume this is not the case. Then there are i_1 ≠ j and i_2 ≠ k such that g ≥_T f_{i_1} and h ≥_T f_{i_2}, unless g or h lies in D. There are now two cases: if t_k and t_j are incomparable, then k⌢j⌢(h ⊕ g) ∈ A. Otherwise, compute m ∈ {k, j} such that t_m = max(t_k, t_j). Then, because t_k ∈ Y and t_j ∈ U, we know that t_m ∈ V and that h ⊕ g ∈ C({f_i | i ≠ m}) ∪ D, which is exactly what we needed. Since this is all uniform, we therefore see that α(Y) ≥_M ϕ(x_1, …, x_n)^{a_1,…,a_n}.
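The two uniform functionals in the disjunction case above are simple tagging operations. A toy sketch (streams are modelled as finite lists, and the decidability of K is modelled by a predicate we supply; all names are ours):

```python
def forward(stream, psi_holds_at):
    """Map j⌢f ∈ α(Y) to i⌢j⌢f ∈ α(U) ⊗ α(V): the new first value i
    records which disjunct holds at the node t_j (ψ if i = 0, χ if i = 1),
    computed via the decidable forcing relation of K."""
    j = stream[0]
    i = 0 if psi_holds_at(j) else 1
    return [i] + stream

def backward(stream):
    """Map i⌢j⌢f ∈ α(U) ⊗ α(V) back to j⌢f ∈ α(Y): whichever disjunct
    holds at t_j, the disjunction holds there too, so the tag is dropped."""
    return stream[1:]
```

For example, with psi_holds_at testing oddness of the node index, forward([3, 5, 7], ...) tags the stream with 0, and backward undoes the tagging.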
Conversely, take any element h of ϕ(x_1, …, x_n)^{a_1,…,a_n}. We need to compute an element of α(Y). Let Z be the collection of nodes where a_1, …, a_n are all in the domain. Then h computes some element h̄ ∈ α(Z), as follows from the definition of B^{(a_1,…,a_n)} and the fact that we have already proven the claim for conjunctions, applied to x_1 = x_1 ∧ ⋯ ∧ x_n = x_n^{a_1,…,a_n}. If the second bit of h̄ is 1, then h̄ computes an element of A and therefore also computes an element of α(Y). So, we may assume it is 0. Let k = h̄(0). First, compute whether ϕ(a_1, …, a_n) holds in K at the node t_k; if so, we know that h̄ ∈ α(Y), so we are done. Otherwise, there must be a node t_{k̄} (above t_k) such that t_{k̄} ∈ U but t_{k̄} ∉ V.
Let σ be the least string such that Φ_e(g ⊕ k̄⌢0⌢σ)(0)↓ and Φ_e(g ⊕ k̄⌢0⌢σ)(1)↓, and let m = Φ_e(g ⊕ k̄⌢0⌢σ)(0) (such a σ must exist, since there is some initial segment of k̄⌢0⌢f_{k̄+1} ∈ α(U) on which this must halt, by the choice of g and e). Then we see, by the choice of g and e, that t_m ∈ V. In fact, since the value at 1 has also already been decided by the choice of σ, we even get that either the computed element is flagged as lying in A, or it is flagged as lying in C({f_i | i ≠ m}) ∪ D. In the first case, we are clearly done. Otherwise, we claim: g ⊕ h̄ ∈ C({f_i | i ≠ m}) ∪ D. We distinguish several cases: • If h̄ ∈ D, then g ⊕ h̄ ≥_T h̄ ∈ D and D is upwards closed.
• Otherwise, h̄ ≥_T f_i for some i ≠ k. If i ≠ k̄, then we have just seen that g ⊕ h̄ computes an element of C({f_i | i ≠ m}) ∪ D. Since the latter is upwards closed, we see that g ⊕ h̄ ∈ C({f_i | i ≠ m}) ∪ D. • If h̄ ≥_T f_{k̄}, then g ⊕ h̄ ≥_T h̄ ∈ C({f_i | i ≠ m}): after all, t_m ∈ V while t_{k̄} ∉ V, so k̄ ≠ m. Since t_m ∈ V ⊆ Y, this yields an element of α(Y), as desired.
Next, let us consider the universal quantifier, so let ϕ(x_1, …, x_n) = ∀y ψ(x_1, …, x_n, y). Let Z_b be the set of nodes where a_1, …, a_n and b are all in the domain, let U_b be the set of nodes where ψ(a_1, …, a_n, b) holds in K, and let Z be the set of nodes where a_1, …, a_n are all in the domain. By the definition of the interpretation of the universal quantifier and the induction hypothesis, we know that

ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M ⨆_{b∈ω} (B_b →_M ψ(x_1, …, x_n, y)^{a_1,…,a_n,b}) ⊕ B^{(a_1,…,a_n)}.

Then we get, in the same way as above,

ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M ⨆_{b∈ω} (α(Z_b) →_M α(U_b)) ⊕ α(Z).

Finally, let us introduce new predicates R_b(x_1, …, x_n), which are defined to hold in K if ψ(x_1, …, x_n, b) holds in K, and let us introduce new nullary predicates S_b which are defined to hold when all of a_1, …, a_n and b are in the domain. Then, applying the fact that we have already proven the claim for implications to S_b → R_b^{a_1,…,a_n}, we get

ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M ⨆_{b∈ω} α(Z_b → U_b) ⊕ α(Z).

We now claim that this is equivalent to α(Y). We have Y ⊆ (Z_b → U_b) ∩ Z for every b ∈ ω by the definition of truth in Kripke frames, which suffices to prove that α(Y) ≥_M ⨆_{b∈ω} α(Z_b → U_b) ⊕ α(Z). Conversely, take an element h of ⨆_{b∈ω} α(Z_b → U_b) ⊕ α(Z), say with columns g_b ∈ α(Z_b → U_b) for b ∈ ω and with α(Z)-part g_0. We show how to compute an element of α(Y) from this. If the second bit of g_0 is 1, then h computes an element of A; thus, assume it is 0. Let m_0 = g_0(0). First, compute whether ϕ(a_1, …, a_n) holds in K at the node t_{m_0}; if so, we know that g_0 ∈ α(Y), so we are done. Therefore, we may assume this is not the case. So, we can compute a b_1 ∈ ω such that t_{m_0} ∉ Z_{b_1} → U_{b_1}, by the definition of truth in Kripke frames. Now consider g_{b_1}. If the second bit of g_{b_1} is 1, then g_{b_1} computes an element of A, so we are done. Otherwise, let m_1 = g_{b_1}(0). Then t_{m_1} ∈ Z_{b_1} → U_{b_1} and g_{b_1} ∈ C({f_i | i ≠ m_1}) ∪ D. Then t_{m_1} ≰ t_{m_0}, because t_{m_0} ∉ Z_{b_1} → U_{b_1}. If t_{m_1} is incomparable with t_{m_0}, then m_0⌢m_1⌢(g_{b_1} ⊕ h) ∈ A, so we are done. Thus, the only remaining case is when t_{m_1} > t_{m_0}.
Iterating this argument, if it does not terminate after finitely many steps, we obtain a sequence t_{m_0} < t_{m_1} < t_{m_2} < ⋯. However, we assumed that our Kripke frame does not contain any infinite ascending chains, so the algorithm has to terminate after finitely many steps, yielding an element of α(Y). We note that this is the only place in the proof where we use the assumption about infinite ascending chains.
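The termination argument can be phrased abstractly: each round of the iteration either produces the desired element or moves strictly upwards in the frame, so in a frame without infinite ascending chains it must stop. A schematic sketch (the step function and the order are placeholders for the construction in the proof, not part of it):

```python
def iterate_until_done(start, step, strictly_above):
    """Repeat `step` from `start`. step(m) returns either ('done', result)
    or ('up', m2) with m2 strictly above m; the absence of infinite
    ascending chains guarantees that this loop terminates."""
    m = start
    while True:
        tag, value = step(m)
        if tag == 'done':
            return value
        assert strictly_above(value, m), "step must move strictly upwards"
        m = value
```

For instance, on a finite chain the iteration reaches a node where the step succeeds after at most the height of the chain many rounds.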
Finally, we consider the existential quantifier. To this end, let ϕ(x_1, …, x_n) = ∃y ψ(x_1, …, x_n, y). Let U_b and Z be as for the universal quantifier, and let Y_b be the set of nodes where all of a_1, …, a_n and b are in the domain and ψ(a_1, …, a_n, b) holds. Then the induction hypothesis tells us that ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M {b⌢α(Y_b) | b ∈ ω}. Since Y_b ⊆ Y for every b ∈ ω, the functional sending b⌢g to g witnesses that α(Y) ≤_M {b⌢α(Y_b) | b ∈ ω}. Conversely, let j⌢f ∈ α(Y). Then f ∈ (C({f_i | i ≠ j}) ∪ D) ⊗ A and t_j ∈ Y. Thus, there is some b ∈ ω such that ψ(a_1, …, a_n, b) holds at t_j, and therefore by the induction hypothesis j⌢f ∈ α(Y_b). Furthermore, since K is decidable we can compute such a b. Thus, α(Y) ≥_M {b⌢α(Y_b) | b ∈ ω}, which completes the proof of the claim.
Thus, by the claim we have, for any sentence ϕ, that ϕ ≡_M α(Y), where Y is the set of nodes where ϕ holds in the Kripke model K. For the second part of the theorem, note that we only used the assumption about infinite ascending chains in the part of the proof dealing with the universal quantifier.
Lemma 7.5. Let C ⊆ ω^ω be non-empty and upwards closed under Turing reducibility, let E_i ⊆ ω^ω and let {i⌢E_i | i ∈ ω} ≤_M C. Then there is an i ∈ ω such that E_i ≤_M C.

Proof. Let Φ_e(C) ⊆ {i⌢E_i | i ∈ ω}. Let σ be the least string such that Φ_e(σ)(0)↓. Such a string must exist, because C is non-empty. Let i = Φ_e(σ)(0). Then, for every g ∈ C we have σ⌢g ∈ C, because C is upwards closed, and Φ_e(σ⌢g) is an element of {i⌢E_i | i ∈ ω} whose first value is i; so the functional sending g to Φ_e(σ⌢g) with its first value removed witnesses E_i ≤_M C.

Our proof relativises if our language does not contain function symbols, which gives us the following result. Furthermore, if we allow infinite ascending chains, then this still holds for the fragment of the theories without universal quantifiers.
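The compactness argument in the proof of Lemma 7.5 amounts to a search over finite strings for a shortest one on which the functional converges; once it is found, the first output value is fixed for every oracle extending it. A toy sketch (the functional is modelled as a monotone partial function on strings; all names are ours):

```python
from itertools import product

def least_halting_string(phi, alphabet=(0, 1), max_len=10):
    """Search, in order of length, for a shortest string σ with
    phi(σ) is not None (modelling Φ_e(σ)(0)↓). phi is assumed monotone:
    once it converges on σ, it converges, with the same value, on every
    extension of σ, so the value found here is fixed for all of C."""
    for n in range(max_len + 1):
        for sigma in product(alphabet, repeat=n):
            if phi(list(sigma)) is not None:
                return list(sigma)
    return None  # no convergence found up to max_len
```

For instance, a functional that must read two oracle bits before answering converges first on the string [0, 0].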
Proof. Let h be such that K is h-decidable. We relativise the construction in the proof of Theorem 7.4 to h. We let all definitions be as in that proof, except where mentioned otherwise. This time we let the f_i form an antichain over h, i.e. for all i ≠ j we have f_i ⊕ h ≱_T f_j. We change the definition of D into {g | ∃i(g ≤_T f_i ⊕ h)}. We let A be as before, with this new D, if T is not a chain, and we let A = D ⊕ {h} otherwise. We let β(Y) = α(Y) ⊕ {h} for all Y ∈ U. Then β is still injective. Indeed, let us assume β(Y) ≤_M β(Z); we will show that Y ⊇ Z. By applying Lemma 7.7 below we see that for every j with t_j ∈ Z there exists a k with t_k ∈ Y such that either C({f_i | i ≠ k}) ∪ D ≤_M (C({f_i | i ≠ j}) ∪ D) ⊕ {h} or A ≤_M (C({f_i | i ≠ j}) ∪ D) ⊕ {h}. If the first holds, let us assume that k ≠ j; we will derive a contradiction from this. Then f_k ⊕ h computes an element of C({f_i | i ≠ k}) ∪ D, and therefore f_k ⊕ h ∈ C({f_i | i ≠ k}) ∪ D, since this set is upwards closed. However, we know that the f_i form an antichain over h in the Turing degrees, which is a contradiction. So, k = j and therefore t_j ∈ Y.
In the second case, we have that C({f_i | i ∉ {k_1, k_2}}) ∪ D ≤_M (C({f_i | i ≠ j}) ∪ D) ⊕ {h} for some k_1, k_2 ∈ ω for which t_{k_1} and t_{k_2} are incomparable. Without loss of generality, we may assume that k_1 ≠ j. Then, in the same way as above, we see that f_{k_1} ⊕ h ∈ C({f_i | i ∉ {k_1, k_2}}) ∪ D, which is again a contradiction. We let B_{-1} = β(T) and we let B_i = β(Z_i), where Z_i is the set of nodes where i is in the domain of K. We claim: for every formula ϕ(x_1, …, x_n) and every sequence a_1, …, a_n, ϕ(x_1, …, x_n)^{a_1,…,a_n} ≡_M β(Y), where Y is exactly the set of nodes where a_1, …, a_n are all in the domain and ϕ(a_1, …, a_n) holds in the Kripke model K. The proof is the same as before, except that this time we use that all mass problems we deal with are above B_{-1} = α(T) ⊕ {h} and hence uniformly compute h. Thus, we can still decide all the properties of K which we need during the proof.
Lemma 7.7. Let C ⊆ ω^ω be non-empty and upwards closed under Turing reducibility, let E_i ⊆ ω^ω, let h ∈ ω^ω and let {i⌢E_i | i ∈ ω} ≤_M C ⊕ {h}. Then there is an i ∈ ω such that E_i ≤_M C ⊕ {h}.
Proof. Let Φ_e(C ⊕ {h}) ⊆ {i⌢E_i | i ∈ ω}. Let σ be the least string such that Φ_e(σ ⊕ h)(0)↓. Such a string must exist, because C is non-empty. Let i = Φ_e(σ ⊕ h)(0). Then, exactly as in the proof of Lemma 7.5, the functional sending g ⊕ h to Φ_e((σ⌢g) ⊕ h) with its first value removed witnesses E_i ≤_M C ⊕ {h}.

We will now use Theorem 7.6 to show that we can refute the formulas discussed in Section 5.

Proof. In the proof of Proposition 5.4 we showed that there is a finite Kripke frame refuting the given formula. So, the claim follows from Theorem 7.6.
Thus, moving to the more general intervals [(B_i)_{i≥−1}, A]_{PM} did allow us to refute more formulas. Let us next note that Theorem 6.5 really depends on the fact that we chose the language of arithmetic to contain function symbols.

Proof. Let K be a classical model refuting T → χ, which can be seen as a Kripke model on a frame consisting of one point. Now apply Theorem 7.6.
Finally, let us consider the schema ∀x ¬¬ϕ(x) → ¬¬∀x ϕ(x), called the Double Negation Shift (DNS). It is known that this schema characterises exactly the Kripke frames in which every node is below a maximal node (see Gabbay [5]); in particular, it holds in every Kripke frame without infinite chains. We will show that we can refute it in an interval of the Medvedev hyperdoctrine, even though Theorem 7.6 does not apply.

Proof. We let K be the Kripke model based on the Kripke frame (ω, <), where n is in the domain at m if and only if m ≥ n, and R(n) holds at m if and only if m > n. Let everything be as in the proof of Theorem 7.4, except that we change the definition of A, where by X being infinite we mean that the subset X ⊆ ω represented by X is infinite. We claim: α is still injective under this modified definition of A. Indeed, assume that A ≤_M C({f_i | i ≠ j}) ∪ D, say through Φ_e; we need to show that this still yields a contradiction. Let σ be the least string such that the right half of Φ_e(σ) has a 1 at a position different from j, say at position k; such a σ must exist, since Φ_e(f_{j+1}) ∈ A. Then Φ_e(σ⌢f_k) ∈ C({f_i | i ≠ k}) ∪ D, which is a contradiction. All the other parts of the proof of Theorem 7.4 now go through as long as we look at formulas not containing existential quantifiers. Since ∀x ¬¬R(x) is intuitionistically equivalent to ¬∃x ¬R(x), we therefore see that ∀x ¬¬R(x) ≡_M B_{-1}.
We claim: ¬∀x R(x) ≡_M B_{-1}, which is enough to prove the proposition. Note that ∀x R(x) ≡_M B_{-1} ⊕ ⨆_{m∈ω}(B_m →_M B_{m+1}). By introducing new predicates S_m, which hold if and only if m is in the domain, and looking at S_m → S_{m+1}, we therefore get that ∀x R(x) ≡_M ⨆_{m∈ω} B_{m+1}. We claim that from every element g ∈ ⨆_{m∈ω} B_{m+1} we can uniformly compute an element of A. In fact, we show how to uniformly compute from g a sequence k_0 < k_1 < ⋯ such that g ∈ C({f_i | i ≠ k_j}) ∪ D for every j ∈ ω; then, if we let X = {k_j | j ∈ ω}, we have g ⊕ X ∈ (C({f_i | i ∉ X}) ∪ D) ⊕ X ⊆ A. For ease of notation, let k_{-1} = 0. We show how to compute k_{i+1} if k_i is given. There are two possibilities: • The second bit of g^{[k_i]} is 0: take k_{i+1} to be the first value of g^{[k_i]}; then k_{i+1} > k_i by the definition of B_{k_i+1}. • The second bit of g^{[k_i]} is 1: then g^{[k_i]} computes an element of A, and therefore computes infinitely many j such that g^{[k_i]} ∈ C({f_i | i ≠ j}) ∪ D, so take k_{i+1} to be such a j which is greater than k_i.
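The two-case loop just described can be sketched schematically; here a column g^{[k]} is modelled as a triple (first value, second bit, payload), and the extraction of a large admissible node from an A-element is a placeholder helper (the data layout and names are ours):

```python
def next_node(column, k, large_node_from_A):
    """One step of the loop: if the second bit is 0, the first value of the
    column directly names a node above k; if it is 1, the payload computes
    an element of A, from which large_node_from_A reads off some admissible
    node greater than k."""
    first, second, payload = column
    if second == 0:
        return first  # the definition of B_{k+1} forces first > k
    return large_node_from_A(payload, k)

def increasing_sequence(columns, large_node_from_A, steps):
    """Iterate next_node starting from k_{-1} = 0, collecting k_0 < k_1 < ..."""
    k, out = 0, []
    for _ in range(steps):
        k = next_node(columns[k], k, large_node_from_A)
        out.append(k)
    return out
```

On a toy family of columns that always take the first case, the loop simply chases the named nodes upwards.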
We do not know how to combine the proof of the last proposition with the proofs of Theorems 7.4 and 7.6, because it makes essential use of the fact that the formula is refuted in a model on a frame which is a chain, and of the fact that the subformulas containing universal quantifiers hold either everywhere or nowhere in this model. So, we have solved part of the following question, but the definitive answer is still open.