1 Introduction

A Heyting algebra is said to be regular if it is generated by its subset of regular elements, i.e. elements x which are identical to their double negation \(\lnot \lnot x\). In this article we investigate regular Heyting algebras from the viewpoint of Esakia duality and we establish several connections to intermediate logics, inquisitive logic, dependence logic and \(\texttt{DNA}\)-logic [8, 16, 35].

Regular Heyting algebras have recently come to attention for their role in the algebraic semantics of inquisitive logic [7] and, more generally, of so-called \(\texttt{DNA}\)-logics [8, 30]. \(\texttt{DNA}\)-logics (where \(\texttt{DNA}\) stands for double negation on atoms) are an interesting generalisation of inquisitive logic which arises when considering translations of intermediate logics under the double negation map. More precisely, \(\texttt{DNA}\)-logics are those sets of formulas \(L^\lnot \) which contain a formula \(\phi \) whenever the intermediate logic L contains \(\phi [{\overline{\lnot p}}/{\overline{p}}]\), namely the formula obtained by simultaneously replacing every variable p by \(\lnot p\). Such logics were originally introduced in [27] and it was shown already in [11] that inquisitive logic is a paradigmatic example of them.

Regular Heyting algebras also play a role in dependence logic. The connection between the team semantics of dependence logic and Heyting algebras was originally pointed out in [1, Section 3] and it was later proved in [31] that suitable expansions of regular Heyting algebras provide an algebraic semantics for propositional dependence logic. It was shown in [28] that such algebraic semantics, for inquisitive, dependence and \(\texttt{DNA}\)-logics alike, are unique in the sense provided by a suitable notion of algebraizability for non-standard logics.

In this article we supplement the previous work on the subject by investigating regular Heyting algebras from the perspective of duality theory. Firstly, in Section 2, we review Esakia duality and the previous results on inquisitive and \(\texttt{DNA}\)-logics. In Section 3 we provide detailed proofs for several folklore results on the relation between regular clopen upsets and the Stone subspace of maximal elements of an Esakia space.

In Section 4 we consider at length the main question of this article and we provide two characterisations of (finite) Esakia spaces dual to (finite) regular Heyting algebras. In Section 4.1 we give a first characterisation of finite regular Esakia spaces in terms of p-morphisms, while in Section 4.2 we provide a necessary condition for an arbitrary Esakia space to be regular based on suitable equivalence relations, and we also give an alternative description of finite regular Esakia spaces. These characterisations allow us to consider a problem originally posed to us by Nick Bezhanishvili in a personal communication: how many varieties of Heyting algebras are generated by regular Heyting algebras? In Section 5 we answer this question by showing that there are continuum-many such varieties, complementing the result from [8] showing that the sublattice of regularly generated varieties extending \(\texttt{ML}\) is dually isomorphic to \(\omega +1\).

Finally, in Section 6, we apply the previous results to the context of \(\texttt{DNA}\)-logics, inquisitive logic and dependence logic, and we provide a topological semantics for these logical systems. We conclude the paper in Section 7 by highlighting some possible directions for further research.

2 Preliminaries

We recall in this section the preliminary notions needed later in the paper. We review the algebraic semantics of intermediate and \(\texttt{DNA}\)-logics, the Esakia duality between Heyting algebras and Esakia spaces, and fix some notational conventions used throughout the paper. We refer the reader to [9, 10, 19, 25] for a detailed presentation of these notions and results.

2.1 Orders, lattices, Heyting algebras

For \((P,\le )\) a partial order and \(Q\subseteq P\) we indicate with \(Q^\uparrow \) and \(Q^\downarrow \) the upset and downset generated by Q respectively, that is

$$\begin{aligned} Q^\uparrow = \{ p\in P \mid \exists q \in Q. q \le p \} \qquad Q^\downarrow = \{ p\in P \mid \exists q \in Q.q \ge p \}. \end{aligned}$$

For \(p\in P\), we write \(p^\uparrow \) and \(p^\downarrow \) for the sets \(\{p\}^\uparrow \) and \(\{p\}^\downarrow \) respectively. We call a set Q such that \(Q = Q^{\uparrow }\) an upset, and similarly we call a set R such that \(R = R^{\downarrow }\) a downset. Given a finite poset P, we define the depth \(\textsf{depth}(p)\) of an element \(p\in P\) as the maximum size of a chain in \(p^\uparrow \setminus \{p\}\). We define \(\textsf{depth}(P){:}{=}\text {sup}\{\textsf{depth}(p)+1 \mid p\in P \}\) and \(\textsf{width}(P)\) as the maximum size of an antichain in P.
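Since all of these notions are effectively computable on finite posets, the following small Python sketch may help to fix ideas; the four-element example poset and the helper functions are of our own choosing, and the brute-force computations are adequate only for such tiny examples.

```python
# Illustrative sketch: basic order-theoretic notions on a hypothetical finite poset.
from itertools import combinations

P = ["a", "b", "c", "d"]                                   # a < b, a < c, b < d, c < d
pairs = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("a", "d")} | {(x, x) for x in P}
leq = lambda x, y: (x, y) in pairs

def up(Q):   return {p for p in P if any(leq(q, p) for q in Q)}       # Q^uparrow
def down(Q): return {p for p in P if any(leq(p, q) for q in Q)}       # Q^downarrow

def chains_in(S):
    S = list(S)
    return [c for r in range(1, len(S) + 1) for c in combinations(S, r)
            if all(leq(x, y) or leq(y, x) for x, y in combinations(c, 2))]

def depth_of(p):                    # maximum size of a chain in p^uparrow \ {p}
    return max((len(c) for c in chains_in(up({p}) - {p})), default=0)

def width(S):                       # maximum size of an antichain, by brute force
    antichain = lambda A: all(not leq(x, y) and not leq(y, x)
                              for x, y in combinations(A, 2))
    return max(len(A) for r in range(1, len(S) + 1)
               for A in combinations(S, r) if antichain(A))

print(up({"a"}), down({"d"}), depth_of("a"), width(P))     # depth("a") = 2, width = 2
```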

A Heyting algebra is a structure \((H,\wedge ,\vee ,\rightarrow ,1,0)\) where \((H,\wedge ,\vee ,1,0)\) is a bounded distributive lattice and \(\rightarrow \) is a binary operation on H such that for every \(a,b,c\in H\) we have \(a \le b\rightarrow c\) if and only if \(a \wedge b \le c\). Henceforth, we will write H to indicate a Heyting algebra (i.e., omitting the signature) for brevity. We use the symbol \(\textsf{HA}\) to indicate the class of all Heyting algebras. The class \(\textsf{HA}\) of Heyting algebras is equationally defined, that is, a variety. With a slight abuse of notation, we also use the notation \(\textsf{HA}\) to indicate the category of Heyting algebras, whose objects are Heyting algebras and whose arrows are algebra homomorphisms.

A Boolean algebra B is a Heyting algebra satisfying the equation \(x=\lnot \lnot x\) for all \(x\in B\). We write \(\textsf{BA}\) for both the class and the category of Boolean algebras. For any Heyting algebra H, we say that \(x\in H\) is regular if \(x=\lnot \lnot x\) and we let \(H_\lnot {:}{=}\{x\in H \mid x=\lnot \lnot x \}\). One can verify that \(H_\lnot \) is a subalgebra of H with respect to its \(\{\wedge ,\rightarrow ,0,1 \}\)-reduct and that it forms a Boolean algebra with join \(x\dot{\vee }y{:}{=} \lnot (\lnot x\wedge \lnot y) \). We say that a Heyting algebra H is regular, or regularly generated, if \(H=\langle H_\lnot \rangle \), where \( \langle H_\lnot \rangle \) refers to the subalgebra of H generated by \(H_\lnot \).
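As a small illustration, consider the three-element chain \(0<a<1\), viewed as a Heyting algebra. Then

$$\begin{aligned} \lnot a = a\rightarrow 0 = 0, \qquad \lnot \lnot a = \lnot 0 = 1 \ne a, \end{aligned}$$

so \(H_\lnot =\{0,1\}\). Since \(\{0,1\}\) is already closed under the Heyting operations, \(\langle H_\lnot \rangle =\{0,1\}\subsetneq H\) and this algebra is not regular; by contrast, every Boolean algebra is trivially regular, since all of its elements are regular.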

We also recall that varieties are exactly those classes of algebras which are closed under subalgebras \({\mathbb {S}}\), products \({\mathbb {P}}\) and homomorphic images \({\mathbb {H}}\). We write \({\mathbb {V}}({\mathcal {C}})\) for the smallest variety containing a class of algebras \({\mathcal {C}}\).

2.2 Esakia duality

We recall the Esakia duality between Heyting algebras and Esakia spaces. We refer the reader to [19] for more details on Esakia spaces and Esakia duality.

Given a topological space \((X,\tau )\) we write \({\mathcal {C}}(X)\) for its collection of clopen subsets, i.e. subsets \(U\subseteq X\) which are both open and closed in the \(\tau \)-topology. For ease of reading, in the remainder of the paper we will omit the reference to the topology \(\tau \) and simply write X to indicate a topological space. If such notation is needed for a given space X, we then write \(\tau _X\) for the collection of its open sets.

Recall that a topological space is totally disconnected if its only connected components are singletons. A Stone space is a compact, Hausdorff and totally disconnected space. Stone duality states that the category of Stone spaces with continuous maps is dually equivalent to the category of Boolean algebras with homomorphisms.

Esakia duality provides an analogue of this result for Heyting algebras. We define Esakia spaces as follows.

Definition 2.1

(Esakia Space). Let \(\mathfrak {E}=(X,\le )\) consist of a topological space X and a partial order \(\le \) over X. We say that \(\mathfrak {E}\) is an Esakia Space if:

  1. (i)

    X is a compact space;

  2. (ii)

    For all \(x,y\in \mathfrak {E}\) such that \(x\nleq y\), there is a clopen upset U such that \(x\in U\) and \(y\notin U\);

  3. (iii)

    If U is a clopen set, then also \(U^\downarrow \) is clopen.

Condition (ii) in the definition above is called the Priestley separation axiom. Spaces satisfying conditions (i) and (ii) are called Priestley spaces [29, Section 11], hence every Esakia space is also a Priestley space. Moreover, it can also be verified that every Esakia space is a Stone space. We write \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) for the set of clopen upsets over \(\mathfrak {E}\).

We write \(\textsf{Esa}\) to indicate the class of Esakia spaces. In analogy with \(\textsf{HA}\), we can see \(\textsf{Esa}\) as a category whose objects are Esakia spaces. A morphism between Esakia spaces is a map that preserves the topological structure, the order-theoretical structure and the relation between the two.

Definition 2.2

(p-morphism). Given Esakia spaces \(\mathfrak {E} = (X, \le )\) and \(\mathfrak {E'} = {(X',\le )}\), a p-morphism \(f: \mathfrak {E} \rightarrow \mathfrak {E'}\) is a continuous map such that:

  1. (i)

    For all \(x,y \in \mathfrak {E}\), if \(x \le y\) then \(f(x) \le f(y)\);

  2. (ii)

    For all \(x\in \mathfrak {E}\) and \(y' \in \mathfrak {E'}\) such that \(f(x) \le y'\), there exists \(y \in \mathfrak {E}\) such that \(x \le y\) and \(f(y) = y'\).

The continuity of the map ensures that the preimage of a clopen set of \(\mathfrak {E'}\) is again a clopen set of \(\mathfrak {E}\). Additionally, condition (i) ensures that the preimages of upsets (downsets) of \(\mathfrak {E'}\) are again upsets (downsets) of \(\mathfrak {E}\). We write \(f:\mathfrak {E}\twoheadrightarrow \mathfrak {E}'\) when f is a surjective p-morphism from \(\mathfrak {E}\) to \(\mathfrak {E}'\).

Esakia spaces allow us to provide a duality for Heyting algebras, in the same spirit as the Stone duality for Boolean algebras or the Priestley duality for bounded distributive lattices. Since they will play a major role in the rest of the paper, we recall the functors underlying this duality.

Given a Heyting algebra H, a proper subset \(F\subsetneq H\) is a prime filter if it is a filter and, whenever \(x\vee y\in F\), then \(x\in F\) or \(y\in F\). Let \(X_H={{\mathcal {P}}}{{\mathcal {F}}}(H)\) be the set of all prime filters over H; we can endow \(X_H\) with a topology \(\tau _H\), having as subbasis the following family of sets:

$$\begin{aligned} \{ \phi (a) \,|\, a\in H \} \cup \{ \phi (a)^c \,| \, a\in H \} \end{aligned}$$

where \(\phi (a) = \{ F\in X_H \,|\, a\in F \}\) and where \(\phi (a)^c\) denotes the complement of \(\phi (a)\) in \(X_H\). Moreover, if we consider the standard inclusion order \(\subseteq \) between prime filters, the ordered space \(\mathfrak {E}_H = (X_H, \tau _H, \subseteq )\) so obtained is an Esakia space: we call this the Esakia dual of H.

On the other hand, if \(\mathfrak {E}\) is an Esakia Space we can define the Heyting algebra \(H_{\mathfrak {E}}\) over the set \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) of clopen upsets of \(\mathfrak {E}\):

$$\begin{aligned} \begin{array}{lll} U \wedge V = U \cap V&U \vee V = U \cup V&U \rightarrow V = ((U\setminus V)^\downarrow )^c \end{array} \end{aligned}$$

where \(U^c\) denotes the complement of U in \(\mathfrak {E}\). We shall also write \(\overline{U}\) for \((U^{\downarrow })^c\), namely for the pseudocomplement of U in \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\). The algebra \(H_{\mathfrak {E}}\) is a Heyting algebra, which we call the Esakia dual of \(\mathfrak {E}\). Esakia proved that these two maps are functorial and describe a dual equivalence between \(\textsf{HA}\) and \(\textsf{Esa}\), in particular the following holds with respect to objects.

Theorem 2.3

(Esakia). For every Heyting algebra H, we have \(H\cong H_{\mathfrak {E}_H}\). For every Esakia Space \(\mathfrak {E}\), we have \(\mathfrak {E}\cong \mathfrak {E}_{H_{\mathfrak {E}}}\).

At the level of arrows, we have the following correspondence:

  • Given a homomorphism \(f: H \rightarrow H'\) between two Heyting algebras, we define the p-morphism \({\hat{f}}: \mathfrak {E}_{H'} \rightarrow \mathfrak {E}_{H}\) by \({\hat{f}}(x) = f^{-1}[x]\);

  • Given a p-morphism \(g: \mathfrak {E} \rightarrow \mathfrak {E'}\) between two Esakia spaces, we define the homomorphism \({\hat{g}}: H_{\mathfrak {E'}} \rightarrow H_{\mathfrak {E}}\) by \({\hat{g}}(U) = g^{-1}[U]\).

These mappings, together with the ones presented above, provide a full duality between the categories \(\textsf{HA}\) and \(\textsf{Esa}\). We indicate with \({{\mathcal {P}}}{{\mathcal {F}}}: \textsf{HA}\rightarrow \textsf{Esa}\) and \({{\mathcal {C}}}{{\mathcal {U}}}: \textsf{Esa}\rightarrow \textsf{HA}\) the corresponding functors.

When restricted to the finite setting, Esakia duality delivers a dual equivalence between finite Heyting algebras and finite Esakia spaces. Since an Esakia space \(\mathfrak {E}\) is a Stone space, in the finite case its topology is discrete. This allows us to study finite Esakia spaces only in terms of their order-theoretic structure and to treat them simply as finite partial orders.
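In particular, in the finite case the dual algebra is directly computable: the clopen upsets are simply all upsets, with the operations displayed above. The following Python sketch, over a hypothetical three-element "fork" and with helper names of our own choosing, builds this algebra and checks the residuation law defining the implication.

```python
# Illustrative sketch: the dual Heyting algebra of a finite poset is its set of upsets.
from itertools import combinations

E = ["r", "a", "b"]                                        # hypothetical fork: r < a, r < b
leq = lambda x, y: x == y or (x == "r" and y in ("a", "b"))
TOP = frozenset(E)

def upsets():
    return [frozenset(U) for n in range(len(E) + 1) for U in combinations(E, n)
            if all(y in U for x in U for y in E if leq(x, y))]

def down(U): return frozenset(x for x in E if any(leq(x, y) for y in U))
def imp(U, V): return TOP - down(U - V)                    # U -> V = ((U \ V)^down)^c
def neg(U): return imp(U, frozenset())                     # pseudocomplement of U

H = upsets()
print(len(H))                                              # 5 upsets
# Residuation: W <= U -> V  iff  W meet U <= V, for all clopen upsets U, V, W.
assert all((W <= imp(U, V)) == (W & U <= V) for U in H for V in H for W in H)
```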

2.3 Semantics for intermediate logics

Heyting algebras and Esakia Spaces are closely connected to intermediate logics, namely those logics which lie between intuitionistic and classical propositional logic. Let \(\texttt{AT}\) be a set of atomic variables and consider the set of formulas \({\mathcal {L}}_{\texttt{IPC}}\) generated by the following grammar:

$$\begin{aligned} \phi \;{:}{:}{=}\; p \mid \bot \mid \top \mid (\phi \wedge \phi ) \mid (\phi \vee \phi ) \mid (\phi \rightarrow \phi ) \end{aligned}$$

where \(p\in \texttt{AT}\) and where, as customary, \(\lnot \phi \) abbreviates \(\phi \rightarrow \bot \). We write \(\texttt{IPC}\) for intuitionistic logic and \(\texttt{CPC}\) for classical propositional logic. There is a standard way to interpret these formulas on Heyting algebras – see e.g. [10, Section 7.3]. Given a Heyting algebra H and a map \(\mu : \texttt{AT}\rightarrow H\) (also called a valuation), we can interpret formulas of \({\mathcal {L}}_{\texttt{IPC}}\) on H inductively as follows:

$$\begin{aligned} \begin{array}{llll} \llbracket p \rrbracket ^{H,\mu } &{}= \mu (p) &{}\llbracket \bot \rrbracket ^{H,\mu } &{}= 0 \\ \llbracket \top \rrbracket ^{H,\mu } &{}= 1 &{}\llbracket \phi \wedge \psi \rrbracket ^{H,\mu } &{}= \llbracket \phi \rrbracket ^{H,\mu } \wedge \llbracket \psi \rrbracket ^{H,\mu }\\ \llbracket \phi \rightarrow \psi \rrbracket ^{H,\mu } &{}= \llbracket \phi \rrbracket ^{H,\mu } \rightarrow \llbracket \psi \rrbracket ^{H,\mu } &{}\llbracket \phi \vee \psi \rrbracket ^{H,\mu } &{}= \llbracket \phi \rrbracket ^{H,\mu } \vee \llbracket \psi \rrbracket ^{H,\mu }. \end{array} \end{aligned}$$

Given a Heyting algebra H, we say that a formula \(\phi \) is valid on H (in symbols \(H \vDash \phi \)) if for every valuation \(\mu \) we have \(\llbracket \phi \rrbracket ^{H,\mu } = 1\). Given a class of Heyting algebras \({\mathcal {C}}\), we say that \(\phi \) is valid on \({\mathcal {C}}\) (in symbols \({\mathcal {C}}\vDash \phi \)) if \(\phi \) is valid on every member of \({\mathcal {C}}\). We call the set of formulas valid on the class \({\mathcal {C}}\) the logic of \({\mathcal {C}}\) and we write \(Log({\mathcal {C}})\). It is well known that the logic of \(\textsf{HA}\) is \(\texttt{IPC}\).

We say that a set of formulas L in the signature \({\mathcal {L}}_{\texttt{IPC}}\) is an intermediate logic if \(\texttt{IPC}\subseteq L\subseteq \texttt{CPC}\) and, additionally, L is closed under modus ponens and uniform substitution. We shall write \(L\vdash \phi \) when \(\phi \in L\). A possibly surprising result is that not only \(\texttt{IPC}\), but every intermediate logic is sound and complete with respect to a variety of Heyting algebras [10].

This result can be combined with Theorem 2.3 to obtain a semantics based on Esakia spaces. Let \(\mathfrak {E}\) be an Esakia space and consider a map \(\mu : \texttt{AT}\rightarrow {\mathcal {C}}{\mathcal {U}}(\mathfrak {E})\), which we call a topological valuation. We can define an interpretation of formulas of \({\mathcal {L}}_{\texttt{IPC}}\) based on clopen upsets of \(\mathfrak {E}\):

$$\begin{aligned} \begin{array}{llll} \llbracket p \rrbracket ^{\mathfrak {E},\mu } &{}= \mu (p) &{}\llbracket \bot \rrbracket ^{\mathfrak {E},\mu } &{}= \emptyset \\ \llbracket \top \rrbracket ^{\mathfrak {E},\mu } &{}= \mathfrak {E} &{}\llbracket \phi \wedge \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \cap \llbracket \psi \rrbracket ^{\mathfrak {E},\mu } \\ \llbracket \phi \rightarrow \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \overline{\llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \setminus \llbracket \psi \rrbracket ^{\mathfrak {E},\mu }} &{}\llbracket \phi \vee \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \cup \llbracket \psi \rrbracket ^{\mathfrak {E},\mu }. \end{array} \end{aligned}$$

Notice that these are exactly the Heyting algebra operations of the dual algebra \(H_{\mathfrak {E}}\). We say that a formula \(\phi \) is valid on a space \(\mathfrak {E}\) (in symbols \(\mathfrak {E} \vDash \phi \)) if for every valuation \(\mu :\texttt{AT}\rightarrow {\mathcal {C}}{\mathcal {U}}(\mathfrak {E})\) we have \(\llbracket \phi \rrbracket ^{\mathfrak {E},\mu } = \mathfrak {E}\). For a class \({\mathcal {E}}\) of Esakia spaces, we say that \(\phi \) is valid on \({\mathcal {E}}\) (in symbols \({\mathcal {E}} \vDash \phi \)) if \(\phi \) is valid on every member of the class. We call the set of formulas valid on the class \({\mathcal {E}}\) the logic of \({\mathcal {E}}\) and we write \(Log({\mathcal {E}})\). We stress that, in the literature on intuitionistic logic, the term topological semantics refers to a different semantics from the one presented here, one in which atomic formulas are assigned to opens of an arbitrary topological space (see [5]).

As a consequence of the results for classes of Heyting algebras, we have that the logic of the class \(\textsf{Esa}\) of all Esakia spaces is intuitionistic logic and that every intermediate logic is the logic of some class of Esakia spaces. Firstly, let us recall the correspondence between varieties of Heyting algebras and intermediate logics. Let L be an intermediate logic and \(Var(L)=\{H\in \textsf{HA}\mid H\vDash L \}\) the corresponding variety. The algebraic completeness theorem for intermediate logics states that for any intermediate logic L, \(L\vdash \phi \) if and only if \(Var(L)\vDash \phi \). Conversely, if \({\mathcal {V}}\) is a variety of Heyting algebras, then the definability theorem for varieties of Heyting algebras tells us that \(H\in {\mathcal {V}}\) if and only if \(H\vDash Log({\mathcal {V}})\), where \( Log({\mathcal {V}}) = \{ \phi \in {\mathcal {L}}_{\texttt{IPC}}\,|\, {\mathcal {V}}\vDash \phi \}\) is the logic of \({\mathcal {V}}\).

Using the Esakia duality and the semantics presented above, it follows that \(H\vDash \phi \) if and only if \(\mathfrak {E}_H\vDash \phi \). We can then translate the definability theorem and the algebraic completeness to the setting of Esakia spaces. To this end, we firstly define the concept corresponding to a variety of Heyting algebras: we say that a class of Esakia spaces \({\mathcal {E}}\) is a variety of Esakia spaces if \({\mathcal {E}}\) is closed under p-morphic images, closed upsets and coproducts. These operations correspond through Esakia duality to the operations of subalgebras, homomorphic images and products respectively. (Finite coproducts of Esakia spaces are simply disjoint unions thereof, while infinite coproducts require additionally that one takes a suitable compactification of infinite disjoint unions, see e.g. [22, Example 5.3.11].)

Let \(\Lambda (\textsf{HA})\) and \(\Lambda (\texttt{IPC})\) be the complete lattice of varieties of Heyting algebras and the complete lattice of intermediate logics respectively (see [10, Section 7.6]). It is straightforward to show that an arbitrary intersection of varieties of Esakia spaces is again a variety; thus the family of varieties of Esakia spaces \(\Lambda (\textsf{Esa})\) forms a complete lattice too. In analogy with the algebraic case, we define the two functions \(Space: \Lambda (\texttt{IPC}) \rightarrow \Lambda (\textsf{Esa})\) and \(Log: \Lambda (\textsf{Esa}) \rightarrow \Lambda (\texttt{IPC})\) as follows:

$$\begin{aligned} Space(L)&= \{\mathfrak {E}\in \textsf{Esa}\mid \mathfrak {E}\vDash L \};\\ Log({\mathcal {E}})&=\{\phi \in {\mathcal {L}}_{\texttt{IPC}}\mid {\mathcal {E}}\vDash \phi \}. \end{aligned}$$

By looking at these maps in light of the duality between Heyting algebras and Esakia spaces, we readily obtain the following result, which establishes a version of completeness and definability for varieties of Esakia spaces.

Theorem 2.4

Let L be an intermediate logic, \({\mathcal {E}}\) a variety of Esakia Spaces, \(\phi \) a formula and \(\mathfrak {E}\) an Esakia Space. Then we have the following:

$$\begin{aligned} \phi \in L&\Longleftrightarrow Space(L)\vDash \phi ; \\ \mathfrak {E}\in {\mathcal {E}}&\Longleftrightarrow \mathfrak {E}\vDash Log({\mathcal {E}}). \end{aligned}$$

Finally, Esakia duality can be lifted to the level of the lattices of varieties of Heyting algebras and of Esakia spaces. In particular, the maps

$$\begin{aligned} \begin{array}{ll} \overline{{{\mathcal {P}}}{{\mathcal {F}}}}: \Lambda (\textsf {HA}) \rightarrow \Lambda (\textsf {Esa}) &{}{}\overline{{\mathcal {C}}{\mathcal {U}}}: \Lambda (\textsf {Esa}) \rightarrow \Lambda (\textsf {HA}) \\ \overline{{{\mathcal {P}}}{{\mathcal {F}}}}({\mathcal {V}}) = \{ \mathfrak {E}\mid \mathfrak {E}\cong \mathfrak {E}_H \text{ for } H\in {\mathcal {V}}\} {}\qquad &{}\overline{{\mathcal {C}}{\mathcal {U}}}({\mathcal {E}}) = \{ H\mid H \cong H_{\mathfrak {E}} \text{ for } \mathfrak {E} \in {\mathcal {E}} \} \end{array} \end{aligned}$$

are inverse to each other. The names \(\overline{{{\mathcal {P}}}{{\mathcal {F}}}}\) and \(\overline{{\mathcal {C}}{\mathcal {U}}}\) indicate that these maps can be seen as liftings of the maps \({{\mathcal {P}}}{{\mathcal {F}}}\) and \({\mathcal {C}}{\mathcal {U}}\) respectively to varieties. The lattices \(\Lambda (\textsf{HA})\) and \(\Lambda (\textsf{Esa})\) are then isomorphic, whence we also obtain that \(\Lambda (\texttt{IPC})\cong ^{op}\Lambda (\textsf{HA})\cong \Lambda (\textsf{Esa})\). The relations between the lattices \(\Lambda (\textsf{HA})\), \(\Lambda (\textsf{Esa})\) and \(\Lambda (\texttt{IPC})\) are thus completely described by these lattice isomorphisms.

2.4 \(\texttt{DNA}\)-logics

In this paper, we are especially interested in regular Heyting algebras and their connection to Esakia spaces. This class of structures has important connections to a family of (non-standard) logics closely related to intermediate logics, namely \(\texttt{DNA}\)-logics, where \(\texttt{DNA}\) stands for double negation on atoms. These logics were originally introduced in [27] and later studied in [8, 11]. Recall that, for any formula \(\phi \), we write \(\phi [{\overline{\lnot p}}/{\overline{p}}]\) for the formula obtained by simultaneously replacing every atom p occurring in \(\phi \) by \(\lnot p\).

Definition 2.5

For every intermediate logic L, its negative variant \(L^\lnot \) is

$$\begin{aligned} L^\lnot \;=\; \{\,\phi \in {\mathcal {L}}_{\texttt{IPC}}\,|\, \phi [{\overline{\lnot p}}/{\overline{p}}]\in L \,\}. \end{aligned}$$

We call the negative variant of some intermediate logic a \(\texttt{DNA}\)-logic.

Every \(\texttt{DNA}\)-logic contains the formula \(\lnot \lnot p \rightarrow p\) for every atomic proposition \(p\in \texttt{AT}\)—but in general this is not true if we replace p by an arbitrary formula \(\phi \). So we can think of \(\texttt{DNA}\)-logics as intermediate logics where atoms do not play the role of arbitrary formulas, since the principle of uniform substitution does not hold, but rather the role of arbitrary negated formulas. We notice that \(\texttt{DNA}\)-logics are an example of weak logics in the sense of [28, Definition 2], i.e. they are consequence relations closed under permutations of atomic variables.

Given a \(\texttt{DNA}\)-logic \(\texttt{L}\) there is a standard way to find an intermediate logic L such that \(L^{\lnot } = \texttt{L}\), as the following lemma shows.

Lemma 2.6

Given a \(\texttt{DNA}\)-logic \(\texttt{L}\), define the set

$$\begin{aligned} S(\texttt{L}) \;{:}{=}\; \{\, \phi \,|\, \sigma (\phi ) \in \texttt{L} \text { for every substitution }\sigma \,\}. \end{aligned}$$

Then \(S(\texttt{L})\) is an intermediate logic and \((S(\texttt{L}))^{\lnot } = \texttt{L}\).

We refer the reader to [8, Theorem 4.6] for the proof of the previous lemma. \(S(\texttt{L})\) is usually referred to as the schematic fragment of the logic \(\texttt{L}\)—see for example [11].

There is also another way to characterize \(\texttt{DNA}\)-logics, that is, through their algebraic semantics based on Heyting algebras. Let H be a Heyting algebra, then we call a valuation \(\mu : \texttt{AT}\rightarrow H\) negative if every atom is mapped to a regular element of H, or equivalently if \(\lnot \lnot \mu (p) = \mu (p)\) for every p. If we restrict the algebraic semantics presented in Subsection 2.3 to negative valuations we obtain a correct semantics for \(\texttt{DNA}\)-logics, in the following sense: If H is a Heyting algebra, the set of formulas \(\phi \) such that \(\llbracket \phi \rrbracket ^{H,\mu } = 1\) for every negative valuation \(\mu \) is a \(\texttt{DNA}\)-logic—we call this set the \(\texttt{DNA}\)-logic of H. We write \( H\vDash ^{\lnot } \phi \) if \(\llbracket \phi \rrbracket ^{H,\mu } = 1\) for every negative valuation \(\mu \), and we extend this notion to classes of algebras in the usual way.
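On a finite algebra this semantics can be checked mechanically, since it suffices to enumerate the regular elements and evaluate the formula at them. The sketch below, again over the dual algebra of a hypothetical three-element fork and with ad hoc helpers, verifies that \(\lnot \lnot p\rightarrow p\) is \(\texttt{DNA}\)-valid on this algebra, although it is not valid under arbitrary valuations, and that \(p\vee \lnot p\) is not even \(\texttt{DNA}\)-valid on it.

```python
# Illustrative sketch: DNA-validity (negative valuations only) on a finite dual algebra.
from itertools import combinations

E = ["r", "a", "b"]                                        # hypothetical fork: r < a, r < b
leq = lambda x, y: x == y or (x == "r" and y in ("a", "b"))
TOP = frozenset(E)

def upsets():
    return [frozenset(U) for n in range(len(E) + 1) for U in combinations(E, n)
            if all(y in U for x in U for y in E if leq(x, y))]

def down(U): return frozenset(x for x in E if any(leq(x, y) for y in U))
def imp(U, V): return TOP - down(U - V)
def neg(U): return imp(U, frozenset())

H = upsets()
regular = [U for U in H if neg(neg(U)) == U]               # the regular elements H_neg

dnn = lambda p: imp(neg(neg(p)), p)                        # ~~p -> p
lem = lambda p: p | neg(p)                                 # p v ~p

print(all(dnn(p) == TOP for p in regular))                 # True:  DNA-valid
print(all(dnn(p) == TOP for p in H))                       # False: not valid simpliciter
print(all(lem(p) == TOP for p in regular))                 # False: not even DNA-valid
```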

In [8] this semantics was employed in order to adapt results from the field of intermediate logic to study \(\texttt{DNA}\)-logics. In particular, we can show that \(\texttt{DNA}\)-logics form a lattice \(\Lambda (\texttt{IPC}^\lnot )\), dual to a particular sublattice of \(\Lambda (\textsf{HA})\). We write \( K \preceq H\) whenever K is a subalgebra of H.

Definition 2.7

(\(\texttt{DNA}\)-variety). A variety of Heyting algebras \({\mathcal {V}}\) is called a \(\texttt{DNA}\)-variety if it is additionally closed under the operation:

$$\begin{aligned} {\mathcal {V}}^{\uparrow } \;=\; \{\, H \mid \exists K\in {\mathcal {V}}.\; K_{\lnot } = H_{\lnot } \text { and } K \preceq H \,\}. \end{aligned}$$

We write \({\mathbb {D}}({\mathcal {C}})\) for the smallest \(\texttt{DNA}\)-variety of algebras containing \({\mathcal {C}}\). We let \(\Lambda (\textsf{HA}^{\uparrow })\) be the sublattice of \(\Lambda (\textsf{HA})\) comprised of all and only the \(\texttt{DNA}\)-varieties. We remark here that, since \(\texttt{DNA}\)-varieties are uniquely determined by their regular elements, there is a one-to-one correspondence between \(\texttt{DNA}\)-varieties and varieties generated by regular Heyting algebras.

It can be shown [8, Section 3.4] that for any \(\texttt{DNA}\)-logic \(\texttt{L}\) the set \(Var^{\lnot }(\texttt{L}) {:}{=} \{ H \mid \forall \phi \in \texttt{L}.\; H\vDash ^{\lnot } \phi \}\) is a \(\texttt{DNA}\)-variety and that, given a \(\texttt{DNA}\)-variety \({\mathcal {V}}\), the set \(Log^{\lnot }({\mathcal {V}}) {:}{=} \{ \phi \mid {\mathcal {V}}\vDash ^{\lnot } \phi \}\) is a \(\texttt{DNA}\)-logic. With these preliminary results in place, we can state the correspondence between \(\texttt{DNA}\)-logics and \(\texttt{DNA}\)-varieties, analogous to the one for intermediate logics and varieties. See [8, Theorem 3.35] for the proof of the following theorem.

Theorem 2.8

The lattices \(\Lambda (\texttt{IPC}^{\lnot })\) and \(\Lambda (\textsf{HA}^{\uparrow })\) are dually isomorphic. In particular, the maps \(Var^{\lnot }\) and \(Log^{\lnot }\) are inverse to each other.

The (propositional) inquisitive logic \(\texttt{InqB}\) [13, 15] is usually introduced, analogously to dependence logic, in terms of team semantics (see Section 6.3). However, it can also be viewed as a \(\texttt{DNA}\)-logic. We start by recalling the definitions of the following intermediate logics:

$$\begin{aligned} \begin{array}{ll} {\texttt {KP} } &{} = \texttt{IPC} + (\lnot p \rightarrow q\vee r) \rightarrow (\lnot p \rightarrow q) \vee (\lnot p \rightarrow r) \\ {\texttt {ND} } &{} = \texttt{IPC} + \{ (\lnot p \rightarrow \bigvee _{i\le k} \lnot q_i )\rightarrow \bigvee _{i\le k}(\lnot p \rightarrow \lnot q_i) \mid k\ge 2 \}. \end{array} \end{aligned}$$

Moreover, we define the intermediate logic \(\texttt{ML}\) as the set of all formulas which are valid in posets of the form \((\wp (n){\setminus }\emptyset , \supseteq )\) for \(0<n<\omega \), under the usual Kripke semantics. It is a well-known fact that \(\texttt{ND}\subseteq \texttt{KP}\subseteq \texttt{ML}\) (see e.g. [10]). The following theorem establishes an important connection between these intermediate logics and inquisitive logic.

Theorem 2.9

(Ciardelli [11]). Inquisitive logic is the negative variant of any intermediate logic L such that \(\texttt{ND}\subseteq L\subseteq \texttt{ML}\).

In the light of the previous theorem, the algebraic approach that we introduced to study \(\texttt{DNA}\)-logics can be employed to study inquisitive logic as well, as was done in [8]. In fact, this algebraic approach extends the original work from [7] on the algebraic semantics of inquisitive logic. It was later shown in [28] that this algebraic semantics for \(\texttt{InqB}\) is unique, making \(\texttt{InqB}\) (as well as every \(\texttt{DNA}\)-logic) algebraizable in a suitable sense.

3 The Stone space of maximal elements

We start by introducing and recalling some basic properties of regular clopens of Esakia spaces. These properties are folklore, but we shall provide detailed proofs in this section, as it does not seem to us that they are explicitly presented in the literature. We stress however that Theorem 3.4 is already stated in [19, A.2.1] and [2, Section 3].

First, notice that given an Esakia space \(\mathfrak {E}\) we can consider two topologies on it: the Esakia topology \(\tau _{\mathfrak {E}}\) it comes equipped with, and the Alexandrov topology \(\tau _\le \) induced by the partial order on \(\mathfrak {E}\), i.e., the topology whose open sets are exactly the upsets. To distinguish the interior and closure operators in the two topologies we use the notations \(\textrm{Int}\), \(\textrm{Cl}\) and \(\textrm{Int}_{\le }, \textrm{Cl}_{\le }\) respectively. As the next definition makes explicit, in the rest of this article when we speak of regular subsets of an Esakia space we always mean regular sets with respect to the Alexandrov topology.

Definition 3.1

An upset U of an Esakia space \(\mathfrak {E}\) is regular if

$$\begin{aligned} {\textrm{Int}_{\le }(\textrm{Cl}_{\le }(U)) = U}. \end{aligned}$$

We denote by \({{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {E})\) the regular upsets of \(\mathfrak {E}\), and we denote by \(\mathcal {RCU}(\mathfrak {E})\) the set of upsets of \(\mathfrak {E}\) that are (i) regular according to the Alexandrov topology and (ii) clopen according to the Esakia topology. We start by providing several equivalent characterisations of such subsets. Recall that we let \(\overline{U}=(U^{\downarrow })^c\) and that if \(U\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) then \(\overline{U}\) is its pseudocomplement in the Heyting algebra \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\).

Proposition 3.2

Let \(\mathfrak {E}\) be an Esakia space and let \(U\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\). Then the following are equivalent:

  1. (i)

    U is regular;

  2. (ii)

    \(U=\overline{\overline{U}}\);

  3. (iii)

    \(U^\downarrow {\setminus } U \;\subseteq \; \overline{U}^\downarrow {\setminus } \overline{U}\).

Proof

Firstly, we notice that just by the definition of closure and interior we immediately obtain the following:

$$\begin{aligned} x\in \textrm{Int}_{\le }(\textrm{Cl}_{\le }(U)) \Longleftrightarrow x^\uparrow \subseteq U^\downarrow \Longleftrightarrow x\notin ((U^\downarrow )^c)^\downarrow \Longleftrightarrow x\in \overline{\overline{U}}, \end{aligned}$$

showing the equivalence of (i) and (ii). The equivalence between (ii) and (iii) is then proved as follows.

(ii) \(\Rightarrow \) (iii)

Suppose \(U=\overline{\overline{U}}\) and let \(x\in U^\downarrow \setminus U\). Since \(x\notin U = (\overline{U}^{\downarrow })^c\), it follows that \(x \in \overline{U}^{\downarrow }\). Moreover, since \(x\in U^\downarrow \), we have that \(x\notin ( U^\downarrow )^c = \overline{U}\). That is, \(x\in \overline{U}^\downarrow {\setminus } \overline{U}\).

(iii) \(\Rightarrow \) (ii) Suppose that \(U^\downarrow {\setminus } U \,\subseteq \, \overline{U}^\downarrow {\setminus } \overline{U}\); we want to show that \(U = (\overline{U}^{\downarrow })^c\).

(\(\subseteq \)) Take \(x \in U\) and consider any \(y\ge x\), which lies again in U since U is an upset. Since \(U \cap \overline{U} = \emptyset \) it follows that \(y \notin \overline{U}\); and since y is an arbitrary element above x, it follows that \(x\notin \overline{U}^{\downarrow }\), that is, \(x \in (\overline{U}^{\downarrow })^c\).

\((\supseteq )\) Now suppose \(x \in (\overline{U}^{\downarrow })^c\), which entails \(x\notin \overline{U}^\downarrow {\setminus } \overline{U} \). Thus by assumption we have that \(x\notin U^\downarrow {\setminus } U \). Then, either \(x\in U\), which proves our claim, or \(x\notin U^\downarrow \). However the latter gives a contradiction, since \(x\in (U^\downarrow )^c = \overline{U}\) contradicts our assumption that \(x\in (\overline{U}^{\downarrow })^c \subseteq \overline{U}^c\). Hence we have that \(x\in U\), which proves our claim. \(\square \)

Now, given an upset Q we indicate with M(Q) the set of maximal elements of Q, that is:

$$\begin{aligned} M(Q) \;{:}{=}\; \{ q\in Q \mid \forall q' \in Q.( q' \ge q \implies q' = q ) \}. \end{aligned}$$

We often write simply M(p) in place of \(M(p^\uparrow )\). We especially remark that, by compactness, it follows that for every Esakia space \(\mathfrak {E}\) and for every element \(x\in \mathfrak {E}\) the set M(x) is nonempty—see e.g. [19, Theorem 3.2.1]. An important characterisation of elements in \(\mathcal {RCU}(\mathfrak {E})\) is then in terms of the maximal elements of the Esakia space \(\mathfrak {E}\), as the following proposition makes precise.

Proposition 3.3

Let \(\mathfrak {E}\) be an Esakia space and \(U\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\). Then the following are equivalent:

  1. (i)

    U is regular;

  2. (ii)

    For every \(x\in \mathfrak {E}\) we have that \(x\in U\) if and only if \( M(x)\subseteq U\).

Proof

Firstly notice that, if \(x \in U\) then \(M(x) \subseteq U\) since U is an upset. So in particular (ii) boils down to the right to left direction. We prove the two implications (i) \(\Rightarrow \) (ii) and (ii) \(\Rightarrow \) (i) separately.

(i) \(\Rightarrow \) (ii). Given \(x \in \mathfrak {E}\), suppose that \(M(x)\subseteq U\); we want to show that \(x \in U\). Towards a contradiction, assume that \(x\notin U\), which together with the previous assumption entails \(x \in U^{\downarrow }\setminus U\). Since U is regular by assumption, by Proposition 3.2 it follows that \(x\in \overline{U}^\downarrow \setminus \overline{U}\). Since \(\overline{U}\) is an upset itself and, by the remark above, there are maximal elements above every point of an Esakia space, it follows that \(M(x) \cap \overline{U} \ne \emptyset \). But this is in contradiction with \(M(x) \subseteq U\) since \(U \cap \overline{U} = \emptyset \).

(ii) \(\Rightarrow \) (i). By Proposition 3.2, it suffices to show that if \(x \in U^{\downarrow } \setminus U\) then \(x \in \overline{U}^{\downarrow } {\setminus } \overline{U}\). So consider \(x \in U^{\downarrow } {\setminus } U\). Since \(x\notin U\), by assumption \(M(x)\nsubseteq U\). By maximality of the elements in M(x), we have that \(M(x)\nsubseteq U^\downarrow \), thus \(M(x) \cap \overline{U} \ne \emptyset \). This implies that \(x \in \overline{U}^{\downarrow }\). Moreover, since \(x\in U^\downarrow \) we have that \(x\notin \overline{U}\), thus concluding that \(x \in \overline{U}^\downarrow {\setminus } \overline{U}\). \(\square \)
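For finite posets both characterisations can be tested directly. The following sketch, over a hypothetical five-element poset and with ad hoc helpers, checks that the double-pseudocomplement test of Proposition 3.2 and the maximal-element test of Proposition 3.3 single out the same upsets.

```python
# Illustrative sketch: Propositions 3.2 and 3.3 agree on a hypothetical finite poset.
from itertools import combinations

E = ["r", "x", "a", "b", "c"]                              # r < x < a, b   and   r < c
cover = {("r", "x"), ("x", "a"), ("x", "b"), ("r", "c")}
def leq(u, v):                                             # reflexive-transitive closure
    return u == v or any((u, w) in cover and leq(w, v) for w in E)

MAX = {p for p in E if all(not (leq(p, q) and p != q) for q in E)}
def M(p): return {q for q in MAX if leq(p, q)}             # maximal elements above p

def upsets():
    return [frozenset(U) for n in range(len(E) + 1) for U in combinations(E, n)
            if all(y in U for x in U for y in E if leq(x, y))]

def down(U): return frozenset(p for p in E if any(leq(p, q) for q in U))
def pc(U): return frozenset(E) - down(U)                   # pseudocomplement of an upset

H = upsets()
for U in H:
    via_negation = (pc(pc(U)) == U)                        # Proposition 3.2 (ii)
    via_maximals = all((p in U) == (M(p) <= U) for p in E) # Proposition 3.3 (ii)
    assert via_negation == via_maximals
print("the two tests agree on all", len(H), "upsets")
```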

By Proposition 3.2 the elements of \(\mathcal {RCU}(\mathfrak {E})\) correspond one-to-one to the regular elements of \(H_{\mathfrak {E}}\). In the light of this fact, it immediately follows that \( \mathcal {RCU}(\mathfrak {E}) \) is a Boolean algebra, where negation is defined as \(\lnot U{:}{=} \overline{U}\) and disjunction as \(U\dot{\vee } V{:}{=} \lnot (\overline{U}\cap \overline{V})\). Proposition 3.3 suggests then a connection between the Boolean algebras of regular elements and the Stone space of the maximal elements of \(\mathfrak {E}\). Consider the set \(M_{\mathfrak {E}}\) of maximal elements of \(\mathfrak {E}\). It is well-known [19, Theorem 3.2.3] that this set forms a Stone space under the relative topology \(\tau _{M_\mathfrak {E}}\) inherited from \(\mathfrak {E}\):

$$\begin{aligned} U\in \tau _{M_\mathfrak {E}} \;\Longleftrightarrow \; \exists V \in \tau _{\mathfrak {E}} \text { such that } U= V\cap M_\mathfrak {E}. \end{aligned}$$

The following result provides a correspondence between \(\mathcal {RCU}(\mathfrak {E})\) and \({\mathcal {C}}(M_\mathfrak {E})\). We attribute this result to Esakia, as it is mentioned in [19, A.2.1], but we develop the proof idea from [2, Section 3].

Theorem 3.4

(Esakia). Let \(\mathfrak {E}\) be an Esakia space and \({\mathcal {C}}(M_{\mathfrak {E}})\) the clopen sets of the Stone space \(M_{\mathfrak {E}}\). Then the map:

$$\begin{aligned} M:&\, \mathcal {RCU}(\mathfrak {E}) \rightarrow {\mathcal {C}}(M_\mathfrak {E}) \\ M:&\, U \mapsto U \cap M_\mathfrak {E} \end{aligned}$$

is an isomorphism of Boolean algebras.

Proof

First, we show that M is well-defined: let \(U\in \mathcal {RCU}(\mathfrak {E})\). Since U is a clopen of \(\mathfrak {E}\), then \(M(U) = U\cap M_\mathfrak {E}\) is a clopen of \(M_{\mathfrak {E}}\) by definition of the relative topology.

Secondly, we check that M is a homomorphism. The only non-trivial case to check is the condition for negation. Let \(U \in \mathcal {RCU}(\mathfrak {E})\), then we have:

$$\begin{aligned} M(\lnot U) \;=\; M(\, \overline{U} \,) \;=\; M_\mathfrak {E} \cap (U^{\downarrow })^c \;=\; M_\mathfrak {E}\setminus M(U) \;=\; \lnot M(U); \end{aligned}$$

where the latter negation is computed in the Boolean algebra \({\mathcal {C}}(M_\mathfrak {E}) \).

Thirdly, we show that M is injective. Suppose \(M(U) = M(V)\) for \(U,V \in \mathcal {RCU}(\mathfrak {E})\), then for any \(x\in \mathfrak {E}\) we have that \(M(x)\subseteq U\) if and only if \(M(x)\subseteq V\). So by Proposition 3.3 it follows that \(x\in U\) if and only if \(x\in V\), whence \(U = V\).

Finally, we show that M is surjective. Let \(U\in {\mathcal {C}}(M_\mathfrak {E})\), since U is a clopen of \(M_{\mathfrak {E}}\) under the relative topology and \(M_{\mathfrak {E}}\) is closed in \(\mathfrak {E}\), we have by compactness that \(U=V\cap M_{\mathfrak {E}}\) for some V clopen in the Esakia topology of \(\mathfrak {E}\). By the display above, we have that for all \(W\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\), \(M(\overline{W})=M_\mathfrak {E}{\setminus } M(W)\), from which it follows that

$$\begin{aligned} M(\overline{\overline{V}}) = M_\mathfrak {E}\setminus (M_\mathfrak {E}\setminus M(V))=M_\mathfrak {E}\cap V =U. \end{aligned}$$

Since \(\overline{\overline{V}}\in \mathcal {RCU}(\mathfrak {E}) \) this shows that M is also surjective. \(\square \)

The following corollary follows immediately using Stone duality. Notice that if B is a Boolean algebra we write \(\mathfrak {S}_B\) for its dual Stone space.

Corollary 3.5

Let H be a Heyting algebra, then the Stone dual \(\mathfrak {S}_{H_\lnot }\) of the Boolean algebra \(H_\lnot \) is isomorphic to the Stone space \(M_{\mathfrak {E}_H}\), i.e. \( \mathfrak {S}_{H_\lnot }\cong M_{\mathfrak {E}_H}\).

Proof

By Theorem 3.4 we have that \(\mathcal {RCU}(\mathfrak {E}_H)\cong {\mathcal {C}}(M_{\mathfrak {E}_H})\), and consequently \(H_\lnot \cong {\mathcal {C}}(M_{\mathfrak {E}_H})\). By Stone duality it follows that \( \mathfrak {S}_{H_\lnot }\cong M_{\mathfrak {E}_H} \). \(\square \)

4 Regular Esakia spaces

A main goal of this work is to study Esakia spaces dual to regular Heyting algebras. We start by giving them a name.

Definition 4.1

An Esakia space \(\mathfrak {E}\) is regular if \(H_\mathfrak {E}= \langle (H_{\mathfrak {E}})_\lnot \rangle \).

Given an Esakia space \(\mathfrak {E}\), we also write \(\mathfrak {E}_r\) for the Esakia space dual to \(\langle (H_{\mathfrak {E}})_\lnot \rangle \). The notion of regular Esakia spaces is thus defined in external terms, by means of Esakia duality. In this section we consider the problem of providing an internal characterisation of regular Esakia spaces.

We give two partial answers to this question. Firstly, in Section 4.1, we give a characterisation of regular Esakia spaces in terms of special p-morphisms, and we apply it to the finite case to obtain a more fine-grained description. Secondly, in Section 4.2, we follow an alternative approach in terms of suitable equivalence relations. This allows us to obtain a necessary condition for an Esakia space to be regular and also a description of finite regular posets. Finally, in Section 4.3, we use duality methods to prove some additional results on varieties generated by regular Heyting algebras.

4.1 A characterisation by regular-preserving morphisms

One way to characterise regular Heyting algebras is to look at homomorphisms fixing their Boolean algebra of regular elements. This motivates the following definition.

Definition 4.2

Let \(h:\mathfrak {E}\rightarrow \mathfrak {E}'\) be a p-morphism, then h preserves regulars if \( h^{-1}: \mathcal {RCU}(\mathfrak {E}') \rightarrow \mathcal {RCU}(\mathfrak {E})\) is an isomorphism of Boolean algebras.

As regular clopen upsets of Esakia spaces correspond to clopens of maximal elements, it follows that regular-preserving p-morphisms can be characterised in terms of their action on maximal points.

Proposition 4.3

Let \(h:\mathfrak {E}\rightarrow \mathfrak {E}'\) be a p-morphism, then h preserves regulars if and only if \(h{\upharpoonright } M_\mathfrak {E}\) is a homeomorphism.

Proof

We provide full details for the left to right direction and simply resort to Stone duality for the converse.

\((\Rightarrow )\) Since h is continuous and the spaces \(M_{\mathfrak {E}}\) and \(M_{\mathfrak {E}'}\) are compact and Hausdorff, it suffices to check that \(h{\upharpoonright } M_\mathfrak {E}\) is a bijection onto \(M_{\mathfrak {E}'}\). We first show that \(h{\upharpoonright } M_\mathfrak {E}\) is an injection. Consider two distinct \(x,y\in M_{\mathfrak {E}}\); since \(M_{\mathfrak {E}}\) is a Stone space, there are two disjoint clopen neighbourhoods \(U_x, U_y\) of x and y respectively. From Theorem 3.4 it follows that \(M^{-1}(U_x)\cap M^{-1}(U_y) = \emptyset \). Now, since h preserves regulars, it follows that \(h^{-1}{{\upharpoonright } }\mathcal {RCU}(\mathfrak {E}') \) is an isomorphism, whence \(M^{-1}(U_x)= h^{-1}(V_x)\) and \(M^{-1}(U_y)= h^{-1}(V_y)\) for some \(V_x,V_y\in \mathcal {RCU}(\mathfrak {E}')\) such that \(V_x\cap V_y=\emptyset \). Then, it follows that \(h(x)\in V_x\) and \(h(y)\in V_y\), whence \(h(x)\ne h(y)\).

Now let \(x\in M_{\mathfrak {E}'}\) and consider the family \(\{U_x^i\mid i\in I\}\) of all clopen neighbourhoods of x in \(M_{\mathfrak {E}'}\). We notice that, since any two points in \(M_{\mathfrak {E}'}\) are separated by a clopen, \(\bigcap _{i\in I} U^i_x=\{x\}\). Since \( h^{-1}: \mathcal {RCU}(\mathfrak {E}') \rightarrow \mathcal {RCU}(\mathfrak {E})\) is an isomorphism of Boolean algebras, it follows by Theorem 3.4 that \(g{:}{=}M\circ h^{-1}\circ M^{-1}\) is an isomorphism between \( {\mathcal {C}}(M_{\mathfrak {E}'})\) and \( {\mathcal {C}}(M_{\mathfrak {E}})\). Now, if \(\bigcap _{i\in I} g(U^i_x)=\emptyset \), then by compactness there is some finite \(I_0\subseteq I\) such that \(\bigcap _{i\in I_0} g(U^i_x)=\emptyset \), contradicting \(\bigcap _{i\in I_0} U^i_x\ne \emptyset \). Let \(y\in \bigcap _{i\in I} g(U^i_x) \); then y is maximal and additionally \(h(y)\in M^{-1}(\bigcap _{i\in I} U^i_x)\). Since h(y) must also be maximal and \(M(M^{-1}(\bigcap _{i\in I} U^i_x))=\{x\}\), this shows that \(h{\upharpoonright } M_\mathfrak {E}\) is also surjective.

\((\Leftarrow )\) Since \(h{\upharpoonright } M_\mathfrak {E}\) is a bijection, it follows by Stone duality that the map \(M\circ h^{-1} \circ M^{-1}\) is an isomorphism between \( {\mathcal {C}}(M_{\mathfrak {E}'})\) and \( {\mathcal {C}}(M_{\mathfrak {E}})\). By Theorem 3.4 we then have that \( h^{-1}: \mathcal {RCU}(\mathfrak {E}') \rightarrow \mathcal {RCU}(\mathfrak {E})\) is an isomorphism of Boolean algebras, which proves our claim. \(\square \)

It is then immediate to conclude that the embedding of a Heyting algebra into one with the same regular elements induces a surjective p-morphism of the dual spaces which is injective on the maximal elements.

Corollary 4.4

Let \(A,B \in \textsf{HA}\), \(A\preceq B\) and \(A_\lnot =B_\lnot \), then there is a surjective p-morphism \(h:\mathfrak {E}_B\twoheadrightarrow \mathfrak {E}_A\) which is also injective on maximal elements.

Proof

By Esakia duality, the inclusion \(A\preceq B\) induces a p-morphism \(h:\mathfrak {E}_B\twoheadrightarrow \mathfrak {E}_A\) defined by \(h: F \mapsto F\cap A\), where \(F\subseteq B\) is any prime filter over B. The fact that h is continuous and surjective already follows from the duality between subalgebras and quotient spaces. By Proposition 4.3 above we also have that h is injective on maximal elements. \(\square \)

The following theorem provides a characterisation of regular Esakia spaces.

Theorem 4.5

The following are equivalent, for any Esakia Space \(\mathfrak {E}\):

  1. (i)

    \(H_{\mathfrak {E}}\) is regular;

  2. (ii)

    For any Heyting algebra K, \( K\preceq H_{\mathfrak {E}}\) and \( (H_{\mathfrak {E}})_\lnot =K_\lnot \) entail \(K= H_{\mathfrak {E}}\);

  3. (iii)

    For any Esakia space \(\mathfrak {E}'\) and any surjective p-morphism \(f:\mathfrak {E}\twoheadrightarrow \mathfrak {E}'\), if \(f{{\upharpoonright } } M_\mathfrak {E}\) is a homeomorphism, then f is a homeomorphism.

Proof

Claims (i) and (ii) are equivalent by the definition of being regular. We show the equivalence of (ii) and (iii).

(ii) \(\Rightarrow \) (iii). Let \( f:\mathfrak {E}\twoheadrightarrow \mathfrak {E}'\) be a surjective p-morphism; then by Esakia duality we have that \(f^{-1}[H_{\mathfrak {E}'}]\preceq H_{\mathfrak {E}}\). By Stone duality, if \(f{\upharpoonright } M_\mathfrak {E}\) is a homeomorphism, then \(M\circ f^{-1} \circ M^{-1}\) is an isomorphism of Boolean algebras and so by Theorem 3.4 \(f^{-1}: \mathcal {RCU}(\mathfrak {E}') \rightarrow \mathcal {RCU}(\mathfrak {E}) \) is an isomorphism. Since \(\mathcal {RCU}(\mathfrak {E})=(H_{\mathfrak {E}})_\lnot \) and \(\mathcal {RCU}(\mathfrak {E}')=(H_{\mathfrak {E}'})_\lnot \), it follows that \((f^{-1}[H_{\mathfrak {E}'}])_\lnot =f^{-1}[(H_{\mathfrak {E}'})_\lnot ]= (H_{\mathfrak {E}})_\lnot \), which by (ii) entails \(f^{-1}[H_{\mathfrak {E}'}]=H_{\mathfrak {E}}\). By Esakia duality it follows that f is injective, and so is a homeomorphism.

(iii) \(\Rightarrow \) (ii). Let \(K\preceq H_{\mathfrak {E}}\) be such that \(K_\lnot = (H_{\mathfrak {E}})_\lnot \). By Corollary 4.4, there is a surjective p-morphism \(h:\mathfrak {E}\twoheadrightarrow \mathfrak {E}_K\) which is also injective on maximal elements. Hence, \(h{{\upharpoonright } } M_\mathfrak {E}\) is a continuous bijection of Stone spaces and thus a homeomorphism. By (iii) it follows that \(h:\mathfrak {E}\twoheadrightarrow \mathfrak {E}_K\) is a homeomorphism of Esakia spaces and, by Esakia duality, we obtain that \(K= H_{\mathfrak {E}}\). \(\square \)

In the finite context the characterisation of the previous theorem can be further strengthened. We recall the following definitions of \(\alpha \)-reductions and \(\beta \)-reductions [6, 17].

Definition 4.6

Let \(\mathfrak {F}\) be a partial order and \(x,y\in \mathfrak {F}\) be distinct elements.

  • Suppose \(x^\uparrow =y^\uparrow \cup \{x\}\). An \(\alpha \)-reduction is a surjection \(h:\mathfrak {F}\rightarrow \mathfrak {F}{\setminus }\{y\}\) such that \(h(y)=x\) and \(h(z)=z\) whenever \(z\ne y\).

  • Suppose \(x^\uparrow {\setminus } \{x\}=y^\uparrow {\setminus }\{y\}\). A \(\beta \)-reduction is a surjection \(h:\mathfrak {F}\rightarrow \mathfrak {F}{\setminus }\{y\}\) such that \(h(y)=x\) and \(h(z)=z\) whenever \(z\ne y\).

Notice that, since \(\alpha \)-reductions and \(\beta \)-reductions are not necessarily continuous with respect to the Stone topology of an Esakia space, we have introduced them only with respect to partial orders and not for Esakia spaces. This explains why we will use them to characterize only finite regular Esakia spaces, whose underlying topology is discrete. In particular, in the finite case we can always look at immediate successors of points of a poset: given any \(x\in \mathfrak {F}\), we let \(S(x){:}{=}\{ y\in \mathfrak {F}\mid x<y \text { and } x< z\le y \Rightarrow z=y \}\). We recall that a Heyting algebra is subdirectly irreducible if it has a second greatest element.

Theorem 4.7

H is a finite, regular (subdirectly irreducible) Heyting algebra if and only if \(\mathfrak {E}_H\) is a finite, (rooted) poset such that:

  1. (i)

    For all non-maximal \(x\in \mathfrak {E}_H\), \(|S(x)|\ge 2\).

  2. (ii)

    For all non-maximal \(x,y\in \mathfrak {E}_H\), if \(x\ne y\), then \(S(x)\ne S(y).\)

Proof

It suffices to consider conditions (i) and (ii) since it is already well-known that finite (subdirectly irreducible) Heyting algebras correspond to finite (rooted) posets under Esakia duality.

\((\Rightarrow )\) (i) If this is not the case, then there is a non-maximal point \(x\in \mathfrak {E}_H\) such that \(S(x)=\{y\}\) for some y. Then, we can apply the \(\alpha \)-reduction h such that \(h(x)=h(y)\) and \(h(z)=z\) for all \(z\ne x\). Then h is a p-morphism which is injective on maximal elements, hence, by Theorem 4.5, h is an isomorphism, contradicting \(x\ne y\) and \(h(x)=h(y)\). (ii) If this is not the case, then there are two distinct non-maximal \(x,y\in \mathfrak {E}_H\) such that \(S(x)=S(y)\). We then apply the \(\beta \)-reduction h such that \(h(x)=h(y)\) and \(h(z)=z\) for all \(z\ne y\). But then h is a p-morphism which is injective on maximal elements hence, by Theorem 4.5, h must also be an isomorphism, which gives us a contradiction.

\((\Leftarrow )\) If H is not regular, then \(\langle H_\lnot \rangle \preceq H\) and \(\langle H_\lnot \rangle \ne H\). By Corollary 4.4 there exists a p-morphism \(h: \mathfrak {E}_H\twoheadrightarrow \mathfrak {E}_{\langle H_\lnot \rangle }\) which is also injective on maximal elements. Moreover, since \(\langle H_\lnot \rangle \ne H\), we also have that \(\mathfrak {E}_H\ne \mathfrak {E}_{\langle H_\lnot \rangle }\), meaning that h is not injective. Since \(\mathfrak {E}_H\) is finite, it follows that \(h=f_0\circ \dots \circ f_n\), where each \(f_i\) is either an \(\alpha \)- or a \(\beta \)-reduction – see [6, Prop. 3.1.7]. In particular, \(f_n\) is an \(\alpha \)- or a \(\beta \)-reduction over \(\mathfrak {E}_H\), meaning that either (i) or (ii) fails. \(\square \)
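For small posets, Definition 4.1 and Theorem 4.7 can be compared mechanically: one generates the subalgebra of the dual algebra spanned by the regular upsets and, separately, checks conditions (i) and (ii) on immediate successors. The Python sketch below, with hypothetical example posets and ad hoc helpers, does this for a fork, which is regular, and for a three-element chain, which is not.

```python
# Illustrative sketch: comparing Definition 4.1 with the test of Theorem 4.7.
from itertools import combinations

def analyse(E, cover):
    def leq(u, v):
        return u == v or any((u, w) in cover and leq(w, v) for w in E)
    top = frozenset(E)
    ups = {frozenset(U) for n in range(len(E) + 1) for U in combinations(E, n)
           if all(y in U for x in U for y in E if leq(x, y))}
    down = lambda U: frozenset(p for p in E if any(leq(p, q) for q in U))
    imp = lambda U, V: top - down(U - V)
    neg = lambda U: imp(U, frozenset())
    # Subalgebra of the dual algebra generated by the regular upsets (Definition 4.1).
    gen = {U for U in ups if neg(neg(U)) == U} | {frozenset(), top}
    while True:
        new = {op(U, V) for U in gen for V in gen
               for op in (lambda a, b: a & b, lambda a, b: a | b, imp)} - gen
        if not new:
            break
        gen |= new
    regular = (gen == ups)
    # Conditions (i) and (ii) of Theorem 4.7 on immediate successors.
    S = {x: {y for y in E if leq(x, y) and x != y and
             all(not (leq(x, z) and leq(z, y)) for z in E if z not in (x, y))}
         for x in E}
    nonmax = [x for x in E if S[x]]
    cond = (all(len(S[x]) >= 2 for x in nonmax)
            and all(S[x] != S[y] for x, y in combinations(nonmax, 2)))
    return regular, cond

print(analyse(["r", "a", "b"], {("r", "a"), ("r", "b")}))       # (True, True)
print(analyse(["0", "1", "2"], {("0", "1"), ("1", "2")}))       # (False, False)
```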

4.2 A characterisation by equivalence relations

Although Theorems 4.5 and 4.7 already give us suitable characterisations of (finite) regular Esakia spaces, we provide an alternative description of them in terms of suitable equivalence relations. This will make more explicit how finite regular posets are controlled by their maximal elements and will also allow for a finer analysis of polynomials of regular elements.

To this end we introduce a way to identify points over an Esakia space. If \(X\subseteq \mathfrak {F}\) and \(\theta \) is an equivalence relation, we let \(X/\theta {:}{=} \{[x]_\theta \mid x\in X \} \). We define the following equivalence relations, which can also be seen as a kind of bounded bisimulation (see in particular [33]).

Definition 4.8

Let \(\mathfrak {E}\) be an Esakia space, we define:

$$\begin{aligned} x {\sim } _0 y \;&\;\Longleftrightarrow \; M(x)=M(y) \\ x {\sim } _{n+1} y \;&\;\Longleftrightarrow \; x^\uparrow / {\sim } _n = y^\uparrow / {\sim } _n \\ {\sim } _{\infty } \;&=\; \bigcap _{n\in \omega } {\sim } _n. \end{aligned}$$

For any \(x\in \mathfrak {E}\) we simply write \([x]_{n}\) and \([x]_\infty \) for its equivalence class over \( {\sim } _n\) and \( {\sim } _\infty \) respectively.
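On a finite poset the relations \( {\sim } _n\) can be computed directly. The sketch below, over a hypothetical four-element poset in which the point r has x as its only immediate successor, does so and shows that r and x are identified by every \( {\sim } _n\); this is in line with the fact that such a poset violates condition (i) of Theorem 4.7.

```python
# Illustrative sketch: computing the relations ~_n of Definition 4.8 on a finite poset.
E = ["r", "x", "a", "b"]                                   # r < x < a, b
cover = {("r", "x"), ("x", "a"), ("x", "b")}
def leq(u, v):
    return u == v or any((u, w) in cover and leq(w, v) for w in E)

MAX = {p for p in E if all(not (leq(p, q) and p != q) for q in E)}
M = lambda p: frozenset(q for q in MAX if leq(p, q))
up = lambda p: [q for q in E if leq(p, q)]

def cls(rel, p):                                           # the class [p] under rel
    return frozenset(q for q in E if rel(p, q))

sims = [lambda p, q: M(p) == M(q)]                         # ~_0
for _ in range(3):                                         # ~_1, ~_2, ~_3
    prev = sims[-1]
    sims.append(lambda p, q, prev=prev:
                {cls(prev, z) for z in up(p)} == {cls(prev, z) for z in up(q)})

for n, rel in enumerate(sims):
    # prints the non-trivial ~_n pairs: here ('r', 'x') at every level
    print(n, [(p, q) for p in E for q in E if p < q and rel(p, q)])
```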

Lemma 4.9

Let \(\mathfrak {E}\) be an Esakia space and \(x,y\in \mathfrak {E}\), then \(x {\sim } _{n}y\) entails \(x {\sim } _{l}y\) for all \(l\le n<\omega \).

Proof

By induction on \(n\ge 1\).

Let \(x {\sim } _{1}y\) and suppose \(x {\not \sim } _{0}y\). Then there is, without loss of generality, some \(z\in M_\mathfrak {E}\) such that \(x\le z\) but \(y\nleq z\). Thus for all \(w\ge y\), \(M(z)\ne M(w)\) and so \(z {\not \sim } _{0}w\), which contradicts \(x {\sim } _{1}y\).

Let \(x {\sim } _{n+1}y\) and suppose \(x {\not \sim } _{l}y\) for some \(l\le n\). We may assume \(l\ge 1\): for \(l=0\) it suffices to first derive \(x {\sim } _{1}y\) (by the case \(l=1\)) and then to apply the base case. Then there is, without loss of generality, some \(z\ge x\) such that for all \(w\ge y\), \(z {\not \sim } _{l-1}w\). By the induction hypothesis it follows that \(z {\not \sim } _{n}w\), contradicting \(x {\sim } _{n+1}y\). \(\square \)

The intuitive idea behind the relation \( {\sim } _n\) is that it captures the equivalence of two points up to a certain complexity of polynomials (terms) over regular elements. To make this idea precise we recall the following notion of implication rank. The key idea is that the rank of an element of \(\langle \mathcal {RCU}(\mathfrak {E})\rangle \) should indicate “how hard” it is to obtain such an element from regular ones.

Definition 4.10

(Implication rank).

  1. (a)

    Let \(\phi \) be a polynomial, we define its implication rank \(\textsf{rank}(\phi )\) recursively as follows:

    1. (i)

      If \(\phi \) is a constant or variable, then \(\textsf{rank}(\phi )=0\);

    2. (ii)

      \(\textsf{rank}(\psi \wedge \chi )=\text {max}\{\textsf{rank}(\psi ),\textsf{rank}(\chi ) \};\)

    3. (iii)

      \(\textsf{rank}(\psi \vee \chi )=\text {max}\{\textsf{rank}(\psi ),\textsf{rank}(\chi ) \};\)

    4. (iv)

      \(\textsf{rank}(\psi \rightarrow \chi )=\text {max}\{\textsf{rank}(\psi ),\textsf{rank}(\chi ) \}+1\).

  2. (b)

    Let \(\mathfrak {E}\) be an Esakia space, then for every \(U\in \langle \mathcal {RCU}(\mathfrak {E})\rangle \) we let

    $$\begin{aligned} \textsf{rank}(U)= \text {min}\{\textsf{rank}(\phi ) \mid \phi (V_0,\dots ,V_n)=U \text { for } V_0,\dots ,V_n\in \mathcal {RCU}(\mathfrak {E}) \}. \end{aligned}$$
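The implication rank is straightforward to compute on a syntax tree; a small sketch, with an ad hoc tuple encoding of polynomials, is given below.

```python
# Illustrative sketch: the implication rank of Definition 4.10 on tuple-encoded terms.
def rank(phi):
    if isinstance(phi, str):                      # constants and variables have rank 0
        return 0
    op, left, right = phi                         # ("&", .., ..), ("v", .., ..), ("->", .., ..)
    base = max(rank(left), rank(right))
    return base + 1 if op == "->" else base       # only implication increases the rank

print(rank(("->", ("v", "p", "q"), "r")))         # 1
print(rank(("->", ("->", "p", "q"), "r")))        # 2
```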

The following proposition characterises the relation \( {\sim } _{n}\) over (finite) Esakia spaces and relates it to the implication rank of polynomials over regular elements. This result mirrors Visser’s classical result on bounded bisimulation [33, Thms. 4.7–4.8] and Esakia and Grigolia’s characterisation of finitely generated Heyting algebras [6, 20], but with the key difference that we restrict attention to polynomials over (possibly infinitely many) regular elements.

Proposition 4.11

  1. (i)

    Let \(\mathfrak {E}\) be an Esakia space and let \(H = H_{\mathfrak {E}}\) be its dual Heyting algebra. For all \(x,y \in \mathfrak {E}\) such that \(x {\sim } _{n} y\), \( x \in U \) if and only if \( y \in U\), for all \(U\in \langle H_\lnot \rangle \) with \(\textsf{rank}(U)\le n \).

  2. (ii)

    Let \(\mathfrak {F}\) be a finite poset and let \(H = H_{\mathfrak {F}}\) be its dual Heyting algebra. If for all \(U\in \langle H_\lnot \rangle \) with \(\textsf{rank}(U)\le n \) we have that \( x \in U \) if and only if \( y \in U\), then \(x {\sim } _{n} y\) for all \(x,y \in \mathfrak {F}\).

Proof

We prove the claim (i) by induction on n.

Let \(n=0\) and suppose \(x {\sim } _{0} y\). Let \(U\in \langle H_\lnot \rangle \) be such that \(\textsf{rank}(U)=0\); it follows that \(U\in H_\lnot \). By \(x {\sim } _{0} y\) we have \(M(x)=M(y)\) and so \(M(x)\subseteq U\) if and only if \(M(y)\subseteq U\). Since \(U\in \mathcal {RCU}(\mathfrak {E})\), it follows by Proposition 3.3 that \(x\in U\) if and only if \(y\in U\).

Let \(n=m+1\) and suppose \(x {\sim } _{m+1} y\). If \(\textsf{rank}(U)=k\le m\) then the claim follows by the induction hypothesis together with Lemma 4.9. If \(\textsf{rank}(U)=m+1\) we proceed by induction on the complexity of the polynomial \(\psi \) of least implication rank for which \(U=\psi (V_0,\dots ,V_k)\) where \(V_i\in \mathcal {RCU}(\mathfrak {E})\) for all \(i\le k\).

  • If \(\psi \) is atomic, \(\psi = \alpha \wedge \beta \) or \(\psi =\alpha \vee \beta \), then the claim follows immediately by the induction hypothesis.

  • If \(\psi =\alpha \rightarrow \beta \), let \(V=\alpha (V_1,\dots ,V_k)\), \(W=\beta (V_1,\dots ,V_k)\), clearly \(\textsf{rank}(V)\le m\), \(\textsf{rank}(W)\le m\) and \(U= ((V\setminus W)^{\downarrow })^c\).

We show only one direction as the converse is analogous. Suppose \(y\notin ((V\setminus W)^{\downarrow })^c\), then there is some \(z\ge y\) such that \(z\in V{\setminus } W\). Since \(x {\sim } _{m+1} y\), we have \(x^\uparrow / {\sim } _{m} = y^\uparrow / {\sim } _{m}\) and thus there is some \(w\ge x \) such that \(w {\sim } _{m} z\). By induction hypothesis \(w\in V\setminus W\), showing \(x\notin ((V{\setminus } W)^{\downarrow })^c\).

We next prove item (ii). We let \(\mathfrak {F}\) be a finite poset and we reason by induction.

Let \(n=0\) and suppose \(x {\not \sim } _{0} y\). Without loss of generality we have that \(M(y)\nsubseteq M(x)\), hence by Theorem 3.4 there is a regular upset U such that \(M(x)\subseteq U\) and \(M(y)\nsubseteq U\), whence by Proposition 3.2 we have \(x\in U\) and \(y\notin U\).

Let \(n=m+1\). If \(x {\not \sim } _{m+1} y \) then \( x^\uparrow / {\sim } _{m} \ne y^\uparrow / {\sim } _{m}\), hence (without loss of generality) there is \(z\ge x\) such that for all \(k\ge y\) we have \(z {\not \sim } _{m} k\). By induction hypothesis, for every k, there is either an upset \(V_k\in \langle H_\lnot \rangle \) such that \(z\in V_k\) and \(k\notin V_k\), or an upset \(U_k\in \langle H_\lnot \rangle \) such that \(z\notin U_k\) and \(k\in U_k\), with \(\textsf{rank}(V_k),\textsf{rank}(U_k)\le m\) for all \(k\ge y\). We let

$$\begin{aligned} I_0&=\{k\ge y \mid z\in V_k, k\notin V_k \}\\ I_1&=\{k\ge y \mid z\notin U_k, k\in U_k \}. \end{aligned}$$

We then define \(Z{:}{=}(( \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k )^\downarrow )^c \), i.e. \(Z{:}{=} \bigcap _{k\in I_0} V_k \rightarrow \bigcup _{k\in I_1} U_k\), and thus \(\textsf{rank}(Z)\le m+1\). Clearly Z is an upset and by definition \(Z\in \langle {{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {F})\rangle \).

Now, since for every \(k\in I_0\) we have \(z\in V_k\) and for every \(k\in I_1\) we have \(z\notin U_k\), it follows that \(z\in \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k\), whence \(x\notin Z \).

Then, if \(k\in I_0\) then \(k\notin V_k\), whence \( k\notin \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k\). Otherwise, if \(k\in I_1\) then \(k\in U_k\), whence \( k\notin \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k\). Since \(I_0\cup I_1=y^\uparrow \), this shows that \(k\notin \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k\) for all \(k\ge y\) and so \(y\in (( \bigcap _{k\in I_0} V_k {\setminus } \bigcup _{k\in I_1} U_k )^\downarrow )^c =Z\), finishing our proof. \(\square \)

We remark in passing that the second claim of the previous proposition can be extended to some specific classes of infinite Esakia spaces. For example, it can be generalised to Esakia spaces dual to finitely generated Heyting algebras, or to Esakia spaces whose order reducts are image-finite posets, i.e. posets \(\mathfrak {E}\) where \(|x^\uparrow |<\omega \) for every \(x\in \mathfrak {E}\).

The next proposition follows immediately. Notice that by Esakia duality we treat elements of a finite poset as filters over the dual Heyting algebra and elements of a finite Heyting algebra as upsets of the dual poset.

Proposition 4.12

Let \(\mathfrak {F}\) be a finite poset and let \(H = H_{\mathfrak {F}}\) be its dual Heyting algebra, the following are equivalent for any \(x,y \in \mathfrak {F}\):

  1. (i)

    \(x {\sim } _{\infty } y\);

  2. (ii)

    \(x \cap \langle H_{\lnot } \rangle = y \cap \langle H_{\lnot } \rangle \);

  3. (iii)

    \(\forall U \in \langle H_\lnot \rangle . [ x \in U \iff y \in U ]\).

Proof

Claims (ii) and (iii) are rephrasings of the same condition under Esakia duality. The equivalence of (i) and (iii) follows from Proposition 4.11. \(\square \)

We can use the relation \( {\sim } _\infty \) to supplement Proposition 4.3 and characterise the p-morphisms between finite posets which preserve polynomials of regulars.

Definition 4.13

Let \(h:\mathfrak {E}\rightarrow \mathfrak {E}'\) be a p-morphism, then we say that h preserves polynomials of regulars if \( h^{-1}: \langle \mathcal {RCU}(\mathfrak {E}') \rangle \rightarrow \langle \mathcal {RCU}(\mathfrak {E})\rangle \) is an isomorphism of Heyting algebras.

Proposition 4.14

Let \(h:\mathfrak {F}\rightarrow \mathfrak {F}'\) be a p-morphism between finite posets, then h preserves polynomials of regulars if and only if \(x {\not \sim } _{\infty } y\) entails \(h(x) {\not \sim } _{\infty }h(y)\).

Proof

\((\Rightarrow )\) Suppose \(h:\mathfrak {F}\rightarrow \mathfrak {F}'\) is a p-morphism preserving polynomials of regular elements, and let \(x,y\in \mathfrak {F}\) be such that \(x {\not \sim } _{\infty } y\). Since h preserves polynomials of regulars, the induced map \( h^{-1}: \langle {{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {F}') \rangle \rightarrow \langle {{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {F})\rangle \) is an isomorphism of Heyting algebras. By Proposition 4.12 there is some \(W\in \langle {{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {F})\rangle \) such that \(x\in W\) and \(y\notin W\), and since \(h^{-1}\) is surjective we have \(W=h^{-1}(U)\) for some \(U\in \langle {{\mathcal {U}}}{{\mathcal {R}}}(\mathfrak {F}')\rangle \). Hence we obtain that \(h(x)\in U\) and \(h(y)\notin U\), which by Proposition 4.12 proves our claim. \((\Leftarrow )\) Analogous to the previous direction. \(\square \)

Finally, the next theorem shows that regular Esakia spaces are stable under \( {\sim } _\infty \). Moreover, in the finite setting, it provides us with a second characterisation of finite regular posets.

Theorem 4.15

  1. (i)

    Let \(\mathfrak {E}\) be a regular Esakia space and \(x,y\in \mathfrak {E}\); if \(x {\sim } _\infty y\) then \(x=y\).

  2. (ii)

    Let \(\mathfrak {F}\) be a finite poset such that \(x {\sim } _\infty y\) entails \(x=y\), then \(\mathfrak {F}\) is regular.

In particular, a finite Heyting algebra is regular if and only if its dual poset is stable under \( {\sim } _{\infty }\).

Proof

For claim (i) consider distinct \(x,y\in \mathfrak {E}\), then there is a clopen upset \(U\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) such that (without loss of generality) \(x\in U\) and \(y\notin U\). By regularity, it follows that \(U=\psi (V_0,\dots ,V_k)\) for some regular clopen upsets \(V_i\), \(i\le k\). By Proposition 4.11 it follows immediately that \(x {\not \sim } _\infty y\).

For claim (ii), consider an arbitrary surjective p-morphism \(p:\mathfrak {F}\twoheadrightarrow \mathfrak {F}'\) such that \(p{\upharpoonright } M_\mathfrak {F}\) is a bijection. We first prove by induction on \(n<\omega \) that \(x {\not \sim } _{n} y\) entails \(p(x) {\not \sim } _{n}p(y)\).

If \(x {\not \sim } _{0} y\), then \(M(x)\ne M(y)\). Since \(p{\upharpoonright } M_\mathfrak {F}\) is a bijection, we have by the definition of p-morphism that \(M(p(x))\ne M(p(y))\) and so \(p(x) {\not \sim } _{0} p(y)\).

If \(x {\not \sim } _{n+1} y\), there is without loss of generality some \(z\ge x\) such that \(z {\not \sim } _{n} w\) for all \(w\ge y\). By induction hypothesis we obtain that \(p(z) {\not \sim } _{n} p(w)\) for all \(w\ge y\), and since by the definition of p-morphism every element above \(p(y)\) is of the form \(p(w)\) for some \(w\ge y\), it follows that \(p(x) {\not \sim } _{n+1} p(y)\).

Thus, by Proposition 4.14, p preserves polynomials of regulars. Finally, if \(x\ne y\) then by assumption \(x {\not \sim } _\infty y\), hence by the claim above \(p(x) {\not \sim } _\infty p(y)\) and in particular \(p(x)\ne p(y)\). This shows that p is itself an injection, and thus by Theorem 4.5 we have that \(\mathfrak {F}\) is regular. \(\square \)
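For finite posets the relations \( {\sim } _n\) can be computed by a straightforward partition refinement, which also yields a mechanical test for the characterisation of Theorem 4.15. The following sketch (ours) assumes the recursive description used in the proofs above, namely that \( {\sim } _0\) identifies points seeing the same maximal points and \( {\sim } _{n+1}\) identifies points whose upsets meet the same \( {\sim } _n\)-classes; the encoding of posets as dictionaries of upsets is an assumption of the sketch.

```python
# A finite poset is encoded as a dict sending each point to its upset, i.e. the
# set of points above or equal to it; maximal points have singleton upsets.

def maximal_above(poset, x):
    return frozenset(y for y in poset[x] if len(poset[y]) == 1)

def sim_partition(poset, n):
    """The partition of the poset into ~_n-classes."""
    colour = {x: maximal_above(poset, x) for x in poset}   # ~_0: compare M(x)
    for _ in range(n):                                     # ~_{k+1}: compare upsets modulo ~_k
        colour = {x: frozenset(colour[y] for y in poset[x]) for x in poset}
    classes = {}
    for x, c in colour.items():
        classes.setdefault(c, set()).add(x)
    return frozenset(frozenset(cl) for cl in classes.values())

def is_regular(poset):
    """Theorem 4.15: a finite poset is regular iff the ~_n's eventually separate all points."""
    n, part = 0, sim_partition(poset, 0)
    while True:
        if all(len(cl) == 1 for cl in part):
            return True
        nxt = sim_partition(poset, n + 1)
        if nxt == part:      # the sequence has stabilised without separating all points
            return False
        n, part = n + 1, nxt

# The three-element "fork" (a root below two maximal points) is regular,
# while the two-element chain is not.
print(is_regular({"a": {"a"}, "b": {"b"}, "r": {"r", "a", "b"}}))  # True
print(is_regular({"a": {"a"}, "r": {"r", "a"}}))                   # False
```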

We conclude by noticing that, by our previous observations, an analogous necessary and sufficient characterisation of regular Esakia spaces holds in the restricted cases of image-finite Esakia spaces and of Esakia spaces dual to finitely generated Heyting algebras.

4.3 Varieties of (strongly) regular Heyting algebras

We conclude this section by providing some side results on varieties generated by regular Heyting algebras. It is in fact natural to ask what the intermediate logic of all regular Heyting algebras is. As a matter of fact, it was proven already in [11, Cor. 5.2.3] that \(S(\texttt{IPC}^\lnot )=\texttt{IPC}\), which by [8, Prop. 4.17] means that the variety generated by regular Heyting algebras is the whole variety of Heyting algebras.

Fig. 1 The Esakia spaces \(\mathfrak {R}_0\), \(\mathfrak {R}_1\) and \(\mathfrak {R}_2\).

One could then wonder whether, by looking at suitable subclasses of regular Heyting algebras, one can obtain proper subvarieties of Heyting algebras. For example one could consider, for all \(n<\omega \), the varieties generated by those Heyting algebras which are stable under \( {\sim } _m\) for \(m\ge n\) but not under \( {\sim } _m\) for \(m<n\). In fact, by the next proposition, the sequence of equivalence relations \( {\sim } _n\) does not in general stabilise after finitely many steps.

Proposition 4.16

There is an Esakia space \(\mathfrak {E}\) such that the quotients \(\mathfrak {E}/ {\sim } _{n}\), for \(n<\omega \), are pairwise distinct.

Proof

Consider the poset \(\mathfrak {R}_1\) in Fig. 1 (which adapts [30, Fig. 4.1]) and provide it with the topology induced by the following subbasis

$$\begin{aligned} \{ x^\uparrow \mid x\in \mathfrak {R}_1 \} \cup \{(x^\uparrow )^c \mid x\in \mathfrak {R}_1 \}. \end{aligned}$$

It can be verified that the resulting space is an Esakia space and, moreover, that for every \(n<\omega \), \(a_n {\not \sim } _{n}a_{n+1}\). Therefore, for every \(n<m<\omega \) we have that \(\mathfrak {R}_1/ {\sim } _{n}\) and \(\mathfrak {R}_1/ {\sim } _{m}\) are distinct. \(\square \)

We then introduce the following definition.

Definition 4.17

  1. (i)

    Let \(\mathfrak {E}\) be an Esakia space; we say that it is strongly regular if for all distinct \(x,y\in \mathfrak {E}\) we have that \(x {\not \sim } _{0}y\), i.e. \(M(x)\ne M(y)\).

  2. (ii)

    We say that a Heyting algebra H is strongly regular if its dual Esakia space \(\mathfrak {E}_H\) is strongly regular.

For example, if we provide the poset \(\mathfrak {R}_2\) with a topology analogous to the one we assigned to \(\mathfrak {R}_1\) in Proposition 4.16, we see that \(\mathfrak {R}_2\) makes for a strongly regular Esakia space. Moreover, the map \(p:\mathfrak {R}_2\rightarrow \mathfrak {R}_0\) defined by letting, for each \(i<\omega \), \(p(a_i)=a_i\), \(p(b_i)=b_i\), \(p(c_i)=a_0\), \(p(d_i)=b_0\) and \(p(r)=r\) is a p-morphism from a strongly regular Esakia space onto the dual of the Rieger-Nishimura lattice. Working on this idea we can strengthen the aforementioned result and establish that the variety generated by strongly regular Heyting algebras is the variety of all Heyting algebras. We start by defining the strong regularisation of a finite poset.

Definition 4.18

Let \(\mathfrak {F}\) be a finite poset, the strong regularisation of \(\mathfrak {F}\) is the poset \(\mathfrak {F}^*\) obtained by adding, for each element \(x\in \mathfrak {F}\), a new maximal element \(x^*\) such that \((x^*)^\downarrow =x^\downarrow \cup \{x^*\}\).

It is then possible to prove that every finite poset is a p-morphic image of its strong regularisation, hence showing that strongly regular Heyting algebras generate the whole variety of Heyting algebras.

Theorem 4.19

The variety generated by strongly regular Heyting algebras is \(\textsf{HA}\).

Proof

Since \(\textsf{HA}\) has the finite model property, it follows that if \(\textsf{HA}\nvDash \phi \) there is a finite Heyting algebra H such that \(H\nvDash \phi \). Now, if \(\mathfrak {F}\twoheadrightarrow \mathfrak {E}_H \), it follows by duality that \(H\preceq H_{\mathfrak {F}}\). So, since the validity of formulas is preserved by the variety operations, we obtain that \(H_{\mathfrak {F}}\nvDash \phi \). It is thus sufficient to show that every finite poset is a p-morphic image of a finite strongly regular poset.

To this end, let \(\mathfrak {F}\) be an arbitrary finite poset and \(\mathfrak {F}^*\) be its strong regularisation. Clearly \(\mathfrak {F}^*\) is also finite. Let \(p:\mathfrak {F}^*\rightarrow \mathfrak {F}\) be defined by letting \(p(x)=x\) for all \(x\in \mathfrak {F}\), and \(p(x^*)\in M(x)\) for all \(x^*\in \mathfrak {F}^*{\setminus }\mathfrak {F}\), i.e. p sends each \(x^*\) to some maximal element that it “chooses” from M(x). We check that p is a p-morphism.

  1. (i)

    Forth Condition: If \(x\le y\) for \(x,y\in \mathfrak {F}\) then obviously \(p(x)\le p(y)\). If \(x\le y^*\) then \(x\le y\) and thus \(p(x)\le p(y)\le p(y^*)\).

  2. (ii)

    Back Condition: If \(p(x^*)\le y\) then since \(p(x^*)\) is maximal we immediately have \(y=p(x^*)\), satisfying the condition. Otherwise, if \(p(x)\le y\) and \(x\in \mathfrak {F}\), then we have that \(p(x)\le y=p(y)\) and by definition of p also that \(x\le y\).

This shows that \(p:\mathfrak {F}^*\rightarrow \mathfrak {F}\) is a p-morphism, which completes our proof. \(\square \)
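To illustrate the construction, here is a small sketch (ours) of the strong regularisation and of the projection p used in the proof; the dictionary-of-upsets encoding and the particular choice function for \(p(x^*)\) are assumptions of the sketch.

```python
# A finite poset is encoded as a dict sending each point to its upset.

def strong_regularisation(poset):
    """Definition 4.18: add a fresh maximal point x* above (exactly) the points below x."""
    star = {x: ("*", x) for x in poset}
    result = {}
    for x, up in poset.items():
        result[x] = set(up) | {star[y] for y in up}   # x now also sees y* for every y >= x
        result[star[x]] = {star[x]}                   # x* is maximal
    return result

def projection(poset):
    """The map p : F* ->> F from the proof of Theorem 4.19: the identity on F, and
    each x* is sent to some maximal point chosen from M(x)."""
    p = {}
    for x, up in poset.items():
        p[x] = x
        p[("*", x)] = next(y for y in up if len(poset[y]) == 1)
    return p

def is_p_morphism(dom, cod, p):
    forth = all(p[y] in cod[p[x]] for x in dom for y in dom[x])
    back = all(any(p[y] == z for y in dom[x]) for x in dom for z in cod[p[x]])
    return forth and back

# Example: the two-element chain and its strong regularisation.
chain = {"a": {"a"}, "r": {"r", "a"}}
print(is_p_morphism(strong_regularisation(chain), chain, projection(chain)))  # True
```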

5 Cardinality of \(\Lambda (\textsf{HA}^{\uparrow }\!\!)\)

In this section we apply the characterisation of regular posets of Section 4 to show that the lattice of \(\texttt{DNA}\)-varieties \(\Lambda (\textsf{HA}^{\uparrow })\) and the lattice of \(\texttt{DNA}\)-logics \(\Lambda (\texttt{IPC}^\lnot )\) have the cardinality of the continuum. This answers a question raised in [8, 30] and complements the previous result that the sublattice of \(\texttt{DNA}\)-logics extending \(\texttt{InqB}\) is dually isomorphic to \(\omega +1\). Inquisitive logic thus occupies a special position in the lattice of negative variants, as it has only countably many extensions.

5.1 Jankov’s formulae

To prove the uncountability of \(\Lambda (\textsf{HA}^{\uparrow })\) we adapt to our setting the notion and the method of Jankov’s formulae. Jankov’s formulae were introduced in [23, 24] in order to show that the lattice of intermediate logics has the cardinality of the continuum. We recall how to adapt Jankov’s formulae to the setting of \(\texttt{DNA}\)-logics and \(\texttt{DNA}\)-varieties [8]. We write \(\mathsf {HA_{RFSI}}\) for the class of all regular, finite, subdirectly irreducible Heyting algebras.

Definition 5.1

Let \(H\in \mathsf {HA_{RFSI}}\), let 0 be the least element of H and s its second greatest element.

  • The Jankov representative of \(x\in H\) is a formula \(\psi _x\) defined as follows:

    1. (i)

      If \(x\in H_\lnot \), then \(\psi _x=p_x\), where \(p_x\in \texttt{AT}\);

    2. (ii)

      If \(x=\delta _H(a_0,...,a_n)\) with \(a_0,...,a_n\in H_\lnot \), then \(\psi _x=\delta (p_{a_0},...,p_{a_n})\).

  • The Jankov \(\texttt{DNA}\)-formula \(\chi ^{\texttt{DNA}}(H)\) is defined as follows:

    $$\begin{aligned} \chi ^{\texttt{DNA}}(H){:}{=} \alpha \rightarrow \psi _s, \end{aligned}$$

    where \(\alpha \) is the following formula:

    $$\begin{aligned} \alpha = (\psi _0\leftrightarrow \bot ) \;&\wedge \;\bigwedge \{(\psi _a\wedge \psi _b) \leftrightarrow \psi _{a\wedge b}\mid a,b\in H \} \; \\ {}&\wedge \; \bigwedge \{(\psi _a\vee \psi _b) \leftrightarrow \psi _{a\vee b}\mid a,b\in H \} \; \\ {}&\wedge \; \bigwedge \{(\psi _a\rightarrow \psi _b) \leftrightarrow \psi _{a\rightarrow b}\mid a,b\in H \}. \end{aligned}$$
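For illustration, the following sketch (ours, not taken from [8]) assembles \(\chi ^{\texttt{DNA}}(H)\) as a formula string from the operation tables of a finite algebra; it assumes that the Jankov representatives \(\psi _x\) have already been fixed and are supplied as strings.

```python
from itertools import product

def jankov_dna_formula(elements, meet, join, imp, bottom, s, psi):
    """chi(H) = alpha -> psi_s as in Definition 5.1.  `meet`, `join` and `imp` are
    dictionaries keyed by pairs of elements, `bottom` is the least element, `s` the
    second greatest element, and `psi` maps each element to its Jankov representative."""
    def iff(a, b):
        return f"(({a}) <-> ({b}))"
    conjuncts = [iff(psi[bottom], "bot")]
    for a, b in product(elements, repeat=2):
        conjuncts.append(iff(f"({psi[a]}) & ({psi[b]})", psi[meet[a, b]]))
        conjuncts.append(iff(f"({psi[a]}) | ({psi[b]})", psi[join[a, b]]))
        conjuncts.append(iff(f"({psi[a]}) -> ({psi[b]})", psi[imp[a, b]]))
    alpha = " & ".join(conjuncts)
    return f"({alpha}) -> ({psi[s]})"
```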

As it is generally clear from the context that we are dealing with the \(\texttt{DNA}\)-version of Jankov’s formulae, we write just \(\chi (H)\) for the Jankov \(\texttt{DNA}\)-formula of H. We recall the following result from [8, Theorem 4.31]. For any \(A,B\in \textsf{HA}\), we write \( A\le B \) if \(A\in {{\mathbb {H}}}{{\mathbb {S}}}(B) \).

Theorem 5.2

Let \(A\in \mathsf {HA_{RFSI}}\) and \(B\in \textsf{HA}\) then \(B\nvDash ^\lnot \chi (A) \text { iff } A\le B.\)

The next proposition adapts to the context of \(\texttt{DNA}\)-logics Jankov’s classical result on intermediate logics and Heyting algebras.

Proposition 5.3

Let \({\mathcal {C}}\) be an \(\le \)-antichain of finite, regular, subdirectly irreducible Heyting algebras, then for all \({\mathcal {I}},{\mathcal {J}}\subseteq {\mathcal {C}}\) such that \({\mathcal {I}}\ne {\mathcal {J}}\) we have that \(Log^\lnot ({\mathcal {I}})\ne Log^\lnot ({\mathcal {J}})\).

Proof

Since \({\mathcal {I}}\ne {\mathcal {J}}\) there is without loss of generality some \(H\in {\mathcal {I}}\setminus {\mathcal {J}}\). By Theorem 5.2 it follows that \(H\nvDash ^\lnot \chi (H)\), thus \(\chi (H)\notin Log^\lnot ({\mathcal {I}})\). Since \({\mathcal {C}}\) is an antichain, \(H\nleq K\) for all \(K\in {\mathcal {J}}\), which by Theorem 5.2 gives \(K\vDash ^\lnot \chi (H)\), whence \(\chi (H)\in Log^\lnot ({\mathcal {J}})\). \(\square \)

To prove that the lattice of \(\texttt{DNA}\)-logics has power continuum it is thus sufficient to exhibit an infinite antichain of finite, regular, subdirectly irreducible Heyting algebras. Perhaps surprisingly, we can use examples of antichains which are standard in the literature, as they turn out to consist of regularly generated algebras.

5.2 Antichain \(\Delta _0\)

We start by introducing the antichain \(\Delta _0\) (for this example see e.g. [6, p. 71]). This is an antichain of posets which are all regular but which, as we shall see, contains for every \(n<\omega \) infinitely many elements that are not stable under \( {\sim } _n\). For every \(n<\omega \) we define the poset \(\mathfrak {F}_n\), with domain

$$\begin{aligned} \textsf{dom}(\mathfrak {F}_n)=\{ r\} \cup \{a_m \mid m\le n \} \cup \{b_m \mid m\le n \} \cup \{c_m \mid m\le n \}; \end{aligned}$$

and such that

  • \(r\le a_i, b_i, c_i \text { for all } i\le n;\)

  • \(a_i\le a_j \text { and } c_i\le c_j \text { whenever } j\le i;\; a_i\le b_j \text { and } c_i\le b_j \text { whenever } j< i;\)

  • \(b_i\le a_j \text { and } b_i\le c_j \text { whenever } j< i.\)

Fig. 2 The antichain \(\Delta _0\)

We let \(\Delta _0{:}{=}\{ \mathfrak {F}_n \mid n<\omega \}\) be the set of all such posets. The following result follows by noticing that we can perform neither \(\alpha \)- nor \(\beta \)-reductions on any \(\mathfrak {F}_n\) without collapsing the maximal elements; we refer the reader to [6, Lem. 3.4.19] for a proof of this fact.

Proposition 5.4

The set of Heyting algebras dual to \(\Delta _0\) is a \(\le \)-antichain.

Since every poset in \(\Delta _0\) is finite and rooted, it follows immediately by Esakia duality that the dual Heyting algebras are finite and subdirectly irreducible. In order to establish our result we also need to make sure that every Heyting algebra we are dealing with is regularly generated. This follows from the characterisation of finite regular posets which we provided in Section 4. We recall that, if \(\mathfrak {F}\) is a finite poset, then the depth of an element \(x\in \mathfrak {F}\), written \(\textsf{depth}(x)\), is defined as the maximum size of a chain in \(x^\uparrow \setminus \{x\}\). Clearly the depth of a maximal element is 0 and, in each \(\mathfrak {F}_n\), the depth of the root is \(n+1\).

Proposition 5.5

For every \(n<\omega \), the poset \(\mathfrak {F}_n\) is regular. In particular, \(\mathfrak {F}_n/ {\sim } _n=\mathfrak {F}_n\) for every \(n<\omega \).

Proof

Consider \(\mathfrak {F}_n\) for some \(n<\omega \). We prove by induction on \(\textsf{depth}(x)\) that \([x]_k = \{x \} \) whenever \(k\ge \textsf{depth}(x)-1\) if \(\textsf{depth}(x)>0\), and whenever \(k\ge 0\) otherwise.

Let \(\textsf{depth}(x)\le 1\). Without loss of generality we let \(x=a_{1}\). Then, given any \(y\in \mathfrak {F}_n\) such that \(y\ne a_1\), we clearly have \(M(a_1)\ne M(y)\), showing \([a_1]_0=\{a_1\}\).

Let \(\textsf{depth}(x)=m+1< n+1\). Without loss of generality we let \(x=a_{m+1}\), and by induction hypothesis \([a_{l}]_{k}=\{a_{l}\}\), \([b_{l}]_{k}=\{b_{l}\}\) and \([c_{l}]_{k}=\{c_{l}\}\) whenever \(l \le m\) and \(k\ge l-1\). Now, for all \(y\in \mathfrak {F}_n\) such that \(x\ne y\), if \(\textsf{depth}(y)\le m\) then \([y]_{m}=\{y\}\), thus \([y]_{k}=\{y\}\) for all \(k\ge m\). Otherwise, if \(\textsf{depth}(y)>m\) then, since \(y\ne a_{m+1}\), it follows that \(y\le c_{m}\). Since \([c_m]_{m-1}=\{c_m\}\), this proves \(x {\not \sim } _{m}y\) and thus, by Lemma 4.9, \(x {\not \sim } _{k}y\) for all \(k\ge m\). It follows \([x]_{k} =\{x\}\) for all \(k\ge m=\textsf{depth}(x)-1\).

Let \(\textsf{depth}(x)=n+1\). The only point with depth \(n+1\) in \(\mathfrak {F}_n\) is the root r and clearly it is the only point in \([r]_{n}\).

It follows that \([x]_n=\{x\}\) for all \(x\in \mathfrak {F}_n\) and thus \(\mathfrak {F}_n/ {\sim } _n =\mathfrak {F}_n\). \(\square \)

Once we know that every poset from the antichain \(\Delta _0\) above is regular, it is then straightforward to reason as in Jankov’s original proof and show that the cardinality of the lattices of \(\texttt{DNA}\)-logics and \(\texttt{DNA}\)-varieties is exactly \(2^{\aleph _0}\). We say that a finite Heyting algebra has width (or depth) n if its dual poset has width (or depth) n.

Theorem 5.6

There are continuum-many \(\texttt{DNA}\)-logics and \(\texttt{DNA}\)-varieties. In particular, there are continuum-many \(\texttt{DNA}\)-varieties generated by Heyting algebras of width 3.

Proof

Let \({\mathcal {A}}_0\) be the set of Heyting algebras dual to the posets in \(\Delta _0\). Since every poset in \(\Delta _0\) has width 3, the same holds for the dual Heyting algebras. By Proposition 5.4, \({\mathcal {A}}_0\) is an infinite \(\le \)-antichain of finite, subdirectly irreducible Heyting algebras. Moreover, by Theorem 4.15 and Proposition 5.5 we also have that each Heyting algebra in \({\mathcal {A}}_0\) is regularly generated. By Proposition 5.3 we have \(Log^\lnot ({\mathcal {I}})\ne Log^\lnot ({\mathcal {J}})\) whenever \({\mathcal {I}},{\mathcal {J}}\subseteq {\mathcal {A}}_0\) and \({\mathcal {I}}\ne {\mathcal {J}}\). By duality, we also have \({\mathbb {D}}({\mathcal {I}})\ne {\mathbb {D}}({\mathcal {J}})\) whenever \({\mathcal {I}},{\mathcal {J}}\subseteq {\mathcal {A}}_0\) and \({\mathcal {I}}\ne {\mathcal {J}}\), where \({\mathbb {D}}({\mathcal {C}})\) denotes the \(\texttt{DNA}\)-variety generated by \({\mathcal {C}}\). Since \(|\Delta _0|=\omega \), there are \(2^{\aleph _0}\) subsets of \({\mathcal {A}}_0\) and our result follows immediately. \(\square \)

We also remark that, given the fact that \(\texttt{DNA}\)-varieties are in one-to-one correspondence with varieties of Heyting algebras generated by regular algebras, this also shows the existence of continuum-many varieties of Heyting algebras generated by regular Heyting algebras.

5.3 Antichain \(\Delta _1\)

Interestingly, we can also apply another standard example of an infinite \(\le \)-antichain to our context, originally due to Kuznetsov [26], and show that there are continuum-many subvarieties of Heyting algebras which are generated by strongly regular Heyting algebras.

We recall the following construction and refer the reader to [4, Section 3] for more details. For every \(1<n<\omega \), we let \(\mathfrak {G}_n\) be the poset with domain

$$\begin{aligned} \textsf{dom}(\mathfrak {G}_n)=\{ r\} \cup \{a_m \mid m\le n \} \cup \{b_m\mid m\le n \} \end{aligned}$$

and such that

  • \(r\le a_i, b_i, \text { for all } i\le n;\)

  • \(a_0\le b_j, \text { for all } 0\le j< n;\)

  • \(a_n\le b_j, \text { for all } 0<j\le n;\)

  • \(a_i\le b_j, \text { for all } 0< i< n \text { and } i\ne j.\)

We let \(\Delta _1\,{:}{=}\,\{ \mathfrak {G}_n\mid 1<n<\omega \}\) be the set of all such posets. One can check that, whenever we collapse two maximal points in a frame \(\mathfrak {G}_{n+1}\), the result is that every point of depth 1 is related to every point of depth 0, which is not the case in \(\mathfrak {G}_n\). We thus obtain the following proposition, whose detailed proof is left to the reader.

Proposition 5.7

The set of Heyting algebras dual to \(\Delta _1\) is a \(\le \)-antichain.

Fig. 3 The antichain \(\Delta _1\)

As the posets in \(\Delta _1\) grow in width rather than depth, we have that every \(\mathfrak {G}_n\) is stable already under the quotient \( {\sim } _0\), i.e. \(\mathfrak {G}_n/ {\sim } _0=\mathfrak {G}_n\) for all \(1<n<\omega \). So every poset in \(\Delta _1\) is actually strongly regular.

Proposition 5.8

Every poset \(\mathfrak {G}_n\) is strongly regular.

Proof

By construction, it is straightforward to check that any two different \(x,y\in \mathfrak {G}_n\) see different maximal elements, i.e. \(M(x)\ne M(y)\). \(\square \)
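The claim can also be checked mechanically on small instances. The following sketch (ours) builds \(\mathfrak {G}_n\) from the clauses above and verifies that distinct points see distinct sets of maximal points; it uses the same dictionary-of-upsets encoding as the earlier sketches.

```python
def antichain_delta1(n):
    """The poset G_n of Section 5.3 (for n > 1), encoded as a dict of upsets."""
    a = [f"a{m}" for m in range(n + 1)]
    b = [f"b{m}" for m in range(n + 1)]
    up = {x: {x} for x in a + b + ["r"]}
    up["r"].update(a + b)                                          # r <= a_i, b_i
    up["a0"].update(b[j] for j in range(n))                        # a_0 <= b_j for j < n
    up[f"a{n}"].update(b[j] for j in range(1, n + 1))              # a_n <= b_j for 0 < j <= n
    for i in range(1, n):
        up[f"a{i}"].update(b[j] for j in range(n + 1) if j != i)   # a_i <= b_j for j != i
    return up

def maximal_above(poset, x):
    return frozenset(y for y in poset[x] if len(poset[y]) == 1)

def is_strongly_regular(poset):
    """Definition 4.17: distinct points see distinct sets of maximal points."""
    views = [maximal_above(poset, x) for x in poset]
    return len(set(views)) == len(views)

print(all(is_strongly_regular(antichain_delta1(n)) for n in range(2, 8)))  # True
```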

By the same reasoning as above, we immediately obtain a second uncountable family of \(\texttt{DNA}\)-varieties and \(\texttt{DNA}\)-logics.

Theorem 5.9

There are continuum many \(\texttt{DNA}\)-varieties generated by strongly regular Heyting algebras of depth 3.

Proof

The proof is analogous to that of Theorem 5.6, using the fact that the posets in \(\Delta _1\) are strongly regular and have depth 3. \(\square \)

This also means that there are continuum-many varieties of Heyting algebras generated by strongly regular Heyting algebras of depth 3.

6 Applications to logic

In this section we consider some applications of regular Heyting algebras to logic. As we saw in Section 2.4, regular Heyting algebras play an important role in the algebraic semantics of \(\texttt{DNA}\)-logics. We employ Esakia duality to adapt these results to the topological setting, thereby obtaining a topological semantics for \(\texttt{DNA}\)-logics. Secondly, we consider the case of dependence logic and we extend this topological semantics to that setting as well. We start by adapting the notion of \(\texttt{DNA}\)-variety to the context of Esakia spaces.

6.1 \(\texttt{DNA}\)-varieties of Esakia spaces

In analogy with the algebraic case, we define a special family of varieties of Esakia spaces, closed under an additional operation which preserves the structure of the regular clopen upsets.

Definition 6.1

(\(\texttt{DNA}\)-variety of Esakia spaces). A \(\texttt{DNA}\)-variety of Esakia spaces \({\mathcal {E}}\) is a variety of Esakia spaces additionally closed under the following operation:

$$\begin{aligned} {\mathcal {E}}^{M} = \{ \mathfrak {E} \mid \exists \mathfrak {F}\in {\mathcal {E}}.\, \exists f:\mathfrak {E} \twoheadrightarrow \mathfrak {F}.\; f{\upharpoonright } M_{\mathfrak {E}} \text { is a homeomorphism of Stone spaces} \}. \end{aligned}$$

Given a class \({\mathcal {E}}\) of Esakia spaces, we write \({\mathbb {S}}({\mathcal {E}})\) for the smallest \(\texttt{DNA}\)-variety of Esakia spaces containing \({\mathcal {E}}\) and we denote by \(\Lambda (\textsf{Esa}^{M})\) the sublattice of \(\Lambda (\textsf{Esa})\) consisting of \(\texttt{DNA}\)-varieties. When we restrict Esakia duality to \(\texttt{DNA}\)-varieties of Heyting algebras and \(\texttt{DNA}\)-varieties of Esakia spaces, we immediately obtain the following theorem.

Theorem 6.2

The maps \(\overline{{{\mathcal {P}}}{{\mathcal {F}}}}\) and \(\overline{{\mathcal {C}}{\mathcal {U}}}\) restricted to the sublattices \(\Lambda (\textsf{HA}^{\uparrow })\) and \(\Lambda (\textsf{Esa}^{M})\) induce an isomorphism of the two lattices.

Proof

Notice that, given H and K Heyting algebras, the two conditions

  1. (i)

    \( K_{\lnot } = H_{\lnot } \text { and } K \preceq H \);

  2. (ii)

    \( \exists f: \mathfrak {E}_H \twoheadrightarrow \mathfrak {E}_K.\; f^{-1}: \mathcal {RCU}(\mathfrak {E}_K) \rightarrow \mathcal {RCU}(\mathfrak {E}_H)\) is an isomorphism of Boolean algebras;

are dual to each other. By Proposition 4.3 we have that (ii) is equivalent to the following claim

  1. (iii)

    \(\exists f: \mathfrak {E}_H \twoheadrightarrow \mathfrak {E}_K.\; f{\upharpoonright } M_{\mathfrak {E}_H} \text { is a homeomorphism of Stone spaces}\).

Given this, we have that \(\texttt{DNA}\)-varieties of Heyting algebras are in one-to-one correspondence with \(\texttt{DNA}\)-varieties of Esakia spaces, from which it is immediate to verify the main statement. \(\square \)

Finally, we notice that in [8] we proved several results concerning \(\texttt{DNA}\)-varieties of Heyting algebras, which are straightforward to adapt to \(\texttt{DNA}\)-varieties of Esakia spaces. In particular, we recall the following Birkhoff-style theorem. We say that a class of Heyting algebras \({\mathcal {C}}\) has the \(\texttt{DNA}\)-finite model property if whenever \({\mathcal {C}}\nvDash ^\lnot \phi \) there is some finite \(H\in {\mathcal {C}}\) such that \(H\nvDash ^\lnot \phi \).

Theorem 6.3

  1. (i)

    Every \(\texttt{DNA}\)-variety of Heyting algebras \({\mathcal {X}}\) is generated by its collection of regular, subdirectly irreducible elements, i.e. \({\mathcal {X}}= {\mathbb {D}}({\mathcal {X}}_{RSI}) \).

  2. (ii)

    If a \(\texttt{DNA}\)-variety \({\mathcal {X}}\) has the \(\texttt{DNA}\)-finite model property, it is generated by its finite, regular, subdirectly irreducible elements, i.e. \({\mathcal {X}}={\mathbb {D}}({\mathcal {X}}_{RFSI}) \).

Using Esakia duality it is immediate to translate this result to \(\texttt{DNA}\)-varieties of Esakia spaces. We recall that a Heyting algebra is subdirectly irreducible if and only if its dual Esakia space is strongly rooted, i.e. if it has a least element r such that \(\{ r \}\) is open (see [18, p. 152] and [3, Theorem 2.9]). A \(\texttt{DNA}\)-variety of Esakia spaces has the \(\texttt{DNA}\)-finite model property if its dual \(\texttt{DNA}\)-variety of Heyting algebras has this property.

Corollary 6.4

  1. (i)

    Every \(\texttt{DNA}\)-variety of Esakia spaces \({\mathcal {E}}\) is generated by its collection of regular, strongly rooted elements, i.e. \({\mathcal {E}}= {\mathbb {S}}({\mathcal {E}}_{RSI}) \).

  2. (ii)

    If a \(\texttt{DNA}\)-variety \({\mathcal {E}}\) has the \(\texttt{DNA}\)-finite model property, then it is generated by its rooted, finite, regular elements, i.e. \({\mathcal {E}}={\mathbb {S}}({\mathcal {E}}_{RFR}) \).

6.2 \(\texttt{DNA}\)-logics and inquisitive logic

We introduce a topological semantics for \(\texttt{DNA}\)-logics that mirrors their algebraic semantics. The results of Section 3 suggest defining a semantics for \(\texttt{DNA}\)-logics in terms of Esakia spaces and regular clopen upsets.

Given an Esakia space \(\mathfrak {E}\) we call a function \(\mu :\texttt{AT}\rightarrow \mathcal {RCU}(\mathfrak {E})\) a \(\texttt{DNA}\)-valuation over \(\mathfrak {E}\). For \(\mu \) a \(\texttt{DNA}\)-valuation, define the interpretation of formulas over \(\mathfrak {E}\) as follows:

$$\begin{aligned} \begin{array}{llll} \llbracket p \rrbracket ^{\mathfrak {E},\mu } &{}= \mu (p) &{}\llbracket \bot \rrbracket ^{\mathfrak {E},\mu } &{}= \emptyset \\ \llbracket \top \rrbracket ^{\mathfrak {E},\mu } &{}= \mathfrak {E} &{}\llbracket \phi \wedge \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \cap \llbracket \psi \rrbracket ^{\mathfrak {E},\mu } \\ \llbracket \phi \rightarrow \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \overline{ \llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \setminus \llbracket \psi \rrbracket ^{\mathfrak {E},\mu } } &{}\llbracket \phi \vee \psi \rrbracket ^{\mathfrak {E},\mu } &{}= \llbracket \phi \rrbracket ^{\mathfrak {E},\mu } \cup \llbracket \psi \rrbracket ^{\mathfrak {E},\mu }. \end{array} \end{aligned}$$

The only difference with the definition of Section 2.3 is that in the atomic case the interpretation is restricted to \(\mathcal {RCU}(\mathfrak {E})\). Notice, however, that not all formulas have to be interpreted over the set \(\mathcal {RCU}(\mathfrak {E})\). For example, it is not true in general that the union of two regular sets is regular, and in fact \(\llbracket p \vee q \rrbracket ^{\mathfrak {E},\mu }\) may be a non-regular element of \({\mathcal {C}}{\mathcal {U}}(\mathfrak {E})\). We then say that a formula \(\phi \) is \(\texttt{DNA}\)-valid on a space \(\mathfrak {E}\) (\(\mathfrak {E} \vDash ^{\lnot } \phi \)) if \(\llbracket \phi \rrbracket ^{\mathfrak {E},\mu } = \mathfrak {E}\) for every \(\texttt{DNA}\)-valuation \(\mu \). We say that a formula \(\phi \) is \(\texttt{DNA}\)-valid on a class of spaces \({\mathcal {E}}\) (\({\mathcal {E}}\vDash ^{\lnot } \phi \)) if it is \(\texttt{DNA}\)-valid on every element of the class. We write \(Log^\lnot ({\mathcal {E}})\) for the set of \(\texttt{DNA}\)-valid formulas of \({\mathcal {E}}\) and we write \(Space^{\lnot }(\texttt{L}) \) for the \(\texttt{DNA}\)-variety of Esakia spaces which validate all formulas in \(\texttt{L}\).
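On finite posets this semantics can be checked by brute force. The following sketch (ours) enumerates the regular upsets of a finite poset, using the description of regular elements via maximal points from Section 3, and tests \(\texttt{DNA}\)-validity by quantifying over all \(\texttt{DNA}\)-valuations of the atoms occurring in a formula; the tuple encoding of formulas is an assumption of the sketch.

```python
from itertools import combinations, product

# A finite poset is a dict sending each point to its upset.  Formulas are nested
# tuples: ("var", "p"), ("bot",), ("and", f, g), ("or", f, g), ("imp", f, g).

def maximal_above(poset, x):
    return frozenset(y for y in poset[x] if len(poset[y]) == 1)

def regular_upsets(poset):
    """Regular upsets of a finite poset: the sets {x : M(x) included in S},
    for S a set of maximal points."""
    maxima = [x for x in poset if len(poset[x]) == 1]
    selections = [frozenset(c) for r in range(len(maxima) + 1)
                  for c in combinations(maxima, r)]
    return [frozenset(x for x in poset if maximal_above(poset, x) <= S)
            for S in selections]

def evaluate(poset, phi, mu):
    op = phi[0]
    if op == "var":
        return mu[phi[1]]
    if op == "bot":
        return frozenset()
    left, right = evaluate(poset, phi[1], mu), evaluate(poset, phi[2], mu)
    if op == "and":
        return left & right
    if op == "or":
        return left | right
    if op == "imp":   # points all of whose successors in the antecedent are in the consequent
        return frozenset(x for x in poset
                         if all(y not in left or y in right for y in poset[x]))
    raise ValueError(op)

def atoms(phi):
    if phi[0] == "var":
        return {phi[1]}
    return set().union(*(atoms(sub) for sub in phi[1:]))

def dna_valid(poset, phi):
    regs, props = regular_upsets(poset), sorted(atoms(phi))
    return all(evaluate(poset, phi, dict(zip(props, choice))) == frozenset(poset)
               for choice in product(regs, repeat=len(props)))

# Example: p \/ ~p is DNA-valid on the two-element chain, although it is not IPC-valid.
chain = {"a": {"a"}, "r": {"r", "a"}}
print(dna_valid(chain, ("or", ("var", "p"), ("imp", ("var", "p"), ("bot",)))))  # True
```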

Since \(\texttt{DNA}\)-valuations over Esakia spaces correspond through Esakia duality exactly to negative valuations over their dual Heyting algebras, the algebraic completeness of \(\texttt{DNA}\)-logics immediately establishes the completeness of this topological semantics.

Theorem 6.5

Let \(\texttt{L}\) be a \(\texttt{DNA}\)-logic, \({\mathcal {E}}\) a \(\texttt{DNA}\)-variety of Esakia Spaces, \(\phi \) a formula and \(\mathfrak {E}\) an Esakia Space. Then we have the following:

$$\begin{aligned} \phi \in \texttt{L}&\Longleftrightarrow Space^{\lnot }(\texttt{L})\vDash ^\lnot \phi ; \\ \mathfrak {E}\in {\mathcal {E}}&\Longleftrightarrow \mathfrak {E}\vDash ^\lnot Log^{\lnot }({\mathcal {E}}). \end{aligned}$$

We remark that this also delivers a topological semantics for inquisitive logic which differs from the one previously studied in [7], which is instead based on UV-spaces. Since inquisitive logic \(\texttt{InqB}\) is the negative variant of any intermediate logic between \(\texttt{ND}\) and \(\texttt{ML}\), the previous result shows that inquisitive logic also admits a topological semantics based on Esakia spaces, which mirrors its algebraic semantics based on regular Heyting algebras.

Corollary 6.6

Let L be any intermediate logic between \(\texttt{ND}\) and \(\texttt{ML}\), then \(\phi \in \texttt{InqB}\) if and only if \( Space^{\lnot }(L)\vDash ^\lnot \phi \).

6.3 Dependence logic

We conclude by showing how the previous topological semantics can be extended to dependence logic, which, in its propositional version, can be seen as an extension of inquisitive logic in a larger signature.

Originally, dependence logic was introduced by Väänänen [32] as an extension of first-order logic with dependence atoms. A key aspect of dependence logic is that it is formulated in so-called team-semantics, which was introduced by Hodges in [21]. In its propositional version, which was developed by Yang and Väänänen in [35, 36], teams are simply sets of propositional assignments. It was soon observed in Yang’s thesis [34]—see also [12, 35]—that the team semantics of propositional dependence logic actually coincides with the state-based semantics of inquisitive logic, thus establishing an important connection between dependence and inquisitive logic.

We explore here a further aspect of this connection and we illustrate the relation between propositional dependence logic and regular Esakia spaces. In particular, we will adapt the completeness proof of Theorem 6.5 so as to obtain a sound and complete topological semantics for dependence logic.

6.3.1 Syntax and semantics

Propositional dependence logic can be seen as an extension of inquisitive logic in a larger vocabulary \({\mathcal {L}}_{\texttt{IPC}}^\otimes \), which adds the so-called tensor operator to the signature of intuitionistic logic. Formulas of dependence logic are thus defined recursively as follows:

$$\begin{aligned} \phi \,{:}{:}{=} \, p \mid \bot \mid \phi \wedge \phi \mid \phi \otimes \phi \mid \phi \vee \phi \mid \phi \rightarrow \phi , \end{aligned}$$

where \(p\in \texttt{AT}\). We define \(\lnot \alpha {:}{=}\alpha \rightarrow \bot \) and we say that a formula is standard if it does not contain any instance of \(\vee \). We provide this syntax with the usual team semantics. We recall that a propositional assignment is a map \(w:\texttt{AT}\rightarrow 2\) and that a team is a set of assignments \(t\in \wp ( 2^\texttt{AT})\). The team semantics of dependence logic is then defined as follows.

Definition 6.7

(Team Semantics). The notion of a formula \(\phi \in {\mathcal {L}}_{\texttt{IPC}}^\otimes \) being true in a team \(t\in \wp ({2^\texttt{AT}})\) is defined as follows:

$$\begin{aligned} \begin{array}{ll} t\vDash p &{} {}\forall w\in t \ ( w(p)=1) \\ t\vDash \bot &{} t=\emptyset \\ t\vDash \psi \vee \chi &{} t\vDash \psi \text { or } t\vDash \chi \\ t\vDash \psi \wedge \chi &{} t\vDash \psi \text { and } t\vDash \chi \\ t\vDash \psi \otimes \chi &{} \exists s,r\subseteq t \text { such that } s\cup r = t \text { and } s\vDash \psi , r\vDash \chi \\ t\vDash \psi \rightarrow \chi &{} \forall s \ ( \text {if }s\subseteq t \text { and } s\vDash \psi \text { then } s\vDash \chi ). \end{array} \end{aligned}$$
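For a finite set of atoms the clauses above can be executed directly. The following sketch (ours) evaluates formulas, given as nested tuples, over teams of assignments, and checks validity by brute force over all teams on the relevant atoms; the encoding and the helper names are assumptions of the sketch.

```python
from itertools import combinations, product

# Assignments are encoded as frozensets of (atom, value) pairs and a team is a
# frozenset of assignments.  Formulas are nested tuples with connectives
# "var", "bot", "and", "or" (inquisitive disjunction), "tensor" and "imp".

def subteams(team):
    return [frozenset(c) for r in range(len(team) + 1) for c in combinations(team, r)]

def satisfies(team, phi):
    """The team semantics of Definition 6.7."""
    op = phi[0]
    if op == "var":
        return all(dict(w)[phi[1]] == 1 for w in team)
    if op == "bot":
        return len(team) == 0
    if op == "and":
        return satisfies(team, phi[1]) and satisfies(team, phi[2])
    if op == "or":
        return satisfies(team, phi[1]) or satisfies(team, phi[2])
    if op == "tensor":   # some cover t = s u r with s |= psi and r |= chi
        return any(s | r == team and satisfies(s, phi[1]) and satisfies(r, phi[2])
                   for s in subteams(team) for r in subteams(team))
    if op == "imp":
        return all(not satisfies(s, phi[1]) or satisfies(s, phi[2])
                   for s in subteams(team))
    raise ValueError(op)

def valid(phi, atom_list):
    assignments = [frozenset(zip(atom_list, bits))
                   for bits in product((0, 1), repeat=len(atom_list))]
    return all(satisfies(team, phi) for team in subteams(frozenset(assignments)))

# The tensor excluded middle is valid, while the split disjunction p \/ ~p is not.
neg_p = ("imp", ("var", "p"), ("bot",))
print(valid(("tensor", ("var", "p"), neg_p), ["p"]))  # True
print(valid(("or", ("var", "p"), neg_p), ["p"]))      # False
```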

We define propositional dependence logic as the set \(\texttt{InqB}^\otimes = Log( \wp (2^\texttt{AT}))\) of all formulas of \({\mathcal {L}}_{\texttt{IPC}}^\otimes \) valid under team semantics. We notice that inquisitive logic has exactly the same semantics but it is formulated in the restricted language \({\mathcal {L}}_{\texttt{IPC}}\), which lacks the tensor disjunction \(\otimes \), thus in particular \(\texttt{InqB}^\otimes \supseteq \texttt{InqB}\). The following normal form was proven in [11] for inquisitive logic and extended in [35] to dependence logic.

Theorem 6.8

(Disjunctive Normal Form). Let \(\phi \in {\mathcal {L}}_{\texttt{IPC}}^\otimes \), then there are standard formulas \(\alpha _0,\dots ,\alpha _n\in {\mathcal {L}}_{\texttt{IPC}}^\otimes \) such that \(\phi \equiv _{\texttt{InqB}^\otimes } \bigvee _{i\le n} \alpha _i \).

We finally remark that the propositional dependence atom can be defined in this system as follows:

$$\begin{aligned} \mathop {=\!} \, (p_0,\dots , p_n, q) \, {:}{=}\, \bigwedge _{i\le n} (p_i\vee \lnot p_i ) \rightarrow (q\vee \lnot q). \end{aligned}$$

We thus notice that, despite the name, it is not the dependence atom which distinguishes the propositional version of inquisitive and dependence logics, but rather the presence of the tensor. This observation is also justified by the work of Barbero and Ciardelli in [14], as they showed that the tensor cannot be uniformly defined by the other operators.

6.3.2 Algebraic semantics of dependence logic

As we have recalled above, inquisitive logic admits a (non-standard) algebraic semantics, which was introduced in [7] and further investigated in [8]. As dependence logic extends inquisitive logic by the tensor operator, it is natural to provide it with an algebraic semantics by augmenting inquisitive algebras with an interpretation for it. Such a semantics was first introduced in [31] and was later shown in [28] to be unique up to a suitable notion of algebraizability. We can use such algebraic semantics to build a bridge with Esakia spaces and provide a topological semantics for dependence logic. Firstly, we introduce the notion of \(\texttt{InqB}^\otimes \)-algebras as in [28].

Definition 6.9

An \(\texttt{InqB}^\otimes \)-algebra A is a structure in the signature \({\mathcal {L}}_{\texttt{IPC}}^\otimes \) such that:

  1. (i)

    \(A{ {\upharpoonright } } \{\vee ,\wedge ,\rightarrow , \bot \} \in Var(\texttt{ML}) \);

  2. (ii)

    \(A_\lnot { {\upharpoonright } } \{\otimes ,\wedge ,\rightarrow , \bot \} \in \textsf{BA}\);

  3. (iii)

    \(A \vDash x \otimes (y \vee z) \approx (x\otimes y) \vee (x\otimes z);\)

  4. (iv)

    \(A \vDash (x\rightarrow z) \rightarrow (y \rightarrow k) \approx (x\otimes y) \rightarrow (z\otimes k).\)

Hence, an \(\texttt{InqB}^\otimes \)-algebra is the expansion of a Heyting algebra satisfying the validities of \(\texttt{ML}\), together with the additional conditions above. By expanding the previous definition, one can see that it amounts to the equational definition of a class of algebras, thus giving rise to a variety of structures. Notice that, as the regular elements of a Heyting algebra always form a Boolean algebra, what the condition \(A_\lnot { {\upharpoonright } } \{\otimes ,\wedge ,\rightarrow , \bot \} \in \textsf{BA}\) really entails is that, for all regular elements \(x,y\in A_\lnot \), \(x\otimes y = \lnot (\lnot x \wedge \lnot y)\), i.e. the tensor is the “real” Boolean disjunction over regular elements.

We let \( \mathsf {InqBAlg^\otimes } \) be the variety of all \(\texttt{InqB}^\otimes \)-algebras and we write \( \mathsf {InqBAlg^\otimes _{FRSI}} \) for its subclass of finite, regular and subdirectly irreducible elements. We say that A is a dependence algebra if it belongs to the subvariety generated by all finite, regular, subdirectly irreducible \(\texttt{InqB}^\otimes \)-algebras, i.e. if \(A\in {\mathbb {V}}(\mathsf {InqBAlg^\otimes _{FRSI}})\). We write \(\textsf{DA}\,{:}{=}\, {\mathbb {V}}(\mathsf {InqBAlg^\otimes _{FRSI}})\) for the variety of dependence algebras. It was proven in [28] that \(\textsf{DA}\) is the equivalent algebraic semantics of \(\texttt{InqB}^\otimes \). In particular, we have the following completeness result:

Theorem 6.10

(Algebraic Completeness). For any formula \(\phi \in {\mathcal {L}}_{\texttt{IPC}}^\otimes \) we have that \(\phi \in \texttt{InqB}^\otimes \) if and only if \( \textsf{DA} \vDash ^\lnot \phi \).

Here, on the right-hand side, we use the same notion of truth as in Section 2.4, i.e. formulas of dependence logic are evaluated under negative valuations, which map atomic formulas to regular elements of the underlying dependence algebra.

6.3.3 Topological semantics of dependence logic

The algebraic semantics of propositional dependence logic makes for an important bridge with the topological approach that we developed in this article. In fact, dependence algebras are expansions of Heyting algebras (more specifically of \(\texttt{ML}\)-algebras), whence we can dualise them according to Esakia duality. The only problem when proceeding in this way is that, as Esakia duality accounts only for the Heyting algebra structure of a dependence algebra, the correct interpretation of the tensor operator is “lost in translation”. To avoid this problem we shall consider only regular dependence algebras.

Let \(\mathfrak {E}\) be a regular Esakia space satisfying \(\texttt{ML}\); it is easy to provide an interpretation for the tensor over the clopen upsets of \(\mathfrak {E}\). In fact, as we remarked previously, the tensor of two regular elements is simply their classical Boolean disjunction. Moreover, it follows immediately from the disjunctive normal form of \(\texttt{InqB}\) (Theorem 6.8) and the fact that \(\texttt{ML}\)-spaces are complete with respect to \(\texttt{InqB}\) (Corollary 6.6) that any clopen upset of a regular \(\texttt{ML}\)-Esakia space is a union of regular ones. This allows us to define the tensor operator over \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) as follows:

  1. (i)

    For \(U,V\in \mathcal {RCU}(\mathfrak {E})\) we let \(U\otimes V {:}{=} \overline{(\overline{U} \cap \overline{V})}\);

  2. (ii)

    For \(U,V\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) not both in \(\mathcal {RCU}(\mathfrak {E})\) we let

    $$\begin{aligned} U\otimes V {:}{=} \bigcup \{ U_0 \otimes V_0\mid U_0\subseteq U, V_0\subseteq V, U_0,V_0\in \mathcal {RCU}(\mathfrak {E}) \}. \end{aligned}$$

We leave it to the reader to verify that \({{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\) forms a dependence algebra, where the tensor operator is interpreted as above. However, although this definition suffices to explain how the tensor can be interpreted over algebras of clopen upsets, it still does not provide us with a topological intuition of its behaviour. To this end, we prove the following proposition.

Proposition 6.11

Let \(\mathfrak {E}\) be a regular Esakia space satisfying \(\texttt{ML}\), and let \(\otimes \) be defined by the clauses above, then we have, for any \(U,V\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\):

$$\begin{aligned} x\in U\otimes V \Longleftrightarrow \;&M(x)\subseteq U_0\cup V_0\\ \;&\text {for some } U_0\subseteq U \text { and } V_0\subseteq V \text { such that } U_0,V_0\in {\mathcal {C}}(M_\mathfrak {E}). \end{aligned}$$

Proof

Firstly, if \(U,V\in \mathcal {RCU}(\mathfrak {E})\) we have \(U\otimes V = \overline{(\overline{U} \cap \overline{V})}\). We obtain:

$$\begin{aligned} x\in \overline{(\overline{U} \cap \overline{V})}&\Longleftrightarrow \forall y\ge x, \; y\notin \overline{U} \cap \overline{V} \\&\Longleftrightarrow \forall y\ge x \; \exists z \ge y, \; z\in U \cup V \\&\Longleftrightarrow M(x)\subseteq U \cup V\\&\Longleftrightarrow M(x)\subseteq M(U) \cup M(V). \end{aligned}$$

Then, for arbitrary \(U,V\in {{\mathcal {C}}}{{\mathcal {U}}}(\mathfrak {E})\), the claim follows immediately from the definition of the tensor and the display above. \(\square \)

The previous proposition thus provides us with a topological interpretation for the tensor operator and shows that the tensor disjunction between two clopen upsets of an Esakia space is uniquely determined by the Stone subspace of its maximal elements.
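In the finite case, where the clopen subsets of \(M_\mathfrak {E}\) are just arbitrary sets of maximal points, Proposition 6.11 gives a particularly simple recipe, which the following sketch (ours) implements; the three-element fork used as an example is a Medvedev frame, hence a regular poset validating \(\texttt{ML}\).

```python
# A finite poset is a dict sending each point to its upset.

def maximal_above(poset, x):
    return frozenset(y for y in poset[x] if len(poset[y]) == 1)

def tensor(poset, U, V):
    """Proposition 6.11 in the finite case: x lies in U (x) V iff every maximal
    point above x lies in U or in V."""
    return frozenset(x for x in poset if maximal_above(poset, x) <= (U | V))

# Example: on the fork r < a, r < b, with U = {a} and V = {b}, the root belongs
# to U (x) V even though it does not belong to the intuitionistic disjunction U | V.
fork = {"a": {"a"}, "b": {"b"}, "r": {"r", "a", "b"}}
U, V = frozenset({"a"}), frozenset({"b"})
print(sorted(tensor(fork, U, V)))  # ['a', 'b', 'r']
```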

Now, let \(\mathsf {Esa^{\texttt{ML}}_{RFR}}\) be the class of rooted, finite and regular posets which satisfy \(\texttt{ML}\), augmented with a tensor operator defined as in Proposition 6.11. By the definition of the variety of dependence algebras it follows that the validity of \(\texttt{InqB}^\otimes \)-formulas is always determined by the finite, regular, subdirectly irreducible algebras (see also [31]). The following theorem thus follows exactly as Theorem 6.5, by applying Esakia duality and interpreting the tensor as we illustrated above.

Theorem 6.12

(Topological Completeness). For any formula \(\phi \in {\mathcal {L}}_{\texttt{IPC}}^\otimes \) we have that \(\phi \in \texttt{InqB}^\otimes \) if and only if \( \mathsf {Esa^{\texttt{ML}}_{RFR}} \vDash ^\lnot \phi \).

As the validity of formulas is preserved by the variety operations, we can extend the previous result and infer the completeness of \(\texttt{InqB}^\otimes \) with respect to the closure of the class \(\mathsf {Esa^{\texttt{ML}}_{RFR}}\) under subspaces, p-morphic images and coproducts. Notice, however, that our topological characterisation of the tensor operator is limited to regular Esakia spaces. The question whether the tensor admits an interesting topological interpretation also in non-regular spaces should be the subject of further investigation.

7 Conclusion

In this article we considered regular Heyting algebras from the point of view of Esakia duality and we provided several results about their dual topological spaces. In particular, in Section 4 we described two different characterisations of (finite) regular Esakia spaces and in Section 5 we applied them to show that there are continuum-many varieties of Heyting algebras generated by (strongly) regular Heyting algebras. This also shows that there are continuum-many \(\texttt{DNA}\)-varieties and \(\texttt{DNA}\)-logics, in contrast to the fact that there are only countably many extensions of inquisitive logic. Finally, in Section 6, we considered several logical applications of our work and we introduced novel topological semantics for \(\texttt{DNA}\)-logics, inquisitive logic and dependence logic, which crucially rely on regular Esakia spaces.

We believe that the present work hints at some possible directions of further research. Besides the questions already raised in the article, we wish here to bring three points to attention.

Firstly, in [31] we have considered the algebraic semantics of a wide range of intermediate versions of inquisitive and dependence logics. As this semantics relies on Heyting algebras with a core of join-irreducible elements, it is then natural to ask to what extent one could extend the duality results of this article to this context.

Secondly, is it possible to extend our characterisation of finite regular posets from Section 4.2 to account also for infinite Esakia spaces? As we have briefly remarked, the cases of image-finite Esakia spaces, or of Esakia spaces dual to finitely generated Heyting algebras do not pose serious problems, but in general this seems a non-trivial problem.

Finally, the class of finite regular posets has a quite combinatorial nature and makes for an interesting class of structures. Is it possible to provide a classification of these structures up to some suitable notion of dimension, e.g. their depth or their number of maximal elements? We leave these and other problems to future research.