Boolean function analysis on high-dimensional expanders


High-dimensional expanders emerged in recent years as a new area of study, of interest to several different communities. Just as expander graphs are sparse models of the complete graph, so are high-dimensional expanders sparse models of a higher-dimensional object, namely the complete hypergraph. Expander graphs are central objects, appearing in a diverse list of areas. High-dimensional expanders are much newer objects which have already found connections to topological overlap theory [FGLNP12, Evr17], to the analysis of Markov chains [ALOV19], and to coding theory [DHKNT21] and property testing [KL14]. (Note that while expander graphs are explicit derandomizations of random graphs, the mere existence of high-dimensional expanders is surprising, since there is no sparse random model for generating these objects.)
The goal of this work is to connect these two threads of research, by introducing Boolean function analysis on high-dimensional expanders.
We study Boolean functions on simplicial complexes. A pure d-dimensional simplicial complex X is a set system consisting of an arbitrary collection of sets of size d + 1 together with all their subsets. The sets in a simplicial complex are called faces, and it is standard to denote by X(i) the faces of X whose cardinality is i + 1. Our simplicial complexes are weighted by a probability distribution Π_d on the top-level faces, which induces probability distributions Π_i on X(i) in a natural way for all i: we choose s ∼ Π_d, and then choose an i-face t ⊂ s uniformly at random. Our main object of study is the space of functions f : X(d) → R, and in particular Boolean functions f : X(d) → {0, 1}.
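The induced distributions Π_i can be sampled exactly as described: draw a top-level face from Π_d and then a uniform subset of the right size. A minimal Python sketch (the toy complex and the helper name `sample_face` are ours, not the paper's):

```python
import random
from collections import Counter

def sample_face(top_faces, weights, i, rng=random):
    """Sample t ~ Pi_i: draw a top face s ~ Pi_d, then a uniform (i+1)-subset of s."""
    s = rng.choices(top_faces, weights=weights, k=1)[0]
    return frozenset(rng.sample(sorted(s), i + 1))

# Toy pure 2-dimensional complex: three triangles, uniform Pi_2.
top = [frozenset(t) for t in [(0, 1, 2), (1, 2, 3), (2, 3, 4)]]
w = [1.0, 1.0, 1.0]

print(sorted(sample_face(top, w, 1)))  # a random edge, distributed as Pi_1

# The exact marginal Pi_0 on vertices: each top face spreads its mass evenly.
exact = Counter()
for s, ws in zip(top, w):
    for v in s:
        exact[v] += ws / (3 * sum(w))
print(dict(exact))  # vertex 2 is heaviest: it lies in all three triangles
```

Note that Π_0 is in general non-uniform even when Π_d is uniform, since vertices contained in more top faces receive more mass.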

Many different definitions of high-dimensional expansion
Although graph expansion has many definitions, all of which are equivalent, they each generalize to higher dimensions differently, leading to a diverse landscape of definitions.
The first definition studied by Linial and Meshulam [LM06] and by Gromov [Gro10] was topological, focusing on generalizations of edge expansion through coboundary maps in higher dimensions.
It was later discovered that certain bounded-degree simplicial complexes constructed by Lubotzky, Samuels and Vishne [LSV05a, LSV05b] satisfy this definition (more accurately, a variant of it called cosystolic expansion), leading to the first known family of bounded-degree complexes that satisfy Gromov's so-called "topological overlap property" [KKL16, DKW18, EK16]. The LSV construction comes from arithmetic quotients of Bruhat-Tits buildings, and its complexes are high-dimensional generalizations of the celebrated Lubotzky-Phillips-Sarnak construction of Ramanujan expander graphs [LPS88].
Subsequent works were interested in additional properties exhibited by the LSV complexes (and others, see [Li04]) that aren't necessarily captured by the topological definitions mentioned above. For example, Dinur and Kaufman [DK17] proved that the LSV complexes support so-called agreement tests, which are studied in the context of probabilistically checkable proofs and were previously known only for dense families of subsets, such as the complete complex. The relevant definition for that work is spectral link-expansion, which we now describe.
Let X be a d-dimensional simplicial complex, and let s ∈ X be any face of dimension at most d − 1. The graph of the link of s is the graph whose vertex set consists of all elements v ∉ s such that {v} ∪ s ∈ X. The edges are all pairs {v, u} such that {v, u} ∪ s ∈ X. A simplicial complex X is a two-sided (or one-sided) link-expander with spectral radius γ if for every link, the non-trivial normalized eigenvalues are upper-bounded by γ in the one-sided case, or sandwiched between −γ and γ in the two-sided case.
Garland [Gar73] studied this type of spectral expansion in links, and used it to show the vanishing of the real cohomology of Bruhat-Tits buildings. Similar techniques were further explored in the work of Oppenheim [Opp18]. The notion of one-sided spectral expansion also appeared in earlier work of Kaufman, Kazhdan and Lubotzky [KKL16], where it was applied towards proving topological expansion.
A third definition, through random walks on the i-faces, was studied initially by Kaufman and Mass [KM18], where the authors defined a combinatorial "random-walk" type of expansion, and showed that this type of expansion is implied by expansion of the links. This notion is concerned with the speed of convergence of high-dimensional random walks to the stationary distribution. It was refined by Dinur and Kaufman [DK17], who showed that two-sided link-expansion implies that all random walks on a high-dimensional expander converge at approximately the same speed as on the complete complex, with an error term dominated by γ.
In this paper we continue to study this two-sided definition, and show that it is in fact equivalent to a new random-walk definition which we suggest. We find the new random-walk definition appealing because it is very natural to state, and at the same time equivalent to the powerful two-sided link-expansion definition. Moreover, the random-walk definition generalizes naturally beyond simplicial complexes to ranked posets (partially ordered sets). Finally, the random-walk point of view allows for an analog of Fourier analysis, as we discuss below.
Concurrently and independently of this work, Kaufman and Oppenheim [KO20] studied the connection between one-sided link-expansion and convergence of the relevant random walks. They showed that one-sided link-expansion (which is weaker than its two-sided variant) already suffices for the same conclusions about the speed of convergence of random walks as were shown in the two-sided case. This was picked up in an exciting work by Anari, Liu, Oveis Gharan and Vinzant [ALOV19], who relied on this connection to solve a longstanding open question in the area of Markov chain sampling.
They showed that a certain well-studied Markov chain on bases of matroids can be viewed as a random walk on the faces of a certain one-sided link expander, thereby using the work of Kaufman and Oppenheim [KO20] to prove convergence of this random walk. These techniques were further developed in subsequent works that analyzed other Markov chains using their underlying high-dimensional expanding structure [ALO24, CGSV21, FGYZ22, CLV23, CLV21, ALO22].
Two independent works, [BHKL22] and [GLL22], used alternative decompositions of functions on high-dimensional expanders to prove hypercontractive properties of their random walks.

Random-walk based definition of high-dimensional expanders
Denote the space of real-valued functions on X(i) by C^i := {f : X(i) → R}. There are two natural averaging operators U_i : C^i → C^{i+1} and D_{i+1} : C^{i+1} → C^i, defined by

U_i f(s) = E_{t ⊂ s, |t| = i+1}[f(t)],    D_{i+1} g(t) = E_{s ∼ Π_{i+1} | s ⊃ t}[g(s)].

The compositions D_{i+1}U_i and U_{i−1}D_i are Markov operators of two natural random walks on X(i), the upper random walk and the lower random walk.
The first walk we consider is the upper random walk D_{i+1}U_i. Given a face t_1 ∈ X(i), we choose its neighbor t_2 as follows: we pick a random s ∼ Π_{i+1} conditioned on s ⊃ t_1, and then choose t_2 ⊂ s uniformly at random. Note that there is a probability of 1/(i+2) that t_1 = t_2. We define the non-lazy upper random walk by choosing t_2 ⊂ s conditioned on t_1 ≠ t_2. We denote the Markov operator of the non-lazy upper walk by M^+_i. This operator satisfies the following equality (which can be used to define it):

D_{i+1}U_i = (1/(i+2)) I + (1 − 1/(i+2)) M^+_i.

A similar (but not identical) random walk on X(i) is the lower random walk U_{i−1}D_i. Here, given a face t_1 ∈ X(i), we choose a neighbor t_2 as follows: we first choose r ∈ X(i−1), r ⊂ t_1, uniformly at random, and then choose t_2 ∼ Π_i conditioned on t_2 ⊃ r.
For instance, if X is a graph (a 1-dimensional simplicial complex), then the non-lazy upper random walk is the usual adjacency walk on a weighted graph (i.e., traversing from vertex to vertex along an edge). The (lazy) upper random walk has probability 1/2 of staying in place, and probability 1/2 of moving to an adjacent vertex. The lower random walk on V = X(0) doesn't depend on the current vertex: it simply chooses a vertex at random according to the distribution Π_0 on X(0).
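These three walks on a weighted graph can be written down concretely as stochastic matrices. A small numeric sketch (the 4-cycle with uniform edge weights is our toy example): the lazy upper walk DU, its non-lazy version M^+, and the lower walk UD, whose matrix is rank one because it forgets the current vertex.

```python
import numpy as np

# Weighted graph = 1-dimensional complex; Pi_1 on edges (here: uniform 4-cycle).
V = [0, 1, 2, 3]
edges = {(0, 1): 0.25, (1, 2): 0.25, (2, 3): 0.25, (0, 3): 0.25}

pi0 = np.zeros(len(V))              # induced Pi_0: half of each edge's mass per endpoint
for (u, v), w in edges.items():
    pi0[u] += w / 2
    pi0[v] += w / 2

n = len(V)
DU = np.zeros((n, n))               # lazy upper walk on vertices
for (u, v), w in edges.items():
    for a, b in [(u, v), (v, u)]:
        p_edge = w / (2 * pi0[a])   # pick an edge ~ Pi_1 conditioned on containing a
        DU[a, a] += p_edge / 2      # stay with probability 1/2
        DU[a, b] += p_edge / 2      # move to the other endpoint
M_plus = 2 * (DU - np.eye(n) / 2)   # remove the 1/2 laziness

UD = np.tile(pi0, (n, 1))           # lower walk: resample a fresh vertex ~ Pi_0

print(np.round(M_plus, 3))               # the usual (non-lazy) adjacency walk
print(np.linalg.matrix_rank(UD))         # 1: independent of the current vertex
```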
The Up and Down operators resemble operators appearing in several similar situations. One immediate example is the boundary and coboundary operators with real coefficients. These differ from the operators described above in that they include signs according to the orientation of the faces, whereas our operators ignore signs. Stanley studied Up and Down operators in numerous combinatorial situations; the most relevant to this work is his definition of sequentially differential posets, which we discuss in Section 1.4.
The Up and Down operators also make an appearance in the Kruskal-Katona theorem. O'Donnell and Wimmer [OW13] related the non-lazy upper and lower random walks (the two random walks are identical in their setting) to the Kruskal-Katona theorem, and used this connection to construct an optimal net for monotone Boolean functions.
We are now ready to give our definition of a high-dimensional expander in terms of these walks.
Definition 1.1 (High-dimensional expander). Let γ < 1, and let X be a d-dimensional simplicial complex. We say X is a γ-high-dimensional expander (or γ-HDX) if for all 0 ≤ i ≤ d − 1, the non-lazy upper random walk is γ-similar to the lower random walk in operator norm:

∥M^+_i − U_{i−1}D_i∥ ≤ γ.

In the graph case, this coincides with the definition of a γ-two-sided spectral expander: recall that the lower walk on X(0) chooses two vertices v_1, v_2 ∈ X(0) independently. Thus ∥M^+_0 − U_{−1}D_0∥ is the second-largest eigenvalue, in absolute value, of the adjacency random walk. For i ≥ 1, we cannot expect the upper random walk to be similar to choosing two independent faces in X(i), since the two faces always share a common intersection of i elements. Instead, our definition asserts that traversing through a common (i+1)-face is similar to traversing through a common (i−1)-face.
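Definition 1.1 can be checked numerically on small complexes. The sketch below builds M^+_i and U_{i−1}D_i for the complete 2-dimensional complex with uniform weights (so the Euclidean operator norm agrees with the Π-weighted one) and evaluates ∥M^+_i − U_{i−1}D_i∥; the parameters n = 6, i = 1 are illustrative.

```python
import numpy as np
from itertools import combinations

n, i = 6, 1                              # complete 2-dim complex on n vertices, level i
faces = [frozenset(c) for c in combinations(range(n), i + 1)]
idx = {f: k for k, f in enumerate(faces)}
m = len(faces)

# Non-lazy upper walk M+: go up to a uniform (i+1)-face, down to a *different* i-face.
M_plus = np.zeros((m, m))
for t in faces:
    ups = [t | {v} for v in range(n) if v not in t]
    for s in ups:
        others = [frozenset(c) for c in combinations(sorted(s), i + 1)
                  if frozenset(c) != t]
        for t2 in others:
            M_plus[idx[t], idx[t2]] += 1 / (len(ups) * len(others))

# Lower walk UD: go down to a uniform (i-1)-face r, then up to a uniform i-face above r.
UD = np.zeros((m, m))
for t in faces:
    for r in combinations(sorted(t), i):
        supers = [f for f in faces if f > frozenset(r)]
        for t2 in supers:
            UD[idx[t], idx[t2]] += 1 / ((i + 1) * len(supers))

gamma = np.linalg.norm(M_plus - UD, 2)   # operator norm (uniform weights here)
print(round(gamma, 4))                   # 0.25 for n = 6; shrinks as n grows
```

The value shrinks as n grows, reflecting the fact that the complete complex is a good high-dimensional expander.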
We show that this new definition is equivalent to the aforementioned definition of two-sided link expanders for constant dimension d, thus giving these high-dimensional expanders a new characterization.
Theorem 1.2 (Equivalence between high-dimensional expander definitions). Let X be a d-dimensional simplicial complex.
1. If X is a γ-two-sided link expander then X is a γ-HDX according to the definition we give.
2. If X is a γ-HDX then X is an O(dγ)-two-sided link expander.
Through this characterization of high-dimensional expansion, we decompose real-valued functions f : X(i) → R into an approximately orthogonal decomposition that respects the upper and lower random walk operators. We also give an example in Section 5.1 showing that the O(d) factor in the second item of the equivalence is tight. We stress that the second direction of this equivalence is non-trivial only in the regime where γ < 1/(3(d+1)) (so when the dimension grows, our definition becomes weaker).

Decomposition of functions on X(i)
We begin by recalling the classical decomposition of functions over the Boolean hypercube. Every function on the Boolean cube {0, 1}^n has a unique representation as a multilinear polynomial. In the case of the Boolean hypercube, it is convenient to view the domain as {1, −1}^n, in which case the above representation gives the Fourier expansion of the function. The multilinear monomials can be partitioned into "levels" according to their degree, and this corresponds to an orthogonal decomposition of a function into a sum of its homogeneous parts, f = Σ_{i=0}^{deg f} f^{=i}, a decomposition which is a basic concept in Boolean function analysis. These concepts have known counterparts for the complete complex, which consists of all subsets of [n] of size at most d + 1, where d + 1 ≤ n/2. The facets (top-level faces) of this complex comprise the slice (as it is known to computer scientists) or the Johnson scheme (as it is known to coding theorists), whose spectral theory was elucidated by Dunkl [Dun76]. For |t| ≤ d + 1, let y_t(s) = 1 if t ⊆ s and y_t(s) = 0 otherwise (these are the analogs of monomials). Every function on the complete complex has a unique representation as a linear combination of monomials Σ_t f(t) y_t (of various degrees), where the coefficients f(t) satisfy the following harmonicity condition: for all i ≤ d and all t ∈ X(i),

Σ_{v ∉ t} f(t ∪ {v}) = 0.

(If we identify y_t with the product Π_{i∈t} x_i of "variables" x_i, then harmonicity of a multilinear polynomial P translates to the condition Σ_{i=1}^n ∂P/∂x_i = 0.)
As in the case of the Boolean cube, this unique representation allows us to orthogonally decompose a function into its homogeneous parts (corresponding to the contribution of monomials y_t with fixed |t|), which play the same essential part in the complete complex as their counterparts do in the Boolean cube. Moreover, this unique representation allows extending a function from the "slice" to the Boolean cube (which can be viewed as a superset of the "slice"), thus implying further results such as an invariance principle [FKMW18, FM19].
We generalize these concepts to complexes satisfying a technical condition we call properness, which holds when the Markov operators DU of the upper random walks have full rank, or equivalently when Ker U_i = {0} for all i. This holds for both the complete complex and high-dimensional expanders. We show that the results on unique decomposition for the complete complex hold for arbitrary proper complexes (Theorem 3.2), with a generalized definition of harmonicity which incorporates the distributions Π_i. In contrast to the case of the complete complex (and the Boolean cube), in the case of high-dimensional expanders the homogeneous parts are only approximately orthogonal.
The homogeneous components in our decomposition are "approximate eigenfunctions" of the Markov operators defined above, and this allows us to derive an approximate identity relating the total influence (defined through the random walks) to the norms of the components in our decomposition, in complete analogy to the same identity in the Boolean cube (expressing the total influence in terms of the Fourier expansion).
Theorem 1.3 (Decomposition theorem for functions on HDX). Let X be a proper d-dimensional simplicial complex. Every function f : X(ℓ) → R, for ℓ ≤ d, can be written uniquely as

f = f_{−1} + f_0 + · · · + f_ℓ,

where the components f_i satisfy:
• f_i is a linear combination of the functions y_s(t) = 1_{[t ⊇ s]} for s ∈ X(i), i.e., y_s(t) = 1 when t ⊇ s.
• Interpreted as a function on X(i), f_i lies in the kernel of the Markov operator U_{i−1}D_i of the lower random walk.

If X is furthermore a γ-high-dimensional expander, then the above decomposition is almost orthogonal, in the sense that |⟨f_i, f_j⟩| = O(γ) ∥f_i∥ ∥f_j∥ for all i ≠ j. (For an exact statement in terms of the dependence of the error on γ and d, see Theorem 4.6.)
A similar decomposition theorem was proved, concurrently and independently, by Kaufman and Oppenheim [KO20, Theorem 1.3], for one-sided spectral high-dimensional expanders. Our near-orthogonal decomposition holds only for two-sided high-dimensional expanders, while they prove an interesting norm decomposition instead of an orthogonal decomposition, writing the norm of f as the approximate sum of norms of projections of f to the spaces of functions on X(j).
This is interesting especially in light of the fact that one-sided expanders aren't necessarily proper (for example, the complete (d + 1)-partite simplicial complex is not proper). In particular, there is no known way to write f itself as a sum of components in analogy to Theorem 1.3.
Subsequent to the earlier version of our result [DDFH18], Kaufman and Oppenheim [KO20, Theorem 1.5] extended their decomposition theorem to two-sided high-dimensional expanders. This decomposition is similar to ours but not identical. As in our near-orthogonal decomposition, they decompose f into orthogonal components related to the spaces of functions on X(i), satisfying a similar "nearly eigenvector" equality for the upper walk operator. Instead of requiring f_i to lie in the kernel of the walk UD, they take f_i to be in the projection of the orthogonal complement of Span{f_{−1}, . . . , f_{i−1}}. As a consequence, their decomposition is perfectly orthogonal and not just approximately orthogonal. On the flip side, the components f_i live in slightly less-nicely-defined spaces.

Decomposition of functions on posets
The decomposition we suggest in this paper holds in the more general setting of graded partially ordered sets (posets). A finite graded poset (X, ≤, ρ) is a poset (X, ≤) equipped with a rank function ρ : X → {−1} ∪ N that respects the order, i.e., if x ≤ y then ρ(x) ≤ ρ(y). Additionally, if y is minimal among the elements greater than x (i.e., y covers x), then ρ(y) = ρ(x) + 1. Denoting X(i) = ρ^{−1}(i), we can partition the poset as

X = X(−1) ∪ X(0) ∪ · · · ∪ X(d).

We consider graded posets with a unique minimum element ∅ ∈ X(−1). Every simplicial complex is a graded poset. Another notable example is the Grassmann poset Gr_q(n, d), which consists of all subspaces of F_q^n of dimension at most d + 1. The order is the containment relation, and the rank is the dimension minus one, ρ(W) = dim(W) − 1. The Grassmann poset was recently studied in the context of proving the 2-to-1 games conjecture [KMS17, DKKMS18a, DKKMS18b, KMS18], where a decomposition of functions on the Grassmann poset was useful. Such a decomposition is a special case of the general decomposition theorem in this paper.
Towards our goal of decomposing functions on graded posets, we generalize the notion of random walks on X(i) as follows. A measured poset is a graded poset with a sequence of measures Π⃗ = (Π_{−1}, . . . , Π_d) on the different levels X(i) that allows us to define operators U_i, D_{i+1} similar to the simplicial complex case (for a formal definition see Section 6). The upper random walk, given by the composition D_{i+1}U_i, is the walk where we choose two consecutive t_1, t_2 ∈ X(i) by choosing s ∈ X(i + 1) and then t_1, t_2 ≤ s independently. The lower random walk U_{i−1}D_i is the walk where we choose two consecutive t_1, t_2 ∈ X(i) by choosing r ∈ X(i − 1) and then t_1, t_2 ≥ r independently.
Stanley studied a special case of a measured poset that is called a sequentially differential poset [Sta88]. This is a poset where

D_{i+1}U_i = r_i U_{i−1}D_i + δ_i I    (1)

for all 0 ≤ i ≤ d and some constants r_i, δ_i ∈ R. There are many interesting examples of sequentially differential posets, such as the Grassmann poset and the complete complex. Definition 1.1 of a high-dimensional expander resembles an approximate version of this equation: in a simplicial complex, one may check that the non-lazy upper walk is M^+_i = ((i+2)/(i+1)) D_{i+1}U_i − (1/(i+1)) I, so the condition ∥M^+_i − U_{i−1}D_i∥ ≤ γ bounds the deviation from (1) with r_i = (i+1)/(i+2) and δ_i = 1/(i+2), which suggests a relaxation of (1) to an expanding poset (eposet).
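The claim that the complete complex is sequentially differential can be verified numerically: fit the constants r_i, δ_i from two matrix entries and then check identity (1) on all entries. A minimal sketch (uniform weights, n = 5, level i = 1 are our toy choices):

```python
import numpy as np
from itertools import combinations

n, i = 5, 1                          # complete complex on [n], uniform weights, level i
X = {j: [frozenset(c) for c in combinations(range(n), j + 1)] for j in (i - 1, i, i + 1)}

def up(lo, hi):                      # (Uf)(s) = average of f over the facets of s
    M = np.zeros((len(hi), len(lo)))
    for a, s in enumerate(hi):
        for v in s:
            M[a, lo.index(s - {v})] += 1 / len(s)
    return M

def down(hi, lo):                    # (Dg)(t) = average of g over the faces covering t
    M = np.zeros((len(lo), len(hi)))
    for a, t in enumerate(lo):
        sups = [s for s in hi if s > t]
        for s in sups:
            M[a, hi.index(s)] += 1 / len(sups)
    return M

DU = down(X[i + 1], X[i]) @ up(X[i], X[i + 1])
UD = up(X[i - 1], X[i]) @ down(X[i], X[i - 1])

t0, t1 = 0, 1                        # two distinct i-faces sharing a vertex (UD != 0)
r = DU[t0, t1] / UD[t0, t1]          # fit the constants from two matrix entries
delta = DU[t0, t0] - r * UD[t0, t0]
print(round(r, 4), round(delta, 4))
print(np.allclose(DU, r * UD + delta * np.eye(len(X[i]))))  # True: (1) holds exactly
```

Note that the fitted constants depend on n as well as i; the values r_i = (i+1)/(i+2), δ_i = 1/(i+2) quoted above are the ones matching the HDX condition, not the complete complex.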

An FKN theorem
Returning to simplicial complexes, as a demonstration of the power of this setup, we generalize the fundamental result of Friedgut, Kalai, and Naor [FKN02] on Boolean functions almost of degree 1.
We view this as a first step toward developing a full-fledged theory of Boolean functions on high-dimensional expanders.
An easy exercise shows that a Boolean degree 1 function on the Boolean cube is a dictator, that is, depends on at most one coordinate; we call this the degree one theorem (the easy case of the FKN theorem with zero error). The FKN theorem, which is the robust version of this degree one theorem, states that a Boolean function on the Boolean cube which is close to a degree 1 function is in fact close to a dictator, where closeness is measured in L^2.
The degree one theorem holds for the complete complex as well. The third author [Fil16a] extended the FKN theorem to the complete complex. Surprisingly, the class of approximating functions has to be extended beyond just dictators.
We prove a degree one theorem for arbitrary proper complexes, and an FKN theorem for high-dimensional expanders. In contrast to the complete complex, Boolean degree 1 functions on arbitrary complexes correspond to independent sets rather than just single points, and this makes the proof of the degree one theorem non-trivial.
Definition 1.5 (1-skeleton). The 1-skeleton of a simplicial complex X is the graph whose vertices are X(0), the 0-faces of the complex, and whose edges are X(1), the 1-faces of the complex.
Claim 1.6 (Degree one theorem on simplicial complexes). Suppose that X is a proper d-dimensional simplicial complex, for d ≥ 2, whose 1-skeleton is connected. A function f ∈ C^d is a Boolean degree 1 function if and only if there exists an independent set I (in the 1-skeleton of X) such that f is the indicator of intersecting I or of not intersecting I.
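The "if" direction for the intersection indicator can be seen concretely: when I is independent, every top face meets I at most once, so the indicator of intersecting I equals the degree 1 combination Σ_{v∈I} y_v. A small sketch (the toy complex and the set I are illustrative):

```python
from itertools import combinations

# Toy pure 2-dimensional complex: three triangles.
top = [frozenset(t) for t in [(0, 1, 2), (2, 3, 4), (4, 5, 0)]]
skeleton_edges = {frozenset(e) for t in top for e in combinations(sorted(t), 2)}

I = {1, 3}  # independent in the 1-skeleton: no pair of I is an edge
assert all(frozenset(p) not in skeleton_edges for p in combinations(sorted(I), 2))

for s in top:
    f = 1 if s & I else 0                  # Boolean "intersects I" indicator
    lin = sum(1 for v in I if v in s)      # the degree 1 combination sum of y_v, v in I
    print(sorted(s), f, lin)               # f == lin, since |s ∩ I| <= 1
```

If I were not independent, some top face could meet I twice and the linear combination would take the value 2, so it would no longer be Boolean.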
Our proof of the FKN theorem for high-dimensional expanders is very different from existing proofs.
It follows the same general plan as recent work on the biased Kindler-Safra theorem [DFH19]. The idea is to view a high-dimensional expander as a convex combination of small sub-complexes, each of which is isomorphic to the complete k-dimensional complex on O(k) vertices. We can then apply the known FKN theorem separately on each of these, and deduce that our function is approximately well-structured on each sub-complex. Finally, we apply the agreement theorem of Dinur and Kaufman [DK17] to show that the same holds on a global level.
Theorem 1.7 (FKN theorem on high-dimensional expanders, informal). Let X be a d-dimensional γ-HDX. If a Boolean function f ∈ C^d is ε-close (in L^2) to a degree 1 function then there exists a degree 1 function g such that ∥f − g∥^2 = O(ε), where the hidden constant depends on d.

Paper organization
We describe our general setup in Section 2. We describe the property of properness and its implications (a unique representation theorem and a decomposition of functions into homogeneous parts) in Section 3. We introduce our definition of high-dimensional expanders in Section 4. In Section 5 we show the equivalence between our definition and the earlier one of two-sided link expanders.
In Section 6 we define expanding posets, and describe our decomposition of functions on posets.
We prove almost orthogonality of the decomposition (Theorem 3.2) in this more general setting. The full decomposition theorem for the (interesting) special case of simplicial complexes is explicitly stated in Theorem 4.6.
We prove our degree one theorem in Section 7, and our FKN theorem in Section 8. Theorem 1.3 is a combination of Theorem 3.2 (first two items) and Theorem 4.6 (other three items).
Theorem 1.2 is a restatement of Theorem 5.5. Claim 1.6 is a restatement of Theorem 7.2. Theorem 1.7 is a restatement of Theorem 8.3.

Basic setup
A d-dimensional simplicial complex X is a non-empty collection of sets of size at most d + 1 which is closed under taking subsets. We call a set of size i + 1 an i-dimensional face (or i-face for short), and denote the collection of all i-faces by X(i). A d-dimensional simplicial complex X is pure if every i-face is a subset of some d-face. We will only be interested in pure simplicial complexes.
Let X be a pure d-dimensional simplicial complex. Given a probability distribution Π_d on its top-dimensional faces X(d), we define a distribution vector Π⃗ = (Π_{−1}, . . . , Π_d) as follows: choose s ∼ Π_d and order its vertices uniformly at random as (v_0, v_1, . . . , v_d); then Π_i is the distribution of the i-face {v_0, . . . , v_i}. Let C^i := {f : X(i) → R} be the space of functions on X(i). It is convenient to define X(−1) := {∅}, and we also let C^{−1} := R. We turn C^i into an inner product space by defining ⟨f, g⟩ := E_{s∼Π_i}[f(s)g(s)]. The Up operator U_i : C^i → C^{i+1} for −1 ≤ i < d is defined by U_i f(s) := E_t[f(t)], where t is obtained from s by removing a random element. Note that if s ∼ Π_{i+1} then t ∼ Π_i. Similarly, we define the Down operator D_{i+1} : C^{i+1} → C^i for −1 ≤ i < d by D_{i+1} f(t) := E_s[f(s)], where s is obtained from t by conditioning the vector Π⃗ on Π_i = t and taking the (i+1)-th component.
The operators U_i and D_{i+1} are adjoint to each other: ⟨U_i f, g⟩ = ⟨f, D_{i+1} g⟩ for all f ∈ C^i and g ∈ C^{i+1}. When the domain is understood, we will use U, D instead of U_i, D_{i+1}. This will be especially useful when considering powers of U, D: for example, if f ∈ C^i then U^2 f stands for U_{i+1}U_i f ∈ C^{i+2}. Given a face s ∈ X, the function y_s is the indicator function of containing s, i.e., y_s(t) = 1 if t ⊇ s and y_s(t) = 0 otherwise. Our definition of the Up operator guarantees the correctness of the following lemma.
Lemma 2.1. Let s ∈ X(i). We can think of y_s as a function in C^j for all j ≥ i. Using this convention, U_j y_s = (1 − (i+1)/(j+2)) y_s.
Proof. Let t ∈ X(j+1). Direct calculation shows that

U_j y_s(t) = (1/(j+2)) Σ_{v∈t} y_s(t \ {v}) = (((j+2) − (i+1))/(j+2)) y_s(t),

since when t ⊇ s exactly (j+2) − (i+1) of the elements v ∈ t can be removed without destroying containment of s, and when t ⊉ s no subset of t contains s. Therefore U_j y_s = (1 − (i+1)/(j+2)) y_s.
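Lemma 2.1 is easy to check numerically, since U is defined by uniform removal and does not depend on the weights. The sketch below verifies it on the complete complex (the choices n = 6, i = 1, j = 2 are illustrative):

```python
import numpy as np
from itertools import combinations

n = 6
X = {j: [frozenset(c) for c in combinations(range(n), j + 1)] for j in (2, 3)}

def up(lo, hi):                    # (U f)(t) = average of f(t \ {v}) over v in t
    M = np.zeros((len(hi), len(lo)))
    for a, s in enumerate(hi):
        for v in s:
            M[a, lo.index(s - {v})] += 1 / len(s)
    return M

s0 = frozenset({0, 1})             # a fixed i-face with i = 1
y_lo = np.array([1.0 if t >= s0 else 0.0 for t in X[2]])   # y_s as a function in C^2
y_hi = np.array([1.0 if t >= s0 else 0.0 for t in X[3]])   # y_s viewed in C^3

i, j = 1, 2
print(np.allclose(up(X[2], X[3]) @ y_lo, (1 - (i + 1) / (j + 2)) * y_hi))  # True
```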
For 0 ≤ i ≤ k, the space of harmonic functions on X(i) is defined as H^i := ker D_i = {f ∈ C^i : D_i f = 0}. We also define H^{−1} := C^{−1} = R, and V^i := U^{k−i} H^i ⊆ C^k. We can describe V^i, a sub-class of functions of C^k, in more concrete terms.
Lemma 2.2. Every function h ∈ V^i has a representation of the form h = Σ_{s∈X(i)} h(s) y_s, where the coefficients h(s) satisfy the following harmonicity condition: for all t ∈ X(i−1),

E_{s∼Π_i | s⊃t}[h(s)] = 0.

Furthermore, if U^{k−i} is injective on C^i then the representation is unique.
Proof. Suppose that h ∈ V^i. Then h = U^{k−i} f for some f ∈ H^i, which by definition of H^i and the Down operator is equivalent to the condition E_{s∼Π_i | s⊃t}[f(s)] = 0 for all t ∈ X(i−1). In other words, the f(s)'s satisfy the harmonicity condition. It is easy to check that f = Σ_{s∈X(i)} f(s) y_s, and so Lemma 2.1 shows that h = Σ_{s∈X(i)} h(s) y_s, where h(s) = Π_{j=i}^{k−1} (1 − (i+1)/(j+2)) · f(s). Since h(s) is a scaling of f(s) by a non-zero constant, it follows that the coefficients h(s) also satisfy the harmonicity condition. Now suppose that U^{k−i} is injective on C^i, which implies that dim H^i = dim V^i. The foregoing shows that the dimension of the space of coefficients h(s) satisfying the harmonicity conditions is dim H^i. Since dim H^i = dim V^i, this shows that the representation is unique.

Decomposition of the space C k and a convenient basis
Our decomposition theorem relies on a crucial property of simplicial complexes, properness: a k-dimensional simplicial complex X is proper if the Up operators U_i are injective (equivalently, ker U_i = {0}) for all 0 ≤ i < k.
We remark that since DU is PSD, ker U = {0} is equivalent to DU being positive definite. This is because for any x ∈ ker DU, we would have 0 = ⟨x, DUx⟩ = ∥Ux∥^2, implying that x = 0.
The complete k-dimensional complex on n points is proper iff k + 1 ≤ (n+1)/2. A pure one-dimensional simplicial complex (i.e., a graph) is proper iff it is not bipartite. Unfortunately, we are not aware of a similar characterization for higher dimensions. However, in Section 5 we show that high-dimensional expanders are proper.
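The graph case can be checked directly: ker U_0 consists of functions with f(u) + f(v) = 0 on every edge, which contains a non-zero function exactly when the graph is bipartite. A minimal sketch:

```python
import numpy as np

def up0(edges, n):                 # U_0 : C^0 -> C^1, averaging over the two endpoints
    M = np.zeros((len(edges), n))
    for a, (u, v) in enumerate(edges):
        M[a, u] = M[a, v] = 0.5
    return M

for name, n, edges in [("triangle", 3, [(0, 1), (1, 2), (0, 2)]),
                       ("4-cycle (bipartite)", 4, [(0, 1), (1, 2), (2, 3), (3, 0)])]:
    U = up0(edges, n)
    dim_ker = n - np.linalg.matrix_rank(U)
    print(name, "ker U_0 dimension:", dim_ker)   # 0 for the triangle, 1 for the 4-cycle
```

On the 4-cycle, the kernel is spanned by the alternating function (1, −1, 1, −1), the usual witness of bipartiteness.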
We can now state our decomposition theorem.
Theorem 3.2. If X is a proper k-dimensional simplicial complex then we have the following decomposition of C^k:

C^k = V^{−1} ⊕ V^0 ⊕ · · · ⊕ V^k.

In other words, for every function f ∈ C^k there is a unique choice of h_i ∈ H^i such that the functions f_i := U^{k−i} h_i satisfy f = f_{−1} + f_0 + · · · + f_k.

Proof. We first prove by induction on ℓ that every function f ∈ C^ℓ has a representation f = Σ_{i=−1}^{ℓ} U^{ℓ−i} h_i, where h_i ∈ H^i. This trivially holds when ℓ = −1. Suppose now that the claim holds for some ℓ < k, and let f ∈ C^{ℓ+1}. Since H^{ℓ+1} = ker D_{ℓ+1} and (Im U_ℓ)^⊥ = ker D_{ℓ+1} (by the adjointness of U and D), we have C^{ℓ+1} = H^{ℓ+1} ⊕ U_ℓ C^ℓ, and therefore we can write f = h_{ℓ+1} + Ug, where h_{ℓ+1} ∈ H^{ℓ+1} and g ∈ C^ℓ. Applying induction, we get that g = Σ_{i=−1}^{ℓ} U^{ℓ−i} h_i, where h_i ∈ H^i. Substituting this in f = h_{ℓ+1} + Ug completes the proof.
It remains to show that the representation is unique. The map (h_{−1}, . . . , h_k) ↦ Σ_{i=−1}^{k} U^{k−i} h_i is surjective by the above. Since X is proper, ker U_i = {0} for all i, so each U^{k−i} is injective on H^i and dim V^i = dim H^i. A dimension count now shows that the map is not only surjective but also injective. In other words, the representation of f is unique.
Corollary 3.3. If X is a proper k-dimensional simplicial complex then every function f ∈ C^k has a unique representation

f = Σ_{i=−1}^{k} Σ_{s∈X(i)} f(s) y_s,

where the coefficients f(s) satisfy the following harmonicity conditions: for all 0 ≤ i ≤ k and all t ∈ X(i−1),

E_{s∼Π_i | s⊃t}[f(s)] = 0.

Proof. Follows directly from Lemma 2.2.
We can now define the degree of a function.
Definition 3.4. The degree of a function f is the maximal cardinality of a face s such that f(s) ≠ 0 in the unique decomposition given by Corollary 3.3.
Thus a function has degree at most d if its decomposition only involves faces whose dimension is less than d.
The following lemma shows that the functions y_s, for all d-dimensional faces s, form a basis for the space of all functions of degree at most d + 1.
Lemma 3.5. If X is a proper k-dimensional simplicial complex then the space of functions on X(k) of degree at most d + 1 has the functions {y_s : s ∈ X(d)} as a basis.
Proof. The space of functions on X(k) of degree at most d + 1 is spanned, by definition, by the functions y_t for t ∈ X(i) with i ≤ d. Given the above, in order to complete the proof, it suffices to show that for every i ≤ d and t ∈ X(i), the function y_t can be written as a linear combination of y_s for s ∈ X(d). This shows that {y_s : s ∈ X(d)} spans the space of functions of degree at most d + 1. Since this set contains |X(d)| functions, it forms a basis.
Recall that y_t(r) = 1_{r⊇t}, where r ∈ X(k). If r contains t then it contains exactly (k+1−|t| choose d+1−|t|) many d-faces that contain t, and so

y_t = (k+1−|t| choose d+1−|t|)^{−1} Σ_{s∈X(d): s⊇t} y_s.

This completes the proof.
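The counting identity at the heart of this proof can be verified numerically; the sketch below checks the resulting expansion of y_t on the complete complex (the parameters n = 6, k = 3, d = 2 and the face t are illustrative):

```python
import numpy as np
from itertools import combinations
from math import comb

n, k, d = 6, 3, 2
Xk = [frozenset(c) for c in combinations(range(n), k + 1)]
Xd = [frozenset(c) for c in combinations(range(n), d + 1)]

t = frozenset({0})                               # a face of cardinality |t| = 1
y_t = np.array([1.0 if r >= t else 0.0 for r in Xk])

total = np.zeros(len(Xk))                        # sum of y_s over d-faces s containing t
for s in Xd:
    if s >= t:
        total += np.array([1.0 if r >= s else 0.0 for r in Xk])

c = comb(k + 1 - len(t), d + 1 - len(t))         # d-faces between t and a given k-face
print(np.allclose(y_t, total / c))               # True
```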
We call f_i the "level i" part of f, and denote the weight of f above level i by W^{>i}[f] := Σ_{j>i} ∥f_j∥^2. We also define W^{=i}[f] := ∥f_i∥^2.

How to define high-dimensional expansion?
In this section we define a class of simplicial complexes which we call γ-high-dimensional expanders (or γ-HDXs). We later show that these simplicial complexes coincide with the high-dimensional expanders defined by Dinur and Kaufman [DK17] via spectral expansion of the links. In addition, we show that the decomposition in Section 3 is almost orthogonal for γ-HDXs. We define γ-HDXs through relations between random walks in different dimensions. It is easy to state the definition directly using the U, D operators: a k-dimensional simplicial complex is said to be a γ-HDX if for all levels 0 ≤ j ≤ k − 1,

∥ ((j+2)/(j+1)) D_{j+1}U_j − (1/(j+1)) I − U_{j−1}D_j ∥ ≤ γ.    (3)

We turn to explain the meaning of (3) being small by discussing these random walks. The operators U and D induce random walks on the j-th level X(j) of the simplicial complex. Recall that our simplicial complexes come with distributions Π_j on the j-faces.
Definition 4.1 (The upper random walk DU). Given t ∈ X(j), we choose the next set t′ ∈ X(j) as follows: • Choose s ∼ Π_{j+1} conditioned on t ⊂ s.
• Choose uniformly at random t ′ ∈ X(j) such that t ′ ⊂ s.
Definition 4.2 (The lower random walk UD). Given t ∈ X(j), we choose the next set t′ ∈ X(j) as follows: • Choose uniformly at random r ∈ X(j − 1) such that r ⊂ t. • Choose t′ ∼ Π_j conditioned on t′ ⊃ r.
It is easy to see that the stationary distribution of both these processes is Π_j. However, these random walks are not necessarily the same. For example, if j = 0, we consider the graph (X(0), X(1)). The upper walk is the 1/2-lazy version of the usual adjacency random walk on a graph. The lower random walk simply chooses two vertices independently, according to the distribution Π_0. In both walks, the first step and the third step are independent given the second step. In fact, we can view the upper walk (resp. lower walk) as choosing a set s ∈ X(j+1) (resp. r ∈ X(j−1)), and then choosing independently two sets t, t′ ∈ X(j) given that they are contained in s (resp. given that they contain r).
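Stationarity of Π_j can be verified numerically even with non-uniform weights; the sketch below checks that Π_0 is a left fixed point of the lazy upper walk on a small weighted graph (the edge weights are our toy choice; the lower walk is trivially stationary since every row equals Π_0):

```python
import numpy as np

# Weighted graph (1-dimensional complex) with non-uniform Pi_1 on edges.
edges = {(0, 1): 0.5, (1, 2): 0.3, (2, 0): 0.2}
n = 3
pi0 = np.zeros(n)
for (u, v), w in edges.items():
    pi0[u] += w / 2
    pi0[v] += w / 2

DU = np.zeros((n, n))                     # lazy upper walk on vertices
for (u, v), w in edges.items():
    for a, b in [(u, v), (v, u)]:
        p = w / (2 * pi0[a])              # pick an edge containing a, ~ Pi_1
        DU[a, a] += p / 2
        DU[a, b] += p / 2

print(np.allclose(pi0 @ DU, pi0))         # True: Pi_0 is stationary for the upper walk
```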
One property of a random walk is its laziness. Definition 4.3 (Laziness and non-lazy component). Let M be a random walk. The laziness of M is ℓz(M) := min_t M(t, t), the minimum probability of staying in place. We say that an operator is non-lazy if ℓz(M) = 0.
The non-lazy component M^+ of a walk M is given by M^+ := (M − ℓz(M) I)/(1 − ℓz(M)). It is easy to see that both walks have some laziness. In the upper walk, the laziness is 1/(j+2). We can decompose DU as

DU = (1/(j+2)) I + ((j+1)/(j+2)) M^+_j,    (4)

where M^+_j is the non-lazy version of DU, i.e., the operator representing the walk when conditioning on t′ ≠ t. The laziness of the lower walk depends on the simplicial complex itself, so it doesn't admit a simple decomposition in the general case.
Rearranging, (4) can be written as M^+_j = ((j+2)/(j+1)) DU − (1/(j+1)) I. A γ-HDX is a simplicial complex in which the non-lazy upper walk is similar to the lower walk.
Thus an equivalent way to state (3) is as follows.
Definition 4.4 (High-dimensional expander). Let X be a k-dimensional simplicial complex, and let γ < 1. We say that X is a γ-HDX if for all 0 ≤ j ≤ k − 1,

∥M^+_j − U_{j−1}D_j∥ ≤ γ.

This definition nicely generalizes spectral expansion in graphs, since if X is a graph, ∥M^+_0 − U_{−1}D_0∥ is the second largest eigenvalue (in absolute value) of the normalized adjacency random walk. In Section 5 we show that this definition is equivalent to the definition of high-dimensional two-sided local spectral expanders that was extensively studied [DK17, Opp18].
If γ < 1/(k+1) then any γ-HDX is proper, as shown by the following lemma.

Lemma 4.5. Let X be a k-dimensional γ-HDX with γ < 1/(k+1). Then ker U_j = {0} for all 0 ≤ j ≤ k − 1; in particular, X is proper.
Proof. To prove this, we directly calculate ⟨U_j f, U_j f⟩ and show that it is positive when f ≠ 0. By adjointness and (4),

⟨U_j f, U_j f⟩ = ⟨f, DU f⟩ = (1/(j+2)) ∥f∥^2 + ((j+1)/(j+2)) ⟨f, M^+_j f⟩,    (6)

and since X is a γ-HDX,

⟨f, M^+_j f⟩ ≥ ⟨f, UD f⟩ − γ∥f∥^2 = ∥Df∥^2 − γ∥f∥^2.

Plugging this in (6), we get

⟨U_j f, U_j f⟩ ≥ (1/(j+2)) (1 − (j+1)γ) ∥f∥^2 + ((j+1)/(j+2)) ∥Df∥^2.

The last part of the sum is non-negative, and since γ < 1/(k+1) ≤ 1/(j+1), the first part is positive for f ≠ 0. Hence ⟨U_j f, U_j f⟩ > 0.

Almost orthogonality of the decomposition in HDXs
In Section 6 we prove that the decomposition in Theorem 3.2 is "almost orthogonal". We summarize our results below.

Theorem 4.6. Let X be a k-dimensional γ-HDX, where γ < 1/(k+1). For every function f ∈ C^ℓ, for ℓ ≤ k, the decomposition f = f_{−1} + · · · + f_ℓ of Theorem 3.2 satisfies the following properties:
• The f_i are approximate eigenvectors of the upper walk: ∥DU f_i − λ_i f_i∥ = O(γ)∥f_i∥, where λ_i depends only on i and ℓ.
• The decomposition is almost orthogonal: |⟨f_i, f_j⟩| = O(γ)∥f_i∥∥f_j∥ for i ≠ j, and consequently ∥f∥^2 = Σ_i ∥f_i∥^2 ± O(γ) Σ_{i≠j} ∥f_i∥∥f_j∥.
The hidden constants in the O notations depend on k but not on the size of X.
This result is analogous to a similar result [KO20, Theorem 5.2] due to Kaufman and Oppenheim, in which a similar decomposition is obtained. However, whereas our decomposition is into functions f_{−1}, . . ., f_ℓ that all live in C_ℓ, the decomposition of Kaufman and Oppenheim [KO20] is into functions h_{−1}, . . ., h_ℓ, which live in different spaces. We expand further on the matter in Section 6.5.
The exact constants in the O notation of Theorem 4.6 were not calculated precisely, but the proof in Section 6 gives constants exponential in k.It is not clear whether this is tight.

High-dimensional expanders are two-sided link expanders
In Section 4 we defined γ-HDXs (see Definition 4.4). Earlier works, e.g. [EK16, DK17, KO20], gave a different definition of high-dimensional expanders, namely two-sided link expanders, based on the local link structure. We recall this other definition and prove that the two are equivalent.
Definition 5.1 (Link). Let X be a d-dimensional complex with an associated probability distribution Π_d on X(d), which induces probability distributions on X(−1), . . ., X(d − 1). For every i-dimensional face s ∈ X(i) with i < d − 1, the link of s, denoted X_s, is the simplicial complex X_s = {t \ s : t ∈ X, t ⊇ s}. We associate X_s with the weights ⃗Π_s induced by conditioning on containing s.

Definition 5.2 (Underlying graph). Let i < d − 1. Given s ∈ X(i), the underlying graph G_s is the weighted graph consisting of the first two levels of the link of s. In other words, G_s = (V, E), where V = X_s(0) and E = X_s(1). The weights on the edges are given by w_s({x, y}) = Pr_{t∼Π_{i+2}}[t = s ∪ {x, y} | t ⊇ s].
We can also consider directed edges by choosing a random orientation, and we define the weight of a vertex accordingly. We define an inner product for functions on vertices along the lines of Section 2, and denote by A_s the adjacency operator of the non-lazy upper walk on X_s(0); the corresponding quadratic form is ⟨A_s f, f⟩.

Definition 5.3 (Two-sided link expander). Let X be a simplicial complex, and let γ < 1 be some constant. We say that X is a γ-two-sided link expander (called a γ-HD expander in some works [DK17]) if every link X_s of X satisfies λ(A_s) ≤ γ.
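As an illustration (a sketch, not from the original text), the quantity λ(A_s) can be computed directly in the complete complex, where every underlying graph G_s is a complete graph on m = n − i − 1 vertices and λ(A_s) = 1/(m−1):

```python
import numpy as np

# Hypothetical instance: complete complex on n = 9 vertices, link of the i-face
# s = {0, 1} (i = 1).  The link's underlying graph is the complete graph K_m.
n, s = 9, (0, 1)
link_verts = [v for v in range(n) if v not in s]
m = len(link_verts)                           # m = n - i - 1 = 7

# Normalized adjacency operator of K_m = A_s (uniform weights).
A = (np.ones((m, m)) - np.eye(m)) / (m - 1)
eigs = np.sort(np.linalg.eigvalsh(A))
lam = max(abs(eigs[0]), abs(eigs[-2]))        # largest non-trivial eigenvalue
print(lam, 1.0 / (m - 1))                     # both equal 1/6
```

This is the calculation behind the claim, used below, that the complete complex is a 1/(n−d)-two-sided link expander.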
Dinur and Kaufman [DK17] proved that such expanders do exist, based on a result of Lubotzky, Samuels and Vishne [LSV05a].
Theorem 5.4 ([DK17, Lemma 1.5]).For every λ > 0 and every d ∈ N there exists an explicit infinite family of bounded degree d-dimensional complexes which are λ-two-sided link expanders.
We now prove that two-sided link expanders per Definition 5.3 and high-dimensional expanders per Definition 4.4 are equivalent.
Theorem 5.5 (Equivalence theorem). Let X be a d-dimensional simplicial complex.
1. If X is a γ-two-sided link expander, then X is a γ-HDX.
2. If X is a γ-HDX, then X is a 3dγ-two-sided link expander.
Proof. Item 1. Assume that X is a γ-two-sided link expander. We need to show that ∥M+_i − U_{i−1}D_i∥ ≤ γ for all i < d, where M+_i is the non-lazy upper walk. Let f be a function on X(i), where i < d. Let s = t \ {x, y}. Since t ∼ Π_{i+1} and x ≠ y ∈ t are chosen at random, we have s ∼ Π_{i−1}. Given such an s, the probability of obtaining a specific (t, x, y) is exactly w_s(x, y), where a factor 1/2 accounts for the relative order of x and y. In other words, we have expressed the quadratic form of M+_i as an average of the local forms ⟨A_s f_s, f_s⟩. Therefore we have, by (7), that if X is a γ-two-sided link expander then λ(A_s) ≤ γ for all s, and the claimed bound follows.

Item 2. Assume now that X is a γ-HDX. Our goal is to bound λ(A_r) for all i < d − 1 and r ∈ X(i). Using the convention that X(−1) consists of the empty set, for i = −1 we have A_∅ = M+_0, and U_{−1}D_0 is zero on the space perpendicular to the constant function. Thus λ(A_∅) ≤ γ follows from our assumption. Now assume 1 ≤ i ≤ d − 1, and fix some r ∈ X(i − 1). Let f : X_r(0) → R be an eigenfunction of A_r which is perpendicular to the constant function. In order to prove the theorem, we must bound the corresponding eigenvalue. Without loss of generality, we may assume that ∥f∥ = 1.
In order to obtain a bound on λ(A_r), we bound ⟨f, f⟩, ⟨M+_i f, f⟩, and ⟨UD f, f⟩ in terms of f and A_r. Observe that the norms of f and its lift to X(i) are proportional. Furthermore, from what we showed in (9), we obtain an expression in terms of the local forms ⟨A_{r'} f_{r'}, f_{r'}⟩, where f_{r'}(x) = f(r' ∪ {x}).
Fix some r' ≠ r. If f_{r'}(x) ≠ 0 then f(r' ∪ {x}) ≠ 0; in particular, this means that r ⊂ r' ∪ {x}. Since both r and r' are contained in r' ∪ {x}, we get r \ r' = {x}. Thus there is at most one vertex x ∈ X_{r'}(0) such that f_{r'}(x) ≠ 0. Since A_{r'} is a non-lazy operator, this implies that ⟨A_{r'} f_{r'}, f_{r'}⟩ = 0. We are left with the term coming from r itself; in other words, the upper non-lazy random walk is proportional to the local adjacency operator.
We prove the following claim, which shows that the lower walk scales f by a factor of at most (i/(i+1))γ. Assuming the claim, we combine the bounds: the equalities in the first line use (10) and (11), and the inequalities in the second line use Claim 5.6, our assumption that ∥M+_i − U_{i−1}D_i∥ ≤ γ, and the triangle inequality.
We complete the proof of Theorem 5.5 by proving Claim 5.6.

Proof of Claim 5.6. Since U_{i−1}D_i is PSD, we have ⟨U_{i−1}D_i f, f⟩ ≥ 0, and so we may remove the absolute value. Consider the inner product ⟨D_i f, D_i f⟩: this is the expectation upon choosing r' ∼ Π_{i−1}, and then choosing two i-faces s_1, s_2 ∈ X(i) containing it independently. Hence we decompose into the cases r' = r and r' ≠ r. The first term is 0: by the independence of s_1 and s_2 it factors into a product of expectations, each of which vanishes since by assumption f is perpendicular to constant functions.
We saw above that for any r' ≠ r, there is at most one i-face containing r' (namely s = r ∪ r') such that f(s) ≠ 0. Thus for any r' ≠ r, the value f(s_1)f(s_2) is non-zero only when s_1 = s_2 = r ∪ r'. For every s_1 ∈ X(i), we define the event E_{s_1} to hold when s_2 = s_1; that is, E_{s_1} is the event that the lower walk traverses a triple (s_1, r', s_1). Note that if s_1 does not contain r then f^2(s_1) = 0, so we may take the expectation over all s_1 ∈ X(i), even though some of the terms vanish.
Thus we are left with proving the following statement: for every s_1 ∈ X(i), the probability of E_{s_1} is suitably small. We first bound the unconditioned probability Pr[E_{s_1}]. Fix s_1 ∈ X(i), and let 1_{s_1} : X(i) → R be its indicator. Notice that U_{i−1}D_i 1_{s_1}(s_1) = Pr_{r',s_2}[s_2 = s_1]. We again use the non-laziness property of M+_i to assert that ⟨M+_i 1_{s_1}, 1_{s_1}⟩ = 0; hence Pr[E_{s_1}] ≤ γ. Consider now any s_1 ∈ X(i) containing r, and let r' be a random (i − 1)-face contained in s_1. The probability that r' ≠ r is i/(i+1), which yields the claimed bound of (i/(i+1))γ.

Tightness of Theorem 5.5
In the remainder of the section we give two counterexamples that show that the dependence of Theorem 5.5 on λ and d is tight up to constants.
To show the tightness of the first item of the theorem, namely that there are simplicial complexes where ∥M+_i − UD∥ ≈ γ, it is enough to consider the complete d-dimensional simplicial complex on n vertices. On the one hand, the local links of the complete complex are complete graphs on n − d + 1 vertices (or more), so it is a 1/(n−d)-two-sided link expander. On the other hand, M+_d − UD can be computed directly in terms of J(n, d + 1), the adjacency matrix of the Johnson graph. The eigenvalues of J(n, d + 1) are (d + 1 − j)(n − d − 1 − j) − j for j = 0, . . ., d + 1 (see [BCN89]), and the eigenvalues of M+_d − UD follow from them.
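The stated spectrum of the Johnson graph can be checked numerically; the following sketch (illustrative, not from the original text) builds J(n, d+1) for a hypothetical small instance and compares its eigenvalues with the formula (d + 1 − j)(n − d − 1 − j) − j:

```python
import itertools
import numpy as np

# Johnson graph J(n, k), k = d + 1: vertices are the k-subsets of [n],
# adjacent iff their intersection has size k - 1.
n, d = 7, 2
k = d + 1
V = list(itertools.combinations(range(n), k))
A = np.array([[1.0 if len(set(a) & set(b)) == k - 1 else 0.0 for b in V]
              for a in V])

computed = {round(x) for x in np.linalg.eigvalsh(A)}
predicted = {(k - j) * (n - k - j) - j for j in range(k + 1)}
assert computed == predicted        # {12, 5, 0, -3} for n = 7, d = 2
```

Normalizing by the degree k(n − k) converts these into the eigenvalues of the non-lazy walk on top-level faces.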
The spectral norm of M+_d − UD is attained at j = d + 1.

As for the second item, we show a sequence of complexes X_n with a disconnected link (i.e. they are not λ-two-sided link expanders for any λ < 1) in which the upper and lower walks are nevertheless spectrally close. The complex X_n is obtained by removing a few faces from the d-dimensional complete complex, so that there is a cut in a single link: we remove the top-level faces that contain r_0 = {1, . . ., d − 1} together with two vertices of different parity, and take Π_d to be uniform.
One can observe that the link of r 0 = {1, 2, . . . ,d − 1} contains two connected components -the even vertices and the odd vertices.
We remark that instead of removing this set of faces, we could reduce their weight significantly, so that the link of r_0 remains connected but is not an expander. We next claim that the upper and lower walks are spectrally similar.
Claim 5.7. Let d > 1, and let X_n be the sequence of simplicial complexes defined above. Then the non-lazy upper and lower walks on X_n satisfy ∥M+ − UD∥ = o_n(1).

Proof. We fix n and denote X = X_n, and for simplicity of computation we assume n − d is odd. We need to show that the quadratic form of M+ − UD is small for any f. We decompose the inner product into local inner products on the links, as in (9), where f_r : X_r(0) → R is defined by f_r(x) = f(r ∪ {x}). For every r ≠ r_0, Claim 5.8 below bounds the local term; substituting this in (13) leaves only the term coming from r_0, which we bound directly. We prove Claim 5.8 below.
Proof of Claim 5.8.Denote m = n − d + 1, and without loss of generality, assume m is even.
Let r ≠ r_0. If |r \ r_0| ≥ 3, then X_r is the complete graph on m vertices and is a 1/(m−1)-two-sided spectral expander.
If |r \ r_0| = 2 then X_r is one of the following:
1. The complete graph, when x, y ∈ r \ r_0 have the same parity.
2. A complete graph with only one edge missing, when x, y ∈ r \ r_0 have different parity.
In both cases these are O(1/m)-spectral expanders (skipping a short calculation in the second case). If |r \ r_0| = 1 then the graph X_r is obtained by taking the complete graph and removing m/2 of the edges adjacent to the vertex v_0 ∈ r_0 \ r. We claim that this graph is still an o_n(1)-spectral expander.
The vertex v_0 is connected to m/2 vertices; thus there are m/2 − 1 vertices which are connected to all vertices besides themselves and v_0, and m/2 vertices which are connected to all vertices besides themselves. The adjacency operator for this graph is a block matrix, formed by partitioning the rows and the columns into three parts, of sizes 1, m/2 − 1, m/2, respectively, where K is the adjacency matrix of a complete graph and J is the all-1 matrix.
The eigenvectors which are non-constant on the blocks are handled directly. The remaining three eigenvalues correspond to eigenvectors which are constant on the blocks; a straightforward calculation shows that these eigenvalues are roots of a cubic polynomial. The trivial eigenvalue λ = 1 corresponds to the constant eigenvector, and the other two are roots of a quadratic, whose roots can be computed explicitly.

Expanding posets (eposets)
In this section, we describe a setting generalizing simplicial complexes, namely measured posets. These are partially ordered sets (a set X with a partial order ≤ on it) whose elements are partitioned into levels X(j), and which have some additional properties stated below. As in simplicial complexes, we can define C_j as the space of real-valued functions on X(j), and averaging operators U_j : C_j → C_{j+1} and D_{j+1} : C_{j+1} → C_j. We generalize the notion of a γ-HDX to a γ-expanding poset (eposet): a measured poset with operators D_j, U_j such that ∥D_{j+1}U_j − r_j I − δ_j U_{j−1}D_j∥ ≤ γ for γ < 1, all non-extreme levels j of the poset, and some constants r_j, δ_j. We begin the section by discussing the formal notion of an eposet. We then generalize Theorem 4.6 to all eposets, and prove it in the general setting. Finally, we show that if our measured poset is a simplicial complex, then r_j ≈ 1/(j+2) and δ_j ≈ 1 − 1/(j+2), under the assumption that the laziness of the lower walk is small.
2. For every x, y ∈ X, if y is minimal among the elements greater than x (i.e. x ≤ y, x ≠ y), then ρ(y) = ρ(x) + 1.

We denote the set of elements of rank j by X(j). We assume that there is a unique element of minimal rank, which we denote by ∅, so that X(−1) = {∅}.
We say that a graded poset is d-dimensional if the maximal rank of any element in X is d. We say that a d-dimensional graded poset is pure if all maximal elements are of rank d, that is, for every t ∈ X there exists s ∈ X(d) such that t ≤ s. For example, any simplicial complex is a graded poset, if we take ≤ to be the containment relation and ρ to be the cardinality of a face minus one. Another useful example to keep in mind is the Grassmann poset Gr_q(n, d), whose elements are the subspaces of dimension at most d + 1 of F_q^n, ordered by containment. The rank function for the Grassmann poset is ρ(U) = dim(U) − 1, and so X(j) = {U ⊆ F_q^n : dim(U) = j + 1}.
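As a concrete illustration (a sketch, not from the original text), the level X(1) of Gr_2(4, d) can be enumerated explicitly: encoding vectors of F_2^4 as 4-bit integers with XOR as addition, the 2-dimensional subspaces are exactly the spans of pairs of distinct nonzero vectors.

```python
from itertools import combinations

# Nonzero vectors of F_2^4, encoded as bitmasks 1..15.
nonzero = range(1, 16)

# Over F_2, span{u, v} = {0, u, v, u ^ v} for any two distinct nonzero u, v.
planes = {frozenset({0, u, v, u ^ v}) for u, v in combinations(nonzero, 2)}

# The count matches the Gaussian binomial [4 choose 2]_2 = (15 * 7)/(3 * 1) = 35.
print(len(planes))   # 35
```

Each plane is counted once even though it arises from 3 different pairs of its nonzero vectors, which is why 105 pairs collapse to 35 subspaces.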
Definition 6.1 (Measured poset). Let X be a finite graded pure d-dimensional poset, with a unique minimum element ∅ of rank −1. We say that X is measured by a (joint) distribution ⃗Π = (Π_d, . . ., Π_{−1}) over the levels of X if the sequence Π_d, . . ., Π_{−1} has the Markov property: Π_{i−1} depends only on Π_i for all i > −1.

We denote the real-valued function spaces on X(j) by C_j, and the averaging operators of the steps in the Markov process by U_j : C_j → C_{j+1} and D_{j+1} : C_{j+1} → C_j. The operators U_j and D_{j+1} are adjoint with respect to the induced inner products, and thus both U_j D_{j+1} and D_{j+1} U_j are positive semi-definite, since, for example, ⟨D_{j+1}U_j f, f⟩ = ⟨U_j f, U_j f⟩ ≥ 0 for all f.

The process we defined for the distribution ⃗Π in a simplicial complex is an example of a measured poset. For the Grassmann poset mentioned above, we also have a similar probabilistic experiment:
1. Choose a subspace of dimension d + 1, s_d ∈ X(d), uniformly at random.
2. Given a subspace s i of dimension i + 1, choose s i−1 ∈ X(i − 1) to be a uniformly random codimension 1 subspace of s i .
An analog of Theorem 3.2 holds for any measured poset. We say that a k-dimensional measured poset X is proper if ker U_j = {0} for all j ≤ k − 1. Also, as before, we denote by H_i = ker D_i the harmonic spaces and by V_i their images under the iterated up operator.

Theorem 6.2. If X is a proper k-dimensional measured poset then we have the following decomposition of C_k: for every function f ∈ C_k there is a unique choice of h_i ∈ H_i such that f = f_{−1} + · · · + f_k, where f_i = U^{k−i} h_i.

Proof. We first prove by induction on ℓ that every function f ∈ C_ℓ has a representation f = ∑_{i=−1}^{ℓ} U^{ℓ−i} h_i, where h_i ∈ H_i. This trivially holds when ℓ = −1. Suppose now that the claim holds for some ℓ < k, and let f ∈ C_{ℓ+1}. Since D_{ℓ+1} : C_{ℓ+1} → C_ℓ is a linear operator, we can decompose C_{ℓ+1} into ker D_{ℓ+1} ⊕ (ker D_{ℓ+1})^⊥. It is well known that (ker D_{ℓ+1})^⊥ = im D*_{ℓ+1}, so we have C_{ℓ+1} = ker D_{ℓ+1} ⊕ im D*_{ℓ+1} = ker D_{ℓ+1} ⊕ im U_ℓ, and therefore we can write f = h_{ℓ+1} + Ug, where h_{ℓ+1} ∈ H_{ℓ+1} and g ∈ C_ℓ. Applying induction, we get that g = ∑_{i=−1}^{ℓ} U^{ℓ−i} h_i, where h_i ∈ H_i. Substituting this in f = h_{ℓ+1} + Ug completes the proof.
It remains to show that the representation is unique. Since X is proper, ker U_j = {0} for all j, so the map sending (h_{−1}, . . ., h_k) to ∑_i U^{k−i} h_i is not only surjective but also injective. In other words, the representation of f is unique.
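The proof above is constructive, and can be illustrated numerically (a sketch, not from the original text): on the complete graph with uniform weights, a function on edges decomposes as f = h_1 + U(h_0 + h_{−1}), where h_1 ∈ ker D_1 is the residual of projecting f onto im U, and h_{−1} is the constant part of the vertex function.

```python
import itertools
import numpy as np

# Hypothetical instance: edges of the complete graph on n = 6 vertices (X(1)).
n = 6
edges = list(itertools.combinations(range(n), 2))

# U_0 as a matrix: (U g)(edge) = average of g over the edge's two endpoints.
A = np.zeros((len(edges), n))
for e, (u, v) in enumerate(edges):
    A[e, u] = A[e, v] = 0.5

rng = np.random.default_rng(0)
f = rng.standard_normal(len(edges))

# Project f onto im U (uniform weights make this a Euclidean projection).
g, *_ = np.linalg.lstsq(A, f, rcond=None)
h1 = f - A @ g            # residual: lies in ker D_1 = H_1
h_m1 = g.mean()           # constant part of g: the H_{-1} component
h0 = g - h_m1             # mean-zero part: lies in ker D_0 = H_0

# Verify: h1 is harmonic, and f is exactly reconstructed.
assert np.allclose(A.T @ h1, 0)                  # equivalent to D_1 h1 = 0
assert np.allclose(h1 + A @ (h0 + h_m1), f)
```

Here `lstsq` plays the role of the orthogonal decomposition C_{ℓ+1} = ker D_{ℓ+1} ⊕ im U_ℓ used in the proof.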

Sequentially differential posets
Sequentially differential posets were first defined and studied (in a slightly different form) by Stanley [Sta88, Sta90].

Definition 6.3 (Sequentially differential posets). Sequentially differential posets are measured posets whose averaging operators U, D satisfy D_{j+1}U_j = r_j I + δ_j U_{j−1}D_j for some r_j, δ_j ∈ R≥0 and all 0 ≤ j ≤ k − 1.
For example, the complete complex satisfies this definition, and the Grassmann poset Gr_q(n, d) is also a sequentially differential poset. Both follow from the claim below, which the reader can verify by direct calculation.

Claim 6.4. Let X be a measured poset, and suppose we can decompose D_{i+1}U_i = α_i I + (1 − α_i)M_i and U_{i−1}D_i = β_i I + (1 − β_i)M_i, where 0 ≤ α_i, β_i ≤ 1 are constants and M_i is some operator. Then D_{i+1}U_i = r_i I + δ_i U_{i−1}D_i, where δ_i = (1 − α_i)/(1 − β_i) and r_i = α_i − δ_i β_i.

In both the complete complex and the Grassmann poset Gr_q(n, d), the non-lazy upper walk and the non-lazy lower walk are the same: given t_1 ∈ X(i), our choice for t_2 ∈ X(i) is a set (or subspace in the Grassmann case) that shares an intersection of size (resp. dimension) i with t_1, chosen with uniform probability. The only difference between DU and UD is the probability of staying in place. Thus we can decompose both walks as in Claim 6.4, where M_i is the non-lazy upper (or lower) random walk.

We relax Definition 6.3 to an almost sequentially differential poset, a measured poset that approximately satisfies such an identity.

Definition 6.5 (Expanding poset). Let ⃗r, ⃗δ ∈ R^k≥0, and let γ < 1. We say that X is an (⃗r, ⃗δ, γ)-expanding poset (or (⃗r, ⃗δ, γ)-eposet) if ∥D_{j+1}U_j − r_j I − δ_j U_{j−1}D_j∥ ≤ γ for all j ≤ k − 1.

A sequentially differential poset is an eposet with γ = 0. As we saw in (3), a γ-HDX is an (⃗r, ⃗δ, γ)-eposet, where r_j = 1/(j+2) and δ_j = 1 − 1/(j+2). We can use (15) to assert that Gr_q(n, d) is an (⃗r, ⃗δ, γ)-eposet for r_i = (q−1)/(q^{i+2}−1), δ_i = 1 − r_i, and γ = O(1/q^{n−d}). While this only shows that Gr_q(n, d) is an eposet (even though it is truly sequentially differential), the parameters are much simpler, which makes calculations involving the random walks easier (see for instance the calculations in Section 6.6).
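Claim 6.4 can be verified numerically for the complete complex (a sketch, not from the original text): there the upper-walk laziness is α_j = 1/(j+2), the lower-walk laziness is β_j = 1/(n−j), the two walks share the same non-lazy part, and the sequentially differential identity DU = rI + δUD holds exactly.

```python
import itertools
import numpy as np

# Hypothetical instance: complete complex on n = 6 vertices, level j = 1.
n, j = 6, 1

def step(i, k):
    """Markov matrix of one up or down step from X(i) to X(k), |i - k| = 1."""
    src = list(itertools.combinations(range(n), i + 1))
    dst = list(itertools.combinations(range(n), k + 1))
    M = np.zeros((len(src), len(dst)))
    for a, t in enumerate(src):
        cols = [b for b, s in enumerate(dst)
                if set(t) <= set(s) or set(s) <= set(t)]
        M[a, cols] = 1.0 / len(cols)
    return M

DU = step(j, j + 1) @ step(j + 1, j)     # upper walk on X(j)
UD = step(j, j - 1) @ step(j - 1, j)     # lower walk on X(j)

alpha, beta = 1 / (j + 2), 1 / (n - j)   # the two laziness values
delta = (1 - alpha) / (1 - beta)         # parameters from Claim 6.4
r = alpha - delta * beta
assert np.allclose(DU, r * np.eye(DU.shape[0]) + delta * UD)
```

For n = 6 and j = 1 this gives r = 1/6 and δ = 5/6, consistent with r_j → 1/(j+2) and δ_j → 1 − 1/(j+2) as n grows.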

Almost orthogonality of decomposition
In this section we show that in an eposet, the spaces V_i are almost orthogonal to one another. Moreover, we show that these spaces are "almost eigenspaces" of the operator DU.

Theorem 6.6. Let X be a k-dimensional (⃗r, ⃗δ, γ)-eposet. For every function f on C_ℓ for ℓ ≤ k, the decomposition f = f_{−1} + · · · + f_ℓ of Theorem 6.2 satisfies the following properties, when γ is small enough (as a function of k and the eposet parameters). The hidden constants in the O notations depend only on k and the eposet parameters (⃗r, ⃗δ, γ), and not on the eposet size |X|. In particular, the last item implies that if ⃗r > 0 then for a small enough γ, the poset is proper.
In a measured poset, the decomposition of Theorem 6.2 is not necessarily orthogonal.However, this theorem shows that for an eposet, the decomposition is almost orthogonal.
Remark 6.7. In the special case of a sequentially differential poset, i.e. γ = 0, the decomposition in Theorem 6.2 is exactly orthogonal, and the f_i are exact eigenvectors with the eigenvalues r^ℓ_i given in (17).
Recall our convention that for f ∈ C_{ℓ−j}, we write U^j f for the result of applying U j times. We first show how the third item in Theorem 6.6 implies the rest. The following proposition says that the decomposition in Theorem 6.2 is a decomposition into "approximate eigenspaces" of UD. We postpone its proof to the end of this section, and use it first to obtain the full statement of Theorem 6.6.
Proposition 6.8. Let X be an (⃗r, ⃗δ, γ)-eposet, and let h ∈ H_{ℓ−j}. Then U^j h ∈ V_{ℓ−j} is an approximate eigenvector of D_{ℓ+1}U_ℓ with eigenvalue r^ℓ_{j+1}.

We proceed by showing that these approximate eigenspaces V_j are approximately orthogonal.
Proof. Assume without loss of generality that i > j. To prove the statement we use Proposition 6.8 and induction on m = ℓ − i. The base case m = ℓ − i = 0 (i.e. ℓ = i) follows from the fact that h_ℓ is orthogonal to f_{ℓ−j} for any j ≥ 1, since the latter lies in the image of U. Assuming the statement holds for m, we show it for m + 1. Let f_i, f_j be as above. Since ∥D U^{ℓ−i} h_i − r^ℓ_{ℓ−i+1} U^{(ℓ−1)−i} h_i∥ = O(γ)∥h_i∥, by Cauchy–Schwarz the inner product is upper bounded by O(γ)∥h_i∥·∥U^{(ℓ−1)−j} h_j∥ + r^ℓ_{ℓ−i+1} ⟨U^{(ℓ−1)−i} h_i, U^{(ℓ−1)−j} h_j⟩.
By the induction hypothesis on m = ℓ − 1 − i ≥ 0, this is at most the claimed bound. The Up operator is an averaging operator, so ∥U^{(ℓ−1)−j} h_j∥ ≤ ∥h_j∥, and the lemma follows.
The preceding lemma gives an error estimate in terms of the norms ∥h_i∥. The following lemma enables us to express the error in terms of the norms ∥f_i∥ instead.

Lemma 6.10.
Proof. By direct calculation with Proposition 6.8, we obtain that for any h ∈ ker D the iterated walk acts as multiplication by ρ^ℓ_j up to remainder terms Γ_t with ∥Γ_t∥ = O(γ)∥h∥ for all t. Thus ∥D^j U^j h − ρ^ℓ_j h∥ = O(γ)∥h∥.
Hence the bound follows using Cauchy–Schwarz. Combining Lemma 6.9 and Lemma 6.10, we obtain the following corollary, which proves the first item of Theorem 6.6.
As a consequence, we obtain an approximate L2 mass formula, constituting the second item of Theorem 6.6.

Corollary 6.12. Under the conditions of Corollary 6.11, for every i ≤ j the approximate mass formula holds, where the hidden constant depends only on k, ⃗r, ⃗δ.
In particular, we may swallow a factor of j − i + 1 in the last inequality.
The fourth item of Theorem 6.6 follows from the preceding ones.

Corollary 6.13. Under the conditions of Corollary 6.11, we can bound the magnitude of the second term using Cauchy–Schwarz together with the second item.
For every i, we can bound ⟨ f i , f ⟩ by using the first two items.
Substituting both bounds in (18) and using the second item again completes the proof. We turn to proving Proposition 6.8. It follows directly from a technical claim that generalizes the approximate relation between D and U to an approximate relation between D and U^j, for appropriate constants r, δ ∈ R.
The induced (joint) distribution on the link, ⃗Π_{X_s} = (Π_{X_s,d−i−1}, . . ., Π_{X_s,−1}), is defined as follows: the probability of sampling t in X_s is the probability of sampling it in X, given that s was sampled from the i-th level.
We denote by U^s_j, D^s_j the upper and lower walks on X_s starting from X_s(j). We further denote by M+,s_j the non-lazy upper walk on X_s starting from X_s(j). When s = ∅, that is, X_s = X, we simply write M+_j.
We define a two-sided link expander poset analogously to the definition for simplicial complexes.

Definition 6.16. Let X be a measured poset. We say that X is a γ-two-sided link expander if for every i ≤ d − 2 and every s ∈ X(i) it holds that λ(M+,s_0) ≤ γ, where λ(M+,s_0) is the second largest eigenvalue of M+,s_0 in absolute value, which is also equal to ∥M+,s_0 − U^s_{−1}D^s_0∥.
Our main theorem is that for a special class of measured posets, called decomposable posets, the above definition is an equivalent characterization of an eposet. To that end, we first show that Definition 6.5 has an alternate characterization (see Definition 6.19) when the laziness is small.

Laziness and an alternate characterization of eposets
In this section, we show that if the laziness of the upper and lower walks of the eposet is small, then there is an alternate, more convenient characterization of eposets in terms of ∥U_{i−1}D_i − M+_i∥. We first need some definitions.

Definition 6.17 (Laziness of an eposet). Let M be a random walk on the set V. We say that M is α-lazy for some α ∈ (0, 1) if for every t ∈ V we have M(t, t) ≤ α. If, furthermore, the operator M can be decomposed as M = αI + (1 − α)M+, then we say that M is α-uniformly lazy. In other words, the walk M is an (α, 1 − α) convex combination of the lazy component I and the non-lazy component M+.
Let X be a measured eposet.We say that the upper walk DU is ⃗ α-uniformly lazy for some vector ⃗ α = (α 0 , α 1 , . . ., α d−1 ) if each of the upper walks D i+1 U i are α i -uniformly lazy.
If α i ≤ α for all i ≥ 0 for some α ∈ (0, 1), we then say that the upper walk of X is α-uniformly lazy.
Lemma 6.18. Let γ ∈ (0, 1/2). Let X be a d-dimensional measured poset whose lower walk UD is γ-lazy and whose upper walk is 1/2-uniformly lazy. Then the eposet parameters satisfy r_i ≈ α_i and δ_i ≈ 1 − α_i.

Proof. Since the upper walk of X is 1/2-uniformly lazy, there exists ⃗α = (α_0, α_1, . . ., α_{d−1}) such that D_{i+1}U_i = α_i I + (1 − α_i)M+_i for all i ∈ {−1, 0, . . ., d − 1}. We apply D_{i+1}U_i − r_i I − δ_i U_{i−1}D_i to the constant vector 1, which is fixed by all of D_{i+1}U_i, I, and U_{i−1}D_i because they are averaging operators. This gives a bound on |1 − r_i − δ_i|. Next, we fix an arbitrary element s ∈ X(i), let f = 1_s be the function that equals 1 on s and 0 elsewhere, and use the γ-laziness of the lower walk to bound the diagonal entries. We can now lower- and upper-bound δ_i. Let us write A ≈ B to mean ∥A − B∥ ≤ O(γ). We have seen that δ_i + r_i ≈ 1 and that α_i ≈ r_i, and we conclude the claimed characterization.

Many posets satisfy the mild requirements of Lemma 6.18. For example, in the (d + 1)-dimensional complete complex, the lower walk is 1/(n − d + 1)-lazy, and the upper walk is (1, 1/2, . . ., 1/d)-uniformly lazy, and so 1/2-uniformly lazy. Similarly, in the (d + 1)-dimensional Grassmann complex, the lower walk is (q−1)/(q^{n−d+1}−1)-lazy, and the upper walk is (1, (q−1)/(q^2−1), . . ., (q−1)/(q^d−1))-uniformly lazy, and so 1/(q + 1)-uniformly lazy.
In Definition 6.5, we defined an (⃗r, ⃗δ, γ)-eposet to be a poset where ∥DU − rI − δUD∥ ≤ γ. The above lemma states that this is equivalent to a bound on ∥M+ − UD∥, provided the lower walk UD is γ-lazy and the upper walk is 1/2-uniformly lazy. This justifies the following equivalent definition of a γ-eposet.

Decomposable posets
The measured posets we consider in this section have a lattice-like property which we call decomposability.
To define decomposable posets, we first need the notion of a modular lattice. Given a poset X and elements s_1, s_2 ∈ X, the join s_1 ∨ s_2 of s_1, s_2 is an element t such that s_1, s_2 ≤ t, and t ≤ r whenever s_1, s_2 ≤ r. If the join exists then it is unique. The meet s_1 ∧ s_2 is defined analogously, with ≤ replaced by ≥. In a simplicial complex, join corresponds to union, and meet to intersection. A graded lattice is said to be modular if ρ(x) + ρ(y) = ρ(x ∨ y) + ρ(x ∧ y) for all x, y ∈ X.

Definition 6.20 (Decomposable measured posets). Let X be a measured poset. We say that X is decomposable if the following conditions hold:
1. X is a modular lattice. In particular, s_1, s_2 ∈ X(i) have a join in X(i + 1) iff they have a meet in X(i − 1).
2. For any s_1, s_2 ∈ X(i) with meet r ∈ X(i − 1), a compatibility condition on the walk probabilities holds.

As an example, the Grassmann poset is decomposable. Indeed, it is well known to be modular, since dim(s_1) + dim(s_2) = dim(s_1 + s_2) + dim(s_1 ∩ s_2). As for the second condition, it automatically holds whenever the non-lazy upper and lower walks coincide on all links, which is the case for the Grassmann poset. This is because the second condition is easily seen to hold if we replace the non-lazy upper walks with non-lazy down walks. It would be interesting to find other decomposable posets.
One possible source, suggested by an anonymous referee, is the poset of flats of certain matroids.We leave this as a direction for future study.
Another way to obtain a decomposable measured poset is to start with one and introduce weights on the top level.This is a generalization of the special case of simplicial complexes, in which we consider an arbitrary distribution on the top facets.We describe this construction in detail in Section 6.4.3.
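The modularity identity underlying decomposability can be checked concretely for the Grassmann poset (a sketch, not from the original text): encoding vectors of F_2^4 as bitmasks with XOR as addition, we verify dim(s_1) + dim(s_2) = dim(s_1 + s_2) + dim(s_1 ∩ s_2) for all pairs of 2-dimensional subspaces.

```python
from itertools import combinations

def span(vecs):
    """Enumerate the F_2-span of the given bitmask vectors."""
    s = {0}
    for v in vecs:
        if v not in s:
            s |= {x ^ v for x in s}   # double the subspace by adding v
    return s

def dim(subspace):
    return len(subspace).bit_length() - 1   # |S| = 2^dim over F_2

# All 2-dimensional subspaces ("planes") of F_2^4.
nonzero = range(1, 16)
planes = {frozenset(span([u, v])) for u, v in combinations(nonzero, 2)}

for s1 in planes:
    for s2 in planes:
        meet = s1 & s2                # intersection of subspaces is a subspace
        join = span(s1 | s2)          # sum of the two subspaces
        assert dim(s1) + dim(s2) == dim(join) + dim(meet)
```

Planes in F_2^4 meet in dimension 0, 1, or 2, and the identity forces the join to have dimension 4, 3, or 2 respectively, which is what the loop confirms.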
We are now ready to state and prove the main theorem of this section.

Theorem 6.21 (Equivalence of link expansion and random-walk expansion). Let X be a d-dimensional measured poset which is decomposable, whose lower walk UD is γ-lazy, and whose upper walk is 1/2-uniformly lazy.
1. If X is a γ-two-sided link expander, then X is a γ-eposet.
Before proving the theorem, let us calculate the values of η and β for simplicial complexes and for the Grassmann poset Gr q (n, d).
We continue with X = Gr_q(n, d). For any r ∈ X(i) and s ∈ X(i + 1) such that r ≤ s, the relevant quantities can be computed directly.

Proof. Item 1. Assume that X is a γ-two-sided link expander. We show that ∥M+_i − U_{i−1}D_i∥ ≤ γ for all i < d. Let f be a function on X(i), where i < d. By decomposability, the quadratic form decomposes into local terms, and for every r ∈ X(i − 1) the local term is controlled by λ(M+,r_0). Therefore, if X is a γ-two-sided link expander, then λ(M+,r_0) ≤ γ for all r, and the claimed bound follows.

Item 2. Assume now that X is a γ-eposet, so that ∥M+_i − U_{i−1}D_i∥ ≤ γ. Our goal is to bound λ(M+,r_0) for all i < d − 1 and r ∈ X(i). Using the convention that X(−1) consists of a single element ∅, for i = −1 we have M+,∅_0 = M+_0, and U_{−1}D_0 is zero on the space perpendicular to the constant function. Thus λ(M+,∅_0) ≤ γ follows from our assumption. Now assume 1 ≤ i ≤ d − 1, and fix some r ∈ X(i − 1). Let f : X_r(0) → R be an eigenfunction of M+,r_0 which is perpendicular to the constant functions. In order to prove the theorem, we must bound ⟨M+,r_0 f, f⟩. Without loss of generality, we may assume that ∥f∥ = 1. In order to obtain a bound on λ(M+,r_0), we bound ⟨f, f⟩, ⟨M+_i f, f⟩, and ⟨UD f, f⟩ in terms of f and M+,r_0. We note that by Bayes' theorem, the relevant norms are proportional. Furthermore, from what we showed in (21), we obtain a local expression for the quadratic form. Fix some r' ≠ r. If f ≠ 0 on the link of r', then some s ∈ X(i) satisfies both r ≤ s and r' ≤ s; this s must be the join of r and r', and so it is unique. Since M+,r'_0 is a non-lazy operator, this implies that ⟨M+,r'_0 f, f⟩ = 0. We are left with the term coming from r itself; in other words, the upper non-lazy random walk is proportional to the local adjacency operator.
We now prove the following claim, which shows that the lower walk scales f by a factor of at most (i/(i+1))γ. Assuming the claim, we combine the bounds: the first line uses (22) and (23), and the second line uses Claim 6.22, our assumption that ∥M+_i − U_{i−1}D_i∥ ≤ γ, and the triangle inequality.
We complete the proof of Theorem 6.21 by proving Claim 6.22.

Proof of Claim 6.22. Since UD is PSD, we have ⟨U_{i−1}D_i f, f⟩ ≥ 0, and so we may remove the absolute value. Consider the inner product ⟨D_i f, D_i f⟩: this is the expectation upon choosing r' ∼ Π_{i−1}, and then choosing two i-faces s_1, s_2 ∈ X(i) containing it independently. Hence we decompose into the cases r' = r and r' ≠ r.

Constructing decomposable posets
Let X be a measured poset given by the distribution ⃗Π = (Π_d, . . ., Π_{−1}). Given a distribution D on X(d), we can construct a different distribution ⃗Ψ by first sampling x ∼ D, and then sampling from ⃗Π conditioned on Π_d = x.

Lemma 6.23. If X is decomposable with respect to ⃗Π, then it is decomposable with respect to ⃗Ψ.
Proof. Only the second condition depends on the measure. Let us spell it out in the case of the original distribution ⃗Π.
We are given s_1, s_2 ∈ X(i) with meet r ∈ X(i − 1) and join t ∈ X(i + 1) (which is the only way for the upper walk to get from s_1 to s_2), and we know that two expressions for the transition probability are equal. Writing these two expressions slightly differently, in order for X to be decomposable with respect to ⃗Ψ, we need them to coincide when replacing Π with Ψ throughout. Yet due to the definition of Ψ, this only replaces the Pr[Π_{i+1} = t] factors with Pr[Ψ_{i+1} = t] factors. Hence X is decomposable with respect to ⃗Ψ as well.

Comparison to the Kaufman-Oppenheim decomposition
Kaufman and Oppenheim [KO20] proposed a decomposition of C_k into orthogonal spaces in the case of high-dimensional expanders. Their definition extends to the general eposet setting. Since U = D*, these spaces admit an equivalent definition via harmonic conditions similar to ours. By construction, these spaces are orthogonal, and it is easy to see that their direct sum is indeed C_k. Kaufman and Oppenheim [KO20, Theorem 1.5] showed that the subspaces B_i are approximate eigenspaces of M+.
The following proposition shows that these two families of spaces are close.

Proposition 6.24. If f ∈ V_i has unit norm then there exists g ∈ B_i such that ∥f − g∥ = O(γ).
Similarly, if g ∈ B i has unit norm then there exists f ∈ V i so that ∥ f − g∥ = O(γ).
The O notation may depend on k, r, δ only.
The existence of sub-posets as above is the vector-space analog of the existence of γ-HDX simplicial complexes. Moreover, it would be interesting to construct such a poset in which Π_0 and Π_d are uniform.
Note however that even in the known constructions for γ-HDX simplicial complexes, Π d is not uniform (but Π 0 is uniform).
Moshkovitz and Raz [MR08] gave a construction that can be viewed as an interesting step in this direction.They constructed, towards a derandomized low degree test, a small set of planes by choosing only planes spanned by directions coming from a smaller field H ⊂ F q .

Eposet parameters in a simplicial complex
Although the definition of an (approximately) sequentially differential poset allows a range of parameters ⃗r and ⃗δ, these parameters turn out to be determined by the laziness of the upper walks, assuming that the lower walks are sufficiently non-lazy. The lemma below shows that any family of simplicial complexes which are eposets has parameters ⃗r, ⃗δ approaching r_j = 1/(j+2) and δ_j = 1 − 1/(j+2) as γ goes to zero.
Lemma 6.30. Let X^(m) be a sequence of k-dimensional (⃗r^(m), ⃗δ^(m), γ^(m))-eposets, where lim_{m→∞} γ^(m) = 0. Then for all j ≤ k − 1 the parameters converge. Furthermore, suppose that the following two conditions hold:
1. For all j ≤ k − 1, the laziness of U_{j−1}D_j goes to 0 as m goes to infinity.
2. There exists ⃗α such that for all j ≤ k − 1, D_{j+1}U_j = α_j I + (1 − α_j)M+, where M+ is a non-lazy averaging operator.

Proof. To prove both assertions, we use the definition of an eposet to get an inequality valid for any function f ∈ C_k, and apply it to specific functions f of our choosing: the constant function, and indicator functions.
The high-dimensional analog of the FKN theorem is obtained from the FKN theorem for the slice using the agreement theorem of Dinur and Kaufman [DK17].

Agreement theorem for high-dimensional expanders
Dinur and Kaufman [DK17] prove an agreement theorem for high-dimensional expanders. The setup is as follows. For each k-face s we are given a local function f_s : s → Σ that assigns values from an alphabet Σ to each point of s. Two local functions f_s, f_{s'} are said to agree if f_s(v) = f_{s'}(v) for all v ∈ s ∩ s'. Let D_{k,2k} be the distribution on pairs (s_1, s_2) obtained by choosing a random t ∼ Π_{2k} and then independently choosing two k-faces s_1, s_2 ⊂ t. The theorem says that if a random pair of faces (s, s') ∼ D_{k,2k} satisfies with high probability that f_s agrees with f_{s'} on the intersection of their domains, then there must be a global function g : X(0) → Σ such that almost always g|_s ≡ f_s. Formally:

Theorem 8.5 (Agreement theorem for high-dimensional expanders [DK17]). Let X be a d-dimensional λ-two-sided high-dimensional expander, where λ < 1/d, let k^2 < d, and let Σ be some fixed finite alphabet. Let {f_s : s → Σ}_{s∈X(k)} be an ensemble of local functions on X(k), i.e. f_s ∈ Σ^s for each s ∈ X(k).

Proof of Theorem 8.3
Let f, F ∈ C^k, where F is a Boolean function and f is a degree-1 function, as in the hypothesis of Theorem 8.3. Since f is a degree-1 function, Lemma 3.5 guarantees that there exist a_i ∈ R such that f(y) = ∑_{i∈X(0)} a_i y_i. Note that here we view the inputs of f as |X(0)|-bit strings with exactly k + 1 ones, the rest being zero.
We begin by defining two ensembles of pairs of local functions {(f|_t, F|_t)}_{t∈X(2k)} and {(f|_u, F|_u)}_{u∈X(4k)}, the restrictions of (f, F) to the 2k-face t and the 4k-face u. Formally, for any t ∈ X(2k) and u ∈ X(4k), the restriction f|_t is the function on the k-faces contained in t given by f|_t(s) = f(s), and similarly for F|_t, f|_u, F|_u. Observe that the f|_t's are degree-1 functions, while the F|_t's are Boolean functions (similarly for the f|_u's and F|_u's). Now, define the quantities ε_t := E[(f|_t − F|_t)^2] and δ_u := E[(f|_u − F|_u)^2], the expectations being over the induced distributions on k-faces inside t and u. Clearly, E_t[ε_t], E_u[δ_u] = O(ε).
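The restriction step is concrete enough to compute. The sketch below evaluates a degree-1 function on a k-face and averages the squared difference with a Boolean function over the k-faces inside t (recall that, conditioned on a face, the induced distribution on its sub-faces is uniform); the helper names are ours:

```python
from itertools import combinations
import numpy as np

def eval_degree_one(a, s):
    """Evaluate f(y) = sum_i a_i y_i at the indicator vector of the
    (k+1)-element face s: only coordinates in s contribute."""
    return sum(a[i] for i in s)

def eps_t(a, F, t, k):
    """Mean squared difference of the restrictions f|_t and F|_t over
    the k-faces s contained in t. F maps k-faces (frozensets of k+1
    vertices) to {0, 1}; a holds the degree-1 coefficients."""
    subs = [frozenset(c) for c in combinations(sorted(t), k + 1)]
    return float(np.mean([(eval_degree_one(a, s) - F[s]) ** 2 for s in subs]))
```

When f is itself Boolean, e.g. the dictator f(y) = y_0, every ε_t vanishes, which is the degenerate case of the FKN statement.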
We now prove that functions in the collection of local functions {d_t}_t typically agree with each other. This lets us use the agreement theorem, Theorem 8.5, to sew these different local functions together, yielding a single function d : X(0) → {0, 1, α_k, α_k − 1}. This d determines a global degree-1 function g defined as follows: g(y) = ∑_{i∈X(0)} d(i)y_i.

Claim 8.8. There exists a function d : X(0) → {0, 1, α_k, α_k − 1} such that Pr_t[d|_t ≡ d_t] = 1 − O_λ(ε).

We now show that g in fact agrees pointwise with F most of the time.
spaces. Subsequent to our work, Alev, Jeronimo and Tulsiani [AJT19] used our techniques to analyze more general random walks, which they call swap walks. The same walks were analyzed independently, by different techniques, in the work of Dikstein and Dinur [DD19] under the name complement walks.

Then lim_{m→∞} r_j^(m) = α_j and lim_{m→∞} δ_j^(m) = 1 − α_j. In particular, if the X^(m) are k-dimensional simplicial complexes, then α_j = 1/(j+2), and we get lim_{m→∞} r_j^(m) = 1/(j+2) and lim_{m→∞} δ_j^(m) = 1 − 1/(j+2), under the assumption that the laziness probability of UD goes to zero. In other words, the interesting eposets are γ-HDXs.
By definition, A_s fixes constant functions, and is a Markov operator. It is self-adjoint with respect to the inner product above. Thus A_s has eigenvalues λ_1 = 1 ≥ λ_2 ≥ … ≥ λ_m, where m is the number of vertices. We define λ(A_s) = max(|λ_2|, |λ_m|). Orthogonality of eigenspaces guarantees that ∥A_s f∥ ≤ λ(A_s)∥f∥ for every f orthogonal to the constant functions.

If Pr_{(s_1,s_2)∼D_{k,2k}}[f_{s_1}|_{s_1∩s_2} ≡ f_{s_2}|_{s_1∩s_2}] > 1 − ε, then there is a g : X(0) → Σ such that Pr_{s∼Π_k}[f_s ≡ g|_s] ≥ 1 − O_λ(ε). Dinur and Kaufman state the theorem for a binary alphabet; the general version follows in a black-box fashion by applying the binary-alphabet theorem ⌈log_2 |Σ|⌉ many times.
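A short numerical sketch of λ(A_s), assuming the walk is reversible with respect to its stationary distribution π (so that conjugating by diag(π)^{1/2} gives a symmetric matrix with the same real spectrum); `lam` is our helper name:

```python
import numpy as np

def lam(A, pi):
    """lambda(A) = max(|lambda_2|, |lambda_m|) for a Markov operator A
    reversible w.r.t. the distribution pi. We symmetrize as
    S = D^{1/2} A D^{-1/2}, which shares A's eigenvalues, so the
    Hermitian solver eigvalsh applies (eigenvalues in ascending order)."""
    d = np.sqrt(pi)
    S = (d[:, None] * A) / d[None, :]
    w = np.sort(np.linalg.eigvalsh(S))
    return max(abs(w[-2]), abs(w[0]))   # drop the trivial eigenvalue 1
```

For the random walk on the complete graph K_m the nontrivial eigenvalues all equal −1/(m−1), so λ(A) = 1/(m−1), the best possible for an unweighted graph of that size.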
The d's guaranteed by Claim 8.8 naturally correspond to a degree-1 function g : X(k) → R as follows: g(y) := ∑_{i∈X(0)} d(i)y_i. We now show that this g is mostly Boolean.

Claim 8.9. Pr_t[g|_t = g_t] = Pr_t[d|_t ≡ d_t] = 1 − O_λ(ε).