## Introduction

Let $$\mathcal {A}$$ be a non-empty finite subset of $$\mathbb {R}^{n}$$ and $$\mathbb {R}^{\mathcal {A}}$$ denote the set of real vectors whose components are indexed by the set $$\mathcal {A}$$. For $$\beta \in \mathcal {A}$$, write

$$N_{\beta} = \left\{\nu \in \mathbb{R}^{\mathcal{A}} ~:~ \nu_{\setminus \beta} \geq \mathbf{0},~ \sum\limits_{\alpha \in \mathcal{A}} \nu_{\alpha} = 0 \right\}$$

for the cone of vectors in $$\mathbb {R}^{\mathcal {A}}$$ whose entries sum to 0 and which may have a negative entry only in the component indexed by β. Here, $$\nu _{\setminus \beta }$$ abbreviates the vector in $$\mathbb {R}^{\mathcal {A} \setminus \{\beta \}}$$ which consists of all the components of ν except the component indexed by β.

In the context of non-negative polynomials and non-negative exponential sums, Murray and the authors have recently introduced the following generalization of the simplicial circuits of an affine matroid. A non-zero vector $$\nu \in N_{\beta }$$ is called a sublinear circuit of $$\mathcal {A}$$ with respect to a given convex set X (for short, an X-circuit) if

1. $$\sup _{x \in X} ((-\mathcal {A} \nu )^{T} x) < \infty$$,

2. whenever the map $$\nu \mapsto \sup _{x \in X} ((-\mathcal {A} \nu )^{T} x)$$ is linear on a two-dimensional cone in $$N_{\beta }$$, then ν is not contained in the relative interior of that cone.

Here, $$\mathcal {A}$$ is treated as a linear operator $$\mathcal {A}:\mathbb {R}^{\mathcal {A}}\rightarrow \mathbb {R}^{n}$$, $$\nu \mapsto {\sum }_{\alpha \in \mathcal {A}} \alpha \nu _{\alpha }$$.

In the special case $$X=\mathbb {R}^{n}$$, the first condition is equivalent to $$\mathcal {A} \nu = \mathbf {0}$$, which together with the second condition tells us that ν is a circuit of the affine matroid with ground set $$\mathcal {A} \subset \mathbb {R}^{n}$$ (see, for example, [7, 19]). Note that these $$\mathbb {R}^{n}$$-circuits are uniquely determined (up to scaling) by their supports. Moreover, the condition $$\nu \in N_{\beta }$$ enforces that the convex hull of the support $$\text {supp}\, \nu := \{\alpha : \nu _{\alpha } \neq 0 \}$$ forms a simplex (possibly of dimension less than n) and exactly one element of $$\text {supp}\, \nu$$ is contained in the relative interior of this simplex. By the identification of circuits ν with their supports, it is customary to call a subset $$A \subset \mathcal {A}$$ a simplicial circuit if its convex hull conv(A) forms a simplex and the relative interior relint conv(A) contains exactly one element of A. See Fig. 1.

Sublinear circuits appear naturally in the study of non-negative polynomials and, more generally, of non-negative exponential sums $${\sum }_{\alpha \in \mathcal {A}} c_{\alpha } \exp (\alpha ^{T} x)$$. In the framework of exponential sums, Murray, Chandrasekaran and Wierman have shown that the set of exponential sums (also known as signomials)

$$\sum\limits_{\alpha \in \mathcal{A}} c_{\alpha} \exp(\alpha^{T} x)$$

which have at most one negative term and which are non-negative on X can be characterized in terms of a relative entropy program. Sums of such exponential sums are non-negative as well. The cone of exponential sums which admit such a non-negativity certificate is called the X-SAGE cone (or conditional SAGE cone) supported on $$\mathcal {A}$$ and is denoted $$C_{X}(\mathcal {A})$$. Here, the acronym SAGE stands for Sums of Arithmetic-Geometric Exponentials . This cone yields non-negativity certificates for a subclass of polynomials and signomials and thus provides a complement to non-negativity certificates based on sums of squares. It is possible to combine these techniques, see .

The introduction and the study of sublinear circuits is motivated by the following guiding questions:

1. For $$f \in C_{X}(\mathcal {A})$$, there is often more than one way to write f as a sum of exponential sums which have only one negative term and which are non-negative on X. Are there distinguished representations among them?

2. Can the X-SAGE cone be naturally decomposed as a Minkowski sum of smaller subcones?

3. How can convex geometric properties of the X-SAGE cone, such as its extremal rays, be characterized?

Here, the second and the third question can be seen as geometric perspectives on the first question. By , the conditional SAGE cone $$C_{X}(\mathcal {A})$$ can be decomposed as a Minkowski sum, where each non-trivial summand refers to the X-SAGE exponentials induced by a sublinear circuit (see Proposition 2.6 for a formal statement). Therefore, the sublinear circuits can be seen as a convex-combinatorial core underlying the conditional SAGE cone. In the unconstrained setting, the circuit viewpoint has been used prominently in the works of Reznick , Iliman and de Wolff  as well as Pantea, Koeppl and Craciun  on non-negative polynomials.

Going one step further, a reducibility concept for sublinear circuits provides a non-redundant decomposition of the conditional SAGE cone in terms of reduced circuits. This reducibility notion generalizes the one for the unconstrained situation which was introduced in , see also . The reduced $$\mathbb {R}^{n}$$-circuits are the key concept for characterizing the extremal rays of the unconstrained SAGE cone, since the reduced $$\mathbb {R}^{n}$$-circuits induce extremal rays. Generalizing this, the reduced sublinear circuits facilitate the study of the extremal rays of the X-SAGE cone , see Proposition 2.8 for a formal statement.

From a more general point of view, sublinear circuits generalize the combinatorial concepts known from an affine matroid, by taking additionally into account a convex constraint set X. As such, sublinear circuits enlarge the tool set of convex-combinatorial techniques in algebraic geometry and algebraic optimization, see [2, 4, 10, 13, 22] for general background on the rich connections between these disciplines.

In the current paper, we study sublinear circuits for the situation that X is polyhedral. In this setting, the sublinear circuits can be exactly characterized in terms of the normal fan of a certain polyhedron, see Proposition 2.2. This induces a rich polyhedral-combinatorial structure and makes these sublinear circuits amenable to effective computations. For polyhedral X, the number of sublinear circuits is finite, and this gives decompositions of the X-SAGE cones into finitely many summands referring to the X-SAGE exponentials induced by a sublinear circuit.

Among the class of polyhedra, polyhedral cones exhibit particularly nice properties and were the focus of earlier treatments. Note that, as a very particular case, the unconstrained setting $$X=\mathbb {R}^{n}$$, which is treated in [7, 12, 15], also falls into the class of polyhedral cones. The univariate case $$\mathbb {R}_{+}$$ was studied in detail in . Moreover, every univariate case can be transformed to one of the two conic cases $$\mathbb {R}$$ (unconstrained case) and $$\mathbb {R}_{+}$$ (one-sided infinite interval), or to the non-conic case [−1,1] (compact interval). In the multivariate case, the polyhedra $$\mathbb {R}^{n}$$ (unconstrained case), $$\mathbb {R}_{+}^{n}$$ (non-negative orthant) and the cube $$[-1,1]^{n}$$ provide prominent examples. In contrast to the unconstrained case and to the non-negative orthant, the cube $$[-1,1]^{n}$$ is non-conic.

The goal of the current paper is to develop techniques for handling sublinear circuits, which also provide access to non-conic polyhedral sets.

### Contributions

1. We reveal some precise connections between sublinear circuits and their supports, see Lemma 3.3. In particular, we show that in general sublinear circuits are not uniquely determined by their supports, see Example 3.2.

2. We develop necessary and sufficient conditions for identifying X-circuits based on support conditions. See Theorems 4.1 and 4.5.

3. We give conditions for identifying reduced X-circuits which generalize the known characterizations for the unconstrained case. See Theorems 6.2 and 6.4.

4. Building upon the criteria for sublinear circuits, we study the prominent cases of the non-negative orthant $$\mathbb {R}_{+}^{n}$$ and the cube $$[-1,1]^{n}$$ in detail, in particular the planar case and the case of small support sets. Specifically, for $$\mathcal {A} = \{(i,j) : 1 \le i,j \le 3\} \subset \mathbb {R}^{2}$$ and $$X = [-1,1]^{2}$$, there are 132 X-circuits and 24 reduced X-circuits, which we classify. See Sections 5 and 6.

5. As a specific consequence, we exactly determine the extreme rays of the univariate [−1,1]-SAGE cone, see Theorem 6.6.

For further recent work on the techniques for certifying non-negativity of signomials and polynomials based on the SAGE cone and its variants, see [1, 5, 18, 23, 24].

The paper is structured as follows. After collecting relevant concepts of sublinear circuits and non-negative signomials in Section 2, we study the connection of X-circuits and their supports in Section 3. Section 4 deals with necessary and sufficient conditions for sublinear circuits, and Section 5 focuses on the case of the cube [− 1,1]n. In Section 6, we provide criteria for reduced sublinear circuits, which gives as a consequence the characterization of the extreme rays of the [− 1,1]-SAGE cone. Section 7 concludes the paper.

## Preliminaries

Throughout the paper, the symbol 0 denotes the zero vector, 1 denotes the all-ones vector and [m] abbreviates the set {1,…,m} for $$m\in \mathbb {N}$$. For a given convex subset $$X \subset \mathbb {R}^{n}$$, denote by $$\sigma _{X}(y) = \sup \{ y^{T} x : x \in X \}$$ its support function.

### X-Circuits

For a non-empty convex set X and finite $$\mathcal {A} \subset \mathbb {R}^{n}$$, we consider X-circuits as defined in the Introduction, where we note that the two defining conditions can also be expressed in terms of the support function: (1) becomes the condition $$\sigma _{X}(-\mathcal {A} \nu ) < \infty$$, and in (2), the mapping $$\nu \mapsto \sigma _{X}(-\mathcal {A} \nu )$$ occurs.

A sublinear circuit $$\lambda \in N_{\beta }$$ is called normalized if $$\lambda _{\beta } = -1$$, in which case the condition $$\mathbf {1}^{T} \lambda = 0$$ in the definition of $$N_{\beta }$$ implies $${\sum }_{\alpha \neq \beta } \lambda _{\alpha } =1$$. For normalized X-circuits, we usually employ the symbol λ, whereas we use the symbol ν for X-circuits which are not necessarily normalized. For $$X\subset \mathbb {R}^{n}$$, denote by $${{\varLambda }}_{X}(\mathcal {A}, \beta )$$ the set of normalized X-circuits of $$\mathcal {A}$$ with negative entry corresponding to $$\beta \in \mathcal {A}$$ and by $${{\varLambda }}_{X}(\mathcal {A}):=\bigcup _{\beta \in \mathcal {A}} {{\varLambda }}_{X}(\mathcal {A},\beta )$$ the set of all normalized X-circuits of $$\mathcal {A}$$. For example, in the univariate case $$X=\mathbb {R}$$ with $$\mathcal {A} = \{\alpha _{1}, \ldots , \alpha _{m}\} \subset \mathbb {R}$$, the normalized X-circuits are

$$\lambda = \frac{\alpha_{k}-\alpha_{j}}{\alpha_{k} - \alpha_{i}} e^{(i)} - e^{(j)} + \frac{\alpha_{j} - \alpha_{i}}{\alpha_{k} - \alpha_{i}} e^{(k)} \quad \text{ for } i < j < k,$$
(2.1)

where $$e^{(i)}$$ denotes the i-th unit vector in $$\mathbb {R}^{m}$$. It is possible that a given support set $$\mathcal {A} \subset \mathbb {R}^{n}$$ has no $$\mathbb {R}^{n}$$-circuits; in that case, every $$\alpha \in \mathcal {A}$$ is an extreme point of $$\text {conv}\, \mathcal {A}$$.

### Example 2.1

In the context of the conditional SAGE cone, we can assume, without loss of generality, that the convex set X is closed. In the one-dimensional case, up to translation and additive inversion, each closed convex set is of the form $$X^{(1)}=\mathbb {R}$$, $$X^{(2)}=\mathbb {R}_{+}$$ or $$X^{(3)} = [-1,1]$$. For the support set $$\mathcal {A}=\{0,1,2\}$$, it is instructive to list the sublinear circuits with respect to the three sets $$X^{(1)}$$, $$X^{(2)}$$ and $$X^{(3)}$$. The set $${{\varLambda }}^{(1)}$$ of $$X^{(1)}$$-circuits is $$\mathbb {R}_{+}(1,-2,1)^{T}$$, which is a special case of (2.1). The set $${{\varLambda }}^{(2)}$$ of $$X^{(2)}$$-circuits is

$${{\varLambda}}^{(2)} = {{\varLambda}}^{(1)} \cup \mathbb{R}_{+}(0,-1,1)^{T} \cup \mathbb{R}_{+}(-1,0,1)^{T} \cup \mathbb{R}_{+}(-1,1,0)^{T},$$

and this is a special case of Proposition 2.3 below. In particular, the element $$(-1,1,0)^{T}$$ is not an $$X^{(1)}$$-circuit, because

$$\sigma_{X^{(1)}}\left( -\mathcal{A} (-1,1,0)^{T}\right) = \sigma_{X^{(1)}}(-1) = \sup_{x \in \mathbb{R}}(-x) = \infty.$$

The set $${{\varLambda }}^{(3)}$$ of $$X^{(3)}$$-circuits is

$${{\varLambda}}^{(3)} = {{\varLambda}}^{(2)} \cup \mathbb{R}_{+}(0,1,-1)^{T} \cup \mathbb{R}_{+}(1,0,-1)^{T} \cup \mathbb{R}_{+}(1,-1,0)^{T},$$

which is a special case of Proposition 3.1 proven in Section 3. Note that, for example, the element $$(1,-1,0)^{T}$$ is not an $$X^{(2)}$$-circuit, as

$$\sigma_{X^{(2)}}\left( -\mathcal{A} (1,-1,0)^{T}\right) = \sigma_{X^{(2)}}(1)=\sup_{x\ge 0} x=\infty.$$
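As a computational aside (our addition, not part of the original text), the finiteness tests behind these three circuit lists can be checked mechanically. The sketch below, in plain Python, evaluates the closed-form support functions of $$X^{(1)}$$, $$X^{(2)}$$ and $$X^{(3)}$$ at $$-\mathcal {A}\nu$$ for the seven candidate generators; the variable names are our own.

```python
import math

A = [0, 1, 2]  # support set; A acts as the linear map nu -> sum_i A[i] * nu[i]

def A_map(nu):
    return sum(a * n for a, n in zip(A, nu))

# Closed-form support functions sigma_X(y) of the three univariate sets.
def sigma_R(y):       # X = R: finite only for y = 0
    return 0 if y == 0 else math.inf

def sigma_Rplus(y):   # X = [0, inf): finite only for y <= 0
    return 0 if y <= 0 else math.inf

def sigma_box(y):     # X = [-1, 1]: sigma(y) = |y|, always finite
    return abs(y)

candidates = [(1, -2, 1), (0, -1, 1), (-1, 0, 1), (-1, 1, 0),
              (0, 1, -1), (1, 0, -1), (1, -1, 0)]

def finite_for(sigma):
    return [nu for nu in candidates if sigma(-A_map(nu)) < math.inf]

assert finite_for(sigma_R) == [(1, -2, 1)]
assert finite_for(sigma_Rplus) == candidates[:4]
assert finite_for(sigma_box) == candidates
```

The three results reproduce the nested lists of generators, matching the inclusions $${{\varLambda }}^{(1)} \subset {{\varLambda }}^{(2)} \subset {{\varLambda }}^{(3)}$$.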

For polyhedral X, the sublinear circuits can be characterized in terms of normal fans of polyhedra. For background on normal fans, we refer the reader to [25, Chapter 7] (for the bounded case of polytopes), [8, Section 5.4] or [22, Chapter 2]. For each face F of a polyhedron P, let

$$\mathsf{N}_{P}(F) = \left\{w ~:~ z^{T} w = \sigma_{P}(w) ~\forall~ z\in F\right\}$$

be the associated outer normal cone.

The support function σP of a polyhedron P is linear on every outer normal cone, and the linear representation may be given by $$\sigma _{P}(w) = z^{T} w$$ for any $$z \in F$$. The outer normal fan of P is the collection of all outer normal cones,

$$\mathcal{O}(P) = \{\mathsf{N}_{P}(F) ~:~ F \text{ is a face of } P\}.$$

For a convex cone $$K \subset \mathbb {R}^{n}$$, denote by $$K^{\ast } := \{c\in \mathbb {R}^{n}: c^{T}x\ge 0 \text { for all }x\in K\}$$ the dual cone and by $$K^{\circ } := -K^{\ast }$$ the polar cone. For a set $$S \subset \mathbb {R}^{n}$$, let $$\text {rec}(S) := \{t : \exists s \in S \text { such that } s + \lambda t \in S \text { for all } \lambda \ge 0\}$$ denote its recession cone. Using these notations, the support of $$\mathcal {O}(P)$$ coincides with $$\text {rec}(P)^{\circ }$$. The full-dimensional linearity domains of the support function σP are the outer normal cones of the vertices of P (see also [6, Section 1]).
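To make the linearity statement concrete, the following small sketch (our illustration, with the unit square as P) checks that σP is computed as a maximum over the vertices and restricts to a linear function on the outer normal cone of a single vertex.

```python
# Illustration: for a polytope P = conv(V), sigma_P(w) = max_{v in V} v^T w,
# and on the outer normal cone of the vertex z = (1, 1) (here: the
# non-negative orthant) the maximum is always attained at z.
V = [(0, 0), (1, 0), (0, 1), (1, 1)]  # vertices of the unit square

def sigma_P(w):
    return max(v[0] * w[0] + v[1] * w[1] for v in V)

z = (1, 1)
samples = [(1, 2), (3, 1), (0.5, 0.25), (2, 2)]  # points in the orthant
assert all(sigma_P(w) == z[0] * w[0] + z[1] * w[1] for w in samples)

# Linearity on that cone: sigma_P is additive for directions in the cone.
w1, w2 = (1, 2), (3, 1)
assert sigma_P((w1[0] + w2[0], w1[1] + w2[1])) == sigma_P(w1) + sigma_P(w2)
```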

### Proposition 2.2

Let X be a polyhedron. Then $$\nu \in N_{\beta } \setminus \{\mathbf {0}\}$$ is an X-circuit if and only if $$\text {cone}\{\nu \}$$ is a ray in $$\mathcal {O}(-\mathcal {A}^{T} X + N_{\beta }^{\circ })$$. As a consequence, there are only finitely many normalized X-circuits.

If X is a polyhedral cone, the situation simplifies, because the support function $$\sigma _{X}(-\mathcal {A} \nu )$$ of a circuit ν can only attain the values zero and infinity. Namely, since $$\mathcal {O}(-\mathcal {A}^{T} X + N_{\beta }^{\circ }) = (\mathcal {A}^{T} X)^{\ast } \cap N_{\beta }$$ and

$$\begin{array}{@{}rcl@{}} (\mathcal{A}^{T} X)^{\ast} = \{\nu ~:~ \nu^{T} y \ge 0~\forall y \in \mathcal{A}^{T} X\}&=& \{\nu ~:~ (\mathcal{A}\nu)^{T} x \ge 0~\forall x \in X\} \\ &=& \{\nu ~:~ \sigma_{X}(-\mathcal{A}\nu) \le 0\}, \end{array}$$

the X-circuits $$\nu \in N_{\beta }$$ are precisely the edge generators of the polyhedral cone $$\{\nu \in N_{\beta }~: \sigma _{X}(-\mathcal {A} \nu ) \le 0\}$$.

In the univariate case with $$\mathcal {A} = \{\alpha _{1}, \ldots , \alpha _{m} \} \subset \mathbb {R}$$, the sublinear circuits for the univariate cone $$[0,\infty )$$ have been determined in :

### Proposition 2.3

For $$X = [0,\infty )$$ and $$\mathcal {A} = \{\alpha _{1}, \ldots , \alpha _{m} \} \subset \mathbb {R}$$ with $$\alpha _{1} < {\cdots } < \alpha _{m}$$, the normalized X-circuits $$\lambda \in \mathbb {R}^{m}$$ are the vectors either of the form $$\lambda = e^{(k)} - e^{(j)}$$ for j < k or of the form

$$\lambda = \frac{\alpha_{k}-\alpha_{j}}{\alpha_{k} - \alpha_{i}} e^{(i)} - e^{(j)} + \frac{\alpha_{j} - \alpha_{i}}{\alpha_{k} - \alpha_{i}} e^{(k)} \quad\text{ for }~i < j < k.$$

Note that the X-circuits of the second form are exactly the $$\mathbb {R}$$-circuits from (2.1).
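As a quick sanity check (our addition), both families in Proposition 2.3 can be verified to satisfy the conic membership condition $$\sigma _{[0,\infty )}(-\mathcal {A} \lambda ) \le 0$$ discussed above, i.e., $$\mathcal {A} \lambda \ge 0$$, using exact rational arithmetic; the concrete exponents are our choice.

```python
from fractions import Fraction

# Our concrete choice of exponents alpha_1 < ... < alpha_m.
alphas = [Fraction(v) for v in (0, 1, 3, 7)]
m = len(alphas)

def e(i):  # i-th unit vector (0-indexed here)
    v = [Fraction(0)] * m
    v[i] = Fraction(1)
    return v

def A_map(nu):
    return sum(a * x for a, x in zip(alphas, nu))

# First family: lambda = e(k) - e(j) for j < k gives
# A*lambda = alpha_k - alpha_j > 0, hence sigma_{[0,inf)}(-A*lambda) = 0.
for j in range(m):
    for k in range(j + 1, m):
        lam = [a - b for a, b in zip(e(k), e(j))]
        assert sum(lam) == 0 and A_map(lam) > 0

# Second family: the R-circuits from (2.1) satisfy A*lambda = 0 exactly.
for i in range(m):
    for j in range(i + 1, m):
        for k in range(j + 1, m):
            ci = (alphas[k] - alphas[j]) / (alphas[k] - alphas[i])
            ck = (alphas[j] - alphas[i]) / (alphas[k] - alphas[i])
            lam = [ci * a - b + ck * c for a, b, c in zip(e(i), e(j), e(k))]
            assert sum(lam) == 0 and A_map(lam) == 0
```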

### Remark 2.4

By Proposition 2.2, the X-circuits of $$\mathcal {A}$$ are the outer normal vectors to facets of the polyhedra $$P = -\mathcal {A}^{T} X + N_{\beta }^{\circ }$$ (for some β). As $$N_{\beta }$$ is pointed, its polar $$N_{\beta }^{\circ }$$ is full-dimensional, and hence P is always full-dimensional.

### Example 2.5

If X is a convex cone, then the second condition in the definition of X-circuits simplifies, because the support function evaluates to 0 whenever it is finite. Consider the conic sets $$X^{(1)}=\mathbb {R}^{2}$$ and $$X^{(2)}=\mathbb {R}_{+}^{2}$$ with respect to the support set $$\mathcal {A}=\{(0,0)^{T},(0,4)^{T},(4,0)^{T},(1,1)^{T}\}$$, as illustrated in Fig. 2. Three points of $$\mathcal {A}$$ are vertices of the convex hull of $$\mathcal {A}$$, and the point $$(1,1)^{T}$$ is contained in the relative interior of the convex hull of $$\mathcal {A}$$. The set $${{\varLambda }}^{(1)}$$ of $$X^{(1)}$$-circuits is $$\mathbb {R}_{+}(2,1,1,-4)^{T}$$, and the set $${{\varLambda }}^{(2)}$$ of $$X^{(2)}$$-circuits is

$${{\varLambda}}^{(2)} = {{\varLambda}}^{(1)} \cup \mathbb{R}_{+}(0,3,1,-4)^{T} \cup \mathbb{R}_{+}(0,1,3,-4)^{T}.$$

Similar to the arguments for the one-dimensional cases in Example 2.1, the $$X^{(2)}$$-circuit $$(0,1,3,-4)^{T}$$ is not an $$X^{(1)}$$-circuit, as the resulting support function with respect to $$X^{(1)}$$ is no longer finite: $$\sigma _{\mathbb {R}^{2}}(-\mathcal {A}(0,1,3,-4)^{T})=\sup _{x_{1}\in \mathbb {R}}(-8x_{1})=\infty$$.
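These support-function evaluations can again be automated; the sketch below (our addition) checks the three generators of Example 2.5 against the generators of the two cones.

```python
import math

A = [(0, 0), (0, 4), (4, 0), (1, 1)]  # the support set of Example 2.5

def A_map(nu):
    return tuple(sum(a[d] * c for a, c in zip(A, nu)) for d in range(2))

def sigma_cone(y, gens):
    # Support function of the cone spanned by gens: 0 on the polar, else inf.
    return 0 if all(y[0] * g[0] + y[1] * g[1] <= 0 for g in gens) else math.inf

R2 = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # generators of X^(1) = R^2
orthant = [(1, 0), (0, 1)]               # generators of X^(2) = R_+^2

assert A_map((2, 1, 1, -4)) == (0, 0)    # the X^(1)-circuit generator

for nu in [(0, 3, 1, -4), (0, 1, 3, -4)]:
    y = tuple(-c for c in A_map(nu))
    assert sigma_cone(y, orthant) == 0    # finite over the orthant ...
    assert sigma_cone(y, R2) == math.inf  # ... but infinite over R^2
```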

### Non-negativity of Signomials

We consider the cone $$C_{X}(\mathcal {A})$$ of X-SAGE signomials supported on $$\mathcal {A}$$, which was informally introduced in the Introduction. For $$\beta \in \mathcal {A}$$, set

$$C_{X}(\mathcal{A},\beta) = \left\{f ~:~ f = \sum\limits_{\alpha\in\mathcal{A}}c_{\alpha}\exp(\alpha^{T} x) \text{ is non-negative on } X,~ c_{\setminus \beta} \geq \mathbf{0} \right\},$$

called the X-AGE cone supported on $$\mathcal {A}$$ with respect to β. By , $$C_{X}(\mathcal {A})$$ decomposes as $$C_{X}(\mathcal {A}) = {\sum }_{\beta \in \mathcal {A}} C_{X}(\mathcal {A},\beta )$$.

Given a vector $$\lambda \in N_{\beta }$$ with $$\lambda _{\beta } = -1$$, the λ-witnessed AGE cone $$C_{X}(\mathcal {A},\lambda )$$ is defined as

$$C_{X}(\mathcal{A},\lambda) = \left\{\sum\limits_{\alpha\in\mathcal{A}}c_{\alpha}\exp(\alpha^{T} x) : \prod\limits_{\alpha \in \lambda^{+}} \left( \frac{c_{\alpha}}{\lambda_{\alpha}} \right)^{\lambda_{\alpha}} \geq - c_{\beta} \exp\left( \sigma_{X}(-\mathcal{A} \lambda)\right), c_{\setminus \beta} \geq \mathbf{0} \right\}.$$

All signomials in $$C_{X}(\mathcal {A},\lambda )$$ are non-negative over X. Moreover, for polyhedral X, the conditional SAGE cone can be naturally decomposed into the Minkowski sum of a finite set of λ-witnessed cones, where λ runs over the normalized X-circuits.
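To illustrate the λ-witnessed condition numerically (our construction, not an example from the paper), take $$X = \mathbb {R}$$, $$\mathcal {A} = \{0,1,2\}$$ and the normalized circuit λ = (1/2,−1,1/2), for which $$\sigma _{X}(-\mathcal {A} \lambda ) = 0$$; the certified signomial below is $$1 - 2\exp (x) + \exp (2x) = (\exp (x)-1)^{2}$$.

```python
import math

# Assumed data: A = {0, 1, 2}, X = R, beta = 1, normalized circuit
# lambda = (1/2, -1, 1/2) with sigma_X(-A lambda) = 0.
lam = {0: 0.5, 2: 0.5}            # the positive part lambda^+
c = {0: 1.0, 1: -2.0, 2: 1.0}     # coefficients of the signomial
beta = 1

# lambda-witnessed AGE condition: prod over lambda^+ of (c_a / lam_a)^{lam_a}
# must dominate -c_beta * exp(sigma_X(-A lambda)) = -c_beta.
lhs = math.prod((c[a] / lam[a]) ** lam[a] for a in lam)
assert lhs >= -c[beta] - 1e-9     # holds with equality here (both sides are 2)

# The certified signomial 1 - 2 exp(x) + exp(2x) = (exp(x) - 1)^2 is
# indeed non-negative on X = R (checked on a grid, up to rounding):
f = lambda x: sum(cc * math.exp(a * x) for a, cc in c.items())
assert all(f(xi / 10) >= -1e-9 for xi in range(-50, 51))
```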

### Proposition 2.6

 Let $$X \subset \mathbb {R}^{n}$$ be a polyhedron and $${{\varLambda }}_{X}(\mathcal {A})$$ be non-empty. Then, the conditional SAGE cone $$C_{X}(\mathcal {A})$$ decomposes as the finite Minkowski sum

$$C_{X}(\mathcal{A}) = \sum\limits_{\lambda \in {{\varLambda}}_{X}(\mathcal{A})} C_{X}(\mathcal{A},\lambda).$$

### Reduced Circuits and Non-negativity of Signomials

In general, the representation in Proposition 2.6 can include redundancies. Using a reducibility concept of circuits, which takes into account the value $$\sigma _{X}(-\mathcal {A} \nu )$$ of an X-circuit ν, an irredundant representation can be given. The extended form of an X-circuit $$\nu \in \mathbb {R}^{\mathcal {A}}$$ is defined as $$(\nu ,\sigma _{X}(-\mathcal {A} \nu )) \in \mathbb {R}^{\mathcal {A}} \times \mathbb {R}$$. The set

$$G_{X}(\mathcal{A}) = \text{cone}(\{(\lambda,\sigma_{X}(-\mathcal{A}\lambda)) : \lambda \in {{\varLambda}}_{X}(\mathcal{A}) \} \cup \{(\mathbf{0}, 1)\})$$

is called the circuit graph of $$(\mathcal {A},X)$$. Whenever we consider the circuit graph or the reduced sublinear circuits defined subsequently, we will tacitly assume that the functions $$x \mapsto \exp (\alpha ^{T} x)$$, $$\alpha \in \mathcal {A}$$, are linearly independent on X. Then the set $$G_{X}(\mathcal {A})$$ is pointed and closed (see ).

### Definition 2.7

An X-circuit ν is called reduced if its extended form generates an extreme ray of $$G_{X}(\mathcal {A})$$. Denote by $${{\varLambda }}_{X}^{\star }(\mathcal {A})$$ the set of normalized reduced X-circuits.

For polyhedral X, the conditional SAGE cone $$C_{X}(\mathcal {A})$$ can be decomposed into the Minkowski sum of a finite set of λ-witnessed cones, where λ runs over the reduced X-circuits.

### Proposition 2.8

 Let $$X \subset \mathbb {R}^{n}$$ be a polyhedron and $${{\varLambda }}_{X}(\mathcal {A})$$ be non-empty. Then, the conditional SAGE cone $$C_{X}(\mathcal {A})$$ decomposes as the finite Minkowski sum

$$C_{X}(\mathcal{A}) = \sum\limits_{\lambda \in {{\varLambda}}^{\star}_{X}(\mathcal{A})} C_{X}(\mathcal{A},\lambda).$$

Moreover, there does not exist a proper subset $${{\varLambda }} \subsetneq {{\varLambda }}_{X}^{\star }(\mathcal {A})$$ with

$$C_{X}(\mathcal{A}) = \sum\limits_{\lambda \in {{\varLambda}}} C_{X}(\mathcal{A},\lambda).$$

In the univariate case with $$\mathcal {A} = \{\alpha _{1}, \ldots , \alpha _{m}\}$$ sorted in ascending order, we have

$$\begin{array}{@{}rcl@{}} &&{{\varLambda}}^{\star}_{\mathbb{R}}(\mathcal{A}) = \left\{\left( \frac{\alpha_{i+1}-\alpha_{i}}{\alpha_{i+1} - \alpha_{i-1}}\right)e^{(i-1)} - e^{(i)} + \left( \frac{\alpha_{i} - \alpha_{i-1}}{\alpha_{i+1} - \alpha_{i-1}}\right)e^{(i+1)}~:~ 2 \le i \le m-1 \right\} \\ &&\text{and } {{\varLambda}}^{\star}_{[0,\infty)}(\mathcal{A}) = {{\varLambda}}^{\star}_{\mathbb{R}}(\mathcal{A}) \cup \{e^{(2)} - e^{(1)}\}. \end{array}$$

See  for $${{\varLambda }}^{\star }_{\mathbb {R}}(\mathcal {A})$$ and  for $${{\varLambda }}^{\star }_{[0,\infty )}(\mathcal {A})$$.

## X-Circuits and Their Supports

In this section, we study the relationship between X-circuits and their supports. We begin with a study of the compact univariate case [− 1,1]. This complements the known cases $$\mathbb {R}$$ from the Introduction and $$[0,\infty )$$ from Proposition 2.3.

### Proposition 3.1

Let X = [−1,1] and $$\mathcal {A} = \{\alpha _{1}, \ldots , \alpha _{m} \} \subset \mathbb {R}$$ with $$\alpha _{1} < {\cdots } < \alpha _{m}$$. An element $$\lambda \in \bigcup _{\beta \in \mathcal {A}}N_{\beta }$$ is a normalized X-circuit if and only if it is of one of the following forms:

1. $$\lambda = e^{(j)} - e^{(i)}$$ for i ≠ j, or

2. $$\displaystyle {\lambda = \frac {\alpha _{k}-\alpha _{j}}{\alpha _{k}-\alpha _{i}} e^{(i)} -e^{(j)} + \frac {\alpha _{j}-\alpha _{i}}{\alpha _{k}-\alpha _{i}} e^{(k)}}$$ for i < j < k.

### Proof

Fix j ∈ [m] and write $$N_{j} := N_{\alpha _{j}}$$ for short. By Proposition 2.2, the X-circuits in $$N_{j}$$ are the vectors spanning the rays of the outer normal fan of the polyhedron

$$\begin{array}{@{}rcl@{}} P & =& -\mathcal{A}^{T}X+N_{j}^{\circ}\\ & =& \text{conv}\left\{(\alpha_{1}, \ldots, \alpha_{m})^{T}, -(\alpha_{1}, \ldots, \alpha_{m})^{T}\right\} + \mathbb{R}\cdot \mathbf{1}-\sum\limits_{i\ne j}\text{pos} e^{(i)}\\ & =& \left\{\theta (\alpha_{1}, \ldots, \alpha_{m})^{T}+\mu\mathbf{1}: -1\le \theta\le 1, \mu\in\mathbb{R}\right\} - \sum\limits_{i\ne j}\text{pos} e^{(i)}. \end{array}$$

Hence, a point w is contained in P if and only if

$$w_{i} \le \theta\alpha_{i}+\mu\quad\text{for } i\ne j \quad\text{ and } \quad w_{j} = \theta\alpha_{j}+\mu \quad \text{for some } \theta\in[-1,1] \text{ and } \mu\in\mathbb{R}.$$

By eliminating μ, this is equivalent to

$$w_{j}-w_{i}+\theta (\alpha_{i}-\alpha_{j})\ge 0 \quad \text{ for all } i\in [m] \setminus \{j\} \text{ and some } {-1}\le \theta \le 1.$$

Solving for 𝜃 then gives

$$\frac{w_{j}-w_{i}}{\alpha_{j}-\alpha_{i}}\left\{\begin{array}{ll} \le \theta \le 1 &\quad\text{ if } \alpha_{i} >\alpha_{j},\\ \ge \theta \ge -1 &\quad \text{ if } \alpha_{i} <\alpha_{j}, \end{array}\right.$$

which yields $$\frac {w_{j}-w_{i}}{\alpha _{j}-\alpha _{i}} \ge \frac {w_{k}-w_{j}}{\alpha _{k}-\alpha _{j}}$$ for all i,k ∈ [m] with i < j < k and $$w_{i}-w_{j} \le |\alpha _{i}-\alpha _{j}|$$ for all i ∈ [m] ∖{j}. Hence,

$$\begin{array}{@{}rcl@{}} P &=&\left\{w\in\mathbb{R}^{m} :~w_{i}-w_{j} \le |\alpha_{i}-\alpha_{j}| \text{ for } i\in [m] \setminus \{j\} \text{ and } \right. \end{array}$$
(3.1)
$$\begin{array}{@{}rcl@{}} &&~\left. w_{i}(\alpha_{k} - \alpha_{j}) - w_{j}(\alpha_{k} - \alpha_{i}) + w_{k}(\alpha_{j} - \alpha_{i}) \le 0 \text{ for }i,k\in[m] \text{ with } i \!<\! j \!<\! k \right\}.\quad \end{array}$$
(3.2)

We claim that none of the inequalities in the definition of P is redundant. Namely, for each inequality

$$w_{i}(\alpha_{k}-\alpha_{j}) - w_{j}(\alpha_{k}-\alpha_{i}) + w_{k}(\alpha_{j}-\alpha_{i}) \le 0$$

in (3.2), the point $$e^{(i)} + e^{(j)} + e^{(k)}$$ satisfies this particular inequality with equality and all of the other inequalities strictly. Similarly, for the inequalities in (3.1), it suffices to consider the point $$\alpha _{j} e^{(i)} + \alpha _{i} e^{(j)} - M {\sum }_{l \notin \{i,j\}} e^{(l)}$$ in case i < j and $$\alpha _{i} e^{(i)} + \alpha _{j} e^{(j)} - M {\sum }_{l \notin \{i,j\}} e^{(l)}$$ in case i > j, where M > 0 is chosen sufficiently large so that all inequalities other than the one for the pair (i,j) hold strictly. By Remark 2.4, the polyhedron P is full-dimensional.

Hence, by Proposition 2.2, the normalized X-circuits in Nj are exactly the ones given in the statement of the proposition. □
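As a computational complement to the proof (our addition), the non-redundancy of the inequalities (3.1) and (3.2) can be checked for a concrete instance. Note that the witness points for the inequalities in (3.1) are shifted by a large negative constant M on the coordinates outside {i,j}; this shift keeps the pair inequality active while forcing all remaining inequalities to hold strictly.

```python
from fractions import Fraction

alphas = [Fraction(v) for v in (1, 2, 4, 7)]   # our choice, alpha_1 < ... < alpha_m
m, j, M = len(alphas), 1, Fraction(1000)       # 0-based index j; M sufficiently large

pairs = [i for i in range(m) if i != j]
triples = [(i, k) for i in range(j) for k in range(j + 1, m)]

def slack_pair(i, w):       # inequality (3.1): |alpha_i - alpha_j| - (w_i - w_j) >= 0
    return abs(alphas[i] - alphas[j]) - (w[i] - w[j])

def slack_triple(i, k, w):  # inequality (3.2), rewritten as slack >= 0
    return -(w[i] * (alphas[k] - alphas[j]) - w[j] * (alphas[k] - alphas[i])
             + w[k] * (alphas[j] - alphas[i]))

# Witnesses for (3.2): e(i) + e(j) + e(k).
for (i, k) in triples:
    w = [Fraction(0)] * m
    w[i] = w[j] = w[k] = Fraction(1)
    assert slack_triple(i, k, w) == 0
    assert all(slack_pair(p, w) > 0 for p in pairs)
    assert all(slack_triple(a, b, w) > 0 for (a, b) in triples if (a, b) != (i, k))

# Witnesses for (3.1): alpha_j e(i) + alpha_i e(j) (swapped if i > j), with the
# remaining coordinates shifted down by M to make all other inequalities strict.
for i in pairs:
    w = [-M] * m
    w[i], w[j] = (alphas[j], alphas[i]) if i < j else (alphas[i], alphas[j])
    assert slack_pair(i, w) == 0
    assert all(slack_pair(p, w) > 0 for p in pairs if p != i)
    assert all(slack_triple(a, b, w) > 0 for (a, b) in triples)
```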

### The supports of X-circuits.

As stated in the Introduction, in the classical case of affine circuits, the normalized circuits are uniquely determined by their supports. Moreover, as a consequence of Proposition 3.1, in the case X = [−1,1], the normalized X-circuits are uniquely determined by their signed supports. As explained in the following, this phenomenon does not extend to sublinear circuits for arbitrary convex sets X.

In the case of sublinear circuits supported on two elements, the two non-zero entries are additive inverses of each other, so that, for a given β and a given support, indeed this signed support uniquely determines the circuit up to a positive factor. In order to exhibit the mentioned phenomenon, we present a counterexample with support size 3.

### Example 3.2

Let $$\mathcal {A} = \{\alpha _{1}, \alpha _{2}, \alpha _{3} \} = \{(0,0)^{T}, (1,0)^{T}, (0,1)^{T}\} \subset \mathbb {R}^{2}$$. We show that for $$\beta := \alpha _{1}$$, there are two non-proportional circuits which are supported on all three elements of $$\mathcal {A}$$. Specifically, we construct an example in which

$$\nu^{(1)} := (-2,1,1)^{T} \quad \text{ and } \quad \nu^{(2)} := (-3,1,2)^{T}$$

are sublinear circuits. Note that both of them have the same signed support, but they are not multiples of each other. Observe that

$$-\mathcal{A} \nu^{(1)} = (-1,-1)^{T}, \qquad -\mathcal{A} \nu^{(2)} = (-1,-2)^{T}.$$

We set up X in such a way that $$(-1,-1)^{T}$$ and $$(-1,-2)^{T}$$ are normal vectors of X. For example, choose X as the cone in $$\mathbb {R}^{2}$$ spanned by $$(-1,1)^{T}$$ and $$(2,-1)^{T}$$. We obtain

$$- \mathcal{A}^{T} X = \text{pos}\left\{ - \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{array} \right) \left( \begin{array}{r} -1 \\ 1 \end{array} \right), - \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{array} \right) \left( \begin{array}{r} 2 \\ -1 \end{array} \right) \right\} = \text{pos}\left\{ \left( \begin{array}{r} 0 \\ 1 \\ -1 \end{array} \right), \left( \begin{array}{r} 0 \\ -2 \\ 1 \end{array} \right) \right\}.$$

Since $$N_{\beta }^{\circ } = N_{(0,0)^{T}}^{\circ } = \mathbb {R} \cdot (1,1,1)^{T} + \{0\} \times \mathbb {R}_{\le 0} \times \mathbb {R}_{\le 0}$$, it can be verified (for example, using a computer calculation) that $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ are indeed sublinear circuits, and, up to scaling by a positive factor, they are the only ones having a negative component indexed by β.
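The computer calculation alluded to above can be sketched as follows (our code; it checks membership in $$N_{\beta }$$, finiteness of the support function via the generators of X, and that the two vectors activate different generators of X):

```python
A = [(0, 0), (1, 0), (0, 1)]   # alpha_1, alpha_2, alpha_3 with beta = alpha_1
gens = [(-1, 1), (2, -1)]      # generators of the cone X

def A_map(nu):
    return tuple(sum(a[d] * c for a, c in zip(A, nu)) for d in range(2))

for nu in [(-2, 1, 1), (-3, 1, 2)]:
    # nu lies in N_beta: entries sum to 0, only the beta-entry is negative.
    assert sum(nu) == 0 and nu[0] < 0 and nu[1] > 0 and nu[2] > 0
    # sigma_X(-A nu) is finite (namely 0) iff -A nu lies in the polar of X.
    y = tuple(-c for c in A_map(nu))
    assert all(y[0] * g[0] + y[1] * g[1] <= 0 for g in gens)

# Each vector activates a different generator of X, i.e., -A nu is
# orthogonal to a different extreme ray of X for each of them.
y1 = tuple(-c for c in A_map((-2, 1, 1)))   # (-1, -1)
y2 = tuple(-c for c in A_map((-3, 1, 2)))   # (-1, -2)
assert y1[0] * gens[0][0] + y1[1] * gens[0][1] == 0
assert y2[0] * gens[1][0] + y2[1] * gens[1][1] == 0
```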

In the example, the two distinct sublinear circuits $$\nu ^{(i)}$$, i ∈{1,2}, with identical signed supports have different images $$\mathcal {A} \nu ^{(i)}$$, that is, $$\mathcal {A} \nu ^{(1)} \neq \mathcal {A} \nu ^{(2)}$$. By the following statement, it is not possible to have two distinct sublinear circuits with the same signed support and identical non-zero values of $$\mathcal {A} \nu ^{(i)}$$.

### Lemma 3.3

Let $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ be sublinear circuits with the same signed support and such that $$\mathcal {A} \nu ^{(1)} = \mathcal {A} \nu ^{(2)}$$. Then $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ are proportional, and in case $$\mathcal {A} \nu ^{(1)} = \mathcal {A} \nu ^{(2)} \neq \mathbf {0}$$, the equality $$\nu ^{(1)} = \nu ^{(2)}$$ holds.

### Proof

Let $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ have the same signed support with $$\mathcal {A} \nu ^{(1)} = \mathcal {A} \nu ^{(2)}$$. Set β as the index of the negative component of $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$.

Suppose that $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ are non-proportional. The precondition $$\text {supp}\, \nu ^{(1)} = \text {supp}\, \nu ^{(2)}$$ implies that, for sufficiently small ε > 0, the vectors

$$\nu^{\prime} := \nu^{(1)} - \varepsilon \nu^{(2)} \quad \text{ and }\quad \nu^{\prime\prime} := \nu^{(1)} + \varepsilon \nu^{(2)}$$

are contained in $$N_{\beta } \setminus \{\mathbf {0}\}$$ as well. Observe that $$\sigma _{X}(-\mathcal {A} \nu ^{\prime }) =\sigma _{X}(-\mathcal {A} \nu ^{(1)}) - \varepsilon \sigma _{X}(-\mathcal {A} \nu ^{(2)}) < \infty$$ and $$\sigma _{X}(-\mathcal {A} \nu ^{\prime \prime }) = \sigma _{X}(-\mathcal {A} \nu ^{(1)}) + \varepsilon \sigma _{X}(-\mathcal {A} \nu ^{(2)}) < \infty$$. Moreover, $$\nu ^{(1)}$$ is the convex combination $$\nu ^{(1)} = \frac {1}{2} \nu ^{\prime } + \frac {1}{2} \nu ^{\prime \prime }$$, and $$\nu \mapsto \sigma _{X}(-\mathcal {A} \nu )$$ is linear on $$[\nu ^{\prime }, \nu ^{\prime \prime }]$$.

Since $$\sigma _{X}$$ is positively homogeneous and $$\nu \mapsto \sigma _{X}(-\mathcal {A} \nu )$$ is linear on $$[\nu ^{\prime }, \nu ^{\prime \prime }]$$, this map is linear on the two-dimensional cone spanned by the non-proportional vectors $$\nu ^{\prime }$$ and $$\nu ^{\prime \prime }$$, and that cone contains $$\nu ^{(1)}$$ in its relative interior. This contradicts the X-circuit property of $$\nu ^{(1)}$$. Hence, $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ are proportional, say $$\nu ^{(2)} = c \nu ^{(1)}$$ with c > 0. In case $$\mathcal {A} \nu ^{(1)} = \mathcal {A} \nu ^{(2)} \neq \mathbf {0}$$, we obtain $$c \mathcal {A} \nu ^{(1)} = \mathcal {A} \nu ^{(1)}$$, hence c = 1 and $$\nu ^{(1)} = \nu ^{(2)}$$. □

## Necessary and Sufficient Conditions

In this section, we obtain some criteria for elements $$\nu \in \bigcup _{\beta \in \mathcal {A}}N_{\beta }$$ to be X-circuits of some fixed set X. These criteria only involve the supports rather than the exact values of the coefficients.

For an X-circuit ν, let $$\nu ^{+} := \{\alpha : \nu _{\alpha } > 0\}$$ and let $$\nu ^{-}$$ denote the single index β with $$\nu _{\beta } < 0$$. First recall that in the classical case of affine matroids, any simplicial circuit ν supported on at least three elements has no support point other than $$\nu ^{-}$$ contained in the relative interior of the convex hull of all its support points, and the coefficients of $$\nu ^{+}$$ are positive multiples of the barycentric coordinates of β, i.e., $$\text {relint}\, \text {conv}(\text {supp}\, \nu ) \cap \nu ^{+} = \emptyset$$ and $$\mathcal {A} \nu = \mathbf {0}$$ (see, e.g., ). In the following theorem, we give a generalization of this property to the case of X-circuits.

### Theorem 4.1

Let $$\lambda \in {{\varLambda }}_{X}(\mathcal {A},\beta )$$ for some $$\beta \in \mathcal {A}$$. Then $$\text {relint}\, \text {conv}(\text {supp}\, \lambda ) \cap \lambda ^{+} = \emptyset$$. Moreover, if $$\beta \in \text {conv}(\lambda ^{+})$$, then $$\mathcal {A}\lambda = \mathbf {0}$$.

### Proof

For the first statement, suppose there exists $$\bar {\alpha }\in \lambda ^{+}$$ such that $$\bar {\alpha }\in \text {relint}\, \text {conv}(\text {supp}\, \lambda )$$. Hence, there exist $$\theta _{\alpha } \in [0,1)$$ for $$\alpha \in (\lambda ^{+} \setminus \{\bar {\alpha }\}) \cup \{\beta \}$$ such that

$$\sum\limits_{\alpha\in\lambda^{+}\setminus\{\bar{\alpha}\}} \theta_{\alpha}+\theta_{\beta}=1 \quad \text{ and } \quad \sum\limits_{\alpha\in\lambda^{+}\setminus\{\bar{\alpha}\}} \theta_{\alpha} \alpha+\theta_{\beta}\beta=\bar{\alpha}.$$

Let τ ∈ (0,1] be sufficiently small such that $$\tau \theta _{\alpha }\lambda _{\bar {\alpha }}\le \lambda _{\alpha }$$ for all $$\alpha \in \lambda ^{+}\setminus \{\bar {\alpha }\}$$ and $$(1+\tau )\lambda _{\bar {\alpha }}<1$$. As $$\lambda _{\bar {\alpha }}<1$$ and $$\lambda _{\alpha } > 0$$ for all $$\alpha \in \lambda ^{+}$$, such a τ does indeed exist. The two vectors $$\nu ^{(1)}$$ and $$\nu ^{(2)}$$ defined by

$$\begin{array}{@{}rcl@{}} \nu^{(1)}_{\alpha} &=& \begin{cases} \lambda_{\alpha}+\tau\theta_{\alpha}\lambda_{\bar{\alpha}} & \text{ for }\alpha \in (\lambda^{+} \setminus \{\bar{\alpha}\}) \cup \{\beta\}, \\ (1-\tau)\lambda_{\bar{\alpha}} & \text{ for }\alpha=\bar{\alpha} \end{cases} \\ \text{and} \quad \nu^{(2)}_{\alpha} &=& \begin{cases} \lambda_{\alpha}-\tau\theta_{\alpha}\lambda_{\bar{\alpha}} & \text{ for }\alpha \in (\lambda^{+} \setminus \{ \bar{\alpha} \}) \cup \{\beta\},\\ (1+\tau)\lambda_{\bar{\alpha}} & \text{ for }\alpha=\bar{\alpha} \end{cases} \end{array}$$

(and 0 outside of $$\lambda ^{+} \cup \{\beta \}$$) are non-proportional elements of $$N_{\beta }$$ with $$(\nu ^{(i)})^{+} \subset \lambda ^{+}$$ for i = 1,2. Moreover, $$\mathcal {A}\nu ^{(i)}= \mathcal {A}\lambda$$ for i = 1,2 and $$\lambda \in \text {relint} [\nu ^{(1)},\nu ^{(2)}]$$, which contradicts the X-circuit property of λ.

For the second statement, suppose β ∈conv(λ+) and $$\mathcal {A}\lambda \ne \mathbf {0}$$. Then, there exists a normalized element $$\lambda ^{\prime } \in N_{\beta }$$ with $$\lambda ^{+}=(\lambda ^{\prime })^{+}$$ and $$\mathcal {A} \lambda ^{\prime }=\mathbf {0}$$. Let τ be the maximal real number such that $$\nu ^{(1)} := \lambda - \tau \lambda ^{\prime } \in N_{\beta }$$. That maximum clearly exists, and, since $$(\lambda ^{\prime })^{+} =\lambda ^{+}$$, the number τ is positive. Moreover, since λ and $$\lambda ^{\prime }$$ are normalized, we have τ ≤ 1.

The sublinear circuit $$\nu ^{(2)} := \lambda + \tau \lambda ^{\prime }$$ is clearly contained in Nβ as well. Since $$\lambda , \lambda ^{\prime }$$ are non-proportional and τ > 0, the sublinear circuits ν(1) and ν(2) are non-proportional. Furthermore, since ν(1) + ν(2) = 2λ, we see that λ can be written as a convex combination of the two non-proportional elements ν(1)Nβ and ν(2)Nβ. Due to $$\mathcal {A} \lambda ^{\prime } =\mathbf {0}$$, we obtain $$\sigma _{X}(-\mathcal {A} \nu ^{(1)}) = \sigma _{X}(-\mathcal {A} \nu ^{(2)}) =\sigma _{X}(-\mathcal {A} \lambda )$$ and thus

$$\sigma_{X}(-\mathcal{A} \lambda)= \frac{1}{2}\left( \sigma_{X}(-\mathcal{A} \nu^{(1)}) + \sigma_{X}(-\mathcal{A} \nu^{(2)})\right).$$

Hence, $$\lambda \notin {{\varLambda }}_{X}(\mathcal {A},\beta )$$. □

We can provide the following two partial converses of Theorem 4.1. In particular, both cases will be applicable for X = [− 1,1]n. We can assume that β ∈conv(λ+) −rec(X)∗, since otherwise any λ ∈ Nβ ∖{0} will have $$\sigma _{X}(-\mathcal {A}\lambda ) = \infty$$ and hence violate condition (1) in the definition of an X-circuit.

### Lemma 4.2

Given $$\beta \in \mathcal {A}$$, let λ ∈ Nβ ∖{0} be normalized with β ∈conv(λ+) −rec(X)∗ and such that λ+ consists of affinely independent vectors.

1. (1)

If |suppλ| = 2 or

2. (2)

if X is full-dimensional, β ∈conv(λ+), $$\mathcal {A}\lambda =\mathbf {0}$$,

then $$\lambda \in {{\varLambda }}_{X}(\mathcal {A},\beta )$$.

Note that, since in the lemma λ+ consists of affinely independent vectors, we have relintconv(λ+) ∩ λ+ = ∅.

### Remark 4.3

If the property of full-dimensionality is omitted in the second condition, the statement is not true anymore. As a counterexample, let X be the singleton set X = {1} and let $$\mathcal {A} = \{1,2,3\}$$. Then $$\lambda = \frac {1}{2}(1,-2,1)^{T}$$ is not an X-circuit, because $$\lambda = \frac {1}{2}\lambda ^{(1)} + \frac {1}{2} \lambda ^{(2)}$$ with λ(1) = (1,− 1,0)T and λ(2) = (0,− 1,1)T and $$\nu \mapsto \sigma _{X}(-\mathcal {A} \nu )$$ is linear on [λ(1),λ(2)]. Note that the functions $$x \mapsto \exp (\alpha ^{T} x)$$, $$\alpha \in \mathcal {A}$$ are not linearly independent on X.
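The counterexample can be verified numerically. The following short script is our own sketch (the variable names are not from the paper); it uses that the support function of the singleton X = {1} is σX(y) = y, so that the lifted map is globally linear:

```python
# Check of the counterexample in Remark 4.3 for the singleton X = {1},
# where the support function is sigma_X(y) = y and hence globally linear.
A = [1, 2, 3]  # the support set, acting as nu -> sum_alpha alpha * nu_alpha

def apply_A(nu):
    return sum(a * t for a, t in zip(A, nu))

def sigma_X(y):
    return y  # sup_{x in {1}} y * x

lam = (0.5, -1.0, 0.5)    # lambda = (1/2)(1, -2, 1)^T
lam1 = (1.0, -1.0, 0.0)   # lambda^(1)
lam2 = (0.0, -1.0, 1.0)   # lambda^(2)

# lambda is the midpoint of lambda^(1) and lambda^(2)
assert all(l == 0.5 * (u + v) for l, u, v in zip(lam, lam1, lam2))

# nu -> sigma_X(-A nu) is linear on [lambda^(1), lambda^(2)], so the value
# at lambda equals the midpoint of the values at the endpoints
lhs = sigma_X(-apply_A(lam))
rhs = 0.5 * sigma_X(-apply_A(lam1)) + 0.5 * sigma_X(-apply_A(lam2))
assert abs(lhs - rhs) < 1e-12
```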

### Proof

For the first statement, suppose there exist ν(1),ν(2) ∈ Nβ decomposing λ. Then $$\text {supp}(\nu ^{(i)})\subseteq \text {supp} \lambda$$ for i ∈{1,2}, because the cancellation of terms not contained in suppλ is not possible, as the negative term always corresponds to β. Since $$\nu ^{(1)}_{\beta } <0$$ and $$\nu ^{(2)}_{\beta } < 0$$ and |suppλ| = 2, both ν(1) and ν(2) are proportional to λ.

Now consider the second condition. Since the property of being an X-circuit is invariant under translation of X, we can assume without loss of generality that 0 ∈intX. Suppose that there exist non-proportional, normalized λ(1),λ(2) ∈ Nβ and 𝜃1,𝜃2 ∈ (0,1) with 𝜃1 + 𝜃2 = 1 such that

$$\sum\limits_{i=1}^{2} \theta_{i}\left( \lambda^{(i)}, \sigma_{X}(-\mathcal{A}\lambda^{(i)})\right) = (\lambda,\sigma_{X}(-\mathcal{A}\lambda)).$$

We distinguish two cases. If $$\mathcal {A} \lambda ^{(1)} = \mathbf {0}$$, then $$\mathcal {A} \lambda ^{(2)} = -\frac {\theta _{1}}{\theta _{2}}\mathcal {A} \lambda ^{(1)} = \mathbf {0}$$. Hence, the uniqueness of the barycentric coordinates with respect to a given affinely independent ground set implies λ(1) = λ(2), which is a contradiction to their non-proportionality.

If $$\mathcal {A} \lambda ^{(1)} \neq \mathbf {0}$$, then, as the argument above shows that $$\mathcal {A}\lambda ^{(2)}=\mathbf {0}$$ implies $$\mathcal {A}\lambda ^{(1)}=\mathbf {0}$$, we have $$\mathcal {A} \lambda ^{(2)} = - \frac {\theta _{1}}{\theta _{2}}\mathcal {A} \lambda ^{(1)} \neq \mathbf {0}$$ as well. Then 0 ∈intX implies $$\sigma _{X}(-\mathcal {A} \lambda ^{(1)}) > 0$$ and $$\sigma _{X}(-\mathcal {A}\lambda ^{(2)}) > 0$$. Since $$\sigma _{X}(-\mathcal {A} \lambda ) = \sigma _{X}(\mathbf {0}) = 0$$, the mapping $$\nu \mapsto \sigma _{X}(-\mathcal {A} \nu )$$ cannot be linear on [λ(1),λ(2)]. □

### X-circuits of polyhedral cones X.

As discussed after Proposition 2.2, in the case of polyhedral cones X we always have $$\sigma _{X}(-\mathcal {A}\lambda )=0$$ whenever this value is finite. Since we will reduce the determination of the sublinear circuits $${{\varLambda }}_{X}(\mathcal {A})$$ for a cone X in some prominent cases to the classical affine circuits $${{\varLambda }}_{\mathbb {R}^{n}}(\mathcal {A})$$ (which of course is also a case of a polyhedral cone), we first look at an example for the latter case.

In the following, we examine sublinear circuits for various sets $$X\subset \mathbb {R}^{n}$$ (for some $$n\in \mathbb {N}$$) and support sets of the form $$\mathcal {A} = \{(i,j) : 1 \le i,j \le k\}$$, $$k\in \mathbb {N}$$. In these situations, we can write a sublinear circuit ν as a matrix $$M^{(\nu )}\in \mathbb {R}^{k\times k}$$ such that $$M^{(\nu )}_{i,j}=\nu _{(i,j)}$$ for all $$(i,j) \in \mathcal {A}$$.

### Example 4.4

For $$X=\mathbb {R}^{2}$$ and support $$\mathcal {A} = \{(i,j) : 1 \le i,j \le 3\}$$, there are 16 sublinear circuits (up to multiples). Namely, there are 8 sublinear circuits with support size 3 (all of them have non-zero entries 1,− 2,1; they appear in the three rows, the three columns and the two diagonals of the 3 × 3-matrix). Moreover, there are the following 8 sublinear circuits of support size 4. Here, the upper left entry of the matrices refers to the support point (1,1):

$$\begin{array}{cccccc} \left( \begin{array}{rrr} 1 & 0 & 1 \\ 0 & -4 & 0 \\ 0 & 2 & 0 \end{array} \right),\quad \left( \begin{array}{rrr} 0 & 1 & 0 \\ 1 & -3 & 0 \\ 0 & 0 & 1 \end{array} \right) \end{array}$$
(4.1)

as well as the 90-degree, 180-degree and 270-degree rotations about the (2,2)-element of these matrices. As $$\mathbf {0}\in \text {int} \mathbb {R}^{2}$$ and $$\text {rec}(\mathbb {R}^{2})^{\ast }=\{\mathbf {0}\}$$, this reflects in particular the statements of Theorem 4.1 and Lemma 4.2.
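The circuit conditions can be checked mechanically. The following sketch (ours, not from the paper) verifies for the two matrices displayed in (4.1) that the entries sum to 0, that exactly one component is negative, and that the operator A maps them to the zero vector, i.e., that they encode affine R2-circuits on the 3 × 3 grid:

```python
# Sanity check that the two matrices in (4.1) encode affine R^2-circuits
# on the grid {(i,j) : 1 <= i,j <= 3}, with M[i-1][j-1] = nu_{(i,j)}.
GRID = [(i, j) for i in range(1, 4) for j in range(1, 4)]

def apply_A(M):
    # A nu = sum over alpha of alpha * nu_alpha
    s = [0.0, 0.0]
    for (i, j) in GRID:
        s[0] += i * M[i - 1][j - 1]
        s[1] += j * M[i - 1][j - 1]
    return s

M_left = [[1, 0, 1], [0, -4, 0], [0, 2, 0]]
M_right = [[0, 1, 0], [1, -3, 0], [0, 0, 1]]

for M in (M_left, M_right):
    entries = [M[i - 1][j - 1] for (i, j) in GRID]
    assert sum(entries) == 0                      # coordinates sum to 0
    assert sum(1 for e in entries if e < 0) == 1  # one negative component
    assert apply_A(M) == [0.0, 0.0]               # A nu = 0
```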

Next we consider the sublinear circuits of the non-negative orthant $$\mathbb {R}_{+}^{n}$$. For a non-empty subset S ⊂ [n] and a support point $$\alpha \in \mathcal {A} \subset \mathbb {R}^{n}$$, we write αS for the projection of α onto the components of S, i.e., αS := (αs)sS. We also set $$\mathcal {A}_{S} := \{ \alpha _{S} : \alpha \in \mathcal {A}\}$$ and for a matrix M with n rows, we set MS as the submatrix of M defined by the rows with indices in S, which in particular yields MSλ = (Mλ)S.

### Theorem 4.5

Let n ≥ 2 and $$X=\mathbb {R}_{+}^{n}$$ and $$\beta \in \mathcal {A}$$. A normalized element λNβ with |λ+|≥ 2 is contained in $${{\varLambda }}_{X}(\mathcal {A}, \beta )$$ if and only if there exists a non-empty subset S ⊂ [n] with |{αS : α ∈suppλ}| = |suppλ| such that λ is an $$\mathbb {R}^{|S|}$$-circuit for the support set $$\mathcal {A}_{S}$$ and $$(\mathcal {A} \lambda )_{[n] \setminus S} > \mathbf {0}$$.

### Remark 4.6

The conditions in Theorem 4.5 imply $$\beta _{S} = {\sum }_{\alpha \in \lambda ^{+}} \lambda _{\alpha } \alpha _{S}$$ and, hence, βS ∈relintconv((λ+)S), as well as $$\beta _{[n] \setminus S} \in \text {conv}((\lambda ^{+})_{[n] \setminus S}) -\mathbb {R}_{+}^{{[n] \setminus S}}$$.

### Proof of Theorem 4.5

Let $$\lambda \in {{\varLambda }}_{X}(\mathcal {A},\beta )$$ with |λ+|≥ 2. Hence, $$\mathcal {A} \lambda \ge \mathbf {0}$$. For every s ∈ [n] with $$(\mathcal {A} \lambda )_{\{s\}} > 0$$, we observe that λ is also an $$\mathbb {R}^{n-1}_{+}$$-circuit for $$\mathcal {A}_{[n]\setminus \{s\}}$$. The X-circuit property of λ and |λ+|≥ 2 imply that there exists at least one s ∈ [n] with $$(\mathcal {A} \lambda )_{\{s\}} = 0$$; otherwise, choosing a vector ν supported on a two-element subset of λ+ with entries ε and − ε for sufficiently small ε > 0 would give a non-trivial decomposition $$\lambda = (\frac {1}{2} \lambda - \nu ) + (\frac {1}{2} \lambda + \nu )$$.

Let S be the inclusion-maximal subset S ⊂ [n] with $$(\mathcal {A} \lambda )_{S}=\mathbf {0}$$. By the initial considerations, S ≠ ∅ and λ is an $$\mathbb {R}^{|S|}$$-circuit of $$\mathcal {A}_{S}$$. This implies the cardinality statement |{αS : α ∈suppλ}| = |suppλ|. By the definition of S, we have $$(\mathcal {A} \lambda )_{[n] \setminus S} > \mathbf {0}$$.

Conversely, let S ⊂ [n] with |{αS : α ∈suppλ}| = |suppλ| such that λ is an $$\mathbb {R}^{|S|}$$-circuit of $$\mathcal {A}_{S}$$ and $$(\mathcal {A} \lambda )_{[n] \setminus S} > \mathbf {0}$$. Then λ is an $$\mathbb {R}_{+}^{|S|}$$-circuit for $$\mathcal {A}_{S}$$ and, further, an X-circuit for $$\mathcal {A}$$. □

Theorem 4.5 reduces the enumeration of all X-circuits to the enumeration of all classical affine circuits.

### Example 4.7

For $$X=\mathbb {R}_{+}^{2}$$ and the support set $$\mathcal {A} = \{(i,j) : 1 \le i,j \le 3\}$$, there are 65 normalized sublinear circuits. Namely, by Theorem 4.5, there are

1. (1)

27 normalized sublinear circuits of cardinality 2: $$\lambda = - e^{(i_{1},j_{1})} + e^{(i_{2},j_{2})}$$ for 1 ≤ i1 ≤ i2 ≤ 3 and 1 ≤ j1 ≤ j2 ≤ 3 with (i1,j1) ≠ (i2,j2); that is, the entry “1” appears in the “lower right” quadrant of the entry “− 1”.

2. (2)

16 normalized sublinear circuits in which the entries $$\frac {1}{2},-1,\frac {1}{2}$$ appear in columns 1,2,3, respectively, such that the entry − 1 appears on or above the line through the two entries $$\frac {1}{2}$$.

3. (3)

16 normalized sublinear circuits in which the entries $$\frac {1}{2},-1,\frac {1}{2}$$ appear in rows 1,2,3, respectively, such that the entry − 1 appears on or to the left of the line through the two entries $$\frac {1}{2}$$.

4. (4)

8 $$\mathbb {R}^{2}$$-circuits of cardinality 4, which are the normalized versions of the ones from Example 4.4.

Since the diagonal and the anti-diagonal are counted both in cases (2) and (3), we have to subtract 2, which gives 27 + 16 + 16 + 8 − 2 = 65. The following table shows in row i and column j the number of sublinear circuits ν with ν− = {(i,j)}.

| i∖j | 1 | 2 | 3 |
|-----|---|---|---|
| 1 | 8 | 14 | 2 |
| 2 | 14 | 21 | 2 |
| 3 | 2 | 2 | 0 |

As an example, for the case ν− = {(1,2)}, there are five circuits of type (1) as well as the following nine (in the subsequent list not normalized) sublinear circuits ν with ν− = {(1,2)}, i.e., the component with index (1,2) is the negative component. As before, the upper left entry of the matrices refers to the support point (1,1):

$$\begin{array}{cccccc} \left( \begin{array}{rrr} 1 & -2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 1 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 1 & -2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right), \\ [3.5ex] \left( \begin{array}{rrr} 0 & -2 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{array} \right). \end{array}$$
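For X = R+2, the finiteness condition σX(−Aν) < ∞ amounts to Aν ≥ 0 componentwise. The following sketch (our own script, not part of the paper) checks the nine matrices above: coordinate sum 0, unique negative entry at the support point (1,2), and Aν ≥ 0:

```python
# Check of the nine listed matrices for X = R_+^2 on the 3x3 grid:
# coordinate sum 0, unique negative entry at (1,2), and A nu >= 0
# componentwise (so that sigma_X(-A nu) is finite).
def apply_A(M):
    return [sum(i * M[i - 1][j - 1] for i in (1, 2, 3) for j in (1, 2, 3)),
            sum(j * M[i - 1][j - 1] for i in (1, 2, 3) for j in (1, 2, 3))]

mats = [
    [[1, -2, 1], [0, 0, 0], [0, 0, 0]],
    [[1, -2, 0], [0, 0, 1], [0, 0, 0]],
    [[1, -2, 0], [0, 0, 0], [0, 0, 1]],
    [[0, -2, 1], [1, 0, 0], [0, 0, 0]],
    [[0, -2, 0], [1, 0, 1], [0, 0, 0]],
    [[0, -2, 0], [1, 0, 0], [0, 0, 1]],
    [[0, -2, 1], [0, 0, 0], [1, 0, 0]],
    [[0, -2, 0], [0, 0, 1], [1, 0, 0]],
    [[0, -2, 0], [0, 0, 0], [1, 0, 1]],
]

for M in mats:
    flat = [M[i][j] for i in range(3) for j in range(3)]
    assert sum(flat) == 0                       # entries sum to 0
    assert M[0][1] == -2 and min(flat) == -2    # negative component at (1,2)
    assert all(c >= 0 for c in apply_A(M))      # A nu in X^* = R_+^2
```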

The following theorem characterizes the connection between the X-circuits and the $$\mathbb {R}^{n}$$-circuits for more general polyhedral cones X.

### Theorem 4.8

Let X = pos{v(1),…,v(k)} be an n-dimensional polyhedral cone spanned by the vectors v(1),…,v(k), where k ≥ n. Then

$$\left\{ \lambda \in {{\varLambda}}_{X}(\mathcal{A}) : \mathcal{A} \lambda = \mathbf{0} \right\} = {{\varLambda}}_{\mathbb{R}^{n}}(\mathcal{A}).$$
(4.2)

### Proof

Fix $$\beta \in \mathcal {A}$$ and denote by W the k × n-matrix whose rows are the transposed vectors (v(1))T,…,(v(k))T. Hence, $$X^{\ast } = \{x \in \mathbb {R}^{n} : Wx \ge \textbf {0} \}$$. The set $${{\varLambda }}_{X}(\mathcal {A},\beta )$$ is the set of normalized vectors spanning the extreme rays of the cone

$$\begin{array}{@{}rcl@{}} K_{X} & =& \{\nu \in N_{\beta} : \sigma_{X}(-\mathcal{A} \nu) \le 0\} = \{\nu \in N_{\beta} : \mathcal{A} \nu \in X^{\ast}\} \\ &=& \{\nu \in N_{\beta} : W{\mathcal{A}}\nu \ge \textbf{0}\} \end{array}$$

and the set $${{\varLambda }}_{\mathbb {R}^{n}}(\mathcal {A},\beta )$$ is the set of normalized vectors spanning the extreme rays of the cone

$$K_{\mathbb{R}^{n}} = \{\nu \in N_{\beta} : \sigma_{\mathbb{R}^{n}}(-\mathcal{A} \nu) \le 0\} = \{\nu \in N_{\beta} : \mathcal{A} \nu = \mathbf{0}\}.$$

Since the matrix W has rank n, the linear mapping xWx is injective, and thus its kernel is {0}. Hence, $$K_{\mathbb {R}^{n}} = \{ \nu \in N_{\beta } : W \mathcal {A} \nu = \mathbf {0}\}$$. The cone $$K_{\mathbb {R}^{n}}$$ is contained in the cone KX. As a consequence, if λNβ is not contained in the right-hand side of (4.2), it is not contained in the left-hand side.

Conversely, let λ ∈ Nβ be contained in the right-hand side of (4.2). Then $$\mathcal {A} \lambda = \textbf {0}$$ and $$W \mathcal {A} \lambda = \textbf {0}$$. Assume there exists a decomposition into a convex combination λ = 𝜃1λ(1) + 𝜃2λ(2) with λ(1),λ(2) ∈ KX and $$W \mathcal {A} \lambda ^{(1)} \neq \mathbf {0}$$. Since $$W \mathcal {A} \lambda = \textbf {0}$$ and $$W \mathcal {A} \lambda ^{(1)} \ge \textbf {0}$$, at least one component of $$W\mathcal {A} (\lambda - \theta _{1} \lambda ^{(1)}) = \theta _{2} W \mathcal {A} \lambda ^{(2)}$$ is smaller than zero, which contradicts λ(2) ∈ KX. Hence, λ is contained in the left-hand side of (4.2). □

## The n-Dimensional Cube X = [− 1,1]n

We discuss the sublinear circuits of the n-dimensional cube [− 1,1]n, which is a prominent case of a compact polyhedron. Throughout the section, we assume X = [− 1,1]n for some fixed $$n\in \mathbb {N}$$ and $$\mathcal {A}\subset \mathbb {R}^{n}$$ non-empty and finite. We can already apply some of the former statements to gain knowledge of the structure of X-circuits. For example, as $$\text {rec}(X)^{\ast }=\mathbb {R}^{n}=-\text {rec}(X)^{\ast }$$, Lemma 4.2 implies that every element supported on exactly two points is an X-circuit. Hence, we examine the structure of those X-circuits $$\lambda \in {{\varLambda }}_{X}(\mathcal {A})$$ that have more than two support points. We begin with a necessary criterion.

### Lemma 5.1

Let λ ∈ Nβ with λβ = − 1 for some $$\beta \in \mathcal {A}$$ and |suppλ|≥ 3. If for all j ∈ [n]

$$\left( \alpha_{j}\le\beta_{j}~\text{ for all }~\alpha\in\lambda^{+} \right) \quad \text{ or } \quad \left( \alpha_{j}\ge\beta_{j}~\text{ for all }~\alpha\in\lambda^{+} \right),$$
(5.1)

then $$\lambda \notin {{\varLambda }}_{X}(\mathcal {A})$$.

Note that the precondition expresses that there exists a vertex v of [− 1,1]n at which, for all α ∈ λ+, the function x↦(βα)Tx attains its maximum over [− 1,1]n.

### Proof

We can assume β ∉ relint(conv(λ+)), since otherwise the preconditions imply β = α for all α ∈ λ+, violating |suppλ|≥ 3. Hence, we have $$\mathcal {A} \lambda \neq \mathbf {0}$$ and the supremum of $$x \mapsto (-\mathcal {A} \lambda )^{T} x$$ is attained at some vertex of [− 1,1]n.

Now assume $$\lambda \in {{\varLambda }}_{X}(\mathcal {A})$$. In order to come up with a contradiction, we construct a decomposition $$\lambda = {\sum }_{\alpha \in \lambda ^{+}} \theta _{\alpha } \nu ^{(\alpha )}$$ with supports supp ν(α) = {α,β} of cardinality 2 by setting

$$\theta_{\alpha}\nu_{\alpha}^{(\alpha)}:=\lambda_{\alpha} \quad \text{and} \quad \theta_{\alpha}\nu_{\beta}^{(\alpha)}:=-\theta_{\alpha}\nu_{\alpha}^{(\alpha)}=-\lambda_{\alpha}\qquad\text{ for all }\alpha\in\lambda^{+}.$$

We observe that ν(α) ∈ Nβ for all α ∈ λ+ and $$(\theta _{\alpha })_{\alpha \in \lambda ^{+}}$$ can be chosen with the property $${\sum }_{\alpha \in \lambda ^{+}} \theta _{\alpha } = 1$$. Moreover, $${\sum }_{\alpha \in \lambda ^{+}}\theta _{\alpha }\nu ^{(\alpha )}= \lambda$$ and

$$\begin{array}{@{}rcl@{}} \sum\limits_{\alpha\in\lambda^{+}}\theta_{\alpha}\sigma_{X}(-\mathcal{A}\nu^{(\alpha)}) &= & \sum\limits_{\alpha\in\lambda^{+}}\theta_{\alpha}\sum\limits_{j=1}^{n}\left|\nu_{\alpha}^{(\alpha)}(\alpha_{j}-\beta_{j})\right| = \sum\limits_{\alpha\in\lambda^{+}}\sum\limits_{j=1}^{n}\left|\lambda_{\alpha}(\alpha_{j}-\beta_{j})\right|\\ &\overset{(5.1)}{=} & \sum\limits_{j=1}^{n}\left|\sum\limits_{\alpha\in\lambda^{+}} \lambda_{\alpha}(\alpha_{j}-\beta_{j})\right| = \sigma_{X}(-\mathcal{A}\lambda). \end{array}$$

By distinguishing the cases αj = βj and αj ≠ βj, it is straightforward to see that this representation of $$\sigma _{X}(-\mathcal {A}\lambda )$$ as a convex combination is locally linear. Hence, λ cannot be an X-circuit, which is the desired contradiction. □
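For the cube, the support function is the ℓ1-norm, σ[−1,1]n(y) = ∥y∥1, so the splitting into 2-term pieces can be illustrated numerically. The following sketch uses our own example data (β = (3,3) and three weighted points dominated by β coordinatewise, so that condition (5.1) holds); it is not taken from the paper:

```python
# Illustration of the decomposition in the proof of Lemma 5.1 for
# X = [-1,1]^2, where sigma_X(y) = ||y||_1. beta = (3,3) dominates
# lambda^+ coordinatewise, so condition (5.1) holds and sigma_X(-A lambda)
# splits exactly into the contributions of the 2-term circuits nu^(alpha).
def l1(v):
    return sum(abs(t) for t in v)

beta = (3.0, 3.0)
plus = {(1.0, 1.0): 0.5, (1.0, 2.0): 0.25, (2.0, 1.0): 0.25}  # lambda^+ with weights

# A lambda = sum_alpha lambda_alpha * alpha - beta   (since lambda_beta = -1)
A_lam = [sum(w * a[k] for a, w in plus.items()) - beta[k] for k in (0, 1)]
sigma_lam = l1([-y for y in A_lam])

# sum over alpha of lambda_alpha * ||alpha - beta||_1, i.e. the total value
# of the 2-term circuits supported on {alpha, beta}
split = sum(w * l1([a[k] - beta[k] for k in (0, 1)]) for a, w in plus.items())

assert abs(sigma_lam - split) < 1e-12
```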

We provide a slightly more general version of Lemma 5.1, whose proof is analogous.

### Lemma 5.2

Let λ ∈ Nβ with λβ = − 1 for some $$\beta \in \mathcal {A}$$, and |suppλ|≥ 3. Further suppose that for $$J(\lambda ):=\{j:\beta _{j}={\sum }_{\alpha \in \mathcal {A}}\lambda _{\alpha } \alpha _{j}\}$$, the support suppλ can be disjointly decomposed into the two sets

$$\mathcal{A}^{(1)}=\{\alpha: \alpha_{j}=\beta_{j} \text{ for all }j\notin J(\lambda)\}~\text{ and }~ \mathcal{A}^{(2)}= \{\alpha:\alpha_{j}= \beta_{j} \text{ for all }j\in J(\lambda)\} \ne \emptyset$$

such that for all j ∈ [n] ∖ J(λ) we have

$$\left( \alpha_{j}\le\beta_{j} \text{ for all }\alpha\in \mathcal{A}^{(2)}\right) \quad \text{ or } \quad \left( \alpha_{j}\ge\beta_{j} \text{ for all }\alpha\in \mathcal{A}^{(2)}\right).$$

Then λ is not an X-circuit of $$\mathcal {A}$$.

### Example 5.3

The planar case [− 1,1]2. For the case of the planar square X = [− 1,1]2 we provide some explicit descriptions of the sublinear circuits for support sets located on a grid {(i,j) : 1 ≤ i,jk} for some $$k \in \mathbb {N}$$.

If λ is a normalized [− 1,1]2-circuit, then, due to $$\text {rec}([-1,1]^{2})^{\ast }=-\text {rec}([-1,1]^{2})^{\ast }=\mathbb {R}^{2}$$, there is no restriction on the location of the negative coordinate. However, using Theorem 4.1, we can exclude potential sublinear circuits λ ∈ Nβ for some $$\beta \in \mathcal {A}$$ where relintconv(suppλ) ∩ λ+ ≠ ∅ and those where β ∈conv(λ+) but $$\mathcal {A}\lambda \ne \mathbf {0}$$; in particular, the latter situation excludes the case β ∈conv(λ+) ∖relintconv(λ+). Moreover, using Lemma 5.1, we can exclude all those potential [− 1,1]2-circuits where |suppλ|≥ 3 and, for every j ∈ [2], (αj ≤ βj for all α ∈ λ+) or (αj ≥ βj for all α ∈ λ+).

For the case k = 3, i.e., the support set $$\mathcal {A} = \{(i,j) : 1 \le i, j \le 3\}$$, these structural statements make it possible to determine the exact set of sublinear circuits. Up to multiples, there are 132 X-circuits:

1. (1)

72 sublinear circuits supported on two elements: $$e^{(i_{1},j_{1})} - e^{(i_{2},j_{2})}$$ for 1 ≤ i1,i2,j1,j2 ≤ 3 with (i1,j1)≠(i2,j2).

2. (2)

27 sublinear circuits in which the entries $$\frac {1}{2},-1,\frac {1}{2}$$ appear in columns 1,2,3, respectively.

3. (3)

27 sublinear circuits in which the entries $$\frac {1}{2},-1,\frac {1}{2}$$ appear in rows 1,2,3, respectively.

4. (4)

8 sublinear circuits supported on 4 elements.

Since the diagonal and the anti-diagonal are counted both in cases (2) and (3), this gives 72 + 27 + 27 + 8 − 2 = 132 sublinear circuits. The following table shows in row i and column j the number of normalized sublinear circuits λ with λ− = {(i,j)}.

| i∖j | 1 | 2 | 3 |
|-----|---|---|---|
| 1 | 8 | 17 | 8 |
| 2 | 17 | 32 | 17 |
| 3 | 8 | 17 | 8 |

The subsequent list gives the 17 (not necessarily normalized) X-circuits ν with ν− = {(1,2)}, i.e., the component with index (1,2) is the negative component. As before, the upper left entry of the matrices refers to the support point (1,1):

$$\begin{array}{@{}rcl@{}} \begin{array}{cccccc} \left( \begin{array}{rrr} 1 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array} \right) , & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{array} \right), \\ [3.5ex] \left( \begin{array}{rrr} 0 & -1 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right) , & \left( \begin{array}{rrr} 1 & -2 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 1 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right), \\ [3.5ex] \left( \begin{array}{rrr} 0 & -2 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 1 & -2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right) , & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 1 & -2 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right) , \\ [3.5ex] \left( \begin{array}{rrr} 0 & -2 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array} \right), & \left( \begin{array}{rrr} 0 & -2 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 1 \end{array} \right). \end{array} \end{array}$$
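As a sanity check, the following script (our own, not part of the paper) confirms that the 17 matrices above are pairwise distinct, have coordinate sum 0, and have their unique negative entry at the support point (1,2), in accordance with the entry 17 in row 1, column 2 of the table:

```python
# Check of the 17 listed matrices: coordinate sum 0 and unique negative
# entry at the support point (1,2) (matrix position row 1, column 2).
mats = [
    [[1, -1, 0], [0, 0, 0], [0, 0, 0]], [[0, -1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1, 0], [0, 0, 0], [1, 0, 0]], [[0, -1, 0], [0, 1, 0], [0, 0, 0]],
    [[0, -1, 0], [0, 0, 0], [0, 1, 0]], [[0, -1, 1], [0, 0, 0], [0, 0, 0]],
    [[0, -1, 0], [0, 0, 1], [0, 0, 0]], [[0, -1, 0], [0, 0, 0], [0, 0, 1]],
    [[1, -2, 1], [0, 0, 0], [0, 0, 0]], [[0, -2, 1], [1, 0, 0], [0, 0, 0]],
    [[0, -2, 1], [0, 0, 0], [1, 0, 0]], [[1, -2, 0], [0, 0, 1], [0, 0, 0]],
    [[0, -2, 0], [1, 0, 1], [0, 0, 0]], [[0, -2, 0], [0, 0, 1], [1, 0, 0]],
    [[1, -2, 0], [0, 0, 0], [0, 0, 1]], [[0, -2, 0], [1, 0, 0], [0, 0, 1]],
    [[0, -2, 0], [0, 0, 0], [1, 0, 1]],
]
assert len(mats) == 17
for M in mats:
    flat = [M[i][j] for i in range(3) for j in range(3)]
    assert sum(flat) == 0                  # entries of nu sum to 0
    negs = [(i, j) for i in range(3) for j in range(3) if M[i][j] < 0]
    assert negs == [(0, 1)]                # unique negative component at (1,2)
assert len({tuple(map(tuple, M)) for M in mats}) == 17  # pairwise distinct
```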

### The case k = 4.

In the case $$\mathcal {A} = \{ (i,j) : 1 \le i, j \le 4\}$$, a computer calculation shows that there are 980 normalized X-circuits, which come in the following classes with regard to λ−:

| i∖j | 1 | 2 | 3 | 4 |
|-----|---|---|---|---|
| 1 | 15 | 47 | 47 | 15 |
| 2 | 47 | 136 | 136 | 47 |
| 3 | 47 | 136 | 136 | 47 |
| 4 | 15 | 47 | 47 | 15 |

Note that in this case, the criteria of this and the previous section are not sufficient to determine the set of sublinear circuits.

## Reducibility and Extremality

By Proposition 2.8, the reduced sublinear circuits provide an irredundant decomposition of conditional SAGE cones. In this section, we discuss some criteria and key examples for reduced sublinear circuits. As an application of the criteria, we will determine the extremals of the [− 1,1]-SAGE cone in Theorem 6.6.

For the classical case of affine circuits supported on a finite set $$\mathcal {A}$$, the following exact characterization in terms of the support is known.

### Proposition 6.1

([12, Corollary 4.7], [7, Theorem 3.2]) A vector ν is a reduced $$\mathbb {R}^{n}$$-circuit if and only if

$$\mathcal{A} \cap \text{conv}(\nu^{+}) = \nu^{+} \cup \{\nu^{-}\}.$$

For example, with regard to the two matrices in (4.1) of Example 4.4, the left one is not reduced, but the right one is. The following theorem generalizes the necessary direction of Proposition 6.1 to the constrained situation, where X is a non-empty convex set in $$\mathbb {R}^{n}$$.

### Theorem 6.2

Let $$\lambda \in {{\varLambda }}_{X}(\mathcal {A},\beta )$$. If there exists $$\beta ^{\prime }\in \mathcal {A}\setminus \text {supp} \lambda$$ and some normalized $$\lambda ^{\prime }\in N_{\beta ^{\prime }}$$ where $$(\lambda ^{\prime })^{+} \subset \text {supp}(\lambda )$$ and $$\mathcal {A}\lambda ^{\prime } = \gamma \mathcal {A}\lambda$$ for some γ ≥ 0, then λ is not reduced.

Before providing the proof within this section, we discuss its consequences.

### Corollary 6.3

Let $$\lambda \in {{\varLambda }}_{X}(\mathcal {A},\beta )$$. If $$(\text {conv}(\text {supp} \lambda )\cap \mathcal {A})\setminus \text {supp} \lambda \ne \emptyset$$, then λ is not reduced. Consequently,

$$\{\lambda\in{{\varLambda}}_{X}^{\star}(\mathcal{A}):\mathcal{A}\lambda=\mathbf{0}\}\subset {{\varLambda}}_{\mathbb{R}^{n}}^{\star}(\mathcal{A}).$$

### Proof

The first statement follows by applying Theorem 6.2 with $$\beta ^{\prime }\in (\text {conv}(\text {supp} \lambda ) \cap \mathcal {A})\setminus \text {supp} \lambda$$, with $$(\lambda ^{\prime })^{+}$$ given by the vertices of conv(suppλ), and γ = 0. The second one is a direct consequence of Proposition 6.1 and the fact that for $$X=\mathbb {R}^{n}$$ all X-circuits λ have the property $$\mathcal {A}\lambda =\mathbf {0}$$. □

Using this corollary, we can provide an analogue of Theorem 4.8.

### Theorem 6.4

Let X = pos{v(1),…,v(k)} be an n-dimensional polyhedral cone spanned by the vectors v(1),…,v(k), where k ≥ n. Then

$$\left\{\lambda \in {{\varLambda}}_{X}^{\star}(\mathcal{A}) : \mathcal{A} \lambda = 0 \right\} = {{\varLambda}}^{\star}_{\mathbb{R}^{n}}(\mathcal{A}).$$

### Proof

By Corollary 6.3, every $$\lambda \in {{\varLambda }}_{X}^{\star }(\mathcal {A})$$ is contained in $${{\varLambda }}_{\mathbb {R}^{n}}^{\star }(\mathcal {A})$$. Suppose there exists some $$\lambda \in {{\varLambda }}_{\mathbb {R}^{n}}^{\star }(\mathcal {A})$$ that is not contained in $${{\varLambda }}_{X}^{\star }(\mathcal {A})$$. By Theorem 4.8, $$\lambda \in {{\varLambda }}_{X}(\mathcal {A})$$. As $$\lambda \notin {{\varLambda }}_{X}^{\star }(\mathcal {A})$$, there exist $$m\in \mathbb {N}$$ and X-circuits ν(1),…,ν(m) which are non-proportional to λ and which satisfy $${\sum }_{i\le m}(\nu ^{(i)},\sigma _{X}(- \mathcal {A} \nu ^{(i)})) = (\lambda ,\sigma _{X}(- \mathcal {A} \lambda ))$$. Since $$\sigma _{X}(-\mathcal {A}\lambda )=0$$ and $$\sigma _{X}(y) \in \{0,\infty \}$$ for all $$y\in \mathbb {R}^{n}$$, we have $$\sigma _{X}(- \mathcal {A} \nu ^{(i)}) =0$$ for all i ∈ [m].

As in Theorem 4.8, denote by W the k × n-matrix whose rows are the transposed vectors (v(1))T,…,(v(k))T. Again,

$$\sigma_{X}(-y) < \infty\quad \text{ if and only if }\quad Wy\ge \textbf{0}.$$

Since $$\mathcal {A}\lambda =W\mathcal {A}\lambda =\mathbf {0}$$, we obtain $$W{\mathcal {A}}\nu ^{(i)} = \textbf {0}$$ and, as the kernel of W is {0}, further $$\mathcal {A}\nu ^{(i)} = \textbf {0}$$ for all i ∈ [m]. Hence, $$\nu ^{(i)}\in {{\varLambda }}_{\mathbb {R}^{n}}(\mathcal {A})$$ and therefore $$\lambda \notin {{\varLambda }}_{\mathbb {R}^{n}}^{\star }(\mathcal {A})$$, which is a contradiction. □

We illustrate the applicability of Theorem 6.2 in determining the reduced sublinear circuits by returning to the univariate example X = [− 1,1], which was started in Proposition 3.1.

### Theorem 6.5

Let X = [− 1,1] and $$\mathcal {A}=\{\alpha _{1},\ldots ,\alpha _{m}\}$$ sorted ascendingly, where m ≥ 3. Then, $${{\varLambda }}_{X}^{\star }(\mathcal {A})$$ consists of the following sublinear circuits:

1. (1)

λ = e(2) − e(1) or λ = e(m− 1) − e(m), or

2. (2)

$$\displaystyle {\lambda = \frac {\alpha _{i}-\alpha _{i+1}}{\alpha _{i-1}-\alpha _{i+1}} e^{(i-1)}- e^{(i)}+ \frac {\alpha _{i-1}-\alpha _{i}}{\alpha _{i-1}-\alpha _{i+1}} e^{(i+1)}}$$ for some i ∈{2,…,m − 1}.

Note that this gives, in particular,

$${{\varLambda}}_{X}^{\star}(\mathcal{A})\cap \left\{\lambda\in \bigcup_{\beta\in\mathcal{A}} N_{\beta}: |\text{supp} \lambda|=3\right\} = {{\varLambda}}_{\mathbb{R}}^{\star}(\mathcal{A}).$$
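In case (2), the two outer coefficients are the barycentric coordinates of αi with respect to αi−1 and αi+1, so the entries sum to 0 and Aλ = 0. A quick check with our own sample exponents (the numbers below are illustrative, not from the paper):

```python
# The 3-term reduced circuits of case (2) are affine circuits on three
# consecutive points: the outer coefficients are the barycentric
# coordinates of alpha_i with respect to alpha_{i-1} and alpha_{i+1}.
a_prev, a_mid, a_next = 0.0, 0.3, 1.0   # alpha_{i-1} < alpha_i < alpha_{i+1}

c_prev = (a_mid - a_next) / (a_prev - a_next)   # coefficient of e^(i-1)
c_next = (a_prev - a_mid) / (a_prev - a_next)   # coefficient of e^(i+1)

# together with the coefficient -1 of e^(i), the entries sum to 0 ...
assert abs(c_prev + c_next - 1.0) < 1e-12
# ... and A lambda = 0, i.e. alpha_i is the weighted average
assert abs(c_prev * a_prev + c_next * a_next - a_mid) < 1e-12
```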

### Proof of Theorem 6.5

By Proposition 3.1 and Corollary 6.3, the only candidates for normalized reduced X-circuits are

1. (1)

λ = e(i) − e(i± 1) or

2. (2)

$$\displaystyle {\lambda = \frac {\alpha _{i}-\alpha _{i+1}}{\alpha _{i-1}-\alpha _{i+1}} e^{(i-1)}- e^{(i)} + \frac {\alpha _{i-1}-\alpha _{i}}{\alpha _{i-1}-\alpha _{i+1}} e^{(i+1)}}$$ for some i ∈{2,…,m − 1}.

For every X-circuit e(i+ 1) − e(i) with i > 1, the X-circuit e(i+ 1) − e(1) satisfies the precondition of Theorem 6.2, and for every X-circuit e(i− 1) − e(i) with i < m, the X-circuit e(i− 1) − e(m) satisfies the precondition of Theorem 6.2. Hence, all those X-circuits are not reduced.

We see that for every i ∈ [m], precisely one normalized X-circuit λ with negative component at αi appears in the listed set of possible reduced X-circuits. As $$\text {rec}(X)^{\ast }=\mathbb {R}$$, there exists at least one X-AGE signomial whose i-th coefficient is negative; hence, $$C_{X}(\mathcal {A},\alpha _{i})\ne \emptyset$$ for all i ∈ [m]. As $$C_{X}(\mathcal {A},\alpha _{i})$$ is the union of several λ-witnessed X-AGE cones and those cones can be represented solely by reduced X-circuits (compare Sections 4 and 5), for every i ∈ [m] there exists at least one reduced X-circuit in $$C_{X}(\mathcal {A},\alpha _{i})$$. With this, the statement follows. □

### Proof of Theorem 6.2

Since λ and $$\lambda ^{\prime }$$ are normalized elements in Nβ and $$N_{\beta ^{\prime }}$$, we have

$$\begin{array}{@{}rcl@{}} \sum\limits_{\alpha \in \lambda^{+}} \lambda_{\alpha} & = & 1 \text{ and } \lambda_{\beta} = -1,~\lambda_{\alpha} \ge 0 \text{ for } \alpha \in \mathcal{A} \setminus \{\beta\}, \\ \sum\limits_{\alpha \in (\lambda^{\prime})^{+}} \lambda^{\prime}_{\alpha} & = & 1 \text{ and } \lambda^{\prime}_{\beta^{\prime}} = -1, \lambda^{\prime}_{\alpha} \ge 0 \text{ for } \alpha \in \mathcal{A} \setminus \{\beta^{\prime}\}. \end{array}$$

Let τ be the maximal real number in [0,1/γ] (with the convention $$1/\gamma :=\infty$$ if γ = 0) such that $$\nu ^{(1)} := \lambda - \tau \lambda ^{\prime } \in N_{\beta }$$. That maximum clearly exists, and, since $$(\lambda ^{\prime })^{+} \subset \text {supp} \lambda$$, the number τ is positive. Moreover, since λ and $$\lambda ^{\prime }$$ are normalized and distinct, we have τ < 1.

Similarly, let $$\tau ^{\prime }$$ be the maximal real number in [0,γ] with $$\nu ^{(2)} := \lambda ^{\prime } - \tau ^{\prime } \lambda \in N_{\beta ^{\prime }}$$. Here, we have $$0 \le \tau ^{\prime } \le 1$$ (and, in particular, $$\tau ^{\prime }=0$$ if γ = 0 or $$(\lambda ^{\prime })^{+}\subsetneq \lambda ^{+}$$). Hence, $$\nu ^{(1)} \in N_{\beta },\nu ^{(2)} \in N_{\beta ^{\prime }}$$ and $$1-\tau \tau ^{\prime }\in (0,1]$$.

Since $$\nu ^{(1)} + \tau \nu ^{(2)} = \lambda - \tau \lambda ^{\prime } +\tau \lambda ^{\prime } - \tau \tau ^{\prime } \lambda = (1 - \tau \tau ^{\prime }) \lambda$$, we see that λ can be written as a conic combination of the two non-proportional (not necessarily normalized) elements ν(1) ∈ Nβ and $$\nu ^{(2)} \in N_{\beta ^{\prime }}$$. Due to $$\mathcal {A} \lambda ^{\prime } = \gamma \mathcal {A} \lambda$$ and as both 1 − τγ ≥ 0 and $$\gamma -\tau ^{\prime }\ge 0$$, we obtain

$$\begin{array}{@{}rcl@{}} \sigma_{X}(-\mathcal{A} \nu^{(1)}) & = & \sigma_{X}(-\mathcal{A} \lambda + \tau\mathcal{A} \lambda^{\prime})=\sigma_{X}(-\mathcal{A} \lambda + \tau\gamma\mathcal{A} \lambda)\\ & = & (1-\tau\gamma)\sigma_{X}(-\mathcal{A} \lambda) = \sigma_{X}(-\mathcal{A} \lambda) - \tau \sigma_{X}(-\mathcal{A} \lambda^{\prime}), \\ \sigma_{X}(-\mathcal{A} \nu^{(2)}) & = & \sigma_{X}(-\mathcal{A} \lambda^{\prime} + \tau^{\prime} \mathcal{A} \lambda) = \sigma_{X}(-\gamma\mathcal{A} \lambda + \tau^{\prime} \mathcal{A} \lambda) \\ & = & (\gamma-\tau^{\prime})\sigma_{X}(-\mathcal{A} \lambda) = \sigma_{X}(-\mathcal{A} \lambda^{\prime}) - \tau^{\prime} \sigma_{X}(-\mathcal{A} \lambda) \end{array}$$

and further

$$\begin{array}{@{}rcl@{}} \sigma_{X}(-\mathcal{A}\lambda) & = & \frac{1}{1 - \tau \tau^{\prime}} \left( \sigma_{X} \left( - \mathcal{A} \lambda \right) - \tau\sigma_{X} \left( - \mathcal{A} \lambda^{\prime} \right) + \tau\sigma_{X} \left( - \mathcal{A} \lambda^{\prime} \right) - \tau \tau^{\prime} \sigma_{X} \left( - \mathcal{A} \lambda \right) \right) \\ & = & \frac{1}{1 - \tau \tau^{\prime}} \left( \sigma_{X} \left( - \mathcal{A} \nu^{(1)} \right) + \tau \sigma_{X} \left( - \mathcal{A} \nu^{(2)} \right)\right), \end{array}$$

which shows that $$(\lambda ,\sigma _{X}(-\mathcal {A}\lambda ))$$ does not generate an extreme ray in $$G_{X}(\mathcal {A})$$. By the definition of a reduced sublinear circuit, $$\lambda \notin {{\varLambda }}_{X}^{\star }(\mathcal {A})$$. □

As a consequence of the results in this section, we can give an exact characterization of the extreme rays of the [− 1,1]-SAGE cone.

### Theorem 6.6

Let X = [− 1,1] and $$\mathcal {A}=\{\alpha _{1},\ldots ,\alpha _{m}\}$$ be sorted ascendingly, where m ≥ 3. The extremal rays of $$C_{X}(\mathcal {A})$$ are the following:

1. (1)

$$\mathbb {R}_{+}\cdot \left (\exp (\alpha _{2}x)-\exp (\alpha _{1}-\alpha _{2}) \exp (\alpha _{1}x) \right )$$,

2. (2)

$$\mathbb {R}_{+}\cdot \left (\exp (\alpha _{m-1}x)-\exp (\alpha _{m-1}-\alpha _{m}) \exp (\alpha _{m}x) \right )$$,

3. (3)

$$\mathbb {R}_{+}\cdot \{c_{i-1}\exp (\alpha _{i-1}x)+c_{i}\exp (\alpha _{i}x)+c_{i+1}\exp (\alpha _{i+1}x)\}$$, with

$$c_{i-1}>0,\quad c_{i+1}>0 \quad\text{ and }\quad c_{i}=-\left( \frac{c_{i-1}}{\lambda_{i-1}}\right)^{\lambda_{i-1}}\left( \frac{c_{i+1}}{\lambda_{i+1}}\right)^{\lambda_{i+1}},$$

where

$$\begin{array}{@{}rcl@{}} \lambda_{i-1}&=&\frac{\alpha_{i+1}-\alpha_{i}}{\alpha_{i+1}-\alpha_{i-1}}, \quad \lambda_{i+1}=\frac{\alpha_{i}-\alpha_{i-1}}{\alpha_{i+1}-\alpha_{i-1}}\quad\text{ and}\\ \alpha_{i-1} - \alpha_{i+1} &\le& \ln \frac{c_{i-1}\lambda_{i+1}}{c_{i+1}\lambda_{i-1}} \le \alpha_{i+1} - \alpha_{i-1}. \end{array}$$
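The coefficient condition of case (3) can be sanity-checked numerically. The sketch below uses hypothetical data $$\alpha_{i-1}=0$$, $$\alpha_i=1$$, $$\alpha_{i+1}=3$$ and $$c_{i-1}=c_{i+1}=1$$ (any values satisfying the stated inequalities would do) and verifies that the resulting 3-term signomial has a double zero at the point $$x^{\ast}$$ derived in the proof below, i.e., it touches zero from above inside [− 1,1].

```python
import math

# Hypothetical exponents alpha_{i-1} < alpha_i < alpha_{i+1} and outer coefficients.
a_prev, a_mid, a_next = 0.0, 1.0, 3.0
c_prev, c_next = 1.0, 1.0

# Barycentric weights from Theorem 6.6, case (3).
lam_prev = (a_next - a_mid) / (a_next - a_prev)   # lambda_{i-1} = 2/3
lam_next = (a_mid - a_prev) / (a_next - a_prev)   # lambda_{i+1} = 1/3

# Middle coefficient on the extreme ray (the AM/GM equality case).
c_mid = -((c_prev / lam_prev) ** lam_prev) * ((c_next / lam_next) ** lam_next)

# Location of the zero of f, as in the proof of Theorem 6.6.
x_star = math.log(c_prev * lam_next / (c_next * lam_prev)) / (a_next - a_prev)

def f(x):
    return (c_prev * math.exp(a_prev * x) + c_mid * math.exp(a_mid * x)
            + c_next * math.exp(a_next * x))

def df(x):
    return (a_prev * c_prev * math.exp(a_prev * x)
            + a_mid * c_mid * math.exp(a_mid * x)
            + a_next * c_next * math.exp(a_next * x))

assert -1.0 <= x_star <= 1.0           # the inequality condition of case (3)
assert abs(f(x_star)) < 1e-12          # f vanishes at x*
assert abs(df(x_star)) < 1e-12         # ... and does so tangentially
assert all(f(x_star + t) >= 0 for t in (-0.1, 0.1))  # nonnegative nearby
```

The double zero at $$x^{\ast}$$ is exactly the weighted AM/GM equality case, which is why perturbing $$c_i$$ upward leaves the interior of the cone and perturbing it downward leaves the cone entirely.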

We first deal with the atomic extreme rays, that is, extreme rays which are supported on a single element. These extreme rays are not captured by the X-circuit view.

### Lemma 6.7

(Atomic extreme rays of $$C_{X}(\mathcal {A})$$ for compact sets X) Let $$X\subset \mathbb {R}^{n}$$ be a compact set and $$\mathcal {A}\subset \mathbb {R}^{n}$$ finite with $$|\mathcal {A}|\ge 2$$. Then, there are no atomic extreme rays of $$C_{X}(\mathcal {A})$$.

### Proof

As in Lemma 4.2, we use the invariance of the X-circuits under translation of X and can assume w.l.o.g. that $$\mathbf{0} \in X$$. Let $$\alpha, \beta \in \mathcal {A}$$ with $$\alpha \neq \beta$$ be arbitrary. Assume that $$f=c_{\alpha }\exp (\alpha ^{T}x)$$ with cα > 0 is extremal. We observe that $$\lambda \in N_{\beta}$$ with λα = 1 = −λβ is an X-circuit inducing the ray

$$\mathbb{R}_{+}\cdot \left( \exp(\alpha^{T}x)-\frac{1}{\exp(s)}\exp(\beta^{T}x)\right),$$

where s ≥ 0 is finite and such that $$\sigma _{X}(-\mathcal {A}\lambda )=s$$. Hence, the X-AGE signomials

$$f^{(1)}= c_{\alpha} \exp(\alpha^{T}x) -\frac{c_{\alpha}}{\exp(s)} \exp(\beta^{T}x), \qquad f^{(2)}=\frac{c_{\alpha}}{\exp(s)}\exp(\beta^{T}x)$$

sum to f, contradicting the extremality of f. □
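The decomposition in this proof can be illustrated numerically. The sketch below uses the hypothetical data $$X=[-1,1]\subset\mathbb{R}$$, α = 1, β = 2 and cα = 1, for which $$s = \sigma_X(-\mathcal{A}\lambda) = \sup_{x \in [-1,1]}(\beta-\alpha)x = 1$$, and checks that both summands are nonnegative on X and sum to f.

```python
import math

# Hypothetical data: X = [-1, 1], alpha = 1, beta = 2, c_alpha = 1, so f(x) = exp(x).
alpha, beta, c_alpha = 1.0, 2.0, 1.0

# s = sigma_X(-A lambda) = sup over [-1,1] of (beta - alpha) * x = |beta - alpha|.
s = abs(beta - alpha)

f1 = lambda x: c_alpha * math.exp(alpha * x) - c_alpha / math.exp(s) * math.exp(beta * x)
f2 = lambda x: c_alpha / math.exp(s) * math.exp(beta * x)
f  = lambda x: c_alpha * math.exp(alpha * x)

xs = [k / 50 - 1 for k in range(101)]           # grid on X = [-1, 1]
assert all(f1(x) >= -1e-12 for x in xs)         # f^(1) is X-AGE (nonnegative on X)
assert all(f2(x) > 0 for x in xs)               # f^(2) is a positive exponential term
assert all(abs(f1(x) + f2(x) - f(x)) < 1e-12 for x in xs)  # they sum to f
```

Note that $$f^{(1)}$$ vanishes at the boundary point x = 1 of X, where the supremum defining s is attained, but is genuinely nonnegative on all of X.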

### Proof of Theorem 6.6

Let $$\mathcal {A} = \{\alpha _{1},\ldots ,\alpha _{m}\}$$ be sorted ascendingly. By Lemma 6.7, there are no atomic extreme rays, and by Theorem 6.5 and Proposition 2.8, all the extreme rays are supported on two or three elements.

We start by considering the 2-term case. By Theorem 6.5, the only candidates for the extreme rays are the ones given in cases (1) and (2). Since these cases are symmetric, it suffices to consider case (1), i.e., $$f(x) = \exp (\alpha _{2} x) - \exp (\alpha _{1}- \alpha _{2})\exp (\alpha _{1} x)$$. Any conic combination of 3-term AGE functions and of functions of case (2) has a lowest-exponent term with positive coefficient, whereas the lowest-exponent term of f has a negative coefficient. Hence, f cannot be written as a conic combination of 3-term AGE functions and of functions of case (2). Thus, f is indeed extremal.

Now consider the 3-term case. By Theorem 6.5, the only candidates for extreme rays are of the form $$f(x)=c_{i-1}\exp (\alpha _{i-1}x)+c_{i}\exp (\alpha _{i}x) + c_{i+1}\exp (\alpha _{i+1}x)$$ with ci− 1 > 0, ci+ 1 > 0 and ci < 0. The proof of [17, Theorem 6.1] shows that f must have a zero in [− 1,1] and that the location $$x^{\ast}$$ of the zero is

$$x^{\ast} = \ln\left( \frac{c_{i-1} \lambda_{i+1}}{c_{i+1} \lambda_{i-1}}\right) / (\alpha_{i+1} - \alpha_{i-1}),$$

where λi− 1 and λi+ 1 are defined as in case (3) of the theorem. This gives the defining condition for ci as well as the inequality conditions in case (3).

Any decomposition of f cannot involve a 2-term AGE function. For $$x^{\ast} \in (-1,1)$$, this follows from the strict positivity of the 2-term AGE functions of types (1) and (2) on the open interval. For the boundary situations $$x^{\ast} \in \{-1,1\}$$, we can additionally use the derivative condition $$f^{\prime }(x^{\ast }) = 0$$ to exclude the 2-term AGE functions.

It remains to show that the 3-term AGE function f cannot be decomposed in terms of 3-term AGE functions. However, since f has a zero in [− 1,1] and thus in $$\mathbb {R}$$, it induces an extreme ray of the cone $$C_{\mathbb {R}}(\mathcal {A})$$ and cannot be decomposed using only 3-term AGE functions by [12, Proposition 4.4]. □

### Example 6.8

The reduced sublinear circuits for the cube [− 1,1]2. We consider again the support $$\mathcal {A} = \{(i,j) : 1 \le i, j \le k \}$$ for some $$k \in \mathbb {N}$$. In the case k = 3, there are 24 normalized reduced X-circuits, which come in the following classes:

1. (1)

12 sublinear circuits with entries 1,− 1, namely,

1. (a)

8 with entry − 1 in a corner and entry + 1 horizontally or vertically adjacent to that corner,

2. (b)

4 with entry − 1 in a non-corner boundary position and entry + 1 in the central, interior position,

2. (2)

8 sublinear circuits, where the sequence $$\frac {1}{2},-1,\frac {1}{2}$$ appears in a row (3 possibilities), in a column (3 possibilities) or on the diagonal or the antidiagonal,

3. (3)

4 sublinear circuits supported on 4 elements, namely

$$\left( \begin{array}{rrr} 0 & 1/3 & 0 \\ 1/3 & -1 & 0 \\ 0 & 0 & 1/3 \end{array} \right)$$

as well as the 90-degree, 180-degree and 270-degree rotations of this matrix.
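The circuits of classes (2) and (3) satisfy $$\mathcal{A}\lambda = \mathbf{0}$$, consistent with the remark below that the reduced $$\mathbb{R}^2$$-circuits reappear among the reduced [− 1,1]2-circuits. This can be checked mechanically; the sketch below assumes that a displayed matrix entry in row r, column c carries the weight of the support point (r,c).

```python
from fractions import Fraction

# The linear operator A for the 3x3 grid A = {(i, j) : 1 <= i, j <= 3}.
# Assumption: the matrix entry in row r, column c is the weight of point (r, c).
def A(weights):
    sx = sum(Fraction(w) * r for (r, c), w in weights.items())
    sy = sum(Fraction(w) * c for (r, c), w in weights.items())
    return (sx, sy)

# Class (2): the sequence 1/2, -1, 1/2 in the second row.
row_circ = {(2, 1): Fraction(1, 2), (2, 2): -1, (2, 3): Fraction(1, 2)}
# Class (3): the displayed circuit supported on 4 elements.
four_circ = {(1, 2): Fraction(1, 3), (2, 1): Fraction(1, 3),
             (2, 2): -1, (3, 3): Fraction(1, 3)}

assert sum(row_circ.values()) == 0 and sum(four_circ.values()) == 0  # entries sum to 0
assert A(row_circ) == (0, 0)    # an R^2-circuit: A lambda = 0
assert A(four_circ) == (0, 0)   # likewise for the 4-element circuit
```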

Note that, when starting from the set of all sublinear circuits λ for [− 1,1]2, Theorem 6.2 is applicable to rule out that λ is reduced in a number of cases. For example, the matrices

$$\begin{array}{cccccc} \left( \begin{array}{rrr} 0 & 0 & 1/2 \\ 1/2 & -1 & 0 \\ 0 & 0 & 0 \end{array} \right), & \left( \begin{array}{rrr} 0 & 0 & 1/2 \\ 1/2 & 0 & 0 \\ 0 & -1 & 0 \end{array} \right) \end{array}$$

represent sublinear circuits λ and $$\lambda ^{\prime }$$ with $$\mathcal {A} \lambda = (-1/2,0)^{T}$$ and $$\mathcal {A} \lambda ^{\prime } = (-3/2,0)^{T}$$, to which Theorem 6.2 can be applied in order to show that λ is not reduced.
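The two values of $$\mathcal{A}\lambda$$ can be reproduced with a few lines of code, again under the assumption that the matrix entry in row r, column c is the weight of the support point (r,c).

```python
from fractions import Fraction

# A lambda for weight matrices on the 3x3 grid, reading the entry in
# row r, column c as the weight of support point (r, c) (an assumption
# about the display convention).
def A(weights):
    return (sum(w * r for (r, c), w in weights.items()),
            sum(w * c for (r, c), w in weights.items()))

lam  = {(1, 3): Fraction(1, 2), (2, 1): Fraction(1, 2), (2, 2): -1}  # first matrix
lamp = {(1, 3): Fraction(1, 2), (2, 1): Fraction(1, 2), (3, 2): -1}  # second matrix

assert A(lam)  == (Fraction(-1, 2), 0)   # A lambda  = (-1/2, 0)^T
assert A(lamp) == (Fraction(-3, 2), 0)   # A lambda' = (-3/2, 0)^T
```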

Also note that all reduced $$\mathbb {R}^{2}$$-circuits for the support set $$\mathcal {A}$$ are also reduced [− 1,1]2-circuits. Namely, since all other [− 1,1]2-circuits λ satisfy $$\sigma _{X}(-\mathcal {A} \lambda ) \neq 0$$, those circuits cannot be used to decompose an $$\mathbb {R}^{2}$$-circuit (which has $$\sigma _{X}(-\mathcal {A} \lambda ) = 0$$).

In the case k = 4 with 16 support points, a computer calculation shows that there are 72 reduced sublinear circuits.

## Conclusion and Outlook

We have studied the connection between sublinear circuits and their supports, with a focus on sublinear circuits for polyhedral sets X. Since the number of X-circuits is finite for polyhedral sets, polyhedral and combinatorial techniques become applicable. In particular, the X-SAGE cones can be decomposed into a finite number of power cones, which arise from the reduced sublinear circuits.

For non-polyhedral sets X, the number of X-circuits is in general no longer finite. It remains a future task to study necessary and sufficient criteria for sublinear circuits of structured non-polyhedral sets, such as sets with symmetry; for recent work on symmetric SAGE-based optimization, see . In a different direction, Forsgård and de Wolff  have characterized the boundary of the SAGE cone through a connection between circuits and tropical geometry. It also remains for future work to establish a generalization of this result, connecting the conditional SAGE cone and sublinear circuits to tropical geometry.