Consider a planar quadrilateral whose vertices have position vectors \({{\varvec{A}}},{{\varvec{B}}},{{\varvec{C}}},{{\varvec{D}}},\) in cyclic order. Let \(K_{\varvec{ABC}}\) denote the area of the triangle \(\varvec{ABC},\) as shown in Figure 1, and use similar notation for the other triangles. The identity

$$\begin{aligned} K_{\varvec{BCD}} {{\varvec{A}}}-K_{\varvec{ACD}} {{\varvec{B}}}+K_{\varvec{ABD}} {{\varvec{C}}}-K_{\varvec{ABC}} {{\varvec{D}}}={\mathbf {0}}\end{aligned}$$
(1)

was given in [1], where it was proved as a consequence of the Jacobi vector triple product identity in \({\mathbb{R}}^3.\)

One has \(2K_{\varvec{ABC}}=\det [{{\varvec{B}}}-{{\varvec{A}}},{{\varvec{C}}}-{{\varvec{A}}}],\) and similarly for the other triangles, so (1) gives

$$\begin{aligned}&\det [{{\varvec{C}}}-{{\varvec{B}}},{{\varvec{D}}}-{{\varvec{B}}}] {{\varvec{A}}}-\det [{{\varvec{D}}}-{{\varvec{C}}},{{\varvec{A}}}-{{\varvec{C}}}] {{\varvec{B}}}\\&\quad +\det [{{\varvec{A}}}-{{\varvec{D}}},{{\varvec{B}}}-{{\varvec{D}}}] {{\varvec{C}}}-\det [{{\varvec{B}}}-{{\varvec{A}}},{{\varvec{C}}}-{{\varvec{A}}}] {{\varvec{D}}}={\mathbf {0}}. \end{aligned}$$
(2)
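Identity (2) is easy to test numerically for any particular quadrilateral. The following sketch (Python with NumPy; the sample vertices and the helper name det2 are ours, chosen only for illustration) evaluates the left-hand side of (2).

```python
import numpy as np

# A sample planar quadrilateral ABCD (any four points in the plane will do).
A, B, C, D = (np.array(p, dtype=float) for p in [(0, 0), (2, 0), (3, 2), (0, 3)])

def det2(u, v):
    # Determinant of the 2x2 matrix with columns u and v.
    return np.linalg.det(np.column_stack([u, v]))

# Left-hand side of identity (2); it should be the zero vector.
lhs = (det2(C - B, D - B) * A - det2(D - C, A - C) * B
       + det2(A - D, B - D) * C - det2(B - A, C - A) * D)

print(np.allclose(lhs, 0))  # True: (2) holds for this quadrilateral
```

Any choice of the four vertices gives the zero vector, which is the content of the \(n=2\) case of the theorem below.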

The object of this paper is to generalize this fact to arbitrary dimension n. Consider points \({{\varvec{A}}}_0,\ldots ,{{\varvec{A}}}_{n+1}\) in \({\mathbb {R}}^n\) and think of them as column vectors. For \(i=0,\ldots ,n+1,\) consider the \(n\times n\) matrix

$$\begin{aligned} M_i=\big [{{\varvec{A}}}_{i+2}-{{\varvec{A}}}_{i+1} \mid {{\varvec{A}}}_{i+3}-{{\varvec{A}}}_{i+1}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}-{{\varvec{A}}}_{i+1}\big ], \end{aligned}$$

where the indices are computed modulo \(n+2,\) and let \(\Delta _i=\det M_i.\) Here is the main result of this paper.

Figure 1. Quadrilateral \(\varvec{ABCD}\) showing the area \(K_{\varvec{ABC}}\) of triangle \(\varvec{ABC}.\)

Theorem 1.

In the above notation, one has

$$\begin{aligned} \sum _{i=0}^{n+1} (-1)^{i(n+1)} \Delta _i {{\varvec{A}}}_i={\mathbf {0}}. \end{aligned}$$
(3)
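Theorem 1 is likewise easy to test numerically. The sketch below (Python with NumPy; the dimension, the random points, and the helper name Delta are arbitrary choices of ours) builds the matrices \(M_i,\) computes \(\Delta _i=\det M_i,\) and evaluates the left-hand side of (3).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                                    # any positive integer
A = rng.integers(-5, 6, size=(n + 2, n)).astype(float)   # rows are A_0, ..., A_{n+1}

def Delta(A, i):
    # det M_i, where M_i has columns A_{i+2}-A_{i+1}, ..., A_{i+n+1}-A_{i+1} (indices mod n+2).
    n = A.shape[1]
    base = A[(i + 1) % (n + 2)]
    cols = [A[(i + j) % (n + 2)] - base for j in range(2, n + 2)]
    return np.linalg.det(np.column_stack(cols))

lhs = sum((-1) ** (i * (n + 1)) * Delta(A, i) * A[i] for i in range(n + 2))
print(np.allclose(lhs, 0))  # True: the left-hand side of (3) is the zero vector
```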

Before proving this result, let us consider some special cases. For \(n=1,\) the above theorem gives

$$\begin{aligned} ({{\varvec{A}}}_2-{{\varvec{A}}}_1) {{\varvec{A}}}_0+({{\varvec{A}}}_0-{{\varvec{A}}}_2) {{\varvec{A}}}_1+({{\varvec{A}}}_1-{{\varvec{A}}}_0) {{\varvec{A}}}_2={\mathbf {0}}, \end{aligned}$$

which is obvious. For \(n=2,\) the theorem gives us (2). For \(n=3,\) we have five points in \({\mathbb{R}}^3,\) which we may regard as the vertices of a (possibly degenerate) polyhedron, and the theorem gives

$$\begin{aligned}&\det [{{\varvec{A}}}_2-{{\varvec{A}}}_1,{{\varvec{A}}}_3-{{\varvec{A}}}_1,{{\varvec{A}}}_4-{{\varvec{A}}}_1] {{\varvec{A}}}_0\\&\quad +\det [{{\varvec{A}}}_3-{{\varvec{A}}}_2,{{\varvec{A}}}_4-{{\varvec{A}}}_2,{{\varvec{A}}}_0-{{\varvec{A}}}_2] {{\varvec{A}}}_1\\&\quad +\det [{{\varvec{A}}}_4-{{\varvec{A}}}_3,{{\varvec{A}}}_0-{{\varvec{A}}}_3,{{\varvec{A}}}_1-{{\varvec{A}}}_3] {{\varvec{A}}}_2\\&\quad +\det [{{\varvec{A}}}_0-{{\varvec{A}}}_4,{{\varvec{A}}}_1-{{\varvec{A}}}_4,{{\varvec{A}}}_2-{{\varvec{A}}}_4] {{\varvec{A}}}_3\\&\quad +\det [{{\varvec{A}}}_1-{{\varvec{A}}}_0,{{\varvec{A}}}_2-{{\varvec{A}}}_0,{{\varvec{A}}}_3-{{\varvec{A}}}_0] {{\varvec{A}}}_4={\mathbf {0}}. \end{aligned}$$

Here for each i, the coefficient of \( {{\varvec{A}}}_i\) is six times the signed volume of the tetrahedron defined by the other four vertices. For example, in Figure 2, for \({{\varvec{A}}}_0=(0, 0, -1),\, {{\varvec{A}}}_1=(1, 0, 0), \,{{\varvec{A}}}_2=(0, 1,0), \,{{\varvec{A}}}_3=(-1, -1, 0),\, {{\varvec{A}}}_4=(0, 0,1),\) we obtain

$$\begin{aligned} 3{{\varvec{A}}}_0-2 {{\varvec{A}}}_1-2{{\varvec{A}}}_2-2 {{\varvec{A}}}_3+3{{\varvec{A}}}_4={\mathbf{0}}.\end{aligned}$$
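These coefficients can be recomputed directly from the definition of \(\Delta _i\); the following sketch (Python with NumPy, using the coordinates listed above; the helper name Delta is ours) prints the coefficients and confirms the displayed relation.

```python
import numpy as np

# The vertices of the triangular bipyramid of Figure 2.
A = np.array([(0, 0, -1), (1, 0, 0), (0, 1, 0), (-1, -1, 0), (0, 0, 1)], dtype=float)

def Delta(i):
    # det[A_{i+2}-A_{i+1} | A_{i+3}-A_{i+1} | A_{i+4}-A_{i+1}], indices mod 5.
    base = A[(i + 1) % 5]
    return np.linalg.det(np.column_stack([A[(i + j) % 5] - base for j in (2, 3, 4)]))

coeffs = [round(Delta(i)) for i in range(5)]
print(coeffs)                                        # [3, -2, -2, -2, 3]
print(sum(c * A[i] for i, c in enumerate(coeffs)))   # [0. 0. 0.]
```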

In dimension n, the convex hull of \(n+1\) points is a (possibly degenerate) n-simplex. The coefficient \(\Delta _i\) in (3) is \(n!\) times the signed volume of the n-simplex defined by the points other than \({{\varvec{A}}}_i.\) In particular, \(\Delta _i\) is unchanged by translation. So translating all of the points by a nonzero vector \({{\varvec{T}}},\) (3) gives \(\sum _{i=0}^{n+1} (-1)^{i(n+1)} \Delta _i ({{\varvec{A}}}_i+{{\varvec{T}}})={\mathbf {0}}.\) Subtracting (3) and taking the coefficient of \({{\varvec{T}}}\) gives the following scalar identity.

Figure 2. Applying (3) to a triangular bipyramid.

Corollary 1.

\(\sum _{i=0}^{n+1} (-1)^{i(n+1)} \Delta _i =0\).

In other words, given \(n+2\) points in \({\mathbb {R}}^n\), consider the \(n+2\) simplices obtained by omitting each of the points in turn: when n is odd, the sum of their signed volumes is zero, and when n is even, the alternating sum of their signed volumes is zero.
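Both ingredients of this argument, the translation invariance of the \(\Delta _i\) and the vanishing of the (possibly alternating) sum in Corollary 1, can be checked numerically; here is a sketch (Python with NumPy; the dimensions, random points, and tolerances are arbitrary choices of ours).

```python
import numpy as np

rng = np.random.default_rng(1)

def Delta(A, i):
    # det M_i for the points stored in the rows of A (indices mod n+2).
    n = A.shape[1]
    base = A[(i + 1) % (n + 2)]
    return np.linalg.det(np.column_stack(
        [A[(i + j) % (n + 2)] - base for j in range(2, n + 2)]))

for n in (2, 3, 4, 5):
    A = rng.standard_normal((n + 2, n))
    T = rng.standard_normal(n)           # a translation vector
    unchanged = all(abs(Delta(A, i) - Delta(A + T, i)) < 1e-9 for i in range(n + 2))
    total = sum((-1) ** (i * (n + 1)) * Delta(A, i) for i in range(n + 2))
    print(n, unchanged, abs(total) < 1e-9)   # n True True for every n
```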

Multilinear Algebra

Our proof of the theorem is a simple argument using multilinear algebra. Let us summarize the well-known basic ideas we require. Consider real vector spaces V and W. Suppose that k is a positive integer. Recall that a function \(f:V^{k}\rightarrow W\) is said to be multilinear if it is linear in each variable with the other variables held constant; for a gentle introduction, see [4, Chapter 3]. A multilinear function \(f:V^{k}\rightarrow W\) is said to be alternating if for all elements \({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1}\in V\) and all permutations \(\sigma \) of \(\{0,1,\ldots,k-1\}\), one has

$$\begin{aligned} f({{\varvec{A}}}_{\sigma (0)},{{\varvec{A}}}_{\sigma (1)},\ldots,{{\varvec{A}}}_{\sigma (k-1)})={\text {sgn}}(\sigma ) f({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1}), \end{aligned}$$

where \({\text {sgn}}(\sigma )\) denotes the sign of \(\sigma \). For example, the determinant function \(\det : V^{n}\rightarrow {\mathbb {R}}, ({{\varvec{A}}}_0,\ldots,{{\varvec{A}}}_{n-1})\mapsto \det [{{\varvec{A}}}_0 \mid \cdots \mid {{\varvec{A}}}_{n-1}]\) is an alternating multilinear function of the column vectors; see [3, Chapter XIII].

There are several known sets of generators for the symmetric group \(S_{k}\) of permutations of \(\{0,1,\ldots,k-1\}\); a nice survey is given in [2]. In particular, \(S_{k}\) is generated by the cycle \((0,1,\ldots,k-1)\) and the transposition (0, 1); see [2, Theorem 2.5]. Thus, in order to show that a multilinear function f is alternating, it suffices to show that for all \({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1}\in V\):

(a) \(f({{\varvec{A}}}_1,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1},{{\varvec{A}}}_{0})=(-1)^{k-1}f({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1})\),

(b) \(f({{\varvec{A}}}_1,{{\varvec{A}}}_0,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1})=-f({{\varvec{A}}}_0,{{\varvec{A}}}_1,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1})\).

Note that from (b) we have

(b\('\)) \(f({{\varvec{A}}}_0,{{\varvec{A}}}_0,{{\varvec{A}}}_2,{{\varvec{A}}}_3,\ldots,{{\varvec{A}}}_{k-1})={\mathbf {0}}\).

Conversely, it is easy to see that if (b\('\)) holds for all \({{\varvec{A}}}_0,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1}\in V\), then (b) follows: expanding \(f({{\varvec{A}}}_0+{{\varvec{A}}}_1,{{\varvec{A}}}_0+{{\varvec{A}}}_1,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1})={\mathbf {0}}\) by multilinearity and applying (b\('\)) to the two repeated-entry terms leaves \(f({{\varvec{A}}}_0,{{\varvec{A}}}_1,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1})+f({{\varvec{A}}}_1,{{\varvec{A}}}_0,{{\varvec{A}}}_2,\ldots,{{\varvec{A}}}_{k-1})={\mathbf {0}}\). So in order to show that a multilinear function f is alternating, it suffices to verify conditions (a) and (b\('\)). Note that it follows that if f is alternating and if \({{\varvec{A}}}_i={{\varvec{A}}}_j\) for some \(i\ne j\), then \(f({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1})={\mathbf {0}}\); indeed, one can just permute \({{\varvec{A}}}_i,{{\varvec{A}}}_j\) to the extreme left and employ (b\('\)).
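The generating fact quoted from [2, Theorem 2.5] can also be confirmed by machine for small k. The sketch below (Python; representing permutations as tuples of images is our convention) closes the set consisting of the cycle \((0,1,\ldots,k-1)\) and the transposition (0, 1) under composition and checks that all k! permutations are obtained.

```python
from itertools import permutations
from math import factorial

def compose(p, q):
    # Composition "p after q", with permutations as tuples of images: (p*q)(i) = p(q(i)).
    return tuple(p[q[i]] for i in range(len(p)))

k = 5
cycle = tuple((i + 1) % k for i in range(k))   # the cycle (0, 1, ..., k-1)
swap = (1, 0) + tuple(range(2, k))             # the transposition (0, 1)

generated = {cycle, swap}
frontier = [cycle, swap]
while frontier:
    p = frontier.pop()
    for g in (cycle, swap):
        q = compose(g, p)
        if q not in generated:
            generated.add(q)
            frontier.append(q)

# The two generators produce all of S_k.
print(len(generated) == factorial(k) == len(list(permutations(range(k)))))  # True
```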

Finally, a key fact about alternating multilinear functions that we will use below is that if \(k>\dim V\), then every alternating multilinear function \(f:V^{k}\rightarrow W\) is identically zero. In the literature, this fact can be quickly deduced once one has constructed the exterior algebra on V, but we don't do that here, since we will not require the exterior product. Instead, one can use the following straightforward proof. First choose a basis for V. Using multilinearity, the values of f are determined by its values on k-tuples of basis elements. But if \({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1}\) are basis elements and \(k>\dim V\), then by the pigeonhole principle, at least one of the basis elements must appear twice among them. Since f is alternating, it follows that \(f({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{k-1})={\mathbf {0}}\).

Proof of the Theorem

Let n be an arbitrary positive integer, let \(V={\mathbb {R}}^n,\) and consider the function \(f:V^{n+2}\rightarrow V\) defined by

$$\begin{aligned} f({{\varvec{A}}}_0,{{\varvec{A}}}_1,\ldots,{{\varvec{A}}}_{n+1})=\sum _{i=0}^{n+1} (-1)^{i(n+1)} \Delta _i \; {{\varvec{A}}}_i. \end{aligned}$$

Because \(\det \) is multilinear, and because for each i, the variable \({{\varvec{A}}}_i\) does not occur in \(\Delta _i,\) it follows that f is multilinear. We will show that f is alternating, and hence identically zero. Condition (a) is immediate from the definition of f. So it remains to prove (b\('\)). Using the fact that the determinant is an alternating multilinear function of the column vectors and computing the indices modulo \(n+2,\) we have

$$\begin{aligned} \Delta _i&= \det \big [ {{\varvec{A}}}_{i+2}-{{\varvec{A}}}_{i+1}\mid {{\varvec{A}}}_{i+3}-{{\varvec{A}}}_{i+1} \mid \cdots \mid {{\varvec{A}}}_{i+n+1}-{{\varvec{A}}}_{i+1}\big ]\\&= \det \big [{{\varvec{A}}}_{i+2}\mid {{\varvec{A}}}_{i+3}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}\big ]\\&\quad -\sum _{j=2}^{n+1}\det \big [{{\varvec{A}}}_{i+2}\mid \cdots \mid {{\varvec{A}}}_{i+j-1} \mid {{\varvec{A}}}_{i+1} \mid {{\varvec{A}}}_{i+j+1}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}\big ]. \end{aligned}$$

Moving the column \({{\varvec{A}}}_{i+1}\) in the summation \(j-2\) positions to the far left, we have

$$\begin{aligned} \Delta _i&= \det \big [{{\varvec{A}}}_{i+2}\mid {{\varvec{A}}}_{i+3}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}\big ]\\&\quad -\sum _{j=2}^{n+1}(-1)^{j-2}\det \big [{{\varvec{A}}}_{i+1}\mid \cdots \mid {{\varvec{A}}}_{i+j-1} \mid \widehat{{{\varvec{A}}}_{i+j}} \mid {{\varvec{A}}}_{i+j+1}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}\big ]\\&=\sum _{j=1}^{n+1}(-1)^{j-1}\det \big [{{\varvec{A}}}_{i+1}\mid \cdots \mid {{\varvec{A}}}_{i+j-1} \mid \widehat{{{\varvec{A}}}_{i+j}} \mid {{\varvec{A}}}_{i+j+1}\mid \cdots \mid {{\varvec{A}}}_{i+n+1}\big ], \end{aligned}$$

where the hat symbol indicates that the corresponding column has been omitted. In particular,

$$\begin{aligned} \Delta _0&=\sum _{j=1}^{n+1}(-1)^{j-1}\det \big [ {{\varvec{A}}}_{1}\mid \cdots \mid \widehat{{{\varvec{A}}}_{j}} \mid \cdots \mid {{\varvec{A}}}_{n+1}\big ] \end{aligned}$$

and

$$\begin{aligned} \Delta _1&=\sum _{j=1}^{n+1}(-1)^{j-1}\det \big [ {{\varvec{A}}}_{2}\mid \cdots \mid \widehat{{{\varvec{A}}}_{j+1}} \mid \cdots \mid {{\varvec{A}}}_{n+1}\mid {{\varvec{A}}}_{0}\big ]. \end{aligned}$$
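The column expansion of \(\Delta _i\) obtained above can be checked numerically as well; here is a minimal sketch (Python with NumPy, for \(n=3\) and random integer points of our choosing; the two helper names are ours).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.integers(-4, 5, size=(n + 2, n)).astype(float)   # points A_0, ..., A_{n+1}

def delta_from_differences(i):
    # Delta_i as originally defined, via the matrix M_i of differences.
    base = A[(i + 1) % (n + 2)]
    return np.linalg.det(np.column_stack(
        [A[(i + j) % (n + 2)] - base for j in range(2, n + 2)]))

def delta_from_expansion(i):
    # Delta_i via the column expansion:
    # sum_{j=1}^{n+1} (-1)^(j-1) det[A_{i+1} | ... | ^A_{i+j} | ... | A_{i+n+1}].
    total = 0.0
    for j in range(1, n + 2):
        cols = [A[(i + m) % (n + 2)] for m in range(1, n + 2) if m != j]
        total += (-1) ** (j - 1) * np.linalg.det(np.column_stack(cols))
    return total

print(all(abs(delta_from_differences(i) - delta_from_expansion(i)) < 1e-9
          for i in range(n + 2)))  # True
```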

Now suppose that \({{\varvec{A}}}_1={{\varvec{A}}}_0.\) Moving the column \({{\varvec{A}}}_0\) to the far left, replacing it by \({{\varvec{A}}}_1,\) and then reindexing the sum, we have

$$\begin{aligned} \Delta _1&=\sum _{j=0}^{n}(-1)^{n+j}\det \big [ {{\varvec{A}}}_{1}\mid \cdots \mid \widehat{{{\varvec{A}}}_{j+1}} \mid \cdots \mid {{\varvec{A}}}_{n+1}\big ]\\&=\sum _{j=1}^{n+1}(-1)^{n+j-1}\det \big [ {{\varvec{A}}}_{1}\mid \cdots \mid \widehat{{{\varvec{A}}}_{j}} \mid \cdots \mid {{\varvec{A}}}_{n+1}\big ]. \end{aligned}$$

So \(\Delta _0+ (-1)^{n+1}\Delta _1 =0.\) Moreover, since \({{\varvec{A}}}_1={{\varvec{A}}}_0,\) the matrix \(M_i\) has either two equal columns or a zero column for each \(i\ge 2,\) so \(\Delta _i=0\) for all \(i\ge 2.\) Hence, from the definition of f,

$$\begin{aligned} f({{\varvec{A}}}_0,{{\varvec{A}}}_0,{{\varvec{A}}}_2,\ldots ,{{\varvec{A}}}_{n+1})=(\Delta _0+ (-1)^{n+1}\Delta _1){{\varvec{A}}}_{0}={\mathbf {0}}, \end{aligned}$$

as required.
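The cancellation just established can also be observed numerically; a short sketch (Python with NumPy; the dimension and random points are our arbitrary choices) sets \({{\varvec{A}}}_1={{\varvec{A}}}_0\) and checks that \(\Delta _0+(-1)^{n+1}\Delta _1=0\) and \(\Delta _i=0\) for \(i\ge 2.\)

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n + 2, n))
A[1] = A[0]                               # impose A_1 = A_0

def Delta(i):
    # det M_i (indices mod n+2).
    base = A[(i + 1) % (n + 2)]
    return np.linalg.det(np.column_stack(
        [A[(i + j) % (n + 2)] - base for j in range(2, n + 2)]))

print(abs(Delta(0) + (-1) ** (n + 1) * Delta(1)) < 1e-9)     # True: the two terms cancel
print(all(abs(Delta(i)) < 1e-9 for i in range(2, n + 2)))    # True: each such M_i is singular
```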