Adventures in Graph Theory, pp. 139
Introduction: Graphs—Basic Definitions
1.1 Graph theory
By a graph, we don’t mean a chart or function plot or a diagram. We mean a combinatorial structure which consists of vertices, also called points or nodes, connected by edges, also called arcs. If you wish, think of vertices as destinations and edges as routes. A more precise statement is given below.
Graphs are used in many branches of science but the only application we consider here is to cryptography—the Biggs cryptosystem studied briefly in Chapter 4.
1.1.1 Basic definitions
We begin with a precise definition of a graph.
Definition 1.1.1.
For any set S, let \(S^{(2)} =\{\{s,t\} \mid s, t\in S, s\neq t\}\), i.e., \(S^{(2)}\) is the set of unordered pairs of distinct elements of S. An undirected simple graph \(\Gamma = (V, E)\) is an ordered pair of sets, where V is a set of vertices (possibly with weights attached) and \(E \subseteq V^{(2)}\) is a set of edges (possibly with weights attached)^{1}. We refer to \(V=V_\Gamma \) as the vertex set of \(\Gamma \), and \(E=E_\Gamma \) as the edge set.
A subgraph of \(\Gamma =(V, E)\) is a graph \(\Gamma '=(V', E')\) formed from subsets \(V'\subset V\) and \(E'\subset E\) such that \(V'\) includes the endpoints of each \(e\in E'\).
Graphs can be used to model electrical networks and other situations involving flows. Various matching problems can be stated in terms of graphs. If you think of vertices as gamblers and edges as bets between them, then the weight of an edge could be the cost of a bet between those gamblers, and the weight of a vertex could represent the numbers of chips that gambler has to play with. Edge weights in a graph could also represent costs or flow capacities.
There are situations in which we may wish to generalize the concept of a graph to allow more than one edge between a pair of vertices. Such graphs are called multigraphs.
Definition 1.1.2.
An undirected multigraph \(\Gamma \) is an ordered pair of sets (V, E), where V is a set of vertices and E is a set of edges, together with a map \(E \rightarrow V^{(2)}\), assigning to each edge an unordered pair of distinct vertices which are said to be incident to that edge.
Remark 1.1.3.
A loop is an edge of the form (v, v), for some \(v\in V\). Unless stated otherwise, none of the graphs in this book will have loops. A graph with no multiple edges or loops is called a simple graph.
Unless stated otherwise, we will assume that both the vertex set V and the edge set E are finite sets.
Let \(\Gamma \) be a graph. The cardinality \(|V|\) of the vertex set V is called the order of \(\Gamma \), and the cardinality \(|E|\) of the edge set E is called the size of \(\Gamma \).
Definition 1.1.4.
Definition 1.1.5.
Example 1.1.6.
The graph with n vertices and an edge connecting each pair of vertices is called the complete graph on n vertices and denoted \(K_n\). The graph \(K_4\) is also called the tetrahedron graph and is shown in Figure 1.1. It has genus 3.
Example 1.1.7.
The graph obtained from \(K_4\) by removing one edge is called the diamond graph and is shown in Figure 1.2. It has genus 2.
Example 1.1.8.
A graph with n vertices \(v_1, v_2, \dots , v_n\) and edges \((v_1, v_2)\), \((v_2, v_3)\), \(\dots \), \((v_{n-1}, v_n)\), \((v_n, v_1)\) is called a cycle graph and denoted \(C_n\). All cycle graphs have genus 1. The cycle graph \(C_5\) is shown in Figure 1.3.
Example 1.1.9.
Suppose we start with a cycle graph on \(n-1\) vertices and add a vertex q and an edge from q to each vertex of the cycle graph. The resulting graph is called a wheel graph and denoted \(W_n\). The vertices from the cycle graph are known as the spoke vertices and the corresponding edges from the spoke vertices to q are the spokes. The genus of \(W_n\) is \(n-1\). The wheel graph \(W_6\) is shown in Figure 1.4.
Example 1.1.10.
A graph consisting of two vertices and \(n>1\) edges connecting the vertices is called a banana graph or dipole graph of order n. It has genus \(n-1\). The case \(n=3\) is also called a theta graph. A banana graph with 4 edges is shown in Figure 1.5.
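Since this book's genus is the circuit rank \(|E|-|V|+c\) (see footnote 2), the genus values quoted in the examples above can be checked mechanically. A minimal Python sketch; the edge lists are illustrative encodings of the named graphs, not the book's figures:

```python
def components(n, edges):
    """Count connected components of a (multi)graph on vertices 0..n-1."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

def genus(n, edges):
    """Circuit rank |E| - |V| + c, the book's 'genus'."""
    return len(edges) - n + components(n, edges)

K4      = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]
diamond = [(0,1),(0,2),(0,3),(1,2),(1,3)]        # K4 minus the edge (2,3)
C5      = [(0,1),(1,2),(2,3),(3,4),(4,0)]
W6      = C5 + [(5,0),(5,1),(5,2),(5,3),(5,4)]   # hub vertex 5
banana4 = [(0,1)] * 4                            # 4 parallel edges

print(genus(4, K4), genus(4, diamond), genus(5, C5),
      genus(6, W6), genus(2, banana4))  # 3 2 1 5 3
```

The output reproduces the genus values 3, 2, 1, \(n-1=5\), and \(n-1=3\) claimed in Examples 1.1.6 through 1.1.10.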
Lemma 1.1.11.
Let \(\Gamma =(V, E)\) denote a connected graph and let T be a spanning tree of \(\Gamma \). For each edge \(e\in E\) not in T, there is a unique cycle cyc(T, e) of \(\Gamma \) containing only e and edges in T.
This lemma defines cyc(T, e), a so-called fundamental cycle. For a proof, see Biggs, Lemma 5.1 in [Bi93].
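Computationally, cyc(T, e) is just the unique path in T between the endpoints of e, together with e itself. A small Python sketch (the star spanning tree of \(K_4\) used here is an illustrative choice):

```python
def cyc(tree_edges, e):
    """Edge set of the fundamental cycle of the non-tree edge e = (u, v):
    the unique u-v path in the tree plus e itself."""
    adj = {}
    for a, b in tree_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    u, v = e
    stack = [(u, [u])]           # DFS for the unique u-v path in the tree
    while stack:
        w, path = stack.pop()
        if w == v:
            break
        stack.extend((x, path + [x]) for x in adj[w] if x not in path)
    cycle = {frozenset(p) for p in zip(path, path[1:])}
    cycle.add(frozenset(e))
    return cycle

# Star spanning tree of K4 rooted at 0; edge (1,2) closes a triangle.
T = [(0,1),(0,2),(0,3)]
print(sorted(sorted(c) for c in cyc(T, (1,2))))  # [[0, 1], [0, 2], [1, 2]]
```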
The girth of \(\Gamma \) is the length of a shortest cycle (if it exists) contained in \(\Gamma \). If no cycles exist in \(\Gamma \) the girth is, by convention, defined to be \(\infty \).
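The girth of a simple graph can be computed by a breadth-first search from every vertex, taking the minimum over all non-tree edges encountered; this standard sketch returns \(\infty \) for a forest, matching the convention above:

```python
from collections import deque

def girth(n, edges):
    """Girth of a simple graph by BFS from every vertex (inf if acyclic)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = float('inf')
    for s in range(n):
        dist, parent = {s: 0}, {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    q.append(w)
                elif w != parent[u]:
                    # a non-tree edge closes a cycle through s
                    best = min(best, dist[u] + dist[w] + 1)
        if best == 3:        # girth of a simple graph cannot be smaller
            break
    return best

K4 = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]
C5 = [(0,1),(1,2),(2,3),(3,4),(4,0)]
print(girth(4, K4), girth(5, C5), girth(4, [(0,1),(1,2),(2,3)]))  # 3 5 inf
```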
Example 1.1.12.
For example, the girth of the Rubik’s cube graph in the quarter turn metric \(\Gamma _{QT}\) (see the Example in the Preface) is 4, while the radius is 26.
A bijection \(p:V\rightarrow V\) of the vertices which induces a bijection of the edges \(P:E\rightarrow E\) is called an automorphism of \(\Gamma =(V, E)\). The group of automorphisms of \(\Gamma \) is denoted as \(Aut(\Gamma )\).
If \(G\subset Aut(\Gamma )\) is a subgroup, then we say G acts transitively on \(\Gamma \) if for any two \(v, w\in V\), there is a \(g\in G\) such that \(w=g(v)\).
We say \(\Gamma = (V, E)\) is a distance-transitive graph provided, given any \(v, w\in V\) at any distance i, and any other two vertices x and y at the same distance, there is an automorphism of \(\Gamma \) sending v to x and w to y.
We say \(\Gamma \) is a distance-regular graph, provided it is a regular graph such that, for any \(v, w\in V\), the number of vertices at distance j from v and at distance k from w, \(|\{x\in V \mid dist(v, x)=j,\ dist(w, x)=k\}|\), depends only upon j, k, and \(i = dist(v, w)\).
Recall a connected graph is one for which, given any distinct vertices u, v, there is a walk in the graph connecting u to v. A connected component of \(\Gamma \) is a subgraph which is connected and not the proper subgraph of another connected subgraph of \(\Gamma \). The number of connected components of the graph \(\Gamma \) is denoted \(c(\Gamma )\).
Definition 1.1.13.
When the sensitivity of f is small relative to the number of variables, it means that, for every vertex \(v\in GF(2)^n\), there are relatively few coordinates whose flip changes the value of f at v.
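For concreteness, here is a brute-force Python sketch of the sensitivity of a Boolean function \(f:GF(2)^n\rightarrow GF(2)\); the example functions (OR, majority, and parity on 3 bits) are illustrative choices, not taken from the text:

```python
from itertools import product

def sensitivity(f, n):
    """max over inputs x of #{i : f(x) != f(x with bit i flipped)}."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i+1:])
                    for i in range(n))
        best = max(best, flips)
    return best

or3  = lambda x: int(any(x))          # OR of 3 bits
maj3 = lambda x: int(sum(x) >= 2)     # majority of 3 bits
par3 = lambda x: sum(x) % 2           # parity of 3 bits

print(sensitivity(or3, 3), sensitivity(maj3, 3), sensitivity(par3, 3))  # 3 2 3
```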
Example 1.1.14.
One of the major outstanding problems about Boolean functions is the sensitivity conjecture, which states that the minimum degree of a real polynomial that interpolates a Boolean function f is bounded above by some fixed power of the sensitivity of f.
Exercise 1.1.
Compute the sensitivity of the graph in Figure 1.7.
A walk with no repeated vertices (except possibly the beginning and ending vertices, if they are the same) is called a path. A walk with no repeated edges is called a trail. Clearly a path is a trail, but the converse is not true in general. If there are \(n=|V|\) vertices, then no path can have length more than n. A path of length n is called a Hamiltonian path and a graph which has a Hamiltonian path is called traceable. A path whose start and end vertices are the same is called a simple cycle. An Eulerian cycle is a trail whose start and end vertices are the same which visits every edge exactly once. A cycle of length n is called a Hamiltonian cycle. A graph which has an Eulerian cycle is called an Eulerian graph and a graph which has a Hamiltonian cycle is called a Hamiltonian graph.
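By Euler's classical criterion (a fact not stated in the text here, but standard), a graph with edges is Eulerian exactly when it is connected, up to isolated vertices, and every vertex has even degree. This gives a quick computational test; a minimal sketch:

```python
def is_eulerian(n, edges):
    """Euler's criterion: connected (ignoring isolated vertices) and all
    vertex degrees even."""
    deg = [0] * n
    adj = [[] for _ in range(n)]
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
        adj[u].append(v)
        adj[v].append(u)
    start = next((v for v in range(n) if deg[v] > 0), None)
    if start is None:
        return True                     # no edges at all
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    if any(deg[v] > 0 and v not in seen for v in range(n)):
        return False                    # not connected
    return all(d % 2 == 0 for d in deg)

K4 = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]   # all degrees 3: not Eulerian
C5 = [(0,1),(1,2),(2,3),(3,4),(4,0)]         # all degrees 2: Eulerian
print(is_eulerian(4, K4), is_eulerian(5, C5))  # False True
```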
Lemma 1.1.15.
Proof.
We may, without loss of generality, identify the vertex sets \(V_2\) and \(V_1\) with \(\{0,1,\dots , n-1\}\). Suppose \(f : \Gamma _2 \rightarrow \Gamma _1\) is an isomorphism, so \(f_V:V_2\rightarrow V_1\) may be regarded as a permutation.
If a matrix \(P\in GL(n,\mathbb {Z})\) satisfies \(P^{-1}MP=(m_{\rho (i),\rho (j)})\) for every \(n\times n\) matrix \(M=(m_{ij})\), for some permutation \(\rho :\mathbb {Z}/n\mathbb {Z}\rightarrow \mathbb {Z}/n\mathbb {Z}\), then P is a permutation matrix. Every permutation matrix arises in this way.
Therefore, \(A_{\Gamma _2} = P^{-1}A_{\Gamma _1}P\), for some P depending on \(f_V\).
The converse is left to the reader. \(\Box \)
Example 1.1.16.
Definition 1.1.17.
The incidence matrix of an oriented graph \(\Gamma \), with orientation \(\iota \), is an \(n\times m\) matrix \(B=B_{\Gamma ,\iota } = (b_{ij})\), where m and n are the numbers of edges and vertices, respectively, such that \(b_{ij} = 1\) if the vertex \(v_i\) is the head of edge \(e_j\), \(b_{ij} = -1\) if the vertex \(v_i\) is the tail of edge \(e_j\), and \(b_{ij}=0\) otherwise. The incidence matrix of an undirected graph (or a graph without a prescribed orientation) \(\Gamma \) is an \(n\times m\) matrix \(B=B_\Gamma = (b_{ij})\) such that \(b_{ij} = 1\) if the edge \(e_j\) is incident to vertex \(v_i\), and \(b_{ij}=0\) otherwise.
When there is possible ambiguity, we call the matrix in the former case a signed incidence matrix and the latter an unsigned incidence matrix.
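A small Python sketch of the signed incidence matrix of Definition 1.1.17, together with the well-known check that \(BB^T\) equals the Laplacian \(D-A\) regardless of the orientation chosen; the oriented graph below (a triangle with a pendant edge) is an illustrative choice:

```python
def signed_incidence(n, oriented_edges):
    """B[i][j] = +1 if v_i is the head of e_j, -1 if the tail, else 0."""
    B = [[0] * len(oriented_edges) for _ in range(n)]
    for j, (tail, head) in enumerate(oriented_edges):
        B[tail][j] = -1
        B[head][j] = 1
    return B

def laplacian(n, edges):
    """The Laplacian D - A of the underlying undirected graph."""
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    return L

edges = [(0,1),(1,2),(2,0),(2,3)]   # an orientation of a triangle + pendant edge
B = signed_incidence(4, edges)
m = len(edges)
BBt = [[sum(B[i][k] * B[j][k] for k in range(m)) for j in range(4)]
       for i in range(4)]
print(BBt == laplacian(4, edges))  # True
```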
Exercise 1.2.
For the graph \(\Gamma \) in Figure 1.6, solve the following problems.
 (a)
Find the adjacency matrix A of \(\Gamma \).
 (b)
Fix an ordering of the edges and find the corresponding (unsigned) incidence matrix B of \(\Gamma \).
 (c)
Is \(\Gamma \) Hamiltonian? If so, find a Hamiltonian cycle.
 (d)
Is \(\Gamma \) Eulerian? If so, find an Eulerian trail.
 (e)
Find the girth of \(\Gamma \).
 (f)
Find the diameter of \(\Gamma \).
Unfortunately, the literature is not consistent in the definition of some of the technical terms in graph theory. For example, the rank of a graph \(\Gamma \) is sometimes defined to be the rank of its adjacency matrix \(A=A_\Gamma \) and sometimes defined to be the rank of its signed incidence matrix \(B=B_{\Gamma ,\iota }\) (as we’ll see below, this rank does not depend on the orientation \(\iota \) chosen). The lemma below concerns the latter definition.
We prove the following lemma for the oriented incidence matrix (see Godsil and Royle [GR01] for more details). In particular, the rank is independent of the orientation chosen.
Lemma 1.1.18.
Let \(\Gamma \) be a graph and B its incidence matrix. The rank of B is \(n-c(\Gamma )\), where n is the number of vertices of \(\Gamma \) and \(c(\Gamma )\) is its number of connected components.
Proof.
Suppose the matrix B is \(n\times m\) and \(z\in \mathbb {R}^n\) is in the left kernel of B: i.e., \(zB=0\). For each edge \((u, v)\in E\), \(zB=0\) implies \(z_u=z_v\). Therefore, z is constant on connected components. This implies that the left kernel of B has dimension equal to the number of connected components of the graph. The result now follows from the rank plus nullity theorem from matrix theory. \(\Box \)
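Lemma 1.1.18 is easy to confirm numerically. Assuming NumPy is available, the following sketch checks a graph with two components (two disjoint triangles, so \(n-c=6-2=4\)):

```python
import numpy as np

def signed_incidence(n, oriented_edges):
    """Signed (oriented) incidence matrix: +1 at the head, -1 at the tail."""
    B = np.zeros((n, len(oriented_edges)), dtype=int)
    for j, (tail, head) in enumerate(oriented_edges):
        B[tail, j] = -1
        B[head, j] = 1
    return B

# two disjoint triangles: n = 6 vertices, c = 2 components
two_triangles = [(0,1),(1,2),(2,0),(3,4),(4,5),(5,3)]
B = signed_incidence(6, two_triangles)
print(np.linalg.matrix_rank(B))  # 4, i.e., n - c = 6 - 2
```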
Definition 1.1.19.
Lemma 1.1.20.
(a) \({\mathcal F}=\{f_i\}\subset C^0(\Gamma , F)\) is a basis. (b) \({\mathcal G}=\{g_j\}\subset C^1(\Gamma , F)\) is a basis.
Proof.
The proof is left as an exercise. \(\Box \)
Example 1.1.21.
Exercise 1.3.
Verify that the left kernel of a signed incidence matrix of an oriented connected simple graph is the space of constant functions.
As the example below shows, this space does depend on the orientation selected.
Example 1.1.22.
Example 1.1.23.
Lemma 1.1.24.
The cycle space of an oriented connected graph \(\Gamma =(V, E)\) is the vector space spanned by the vector representations of the cycles of \(\Gamma \).
Proof.
Let C be a cycle in \(\Gamma \), let \(\iota \) denote the given orientation, and let B denote the matrix representation of the oriented incidence map with respect to the standard basis. For each vertex v of C, there are exactly two edges of C incident to v. Regarding C as a subgraph of \(\Gamma \), let \(\vec {C}=\vec {C}(\iota )\) denote the associated vector representation as in (1.5). The ith entry of \(B\vec {C}\) is the dot product of \(\vec {C}\) with the ith row of B. The only nonzero entries of the ith row of B are those entries associated to the edges incident to the ith vertex of \(\Gamma \). Either exactly two of those incident edges are in C or none are. In the latter case, it is clear that the ith entry of \(B\vec {C}\) is 0. In the former case, there are two possibilities: either the two incident edges in C are oriented with the same sign or they are not. If they are oriented with the same sign, then either both of these edges go into the ith vertex of \(\Gamma \) or both go out of it; the corresponding entries of the ith row of B then agree in sign, while the corresponding entries of \(\vec {C}\) have opposite signs, so the two products cancel and the ith entry of \(B\vec {C}\) is 0. Finally, suppose the two incident edges in C are oriented with opposite signs, so that one goes into the ith vertex and the other goes out. Then the corresponding entries of the ith row of B have opposite signs, while the corresponding entries of \(\vec {C}\) agree in sign, and again the ith entry of \(B\vec {C}\) is 0.
Thus, the vector space spanned by the vector representations of the cycles is contained in the cycle space. \(\Box \)
Definition 1.1.25.
Lemma 1.1.26.
Let \(\Gamma =(V, E)\) denote a connected graph and let T be a spanning tree of \(\Gamma \). For each edge \(h\in E\) which is also an edge in T, there is a unique cocycle coc(T, h) of \(\Gamma \) containing only h and edges not in T.
This lemma defines coc(T, h).
Proof.
The proof is left as an exercise. \(\Box \)
1.1.2 Simple examples
Example 1.1.27.
Exercise 1.4.
Pick an orientation of the tetrahedron graph \(\Gamma \) depicted in Figure 1.10, for example the default orientation. Find a basis for the cycle space and a basis for the cocycle space of \(\Gamma \).
Example 1.1.28.
Example 1.1.29.
Example 1.1.30.
by adding edge 2 to the tree, you get a cycle 0, 1, 2 with vector representation $$ g_1 = (1,1,1,0,0,0,0,0,0,0,0), $$
by adding edge 6 to the tree, you get a cycle 4, 5, 6 with vector representation $$ g_2 = (0,0,0,0,1,1,1,0,0,0,0), $$
by adding edge 10 to the tree, you get a cycle 8, 9, 10 with vector representation $$ g_3 = (0,0,0,0,0,0,0,0,1,1,1). $$
The vectors \(\{g_1,g_2,g_3\}\) form a basis of the cycle space of \(\Gamma \).
Example 1.1.31.
by removing edge 3 from the graph, you get a cocycle with vector representation $$ b_1 = (0,0,0,1,0,0,0,0,0,0,0), $$
by removing edge 7 from the graph, you get a cocycle with vector representation $$ b_2 = (0,0,0,0,0,0,0,1,0,0,0), $$
by removing edges 0, 1 from the graph, you get a cocycle with vector representation $$ b_3 = (1,1,0,0,0,0,0,0,0,0,0), $$
by removing edges 1, 2 from the graph, you get a cocycle with vector representation $$ b_4 = (0,1,1,0,0,0,0,0,0,0,0), $$
by removing edges 4, 5 from the graph, you get a cocycle with vector representation $$ b_5 = (0,0,0,0,1,1,0,0,0,0,0), $$
by removing edges 4, 6 from the graph, you get a cocycle with vector representation $$ b_6 = (0,0,0,0,1,0,1,0,0,0,0), $$
by removing edges 8, 9 from the graph, you get a cocycle with vector representation $$ b_7 = (0,0,0,0,0,0,0,0,1,1,0), $$
by removing edges 9, 10 from the graph, you get a cocycle with vector representation $$ b_8 = (0,0,0,0,0,0,0,0,0,1,1). $$
The vectors \(\{b_1,b_2,b_3,b_4,b_5,b_6,b_7,b_8\}\) form a basis of the cocycle space of \(\Gamma \).
Note that these vectors are not orthogonal to the basis vectors of the cycle space in Example 1.1.30 unless we work over GF(2).
Exercise 1.5.
Let \(V=\{0,1,2,\dots , 10\}\), and, for all \(u, v\in V\), let \((u, v)\in E\) if and only if \(u<v\) and \(v-u\) is a square \(\pmod {11}\). Is the undirected graph \(\Gamma =(V, E)\) connected? What is its order? What is its size?
Exercise 1.6.
Find the cycle space of the graph in Figure 1.6.
Exercise 1.7.
Find the cocycle space of the graph in Figure 1.6.
Definition 1.1.32.
A graph \(\Gamma \) is said to be planar if it has an embedding in the plane (roughly speaking, if it can be drawn in the plane without edge crossings). A dual graph \(\Gamma ^*\) of a planar graph \(\Gamma \) is a graph whose vertices correspond to the faces (or planar regions) of such an embedding (including the infinite region). An edge of \(\Gamma ^*\) joins two vertices of \(\Gamma ^*\) if the corresponding regions of the embedding of \(\Gamma \) share a common edge.
Exercise 1.8.
Show that the n-cycle graph \(C_n\) and the banana graph with n edges are dual graphs.
Good sources for further reading on basic graph theory are Biggs [Bi93], Bollobás [Bo98], Godsil and Royle [GR01], and Marcus [Mar08].
1.2 Some polynomial invariants for graphs
Polynomial invariants for graphs include the (vertex) chromatic polynomial, the (factorial, respectively, geometric) cover polynomial (for directed graphs), and one introduced above: the characteristic polynomial of the adjacency matrix.
Some others are introduced below.
1.2.1 The Tutte polynomial of a graph
Let \(\Gamma =(V, E)\) be a graph. For any subset S of E, let \(\Gamma _S=(V, S)\) be the spanning subgraph of \(\Gamma \) containing all vertices of \(\Gamma \) and the edges S.
Definition 1.2.1.
For example, if \(\Gamma \) is the complete graph on 4 vertices \(K_4\) and if \(\Gamma _S\) is the subgraph consisting of a 3-cycle and a disjoint 4th vertex, then \(c(\Gamma _S)=2\) and \(r(\Gamma _S)=2\). If \(\Gamma _S\) is the subgraph with 4 vertices and no edges, then \(c(\Gamma _S)=4\) and \(r(\Gamma _S)=0\).
Definition 1.2.2.
Definition 1.2.3.
For example, recall from Exercise 1.8 that the n-cycle graph \(C_n\) and the banana graph with n edges are dual graphs. In Exercises 1.10 and 1.11, we will calculate the Tutte polynomials of a cycle graph and a banana graph and show that they satisfy Equation (1.7).
Example 1.2.4.
The empty set has \(|S|=0\) and \(r(S)=0\).
There are 6 subsets with \(|S|=1\) and \(r(S)=1\).
There are 15 subsets with \(|S|=2\) and \(r(S)=2\) (12 two-paths and 3 sets of two disconnected edges).
There are 4 subsets with \(|S|=3\) and \(r(S)=2\) (the 4 three-cycles).
There are 16 subsets with \(|S|=3\) and \(r(S)=3\) (12 three-paths and the 4 complements of the three-cycles).
There are 15 subsets with \(|S|=4\) and \(r(S)=3\) (3 four-cycles and the 12 complements of the two-paths).
There are 6 subsets with \(|S|=5\) and \(r(S)=3\).
If S consists of all edges in \(K_4\), \(|S|=6\) and \(r(S)=3\).
Example 1.2.5.
We can see that calculating the Tutte polynomial by considering all subsets of E would take a prohibitively long time for even moderately sized graphs. Fortunately, there are other methods. We will describe a method that uses only the spanning trees of \(\Gamma \). This method uses a fixed ordering on E. It is not evident, a priori, that the polynomial generated by this method is independent of the ordering chosen, but it can be shown that it is equal to the Tutte polynomial. First, we give some definitions.
Definition 1.2.6.
Let \(\Gamma =(V, E)\) be a connected graph with a fixed ordering on E. Let T be a subset of E which forms a spanning tree of \(\Gamma \). Let e be an edge in T. If we remove e from the tree T, the remaining graph has two components, with vertex sets \(V_1\) and \(V_2\), and with \(V(\Gamma )= V_1 \cup V_2\). (It is possible that a component consists of a single vertex.) The cut of \(\Gamma \) determined by T and e consists of all edges of \(\Gamma \) from a vertex in \(V_1\) to a vertex in \(V_2\). The edge e is in the cut determined by T and e. We say that e is an internally active edge of T if it is the smallest edge in the cut, with respect to the given ordering on the edges of \(\Gamma \). The number i of edges of T which are internally active in T is called the internal activity of T .
Definition 1.2.7.
Let \(\Gamma =(V, E)\) be a connected graph with a fixed ordering on E. Let T be a subset of E which forms a spanning tree of \(\Gamma \). Let e be an edge which is not in T. If we add the edge e to T, we obtain a graph with exactly one cycle. We say that e is an externally active edge for T if it is the smallest edge in the cycle it determines, with respect to the given ordering on the edges of \(\Gamma \). The number j of edges of \(E \setminus T\) which are externally active for T is called the external activity of T .
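Definitions 1.2.6 and 1.2.7 can be turned into a short computation. The sketch below uses the edge labeling \(e_1=(1,2)\), \(e_2=(1,3)\), \(e_3=(1,4)\), \(e_4=(2,3)\), \(e_5=(2,4)\), \(e_6=(3,4)\) of \(K_4\); this labeling is an assumption, recovered from the four 3-cycles \(\{e_1,e_2,e_4\}\), \(\{e_1,e_3,e_5\}\), \(\{e_2,e_3,e_6\}\), \(\{e_4,e_5,e_6\}\) of Example 1.2.10, since the book's figure is not reproduced here:

```python
E = [(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]   # e1..e6, in the fixed order

def reach(edge_idxs, start):
    """Vertices reachable from start using only the given edges."""
    seen, changed = {start}, True
    while changed:
        changed = False
        for i in edge_idxs:
            u, v = E[i]
            if u in seen and v not in seen:
                seen.add(v); changed = True
            elif v in seen and u not in seen:
                seen.add(u); changed = True
    return seen

def internal_activity(T):
    """Number of internally active edges of the spanning tree T
    (edges given as 0-based indices into E)."""
    count = 0
    for e in T:
        # removing e splits T; the cut is every edge crossing the split
        side = reach([f for f in T if f != e], E[e][0])
        cut = [f for f in range(len(E))
               if (E[f][0] in side) != (E[f][1] in side)]
        count += (min(cut) == e)     # active iff smallest edge in its cut
    return count

def external_activity(T):
    """Number of externally active edges for T."""
    count = 0
    for e in range(len(E)):
        if e in T:
            continue
        u, v = E[e]
        # the unique cycle: e plus the tree edges separating u from v
        cycle = [e] + [f for f in T
                       if v not in reach([g for g in T if g != f], u)]
        count += (min(cycle) == e)   # active iff smallest edge in its cycle
    return count

# T = {e2, e3, e4} from Example 1.2.10 (0-based indices 1, 2, 3):
print(internal_activity([1, 2, 3]), external_activity([1, 2, 3]))  # 1 1
```

For the spanning tree \(T=\{e_2,e_3,e_4\}\) this reproduces \(i=1\) and \(j=1\), as worked out by hand in Example 1.2.10.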
The following theorem describes the Tutte polynomial in terms of spanning trees. The proof can be found in Bollobás [Bo98].
Theorem 1.2.8.
Corollary 1.2.9.
For a connected graph \(\Gamma \), \(T_\Gamma (1,1)\) is the number of spanning trees of \(\Gamma \).
Example 1.2.10.
A spanning tree must have exactly 3 edges. There are 20 ways to pick 3 edges from 6, but 4 of these, \(\{e_1, e_2, e_4\}\), \(\{e_1, e_3, e_5\}\), \(\{ e_2, e_3, e_6\}\), and \(\{e_4, e_5, e_6\}\) give cycles, not spanning trees. This leaves 16 spanning trees.
As an example, consider the spanning tree \(T=\{e_2, e_3, e_4\}\). First, we examine the edges in T to see which are internally active. Edge \(e_2\) is not internally active, since \(e_1\) is an edge in the cut determined by T and \(e_2\). Edge \(e_3\) is internally active, since \(e_3\) is the smallest edge in the cut determined by T and \(e_3\) (\(e_1\) is not in this cut). Edge \(e_4\) is not internally active, since \(e_1\) is the smallest edge in the cut determined by T and \(e_4\). Next, we examine the edges in \(E \setminus T\) to see which are externally active. The edge \(e_1\) must be externally active, since it is the smallest edge in any cycle of which it is a member. The edges \(e_5\) and \(e_6\) cannot be externally active, since each cycle in \(K_4\) has at least 3 edges, and so must contain an edge smaller than \(e_5\) or \(e_6\).
We list the 16 spanning trees below, with the number i of internally active edges, and the number j of externally active edges for each.
The tree \(\{ e_1, e_2, e_3\}\) has \(i=3\) and \(j=0\), so \(t_{30}=1\).
The trees \(\{e_1, e_2, e_5\}\), \(\{e_1, e_2, e_6\}\), and \(\{e_1, e_3, e_4\}\) have \(i=2\) and \(j=0\), so \(t_{20}=3\).
The trees \(\{e_1, e_4, e_5\}\) and \(\{e_1, e_4, e_6\}\) have \(i=1\) and \(j=0\), so \(t_{10}=2\).
The trees \(\{e_1, e_3, e_6\}\), \(\{e_1, e_5, e_6\}\), \(\{e_2, e_3, e_4\}\), and \(\{e_2, e_3, e_5\}\) have \(i=1\) and \(j=1\), so \(t_{11}=4\).
The trees \(\{e_2, e_4, e_5\}\) and \(\{e_2, e_4, e_6\}\) have \(i=0\) and \(j=1\), so \(t_{01}=2\).
The trees \(\{e_2, e_5, e_6\}\), \(\{e_3, e_4, e_5\}\), and \(\{e_3, e_4, e_6\}\) have \(i=0\) and \(j=2\), so \(t_{02}=3\).
The tree \(\{e_3, e_5, e_6\}\) has \(i=0\) and \(j=3\), so \(t_{03}=1\).
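These coefficients can be double-checked against the subset expansion \(T_\Gamma (x,y)=\sum _{S\subseteq E}(x-1)^{r(E)-r(S)}(y-1)^{|S|-r(S)}\), the corank-nullity form of the Tutte polynomial (a standard identity, stated here without proof). A brute-force numerical sketch:

```python
from itertools import combinations

def n_components(n, edges):
    """Connected components via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)})

def tutte(n, edges, x, y):
    """Evaluate T(x, y) by the corank-nullity expansion over all subsets
    S of E, with r(S) = n - c(S).  (Python's 0**0 == 1, as needed.)"""
    rE = n - n_components(n, edges)
    total = 0
    for k in range(len(edges) + 1):
        for S in combinations(edges, k):
            rS = n - n_components(n, S)
            total += (x - 1) ** (rE - rS) * (y - 1) ** (k - rS)
    return total

K4 = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]
print(tutte(4, K4, 1, 1), tutte(4, K4, 2, 2), tutte(4, K4, 1, 0))  # 16 64 6
```

Here \(T(1,1)=16\) matches the spanning-tree count above, \(T(2,2)=2^6=64\) counts all spanning subgraphs, and \(T(1,0)=6\) is the evaluation discussed below in connection with critical configurations.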
Another approach to calculating the Tutte polynomial is to use reduction formulas under deletion and contraction of edges (see, e.g., Bollobás [Bo98]).
Several other evaluations of the Tutte polynomial are known (see D. Welsh [We99], for example).
Lemma 1.2.11.
The Tutte polynomial has the following additional properties.
 (a)
\(T_{\Gamma }(1,2)\) is the number of connected spanning subgraphs of \(\Gamma \),
 (b)
\(T_{\Gamma }(2,1)\) is the number of spanning forests of \(\Gamma \),
 (c)
\(T_{\Gamma }(2,2)\) is the number of spanning subgraphs of \(\Gamma \),
 (d)
\((-1)^{r(\Gamma )} k^{c(\Gamma )} T_{\Gamma }(1-k, 0)\) is the number of proper k-vertex colorings of \(\Gamma \),
 (e)
\(T_{\Gamma }(2,0)\) is the number of acyclic orientations of \(\Gamma \), and
 (f)
\(T_{\Gamma }(0,-2)\) is, up to sign, the number of Eulerian orientations of \(\Gamma \).
From Merino [Me04], it is known in general that \(T_\Gamma (1,0)\) represents the number of critical configurations of level 0 (discussed further in Chapter 4). Using Sage, it is easy to compute this quantity.
Example 1.2.12.
Exercise 1.9.
Show that the Tutte polynomial of a tree with m edges is \(x^m\).
Exercise 1.10.
Exercise 1.11.
Exercise 1.12.
Show that the Tutte polynomial of the diamond graph (\(K_4\) with one edge removed) is \(x^3+2 x^2+2 x y+x+y^2+y\).
Exercise 1.13.
1.2.2 The Ihara zeta function
Recall that the circuit rank \(\chi (\Gamma )\) of an undirected graph \(\Gamma \) is the minimum number of edges that must be removed from \(\Gamma \) to make it a forest.
Lemma 1.2.13.
This is well known and the proof is omitted.
It is known that for regular graphs the Ihara zeta function is a rational function.
Lemma 1.2.14.
Example 1.2.15.
Note these last two outputs agree.
Indeed, there is this generalization (see Terras [Te10]).
Proposition 1.2.16.
Exercise 1.14.
1.2.3 The Duursma zeta function
In a fascinating series of papers starting in the 1990s, Iwan Duursma introduced a new class of zeta functions. They were introduced for linear codes, but they are implicitly defined for other objects as well.
The Duursma zeta function of a code
Before we define the Duursma zeta function of a graph, we introduce the Duursma zeta function of a linear block code, following Duursma [Duu01], [Duu03a], [Duu03b], and [Duu04]. For more on codes, see Biggs [Bi08].
Let C be an \([n,k, d]_q\) code, i.e., a linear code over GF(q) of length n, dimension k (as a vector space over GF(q)), and minimum distance \(d=d_C\). The Singleton bound states that \(k+d\le n+1\) (see for example [HP10] for a proof). If equality holds in this bound, then we call C a maximum distance separable or MDS code.
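As a sanity check of the Singleton bound, the following sketch computes the minimum distance of the \([7,4]\) binary Hamming code by brute force over its 16 codewords; the systematic generator matrix is an illustrative choice, not an example from the text:

```python
from itertools import product

# systematic generator matrix of the [7,4] binary Hamming code
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]

def codewords(G):
    """All q^k codewords mG over GF(2)."""
    n, k = len(G[0]), len(G)
    for m in product((0, 1), repeat=k):
        yield tuple(sum(m[i] * G[i][j] for i in range(k)) % 2
                    for j in range(n))

# minimum distance = minimum weight of a nonzero codeword (linear code)
d = min(sum(c) for c in codewords(G) if any(c))
print(d, 4 + d <= 7 + 1)  # 3 True
```

Here \(k+d=4+3=7\le n+1=8\), so the bound holds strictly and the Hamming code is not MDS.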
Definition 1.2.17.
The Duursma zeta function is defined in terms of the zeta polynomial by means of (1.8) above.
Lemma 1.2.18.
Proof.
Proposition 1.2.19.
The Duursma zeta polynomial \(P=P_C\) exists and is unique, provided \(d^\perp \ge 2\).
For the proof, see Proposition 96 in Joyner and Kim [JK11].
The Duursma zeta function of a graph
This section will explore some of the properties of the analogous zeta function for graphs and give examples using the software package Sage.
Let \(\Gamma =(V, E)\) be a graph. One way to define the zeta function of \(\Gamma \) is as the Duursma zeta function of the binary code defined by the cycle space of \(\Gamma \) over GF(2). Indeed, the matroid attached to a graph is typically the vector matroid attached to the (oriented) incidence matrix of the graph (see Duursma [Duu04]).
Proposition 1.2.20.
Let \(F=GF(2)\). The cycle code of \(\Gamma =(V, E)\) is a linear binary block code of length \(|E|\), dimension \(g(\Gamma )\) (namely, the genus of \(\Gamma \)), and minimum distance equal to the girth of \(\Gamma \). If \(C\subset GF(2)^{E}\) is the cycle code associated to \(\Gamma \) and \(C^*\) is the cocycle code associated to \(\Gamma \), then \(C^*\) is the dual code of C. In particular, the cocycle code of \(\Gamma \) is a linear binary block code of length \(|E|\), and dimension \(r(\Gamma )=|E|-g(\Gamma )\).
This follows from Hakimi and Bredeson [HB68] (see also Jungnickel and Vanstone [JV10]) in the binary case^{8}.
Proof.
Let d denote the minimum distance of the code C. Let g denote the girth of \(\Gamma \), i.e., the smallest cardinality of a cycle in \(\Gamma \). If K is a cycle in \(\Gamma \) of length g, then the vector \(\mathrm{vec}(K)\in GF(2)^{E}\) is an element of the cycle code \(C\subset GF(2)^{E}\) of weight g. This implies \(d\le g\).
In the other direction, suppose \(K_1\) and \(K_2\) are cycles in \(\Gamma \) with associated support vectors \(v_1=\mathrm{vec}(K_1)\), \(v_2=\mathrm{vec}(K_2)\). Assume that at least one of these cycles is a cycle of minimum length, say \(K_1\), so the weight of its corresponding support vector is equal to the girth g. The only way that \(\mathop {wt}(v_1+v_2)<\min \{\mathop {wt}(v_1),\mathop {wt}(v_2)\}\) can occur is if \(K_1\) and \(K_2\) have some edges in common. In this case, the vector \(v_1+v_2\) represents a subgraph which is either a cycle or a union of disjoint cycles. In either case, by minimality of \(K_1\), these new cycles must be at least as long. Therefore, \(d\ge g\), as desired. \(\Box \)
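The proposition can be checked directly for \(K_4\): its cycle code is spanned by the three fundamental cycles of a spanning tree, giving a \([6,3]\) binary code whose minimum weight equals the girth 3. A sketch (the star spanning tree and the edge ordering are illustrative choices):

```python
from itertools import product

# edges of K4 in a fixed order:
edges = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3)]
# fundamental cycles of the star spanning tree {(0,1),(0,2),(0,3)},
# as indicator vectors over GF(2) in the edge order above:
cycles = [
    (1,1,0,1,0,0),   # triangle (0,1),(0,2),(1,2)
    (1,0,1,0,1,0),   # triangle (0,1),(0,3),(1,3)
    (0,1,1,0,0,1),   # triangle (0,2),(0,3),(2,3)
]

# span of the three cycles over GF(2): all 2^3 linear combinations
code = {tuple(sum(a * c[j] for a, c in zip(coeffs, cycles)) % 2
              for j in range(6))
        for coeffs in product((0, 1), repeat=3)}

d = min(sum(w) for w in code if any(w))
print(len(code), d)  # 8 3  -> a [6, 3] code with minimum distance = girth
```

The 8 distinct codewords confirm dimension \(g(K_4)=3\), and the minimum weight 3 equals the girth of \(K_4\).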
Definition 1.2.21.
Exercise 1.15.
Example 1.2.22.
Remark 1.2.23.
It is well known that the cycle space of a planar graph \(\Gamma \) is the cocycle space of its dual graph, \(\Gamma ^\perp \), and vice versa.
The following question asks when an analog of the Riemann hypothesis holds.
Open Question 1.
For which planar graphs \(\Gamma \) satisfying \(\Gamma ^\perp \cong \Gamma \) are all the roots of the Duursma zeta polynomial on the circle \(|t|=2^{-1/2}\)?
1.2.4 Graph theory and mathematical blackjack
In this section, we illustrate a connection between graph theory, (settheoretic) combinatorics, and a combinatorial game called mathematical blackjack.
An m-(sub)set is a (sub)set with m elements. For integers \(k<m<n\), a Steiner system S(k, m, n) is an n-set X and a set S of m-subsets having the property that any k-subset of X is contained in exactly one m-set in S.
Example 1.2.24.
Lemma 1.2.25.
S is a Steiner system of type (5, 6, 12).
If \(X=\{1,2,\dots , 12\}\), a Steiner system S(5, 6, 12) is a set of 6-sets, called hexads, with the property that any set of 5 elements of X is contained in (“can be completed to”) exactly one hexad.
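The defining property of a Steiner system is easy to verify by machine on a smaller example. The sketch below checks the Fano plane, the classical S(2, 3, 7), with its standard seven lines (a small illustration; it is not the S(5, 6, 12) of the text):

```python
from itertools import combinations

# the seven lines of the Fano plane on the points 1..7
fano = [{1,2,3},{1,4,5},{1,6,7},{2,4,6},{2,5,7},{3,4,7},{3,5,6}]

# Steiner property S(2, 3, 7): every 2-subset lies in exactly one line
ok = all(sum(pair <= block for block in fano) == 1
         for pair in (set(p) for p in combinations(range(1, 8), 2)))
print(ok)  # True
```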
A decomposition of a graph \(\Gamma \) is a set of subgraphs \(H_1,\dots , H_k\) that partition the edges of \(\Gamma \). If all the \(H_i\) are isomorphic to a given graph H, then we say the decomposition is an H-decomposition. In Chapter 6, we shall run into graph decompositions in our discussion of the Cayley graph of a p-ary function.
From the perspective of Steiner systems, if H is any connected subgraph of \(K_n\) (the complete graph on n vertices), an H-decomposition of \(K_n\) gives rise to an S(k, m, n), where m is the number of vertices of H and k is the smallest number such that any k vertices of H must contain at least 2 neighbors.
Question 1.1.
Is the Steiner system S(5, 6, 12) described above associated to a graph decomposition of \(K_{12}\)?
The MINIMOG description
This section is devoted to Conway’s [Co84] construction of S(5, 6, 12) using a \(3\times 4\) array called the MINIMOG.

the first row has label 1,

the second row has label \(+\), and

the third row has label −.
Each line in \(GF(3)^2\) with finite slope occurs once in the \(3\times 3\) part of some tet. The odd man out for a column is the label of the row corresponding to the nonzero digit in that column; if the column has no nonzero digit, then the odd man out is a “?”. Thus the tetracode words associated in this way to these patterns are the odd men out for the tets.
Lemma 1.2.26.
(Conway [CS99], Chapter 11, p. 321) If we ignore signs, then from these signed hexads we get the 132 hexads of a Steiner system S(5, 6, 12). These are all possible 6-sets in the shuffle labeling for which the odd men out form a part (in the sense that an odd man out “?” is ignored or regarded as a “wildcard”) of a tetracode word and the column distribution is not 0, 1, 2, 3 in any order^{10}.
Example 1.2.27.
The first pattern gives the tetracode \(0\ 0\ ?\ ?\), the signed hexad \(\{1,2,3,4,5,6\}\), and the hexad \(\{1,2,3,4,5,6\}\); the second gives the tetracode \(0\ +\ +\ +\), the signed hexad \(\{1,2,3,6,7,10\}\), and the hexad \(\{1,2,3,6,7,10\}\). (The MINIMOG patterns themselves are not reproduced here.)
Furthermore, it is known (see Conway [Co84]) that the Steiner system S(5, 6, 12) in the shuffle labeling has the following properties.

There are 11 hexads with total 21 and none with lower total.

The complement of any of these 11 hexads in \(\{0,1,\dots , 11\}\) is another hexad.

There are 11 hexads with total 45 and none with higher total.
Mathematical blackjack is a 2person combinatorial game whose rules will be described below. What is remarkable about it is that a winning strategy, discovered by Conway and Ryba (see Conway and Sloane [CS86] and Kahane and Ryba [KR01]), depends on knowing how to determine hexads in the Steiner system S(5, 6, 12).
Mathematical blackjack is played with 12 cards, labeled \(0,\dots , 11\) (for example, \(\text { king}, \text { ace}, 2, 3, \dots , 10, \text { jack}\), where the king is 0 and the jack is 11). Divide the 12 cards into two piles of 6 (to be fair, this should be done randomly). Each of the 6 cards of one of these piles is to be placed face up on the table. The remaining cards are in a stack which is shared and visible to both players. If the sum of the cards face up on the table is less than or equal to 21, then no legal move is possible^{11} so you must shuffle the cards and deal a new game.

Players alternate moves.

A move consists of exchanging a card on the table with a lower card from the other pile.

The player whose move makes the sum of the cards on the table under 21 loses.
The winning strategy (given below) for this game is due to Conway and Ryba (see [CS86] and [KR01]). There is a Steiner system S(5, 6, 12) of hexads in the set \(\{0,1,\dots , 11\}\). This Steiner system is associated to the MINIMOG in the shuffle numbering.
Proposition 1.2.28.
(Ryba) For this Steiner system, the winning strategy is to choose a move which is a hexad from this system.
This result is proven in Kahane and Ryba [KR01].
Lemma 1.2.29.
The probability that the first player has a win in mathematical blackjack (with a random initial deal) is 6/7.
Example 1.2.30.
We play a game.
Initial deal: 0, 2, 4, 6, 7, 11. The total is 30. The pattern for this deal (not reproduced here) has \(\bullet \) standing for a ±. No combination of ± choices yields odd men out forming a tetracode word, so this deal is not a hexad.
First player replaces 7 by 5: 0, 2, 4, 5, 6, 11. The total is now 28. (Note this is a square in the picture at 1.) This corresponds to the col+tet pattern with tetracode odd men out \(-\ +\ 0\ -\).
Second player replaces 11 by 7: 0, 2, 4, 5, 6, 7. The total is now 24. Interestingly, this 6-set corresponds to a pattern with tetracode odd men out \(0\ +\ +\ ?\), for example. However, it has column distribution 3, 1, 2, 0, so it cannot be a hexad.
First player replaces 6 by 3: 0, 2, 3, 4, 5, 7. (Note this is a cross in the picture at 0.) This corresponds to the tet-tet pattern with tetracode odd men out \(-\ -\ +\ 0\). The cards total 21. First player wins.
In addition to the references mentioned above, see also Curtis [Cu76] and [Cu84] for further reading related to mathematical blackjack.
Footnotes
 1.
When there is no ambiguity, we will abuse notation and write \((u, v)\in V^{(2)}\) or \(uv \in V^{(2)}\) if \(\{u, v\}\in V^{(2)}\), when the meaning is clear.
 2.
This is not the usual definition of genus: the minimal integer n such that the graph can be embedded in a surface of genus n.
 3.
In the case of an unweighted multigraph \(\Gamma \) without loops, we associate to that graph an edgeweighted graph \(\Gamma '\) whose edge weights are given by the corresponding edge multiplicities of \(\Gamma \).
 4.
Also called the flow space or the space of circulation functions.
 5.
In the sense of set theory. So, a bond is a cocycle that does not contain any other cocycle as a proper subset.
 6.
This formulation in a graphtheoretic setting is actually due to Sunada [Su86], rather than Ihara.
 7.
In general, if C is an [n, k, d]-code, then we use \([n,k^\perp , d^\perp ]\) for the parameters of the dual code, \(C^\perp \). It is a consequence of the Singleton bound that \(n+2-d-d^\perp \ge 0\), with equality when C is an MDS code.
 8.
It is likely true in the nonbinary case as well, but no proof seems to be in the literature.
 9.
See Theorem 1 in Dankelmann, Key, and Rodrigues [DKR13].
 10.
That is to say, the following cannot occur: some column has 0 entries, some column has exactly 1 entry, some column has exactly 2 entries, and some column has exactly 3 entries.
 11.
Conway [Co76] calls such a game \(0=\{\}\); in this game, the first player automatically loses and so a good player will courteously offer you the first move!