1 Preliminaries

Let \(G=\{g_{1}, g_{2}, \ldots , g_{n}\}\) be a group. Also, let \(\beta (i, j)=\beta (j, i)\in \{0, 1\}\) \((i,j\in \{1, 2, \ldots , n\})\) be a Boolean function obeying \(\beta (i, j)\equiv 1\) if \(g_{j}=g_{i}^{-1}\). Using the multiplicative operation ‘\(\cdot \)’ of G, define a Brandt groupoid B, with a zero 0 and a binary operation ‘\(*\)’, as follows:

$$\begin{aligned} g_{i}* g_{j}=\left\{ \begin{array}{ll} g_{i}\cdot g_{j}=g_{i}g_{j}~&{} \quad \text {if}~\beta (i, j)=1;\\ 0, &{} \quad \text {otherwise}. \end{array} \right. \end{aligned}$$
(1)

The groupoid B can be represented by a weighted symmetric digraph D given by the adjacency matrix \(A(D)=[\beta (i,j)g_{j}]_{i,j=1}^{n}\), wherein a nonzero entry \(\beta (i,j)g_{j}\in B\) is the weight of the arc emanating from a vertex i and entering a vertex j, while a zero (i, j)-th entry means that there is no such arc in D. By construction, every component of the digraph D is strongly connected; that is, for any two of its vertices i and j, there exists at least one consistently oriented path from i to j and from j to i.

For purely combinatorial tasks, it may be enough to consider the unweighted symmetric digraph U given by the adjacency matrix \(A(U)=[\beta (i,j)]_{i,j=1}^{n}\). The same symmetric matrix A(U) is also the adjacency matrix of an undirected graph H.

Since Brandt groupoids are applicable for describing local symmetries of crystals [1,2,3,4,5], we have to impose additional restrictions on the function \(\beta (i,j)\) as well as on the definition of the Brandt groupoid. First, recall that the subset S of (suitable) local symmetry operations is not necessarily a subgroup (and in practice usually is not). Also, we can choose suitable elements for S from a union \(\cup _{l=1}^{k} G_{l}\) of \(k\ge 2\) groups rather than from only one group, as above. For a chosen set S, we additionally impose on \(\beta (i,j)\) the condition \(\beta (i,j)=1\Rightarrow g_{i}g_{j}\in S\subset \cup _{l=1}^{k} G_{l}\) \((g_{i}, g_{j}\in \cup _{l=1}^{k} G_{l})\). So, the graph H corresponding to the subset S has \(|S|\le |\cup _{l=1}^{k} G_{l}|\) vertices.

A Brandt groupoid can be determined for the symmetric digraph U of an arbitrary undirected graph H. To this end, one has to properly choose suitable group elements to serve as the weights of all arcs of U, so as to produce the weighted symmetric digraph D. Here, “properly” means that any two walks (paths) going from an arbitrary vertex i to another vertex j, in U, have the same weight. The weight of an (oriented) walk is equal to the product of the weights of all arcs that it traverses (taking into account the number of times that each arc is traversed). Hence, it follows that any closed walk (cycle) in D has the group identity as its weight.

The Hadamard product \(A\circ Q\) of matrices \(A=[a_{ij}]_{i, j=1}^{n}\) and \(Q=[q_{ij}]_{i, j=1}^{n}\) is the matrix \(R=[a_{ij}q_{ij}]_{i, j=1}^{n}\) \((R=A\circ Q)\). Let H be an arbitrary simple graph (unweighted, undirected, connected, without loops and multiple edges). Also, let \({\bar{H}}\) be a graph obtained by attaching a loop of weight 1 to each vertex of H. So, its adjacency matrix is \(A({\bar{H}})=A(H)+I\), where I is the identity matrix (with all 1’s on the main diagonal and 0’s elsewhere).

Now determine the subset S of group elements in matrix representation. We define it as the set \(S=\{P_{1}, P_{2}, \ldots , P_{p}\}\) of all \(n\times n\) permutation matrices for which \(P\circ A({\bar{H}})=P\) \((P\in S)\). The number of permutation matrices in S is \(p=|S|=\mathrm {per}[A({\bar{H}})]\), where \(\mathrm {per}[A({\bar{H}})]\) is the permanent [6] of the matrix \(A({\bar{H}})\). Further, construct a derivative digraph \(\varGamma \), whose vertex set is the set S of permutation matrices, and where there is an arc uv from a vertex u to a vertex v iff (if and only if) \(P_{u}P_{v}\in S\). This determines the Boolean function \(\beta (u, v)\) and, thus, the respective Brandt groupoid B.
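
To make this construction concrete, here is a minimal plain-Python sketch (it also runs under SageMath); the helper names closed_neighborhood_perms and permanent are ours, and the sample graph is the 4-cycle \(C_{4}\) rather than an example from the text. The sketch enumerates the permutation matrices satisfying \(P\circ A({\bar{H}})=P\), checks that their number equals \(\mathrm {per}[A({\bar{H}})]\), and assembles the arc set of the derivative digraph \(\varGamma \).

\begin{verbatim}
from itertools import permutations

def closed_neighborhood_perms(adj):
    """All permutations sigma with sigma(i) equal to i or to a neighbour of i,
    i.e. permutation matrices P satisfying P o (A + I) = P entrywise."""
    n = len(adj)
    allowed = [{i} | {j for j in range(n) if adj[i][j]} for i in range(n)]
    return [sigma for sigma in permutations(range(n))
            if all(sigma[i] in allowed[i] for i in range(n))]

def permanent(m):
    """Brute-force permanent of a small 0/1 matrix."""
    n = len(m)
    return sum(all(m[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# Sample graph: the 4-cycle C4 on vertices 0..3; A_bar = A(C4) + I.
A = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
A_bar = [[A[i][j] or (i == j) for j in range(4)] for i in range(4)]

S = closed_neighborhood_perms(A)
assert len(S) == permanent(A_bar)      # p = |S| = per[A_bar]; equals 9 for C4

# Derivative digraph Gamma: vertices are the permutations of S; an arc u -> v
# exists iff the product P_u P_v (composition i -> v(u(i))) is again in S.
S_set = set(S)
arcs = [(u, v) for u in range(len(S)) for v in range(len(S))
        if tuple(S[v][S[u][i]] for i in range(4)) in S_set]
print(len(S), "permutations;", len(arcs), "arcs in Gamma")
\end{verbatim}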

Impose certain rules for the discrete motion on the graph H. Let each vertex of H be occupied by exactly one spinless particle (all particles being equivalent). Initially, let consecutive acts of particle displacement happen at regular intervals; during each act, a particle can either halt at the vertex it occupies or move to an adjacent vertex, and two or more particles cannot simultaneously move to one vertex. That is, a step of motion is a permutation of particles; this permutation is represented by a matrix P for which \(P\circ A({\bar{H}})=P\) holds. For this reason, we can drop our initial requirement that all displacement acts happen simultaneously, but we still require that particles participating in a common cyclic permutation move synchronously. In the terminology of this text, our discrete motion model is described by a Brandt groupoid B. The groupoid B is determined on the permutation group \(G=\langle S\rangle \), generated by the above set S of permutations, by equating all products \(P_{u}P_{v}\not \in S\) to the zero element of B. Accordingly, \(\beta (u,v)=1\) if \(P_{u}P_{v}\in S\), and \(\beta (u,v)=0\) if \(P_{u}P_{v}\not \in S\).

Here, an essential physical feature is that we consider precisely the distance-limited motion process in which no particle may move farther than a unit distance from its initial position. A unit distance is the distance to the nearest (adjacent) neighbors of a particle’s home vertex; in graph theory [7] such a distance is taken to be equal to 1. So, every particle is always “kept on a short leash”. The corpuscular motion of valence electrons in a molecule with highly localized \(\sigma \)-bonds may probably be described in a similar way.

A generalization of the shortest-distance motion is a distance-limited motion with the upper bound on distance. In order to discuss this, we need to introduce some additional terminology.

The square \(H^{2}\) of a graph H [7] has \(V(H^{2}) = V(H)\), with vertices u and v adjacent in \(H^{2}\) whenever the distance \(d(u, v) \le 2\) in H. The powers \(H^{3}, H^{4}, \ldots , H^{s}\) \((s\in \{1, 2, \ldots , n-1\})\) of H are defined similarly (with \(d(u, v) \le s\) in H). For example, \(C_{5}^{2} = K_{5}\), while \(P_{4}^{2} = K_{4} - x\) [7]. Similarly to the above, let \({\bar{H}}^{s}\) be a graph obtained by attaching a loop of weight 1 to each vertex of \(H^{s}\). So, the adjacency matrix \(A({\bar{H}}^{s})=A(H^{s})+I\). Denote by \(T_{s}\) \((s\in \{1, 2, \ldots , n-1\})\) the set of all permutation matrices P such that \(P\circ A({\bar{H}}^{s})=P\), where \({\bar{H}}^{1}={\bar{H}}\). Further, in order to determine a Brandt groupoid for the group \(G_{s}=\langle \cup _{j=1}^{s} T_{j}\rangle \) \((s\in \{1, 2, \ldots , n-1\})\), generated by all permutation matrices of the union \(\cup _{j=1}^{s}T_{j}\), it is sufficient to determine a certain Boolean function \(\beta (u,v)\) on the Cartesian square \(G_{s}\times G_{s}\) of the group \(G_{s}\).
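
As an illustration of the sets \(T_{s}\), the following plain-Python sketch (our ad hoc helpers, assuming a connected graph) computes all-pairs distances by breadth-first search and then lists, for the path \(P_{4}\) mentioned above, how many permutations move every vertex by at most a distance s; for \(s=1\) this count equals \(\mathrm {per}[A({\bar{H}})]=5\), and for \(s=n-1=3\) it is all of \(4!=24\).

\begin{verbatim}
from itertools import permutations
from collections import deque

def all_pairs_distances(adj):
    """BFS distances between all pairs of vertices of a connected graph."""
    n = len(adj)
    dist = [[None] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and dist[s][v] is None:
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return dist

def T(adj, s):
    """Permutations moving every vertex by at most distance s, i.e. the
    permutation matrices P with P o A(H_bar^s) = P."""
    dist = all_pairs_distances(adj)
    n = len(adj)
    return [sig for sig in permutations(range(n))
            if all(dist[i][sig[i]] <= s for i in range(n))]

# The path P4: 0 - 1 - 2 - 3.
P4 = [[0, 1, 0, 0],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [0, 0, 1, 0]]
print([len(T(P4, s)) for s in (1, 2, 3)])     # expected: [5, 14, 24]
\end{verbatim}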

Let E denote the set of all ordered pairs (u, v) \((u,v\in \{1, 2, \ldots , |\cup _{j=1}^{s}T_{j}|\}; P_{u}, P_{v}\in \cup _{j=1}^{s}T_{j})\) of indices with the additional proviso that at least one of \(P_{u}\in T_{1}\) or \(P_{v}\in T_{1}\) holds, while the other permutation matrix, if any, may belong to the full union \(\cup _{j=1}^{s}T_{j}\). Note that, in our illustrative example, greater indices \(\big (|{\cup _{j=1}^{s}}T_{j}|+1, |{\cup _{j=1}^{s}}T_{j}|+2, \ldots , |G_{s}|\big )\) of permutation matrices, which belong to the difference \(G_{s}\setminus \cup _{j=1}^{s} T_{j}\), remain beyond the scope of E. For this reason, we assume that \(\beta (u,v)\equiv 0\) if \((u, v)\not \in E\). Provided that both \((u, v), (v, u)\in E\) \(\big (\forall u, v\in \{1, 2, \ldots , |\cup _{j=1}^{s}T_{j}|\}\big )\), the set E represents the set of all arcs (edges) of the unweighted symmetric digraph U above.

Lastly, determine the Boolean function \(\beta (u,v)\). Namely, \(\beta (u,v)= 1\) if (\(P_{u}\in T_{1}\) & \(P_{v}\in T_{j}\), or \(P_{u}\in T_{j}\) & \(P_{v}\in T_{1}\)) and \(P_{u}P_{v}\in T_{j+k}\) \((k\in \{0, \pm 1\}; j, j+k\in \{1, 2, \ldots , s\})\); otherwise, \(\beta (u,v)= 0\). Thus, having determined both the group \(G_{s}\) and the Boolean function on \(G_{s}\times G_{s}\), we also determine the respective Brandt groupoid B constructed in this last example. Hence, we arrive at a mathematical description of a distance-limited motion. This process on an n-vertex graph does not allow n collisionless particles to move farther than a distance s \((s\le n-1)\) from their original vertices. Here, as a physical example, it is appropriate to recall (tiny) local eddy currents (also called Foucault currents), which are loops (cycles) of electrical current induced within conductors by a changing magnetic field in the conductor, due to Faraday’s law of induction.

Beyond the discrete motion context, a similar bridge may also be extended toward crystallographic applications of the same groupoid description (for uses like those in [3,4,5]). Note that the local symmetry operations utilized in crystallography may permute not only the nearest neighboring atoms. Such an operation can be an (imaginary) permutation of two identical molecular particles in a common unit cell or of two identical adjacent layers in a crystal, which may also involve permuting nonadjacent atoms.

In a broader context, one may consider a semigroup with a zero. Such a semigroup can be included in an associative ring with divisors of zero. A semigroup with a zero also describes some automaton in the theory of automata. A Brandt groupoid may also be viewed as an instance of such a semigroup. As to semigroups in general, see the bibliography in [8].

We now move on to the next part, which contains a practical example of applying the Brandt groupoid concept and a broader informal discussion of the topic.

2 A practical part

A case of distance-limited motion of particles occurring inside a single molecule is, if we may put it this way, a mutual rearrangement of ligands in a coordination compound. Here, the small molecular scale makes it easier to trace the logic of our actions, which remains the same in the general case. We are also limited by the size of Table 1 (below) that can fit on a page; this forces us to restrict ourselves here to permutations of neighboring ligands only, although the problem can be solved in the general case, taking more distant permutations into account.

Table 1 Multiplication table of elements of Brandt groupoid \(B=\{p_{1}, p_{2}, \ldots , p_{19}, 0\}\)

A simple example is provided by coordination complexes of metals with an octahedral arrangement of ligands, such as \(\mathrm {[Cu(NH_{3})_{6}]^{3+}}\) and \(\mathrm {[Co(NH_{3})_{6}]^{3+}}\) (see Fig. 1). We will not be interested in whether the complex has perfect symmetry; it is only essential that the ligands are located at the vertices of (some) octahedron, the edges of which exactly correspond to pairs of adjacent (distance-1) ligands. In other words, we mean the skeletal graph of an octahedron, the vertices of which have the indicated meaning for us. In our thought experiment, we can always arrange matters so that all the ligands in the octahedral complex are distinct. These can be isotopic isomers of ammonia or various simple amines. It is important to see that such a model is realistic and does not depend on the specific chemical nature of either the metal or the ligands. For variety, and at the same time for simplicity, we will assume that one of the ligands cannot be permuted with its neighbors. Let us assume that it is rigidly and covalently bound to some massive immobile radical, relative to which this ligand is also immobile.

Thus, although we started by discussing the octahedral complex, in practice we will have to deal with (the skeletal graph of) a quadrangular pyramid (see Fig. 2).

We will allow the ligands to move only to the positions previously occupied by their nearest neighbors, but exclude permutations involving cycles of lengths 3, 4, and 5 (which are possible in the complex, but here “do not fit into the size of the table”). As a result, using Fig. 2, we can find all suitable permutations that, together with the zero element and the binary operation \((*)\), will make up our Brandt groupoid B:

Fig. 1
figure 1

The octahedral complex \(\mathrm {[Co(NH_3)_6]^{3+}}\) of hexaamminecobalt(III) chloride

Fig. 2
figure 2

The graph of the quadrangular pyramid. (Built using SageMath.)

(2)

Here, \(p_{1}\) is the identity permutation that does not permute any ligands; \(p_{2}, p_{3}, \ldots , p_{9}\) are permutations that each transpose only one pair of adjacent ligands; while \(p_{10}, p_{11}, \ldots , p_{19}\) each transpose two disjoint pairs of adjacent ligands at once. Note that all these permutations are involutions: \(p_{j}^{2}=p_{1}\) \(\big (j\in \{1, 2, \ldots , 19\}\big )\), i.e., performing such a permutation twice in succession is equivalent to the identity transformation.

When using the group multiplication \((\cdot )\), permutations from B can generate the complete symmetric group \(S_{\!5}\) of \(5! = 120\) elements (e.g., the product \(p_{10}p_{14}\), which results in a cyclic permutation of five ligands, together with any transposition from B does this). However, when using the \((*)\) operation of the groupoid B, the same nineteen permutations generate only themselves and zero. Thus, if all five ligands in the quadrangular coordination pyramid are distinct, the action of all permutations from B on the set of these ligands yields at most 19 distinct substitutional isomers. (Recall again that \(p_{1}\) is also considered isomer-generating.) If stereochemistry and energetics allow, then all such isomers can be formed. But if considerably bulky ligands are present or their rearrangement requires significant energy expenditure, then not all isomers can be obtained. To simplify the task, we will henceforth follow pure combinatorics and assume that in our case all 19 isomers are formed.

Table 1 below is the multiplication table of elements of our groupoid B, where the zero element occurs every time \(p_{i}p_{j}\not \in \{p_{1}, p_{2}, \ldots , p_{19}\}=B\setminus \{0\}\).

Table 1 is symmetric about its main diagonal, each of whose entries is equal to \(p_{1}\) (which denotes the identity permutation). If we replace all of its nonzero internal entries with ones, this will give us the adjacency matrix A of the undirected graph H with loops (see below). The vertices of H correspond to nonzero elements (permutations) of the groupoid B, each loop denotes the multiplication of the corresponding element by the identity element, and a walk w of length l along the edges (and/or loops) of the graph H corresponds to a nonzero product of the respective l permutations in the groupoid B. A sequence of vertices of H in which at least one pair of consecutive vertices is not connected by an edge always corresponds to the zero element 0 of the groupoid B. The product of two walks \(w_{1}\) and \(w_{2}\) on H corresponds to a nonzero element of the groupoid B iff the end vertex of the former coincides with the initial vertex of the latter. Thus, our groupoid B can be interpreted as the Brandt groupoid of all walks of the graph H (here this groupoid is also a semigroup with zero). Note that graphs were first used to study the isomerization of coordination compounds in 1966 in the pioneering work of Balaban, Farcasiu, and Banica [9] and then in [10,11,12,13,14,15,16].
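
For readers who wish to reproduce the 0/1 skeleton of Table 1, here is a plain-Python sketch under our own labelling assumptions (base cycle 1-2-3-4, apex 5, and an arbitrary ordering of the permutations, whereas Table 1 fixes a particular one): it regenerates the 19 permutations of \(B\setminus \{0\}\) and rebuilds the corresponding adjacency matrix.

\begin{verbatim}
from itertools import combinations

verts = [1, 2, 3, 4, 5]          # base cycle 1-2-3-4, apex 5 (our labelling)
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 5), (2, 5), (3, 5), (4, 5)]

def transposition(edge):
    a, b = edge
    return {v: (b if v == a else a if v == b else v) for v in verts}

def support(p):
    return {v for v in verts if p[v] != v}

identity = {v: v for v in verts}
singles = [transposition(e) for e in edges]      # the p_2, ..., p_9 (up to order)
doubles = [{v: t1[t2[v]] for v in verts}         # the p_10, ..., p_19 (up to order)
           for t1, t2 in combinations(singles, 2)
           if support(t1).isdisjoint(support(t2))]
B = [identity] + singles + doubles
print(len(B))                                    # 19 nonzero elements

def key(p):
    return tuple(p[v] for v in verts)

B_keys = {key(p) for p in B}
# An entry of the table is nonzero iff composing the two permutations stays in B \ {0}.
A = [[1 if key({v: q[p[v]] for v in verts}) in B_keys else 0 for q in B] for p in B]
print(all(A[i][j] == A[j][i] for i in range(19) for j in range(19)))  # True (symmetric)
print(sum(A[0]))                                 # 19: the identity row is all ones
\end{verbatim}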

The graph H is given by the following adjacency matrix A(H) obtained from Table 1.

$$\begin{aligned} A(H)= & {} \left[ \begin{array}{ccccccccccccccccccc} 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 1\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 1 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1\\ 1 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 
&{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0\\ 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 1 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 0 &{} \quad 1 \end{array}\right] .\nonumber \\ \end{aligned}$$
(3)

The graph H of the groupoid B constructed from the matrix A(H) is shown in Fig. 3 (built using SageMath). In our case, H is a subgraph of the full graph of the symmetric group \(S_{\!5}\). We call a group graph full here if it uses all the elements of a group both as the vertex set and as a generating set of this group; accordingly, the identity element is represented by a loop attached to each vertex, while other elements are represented by arcs (or undirected edges, one instead of each pair of opposite arcs connecting two adjacent vertices). The classical Cayley graph \(\mathrm {Cay}(G)\) of the group G [17] has no loops and uses only a certain (minimal) generating set of nonidentity elements of the group, which ensures its connectivity.

The authors of [9,10,11,12,13,14,15,16] used graphs, in particular, to study the pathways of mutual conversion of isomers. For this purpose, we already have the full graph H of the groupoid B in Fig. 3. It is easy to see that this graph, as is, allows obtaining any of the 19 isomers by a one-step isomerization of the isomer selected as the original one (the latter formally obtained from itself using the identity permutation \(p_{1}\)). When a certain physicochemical process can be reduced to just one stage, this can be more beneficial in practice. But, frankly, it is much more interesting for a researcher to study more complex multistage processes hidden from direct observation, in which the sequence of individual stages is not yet clear. In this regard, it should be noted that the graph H also admits isomerization sequences that include an arbitrary number of identity isomerization steps, which in no way can clarify the general situation with nonidentity isomerizations.

In our subsequent reasoning, we want to exclude a one-step identity transformation of isomers at any intermediate stage, with the possibility of returning to the original isomer only after several stages. Such a return corresponds to a closed walk on the graph H. In view of the above, we will use from H only its “essential” subgraph Q \(\big (Q\subset H\big )\), which is H less vertex 1 (corresponding to the identity permutation \(p_{1}\)) and less all loops. See Fig. 4. Here, recall that both H and Q are related to the groupoid B, which imposes certain restrictions on sequences of mutual transformations of isomers. In [9,10,11,12,13,14,15,16], by contrast, highly symmetric graphs are taken from the outset, which potentially take into account any mutual transformations of all possible isomers in the problem.

Fig. 3
figure 3

The graph H determining the Brandt groupoid B. (Built using SageMath.)

We also select the following subgraph Q of the graph H, shown in Fig. 4.

Although Fig. 4 hides it, Q is a planar graph. If we disregard vertex 1 in H, which is a fixed point of all automorphisms of this graph, we can say that the graphs H and Q have the same eight-element automorphism group generated by the permutations \([(3,5)(6,7)(8,9)(12,13)(14,18)(15,19)(16,17), (10,11)(2,3)(4,5)(6,8)(12,14)(13,15)(16,19)(17,18)]\). This automorphism group induces five orbits on the vertex set of H: \(\{1\}, \{2, 3, 4, 5\}, \{6, 7, 8, 9\}\), \(\{10, 11\}, \{12, 13, 14, 15, 16, 17, 18, 19\}\). The same orbits, except for the very first one, are induced on the vertex set of the graph Q. But the valency of every vertex in the graph Q is 2 less than that of the same vertex in H.
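
These orbits are easy to recheck without any special software: the orbits of the group generated by the two permutations above are the connected components formed by the pairs they transpose. A short plain-Python union-find sketch (our own helper, not a library routine) confirms them.

\begin{verbatim}
# Generators of the automorphism group, written as lists of transposed pairs.
gen_a = [(3, 5), (6, 7), (8, 9), (12, 13), (14, 18), (15, 19), (16, 17)]
gen_b = [(10, 11), (2, 3), (4, 5), (6, 8), (12, 14), (13, 15), (16, 19), (17, 18)]

parent = {v: v for v in range(1, 20)}

def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]
        v = parent[v]
    return v

def union(a, b):
    parent[find(a)] = find(b)

for gen in (gen_a, gen_b):
    for a, b in gen:
        union(a, b)

orbits = {}
for v in range(1, 20):
    orbits.setdefault(find(v), []).append(v)
print(sorted(sorted(o) for o in orbits.values()))
# [[1], [2, 3, 4, 5], [6, 7, 8, 9], [10, 11], [12, 13, 14, 15, 16, 17, 18, 19]]
\end{verbatim}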

There are general aspects associated with any complex multistage process, whether depicted by a graph or otherwise. In particular, when describing a process, it is required to determine the (minimum) number of process stages that must be passed through in order to go from one state of the system to another. On a graph, this can be done by calculating or directly determining the distances between its vertices. Using SageMath, we have calculated the following matrix D of distances between the vertices of the graph Q (see below).

$$\begin{aligned} D(Q)= & {} \left[ \begin{array}{ccccccccccccccccccc} 0 &{} \quad 2 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 1 &{} \quad 3 &{} \quad 1 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 2\\ 2 &{} \quad 0 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 2\\ 1 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 3\\ 2 &{} \quad 1 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 1 &{} \quad 1\\ 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 3\\ 2 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 1 &{} \quad 1 &{} \quad 2\\ 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 1\\ 1 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 3\\ 1 &{} \quad 3 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 0 &{} \quad 4 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3\\ 3 &{} \quad 1 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 2 &{} \quad 4 &{} \quad 0 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 2\\ 1 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 0 &{} \quad 2 &{} \quad 4 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 2\\ 1 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 0 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 4 &{} \quad 3\\ 3 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 4 &{} \quad 3 &{} \quad 0 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3\\ 2 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 0 &{} \quad 3 &{} \quad 4 &{} \quad 3 &{} \quad 3\\ 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 0 &{} \quad 2 &{} \quad 3 &{} \quad 4\\ 2 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 4 &{} \quad 2 &{} \quad 0 &{} \quad 2 &{} \quad 3\\ 3 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 1 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 3 &{} \quad 4 &{} \quad 3 &{} \quad 
3 &{} \quad 3 &{} \quad 2 &{} \quad 0 &{} \quad 2\\ 2 &{} \quad 2 &{} \quad 3 &{} \quad 1 &{} \quad 3 &{} \quad 2 &{} \quad 1 &{} \quad 3 &{} \quad 3 &{} \quad 2 &{} \quad 2 &{} \quad 3 &{} \quad 3 &{} \quad 3 &{} \quad 4 &{} \quad 3 &{} \quad 2 &{} \quad 0 \end{array}\right] .\nonumber \\ \end{aligned}$$
(4)
Fig. 4
figure 4

The “essential” subgraph \(Q\subset H\), being the graph H less its vertex 1 and loops. (Built using SageMath.)

To keep the correspondence of the graph Q with H (Q being the subgraph of H without vertex 1), we number the rows (columns) of the matrix D consecutively from 2 up to 19 inclusive. As an example, let us choose the entry \(d_{10, 11}=4\) of the matrix D. The indicated distance 4 means that the shortest path from vertex 10 to vertex 11 of the graph Q contains four edges and there are no shorter paths in Q connecting these vertices (although this distance is equal to 2 in H). As such a path, we take the one that sequentially traverses the edges (10, 2), (2, 9), (9, 3), (3, 11) in Fig. 4. Here, remember that each edge (i, j) of the graph Q (or, earlier, H) corresponds to the (i, j)-th entry of Table 1. Specifically, we have:

$$\begin{aligned} (10, 2)\Rightarrow p_{4}; (2, 9)\Rightarrow p_{13}; (9, 3)\Rightarrow p_{15}; (3, 11)\Rightarrow p_{5}. \end{aligned}$$
(5)

Now we calculate the product of permutations from B [see (2)], sequentially including \(p_{10}\) (where 10 is the number of the first vertex of the path), \(p_{4}, p_{13}, p_{15}, p_{5}\) (which correspond to the edges of the path):

(6)

that is, we have passed along our path of length 4 from vertex 10 to vertex 11 of the graph Q. It works the same way for any other path in Q.

Note that the distance matrix D of Q can be represented as a sum of four (0, 1)-matrices with positive integer coefficients: \(D=\sum _{s=1}^{4}sD_{s}\), where the matrix \(D_{s}\) \((s=1, 2, 3, 4)\) is the adjacency matrix of the corresponding distance graph \(Q_{s}\), which has the same vertex set as Q but in which vertices are adjacent iff they are at distance s in Q \((Q_{1}\equiv Q)\). Recall that \(Q^{s}=\cup _{i=1}^{s}Q_{i}\) is the s-th power of the graph Q (see [7] or Preliminaries), which is related to the problem of particles moving on a graph over a bounded distance of no more than s. Both \(Q_{s}\) and \(Q^{s}\) can be constructed using the appropriate standard SageMath routines.
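
A tiny plain-Python illustration of this decomposition, run on the 5-vertex pyramid graph of Fig. 2 rather than on Q itself so that the matrices stay small (the helper code and labelling are ours): it computes the distance matrix by the Floyd-Warshall routine, splits it into the matrices \(D_{s}\), forms the corresponding graph powers, and checks \(D=\sum _{s}sD_{s}\) entrywise.

\begin{verbatim}
n = 5                                   # pyramid: base 0-1-2-3, apex 4 (our labelling)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 4), (1, 4), (2, 4), (3, 4)]
INF = 10 ** 6
D = [[0 if i == j else INF for j in range(n)] for i in range(n)]
for a, b in edges:
    D[a][b] = D[b][a] = 1
for k in range(n):                      # Floyd-Warshall shortest paths
    for i in range(n):
        for j in range(n):
            D[i][j] = min(D[i][j], D[i][k] + D[k][j])

diam = max(max(row) for row in D)
D_s = {s: [[1 if D[i][j] == s else 0 for j in range(n)] for i in range(n)]
       for s in range(1, diam + 1)}     # adjacency matrices of the distance graphs
power = {s: [[1 if 1 <= D[i][j] <= s else 0 for j in range(n)] for i in range(n)]
         for s in range(1, diam + 1)}   # adjacency matrices of the graph powers
print(all(D[i][j] == sum(s * D_s[s][i][j] for s in D_s)
          for i in range(n) for j in range(n)))          # D = sum_s s * D_s
print(power[diam] == [[0 if i == j else 1 for j in range(n)] for i in range(n)])
# True: the diam-th power of the pyramid graph is complete
\end{verbatim}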

There are also other graph-theoretic procedures which, in particular, are related to the study of mutual transformations of isomers or local symmetry operations, but which also apply to the study of any system with restrictions on possible sequences of transitions between states. They include determining the number of allowed pathways of the system from one state to another through a certain number of stages. In our case, this is done by counting the number of all walks of length l \((l\in \mathbb {N})\) on the graph Q from vertex i to vertex j. Let \(A^{l}=[a_{ij}^{(l)}]_{i,j=2}^{19}\) denote the l-th power of the adjacency matrix A(Q) \(\big (A^{1}=A\big )\). Then, the number of all walks of length l from vertex i to vertex j \(\big (i,j\in \{2,3, \ldots ,19\}\big )\) equals the entry \(a_{ij}^{(l)}\) of \(A^{l}\) (see p. 151 in [7] or more on pp. 44–45 in [18]); and the number \(W_{l}\) of all walks of length l on Q is equal to \({\mathbf {1}}^{\!\top }\!A^{l}{\mathbf {1}}\), where \({\mathbf {1}}^{\!\top }\) denotes the transpose of the column vector \({\mathbf {1}}\), all entries of which are 1’s.
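
The following plain-Python sketch spells out these walk counts, again on the small pyramid graph so that the matrix fits (for Q one would start from the \(18\times 18\) matrix A(Q)): the (i, j) entry of \(A^{l}\), the total \(W_{l}={\mathbf {1}}^{\!\top }\!A^{l}{\mathbf {1}}\), and, anticipating the discussion below, the closed-walk count \(\mathrm {Tr}(A^{l})\).

\begin{verbatim}
def mat_mult(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Adjacency matrix of the 5-vertex pyramid graph (base 0-1-2-3, apex 4).
A = [[0, 1, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [0, 1, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [1, 1, 1, 1, 0]]

Al = A                                   # current power A^l
for l in range(1, 6):
    W = sum(sum(row) for row in Al)      # W_l = 1^T A^l 1, all walks of length l
    N = sum(Al[i][i] for i in range(5))  # N_l = Tr(A^l), all closed walks of length l
    print(l, "walks:", W, "closed walks:", N, "from vertex 0 to 2:", Al[0][2])
    Al = mat_mult(Al, A)
\end{verbatim}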

There is also an asymptotic spectral estimate \(W_{l}\approx n\lambda _{1}^{l}\), where n is the number of vertices, and \(\lambda _{1}\) is the maximum eigenvalue of the adjacency matrix A of the corresponding graph (see [18]). For Q, we have \(W_{l}(Q)\approx 18\,\lambda _{1}(Q)^{l}\). For the full graph H of the groupoid B [see the adjacency matrix A(H) in (3)], \(W_{l}(H)\approx 19\,\lambda _{1}(H)^{l}\). Comparing these two formulae, and taking into account that \(\lambda _{1}(H)>\lambda _{1}(Q)\), it becomes clear that the fraction of walks on H that pass through vertex 1 asymptotically tends to one \(\left( \lim _{l\rightarrow \infty }\frac{W_{l}(H)-W_{l}(Q)}{W_{l}(H)}=1\right) \). Note that the “special property” of vertex 1 in the graph H is associated only with the identity rearrangement of ligands in any of the isomers, and it should not be related to a preference for any particular isomer. Remember that the vertices of the graph H are permutations, not isomers, as they really can be in [9,10,11,12,13,14,15,16].

Among all walks on the graph Q \(\big (H\big )\), we can separately consider closed walks, which correspond to the return of the system after l stages to the initial state from which it departed. For a graph of a system that allows a return to some previous states, closed walks on this graph are a mandatory attribute. The same holds for any chemical process with an established equilibrium, in which several reversible reactions are coupled, when the product of one of the reactions is again the initial component for the reaction that took place at one of the previous stages. A simple example is the equilibrium in a solution of several competing coordination compounds, in which the same ligands are common to several different complexes (as in a mixture of complexes of various metal cations with halide anions and ammonia). Any walk of length \(l\ge n\) on an n-vertex graph from vertex i to another vertex j must contain embedded cycles. Therefore, in a general context, the existence of closed walks cannot be ignored.

In this general setting, the number of walks of length l returning to the vertex i is equal to the diagonal entry \(a_{ii}^{(l)}\) of the matrix \(A^{l}\) of the corresponding graph. The number \(N_{l}\) \(\big (l\in \mathbb {N}\cup \{0\}\big )\) of all closed walks of length l on the graph is the trace \(\mathrm {Tr}(A^{l})\) of the matrix \(A^{l}\) (\(\mathrm {Tr}[A^{l}(H)]=\sum _{i=1}^{19}a_{ii}^{(l)}\) and \(\mathrm {Tr}[A^{l}(Q)]=\sum _{i=2}^{19}a_{ii}^{(l)}\)). For a nonbipartite graph, there is also an asymptotic estimate \(N_{l}\approx \lambda _{1}^{l}\); in particular, \(N_{l}\approx \lambda _{1}(Q)^{l}\) for Q and \(N_{l}\approx \lambda _{1}(H)^{l}\) for H. Thus, \(\lim _{l\rightarrow \infty }\frac{N_{l}}{W_{l}}=1/n\), that is, 1/18 and 1/19 for Q and H, respectively. For a bipartite graph (which is not our case here), \(N_{l}\approx 2\lambda _{1}^{l}\) for even l, since its minimum (negative) eigenvalue \(\lambda _{n}\) has the same modulus as the maximum positive eigenvalue \(\lambda _{1}\) \(\big (|\lambda _{n}|=\lambda _{1}\big )\); accordingly, the share of closed walks also doubles.

It should be noted that there are recurrence relations for the exact calculation of the number \(N_{l + s}\) from the already known numbers \(N_{l}, N_{l + 1}, \ldots , N_{l + s-1}\) (see p. 81 in [18]). In the general case, there may be different recurrence relations requiring different numbers s of known previous values \(N_{(\cdots )}\). From a practical perspective, we will try to discuss this issue in a little more detail. In order to proceed, we need to introduce some terminology.

The characteristic polynomial \(\phi (\varGamma ; x)\) of a (di)graph \(\varGamma \) is the characteristic polynomial of its adjacency matrix \(A(\varGamma )\):

$$\begin{aligned} \phi (\varGamma ; x):=\phi [A(\varGamma );x]=\det [xI-A(\varGamma )]=\sum _{k=0}^{n}c_{k}x^{n-k}\quad \big (c_{0}:=1\big ). \end{aligned}$$
(7)

The spectrum \(\mathrm {Sp}(\varGamma )=\{\lambda _{1}, \lambda _{2}, \ldots , \lambda _{n}\}\) (of eigenvalues) of \(\varGamma \) is the multiset of all roots of \(\phi (\varGamma ;x)\) (see p. 23 in [18]). Accordingly,

$$\begin{aligned} \phi (\varGamma ;x)=\prod _{k=1}^{n}(x-\lambda _{k}). \end{aligned}$$
(8)

Knowing the coefficients \(c_{k}\) (7) of the characteristic polynomial \(\phi (\varGamma ;x)\) always guarantees finding one recurrence relation for the total number of closed walks, which, however, in the general case, may not be optimal in length (see p. 20 in [18]):

$$\begin{aligned} N_{l+n}=-c_{1}N_{l+n-1}-c_{2}N_{l+n-2}-\cdots - c_{n}N_{l}\quad \big (l\in \mathbb {N}\cup \{0\}; N_{0}=n\big ). \end{aligned}$$
(9)

The minimum polynomial \(\phi _{\mathrm {min}}(A;x)\) of a matrix A is the unique monic polynomial of least degree \(n_{\mathrm {min}}\) satisfying the matrix equation \(\phi _{\mathrm {min}}(A; A)=O\), where the second A is substituted in place of x, and O is the zero matrix (of all 0’s). All roots of \(\phi _{\mathrm {\min }}(A;x)\) are also roots of \(\phi (A;x)\) (see p. 20 in [18]). For us, \(\phi _{\mathrm {min}}(A;x)=\sum _{k=0}^{n_{\mathrm {min}}}t_{k}x^{n_{\mathrm {min}}-k}\) \(\big (t_{0}:=1\big )\) (when it does not coincide with \(\phi (A; x)\)) is useful in that its coefficients allow us to obtain a shorter recurrence relation:

$$\begin{aligned} N_{l+n_{\mathrm {min}}}= & {} -t_{1}N_{l+n_{\mathrm {min}}-1}-t_{2}N_{l+n_{\mathrm {min}}-2}-\cdots -t_{n_{\mathrm {min}}}N_{l}\quad \nonumber \\&\quad \big (l\in \mathbb {N}\cup \{0\}; N_{0}=n; n_{\mathrm {min}}\le n\big ). \end{aligned}$$
(10)

As an example, compare the polynomials of the adjacency matrices of our graphs H and Q.

$$\begin{aligned} \phi (H;x)= & {} x^{19} - 19x^{18} + 123x^{17} - 233x^{16} - 739x^{15} + 3537x^{14} - 1525x^{13}\nonumber \\&- 12377x^{12}+14851x^{11} + 17639x^{10} - 31663x^{9} - 11323x^{8} + 30215x^{7} \nonumber \\&+4275x^{6} - 14439x^{5} - 2067x^{4} + 3224x^{3} + 744x^{2} - 176x - 48;\nonumber \\ \phi _{\mathrm {min}}(H;x)= & {} x^{14}-15x^{13}+63x^{12}+11x^{11}-572x^{10}+702x^{9}+1354x^{8}-2226x^{7}\nonumber \\&- 1363x^{6}+ 2317x^{5}+879x^{4}-873x^{3}-354x^{2}+52x+24. \end{aligned}$$
(11)
$$\begin{aligned} \phi (Q;x)= & {} x^{18} - 30x^{16} - 20x^{15} + 341x^{14} + 404x^{13} -1792x^{12} - 2992x^{11} \nonumber \\&+ 4044x^{10} + 9672x^{9} - 1772x^{8} - 12432x^{7} -3328x^{6} + 4752x^{5}\nonumber \\&+ 880x^{4} - 928x^{3} + 128x^{2};\nonumber \\ \phi _{\mathrm {min}}(Q;x)= & {} x^{12}-x^{11}-23x^{10}+3x^{9}+192x^{8}+94x^{7}-670x^{6}-612x^{5}\nonumber \\&+816x^{4}+976x^{3}-184x^{2}-272x+64. \end{aligned}$$
(12)

As can be seen from (11), the maximum power of x in the minimum polynomial \(\phi _{\mathrm {min}}[A(H); x]\) is 5 less than in the characteristic polynomial \(\phi [A(H); x]\); and in (12), this difference for \(\phi _{\mathrm {min}}[A(Q); x]\) and \(\phi [A(Q); x]\) is 6. Thus, it is more efficient for us to use the recurrence formula (10) with the minimum polynomials \(\phi _{\mathrm {min}}[A(H);x]\) and \(\phi _{\mathrm {min}}[A(Q);x]\) from (11) and (12), respectively, than formula (9) with \(\phi [A(H); x]\) and \(\phi [A(Q); x]\). Since it is not difficult to obtain specific working formulas from (10) using the coefficients given in (11) and (12), we will not dwell on this here. But we still want to consider the symmetry of the graphs H and Q in relation to our problem.
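
As a sanity check of recurrence (10), the following SageMath sketch (SageMath is Python-based; the \(5\times 5\) pyramid matrix is only a stand-in for A(H) or A(Q), whose coefficients are listed in (11) and (12)) computes the characteristic and minimal polynomials of an adjacency matrix and verifies that the minimal-polynomial coefficients reproduce the closed-walk numbers \(N_{l}\).

\begin{verbatim}
A = matrix(ZZ, [[0, 1, 0, 1, 1],
                [1, 0, 1, 0, 1],
                [0, 1, 0, 1, 1],
                [1, 0, 1, 0, 1],
                [1, 1, 1, 1, 0]])
print(A.charpoly())            # characteristic polynomial, degree n = 5
m = A.minpoly()                # minimal polynomial, degree n_min <= n
print(m)
d = m.degree()
t = m.list()                   # coefficients from the constant term up; t[d] = 1

N = [(A ** l).trace() for l in range(d + 10)]   # N_0 = n, N_1, N_2, ...
# Recurrence (10): N_{l+d} = -t_1 N_{l+d-1} - ... - t_d N_l, with t_k = t[d - k].
print(all(N[l + d] == -sum(t[d - k] * N[l + d - k] for k in range(1, d + 1))
          for l in range(10)))                  # True
\end{verbatim}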

Recall the two general asymptotic formulae for calculating the number of all closed walks on a (di)graph (see above): \(N_{l}\approx \lambda _{1}^{l}\) for a nonbipartite graph (our case) and \(N_{l}\approx 2\lambda _{1}^{l}\) (for even l) for a bipartite graph (not our case). In both cases, to calculate these numbers, we need to know the maximum eigenvalue \(\lambda _{1}\) of the adjacency matrix of the corresponding graph. In our case, \(\lambda _{1}(H)=7.476045837502325\) and \(\lambda _{1}(Q)=3.895106515927531\), which are computed using the \(19\times 19\) matrix A(H) from (3) and the \(18\times 18\) matrix A(Q), or their characteristic polynomials \(\phi [A(H);x]\) in (11) and \(\phi [A(Q);x]\) in (12), respectively. However, to selectively compute the values \(\lambda _{1}\) in our problem, the original matrices can be replaced by derived matrices of lower dimensions. And this can be done due to the symmetry of our graphs H and Q. The symmetry and the vertex orbits of the graphs H and Q have already been considered above, in the paragraph accompanying Fig. 4. To use this information for our purposes, we additionally need some general reasoning.

Since all vertices in one orbit are equivalent, the following holds. Each vertex in the orbit \(\mathrm {Orb}_{i}\) of an arbitrary (di)graph \(\varGamma \) has the same number \(\nu _{ij}\) of adjacent vertices in the orbit \(\mathrm {Orb}_{j}\) (\(i\ne j\) or \(i=j\)); note that the numbers \(\nu _{ij}\) and \(\nu _{ji}\) are not necessarily equal to each other. In the case of a (0, 1) adjacency matrix \(A(\varGamma )\), the numbers \(\nu _{ij}\) can be calculated as follows. First, we number all the vertices of the graph (respectively, the rows and columns of its adjacency matrix) in such a way that the vertices belonging to one common orbit are numbered consecutively. In this case, the adjacency matrix is partitioned into rectangular blocks (with square blocks on the diagonal). All row sums inside the (i, j)-th block will be equal to the corresponding number \(\nu _{ij}\).

Let \(\omega \) denote the number of all vertex orbits in a graph. The \(\omega \times \omega \) matrix \(\varOmega :=[\nu _{ij}]_{i,j=1}^{\omega }\), whose entries are the numbers \(\nu _{ij}\) defined above, is the weighted adjacency matrix of a (di)graph \({\mathfrak {D}}(\varGamma )\). The (di)graph \({\mathfrak {D}}\) (resp. the weighted adjacency matrix \(\varOmega \) of \({\mathfrak {D}}\)) is called the quotient graph, or divisor, of a (di)graph \(\varGamma \) (resp. of the adjacency matrix \(A(\varGamma )\)) (see Ch. 3.4 in [18]). The term “divisor” is explained by the fact that the characteristic polynomial \(\phi ({\mathfrak {D}};x) = \phi (\varOmega ; x) \) divides the characteristic polynomial of the (di)graph \(\varGamma \). Thus, all eigenvalues (roots) of the divisor \({\mathfrak {D}}(\varGamma )\) \(\big (\varOmega \big )\) are also eigenvalues of \(\varGamma \). But the main thing for us is that the divisor of a graph (matrix) always has the same maximum eigenvalue \(\lambda _{1}\) as the “divided” graph (matrix) itself: \(\lambda _{1}[A(\varGamma )]=\lambda _{1}\big (A[{\mathfrak {D}}(\varGamma )]\big )\). This is a consequence of symmetry. See below the \(5\times 5\) matrix \(A[{\mathfrak {D}}(H)]\) and the \(4\times 4\) matrix \(A[{\mathfrak {D}}(Q)]\), which are divisors of the adjacency matrices A(H) and A(Q), respectively.

$$\begin{aligned} A[{\mathfrak {D}}(H)]=\left[ \begin{array}{ccccc} 1 &{} \quad 4 &{} \quad 4 &{} \quad 2 &{} \quad 8\\ 1 &{} \quad 2 &{} \quad 2 &{} \quad 1 &{} \quad 2\\ 1 &{} \quad 2 &{} \quad 1 &{} \quad 0 &{} \quad 2\\ 1 &{} \quad 2 &{} \quad 0 &{} \quad 1 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 1 &{} \quad 0 &{} \quad 1 \end{array} \right] ;\quad A[{\mathfrak {D}}(Q)]=\left[ \begin{array}{cccc} 1 &{} \quad 2 &{} \quad 1 &{} \quad 2\\ 2 &{} \quad 0 &{} \quad 0 &{} \quad 2\\ 2 &{} \quad 0 &{} \quad 0 &{} \quad 0\\ 1 &{} \quad 1 &{} \quad 0 &{} \quad 0 \end{array} \right] . \end{aligned}$$
(13)

Note that if \(\varGamma \) is a weighted (di)graph, whose weighted adjacency matrix can have any numerical (or other) values, then the generalization of the number \(\nu _{ij}\) is the sum of the weights of all arcs going out of any one vertex of the orbit \(\mathrm {Orb}_{i}\) to the vertices of the orbit \(\mathrm {Orb}_{j}\). A generalization of group orbits in the context of eigenvalues is any block partition of the matrix in which all row sums of entries within each block are equal. This is a necessary and sufficient condition for finding divisors of any kind (obtained using the orbits of symmetry groups, semigroups [8], or otherwise). Such a partition is also called a (weighted) equitable partition of a matrix (see an example in [19]). It should be noted that the number \(\omega \) of orbits of any type is not less than the number of the main eigenvalues of the adjacency matrix of a graph (see [20]), and all main eigenvalues of the (di)graph \(\varGamma \) are also main eigenvalues of each of its divisors \({\mathfrak {D}}(\varGamma )\).
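
To illustrate the divisor construction in code, here is a generic plain-Python/NumPy sketch (the helper quotient_matrix is ours): it checks that a given vertex partition is equitable, forms the quotient matrix \([\nu _{ij}]\), and compares its largest eigenvalue with \(\lambda _{1}\) of the full matrix. The demonstration uses the pyramid graph with its two orbits, the apex and the base vertices; feeding in A(H) or A(Q) together with the vertex orbits found earlier would give the divisors of (13), up to the ordering of the orbits.

\begin{verbatim}
import numpy as np

def quotient_matrix(A, parts):
    """Return the quotient matrix [nu_ij] if the partition is equitable,
    otherwise raise ValueError."""
    omega = []
    for Pi in parts:
        row = []
        for Pj in parts:
            sums = {sum(A[u][v] for v in Pj) for u in Pi}   # row sums in block (i, j)
            if len(sums) != 1:
                raise ValueError("partition is not equitable")
            row.append(sums.pop())
        omega.append(row)
    return omega

# Pyramid graph again (base 0-1-2-3, apex 4); orbits: {apex}, {base vertices}.
A = [[0, 1, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [0, 1, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [1, 1, 1, 1, 0]]
parts = [[4], [0, 1, 2, 3]]
Omega = quotient_matrix(A, parts)
print(Omega)                                     # [[0, 4], [1, 2]]
lam_full = max(np.linalg.eigvals(np.array(A, dtype=float)).real)
lam_quot = max(np.linalg.eigvals(np.array(Omega, dtype=float)).real)
print(lam_full, lam_quot)                        # both 1 + sqrt(5) = 3.236...
\end{verbatim}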

We have previously mentioned the example of the movement of particles at a limited distance from their original positions. It is also desirable for us not to stray far from the topic of the direct application of Brandt groupoids themselves. We now want to consider additional examples of such applications, using, for greater clarity, also an analogous example from outside physical chemistry. We want to further understand where and why Brandt groupoids can be used.

As a speculative example, let us imagine the assembly at a plant of a complex unit from a large number of ready-made blocks. In general, assemblers can vary the sequence of assembly operations. For example, if an object has a mirrorlike symmetry of its sides, then at the appropriate stage the assembler can optionally attach the desired block either to the left or to the right, and only then the same block to the other (remaining) side. However, it is inconvenient or impossible for him/her to start the assembly from blocks that should be in the middle of the unit. Thus, we see a picture of a system with many allowed states, which are all the results of attaching individual blocks (with all possible assembly options). And we see a lot of allowed (and disallowed) transitions between the states of the system; in this case, a transition (operation) is the attachment of the next specific block to specific previously installed blocks.

Although, in the general case, many ways can lead to the same result, the choice of a possible assembly path is limited by the ability to combine separate attachments of the missing blocks sequentially. That is, although all states of the system are reachable at different stages, restrictions are imposed on the sequence of transitions. Having attached a specific part, the assembler knows which part can be attached next and which cannot. As applied to the description of our system, the use of the Brandt groupoid assumes that all constraints imposed on a sequence of operations are expressed in the language of constraints on the simplest sequences of two successive operations. Also, this algebraic model assumes the reversibility of any performed operation immediately after its execution (when the attached block can be disconnected, and a multistage sequential disassembly is possible in the order strictly reverse to assembly).

Above, we considered an example of mutual transformations of substitutional isomers, in which the transformation of any isomer into any other of the considered ones is possible, but there is a limited number of possible pathways from one to another. In general, isomerization occurs through allowed intermediate stages. Note that, owing to the ability to disassemble an already (partially) assembled unit in the previous example, it is also (in principle) possible to go from any stage of the assembly (allowed state of the system) to any other stage using the same blocks after disassembly (and others, if necessary). Disassembly of the hexaamminecobalt(III) chloride molecule (see Fig. 1) in this context means its dissociation.

Very similar to the isomerization of substitutional isomers is the mutual transformation of conformations of a flexible chain molecule. One should make the reservation here that there can be an uncountable number of such molecular conformations; therefore, either one must choose a discrete subset of them, or introduce an infinite Brandt groupoid (which is not considered here). The very change of conformations (for example, the folding of an extended chain into a globule) occurs due to the rotation of segments of the chain molecule around single chemical bonds. Due to stereochemical limitations, not all direct transitions between conformations are possible; a whole sequence of intermediate conformations may be required. Again, the aforementioned groupoid model with a finite number of selected conformations must indicate the allowed (forbidden) two-element sequences of conformational transformations. A single transformation can be understood as a rotation through a specific angle around a specific chemical bond of the chain molecule. Stereochemistry gives an unambiguous answer to the question of whether it will be possible to perform two such rotations one after the other. Therefore, the model based on the Brandt groupoid is also applicable in this example.

Another example is a system of competing reversible chemical reactions in overall equilibrium (a system in a steady state, where the concentrations of all components are already constant). An element of the set of states of this system is a single component or a subset of components that are either the initial or final products of any of the reactions involved. A substance that is the product of some reaction can again become the starting component for a preceding reaction, which then again gives the initial component for the subsequent reaction. Two reactions are related to each other only when the products of one contain the initial component(s) of the other. Therefore, the transitions between the states of our system can occur only along certain pathways. And any two reactions form an incompatible reaction pair if and only if these reactions are not related. Such a pair corresponds to a zero product of elements of the corresponding modeling Brandt groupoid \((g_{1}*g_{2}=0)\).

It is also worth recalling once again the successful applications of Brandt groupoids in crystallography [3,4,5]. The structure analysis of a complex crystal can be greatly helped by the presence of crystal symmetry, which reduces the number of unknowns when determining the coordinates of atoms. The final establishment of a dissymmetric or only locally symmetric structure is helped by obtaining a preliminary approximate model of it. In particular, such a model is obtained by dividing the structure under study into conditionally symmetric (read: locally symmetric) parts. The principle of local symmetry means considering symmetry in a limited part of the object, in which the symmetry operations are defined if we do not take into account the rest of the object. However, in the general case, an arbitrary combination of two local symmetry operations performed in a crystal can lead to mixing of atoms that were in the selected part of the crystal with those in the rest of it. That is, the combination of two such operations will no longer be a local symmetry operation, which precludes the formation of a symmetry group. The Brandt groupoid B collects all local symmetry operations and records which pairwise combinations of them also yield a local symmetry operation (exactly as in a group), and which do not; in the latter case, the result of the “illegal” combination of operations is indicated by the zero of the groupoid.

The use of groupoids should not present much of a problem for a reader who has previously used symmetry groups, where no restrictions are imposed on the group elements and the binary operation. It is enough to get used to the idea that such restrictions can be imposed, and a reader attuned to using groups can easily use groupoids as well. The use of Brandt groupoids can also be approached from the side of graph theory, since each simple graph also represents some groupoid. Thus, Brandt groupoids can serve as a concept unifying the use of group theory and graph theory.

In our presentation, we have briefly reviewed the theory of Brandt groupoids, as well as examples of their (possible) applications. Using the example of the isomerization of substitutional isomers, we have also considered what additional mathematical apparatus can be used to solve such practical problems. In closing, we would like to recommend that all interested readers download the free computer algebra system SageMath (preferably together with GAP) from the Internet and use it in research. This system, in particular, can perform the calculations required when using Brandt groupoids (as in the examples discussed above).