1 Introduction

Ordinary domination is among the most studied problems in graph theory (Goddard and Henning 2013; Haynes et al. 1998). The task is to determine the minimum number of places at which to keep a resource so that every place either has a resource or is close enough to a place that has one. In practical applications it is quite common that additional constraints or requirements are taken into account.

One of the most popular varieties, the k-rainbow domination problem, was first studied in Brešar et al. (2005), and later elaborated and applied in a number of works (Brešar and Šumenjak 2007; Chang et al. 2010; Gabrovšek et al. 2020, 2019; Gao et al. 2019; Kraner Šumenjak et al. 2013, 2018; Shao et al. 2019a, b). Two similar but distinct types of rainbow domination with suitably encoded independence conditions have been studied: the independent rainbow domination numbers of graphs (Gabrovšek et al. 2020; Shao et al. 2019a) and the rainbow independent domination numbers of graphs (Kraner Šumenjak et al. 2018). From the point of view of practical applicability, both variants are well motivated. The differences between the two concepts are nicely explained in Kraner Šumenjak et al. (2018). For more details on practical motivations and examples of rainbow domination, independent rainbow domination and rainbow independent domination we refer to Brešar et al. (2005), Gabrovšek et al. (2020), Gao et al. (2019), Kraner Šumenjak et al. (2018) and Shao et al. (2019a).

In Gabrovšek et al. (2020), the independent rainbow domination numbers of the generalized Petersen graphs P(n, 2) and P(n, 3) were established by adapting a well-known tropical path-algebra technique for polygraphs. The method has previously been applied to domination problems (and others), see e.g. Klavžar and Žerovnik (1996), Pavlič and Žerovnik (2013), Repolusk and Žerovnik (2018), Žerovnik (2006) and Žerovnik (1999). In the present article we adjust this technique so that it also works in the case of rainbow independent domination. The main difference from previous applications of the technique is that we define the auxiliary graph on pairs of consecutive monographs, not on single monographs as before. The reason for the change is that it allows a more efficient implementation for computing the invariant considered here. In this way we obtain a general result describing the t-rainbow independent domination number of a given polygraph as the minimum weight of a closed walk of length n in a suitably defined graph (Theorem 3.1) and, consequently, as a minimum diagonal entry of the tropical product of length n of suitably defined associated matrices (Theorem 3.2). We then apply these results to obtain the exact values of the 2-rainbow independent domination numbers of the Cartesian products \(C_n \Box P_m\) and \(C_n \Box C_m\) for all n and \(m\le 5\), and also of the generalized Petersen graphs P(n, 2). These results were previously announced in the conference article (Gabrovšek et al. 2021) without detailed proofs.

The article is organized as follows. In Sect. 2 we present some basic definitions and known facts on rainbow independent domination, polygraphs and tropical algebra. In Sect. 3 we provide the necessary theoretical framework, and in Sect. 4 we obtain the exact values of the 2-rainbow independent domination numbers of the polygraphs mentioned above.

2 Preliminaries

2.1 Rainbow independent domination of graphs

A graph F is a combinatorial object defined by two sets, an arbitrary set \(V=V(F)\) of vertices and a set \(E(F) \subseteq V \times V\) of edges. Usually we identify \((u,v) = (v,u)\) and obtain an undirected graph; otherwise, F is a directed graph or digraph. Let F be a graph, \(S \subseteq V(F)\) and let \(w \in V(F)\). The open neighborhood of w in S is denoted by \(N_S(w)\), i.e., \(N_S(w)=\{u ~|~ (u, w) \in E(F), u \in S\}\). Similarly, the closed neighborhood of w in S is denoted by \(N_S[w]\), i.e., \(N_S[w]=\{w\} \cup N_S(w)\). If \(S=V(F)\) and no confusion can arise, we write N(w) and N[w] instead of \(N_S(w)\) and \(N_S[w]\), respectively. If \(T \subseteq V(F)\), then we define \(N(T)= \cup _{x \in T}N(x)\). A subset S of V(F) whose vertices are pairwise non-adjacent is called an independent set of the graph F. As is well known, the degree of a vertex w is the total number of edges incident to w. The interval [i, j] of integers \(i \le j\) is defined by \( [i,j] = \{ k \in {\mathbb {N}} ~|~ i\le k \le j \}\). Two graphs F and H are called isomorphic if and only if there is a bijection \(\psi : V(F) \rightarrow V(H)\) such that \( (u,v) \in E(F){\iff }(\psi (u),\psi (v)) \in E(H) \). For basic definitions not given here see Hammack et al. (2011).

In Kraner Šumenjak et al. (2018), the notion of t-rainbow independent domination was introduced. For a function \(f : V (F) \rightarrow \{0, 1, 2,\ldots , t\}\) we denote by \(V_i\) the set of vertices to which the value i is assigned by f, i.e., \(V_i = \{v \in V (F) ~|~ f(v) = i\}\). A function \(f : V (F) \rightarrow \{0, 1,\ldots , t\}\) is called a t-rainbow independent dominating function (tRiDF for short) of F if the following two conditions hold:

  1. The set \(V_i\) is independent for each \(i=1, \dots , t\), and

  2. For every \(v\in V_0\) and for every \(i=1,\ldots ,t\) we have \(N(v)\cap V_i \ne \emptyset \).

The weight of a tRiDF f of a graph F is the value \(w(f) =\sum _{i=1} ^t |V_i|.\) The t-rainbow independent domination number \(\gamma _{\textrm{rit}}(F)\) is the minimum weight over all tRiDFs of F.

If f is a tRiDF of F and H is a subgraph of F, then f, restricted to H, is called a partial tRiDF (ptRiDF) for H. Note that the restriction of f to H, that is, a ptRiDF of H, is not necessarily a tRiDF of H.

Note that a tRiDF f can alternatively be represented by an ordered partition \((V_0,V_1,\dots ,V_t)\), where (\(v \in V_i{\iff }f(v)=i\) for \(i= 0,1,2,\dots ,t\)) and the set \(V_i\) is independent for each \(i= 1,2,\dots ,t\). We sometimes simply write \(f =(V_0,V_1,\dots ,V_t)\).
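Conditions (1) and (2) are easy to verify mechanically. The following C++ fragment is a small illustrative sketch written for this exposition (it is not part of the program referenced in Sect. 4); the function name and the adjacency-list encoding are chosen only for illustration.

```cpp
// Illustrative sketch only: check conditions (1) and (2) for a given assignment
// f : V -> {0,...,t} on a graph given by adjacency lists, and return the weight
// of f if it is a tRiDF, or -1 otherwise.
#include <iostream>
#include <set>
#include <vector>

int tRiDFWeight(const std::vector<std::vector<int>>& adj, const std::vector<int>& f, int t) {
    int n = (int)adj.size(), weight = 0;
    for (int v = 0; v < n; ++v) {
        if (f[v] > 0) {
            ++weight;                                 // v contributes to some |V_i|, i >= 1
            for (int u : adj[v])
                if (f[u] == f[v]) return -1;          // condition (1): each V_i is independent
        } else {
            std::set<int> seen;                       // condition (2): a vertex in V_0 must have
            for (int u : adj[v])                      // a neighbor in V_i for every i = 1,...,t
                if (f[u] > 0) seen.insert(f[u]);
            if ((int)seen.size() < t) return -1;
        }
    }
    return weight;
}

int main() {
    // Example: the cycle C_5 with the assignment (1,2,0,1,2); the vertex with
    // value 0 has neighbors colored 1 and 2, so this is a 2RiDF of weight 4.
    int n = 5;
    std::vector<std::vector<int>> cycle(n);
    for (int i = 0; i < n; ++i) cycle[i] = {(i + n - 1) % n, (i + 1) % n};
    std::cout << tRiDFWeight(cycle, {1, 2, 0, 1, 2}, 2) << '\n';   // prints 4
}
```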

2.2 Polygraphs

Let \(G_{1},\ldots ,G_{n}\) be arbitrary mutually disjoint graphs and denote by \(X_{1},\ldots ,X_{n}\) a sequence of sets of edges such that an edge of \(X_{i}\) joins a vertex of \(V(G_{i})\) with a vertex of \(V(G_{i+1})\) (\(X_{i}\subseteq V(G_{i}) \times V(G_{i+1})\) for \(i\in [1,n]\)). A polygraph \(\Omega _{n}=\Omega _{n}(G_{1},\ldots , G_{n};X_{1},\ldots , X_{n})\) over monographs \(G_{1},\ldots , G_{n}\) has a vertex set \(V(\Omega _{n})=V(G_{1}) \cup \ldots \cup V(G_{n}),\) and an edge set \(E(\Omega _{n})=E(G_{1}) \cup X_{1} \cup \ldots \cup E(G_{n}) \cup X_{n}.\) For convenience, we set \(G_{0} = G_{n}\) and \(G_{n+1} = G_{1}\). Thus, \(X_0=X_n\), so we may write, for instance, \(X_{0}\subseteq V(G_{0}) \times V(G_{1}) = V(G_{n}) \times V(G_{1})\), and \(X_{n}\subseteq V(G_{n}) \times V(G_{n+1})= V(G_{n}) \times V(G_{1})\).

In the case when all graphs \(G_{i}\) are isomorphic to a fixed graph G (i.e., there exists an isomorphism \(\psi _i : V(G_i)\longrightarrow V(G)\) for \(i=0, 1,\ldots ,n+1\), with \(\psi _{0}=\psi _n\) and \(\psi _{n+1}=\psi _1\)) and all sets \(X_{i}\) are equal to a fixed set \(X \subseteq V(G) \times V(G)\) (i.e., \((u,v) \in X \Longleftrightarrow \left( \psi ^{-1}_i(u),\psi ^{-1}_{i+1}(v)\right) \in X_i\) for all i), we call such a graph a rotagraph and denote it by \(\omega _{n}(G;X)\). If a polygraph has the property that \(n-1\) of its monographs are isomorphic to a fixed graph G, and consequently at most two consecutive sets \(X_i\) are not equal to the fixed set of edges X, then we call it a nearly rotagraph.

Polygraphs were first studied in mathematical chemistry (Babić et al. 1986) as a model of polymer molecules. Furthermore, typical examples of polygraphs are Cartesian products of graphs and generalized Petersen graphs. The Cartesian product \(G\Box H\) of graphs G and H is a graph with a vertex set \(V(G) \times V(H)\), where two vertices are adjacent if and only if they are equal in one coordinate and adjacent in the other. For example \(G = C_n \Box C_m\) is a graph with \(V(G) = \{ v_{i,j} ~|~ i \in [0, n-1], j \in [0, m-1] \}\) and \(E(G) = \{ e_{i,j} ~|~ e_{i,j} = (v_{i,j}, v_{i+1,j}), i \in [0, n-1], j \in [0, m-1] \} \cup \{ e' _{i,j} ~|~ e' _{i,j} = (v_{i,j}, v_{i,j+1}), i \in [0, n-1], j \in [0, m-1] \} \), where indices i and j are read modulo n and m, respectively (see e.g. Hammack et al. 2011).

For positive integers \(n\ge 3\) and k, \(1 \le k < \frac{n}{2}\), the generalized Petersen graph P(n, k) is defined to be the graph with vertex set \(\{u^1_i, u^2_i ~|~ i\in [0,n-1]\}\) and edge set \(\{u^1_iu^2_i, u^1_iu^1_{i+k}, u^2_iu^2_{i+1} ~|~ i \in [0,n-1] \}\), in which the subscripts are computed modulo n (see e.g. Gabrovšek et al. 2020; Shao et al. 2019a; Watkins 1969).
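The adjacency lists of these families can be generated directly from the definitions above. The following sketch is again only illustrative (the helper names are ours) and follows the vertex labellings of this subsection; it assumes \(n, m \ge 3\) and \(1 \le k < n/2\).

```cpp
// Illustrative sketch only: adjacency lists of C_n □ C_m and P(n, k),
// following the vertex labellings above.  Assumes n, m >= 3 and 1 <= k < n/2.
#include <iostream>
#include <vector>

using Graph = std::vector<std::vector<int>>;

Graph cartesianCnCm(int n, int m) {                  // vertex v_{i,j} gets index i*m + j
    Graph g(n * m);
    auto addEdge = [&](int a, int b) { g[a].push_back(b); g[b].push_back(a); };
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < m; ++j) {
            addEdge(i * m + j, ((i + 1) % n) * m + j);   // edge e_{i,j}
            addEdge(i * m + j, i * m + (j + 1) % m);     // edge e'_{i,j}
        }
    return g;
}

Graph generalizedPetersen(int n, int k) {            // u^1_i -> i,  u^2_i -> n + i
    Graph g(2 * n);
    auto addEdge = [&](int a, int b) { g[a].push_back(b); g[b].push_back(a); };
    for (int i = 0; i < n; ++i) {
        addEdge(i, n + i);                           // u^1_i u^2_i
        addEdge(i, (i + k) % n);                     // u^1_i u^1_{i+k}
        addEdge(n + i, n + (i + 1) % n);             // u^2_i u^2_{i+1}
    }
    return g;
}

int main() {
    Graph p52 = generalizedPetersen(5, 2);           // the Petersen graph
    int edges = 0;
    for (const auto& nb : p52) edges += (int)nb.size();
    std::cout << edges / 2 << '\n';                  // prints 15
}
```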

2.3 Tropical algebra

Tropical algebra or min-plus algebra is a semialgebra over the ordered, idempotent semifield \({\mathbb {R}}\cup \{ \infty \}\), equipped with the operations of addition \(a \oplus b = \min (a,b)\) and multiplication \(a \odot b = a+b\). Here \(\infty \) is the unit element for addition \(\oplus \) and 0 is the unit element for multiplication \(\odot \). As in standard arithmetic the operations \(\oplus \) and \(\odot \) are associative and commutative, and \(\odot \) is distributive over \(\oplus \). Matrix operations are defined in analogy to linear algebra with tropical operations replacing the standard ones. In particular, for matrices \(A, B \in ({\mathbb {R}}\cup \{ \infty \})^{n\times n}\) the tropical or min-plus product AB is defined by

$$\begin{aligned} (AB) _{ij} = \min _{k \in [1,n]} (A_{ik}+ B_{kj}) \end{aligned}$$

for all \(i,j \in [1,n]\). The mth tropical (or min-plus) power of A is denoted by \(A^m\). To be more precise,

$$\begin{aligned} A^m _{ij} = \min _{ j_1,\ldots , j_{m-1} \in [1,n]} (A_{i j_1} +A_{j_1 j_2}+ \cdots + A_{j_{m-1} j}) \end{aligned}$$

for all \(i,j \in [1,n]\). For our purposes we will in fact consider matrices over the idempotent subsemiring \({\mathbb {N}}\cup \{0\} \cup \{ \infty \}\) equipped with the min-plus operations (also known as path algebra, see e.g. Gabrovšek et al. 2020; Klavžar and Žerovnik 1996; Pavlič and Žerovnik 2013; Repolusk and Žerovnik 2018; Žerovnik 1999). The trace of a matrix A in min-plus algebra is defined as \( {{\,\textrm{tr}\,}}(A) = \min _{i\in [1,n]} A_{ii}.\) For matrices \(A, B \in ({\mathbb {R}}\cup \{ \infty \})^{n \times n}\) it holds that (see e.g. Gabrovšek et al. 2020)

$$\begin{aligned} {{\,\textrm{tr}\,}}(AB) = {{\,\textrm{tr}\,}}(BA). \end{aligned}$$
(1)
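The following short C++ sketch, written for this exposition, illustrates the min-plus operations, the tropical product and the trace, and checks identity (1) on a small ad-hoc example; representing \(\infty \) by a large sentinel value is an implementation choice, not part of the formal definition.

```cpp
// Illustrative sketch of the min-plus operations, the tropical matrix product
// and the trace; identity (1) is checked on a small ad-hoc example.  INF is a
// large sentinel standing in for the tropical infinity.
#include <algorithm>
#include <iostream>
#include <vector>

const long long INF = 1e15;
using Mat = std::vector<std::vector<long long>>;

long long oplus(long long a, long long b) { return std::min(a, b); }                      // a (+) b
long long odot(long long a, long long b) { return (a >= INF || b >= INF) ? INF : a + b; } // a (.) b

Mat multiply(const Mat& A, const Mat& B) {           // (AB)_ij = min_k (A_ik + B_kj)
    Mat C(A.size(), std::vector<long long>(B[0].size(), INF));
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t k = 0; k < B.size(); ++k)
            for (size_t j = 0; j < B[0].size(); ++j)
                C[i][j] = oplus(C[i][j], odot(A[i][k], B[k][j]));
    return C;
}

long long trace(const Mat& A) {                      // tr(A) = min_i A_ii
    long long t = INF;
    for (size_t i = 0; i < A.size(); ++i) t = oplus(t, A[i][i]);
    return t;
}

int main() {
    Mat A = {{0, 3}, {INF, 1}}, B = {{2, INF}, {4, 0}};
    std::cout << trace(multiply(A, B)) << " == " << trace(multiply(B, A)) << '\n';  // identity (1)
}
```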

The term tropical algebra is sometimes used for all semifields isomorphic to the min-plus algebra. For more details we refer the interested reader to Bapat (1998), Butkovič (2010), Kolokoltsov and Maslov (1997), Litvinov (2007), Müller and Peperko (2015) and Rosenmann et al. (2019).

3 Theoretical framework

In Gabrovšek et al. (2020) a path-algebra technique was used for computing the independent rainbow domination numbers of generalized Petersen graphs. In this section we use a similar idea, but modify it somewhat and apply it to the case of rainbow independent domination.

We begin by defining a weighted digraph associated to a given polygraph, which in turn permits the use of the algebraic approach. Intuitively, the vertices of this digraph correspond to restrictions of tRiDFs to pairs of consecutive monographs, and its arcs correspond to pairs of such restrictions that coincide on the common monograph, i.e., that are restrictions of the same tRiDF.

Similarly to our study of the independent rainbow domination case in Gabrovšek et al. (2020), the main reason for introducing a new construction lies in the fact that, in the case of t-rainbow domination, a vertex which has neighbors in both neighboring monographs can be evaluated only when the colors of all of its neighbors are known. It would be possible to handle this by considering bigger monographs. Here we choose a different approach and define the associated digraph on ordered pairs of consecutive monographs. The associated digraph that we define can be considered as the line graph of the associated digraph from Klavžar and Žerovnik (1996), Pavlič and Žerovnik (2013), Repolusk and Žerovnik (2018) and Žerovnik (1999, 2006).

For a given polygraph \(\Omega _n(G_1,G_2, \dots , G_n;\) \(X_1,X_2, \ldots , X_n)\), we define an associated digraph \({\mathcal {G}}\) in the following way. The vertices of \({\mathcal {G}}\) are ordered tuples of subsets of vertices \((B_0,B_1,B_2, \ldots , B_t)\) such that \( B_0 \cup B_1 \cup B_2 \cup \cdots \cup B_t = V(G_i) \cup V(G_{i+1})\) for some \(i \in [1,n]\), and there exists a ptRiDF \(f =(V_0,V_1,V_2, \ldots , V_t)\) for the subgraph induced on \( V(G_i) \cup V(G_{i+1})\), defined (at least) on \( V(G_{i-1}) \cup V(G_i) \cup V(G_{i+1}) \cup V(G_{i+2})\), such that \( B_0 = V_0 \cap (V(G_i) \cup V(G_{i+1}))\), \( B_1 = V_1 \cap (V(G_i) \cup V(G_{i+1})),\ldots \), and \( B_t = V_t \cap (V(G_i) \cup V(G_{i+1}))\). We use the notation \({\mathcal {V(G)}}_{i,i+1}\) for the set of vertices of \({\mathcal {G}}\) that correspond to ptRiDFs for \( V(G_i) \cup V(G_{i+1})\). Clearly, \({\mathcal {V(G)}} = \cup _{i=1}^n{\mathcal {V(G)}}_{i,i+1}\).

As usual, the weight of a vertex \(B =(B_0,B_1,B_2,\ldots , B_t)\) is defined by the formula

$$\begin{aligned} w(B) = \frac{1}{2}( |B_1| + |B_2|+ \cdots + |B_t|). \end{aligned}$$

We introduce some more useful notation. A vertex B of \({\mathcal {G}}\) is an ordered tuple of sets that meet two consecutive monographs; the restriction of B to the monograph \(G_i\) is denoted by

$$\begin{aligned} B^i = B \cap G_i. \end{aligned}$$

Therefore \(B^i = (B^i_0,B^i_1, B^i _2,\ldots , B^i _t)\), where \(B_0^i = B_0 \cap V(G_i)\), \(B_1^i = B_1 \cap V(G_i),\ldots \), \(B_t^i = B_t \cap V(G_i)\). Two vertices of \({\mathcal {G}}\) are connected when they coincide exactly on the common monograph.

To be more formal, an arc (v, u) connects vertices v and u of \({\mathcal { G }}\) if:

  1. For some i, \(v \in {\mathcal {V(G)}}_{i-1,i}\), \(u \in {\mathcal {V(G)}}_{i,i+1}\), and

  2. v and u coincide on \(V(G_i)\). More precisely, \(v_0^i = u_0^i\), \(v_1^i = u_1^i\), \(\ldots \), \(v_t^i = u_t^i\).

In the terminology of ptRiDF’s, a tRiDF of \(G_i\) has to be defined on \(N(V(G_i)) \subseteq V(G_{i-1}) \cup V(G_i) \cup V(G_{i+1})\). It is clear that \(v\cup u\) is a ptRiDF for \(G_i\).

Furthermore, we denote the intersection of \((t+1)\)-tuples by \(v\cap u = (v_0,v_1,\ldots , v_t) \cap (u_0,u_1,\ldots , u_t) = (v_0 \cap u_0, v_1\cap u_1,\ldots , v_t \cap u_t)\), and similarly for the union \(v\cup u\). Observe that \(v\cap u = B^i\) when v and u are restrictions of \(f = B = (B_0,B_1,B_2, \ldots , B_t)\).

The weight of the arc (v, u) is defined in a natural way as the sum of the weights of v and u, so

$$\begin{aligned} w(v,u) = w(v) + w(u). \end{aligned}$$

Similarly as in Gabrovšek et al. (2020), it can be seen in a straightforward manner that a walk which is defined by consecutive arcs \((v_1,v_2), (v_2,v_3), \dots , (v_{\ell -1},v_{\ell })\) has the weight \( w(v_1) + 2 w(v_2) + \dots + 2 w(v_{\ell -1}) + w(v_\ell )\).

As we point out in the following result, the t-rainbow independent domination number is closely related to certain walks in the associated digraph \({\mathcal {G}}\). The result can be proved in a very similar manner as (Gabrovšek et al. 2020, Theorem 3.1). To avoid too much repetition of similar ideas we omit its proof.

Theorem 3.1

The t-rainbow independent domination number \(\gamma _{\textrm{rit}}(\Omega _n(G_1,G_2, \dots , G_n;\) \(X_1,X_2,\) \(\dots ,\) \(X_n))\) of the polygraph \(\Omega _n(G_1,G_2, \dots , G_n;X_1,X_2, \dots , X_n)\) is equal to the minimum weight of a closed walk of length n in \({\mathcal {G}}\).

Let us consider four consecutive monographs \(G_{k-1}\), \(G_k\), \(G_{k+1}\), and \(G_{k+2 }\), or, written equivalently, the elements of \({\mathcal {V(G)}}_{k-1,k}\), \({\mathcal {V(G)}}_{k,k+1}\) and \({\mathcal {V(G)}}_{k+1,k+2}\). Then \(u\in {\mathcal {V(G)}}_{k-1,k}\) and \(v\in {\mathcal {V(G)}}_{k,k+1}\) are connected by an arc (u, v) if they coincide on \(G_k\). Moreover, a path of length two connects \(u\in {\mathcal {V(G)}}_{k-1,k}\) and \(z\in {\mathcal {V(G)}}_{k+1,k+2}\) if there exists a \(v\in {\mathcal {V(G)}}_{k,k+1}\) such that there are arcs (u, v) and (v, z) in \({\mathcal {G}}\). We obtain a path of minimal weight if we choose \(l\in {\mathcal {V(G)}}_{k,k+1}\) such that \(w(u,l) + w(l,z)\) is minimal.

The consideration above can alternatively be written in the following matrix form. Let \({\mathcal {E(G)}}\) be the set of arcs of \({\mathcal {G}}\) and let A(k) be the matrix with elements \(a^{(k)}_{ij}\), for \(i\in {\mathcal {V(G)}}_{k-1,k}\) and \(j\in {\mathcal {V(G)}}_{k,k+1}\), where the value of \(a^{(k)}_{ij}\) equals

$$\begin{aligned} a^{(k)}_{ij} ={\left\{ \begin{array}{ll} w(i\cap j ), &{} \text {if } (i,j) \in {\mathcal {E(G)}}, \\ \infty , &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
(2)

The product \(P = A(k) A(k+1) \) is a matrix with entries

$$\begin{aligned} P_{ij} = \min \{ w(i\cap l) + w(l\cap j) \} =\min \{ a^{(k)}_{il} + a^{(k+1)}_{lj} \}, \end{aligned}$$

where l runs over all elements of \( {\mathcal {V(G)}}_{k,k+1}\) such that both \((i,l) \in {\mathcal {E(G)}}\) and \((l,j) \in {\mathcal {E(G)}}\). In more descriptive terms, the ijth entry of the product is the minimal weight of a path of length two that starts at \(i \in {\mathcal {V(G)}}_{k-1,k}\) and ends at \(j \in {\mathcal {V(G)}}_{k+1,k+2}\).

Inductively, for a polygraph with n monographs, the minimum weight of a closed walk of length n in \({\mathcal {G}}\) is a diagonal entry of the corresponding product of matrices. Note that some of the matrices A(k) may be rectangular; however, the product \(A(1) \cdots A(n)\) is a square matrix. We formally state the conclusion in the following way:

Theorem 3.2

For \(k=1,2,\dots , n\), let A(k) be the matrices defined by (2). Then the t-rainbow independent domination number of polygraph \( \Omega _n(G_1,G_2, \dots , G_n;X_1,X_2, \dots , X_n) \) is equal to

$$\begin{aligned} \gamma _{\textrm{rit}}\left( \Omega _n(G_1,G_2, \dots , G_n; X_1,X_2, \dots , X_n)\right) = {{\,\textrm{tr}\,}}( A(1)A(2)\cdots A(n)) \end{aligned}$$
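In computational terms, Theorem 3.2 reduces the problem to a single chain of min-plus products followed by a trace. A schematic sketch, reusing the Mat, multiply and trace helpers from the sketch in Sect. 2.3 (the function name is ours), could read:

```cpp
// Sketch of Theorem 3.2 (names are ours): A[0], ..., A[n-1] hold the matrices
// A(1), ..., A(n) of (2), built elsewhere; Mat, multiply and trace are the
// min-plus helpers from the sketch in Sect. 2.3.  The result is the minimum
// diagonal entry of the tropical product A(1)A(2)...A(n).
long long rainbowIndependentDominationNumber(const std::vector<Mat>& A) {
    Mat P = A[0];
    for (size_t k = 1; k < A.size(); ++k)
        P = multiply(P, A[k]);
    return trace(P);
}
```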

Let us consider a special case when the polygraph is a rotagraph. Note that in this case the matrices A(k) are independent of k. We can therefore define a matrix \(A = A(1)\) with entries \(a_{ij} = w(i\cap j)\), \((i,j) \in {\mathcal {E(G)}}\) and conclude:

Corollary 3.3

The t-rainbow independent domination number of rotagraph \(\omega _{n}(G;X)\) is

$$\begin{aligned} \gamma _{\textrm{rit}} \left( \omega _{n}(G;X)\right) = {{\,\textrm{tr}\,}}(A^{n}). \end{aligned}$$

We will also need a version of this result for the case when the polygraph is a nearly rotagraph. Let us recall that a polygraph is a nearly rotagraph if all monographs but one are isomorphic: \(G_2 \simeq G_3 \simeq \dots \simeq G_n\). Consequently, only \(X_1\) and \(X_n\) can differ from the other sets \(X_i = X\).

The following consequence follows from Theorem 3.2 and (1) (or by shifting the indices of the monographs).

Corollary 3.4

Let a polygraph \(\Omega _n(G_1,G_2, \dots , G_n;X_1,X_2, \dots , X_n)\) be a nearly rotagraph, that is \(G_2 =G_3 = \dots = G_n =G\) and \(X_2 = X_3 = \dots = X_{n-1} = X\). Then \(\gamma _{\textrm{rit}}(\Omega _n(G_1,G, \dots , G;\) \(X_1,X,\dots ,X, X_n))\) is equal to

$$\begin{aligned} {{\,\textrm{tr}\,}}(A(1)A(2)A^{n-3}A(n)) = {{\,\textrm{tr}\,}}(A(n)A(1)A(2)A^{n-3}) ={{\,\textrm{tr}\,}}( A^k A(n)A(1)A(2)A^{n-3-k}) \end{aligned}$$

for any \(k \in [1,n-3]\), where \(A=A(3)\).

4 Results

We will compute the 2-rainbow independent domination numbers of \(C_n \square P_m\) for \(m=1,2,3,4,5\), of \(C_n \square C_m\) for \(m=3,4,5\), and of the generalized Petersen graphs P(n, 2). For \(C_n \square P_m\), an explicit proof is given for \(m=1\); for \(m=2,3,4,5\) only the needed data are provided, because the proofs are analogous. Similarly, for \(C_n \square C_m\), a detailed treatment is provided only for \(m = 3\), and brief arguments are given for the other values of m. For the generalized Petersen graphs P(n, 2), we explicitly provide a proof for the case when n is odd, since, as we will see, P(n, 2) is in this case a nearly rotagraph. When n is even, P(n, 2) is a rotagraph, and the proof is analogous to the previously elaborated cases (\(C_n \square P_1\) and \(C_n \square C_3\)); the details are therefore omitted.

The source code of the C++ program for computing the matrices, products, and traces is available at Gabrovšek et al. (2022).

4.1 Computations for graphs \(\mathbf {C_n \square P_m}\)

The case \(\textbf{C}_{\textbf{n}} \square \textbf{P}_{\textbf{1}}\). First note that \(C_{n} \square P_{1}\) is trivially isomorphic to \(C_{n}\), and that \(\gamma _{ri2}(C_n) = i(C_n \square P_2)\) is a special case of the general identity \(\gamma _{rik}(G) = i(G \square K_k)\) proved in Kraner Šumenjak et al. (2018). The independent domination numbers \(i(C_n \square P_2)\) were computed in Repolusk and Žerovnik (2018), and hence the formula for \(\gamma _{ri2}(C_n)\) is known. We use this case as the first example to explain our method. The monograph is a single vertex, \(G_i=P_1\), and two consecutive monographs \(G_i \cup G_{i+1}\) form a path \(P_2 \square P_1 = P_2\). We order the two vertices \(v_1\), \(v_2\) of \(G_i \cup G_{i+1}\) and represent a partial 2RiDF by a tuple \((c_1,c_2)\) such that \(f(v_1) = c_1\) and \(f(v_2) = c_2\).

Recall that a partial 2RiDF must be defined on \(N(G_i \cup G_{i+1})\), and observe that there are exactly 6 possible restrictions of partial 2RiDFs to \(G_i \cup G_{i+1}\): (1, 2), (2, 1), (2, 0), (1, 0), (0, 2), and (0, 1) (note that (0, 0), (1, 1), and (2, 2) are not restrictions of any partial 2RiDFs). For brevity, we will often say that \((c_1,c_2)\) is a partial 2RiDF if it is clear that there is an extension \((\star ,c_1,c_2,\star )\) that is a partial 2RiDF.

The matrix (2) is in this case the following \(6\times 6\) matrix, where the rows and columns are indexed by the tuples associated to the restrictions of partial 2RiDFs, in lexicographical order \((0,1), (0,2), (1,0), (1,2), (2,0), (2,1)\):

$$\begin{aligned} A = \begin{pmatrix} \infty & \infty & 1 & 1 & \infty & \infty \\ \infty & \infty & \infty & \infty & 1 & 1 \\ \infty & 0 & \infty & \infty & \infty & \infty \\ \infty & \infty & \infty & \infty & 1 & 1 \\ 0 & \infty & \infty & \infty & \infty & \infty \\ \infty & \infty & 1 & 1 & \infty & \infty \end{pmatrix} \end{aligned}$$

It is clear that \(A_{(c_1,c_2),(d_1,d_2)} = \infty \) whenever \(c_2 \ne d_1\), since then the restrictions do not coincide on the common monograph, i.e., there is no arc in the associated digraph \({\mathcal {G}}\).

Furthermore, \(A_{(1,2),(2,1)} = 1\) since the coloring (1, 2, 1) is clearly (a restriction of) a partial 2RiDF of \(G_i \cup G_{i+1} \cup G_{i+2}= P_3\), and one color is used on the middle graph \(G_{i+1}\) (see Fig. 1a). Similarly, \(A_{(2,0),(0,1)} = 0\) (see Fig. 1b), and \(A_{(1,0),(0,1)} = \infty \) since (1, 0, 1) is not a partial 2RiDF (see Fig. 1c). More examples are provided below, where the computation of \(\gamma _{ri2}(C_{n} \square C_{3})\) is outlined.

Fig. 1 Visualization of matrix entries for \(C_n \square P_1\)

Straightforward computation shows that

$$\begin{aligned} A^{9} = A^5 + [2]_{i,j = 1}^6, \end{aligned}$$

where \([2]_{i,j = 1}^6\) is a \(6\times 6\) matrix with all entries equal to 2. It follows that

$$\begin{aligned} A^{k+4} = A^{k} + [2]_{i,j = 1}^6 \quad \text {for}\quad k \ge 5. \end{aligned}$$
(3)

We also compute the following traces:

$$\begin{aligned} {{\,\textrm{tr}\,}}(A^{3})&= 2, \; {{\,\textrm{tr}\,}}(A^{4}) = 2, \; {{\,\textrm{tr}\,}}(A^{5}) = 4, \; {{\,\textrm{tr}\,}}(A^{6}) = 4, \; {{\,\textrm{tr}\,}}(A^{7}) = 4, \; {{\,\textrm{tr}\,}}(A^{8}) = 4, \nonumber \\ {{\,\textrm{tr}\,}}(A^{9})&= 6. \end{aligned}$$
(4)
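These computations are easy to reproduce. The following self-contained C++ sketch, written for this exposition (it is not the program of Gabrovšek et al. 2022), rebuilds the \(6\times 6\) matrix from the rules described above and prints \({{\,\textrm{tr}\,}}(A^n)\) for \(n=3,\ldots ,9\), which can be compared with the traces in (4).

```cpp
// Self-contained sketch for the case C_n □ P_1 = C_n, written for this
// exposition: build the 6 x 6 matrix from the rules above and print tr(A^n).
#include <array>
#include <iostream>
#include <vector>

const long long INF = 1e15;                          // tropical infinity (no arc)
using Mat = std::vector<std::vector<long long>>;

Mat multiply(const Mat& A, const Mat& B) {           // min-plus product
    Mat C(A.size(), std::vector<long long>(B[0].size(), INF));
    for (size_t i = 0; i < A.size(); ++i)
        for (size_t k = 0; k < B.size(); ++k)
            for (size_t j = 0; j < B[0].size(); ++j)
                if (A[i][k] < INF && B[k][j] < INF)
                    C[i][j] = std::min(C[i][j], A[i][k] + B[k][j]);
    return C;
}

long long trace(const Mat& A) {                      // minimum diagonal entry
    long long t = INF;
    for (size_t i = 0; i < A.size(); ++i) t = std::min(t, A[i][i]);
    return t;
}

int main() {
    // The six restrictions of partial 2RiDFs to two consecutive vertices,
    // in lexicographic order: exactly the pairs (c1, c2) with c1 != c2.
    std::vector<std::array<int, 2>> states = {{0,1},{0,2},{1,0},{1,2},{2,0},{2,1}};

    // Arc (c1,c2) -> (d1,d2): the restrictions must coincide on the common
    // vertex (c2 == d1) and, if the middle vertex has value 0, its two
    // neighbors c1 and d2 must carry both colors 1 and 2.  The entry is the
    // number of colors used on the middle monograph (0 or 1).
    Mat A(6, std::vector<long long>(6, INF));
    for (int i = 0; i < 6; ++i)
        for (int j = 0; j < 6; ++j) {
            int c1 = states[i][0], c2 = states[i][1], d1 = states[j][0], d2 = states[j][1];
            if (c2 != d1) continue;
            if (c2 == 0 && !((c1 == 1 && d2 == 2) || (c1 == 2 && d2 == 1))) continue;
            A[i][j] = (c2 == 0) ? 0 : 1;
        }

    // gamma_ri2(C_n) = tr(A^n) by Corollary 3.3; compare with (4).
    Mat P = multiply(multiply(A, A), A);             // A^3
    for (int n = 3; n <= 9; ++n) {
        std::cout << "n = " << n << ":  tr(A^n) = " << trace(P) << '\n';
        P = multiply(P, A);
    }
}
```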

Theorem 4.1

For \(n \ge 3\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n}) = \left\{ \begin{array}{ll} \left\lceil \frac{n}{2} \right\rceil , &{} \quad n \equiv 0, 3 \bmod 4\\ &{} \\ \left\lceil \frac{n}{2} \right\rceil +1,&{} \quad n \equiv 1, 2 \bmod 4\\ \end{array} \right. \end{aligned}$$
(5)

Proof

It follows from the construction and Corollary 3.3 that

$$\begin{aligned} \gamma _{ri2}(C_{n}) = {{\,\textrm{tr}\,}}(A^n) \quad \text {for}\quad n \ge 3. \end{aligned}$$

Equation (3) converts to

$$\begin{aligned} \gamma _{ri2}(C_{n+4}) = \gamma _{ri2}(C_{n}) +2. \end{aligned}$$
(6)

For \(n = 5,6,7,8\) the theorem holds. By induction, we assume that it holds for n and will prove that it holds for \(n+4\).

If \(n \equiv 0 \bmod 4\) or \(n \equiv 3 \bmod 4\) we have from Eq. (6) and the induction hypothesis:

$$\begin{aligned} \gamma _{ri2}(C_{n+4} ) = \gamma _{ri2}(C_{n} ) +2 = \left\lceil \frac{n}{2} \right\rceil + 2 = \left\lceil \frac{n}{2} +2 \right\rceil = \left\lceil \frac{n+4}{2} \right\rceil . \end{aligned}$$

Similarly, if \(n \equiv 1 \bmod 4\) or \(n \equiv 2 \bmod 4\), we have:

$$\begin{aligned} \gamma _{ri2}(C_{n+4}) = \gamma _{ri2}(C_{n} ) +2 = \left\lceil \frac{n}{2} \right\rceil + 3 = \left\lceil \frac{n}{2} +2 \right\rceil +1 = \left\lceil \frac{n+4}{2} \right\rceil +1. \end{aligned}$$

Since (5) holds also for \(n=3\) and \(n=4\), the theorem holds for \(n \ge 3\). \(\square \)

The case \(\textbf{C}_{\textbf{n}}\square \textbf{P}_{\textbf{2}}\). For \(m=2\), the method explained in the previous case gives a \(26 \times 26\) matrix with the following properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 4, &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 4, &{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 5, &{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 6, &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 7, &{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 8, \\ {{\,\textrm{tr}\,}}(A^{9}) = 9, &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 10, &{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 11, &{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 12. &{} \quad &{} \quad \\ \end{array} \end{aligned}$$

and

$$\begin{aligned} {{\,\textrm{tr}\,}}(A^{n+4}) = {{\,\textrm{tr}\,}}(A^n) + 4 \quad \text {for}\quad n \ge 8. \end{aligned}$$

Reasoning along the same lines as in the proof of Theorem 4.1 results in the next theorem.

Theorem 4.2

For \(n \ge 4\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square P_{2}) = n. \end{aligned}$$

The case \(\textbf{C}_{\textbf{n}} \square \textbf{P}_{\textbf{3}}\). For \(m=3\) we obtain a \(112 \times 112\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 5&{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 6&{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 7&{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 8&{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 10&{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 11 \\ {{\,\textrm{tr}\,}}(A^{9}) = 13 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 14&{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 15&{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 16&{} \quad {{\,\textrm{tr}\,}}(A^{13}) = 18&{} \quad {{\,\textrm{tr}\,}}(A^{14}) = 19 \\ {{\,\textrm{tr}\,}}(A^{15}) = 21&{} \quad {{\,\textrm{tr}\,}}(A^{16}) = 22&{} \quad {{\,\textrm{tr}\,}}(A^{17}) = 23&{} \quad {{\,\textrm{tr}\,}}(A^{18}) = 24&{} \quad {{\,\textrm{tr}\,}}(A^{19}) = 26&{} \quad {{\,\textrm{tr}\,}}(A^{20}) = 27 \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+6}) = {{\,\textrm{tr}\,}}(A^n) + 8 \text { for } n \ge 14\). Since \(\gamma _{ri2}(C_{n} \square P_{3}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.3

For \(n \ge 3\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square P_{3}) = \left\{ \begin{array}{ll} \left\lceil \frac{4n}{3} \right\rceil , &{}\quad n \equiv 0, 1, 2, 4, 5 \bmod 6\\ &{} \\ \left\lceil \frac{4n}{3} \right\rceil +1,&{} \quad n \equiv 3 \bmod 6\\ \end{array} \right. \end{aligned}$$

The case \(\mathbf {C_n \square P_4}\). For \(m=4\) the matrix is a \(490 \times 490\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 8&{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 10 &{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 11 &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 14&{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 14 \\ {{\,\textrm{tr}\,}}(A^{9}) = 17 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 18&{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 20 &{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 22 &{} \quad {{\,\textrm{tr}\,}}(A^{13}) = 24&{} \quad {{\,\textrm{tr}\,}}(A^{14}) = 25 \\ {{\,\textrm{tr}\,}}(A^{15}) = 28 &{} \quad {{\,\textrm{tr}\,}}(A^{16}) = 28&{} \quad {{\,\textrm{tr}\,}}(A^{17}) = 31 &{} \quad {{\,\textrm{tr}\,}}(A^{18}) = 32 &{} \quad {{\,\textrm{tr}\,}}(A^{19}) = 34&{} \quad {{\,\textrm{tr}\,}}(A^{20}) = 36 \\ {{\,\textrm{tr}\,}}(A^{21}) = 38 &{} \quad {{\,\textrm{tr}\,}}(A^{22}) = 39&{} \quad {{\,\textrm{tr}\,}}(A^{23}) = 42 &{} \quad {{\,\textrm{tr}\,}}(A^{24}) = 42 &{} \quad {{\,\textrm{tr}\,}}(A^{25}) = 45&{} \quad {{\,\textrm{tr}\,}}(A^{26}) = 46 \\ {{\,\textrm{tr}\,}}(A^{27}) = 48 &{} \quad {{\,\textrm{tr}\,}}(A^{28}) = 50&{} \quad {{\,\textrm{tr}\,}}(A^{29}) = 52 &{} \quad {{\,\textrm{tr}\,}}(A^{30}) = 53 &{} \quad {{\,\textrm{tr}\,}}(A^{31}) = 56 &{} \quad \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+8}) = {{\,\textrm{tr}\,}}(A^n) + 14 \text { for } n \ge 23\). Since \(\gamma _{ri2}(C_{n} \square P_{4}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.4

For \(n \ge 3\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square P_{4}) = \left\{ \begin{array}{ll} \left\lceil \frac{7n}{4} \right\rceil , &{} \quad n \equiv 0,2,3,6 \bmod 8\\ &{} \\ \left\lceil \frac{7n}{4} \right\rceil +1,&{} \quad n \equiv 1,4,5,7 \bmod 8\\ \end{array} \right. \end{aligned}$$

The case \(\textbf{C}_{\textbf{n}}\square \textbf{P}_{\textbf{5}}\). For \(m=5\) the obtained matrix is a \(2148 \times 2148\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 7 &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 10 &{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 12 &{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 13 &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 16 &{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 18 \\ {{\,\textrm{tr}\,}}(A^{9}) = 20 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 22&{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 25 &{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 26 &{} \quad {{\,\textrm{tr}\,}}(A^{13}) = 29 &{} \quad {{\,\textrm{tr}\,}}(A^{14}) = 31 \\ {{\,\textrm{tr}\,}}(A^{15}) = 34 &{} \quad {{\,\textrm{tr}\,}}(A^{16}) = 36&{} \quad {{\,\textrm{tr}\,}}(A^{17}) = 38 &{} \quad {{\,\textrm{tr}\,}}(A^{18}) = 39 &{} \quad {{\,\textrm{tr}\,}}(A^{19}) = 42&{} \quad {{\,\textrm{tr}\,}}(A^{20}) = 44 \\ {{\,\textrm{tr}\,}}(A^{21}) = 47 &{} \quad {{\,\textrm{tr}\,}}(A^{22}) = 49&{} \quad {{\,\textrm{tr}\,}}(A^{23}) = 51 &{} \quad {{\,\textrm{tr}\,}}(A^{24}) = 52 &{} \quad {{\,\textrm{tr}\,}}(A^{25}) = 55&{} \quad {{\,\textrm{tr}\,}}(A^{26}) = 57 \\ {{\,\textrm{tr}\,}}(A^{27}) = 60 &{} \quad {{\,\textrm{tr}\,}}(A^{28}) = 62&{} \quad {{\,\textrm{tr}\,}}(A^{29}) = 64 &{} \quad {{\,\textrm{tr}\,}}(A^{30}) = 65 &{} \quad {{\,\textrm{tr}\,}}(A^{31}) = 69&{} \quad {{\,\textrm{tr}\,}}(A^{32}) = 70 \\ {{\,\textrm{tr}\,}}(A^{33}) = 73 &{} \quad {{\,\textrm{tr}\,}}(A^{34}) = 75&{} \quad {{\,\textrm{tr}\,}}(A^{35}) = 77 &{} \quad {{\,\textrm{tr}\,}}(A^{36}) = 78 &{} \quad {{\,\textrm{tr}\,}}(A^{37}) = 81&{} \quad {{\,\textrm{tr}\,}}(A^{38}) = 83 \\ {{\,\textrm{tr}\,}}(A^{39}) = 86 &{} \quad {{\,\textrm{tr}\,}}(A^{40}) = 88&{} \quad {{\,\textrm{tr}\,}}(A^{41}) = 90 &{} \quad {{\,\textrm{tr}\,}}(A^{42}) = 91 &{} \quad {{\,\textrm{tr}\,}}(A^{43}) = 95&{} \quad {{\,\textrm{tr}\,}}(A^{44}) = 96 \\ {{\,\textrm{tr}\,}}(A^{45}) = 99 &{} \quad &{} \quad &{} \quad &{} \quad &{} \quad \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+12}) = {{\,\textrm{tr}\,}}(A^n) + 26 \text { for } n \ge 33\). Since \(\gamma _{ri2}(C_{n} \square P_{5}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.5

For \(n \ge 20\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square P_{5}) = \left\{ \begin{array}{ll} \left\lceil \frac{13n}{6} \right\rceil , &{}\quad n \equiv 0,1,2,6,8 \bmod 12\\ &{} \\ \left\lceil \frac{13n}{6} \right\rceil +1,&{}\quad n \equiv 3,4,5,7,9,10,11 \bmod 12\\ \end{array} \right. \end{aligned}$$

4.2 Computations for graphs \(\textbf{C}_{\textbf{n}} \square \textbf{C}_{\textbf{m}}\)

The case \(\textbf{C}_{\textbf{n}}\square \textbf{C}_{\textbf{3}}\). In this case, the graph \(G_i \cup G_{i+1} = P_2 \square C_3\) has 54 partial 2RiDFs, which we again present as tuples: (0, 1, 0, 0, 0, 2), (0, 0, 1, 0, 2, 0), \(\ldots \). For example, a partial 2RiDF for the graph \(G_i \cup G_{i+1}\) in Fig. 2a is encoded as the vector (0, 1, 0, 1, 0, 2), whereas a partial 2RiDF for the graph \(G_{i+1} \cup G_{i+2}\) is encoded by (1, 0, 2, 0, 1, 0).

We obtain a \(54 \times 54\) matrix A as before. For example:

\(A_{(0,1,0,1,0,2), (1,0,2,0,1,0)} = 2\), since \(w(G_{i+1}) = 2\) (see Fig. 2a), and on the other hand, \(A_{(2,0,1,0,1,0), (0,1,0,2,0,0)} = \infty \), since (2, 0, 1, 0, 1, 0, 2, 0, 0) does not define a partial 2RiDF on \(G_{i} \cup G_{i+1} \cup G_{i+2} = P_3 \square C_3\) (see Fig. 2b).

Fig. 2 Visualization of matrix entries for \(C_n \square C_3\)

The obtained \(54 \times 54\) matrix has the following properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 5 &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 10 &{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 10\\ {{\,\textrm{tr}\,}}(A^{9}) = 11 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 12 &{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 12&{} \quad &{} \quad &{} \quad \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+6}) = {{\,\textrm{tr}\,}}(A^n) + 6 \text { for } n \ge 5\). Since \(\gamma _{ri2}(C_{n} \square C_{3}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.6

For \(n \ge 3\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square C_{3}) = \left\{ \begin{array}{ll} n, &{} \quad n \equiv 0 \bmod 6\\ &{} \\ n+1,&{} \quad n \equiv 5 \bmod 6\\ &{} \\ n+2,&{} \quad n \equiv 2,3,4 \bmod 6\\ &{} \\ n+3,&{} \quad n \equiv 1 \bmod 6\\ \end{array} \right. \end{aligned}$$

The case \(\textbf{C}_{\textbf{n}} \square \textbf{C}_{\textbf{4}}\). For \(m=4\) we obtain a \(470 \times 470\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 8 &{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 10&{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 10 &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 14 &{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 15\\ {{\,\textrm{tr}\,}}(A^{9}) = 17 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 18 &{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 20 &{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 20 &{} \quad {{\,\textrm{tr}\,}}(A^{13}) = 24 &{} \quad {{\,\textrm{tr}\,}}(A^{14}) = 25\\ {{\,\textrm{tr}\,}}(A^{15}) = 27 &{} \quad {{\,\textrm{tr}\,}}(A^{16}) = 28 &{} \quad {{\,\textrm{tr}\,}}(A^{17}) = 30&{} \quad {{\,\textrm{tr}\,}}(A^{18}) = 30 &{} \quad {{\,\textrm{tr}\,}}(A^{19}) = 34 &{} \quad {{\,\textrm{tr}\,}}(A^{20}) = 35 \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+6}) = {{\,\textrm{tr}\,}}(A^n) + 10 \text { for } n \ge 14\). Since \(\gamma _{ri2}(C_{n} \square C_{4}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.7

For \(n \ge 4\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square C_{4}) = \left\{ \begin{array}{ll} \left\lceil \frac{5n}{3} \right\rceil , &{}\quad n \equiv 0 \bmod 6\\ &{} \\ \left\lceil \frac{5n}{3} \right\rceil +1,&{}\quad n \equiv 2,4,5 \bmod 6\\ &{} \\ \left\lceil \frac{5n}{3} \right\rceil +2,&{} \quad n \equiv 1,3 \bmod 6\\ \end{array} \right. \end{aligned}$$

The case \(\textbf{C}_{\textbf{n}} \square \textbf{C}_{\textbf{5}}\). For \(m=5\) we obtain a \(1300 \times 1300\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A^{3}) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^{4}) = 10 &{} \quad {{\,\textrm{tr}\,}}(A^{5}) = 10&{} \quad {{\,\textrm{tr}\,}}(A^{6}) = 12 &{} \quad {{\,\textrm{tr}\,}}(A^{7}) = 15 &{} \quad {{\,\textrm{tr}\,}}(A^{8}) = 16\\ {{\,\textrm{tr}\,}}(A^{9}) = 18 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 20 &{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 22&{} \quad {{\,\textrm{tr}\,}}(A^{12}) = 24 &{} \quad {{\,\textrm{tr}\,}}(A^{13}) = 26 &{} \quad {{\,\textrm{tr}\,}}(A^{14}) = 28\\ {{\,\textrm{tr}\,}}(A^{15}) = 30 &{} \quad {{\,\textrm{tr}\,}}(A^{16}) = 32 &{} \quad {{\,\textrm{tr}\,}}(A^{17}) = 34&{} \quad {{\,\textrm{tr}\,}}(A^{18}) = 36 &{} \quad {{\,\textrm{tr}\,}}(A^{19}) = 38 &{} \quad {{\,\textrm{tr}\,}}(A^{20}) = 40\\ {{\,\textrm{tr}\,}}(A^{21}) = 42 &{} \quad {{\,\textrm{tr}\,}}(A^{22}) = 44 &{} \quad {{\,\textrm{tr}\,}}(A^{23}) = 46&{} \quad {{\,\textrm{tr}\,}}(A^{24}) = 48 &{} \quad &{} \quad \\ \end{array} \end{aligned}$$

and \({{\,\textrm{tr}\,}}(A^{n+10}) = {{\,\textrm{tr}\,}}(A^n) + 20 \text { for } n \ge 14\). Since \(\gamma _{ri2}(C_{n} \square C_{5}) = {{\,\textrm{tr}\,}}(A^{n})\), we have

Theorem 4.8

For \(n \ge 8\) it holds

$$\begin{aligned} \gamma _{ri2}(C_{n} \square C_{5}) = 2n. \end{aligned}$$

4.3 Computations for graphs P(n, 2)

The generalized Petersen graph P(n, 2) is a rotagraph \(\omega _{\ell }(G;X)\) for \(n=2\ell \) even and a nearly rotagraph

$$\begin{aligned} \Omega _{\ell }(G_1,G,\ldots ,G; X_1,X,\ldots ,X,X_{\ell }) \end{aligned}$$

for \(n=2\ell +1\) odd, as indicated in Fig. 3.

Fig. 3 Petersen graphs as a rotagraph and a nearly rotagraph

For n even, we proceed as in the cases \(C_n \square P_m\) and \(C_n \square C_m\), since P(n, 2) is a rotagraph. When n is odd, the argument is slightly more involved. In any case, we first need to construct the matrix that will allow computations regarding consecutive monographs that are isomorphic.

The matrix \(A=A(3)\) is a \(300 \times 300\) matrix with the properties:

$$\begin{aligned} \begin{array}{llllll} {{\,\textrm{tr}\,}}(A) = 4 &{} \quad {{\,\textrm{tr}\,}}(A^2) = 6 &{} \quad {{\,\textrm{tr}\,}}(A^3) = 7 &{} \quad {{\,\textrm{tr}\,}}(A^4) = 8 &{} \quad {{\,\textrm{tr}\,}}(A^5) = 8 &{} \quad {{\,\textrm{tr}\,}}(A^6) = 13 \\ {{\,\textrm{tr}\,}}(A^7) = 14 &{} \quad {{\,\textrm{tr}\,}}(A^8) = 15 &{} \quad {{\,\textrm{tr}\,}}(A^9) = 16 &{} \quad {{\,\textrm{tr}\,}}(A^{10}) = 16 &{} \quad {{\,\textrm{tr}\,}}(A^{11}) = 21 &{} \quad \\ \end{array} \end{aligned}$$

and it holds

$$\begin{aligned} {{\,\textrm{tr}\,}}(A^{\ell +5}) = {{\,\textrm{tr}\,}}(A^{\ell }) + 8 \text { for } \ell \ge 6. \end{aligned}$$
(7)

It follows from the construction and Corollary 3.3 that \(\gamma _{ri2}(P(2\ell ,2)) = {{\,\textrm{tr}\,}}(A^\ell )\).

For the case \(P(2\ell +1,2)\) we need the product \(A(\ell )A(1)A(2)\). However, we do not calculate the matrices \(A(\ell )\), A(1) and A(2) explicitly. For computational reasons we rather calculate a \(300 \times 300\) matrix \(A(1)'\) that satisfies \(A(\ell )A(1)A(2) = A A(1)' A \). It follows from the construction and Corollary 3.4 that

$$\begin{aligned} \gamma _{ri2}(P(2\ell +1,2)) = {{\,\textrm{tr}\,}}(A(\ell )A(1)A(2)A^{\ell -3}) ={{\,\textrm{tr}\,}}(A A(1)' AA^{\ell -3})={{\,\textrm{tr}\,}}( A(1)' A^{\ell -1}). \end{aligned}$$
(8)

We do not list the matrices A and \(A(1)'\) here, since they are too large (the code is available at Gabrovšek et al. 2022).
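For completeness, the evaluation of (8) is again a single min-plus computation. A sketch reusing the Mat, multiply and trace helpers from the sketch in Sect. 2.3, with the matrices A and \(A(1)'\) assumed to be given (e.g., produced by the program at Gabrovšek et al. 2022), could read:

```cpp
// Sketch of (8): gamma_ri2(P(2l+1, 2)) = tr(A(1)' A^(l-1)).  A and A1prime are
// the 300 x 300 min-plus matrices discussed above (how they are produced or
// loaded is omitted here); multiply and trace are the helpers from the
// Sect. 2.3 sketch.
long long gammaRi2OddPetersen(const Mat& A, const Mat& A1prime, int ell) {
    Mat P = A1prime;                                 // A(1)' A^0
    for (int i = 0; i < ell - 1; ++i)
        P = multiply(P, A);                          // A(1)' A^(ell-1)
    return trace(P);
}
```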

The computed traces are:

$$\begin{aligned} \begin{array}{llll} {{\,\textrm{tr}\,}}(A(1)'A^0) = 4, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^1) = 6, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^2) = 7, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^3) = 8, \\ {{\,\textrm{tr}\,}}(A(1)'A^4) = 12, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^5) = 13, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^6) = 14, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^7) = 15, \\ {{\,\textrm{tr}\,}}(A(1)'A^8) = 16, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^9) = 20, &{} \quad {{\,\textrm{tr}\,}}(A(1)'A^{10}) = 21. &{} \quad \\ \end{array} \end{aligned}$$

Thus we have \(\gamma _{ri2}( P(2\ell +1,2) )\) for \(\ell = 1,\ldots , 10\), or equivalently, for \(n=3,5,\ldots ,21\). We are ready to prove the last theorem of the article.

Theorem 4.9

For \(n \ge 3\) it holds

$$\begin{aligned} \gamma _{ri2}(P(n, 2)) = \left\{ \begin{array}{ll} \left\lceil \frac{4n}{5} \right\rceil , &{} \quad n \equiv 0, 9 \bmod 10\\ &{} \\ \left\lceil \frac{4n}{5} \right\rceil +1,&{} \quad n \equiv 7,8 \bmod 10\\ &{} \\ \left\lceil \frac{4n}{5} \right\rceil +2,&{} \quad n \equiv 3,4,5,6 \bmod 10\\ &{} \\ \left\lceil \frac{4n}{5} \right\rceil +3,&{} \quad n \equiv 1,2 \bmod 10\\ \end{array} \right. \end{aligned}$$

Proof

In the proof we only consider the case when n is odd, \(n=2\ell +1\), since the case when n is even is straightforward. It follows from (8) that

$$\begin{aligned} \gamma _{ri2}( P(2\ell +1,2) ) = {{\,\textrm{tr}\,}}(A(1)' A^{\ell -1}) \quad \text {for}\quad \ell \ge 1. \end{aligned}$$

Equation (7) converts to

$$\begin{aligned} \gamma _{ri2}( P(2\ell +11,2) ) = \gamma _{ri2}( P(2\ell +1,2) ) + 8 \quad \text {for}\quad \ell \ge 3. \end{aligned}$$
(9)

For \(\ell = 1,\ldots , 10\), or equivalently, \(n=3,5,\ldots ,21\), the theorem holds. By induction, we assume it holds for \(\ell \) and will prove that it holds also for \(\ell +5\).

If \(n \equiv 9 \bmod 10\), or equivalently, \(\ell \equiv 4 \bmod 5\), Eq. (9) and the induction hypothesis imply:

$$\begin{aligned} \gamma _{ri2}( P(2\ell +11,2) )&= \gamma _{ri2}( P(2\ell +1,2) ) + 8 = \left\lceil \frac{4 (2\ell +1)}{5} \right\rceil + 8\\&= \left\lceil \frac{4(2(\ell +5)+1)}{5} \right\rceil . \end{aligned}$$

The cases \(n \equiv 1, 3, 5, 7 \bmod 10\) are treated similarly (details are omitted). Since the formula also holds for \(n=3\), \(n=4\), and \(n=5\), the proof is complete. \(\square \)