In this section, we use difference triangle sets to construct LDPC convolutional codes over \({\mathbb {F}}_q\). The construction was provided for \((n,n-1)_q\) convolutional codes in [1]. Here, we generalize it to arbitrary n and k.
We will construct a sliding parity-check matrix H as in (1), whose kernel defines a convolutional code. Due to the block structure of H, it is enough to consider
$$\begin{aligned} {\mathcal {H}}:= H_{\mu }^c=\begin{bmatrix} H_0 &{} &{} &{} \\ H_1 &{} H_0 &{} &{} \\ \vdots &{} \vdots &{} \ddots &{} \\ H_{\mu } &{} H_{\mu -1} &{} \cdots &{} H_0 \end{bmatrix}, \end{aligned}$$
(3)
since H is then constructed by sliding it. It is easy to see that H contains a cycle of length \(2\ell \) not satisfying the FRC if and only if \({\mathcal {H}}\) does. Assuming that \(H_0\) is full rank, we can perform Gaussian elimination on the matrix
$$\begin{aligned} \begin{bmatrix} H_0\\ H_1\\ \vdots \\ H_{\mu } \end{bmatrix}, \end{aligned}$$
which results in the block matrix
$$\begin{aligned} {\bar{H}}=\begin{bmatrix} A_0 &{} | &{} I_{n-k} \\ A_1 &{} | &{} 0 \\ \vdots &{} &{} \vdots \\ A_{\mu } &{} | &{} 0 \end{bmatrix}, \end{aligned}$$
(4)
with \(A_i\in {\mathbb {F}}_q^{(n-k)\times k}\) for \(i=0,\ldots , \mu \). With a slight abuse of notation, we write \(H_0\) for \([A_0|I_{n-k}]\), and \(H_i\) for the matrices \([A_i | 0]\).
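For concreteness, the block-Toeplitz structure of (3) can be assembled mechanically from the blocks \(H_0,\ldots ,H_{\mu }\). The following is a minimal Python sketch (the function name and the list-of-lists representation are our own choices, not part of the construction):

```python
def truncated_sliding_matrix(blocks, n, n_minus_k):
    """Assemble the truncated sliding matrix of Eq. (3): block (i, j)
    equals H_{i-j} for j <= i and is zero otherwise.

    `blocks` is the list [H_0, ..., H_mu]; each H_t is an
    (n-k) x n list of lists over the chosen field.
    """
    mu = len(blocks) - 1
    rows, cols = (mu + 1) * n_minus_k, (mu + 1) * n
    M = [[0] * cols for _ in range(rows)]
    for i in range(mu + 1):            # block row
        for j in range(i + 1):         # block column (lower triangular part)
            B = blocks[i - j]
            for r in range(n_minus_k):
                for c in range(n):
                    M[i * n_minus_k + r][j * n + c] = B[r][c]
    return M
```

Sliding the same blocks further to the right yields the full parity-check matrix H of (1).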
Remark 1
If we define the matrix \({\tilde{H}}(z)=\sum _{i=0}^{\mu }A_{i}z^i\in {\mathbb {F}}_q[z]^{(n-k)\times k}\), then we obtain that \( H(z)=[{\tilde{H}}(z)\ I_{n-k}]\) and hence H(z) has a polynomial right inverse, i.e. H(z) is basic.
Given \(n\in {\mathbb {N}}\), the following definition describes how to construct the above-mentioned matrix \({\bar{H}}\) from a (k, w)-wDTS, which then defines an \((n,k)_q\) convolutional code.
Definition 2
Let k, n be positive integers with \(n>k\) and \({\mathcal {T}}:=\{T_1, \ldots , T_{k}\}\) be a (k, w)-wDTS with scope \(m({\mathcal {T}})\). Set \(\mu =\left\lceil \frac{m({\mathcal {T}})}{n-k}\right\rceil -1\) and define the matrix \({\bar{H}}\in {\mathbb {F}}_q^{(\mu +1)(n-k)\times n}\), in which the lth column has weight w and support \(T_l\), i.e. for any \(1\le i \le (\mu +1)(n-k)\) and \(1\le l\le k\), \({\bar{H}}_{i,l}\ne 0\) if and only if \(i \in T_l\). We say that \({\bar{H}}\) has support \({\mathcal {T}}\). The last \(n-k\) columns of \({\bar{H}}\) are given by \([I_{n-k},0_{n-k},\ldots , 0_{n-k}]^\top \). Derive the matrix \({\mathcal {H}}\) by “shifting” the columns of \({\bar{H}}\) by multiples of \(n-k\) and then build a sliding matrix H of the form of Eq. (1). Finally, define \({\mathcal {C}}:= \ker ({\mathcal {H}})\) over \({\mathbb {F}}_q\).
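A small Python sketch of Definition 2 (our code; the nonzero entries are recorded as 1, since the definition only fixes the support):

```python
import math

def support_matrix(T, n):
    """Matrix \\bar{H} of Definition 2 (nonzero entries recorded as 1):
    column l (1 <= l <= k) has support T_l, and the last n-k columns
    carry the block [I_{n-k}; 0; ...; 0]."""
    k = len(T)
    nk = n - k
    scope = max(max(Tl) for Tl in T)           # scope m(T)
    mu = math.ceil(scope / nk) - 1
    H = [[0] * n for _ in range((mu + 1) * nk)]
    for l, Tl in enumerate(T):
        for i in Tl:                           # rows are 1-indexed in the paper
            H[i - 1][l] = 1
    for c in range(nk):                        # identity block on top
        H[c][k + c] = 1
    return H
```

For \(n=3\), \(k=1\) and \(T_1=\{1,2,3\}\) this returns the \(4\times 3\) matrix with first column \((1,1,1,0)^\top \) and the identity block in the top rows of the last two columns.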
Observe that if \(k=n-1\), we simply get the construction provided in [1, Definition 4].
Proposition 1
Let n, k, w be positive integers with \(n>k\), \({\mathcal {T}}\) be a (k, w)-wDTS with scope \(m({\mathcal {T}})\) and set \(\mu =\left\lceil \frac{m({\mathcal {T}})}{n-k}\right\rceil -1\). If \({\bar{H}}\) has support \({\mathcal {T}}\), then the corresponding code is an \((n,k,\delta )\) convolutional code with \(\mu \le \delta \le \mu (n-k)\). Moreover \(H_{\mu }\) is full rank if and only if \(\delta =\mu (n-k)\).
Proof
As the matrix H(z) defined in Remark 1 is basic, \(\delta \) is the maximal degree of the full-size minors of H(z), which is clearly upper bounded by \(\mu (n-k)\). Moreover, any minor formed by a column of degree \(\mu \) and suitable columns of the systematic part of H(z) has degree \(\mu \), which proves the lower bound.
If \(H_{\mu }\) is full rank, it is equal to \([H]_{hr}\), and H is reduced. Hence, \(\delta \) is equal to the sum of the \(n-k\) row degrees that are all equal to \(\mu \), i.e. \(\delta =\mu (n-k)\). If \(H_{\mu }\) is not full rank, there are two possible cases. First, if \(H_{\mu }\) contains no all-zero row, then \([H]_{hr}=H_{\mu }\) is not full rank, and hence \(\delta \) is strictly smaller than the sum of the row degrees which is \(\mu (n-k)\). Second, if \(H_{\mu }\) contains a row of zeros, then the sum of the row degrees of H is strictly smaller than \(\mu (n-k)\) and thus, also \(\delta \) is strictly smaller than \(\mu (n-k)\). \(\square \)
Remark 2
If \(k<n-k\), i.e. the rate of the code is smaller than 1/2, then (4) implies that \(H_{\mu }\) cannot be full rank. Moreover, in this case, \([H]_{hr}\) can only be full rank if at least \(n-2k\) row degrees of H are zero.
Proposition 2
Let n, k, w be positive integers with \(n>k\) and \({\mathcal {T}}\) be a (k, w)-wDTS. Assume \({\bar{H}}\) has support \({\mathcal {T}}\) and consider the convolutional code \({\mathcal {C}}\) constructed as the kernel of the sliding parity-check matrix corresponding to \({\bar{H}}\). If N is the maximal codeword length, i.e. for any codeword \(v(z)\in {\mathcal {C}}\), \(\deg (v)+1\le N/n\), then the sliding parity-check matrix corresponding to \({\bar{H}}\) has density
$$\begin{aligned} \frac{wk+n-k}{(n-k)(\mu n+N)}. \end{aligned}$$
Proof
To compute the density of a matrix, one has to divide the number of nonzero entries by the total number of entries. The result follows immediately. \(\square \)
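The count can be made explicit. The sketch below (our notation) compares the closed form with a direct count, assuming each of the N/n block columns contributes a full copy of \({\bar{H}}\) with \(wk+n-k\) nonzero entries:

```python
from fractions import Fraction

def density_formula(n, k, w, mu, N):
    """Density of the sliding parity-check matrix per Proposition 2."""
    return Fraction(w * k + n - k, (n - k) * (mu * n + N))

def density_by_counting(n, k, w, mu, N):
    """Direct count: N/n block columns with w*k + (n-k) nonzeros each,
    inside a sliding matrix with (N/n + mu)(n-k) rows and N columns."""
    blocks = N // n
    nonzeros = blocks * (w * k + n - k)
    entries = (blocks + mu) * (n - k) * N
    return Fraction(nonzeros, entries)
```

The common factor N/n cancels between the nonzero count and the total number of entries, which is how the closed form arises.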
Theorem 2
Let \({\mathcal {C}}\) be an (n, k) convolutional code with parity-check matrix H. Assume that all the columns of \(\begin{bmatrix} A_0^\top&\cdots&A_{\mu }^\top \end{bmatrix}^\top \) defined as in (4) have weight w and denote by \(w_j\) the minimal column weight of \(\begin{bmatrix} A_0^\top&\cdots&A_{j}^\top \end{bmatrix}^\top \). For \(I\subset \{1,\ldots ,(n-k)(\mu +1)\}\) and \(J\subset \{1,\ldots ,n(\mu +1)\}\) we define \([{\mathcal {H}}]_{I;J}\) as the submatrix of \({\mathcal {H}}\) with row indices I and column indices J. Assume that, for some \({\tilde{w}}\le w\) and all I, J with \(|J|\le |I|\le {\tilde{w}}\), \(j_1:=\min (J)\le k\) and I containing the indices where column \(j_1\) is nonzero, the first column of \([{\mathcal {H}}]_{I;J}\) is not contained in the span of the other columns of \([{\mathcal {H}}]_{I;J}\). Then
-
(i)
\({\tilde{w}}+1\le \mathrm {d_{free}}({\mathcal {C}})\le w+1\),
-
(ii)
\(\min (w_j,{\tilde{w}})+1\le d_j^c({\mathcal {C}})\le w_j+1\).
Proof
(i) Without loss of generality, we can assume that the first entry in the first row of \(H_0\) is nonzero. Denote the first column of \({\mathcal {H}}\) by \([h_{1,1},\ldots ,h_{1,(n-k)(\mu +1)}]^\top \). Then, \(v(z)=\sum _{i=0}^r v_iz^i\) with
$$\begin{aligned} v_0&=[1\ 0\cdots 0\ -h_{1,1}\cdots \ -h_{1,(n-k)}]\quad \text { and } \\ v_i&=[0\ 0\cdots 0\ -h_{1,(n-k)i+1}\cdots \ -h_{1,(n-k)(i+1)}], \end{aligned}$$
for \(i\ge 1\), is a codeword with \(\mathrm {wt}(v(z))=w+1\), as the weight of the first column of \({\mathcal {H}}\) is equal to w. Hence \(\mathrm {d_{free}}\le w+1\).
Assume by contradiction that there exists a codeword \(v(z)\ne 0\) with weight \(d\le {\tilde{w}}\). We can assume that \(v_0\ne 0\), i.e. there exists \(i\in \{1,\ldots ,n\}\) with \(v_{0,i}\ne 0\). We know that \({\mathcal {H}} v^{\top }=0\) and from (4) we obtain that there exists \(j\in \{1,\ldots ,n\}\) with \(j\ne i\) and \(v_{0,j}\ne 0\) and we can assume that \(i\le k\).
Now, we consider the homogeneous system of linear equations given by \({\mathcal {H}} v^{\top }=0\) and we only take the rows, i.e. equations, where column i of \({\mathcal {H}}\) has nonzero entries. Moreover, we define \({\tilde{v}}\in {\mathbb {F}}^d\) as the vector consisting of the nonzero components of \(v_0, v_1, \ldots , v_{\deg (v)}\). We end up with a system of equations of the form \([{\mathcal {H}}]_{I;J}{\tilde{v}}^{\top }=0\) where \([{\mathcal {H}}]_{I;J}\) fulfills the assumptions stated in the theorem. But this is a contradiction as \({\tilde{v}}^{\top }\) has all components nonzero and therefore \([{\mathcal {H}}]_{I;J}{\tilde{v}}^{\top }=0\) implies that the first column of \([{\mathcal {H}}]_{I;J}\) is contained in the span of the other columns of this matrix.
(ii) The result follows from Theorem 1 by reasoning analogous to part (i). \(\square \)
Remark 3
With the assumptions of Theorem 2, if \({\tilde{w}}=w\), one has \(d_j^c=\mathrm {d_{free}}\) for \(j\ge \mu \). Moreover, if \({\bar{H}}\) has support \({\mathcal {T}}\), one achieves higher column distances (especially for small j) if the elements of \({\mathcal {T}}\) are small.
Corollary 1
If \({\mathcal {T}}\) is a (k, w)-DTS and \({\mathcal {C}}\) is an (n, k) convolutional code constructed from \({\mathcal {T}}\) as in Definition 2, then one has that:
-
(i)
\(\mathrm {d_{free}}({\mathcal {C}})= w+1\),
-
(ii)
\( d_j^c({\mathcal {C}})= w_j+1\).
Proof
As already mentioned in [27], matrices \({\mathcal {H}}\) constructed from a DTS have the property that for every pair of columns, their supports intersect at most once. Since \([{\mathcal {H}}]_{I;J}\) as defined in Theorem 2 has the property that all entries in the first column are non-zero, all other columns have at most one non-zero entry. But this implies that the first column cannot be in the span of the other columns and thus, the requirements of Theorem 2 are fulfilled for \({\tilde{w}}=w\), which proves the corollary. \(\square \)
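The pairwise-intersection property can be checked directly. Below is a sketch (function name and example DTS are ours): \(\{1,2,5\}\), \(\{1,7,16\}\) form a (2, 3)-DTS, since the six differences 1, 3, 4, 6, 9, 15 are pairwise distinct.

```python
import math

def column_supports(T, n):
    """Supports of the columns of the matrix \\mathcal{H} built from T
    as in Definition 2: block j holds the columns of \\bar{H} shifted
    down by j*(n-k) rows (1-indexed, truncated to (mu+1)(n-k) rows)."""
    k, nk = len(T), n - len(T)
    mu = math.ceil(max(max(Tl) for Tl in T) / nk) - 1
    rows = (mu + 1) * nk
    cols = []
    for j in range(mu + 1):
        for Tl in T:
            cols.append({i + j * nk for i in Tl if i + j * nk <= rows})
        for c in range(1, nk + 1):            # systematic (weight-1) columns
            cols.append({j * nk + c} if j * nk + c <= rows else set())
    return cols

# a (2,3)-DTS (our example): all six differences are distinct
cols = column_supports([{1, 2, 5}, {1, 7, 16}], n=3)
max_overlap = max(len(a & b) for x, a in enumerate(cols) for b in cols[x + 1:])
```

Since all differences of the DTS are distinct, no two column supports can share two positions, so `max_overlap` is at most 1.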
Remark 4
If \(n-k>1\), a DTS is not necessary to ensure that any two columns of \({\mathcal {H}}\) intersect at most once, since one only has to consider shifts of columns by multiples of \(n-k\). It suffices to consider a set \({\mathcal {T}}= \{T_1,\ldots , T_k\}\) such that all the differences \(a_{i_1,j_1}-a_{i_1,s_1}\) and \(a_{i_2,j_2}-a_{i_2,s_2}\) with \(i_1\ne i_2\) are distinct, i.e. two differences coming from different triangles of \({\mathcal {T}}\) always have to be distinct, whereas \(a_{i,j_1}-a_{i,s_1}\) and \(a_{i,j_2}-a_{i,s_2}\), i.e. differences coming from the same triangle, only have to be distinct if \((n-k)\mid (a_{i,j_1}-a_{i,j_2})\).
Example 3
Consider \(n=3\), \(k=1\) and \(T_1=\{1,2,3\}\). It holds \(2-1=3-2\) but since \(3-2\) is not divisible by \(n-k=2\), this does not matter and we still get that all columns of \({\mathcal {H}}\) intersect at most once. For example for \(\mu =1\), we get
$$\begin{aligned} {\mathcal {H}}=\left[ \begin{matrix}1 &{} 1&{} 0 &{}0 &{} 0 &{} 0\\ 1 &{} 0 &{} 1 &{} 0 &{} 0 &{} 0\\ 1 &{} 0 &{} 0 &{} 1 &{} 1 &{} 0\\ 0 &{} 0 &{} 0 &{} 1 &{} 0 &{} 1\end{matrix}\right] . \end{aligned}$$
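This small example can be reproduced programmatically. The sketch below (our code) builds \({\mathcal {H}}\) over \({\mathbb {F}}_2\) following Definition 2 and also checks the weight-\((w+1)\) codeword used in the proof of Theorem 2:

```python
import math

def sliding_matrix(T, n):
    """Matrix \\mathcal{H} of Definition 2 over F_2: column l of block j
    is column l of \\bar{H} shifted down by j*(n-k) rows (truncated)."""
    k, nk = len(T), n - len(T)
    mu = math.ceil(max(max(Tl) for Tl in T) / nk) - 1
    rows = (mu + 1) * nk
    H = [[0] * ((mu + 1) * n) for _ in range(rows)]
    for j in range(mu + 1):
        off = j * nk
        for l, Tl in enumerate(T):
            for i in Tl:
                if i + off <= rows:
                    H[i + off - 1][j * n + l] = 1
        for c in range(nk):                    # shifted identity columns
            if off + c < rows:
                H[off + c][j * n + k + c] = 1
    return H

H = sliding_matrix([{1, 2, 3}], n=3)           # the matrix of this example
# weight-(w+1) codeword from the proof of Theorem 2 (here w = 3):
v = [1, 1, 1, 0, 1, 0]
syndrome = [sum(h * x for h, x in zip(row, v)) % 2 for row in H]
```

The returned matrix coincides with the \({\mathcal {H}}\) displayed above; the syndrome is zero while \(\mathrm {wt}(v)=4=w+1\).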
From Corollary 1 we know that if we use a DTS to construct the parity-check matrix of the code, then the values of the nonzero entries are not important to achieve good distance properties. In the following, we present a construction that also achieves quite large distances if one takes the sets in a wDTS as support sets for the columns of the non-systematic part of \({\bar{H}}\). Moreover, in Sect. 4, we show that this construction ensures that the Tanner graph associated to H is free from cycles of arbitrary length not satisfying the FRC if the size of the underlying field is sufficiently large and the wDTS fulfills some additional properties.
Definition 3
Let k, n be positive integers with \(n>k\) and \({\mathcal {T}}:=\{T_1, \ldots , T_{k}\}\) be a (k, w)-wDTS with scope \(m({\mathcal {T}})\). Set \(\mu =\left\lceil \frac{m({\mathcal {T}})}{n-k}\right\rceil -1\) and let \(\alpha \) be a primitive element for \({\mathbb {F}}_q\), so that every non-zero element of \({\mathbb {F}}_q\) can be written as power of \(\alpha \). For any \(1\le i \le (\mu +1)(n-k)\), \(1\le l\le k\), define
$$\begin{aligned} {\bar{H}}^{\mathcal {T}}_{i,l} := {\left\{ \begin{array}{ll}\alpha ^{il} &{} \text { if } i \in T_l \\ 0 &{} \text { otherwise} \end{array}\right. }. \end{aligned}$$
Obtain the matrix \({\mathcal {H}}^{\mathcal {T}}\) by “shifting” the columns of \({\bar{H}}^{\mathcal {T}}\) by multiples of \(n-k\) and then build a sliding matrix \(H^{\mathcal {T}}\) of the form of Eq. (1). Finally, define \({\mathcal {C}}^{\mathcal {T}}:= \ker ({\mathcal {H}}^{\mathcal {T}})\) over \({\mathbb {F}}_q\).
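The entry rule of Definition 3 can be sketched by recording exponents of \(\alpha \) (a representation choice of ours; `None` marks a zero entry and exponent 0 stands for the field element 1):

```python
import math

def wdts_exponent_matrix(T, n):
    """Nonzero pattern of \\bar{H}^T in Definition 3, as exponents of
    the primitive element alpha: entry (i, l) of the first k columns
    is alpha^(i*l) when i lies in T_l; the last n-k columns carry the
    block [I_{n-k}; 0], i.e. the element 1 = alpha^0."""
    k, nk = len(T), n - len(T)
    mu = math.ceil(max(max(Tl) for Tl in T) / nk) - 1
    E = [[None] * n for _ in range((mu + 1) * nk)]
    for l, Tl in enumerate(T, start=1):
        for i in Tl:
            E[i - 1][l - 1] = i * l            # exponent of alpha
    for c in range(nk):
        E[c][k + c] = 0
    return E
```

For the (2, 3)-wDTS \(T_1=\{1,2,6\}\), \(T_2=\{1,2,4\}\) this reproduces the exponent pattern of the matrix \({\bar{H}}^{\mathcal {T}}\) in Example 4 below (in the field, exponents are of course reduced modulo \(q-1\)).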
Example 4
Let \({\mathbb {F}}_q:=\{0,1,\alpha , \ldots , \alpha ^{q-2}\}\) and \({\mathcal {T}}\) be a (2, 3)-wDTS, such that \(T_1:=\{1,2,6\}\) and \(T_2:=\{1,2,4\}\). Then, with the notation above,
$$\begin{aligned} {\bar{H}}^{\mathcal {T}}= \begin{bmatrix} \alpha &{} \alpha ^{2} &{} 1 \\ \alpha ^{2} &{} \alpha ^{4} &{} 0 \\ 0 &{} 0 &{} 0 \\ 0 &{} \alpha ^8 &{} 0 \\ 0 &{} 0 &{} 0 \\ \alpha ^6 &{} 0 &{} 0 \end{bmatrix}, \end{aligned}$$
which leads to the following sliding matrix.
$$\begin{aligned} {\mathcal {H}}^{\mathcal {T}}= \left[ \begin{array}{cccccccccccccccccc} \alpha &{} \alpha ^{2} &{} 1 &{} &{} &{} &{} &{} &{} &{} &{} \\ \alpha ^{2} &{} \alpha ^{4} &{} 0 &{} \alpha &{} \alpha ^2 &{} 1 &{} &{} &{} &{} &{} &{} &{} &{} &{} &{} &{} \\ 0 &{} 0 &{} 0 &{} \alpha ^{2} &{} \alpha ^{4} &{} 0 &{} \alpha &{} \alpha ^2 &{} 1 &{} &{} &{} &{} &{} &{} &{} &{} \\ 0 &{} \alpha ^8 &{} 0 &{} 0 &{} 0 &{} 0 &{} \alpha ^{2} &{} \alpha ^{4} &{} 0 &{} \alpha &{} \alpha ^2 &{} 1 &{} &{} &{} &{} &{} \\ 0 &{} 0 &{} 0 &{} 0 &{} \alpha ^8 &{} 0&{} 0 &{} 0 &{} 0 &{} \alpha ^{2} &{} \alpha ^{4} &{} 0 &{} \alpha &{} \alpha ^2 &{} 1 &{} &{} &{} \\ \alpha ^6 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} \alpha ^8 &{} 0 &{} 0 &{} 0 &{} 0 &{} \alpha ^{2} &{} \alpha ^{4} &{} 0 &{} \alpha &{} \alpha ^2 &{} 1 \\ \end{array}\right] . \end{aligned}$$
The code constructed here is a \((3,2)_q\) convolutional code. In this example, one has \(d_0^c=2\), \(d_1^c=d_2^c=d_3^c=d_4^c=3\) and \(d_5^c=\mathrm {d_{free}}=4\).
The next theorem is a generalization of [1, Theorem 12] to any rate.
Theorem 3
Let w, n, k be positive integers with \(n>k\) and \({\mathcal {T}}\) be a (k, w)-wDTS with scope \(m({\mathcal {T}})\) and \(q>(\mu +1)(n-k)(k-1)+1=\lceil \frac{m({\mathcal {T}})}{n-k}\rceil (n-k)(k-1)+1\). Let \({\mathcal {C}}^{\mathcal {T}}\) be the \((n,k)_q\) convolutional code defined from \({\mathcal {T}}\), as defined in Definition 3 and consider \({\mathcal {H}}^{\mathcal {T}}\) as in (3). Then, all the \(2\times 2\) minors in \({\mathcal {H}}^{\mathcal {T}}\) that are not trivially zero are non-zero.
Proof
The only \(2\times 2\) minors to check are the ones of the form \(\begin{vmatrix}a_1&a_2\\ a_3&a_4 \end{vmatrix}\). By definition of wDTS, the support of any column of \({\mathcal {H}}^{\mathcal {T}}\) intersects the support of its shift at most once. This ensures that the columns of all these minors are shifts of two different columns of \({\bar{H}}^{\mathcal {T}}\). Moreover, all the elements in the minor are powers of \(\alpha \). In particular, let \(1\le i,r \le (\mu +1)(n-k)\), \(1\le j,\ell \le k\) (note that \(j<\ell \) or \(\ell <j\) according to which columns of \({\bar{H}}^{\mathcal {T}}\) are involved in the shifts). Hence we have that:
$$\begin{aligned}&\begin{vmatrix}a_1&a_2\\ a_3&a_4 \end{vmatrix} = \begin{vmatrix}\alpha ^{ij}&\alpha ^{m\ell }\\ \alpha ^{(i+r)j}&\alpha ^{(m+r)\ell } \end{vmatrix} = \\&\alpha ^{ij}\alpha ^{(m+r)\ell } - \alpha ^{m\ell }\alpha ^{(i+r)j} = \alpha ^{ij + m\ell }(\alpha ^{r\ell }-\alpha ^{rj}) \end{aligned}$$
which is 0 if and only if \(r\ell \equiv rj \mod (q-1)\). Since it holds that \(1\le j < \ell \le k\) or \(1\le \ell < j \le k\) and \(1\le r \le (\mu +1)(n-k)\), we have \(0<|r(\ell -j)|\le (\mu +1)(n-k)(k-1)<q-1\), so this cannot happen. \(\square \)
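The impossibility of \(r\ell \equiv rj \mod (q-1)\) in the admissible ranges can also be confirmed by brute force. A sketch under the stated bound on q (the parameter choices are illustrative):

```python
def minors_2x2_nonzero(n, k, mu, q):
    """Check the nonvanishing condition from the proof of Theorem 3:
    r*l - r*j is never divisible by q-1 for admissible shifts r and
    distinct column indices j, l."""
    for r in range(1, (mu + 1) * (n - k) + 1):
        for j in range(1, k + 1):
            for l in range(1, k + 1):
                if j != l and (r * l - r * j) % (q - 1) == 0:
                    return False
    return True

# q = 8 satisfies q > (mu+1)(n-k)(k-1) + 1 = 7, while q = 7 does not
ok, not_ok = minors_2x2_nonzero(3, 2, 5, 8), minors_2x2_nonzero(3, 2, 5, 7)
```

The second call illustrates that the field-size bound is needed: for \(q=7\) the shift \(r=6\) gives \(r(\ell -j)\equiv 0 \mod 6\).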
The following theorem is a generalization of [1, Theorem 13] for any rate. However, in the proof in [1] there is a computation mistake, hence we put the correct version below.
Theorem 4
Let w, n, k be positive integers with \(n>k\) and \({\mathcal {T}}\) be a (k, w)-wDTS with scope \(m({\mathcal {T}})\), \(w\ge 3\). Let \({\mathcal {C}}^{\mathcal {T}}\) be the \((n,k)_q\) convolutional code defined from \({\mathcal {T}}\), as in Definition 3 with \({\mathcal {H}}^{\mathcal {T}}\) as defined in (3) and assume that \((\mu +1)(n-k)>2\). Assume also that \(q=p^N\), where \(p>2\) and
$$\begin{aligned} N>(\mu +1) (n-k)(k-1)= \Big \lceil \frac{m({\mathcal {T}})}{n-k}\Big \rceil (n-k)(k-1). \end{aligned}$$
Then, all the \(3\times 3\) minors in \({\mathcal {H}}^{\mathcal {T}}\) that are not trivially zero are non-zero.
Proof
We need to distinguish different cases.
Case I The \(3\times 3\) minors are of the form
$$\begin{aligned} \begin{vmatrix}a_1&a_2&a_3\\ a_4&a_5&a_6 \\ a_7&a_8&a_9 \end{vmatrix}, \end{aligned}$$
with \(a_i \ne 0\) for any i. As we observed in Theorem 3, in this case all the columns are shifts of three different columns from \({\bar{H}}^{\mathcal {T}}\), since each column can intersect any of its shifts at most once. Observe that we can write a minor of this form as
$$\begin{aligned} \begin{vmatrix}a_1&a_2&a_3\\ a_4&a_5&a_6 \\ a_7&a_8&a_9 \end{vmatrix} = \begin{vmatrix} \alpha ^{ij}&\alpha ^{lu}&\alpha ^{tm} \\ \alpha ^{(i+r)j}&\alpha ^{(l+r)u}&\alpha ^{(t+r)m}\\ \alpha ^{(i+r+s)j}&\alpha ^{(l+r+s)u}&\alpha ^{(t+r+s)m}\\ \end{vmatrix}, \end{aligned}$$
where \(1\le i,l,t \le (\mu +1)(n-k)\), \(r,s \in {\mathbb {Z}}\) are possibly negative, with \(r\ne s\), and \(1\le j,u,m\le k\) representing the index of the column from which the selected element comes from (or if the selected elements belongs to the shift of some column, j, u, m are still the indexes of the original column). Due to symmetry in this case we can assume \(r,s\in {\mathbb {N}}\) and \(1\le i,l,t \le (\mu +1)(n-k)-3\). Moreover, \(-(\mu +1)(n-k) +1\le i+r, l+r, t+r\le (\mu +1)(n-k) - 1\) and \(-(\mu +1)(n-k)\le i+r+s, l+r+s, t+r+s \le (\mu +1)(n-k)\). This determinant is 0 if and only if
$$\begin{aligned} \alpha ^{ru+rm+sm} + \alpha ^{rm+rj+sj}+ \alpha ^{rj+ru+su}=\alpha ^{ru+rj+sj}+ \alpha ^{rj+rm+sm}+ \alpha ^{ru+rm+su}. \end{aligned}$$
(5)
Without loss of generality we can assume that \(j<u<m\) and it turns out that the maximum exponent in Eq. (5) is \(ru+rm+sm\) while the minimum is \(ru + rj + sj\). Let \(M:=ru+rm+sm - (ru + rj + sj)\). It is not difficult to see that the maximum value for M is \(((\mu +1)(n-k)-1)(k-1)\), hence this determinant cannot be zero because \(\alpha \) is a primitive element for \({\mathbb {F}}_q\) and, by assumption, \(q=p^N\), where \(N>M\).
Case II The \(3\times 3\) minors are of the form
$$\begin{aligned} \begin{vmatrix}a_1&a_2&0\\ a_3&a_4&a_5\\ a_6&0&a_7 \end{vmatrix}. \end{aligned}$$
As in the first case, we can assume that the minor is given by
$$\begin{aligned} \begin{vmatrix} \alpha ^{ij}&\alpha ^{lu}&0 \\ \alpha ^{(i+r)j}&\alpha ^{(l+r)u}&\alpha ^{(t+r)m}\\ \alpha ^{(i+r+s)j}&0&\alpha ^{(t+r+s)m}\\ \end{vmatrix}, \end{aligned}$$
with the same bounds on the variables as before. In this case, \(j\ne u\) and \(j\ne m\), but u can be equal to m. Indeed, the first column intersects the other two in two places, which means that they are not all shifts of the same column. However, the second and third ones can belong to the same column. This determinant is 0 when \(\alpha ^{ru+sm}+\alpha ^{rj+sj}- \alpha ^{rj+sm}=0\). In this case, according to the different possibilities for j, u, m and r, s we check the maximum and the minimum exponent. We present here only the worst case for the field size, which is obtained when \(j<u<m\), \(r<0\). We see that the minimum exponent is \(rj+sj\) and the maximum is \(rj+sm\). We consider \(M:=rj+sm-(rj+sj)\) and check the maximum value that M can reach. It is not difficult to see that this is \((\mu +1)(n-k)(k-1)\). Since \(q=p^N\), with \(N>M\), the considered determinant is never 0.
Case III The \(3\times 3\) minors are of the form
$$\begin{aligned} \begin{vmatrix}a_1&a_2&a_3\\ a_4&a_5&a_6 \\ a_7&a_8&0 \end{vmatrix}, \end{aligned}$$
with \(a_i \ne 0\) for any i. We can assume that, the minor is given by
$$\begin{aligned} \begin{vmatrix} \alpha ^{ij}&\alpha ^{lu}&\alpha ^{tm} \\ \alpha ^{(i+r)j}&\alpha ^{(l+r)u}&\alpha ^{(t+r)m}\\ \alpha ^{(i+r+s)j}&\alpha ^{(l+r+s)u}&0 \\ \end{vmatrix}, \end{aligned}$$
with the same bounds on the variables as in previous cases. However, this time \(1\le j<u<m\le k\). After some straightforward computations, we get that this determinant is 0 if and only if
$$\begin{aligned} \alpha ^{rm+rj+sj}+ \alpha ^{rj+ru+su}=\alpha ^{ru+rj+sj}+ \alpha ^{ru+rm+su}. \end{aligned}$$
(6)
In the worst case, consider \(M:=ru+rj+su - (rm+rj+sj) = r(u-m)+s(u-j)\) with \(r<0\). We immediately see that the maximum value that M can reach is \((\mu +1)(n-k)(k-2)+1\), hence this determinant cannot be zero because \(\alpha \) is a primitive element for \({\mathbb {F}}_q\) and, by assumption, \(q=p^N\), where \(N>M\).
Case IV The \(3\times 3\) minors are of the form
$$\begin{aligned} \begin{vmatrix}a_1&a_2&0\\ 0&a_3&a_4 \\ a_6&0&a_5 \end{vmatrix}. \end{aligned}$$
In this case, we can have that the three considered columns come from different shifts of the same one, hence we allow that some (or all) among j, u, m are equal. Arguing as before, we notice that these minors are given by
$$\begin{aligned} \begin{vmatrix} \alpha ^{ij}&\alpha ^{lu}&0 \\ 0&\alpha ^{(l+r)u}&\alpha ^{(t+r)m}\\ \alpha ^{(i+r+s)j}&0&\alpha ^{(t+r+s)m}\\ \end{vmatrix} =\\ \alpha ^{ij+lu+tm+rm}(\alpha ^{ru+sm}+\alpha ^{rj+sj}). \end{aligned}$$
This determinant is 0 whenever \(r(u-j) + s(m-j) - (q-1)/2\equiv 0 \mod (q-1)\). Analyzing all the possibilities according to r, s being negative or positive and j, u, m being equal or different, after some computations we obtain that, whenever \(q>2(k-1)((\mu +1)(n-k)-1)+1\), the considered determinant is never 0, and this is guaranteed by our field size assumption. \(\square \)
Observe that Case IV of Theorem 4 corresponds to the lower bound for the field size sufficient to avoid the presence of 6-cycles not satisfying the FRC. Hence, we have the following result.
Corollary 2
Let \({\mathcal {C}}^{\mathcal {T}}\) be an (n, k) convolutional code constructed from a (k, w)-wDTS \({\mathcal {T}}\) and satisfying the conditions of Theorems 3 and 4. Then, \(d_{free}({\mathcal {C}}^{\mathcal {T}})\ge 3\) and the code is free from 4- and 6-cycles not satisfying the FRC.
Remark 5
If \({\mathcal {C}}^{\mathcal {T}}\) is an (n, k) convolutional code constructed from a (k, w)-wDTS \({\mathcal {T}}\) and satisfying the conditions of Theorems 3 and 4, such that \(H_{\mu }\) has no zero row and \(n-k\le \min \{3,k\}\), then it follows from Proposition 1 that \(\delta =\mu (n-k)\).
Example 5
Consider the \((3,2)_q\) code constructed in Example 4. Note that \(\mu = 5\), hence, for \(q>11\) we can avoid all the 6-cycles not satisfying the FRC (Case IV of Theorem 4).
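The arithmetic behind this example, as a quick check (our code):

```python
import math

# parameters of the (3,2)_q code from Example 4: scope m(T) = 6
n, k, scope = 3, 2, 6
mu = math.ceil(scope / (n - k)) - 1
# Case IV field-size bound of Theorem 4 for avoiding 6-cycles
bound = 2 * (k - 1) * ((mu + 1) * (n - k) - 1) + 1
```

Any field size \(q>\texttt{bound}\) therefore avoids all 6-cycles not satisfying the FRC.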