Disjoint difference families and their applications

Difference sets and their generalisations to difference families arise from the study of designs and many other applications. Here we give a brief survey of some of these applications, noting in particular the diverse definitions of difference families and the variations in priorities in constructions. We propose a definition of disjoint difference families that encompasses these variations and allows a comparison of the similarities and disparities. We then focus on two constructions of disjoint difference families arising from frequency hopping sequences and show that they are in fact the same. We conclude with a discussion of the notion of equivalence for frequency hopping sequences and for disjoint difference families.

Difference families, and in particular external difference families, arise from many applications in communications and information security. Roughly speaking, a difference family consists of a collection of subsets of an abelian group; internal differences are the differences between elements of the same subset, while external differences are the differences between elements of distinct subsets. Most of the definitions do not coincide exactly with each other, understandably since they arise from diverse applications, and the priorities of maximising or minimising various parameters are also understandably divergent. However, there is enough overlap in these definitions to warrant a study of how they relate to each other, and of how the construction of one family may inform the construction of another. One of the aims of this paper is to give a brief survey of these difference families, noting the variations in definitions and priorities, and to propose a definition that encompasses them and allows a more unified study of these objects.
One particular class of internal difference family arises from frequency hopping (FH) sequences. FH sequences allow many transmitters to send messages simultaneously using a limited number of channels and it transpires that the question of how efficiently one can send messages has to do with the number of internal differences in a collection of subsets of frequency channels. The seminal paper of Lempel and Greenberger [50] gave optimal FH sequences using transformations of linear feedback shift register (LFSR) sequences. In another paper by Fuji-Hara et al. [28] various families of FH sequences were constructed using designs with particular automorphisms, and the question was raised there as to whether these constructions are the same as the LFSR constructions in [50]. Here we show a correspondence between one particular family of constructions in [28] and that of [50].
The relationship between the equivalence of difference families and the equivalence of the designs and codes that arise from them has been much studied. Here we will focus on the notion of equivalence for frequency hopping sequences and for disjoint difference families.

Definitions
Let G be an abelian group of size v, and let Q_0, . . . , Q_{q−1} be disjoint subsets of G, |Q_i| = k_i, i = 0, . . . , q − 1. We will call (G; Q_0, . . . , Q_{q−1}) a disjoint difference family DDF(v; k_0, . . . , k_{q−1}) over G, with the external differences E(d) and internal differences I(d) given, for each d ∈ G \ {0}, by

E(d) = {(x, y) : x ∈ Q_i, y ∈ Q_j, i ≠ j, x − y = d},
I(d) = {(x, y) : x, y ∈ Q_i for some i, x ≠ y, x − y = d}.

We will call the DDF uniform if all the Q_i are of the same size, and we will say it is a perfect internal (or external) DDF if |I(d)| (or |E(d)|) is a constant for all d ∈ G \ {0}. We will call the DDF a partition type DDF if {Q_0, . . . , Q_{q−1}} is a partition of G.
Remark 1 As mentioned before and as will be pointed out in Sect. 2, there is by no means a consensus on the terms used to describe a DDF. Here we point out the disparity between our terms and those of [14], and in Sect. 2 we will point out the differences as they arise. In particular, the definition of difference family in [14] stipulates that the subsets Q i are all of the same size, but does not insist that they are disjoint. We have defined a DDF to consist of disjoint subsets (of varying sizes) because we want to be able to define external differences. Using the term uniform to describe the subsets Q i being of the same size is consistent with terminology used in design theory.
Example 1 Consider the (7, 3, 1) difference set Q_0 = {0, 1, 3} ⊆ Z_7, for which |I(d)| = 1 for all d ∈ Z_7 \ {0}. Let Q_1 = Z_7 \ Q_0 = {2, 4, 5, 6}. Then (Z_7; Q_0, Q_1) is a partition type DDF(7; 3, 4) and is a perfect internal and external DDF.

It is not hard to see that a perfect partition type internal DDF is also a perfect partition type external DDF and vice versa. However, this is not generally true for DDFs that are not partition type: the two 6-subsets Q_0, Q_1 ⊆ Z_25 given in [30], with Q_1 = {. . . , 9, 10, 14, 17, 24}, form a perfect external DDF(25; 6, 6) with |E(d)| = 3 for all d ∈ Z*_25; however, it is not a perfect internal DDF.

For many codes and sequences [12,17,28,30,64], desirable properties can be expressed in terms of (some) external or internal differences of DDFs. We give a brief survey of these applications and the properties required of the DDFs in the next section.

Frequency hopping sequences

A frequency hopping (FH) sequence of length v over a set F of frequencies is a sequence X = (x_0, x_1, . . . , x_{v−1}) with x_i ∈ F. The key measures are the Hamming auto-correlation of a sequence (the number of positions in which cyclic shifts of the sequence agree with the sequence itself) and the cross-correlation between pairs of sequences (the number of positions in which cyclic shifts of one sequence agree with the other sequence in the pair).
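These counts can be verified directly by brute force. The following minimal sketch (ours) computes |I(d)| and |E(d)| for the partition type DDF (Z_7; {0, 1, 3}, {2, 4, 5, 6}) formed by the (7, 3, 1) difference set and its complement:

```python
def difference_counts(v, subsets):
    """Count the internal and external differences of (Z_v; Q_0, ..., Q_{q-1}).

    Returns dicts d -> |I(d)| and d -> |E(d)| over ordered pairs of
    distinct elements, with differences taken modulo v."""
    internal = {d: 0 for d in range(1, v)}
    external = {d: 0 for d in range(1, v)}
    for i, Qi in enumerate(subsets):
        for j, Qj in enumerate(subsets):
            for x in Qi:
                for y in Qj:
                    d = (x - y) % v
                    if d == 0:
                        continue  # skip x == y
                    if i == j:
                        internal[d] += 1
                    else:
                        external[d] += 1
    return internal, external

# Example 1: the (7, 3, 1) difference set {0, 1, 3} and its complement in Z_7.
I, E = difference_counts(7, [{0, 1, 3}, {2, 4, 5, 6}])
```

For this DDF every |I(d)| equals 3 and every |E(d)| equals 4, consistent with the fact that for a partition type DDF over Z_v we have |I(d)| + |E(d)| = v for each d ≠ 0 (here 3 + 4 = 7).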
A single FH sequence X may be viewed in a combinatorial way: define Q_i, i = 0, . . . , q − 1, as subsets of Z_v, with j ∈ Q_i if x_j = i. Hence each Q_i corresponds to a frequency f_i, and the elements of Q_i are the positions in X where f_i is used. (For example, the frequency hopping sequence X = (0, 0, 1, 0, 1, 1, 1) over F = {0, 1} gives the DDF of Example 1.) In [28] it was shown that an FH sequence (x_0, x_1, . . . , x_{v−1}) with out-of-phase auto-correlation value of at most λ exists if and only if (Z_v; Q_0, . . . , Q_{q−1}) is a partition type DDF with |I(d)| ≤ λ for all d ∈ Z_v \ {0}. The aim in FH sequence design is to minimise collisions: we would like λ to be small. Lempel and Greenberger [50] proved a lower bound for λ, and in [68] bounds relating the size of sets of frequency hopping sequences with their Hamming auto- and cross-correlation values were given. Lempel and Greenberger [50] constructed optimal sequences using transformations of m-sequences (more details in Sect. 4.1). In [28] Fuji-Hara et al. also provided many examples of optimal sequences using designs with certain types of automorphisms. Other constructions of FH sequences include using cyclotomy [11,55], random walks on expander graphs [26], and error-correcting codes [20,23]. A survey of sequence design from the viewpoint of codes can also be found in [70]. Later in this paper we will show that one of the constructions in [28] by Fuji-Hara et al. gave the same sequences as those constructed by Lempel and Greenberger in [50]. It would be interesting to see how the other constructions relate to each other.
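The correspondence between the Hamming auto-correlation of a sequence and the internal differences of the associated DDF is easy to check computationally. A small sketch (ours), using the sequence X = (0, 0, 1, 0, 1, 1, 1) above:

```python
def hamming_autocorrelation(X, d):
    """Number of positions where X agrees with its cyclic shift by d."""
    v = len(X)
    return sum(X[t] == X[(t + d) % v] for t in range(v))

def sequence_to_ddf(X):
    """Q_i = set of positions in Z_v where frequency i is used in X."""
    Q = {}
    for t, x in enumerate(X):
        Q.setdefault(x, set()).add(t)
    return Q

X = (0, 0, 1, 0, 1, 1, 1)
Q = sequence_to_ddf(X)
# For every nonzero shift d, |I(d)| of the DDF equals the auto-correlation H(d).
```

For this particular X (an m-sequence, see Sect. 4.1) every out-of-phase value is 3, matching |I(d)| = 3 from Example 1.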
Note that in this correspondence to a difference family, the set of frequency hopping sequences is the rotational closure (Definition 1, Sect. 5) of one single frequency hopping sequence. Collections of DDFs, referred to as balanced nested difference packings, were used to model more general sets of sequences in [4,32,79].
It is also to be noted that most of the published work considered either pairwise interference between two sequences (described above as Hamming correlation) or adversarial interference (jamming) [26,58,81], which may not reflect the reality of the application where more than two sequences may be in use. To this end Nyirenda et al. [59] modelled frequency hopping sequences as cover-free codes and considered additional properties required to resist jamming.

Self-synchronising codes
Self-synchronising codes are also called comma-free codes and have the property that no codeword appears as a substring of two concatenated codewords. This allows for synchronisation without external help. Codes achieving self-synchronisation in the presence of up to ⌊(λ − 1)/2⌋ errors can be constructed from a DDF(v; k_0, . . . , k_{q−1}) with |E(d)| ≥ λ for all d ∈ Z_v \ {0}. In [30], this DDF was called a difference system of sets of index λ over Z_v. The sets Q_0, . . . , Q_{q−1} give the markers for self-synchronisation and constitute redundancy, hence we would like k = Σ_{i=0}^{q−1} k_i to be small. Other optimisation problems include reducing the rate k/v, reducing λ, and reducing the number q of subsets.
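The comma-free property itself is straightforward to test by brute force. The following sketch (ours, with illustrative toy codes) checks whether any codeword occurs at a nontrivial offset inside the concatenation of two codewords:

```python
def is_comma_free(code):
    """Check the comma-free property: no codeword occurs at a nontrivial
    offset inside the concatenation of any two codewords.  All codewords
    are assumed to be tuples of the same length."""
    n = len(next(iter(code)))
    for a in code:
        for b in code:
            cat = a + b  # concatenation of two codewords
            for offset in range(1, n):
                if cat[offset:offset + n] in code:
                    return False
    return True

# (0, 1, 0, 1) reappears at offset 2 of its own concatenation, so the
# one-word code {(0, 1, 0, 1)} is not comma-free; {(0, 0, 1)} is.
```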
An early paper by Golomb et al. [36] took the combinatorial approach to the subject of self-synchronising codes, and [51] gave a survey of results, constructions and open problems of self-synchronising codes. More recent work on self-synchronising codes can be found in [13] which gave some variants on the definitions, and in [6] in the guise of non-overlapping codes, giving constructions and bounds. Further constructions can be found in [30], including constructions from the partitioning of cyclic difference sets and partitioning of hyperplanes in projective geometry, as well as iterative constructions using external and internal DDFs.

Splitting A-codes and secret sharing schemes with cheater detection
In authentication codes (A-codes), a transmitter and a receiver share an encoding rule e, chosen according to some specified probability distribution. To authenticate a source state s, the transmitter encodes s using e and sends the resulting message m = e(s) to the receiver. The receiver receives a message m′ and accepts it if it is a valid encoding of some source, i.e. when m′ = e(s′) for some source s′. In a splitting A-code, the message is computed with an input of randomness so that a source state is not uniquely mapped to a message. An adversary (who does not know which encoding rule is being used) may send their own message m′ to the receiver in the hope that it will be accepted as valid. This is known as an impersonation attack, and succeeds if m′ is a valid encoding of some source s′. Also of concern are substitution attacks, in which an adversary who has seen an encoding m of a source s replaces it with a new value m′. This attack succeeds if m′ is a valid encoding of some source s′ ≠ s. We refer to [64] for further background. It was shown in [64] that optimal splitting A-codes can be constructed from a perfect uniform external DDF(v; k_0 = k, . . . , k_{q−1} = k) with |E(d)| = 1. This gives an A-code with q source states, v encoding rules, v messages, and each source state can be mapped to k valid messages. This type of DDF was called an external difference family (EDF) in [64]. The probability of an adversary successfully impersonating the transmitter is given by kq/v and the probability of successfully substituting a message being transmitted is given by 1/kq (which also happens to equal k(q − 1)/(v − 1) in this particular context). These are parameters to be minimised.
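As an illustration of the parameters involved, the sketch below verifies a small perfect uniform external DDF with |E(d)| = 1 and computes the two attack probabilities. The example EDF(9; 2, 2) is ours, chosen for illustration and easily checked by the code; it is not taken from [64]:

```python
from fractions import Fraction

def external_counts(v, subsets):
    """|E(d)| for each nonzero d: ordered cross-subset differences mod v."""
    counts = {d: 0 for d in range(1, v)}
    for i, Qi in enumerate(subsets):
        for j, Qj in enumerate(subsets):
            if i != j:
                for x in Qi:
                    for y in Qj:
                        counts[(x - y) % v] += 1
    return counts

# A perfect uniform external DDF(9; 2, 2) over Z_9 with |E(d)| = 1.
v, subsets = 9, [{0, 1}, {2, 6}]
E = external_counts(v, subsets)
k, q = 2, 2
impersonation = Fraction(k * q, v)   # success probability kq/v
substitution = Fraction(1, k * q)    # success probability 1/(kq)
```

The code also lets one confirm the coincidence noted above: 1/(kq) = 1/4 equals k(q − 1)/(v − 1) = 2/8 for these parameters.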
A secret sharing scheme is a means of distributing some information, known as shares, to a set of players so that authorised subsets of players are able to combine their shares to reconstruct a unique secret, whereas the shares belonging to unauthorised subsets reveal no information about the secret. If some of the players are dishonest, however, then they may cheat by submitting false values that are not their true shares and thereby causing an incorrect value to be obtained during secret reconstruction. Such attacks were first discussed by Tompa and Woll in [72]. Various types of difference family have been used in constructing schemes which allow such cheating to be detected with high probability. In [65], difference sets were used to construct schemes that were optimal with respect to certain bounds on the sizes of shares. In [64], EDFs were used in a similar manner to construct optimal schemes. Other schemes that permit detection of cheaters include those proposed in [3,9,42,60,62,63]. Many of these constructions can be interpreted as involving particular types of difference family; this observation has led to the definition of the concept of algebraic manipulation detection codes [16] (see Sect. 2.4).

Weak algebraic manipulation detection (AMD) codes
An AMD code is a tool that can be combined with a cryptographic system that provides some form of secrecy in order to incorporate extra robustness against an adversary who can actively change values in the system. The notion was proposed in [16] as an abstraction of techniques used in the construction of robust secret sharing schemes. In the basic setting for a weak AMD code, a source is chosen uniformly from a finite set S of sources with |S| = k.
It is then encoded using a (possibly randomised) encoding map E : S → G where G is an abelian group of order v ≥ k. We require the sets of possible encodings of different sources to be disjoint, so that E(s) uniquely determines s. An adversary is able to manipulate this encoded value by adding a group element d ∈ G of its choosing. (We suppose the adversary knows the details of the encoding function, but does not know what source has been chosen, nor the specific value of any randomness used in the encoding.) After this manipulation, an attempt is made to decode the resulting value. If the altered value E(s) + d is a valid encoding E(s′) of some source s′ then it is decoded to s′. Otherwise, decoding fails and the symbol ⊥ is returned; this represents the situation where the adversary's manipulation has been detected. The adversary is deemed to have succeeded if E(s) + d is decoded to some s′ ≠ s, that is if they have caused the stored value to be decoded to a source other than the one that was initially stored.
A set of sources S with |S| = k, an abelian group G with |G| = v and an encoding rule E constitute a weak (k, v, ε)-AMD (algebraic manipulation detection) code if for any choice of d ∈ G the adversary's success probability is at most ε. (The probability is taken over the uniform choice of source, and over the randomness used in the encoding.) In [16], it was shown that a weak (k, v, ε)-AMD code with deterministic encoding is equivalent to a DDF(v; k) with |I(d)| ≤ εk for all d ∈ G \ {0}. In [16] these were called (v, k, λ)-bounded difference sets (with λ = εk). It is easy to see that these are generalisations of difference sets, allowing general abelian groups and with an upper bound for the number of differences.
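The deterministic case can be illustrated computationally. In the sketch below (ours), the k sources are encoded bijectively onto a subset Q of Z_v; for a shift d the adversary succeeds exactly when E(s) + d lands on another valid encoding, so the success probability is |(Q + d) ∩ Q| / k:

```python
def weak_amd_worst_case(v, Q):
    """Deterministic weak AMD code with encodings Q, a k-subset of Z_v.
    For each nonzero shift d the adversary's success probability is
    |(Q + d) ∩ Q| / k; return the worst hit count over all d (epsilon
    is then worst / k)."""
    worst = 0
    for d in range(1, v):
        hits = sum(1 for x in Q if (x + d) % v in Q)
        worst = max(worst, hits)
    return worst, len(Q)

# The (7, 3, 1) difference set {0, 1, 3}: every nonzero shift hits at most
# once, so epsilon = 1/3.
worst, k = weak_amd_worst_case(7, {0, 1, 3})
```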
Weak AMD codes were introduced in [16], with further detail on constructions, bounds and applications provided in the full version of the paper [15]. Bounds on the adversary's success probability in a weak AMD code were given in [17], and several families with good asymptotic properties were constructed using vector spaces. Additional bounds were given in [66], and constructions and characterisations were given relating weak AMD codes that are optimal with respect to these bounds to a variety of types of external DDF. It is desirable to minimise the tag length (log v − log k, the number of redundant bits) as well as ε.

Stronger forms of algebraic manipulation detection (AMD) code
Strong AMD codes were defined in [16]; these are able to limit the success probability of an adversary even when the adversary knows which source has been encoded. Specifically, for every source s ∈ S and every element d ∈ G, the probability that E(s) + d is decoded to a value s′ ∉ {s, ⊥} is at most ε. (Here the probability is taken over the randomness in the encoding rule E. Unlike the case of a weak AMD code, a strong AMD code cannot use a deterministic encoding rule.) Constructions from vector spaces and caps in projective space were given in [17]. Additional bounds and characterisations were given in [66]. A construction based on a polynomial over a finite field was given in [16] and applied to the construction of robust secret sharing schemes, and robust fuzzy extractors. This construction has since been used for a range of applications, including the construction of anonymous message transmission schemes [8], non-malleable codes [25], strongly decodable stochastic codes [37], secure communication in the presence of a Byzantine relay [38,39], and codes for the adversarial wiretap channel [76]. New constructions, including an asymptotically optimal randomised construction, were given in [18].
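The polynomial construction can be sketched for the smallest case of a single field element source. The parameterisation below is our reading of the construction in [16] (encode s ∈ F_p as (s, x, x³ + s·x) for uniform random x, p a prime not dividing 3), so treat it as an illustrative sketch; the exhaustive check confirms that the success condition has at most 2 solutions in x, giving ε = 2/p:

```python
# Strong AMD sketch (after [16]): encode s in F_p as (s, x, x**3 + s*x) with
# x uniform in F_p.  An adversary adds an offset (ds, dx, df) with ds != 0;
# it succeeds when the tampered triple is again a valid encoding, i.e.
#   (x + dx)**3 + (s + ds)*(x + dx) == x**3 + s*x + df   (mod p).
p = 7  # any prime other than 3, so the cubic term survives tampering

def success_count(s, ds, dx, df):
    """Number of choices of the randomness x for which tampering succeeds."""
    return sum(
        (pow(x + dx, 3, p) + (s + ds) * (x + dx)) % p
        == (pow(x, 3, p) + s * x + df) % p
        for x in range(p)
    )

worst = max(
    success_count(s, ds, dx, df)
    for s in range(p)
    for ds in range(1, p)  # the adversary must change the source part
    for dx in range(p)
    for df in range(p)
)
# The success condition is a nonzero polynomial of degree <= 2 in x
# (quadratic when dx != 0, linear when dx == 0), so worst <= 2 and the
# adversary's success probability is at most 2/p.
```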
AMD codes that resist adversaries who learn some limited information about the source were constructed and analysed in [1], and their application to tampering detection over wiretap channels was discussed.
AMD codes secure in a stronger model, in which an adversary also succeeds by producing a new encoding of the original source, have been used in the design of secure cryptographic devices and related applications [33,34,48,56,57,74,75].

Optical orthogonal codes (OOCs)
Optical orthogonal codes (OOCs) are sequences arising from applications in code-division multiple access in fibre optic channels. OOCs with low auto- and cross-correlation values allow users to transmit information efficiently in an asynchronous environment. A (v, k, λ_a, λ_c)-OOC is a family of {0, 1}-sequences of length v and weight k such that auto-correlation values are at most λ_a and cross-correlation values are at most λ_c. For each sequence X_i, let Q_i be the set of integers modulo v denoting the positions of the non-zero bits.
Background and motivation for the study of OOCs were given in [12], which also included constructions from designs, algebraic codes and projective geometry. In [5] constant weight cyclically permutable codes, which are also uniform DDFs, were used to construct OOCs, and a recursive construction was given. In [82] OOCs were used to construct compressed sensing matrices, and a relationship between OOCs and modular Golomb rulers [14] was given: a (v, k) modular Golomb ruler is a set of k integers {d_0, . . . , d_{k−1}} such that all pairwise differences are distinct and non-zero modulo v; in fact, it is a DDF(v; k) with |I(d)| ≤ 1 for all d ≠ 0.
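The modular Golomb ruler condition is exactly the bounded internal difference condition, which suggests the following minimal checker (ours):

```python
def is_modular_golomb_ruler(v, marks):
    """A (v, k) modular Golomb ruler: all ordered differences between
    distinct marks are distinct and non-zero modulo v -- equivalently,
    a DDF(v; k) with |I(d)| <= 1 for all d != 0."""
    diffs = [(a - b) % v for a in marks for b in marks if a != b]
    return 0 not in diffs and len(diffs) == len(set(diffs))

# {0, 1, 3} is a (7, 3) modular Golomb ruler (it is a planar difference
# set); {0, 1, 2} is not, since the difference 1 repeats.
```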
A generalisation to two-dimensional OOCs with a combinatorial approach can be found in [10,19]. Combinatorial and recursive constructions as well as bounds can be found in [27], and [47] allowed variable weight OOCs and used various types of difference families and designs to construct them.

Other applications
The list of applications discussed in this section is by no means exhaustive, and DDFs arise in a variety of other areas of combinatorics and coding theory. For example, in [29], complete sets of disjoint difference families (in fact, partition type perfect uniform DDFs where the subsets are grouped) were used in constructing 1-factorisations of complete graphs and in constructing cyclically resolvable cyclic Steiner systems. In [80], high-rate quasi-cyclic codes were constructed using perfect internal uniform DDFs, and a generalisation to families of sets of non-negative integers with specific internal differences was given. Z-cyclic whist tournaments correspond to perfect internal DDFs over Z_v [2]. In addition, various types of sequences and arrays with specified correlation properties have been proposed for a wide range of applications [35,40]. Many of these can be studied in terms of a relationship with appropriate forms of DDFs [69].

A geometrical look at a perfect partition type disjoint difference family
In [28] a perfect partition type DDF(q^n − 1; k_0 = q − 1, k_1 = q, . . . , k_{q^{n−1}−1} = q) over Z_{q^n−1} was constructed from line orbits of a cyclic perspectivity τ in the n-dimensional projective space PG(n, q) over GF(q). In [50] another construction with the same parameters was given. In the next section we will show a correspondence between the two constructions. Before that we describe in greater detail the construction of [28, Section III].
An n-dimensional projective space PG(n, q) over the finite field of order q admits a cyclic group ⟨τ⟩ of perspectivities of order q^n − 1 that fixes a hyperplane H_∞ and a point ∞ ∉ H_∞. (We refer the reader to [41] for properties of projective spaces and their automorphism groups.) This group ⟨τ⟩ acts transitively on the points of H_∞ and regularly on the points of PG(n, q) \ (H_∞ ∪ {∞}). We will call the points (and spaces) not contained in H_∞ the affine points (and spaces).
The point orbits of ⟨τ⟩ are {∞}, H_∞, and PG(n, q) \ (H_∞ ∪ {∞}). Dually, the hyperplane orbits are {H_∞}, the set of all hyperplanes through ∞, and the set of all hyperplanes other than H_∞ not containing ∞. The line orbits under ⟨τ⟩ are: (A) one orbit of affine lines through ∞; this orbit has length (q^n − 1)/(q − 1); (B) (q^{n−1} − 1)/(q − 1) orbits of affine lines not through ∞; each orbit has length q^n − 1, and ⟨τ⟩ acts regularly on each orbit; and (C) one orbit of lines contained in H_∞.
A set of parallel (affine) lines through a point P_∞ ∈ H_∞ consists of one line L_0 from the orbit of type (A) and q − 1 lines from each of the (q^{n−1} − 1)/(q − 1) orbits of type (B). We will write this set of q^{n−1} lines as P = {L_0, L_1, . . . , L_{q^{n−1}−1}}, grouping together the q − 1 lines of P belonging to each type (B) orbit O_i (see Fig. 1). We consider two types of d ∈ Z*_{q^n−1} according to the action of τ^d on L_0: type (I), where τ^d fixes the line L_0 (and the points P_∞ and ∞) while permuting the points of L_0, and type (II), where τ^d moves L_0. The τ^d of type (I) permute but do not fix the lines within each O_i. Now take lines L_1 and L_j of P from different orbits, and points P_1 ∈ L_1 and P_2 ∈ L_j (see Fig. 2). Since τ is transitive on affine points (excluding ∞), there is a d_j such that P_1^{τ^{d_j}} = P_2.

Hence, for any orbit, suppose that for some d both P_1 and P_1^{τ^d} lie on L_1 while P_2 and P_2^{τ^d} lie on L_j. This means that τ^{d_j} maps P_1 to P_2 and P_1^{τ^d} to P_2^{τ^d} (powers of τ commute), and hence maps the line L_1 to L_j. But this is a contradiction, since L_1 and L_j belong to different orbits under ⟨τ⟩. Hence if L_1 and L_j belong to different orbits, |L_1^{τ^d} ∩ L_j| ≤ 1 for all d. It is also clear that for any L_i in any orbit there is a d such that |L_i^{τ^d} ∩ L_i| = 1, because τ is transitive on affine points (excluding ∞). Indeed, τ acts regularly on these points, so that for any pair of points (P, Q) on L_i there is a unique d such that P^{τ^d} = Q. There are q(q − 1) ordered pairs of points and so there are q(q − 1) such values of d. These q(q − 1) values of d for each O_i in P, together with the q − 2 values of d for which τ^d fixes L_0, account for all of Z*_{q^n−1}. Now, the points of PG(n, q) \ (H_∞ ∪ {∞}) can be represented as Z_{q^n−1} as follows: pick an arbitrary point P_0 to be designated 0; the point P_0^{τ^i} then corresponds to i ∈ Z_{q^n−1}. The action of τ^d on any point P is thus represented as P + d. Affine lines are therefore represented by subsets of Z_{q^n−1}; in particular, the parallel class P gives the subsets of the DDF of [28]: L_0 contributes its q − 1 affine points other than ∞ (a (q − 1)-subset), and each other line of P contributes its q affine points (a q-subset).

A perfect external DDF
Given that a partition type perfect internal DDF over Z_v with |I(d)| = λ must be a perfect external DDF with |E(d)| = v − λ, the intersection properties |L_i^{τ^d} ∩ L_j|, i ≠ j, can be deduced as follows for the two types (I), (II) of d. (I) For the q − 2 values of d of type (I), with τ^d fixing L_0: if L_i and L_j are in the same type (B) orbit, then since ⟨τ⟩ acts regularly on such an orbit there is a unique d that maps L_i to L_j, so |L_i^{τ^d} ∩ L_j| = q, and for all other L_k in the same orbit |L_i^{τ^d} ∩ L_k| = 0. This applies to each orbit. (II) For the remaining values of d, consider L_i ≠ L_0 and take any point P ∈ L_i. We have P^{τ^d} ∈ L_j for some L_j, so |L_i^{τ^d} ∩ L_j| = 1. This applies for all L_i, so that for any of the (q^n − 1) − (q − 2) values of d of type (II), there are (q^{n−1} − 1)q cases of |L_i^{τ^d} ∩ L_j| = 1, of which q − 1 occur when L_j = L_i.
This construction gives a DDF with the same parameters as those constructed using m-sequences in [50], though [50] restricted its constructions to the case where q is prime. It was asked in [28] whether these are "essentially the same" constructions. In this section we show a correspondence between the two constructions, and in Sect. 5 we discuss what "essentially the same" might mean. This correspondence also shows that the restriction to prime q in [50] is unnecessary. (Indeed it was pointed out in [70] that the assumption that the field must be prime is not necessary.)

The Lempel-Greenberger m-sequence construction
We refer the reader to [54] for more details on linear recurring sequences; here we sketch an introduction. Let (s_t) = s_0 s_1 s_2 . . . be a sequence of elements of GF(q), q a prime power, satisfying the nth order linear recurrence relation s_{t+n} = c_{n−1} s_{t+n−1} + c_{n−2} s_{t+n−2} + · · · + c_0 s_t, with c_i ∈ GF(q) and c_0 ≠ 0.
Then (s t ) is called an (n th order) linearly recurring sequence in GF(q). Such a sequence can be generated using a linear feedback shift register (LFSR). An LFSR is a device with n stages, which we denote by S 0 , . . . , S n−1 . Each stage is capable of storing one element of GF(q). The contents s t+i of all the registers S i (0 ≤ i ≤ n − 1) at a particular time t are known as the state of the LFSR at time t. We will write it either as s(t, n) = s t s t+1 . . . s t+n−1 or as a vector s t = (s t , s t+1 , . . . , s t+n−1 ). The state s 0 = (s 0 , s 1 , . . . , s n−1 ) is the initial state.
At each clock cycle, an output from the LFSR is extracted and the LFSR is updated as described below.
-The content s t of the stage S 0 is output and forms part of the output sequence.
-For all other stages, the content s_{t+i} of stage S_i is moved to stage S_{i−1} (1 ≤ i ≤ n − 1).
-The new content s_{t+n} of stage S_{n−1} is the value of the feedback function f(s_t, s_{t+1}, . . . , s_{t+n−1}) = c_0 s_t + c_1 s_{t+1} + · · · + c_{n−1} s_{t+n−1}, c_i ∈ GF(q).
A diagrammatic representation of an LFSR is given in Fig. 3.

Fig. 3 Linear feedback shift register
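The update rule above can be sketched as follows; this is our minimal Python model of a binary LFSR, using the primitive polynomial x^3 + x + 1 over GF(2) as the example:

```python
def lfsr_sequence(coeffs, state, length):
    """Generate a linear recurring sequence over GF(2) from an n-stage LFSR:
    s_{t+n} = c_{n-1} s_{t+n-1} + ... + c_0 s_t (mod 2).

    coeffs = (c_0, ..., c_{n-1}); state = initial state (s_0, ..., s_{n-1})."""
    s = list(state)
    out = []
    for _ in range(length):
        out.append(s[0])                          # stage S_0 is output
        feedback = sum(c * x for c, x in zip(coeffs, s)) % 2
        s = s[1:] + [feedback]                    # shift, insert feedback
    return out

# Impulse response of the LFSR with primitive polynomial x^3 + x + 1 over
# GF(2), i.e. s_{t+3} = s_{t+1} + s_t: an m-sequence of period 2^3 - 1 = 7.
seq = lfsr_sequence((1, 1, 0), (0, 0, 1), 7)
```

One period of this impulse response sequence is 0, 0, 1, 0, 1, 1, 1, the sequence X used in Sect. 2.1.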
The characteristic polynomial associated with the LFSR (and the linear recurrence relation) is f(x) = x^n − c_{n−1} x^{n−1} − · · · − c_1 x − c_0. The state at time t + 1 is also given by s_{t+1} = s_t C, where C is the state update (companion) matrix of f(x), with (i, j) entry 1 when i = j + 1, last column (c_0, c_1, . . . , c_{n−1})^T, and all other entries 0. A sequence (s_t) generated by an n-stage LFSR is periodic and has maximum period q^n − 1. A sequence that attains this maximum period is referred to as an m-sequence. An LFSR generates an m-sequence if and only if its characteristic polynomial is primitive. An m-sequence contains all possible non-zero states of length n, hence we may use, without loss of generality, the impulse response sequence [the sequence generated using initial state (0 · · · 0 1)].
Let S = (s_t) = s_0 s_1 s_2 . . . be an m-sequence over a prime field GF(p) generated by an n-stage LFSR with a primitive characteristic polynomial f(x). Let s(t, k) = s_t s_{t+1} . . . s_{t+k−1} be a subsequence of length k starting from s_t.
The σ_k-transformations, 1 ≤ k ≤ n − 1, introduced in [50] are described as follows: σ_k maps the length-k subsequence s(t, k) to σ_k(s(t, k)) = Σ_{i=0}^{k−1} s_{t+i} p^i ∈ Z_{p^k}. We write the σ_k-transform of S as U = (u_t), u_t = σ_k(s(t, k)), which is a sequence over Z_{p^k}.
In [50, Theorem 1] it is shown that the sequence U forms a frequency hopping sequence with out-of-phase auto-correlation value p^{n−k} − 1, and hence a partition type perfect DDF with |I(d)| = p^{n−k} − 1 (Sect. 2.1). We see in the next section that this corresponds to the geometric construction of [28] described in Sect. 3.
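On a small case this is easy to verify. The sketch below (ours) applies σ_2 to the m-sequence of period 7 generated by x^3 + x + 1 over GF(2) and checks that the out-of-phase auto-correlation is p^{n−k} − 1 = 2^{3−2} − 1 = 1:

```python
def sigma_k(S, k, p):
    """sigma_k-transform of a period-v sequence S over GF(p): map the
    length-k window starting at t to u_t = sum_i s_{t+i} p^i in Z_{p^k}."""
    v = len(S)
    return [sum(S[(t + i) % v] * p**i for i in range(k)) for t in range(v)]

def max_out_of_phase_autocorrelation(U):
    """Maximum Hamming auto-correlation over all nonzero cyclic shifts."""
    v = len(U)
    return max(sum(U[t] == U[(t + d) % v] for t in range(v))
               for d in range(1, v))

# One period of the m-sequence for x^3 + x + 1 over GF(2).
S = [0, 0, 1, 0, 1, 1, 1]
U = sigma_k(S, 2, 2)   # a frequency hopping sequence over Z_4
```

The untransformed sequence (the case k = 1) has out-of-phase auto-correlation 2^{3−1} − 1 = 3, which the same function confirms.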

A geometric view of the Lempel-Greenberger m-sequence construction
We refer the reader to [41] for details about coordinates in finite projective spaces over GF(q). Here we only sketch what is necessary to describe the m-sequence construction of Sect. 4.1 from the projective geometry point of view.
Let PG(n, q) be an n-dimensional projective space over GF(q). Then we may write its points in homogeneous coordinates (x_0, x_1, . . . , x_n), x_i ∈ GF(q) not all zero, with the proviso that ρ(x_0, x_1, . . . , x_n), as ρ ranges over GF(q) \ {0}, all refer to the same point. Dually a hyperplane of PG(n, q) is written as [a_0, a_1, . . . , a_n], a_i ∈ GF(q) not all zero, and contains the points (x_0, x_1, . . . , x_n) satisfying the equation a_0 x_0 + a_1 x_1 + · · · + a_n x_n = 0.
For t = 0, 1, . . . , let P_t = (s_t, s_{t+1}, . . . , s_{t+n−1}, 1) be the affine point whose first n coordinates are the LFSR state at time t, and let τ be the projectivity represented by the matrix that acts as the state update matrix C on the first n coordinates and fixes the last coordinate. Then τ fixes H_∞ and ∞, acts regularly on O = {P_t | t = 0, . . . , p^n − 2}, and maps P_t to P_{t+1}. Now we consider what a σ_k-transformation means in PG(n, p).
Firstly we consider σ_{n−1}. This takes the first n − 1 coordinates of the point P_t = (s_t, s_{t+1}, . . . , s_{t+n−1}, 1) and maps them to Σ_{i=0}^{n−2} s_{t+i} p^i ∈ Z_{p^{n−1}}. There are p^{n−1} distinct values z_i ∈ Z_{p^{n−1}}, and for each z_i ≠ 0 there are p points Z_i = {P_{t_0}, . . . , P_{t_{p−1}}} = {(s_t, s_{t+1}, . . . , s_{t+n−2}, α, 1) | α ∈ GF(p)} which are mapped to z_i by σ_{n−1}. For z_i = 0 there are p − 1 corresponding points in Z_0, since the all-zero state does not occur in an m-sequence.

The other way round?
We have seen that the m-sequence construction of [50] gives the projective geometry construction of [28]. Here we consider how the constructions of [28] relate to m-sequences.
In PG(n, q) we may choose any n + 2 points (every n + 1 of which are independent) as the simplex of reference (there is an automorphism that maps any such set of n + 2 points to any other). Hence we may choose the hyperplane x_n = 0 (denoted H_∞) and the point (0, 0, . . . , 0, 1) (denoted ∞). Now, consider a projectivity τ represented by an (n + 1) × (n + 1) matrix A that fixes H_∞ and ∞. Up to a scalar, A must take the block diagonal form A = diag(C, 1) for some invertible n × n matrix C over GF(q), and we see that A^k = diag(C^k, 1). So the order of A is given by the order of C. Let the characteristic polynomial of C be f(x). The order of A is hence the order of f(x).
Consider the action of τ on the points of PG(n, q) \ (H_∞ ∪ {∞}). For τ to act transitively on these points, A must have order q^n − 1, which means that f(x) must be primitive. If we use this f(x) as the characteristic polynomial for an LFSR, we generate an m-sequence, as in Sect. 4.1. For prime fields, this is precisely the construction of [50].
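This can be checked computationally on a small example. The sketch below (ours) computes the multiplicative order of the companion matrix C of f(x) over GF(p); transitivity on the affine points corresponds to the order being p^n − 1:

```python
def companion_matrix_order(coeffs, p):
    """Multiplicative order of the companion matrix C of
    f(x) = x^n - c_{n-1} x^{n-1} - ... - c_0 over GF(p).
    coeffs = (c_0, ..., c_{n-1})."""
    n = len(coeffs)
    # Ones on the subdiagonal, (c_0, ..., c_{n-1}) as the last column.
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1
    for i in range(n):
        C[i][n - 1] = coeffs[i] % p

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
                 for j in range(n)] for i in range(n)]

    identity = [[int(i == j) for j in range(n)] for i in range(n)]
    M, order = C, 1
    while M != identity:
        M, order = matmul(M, C), order + 1
    return order

# x^3 + x + 1 is primitive over GF(2), so the order is 2^3 - 1 = 7; the
# non-primitive x^3 + x^2 + x + 1 = (x + 1)^3 gives order 4 instead.
```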
Projectivities in the same conjugacy class have similar matrices and therefore the same characteristic polynomial. There are φ(q^n − 1)/n primitive polynomials of degree n over GF(q), and this gives the number of conjugacy classes of projectivities fixing H_∞ and ∞ and acting transitively on the points of PG(n, q) \ (H_∞ ∪ {∞}); these classes correspond to the decimations of the m-sequence. If the set of shifts of an m-sequence is considered as a cyclic code over GF(q), then this gives equivalent codes (more on this in Sect. 5). The group ⟨τ⟩ has φ(q^n − 1) generators, and each generator τ^i, (i, q^n − 1) = 1, corresponds to a multiplier w such that {wQ_i : i = 1, . . . , q^{n−1} − 1} = {Q_i : i = 1, . . . , q^{n−1} − 1}.
We have described this correspondence in terms of the lines of PG(n, q), but it also applies to the correspondence between higher dimensional subspaces and the σ_k-transformations; the case of PG(3, 3), with the group of perspectivities generated by a τ represented by a matrix of the above form, is one such example. It is clear from this correspondence that the m-sequence construction of [50] also works over a non-prime field. The σ_k-transform essentially assigns a unique symbol to each k-tuple of the initial m-sequence.

Equivalence of FH sequences
In [28], Fuji-Hara et al. stated "Often we are interested in properties of FH sequences, such as auto-correlation, randomness and generating method, which remain unchanged when passing from one FH sequence to another that is essentially the same. Providing an exact definition for this concept and enumerating how many non 'essentially the same' FH sequences are also interesting problems deserving of attention." Here we discuss the notion of equivalence of FH sequences.

Firstly we adopt the notation of [59] for frequency hopping schemes: an (n, M, q)-frequency hopping scheme (FHS) F is a set of M words of length n over an alphabet of size q. Each word is an FH sequence.
Elements of the symmetric group S_n act on F by permuting the coordinate positions of each word in F. Let ρ_n denote the permutation (1 2 · · · n) ∈ S_n. We say that an element of S_n is a rotation if it belongs to ⟨ρ_n⟩, the subgroup generated by ρ_n.

Definition 1
Let Q be a finite alphabet. Given a set S ⊆ Q^n, we define the rotational closure of S to be the set ⇔S = {w^σ : w ∈ S, σ ∈ ⟨ρ_n⟩}, the set of all rotations of words of S. The maximum Hamming correlation of a set of sequences is then determined by the Hamming distances within its rotational closure. The proofs of these facts are trivial, but they suggest that taking the rotational closure of a frequency hopping sequence allows us to work with the standard notion of Hamming distance in place of the Hamming correlation.
Lemma If a word w of length n has maximum out-of-phase Hamming auto-correlation less than n, then |⇔w| = n.

Proof We observe that |⇔w| is the size of the orbit of w under the action of the subgroup ⟨ρ_n⟩, which has order n. By the orbit-stabiliser theorem, if |⇔w| < n then the stabiliser of w is nontrivial. That is, there is some (non-identity) rotation that maps w onto itself. This implies that its maximum out-of-phase Hamming auto-correlation is n, contradicting the hypothesis.
In other words, unless a given sequence of length n has worst possible Hamming autocorrelation, its rotational closure always has size n.
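The two cases are easy to see on small examples; a minimal sketch (ours):

```python
def rotational_closure(w):
    """All cyclic shifts of the word w, as a set of tuples."""
    n = len(w)
    return {tuple(w[(t + d) % n] for t in range(n)) for d in range(n)}

def max_autocorrelation(w):
    """Maximum out-of-phase Hamming auto-correlation of w."""
    n = len(w)
    return max(sum(w[t] == w[(t + d) % n] for t in range(n))
               for d in range(1, n))

w1 = (0, 0, 1, 0, 1, 1, 1)   # no nontrivial rotation fixes it: |closure| = 7
w2 = (0, 1, 0, 1)            # fixed by the shift by 2: |closure| = 2
```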
In coding theory, two codes are equivalent if one can be obtained from the other by a combination of applying an arbitrary permutation to the alphabet symbols in a particular coordinate position and/or permuting the coordinate positions of the codewords. These are transformations that preserve the Hamming distance between any two codewords. In the case of frequency hopping sequences, it is the maximum Hamming correlation that we wish to preserve. This is a stronger condition, and hence the set of transformations permitted in the definition of equivalence will be smaller. For example, we can no longer apply different permutations to the alphabet in different coordinate positions, as that can alter the out-of-phase Hamming correlations. Because the rotation of coordinate positions is inherent to the definition of Hamming correlation, if we wish to permute the alphabet symbols then we must apply the same permutation to the symbols in each coordinate position. Similarly, not all permutations of coordinates preserve the out-of-phase Hamming auto-correlation of a sequence.
However, we can use the notion of rotational closure to determine an appropriate set of coordinate permutations that will preserve Hamming correlation. Recall that for a given word, its maximum out-of-phase Hamming auto-correlation is uniquely determined by the minimum distance of its rotational closure. Now, any permutation of coordinates preserves Hamming distance, so if we can find a set of permutations that preserve the property of being rotationally closed, then these will in turn preserve the out-of-phase Hamming auto-correlation of individual sequences.
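As a small illustration of this determination, the following Python sketch (ours, 0-indexed) checks, for a word whose rotational closure has full size, that the maximum out-of-phase auto-correlation equals $n$ minus the minimum distance of the closure:

```python
from itertools import combinations

def rotate(w, t):
    """Cyclic shift of the tuple w by t positions."""
    n = len(w)
    return tuple(w[(i + t) % n] for i in range(n))

def dist(u, v):
    """Hamming distance between two words of equal length."""
    return sum(a != b for a, b in zip(u, v))

w = (1, 1, 2, 3, 2)
n = len(w)
closure = {rotate(w, t) for t in range(n)}  # full size: |closure| = n here

# Maximum out-of-phase auto-correlation of w ...
H = max(n - dist(w, rotate(w, t)) for t in range(1, n))
# ... equals n minus the minimum distance of the rotational closure.
d = min(dist(u, v) for u, v in combinations(closure, 2))
assert H == n - d
```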
Suppose a word $w$ of length $n$ has $H(w) < n$. Then its rotational closure consists of the $n$ distinct elements $\overleftrightarrow{w} = \{w, w^{\rho_n}, w^{\rho_n^2}, \ldots, w^{\rho_n^{n-1}}\}$. Applying a permutation $\gamma \in S_n$ to the coordinates of these words gives the set $\overleftrightarrow{w}^\gamma = \{w^\gamma, w^{\rho_n\gamma}, w^{\rho_n^2\gamma}, \ldots, w^{\rho_n^{n-1}\gamma}\}$.
We wish to establish conditions on $\gamma$ that ensure that $\overleftrightarrow{w}^\gamma$ is itself rotationally closed. Closure requires that each rotation of $w^\gamma$ again lies in the set, that is, $w^{\gamma\rho_n^j} = w^{\rho_n^i\gamma}$ for some $i$, and so $w^{\gamma\rho_n^j\gamma^{-1}} = w^{\rho_n^i}$. For this to hold for every word $w$ and every $j$, we require $\gamma\rho_n^j\gamma^{-1} \in \langle\rho_n\rangle$ for all $j$; that is, $\gamma \in N_{S_n}(\langle\rho_n\rangle)$, the normaliser of $\langle\rho_n\rangle$ in $S_n$.
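For small $n$ this normaliser can be computed exhaustively. The sketch below (ours; positions are 0-indexed, so $\langle\rho_n\rangle$ becomes the group of cyclic shifts of $\{0, \ldots, n-1\}$) confirms for $n = 6$ that the normaliser consists of the affine permutations $i \mapsto ai + b \pmod{n}$ with $\gcd(a, n) = 1$, and that each of its elements maps a rotationally closed set to another rotationally closed set:

```python
from itertools import permutations
from math import gcd

n = 6
rho = tuple((i + 1) % n for i in range(n))  # the n-cycle rho_n on positions 0..n-1

def compose(g, h):
    """The permutation 'apply g, then h'."""
    return tuple(h[g[i]] for i in range(n))

def inverse(g):
    inv = [0] * n
    for i in range(n):
        inv[g[i]] = i
    return tuple(inv)

def power(g, k):
    r = tuple(range(n))
    for _ in range(k):
        r = compose(r, g)
    return r

rho_subgroup = {power(rho, k) for k in range(n)}
normalizer = {g for g in permutations(range(n))
              if compose(compose(inverse(g), rho), g) in rho_subgroup}

# The normaliser consists exactly of the affine permutations i -> a*i + b (mod n)
# with gcd(a, n) = 1.
affine = {tuple((a * i + b) % n for i in range(n))
          for a in range(1, n) if gcd(a, n) == 1 for b in range(n)}
assert normalizer == affine

def apply_to_coords(w, g):
    """The word w^g: the symbol in position i moves to position g[i]."""
    out = [None] * n
    for i in range(n):
        out[g[i]] = w[i]
    return tuple(out)

w = (0, 0, 1, 2, 1, 2)
closure = {apply_to_coords(w, power(rho, t)) for t in range(n)}

# Every element of the normaliser maps this rotationally closed set to another
# rotationally closed set.
for g in normalizer:
    image = {apply_to_coords(v, g) for v in closure}
    assert all(apply_to_coords(v, rho) in image for v in image)
```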

Definition 2
We say that two $(n, M, q)$-FHSs are equivalent if one can be obtained from the other by a combination of permuting the symbols of the underlying alphabet and/or applying to the coordinates of its sequences any permutation that is an element of $N_{S_n}(\langle\rho_n\rangle)$.
Equivalent FHSs have the same maximum Hamming correlation.
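This invariance can be checked directly on a small example. In the Python sketch below (ours; the two sequences are arbitrary toy data, not taken from the literature), a single alphabet permutation is applied in every coordinate position together with an affine coordinate permutation from the normaliser, and the maximum Hamming correlation is unchanged:

```python
from math import gcd

n = 7
F = [(0, 1, 2, 0, 1, 3, 2), (1, 1, 0, 2, 3, 3, 0)]  # a toy FHS with two sequences

def correlation(u, v, t):
    """Cyclic Hamming correlation of u and v at shift t."""
    return sum(u[i] == v[(i + t) % n] for i in range(n))

def max_correlation(F):
    """Maximum over out-of-phase auto-correlations and all cross-correlations."""
    auto = max(correlation(w, w, t) for w in F for t in range(1, n))
    cross = max(correlation(u, v, t)
                for i, u in enumerate(F) for v in F[i + 1:] for t in range(n))
    return max(auto, cross)

# An equivalence transformation as in Definition 2: one alphabet permutation
# applied in every coordinate position, together with the coordinate
# permutation i -> a*i + b (mod n), an element of the normaliser of <rho_n>.
sigma = {0: 2, 1: 0, 2: 3, 3: 1}
a, b = 3, 5
assert gcd(a, n) == 1

def transform(w):
    out = [None] * n
    for i in range(n):
        out[(a * i + b) % n] = sigma[w[i]]
    return tuple(out)

assert max_correlation([transform(w) for w in F]) == max_correlation(F)
```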

Comparison with the notion of equivalence for DDFs
Two distinct difference families are said to be equivalent if there is an isomorphism between the underlying groups that maps one DDF onto a translation of the other. In Sect. 2.1 we discussed the correspondence between a partition type DDF and an FHS. In fact, we will see that two partition type DDFs over $\mathbb{Z}_n$ are equivalent in this sense if and only if the corresponding FHSs are equivalent in the sense of Definition 2. We begin by noting that the automorphism group of $\mathbb{Z}_n$ is isomorphic to $\mathbb{Z}_n^*$. As in Sect. 5 let $\rho_n \in S_n$ be the permutation $(1\,2\,\cdots\,n)$. Any element $\gamma \in N_{S_n}(\langle\rho_n\rangle)$ induces a map $\phi_\gamma : \mathbb{Z}_n \to \mathbb{Z}_n$ by sending $i \in \mathbb{Z}_n$ to the unique element $j \in \mathbb{Z}_n$ for which $\gamma^{-1}\rho_n^i\gamma = \rho_n^j$. The map $\phi_\gamma$ is a homomorphism, since if $\phi_\gamma(i_1) = j_1$ and $\phi_\gamma(i_2) = j_2$ then $\gamma^{-1}\rho_n^{i_1+i_2}\gamma = (\gamma^{-1}\rho_n^{i_1}\gamma)(\gamma^{-1}\rho_n^{i_2}\gamma) = \rho_n^{j_1}\rho_n^{j_2} = \rho_n^{j_1+j_2}$, so $\phi_\gamma(i_1 + i_2) = \phi_\gamma(i_1) + \phi_\gamma(i_2)$; in fact it is an automorphism. Every automorphism of $\mathbb{Z}_n$ can be obtained in this fashion.
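The map $\phi_\gamma$ can be computed mechanically from $\gamma$. The following Python sketch (ours; 0-indexed positions, with $\gamma$ taken to be an affine permutation $i \mapsto ai + b$, $\gcd(a, n) = 1$, which normalises $\langle\rho_n\rangle$) recovers $\phi_\gamma$ by conjugation and checks that it is an automorphism of $\mathbb{Z}_n$:

```python
from math import gcd

n = 9
rho = tuple((i + 1) % n for i in range(n))  # the n-cycle rho_n on positions 0..n-1

def compose(g, h):
    """The permutation 'apply g, then h'."""
    return tuple(h[g[i]] for i in range(n))

def inverse(g):
    inv = [0] * n
    for i in range(n):
        inv[g[i]] = i
    return tuple(inv)

def power(g, k):
    r = tuple(range(n))
    for _ in range(k):
        r = compose(r, g)
    return r

# gamma: an affine permutation i -> a*i + b (mod n), an element of N_{S_n}(<rho_n>).
a, b = 4, 2
assert gcd(a, n) == 1
gamma = tuple((a * i + b) % n for i in range(n))

def phi(i):
    """phi_gamma(i): the unique j with gamma^{-1} rho^i gamma = rho^j."""
    conj = compose(compose(inverse(gamma), power(rho, i)), gamma)
    return conj[0]  # rho^j sends position 0 to position j

# phi_gamma is an automorphism of Z_n: additive, and a bijection.
assert all(phi((i + j) % n) == (phi(i) + phi(j)) % n
           for i in range(n) for j in range(n))
assert sorted(phi(i) for i in range(n)) == list(range(n))
```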
Theorem 5 Let $F$ be a length $n$ FHS consisting of a single word, and let $D$ be the corresponding partition type DDF over $\mathbb{Z}_n$. Then the FHS obtained by applying a permutation $\gamma \in N_{S_n}(\langle\rho_n\rangle)$ to the coordinate positions of $F$ corresponds to a DDF that is a translation of the DDF obtained from $D$ by applying the automorphism $\phi_\gamma$ to the elements of $\mathbb{Z}_n$.
Proof It is straightforward to verify that $\gamma^{-1}\rho_n\gamma$ is the cycle $(1^\gamma\ 2^\gamma\ \cdots\ n^\gamma)$. For $\gamma \in N_{S_n}(\langle\rho_n\rangle)$ this is equal to $\rho_n^k$ for some $k$. It follows that for $i = 1, 2, \ldots, n-1$ we have
$(i+1)^\gamma = i^\gamma + k \pmod{n}$.  (3)
The correspondence between $F$ and $D$ is obtained by associating positions in the sequence with elements of $\mathbb{Z}_n$. For example, the FHS $F = (1, 1, 2, 3, 2)$ corresponds to the DDF $(\mathbb{Z}_5; \{0, 1\}, \{2, 4\}, \{3\})$. We observe that in this representation, the $(i+1)$-th element of the sequence $F$ is in correspondence with the element $i \in \mathbb{Z}_n$. If we apply $\gamma$ to the positions of $F$, then the entry in the $(j+1)$-th position is mapped to the $(i+1)$-th position when $(j+1)^\gamma = i+1$. Repeatedly applying the relation in (3) tells us that in this case we have $i + 1 = 1^\gamma + jk$, so $i = (1^\gamma - 1) + jk$.
If we apply $\phi_\gamma$ to $\mathbb{Z}_n$ then element $j \in \mathbb{Z}_n$ is replaced by element $i$ when $\gamma^{-1}\rho_n^j\gamma = \rho_n^i$. But we have that $\gamma^{-1}\rho_n^j\gamma = (\gamma^{-1}\rho_n\gamma)^j = (\rho_n^k)^j = \rho_n^{kj}$, so it must be the case that $i = kj$. It follows that if we then translate this DDF by adding $1^\gamma - 1$ to each element of $\mathbb{Z}_n$ we obtain the same overall transformation that was effected by applying $\gamma$ to $F$.
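The proof can be checked on the example above. The sketch below (ours; 0-indexed, so that the translation constant $1^\gamma - 1$ becomes $\gamma(0) = b$ and $\phi_\gamma$ becomes multiplication by $a$) applies an affine $\gamma$ to the positions of $F = (1, 1, 2, 3, 2)$ and confirms that the resulting DDF is a translation of $\phi_\gamma(D)$:

```python
from math import gcd

w = (1, 1, 2, 3, 2)  # the example FHS F
n = len(w)

def blocks(word):
    """Partition type DDF over Z_n: position i (0-indexed) <-> element i of Z_n."""
    return {s: frozenset(i for i in range(n) if word[i] == s) for s in set(word)}

D = blocks(w)
assert D == {1: frozenset({0, 1}), 2: frozenset({2, 4}), 3: frozenset({3})}

# gamma: the affine coordinate permutation i -> a*i + b (mod n) in N_{S_n}(<rho_n>).
a, b = 2, 3
assert gcd(a, n) == 1
gamma = [(a * i + b) % n for i in range(n)]

# Apply gamma to the coordinate positions of w.
w_gamma = [None] * n
for i in range(n):
    w_gamma[gamma[i]] = w[i]

# Theorem 5: the DDF of w^gamma equals phi_gamma(D) translated.  Here
# gamma^{-1} rho_n gamma = rho_n^a, so phi_gamma is multiplication by a,
# and the translation constant is gamma(0) = b.
expected = {s: frozenset((a * x + b) % n for x in block) for s, block in D.items()}
assert blocks(tuple(w_gamma)) == expected
```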

Conclusion
We have given a general definition of a disjoint difference family, and have seen a range of examples of applications in communications and information security for these difference families, with different applications placing different constraints on the associated properties and parameters. Focusing on the case of FHSs and their connection with partition type disjoint difference families, we have shown that a construction due to Fuji-Hara et al. [28] gives rise to precisely the same disjoint difference families as an earlier construction of Lempel and Greenberger [50], thus answering an open question in [28]. In response to the question of Fuji-Hara et al. as to when two FHSs can be considered to be "essentially the same", we have established a notion of equivalence of frequency hopping sequences. FHSs based on a single sequence correspond to partition type disjoint difference families, and in this case we have shown that our definition of equivalence corresponds to an established notion of equivalence for difference families, although our definition also applies more generally to FHSs based on more than one sequence.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.