
Combining MILP modeling with algebraic bias evaluation for linear mask search: improved fast correlation attacks on SNOW

  • Published in: Designs, Codes and Cryptography

Abstract

The Mixed Integer Linear Programming (MILP) technique has been widely applied in the realm of symmetric-key cryptanalysis. In this paper, we propose a new bitwise breakdown MILP modeling strategy for describing the linear propagation rules of modular addition-based operations. We apply such new techniques to cryptanalysis of the SNOW stream cipher family and find new linear masks: we use the MILP model to find many linear mask candidates among which the best ones are identified with particular algebraic bias evaluation techniques. For SNOW 3G, the correlation of the linear mask we found is the highest on record: such results are highly likely to be optimal according to our analysis. For SNOW 2.0, we find new masks matching the correlation record and many new sub-optimal masks applicable to improving correlation attacks. For SNOW-V/Vi, by investigating both bitwise and truncated linear masks, we find all linear masks having the highest correlation and prove the optimum of the corresponding truncated patterns under the “fewest active S-box preferred” strategy. By using the newly found linear masks, we give correlation attacks on the SNOW family with improved complexities. We emphasize that the newly proposed uniform MILP-aided framework can be potentially applied to analyze LFSR-FSM structures composed of modular addition and S-box as non-linear components.


Data availability

The data sets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. They differ only in the LFSR updating function, which makes no difference in our analysis.

  2. In fact, this work was carried out independently of, and almost in parallel with, [39].

  3. Note that \(\Gamma _{\varvec{x}}\cdot \varvec{x}=\bigoplus _{i=0}^{d-1} \Gamma _{\varvec{x}_i}\cdot \varvec{x}_i\), where \(\Gamma _{\varvec{x}_i}\in \mathbb {F}_{2}^m\) is the bitwise linear mask of \(\varvec{x}_i\in \mathbb {F}_{2}^m\).

  4. SNOW-Vi is exactly the same as SNOW-V except for the LFSR update function and the tap \(\varvec{T2}\), which is moved to the higher half of LFSR-A.

  5. We use the notation \(c\vert d\) to represent the integer value \(2c+d\), i.e., \(c\vert d=2c+d\).

  6. We use \(\textsf {msw}(\cdot )\) to denote the most significant 32-bit word (MSW) of a 128-bit mask.

  7. As illustrated in [13], there may exist some repeated tuples, but their number is quite small compared with the usual case of non-repeated elements. Note that these repeated samples will not affect the processing phase of the LFSR initial state recovery, since the absolute values of the correlations of the folded approximation relations in such cases are larger than in the normal cases.

References

  1. Abdelkhalek A., Sasaki Y., Todo Y., Tolba M., Youssef A.M.: MILP modeling for (large) S-boxes to optimize probability of differential characteristics. IACR Trans. Symmetric Cryptol. 2017(4), 99–129 (2017).


  2. Beierle C., Biryukov A., Cardoso dos Santos L., Großschädl J., Perrin L., Udovenko A., Velichkov V., Wang Q.: Alzette: a 64-bit ARX-box. In: Micciancio D., Ristenpart T. (eds.) CRYPTO 2020, pp. 419–448. Springer, Cham (2020).

  3. Chepyzhov V.V., Johansson T., Smeets B.J.M.: A simple algorithm for fast correlation attacks on stream ciphers. In: Schneier B. (ed.) FSE 2000, vol. 1978, pp. 181–195. LNCS. Springer, Berlin (2000).


  4. Chose P., Joux A., Mitton M.: Fast correlation attacks: an algorithmic point of view. In: Knudsen L.R. (ed.) EUROCRYPT 2002, pp. 209–221. Springer, Berlin (2002).


  5. Coppersmith D., Halevi S., Jutla C.: Cryptanalysis of stream ciphers with linear masking. In: Yung M. (ed.) CRYPTO 2002, pp. 515–532. Springer, Berlin (2002).


  6. Cui T., Chen S., Fu K., Wang M., Jia K.: New automatic tool for finding impossible differentials and zero-correlation linear approximations. Sci. China Inf. Sci. 64(2) (2021).

  7. Ekdahl P., Johansson T.: A new version of the stream cipher SNOW. In: Nyberg K., Heys H.M. (eds.) SAC 2002. LNCS, vol. 2595, pp. 47–61. Springer, Berlin (2003).

  8. Ekdahl P., Johansson T., Maximov A., Yang J.: A new SNOW stream cipher called SNOW-V. IACR Trans. Symmetric Cryptol. 2019(3), 1–42 (2019).


  9. Ekdahl P., Maximov A., Johansson T., Yang J.: SNOW-Vi: an extreme performance variant of SNOW-V for lower-grade CPUs. In: WiSec 2021, pp. 261–272. ACM (2021).

  10. ElSheikh M., Abdelkhalek A., Youssef A.M.: On MILP-based automatic search for differential trails through modular additions with application to Bel-T. In: Buchmann J., Nitaj A., Rachidi T. (eds.) Progress in Cryptology - AFRICACRYPT 2019, pp. 273–296. Springer, Cham (2019).


  11. Fu K., Wang M., Guo Y., Sun S., Hu L.: MILP-based automatic search algorithms for differential and linear trails for Speck. In: Peyrin T. (ed.) FSE 2016, vol. 9783, pp. 268–288. LNCS. Springer, Berlin (2016).


  12. Funabiki Y., Todo Y., Isobe T., Morii M.: Several MILP-aided attacks against SNOW 2.0. In: Camenisch J., Papadimitratos P. (eds.) CANS 2018. LNCS, vol. 11124, pp. 394–413. Springer, Berlin (2018).

  13. Gong X., Zhang B.: Fast computation of linear approximation over certain composition functions and applications to SNOW 2.0 and SNOW 3G. Des. Codes Cryptogr. 88(11), 2407–2431 (2020).

  14. Gong X., Zhang B.: Comparing large-unit and bitwise linear approximations of SNOW 2.0 and SNOW 3G and related attacks. IACR Trans. Symmetric Cryptol. 2021(2), 71–103 (2021).

  15. Gong X., Zhang B.: Resistance of SNOW-V against fast correlation attacks. IACR Trans. Symmetric Cryptol. 2021(1), 378–410 (2021).


  16. Hao Y., Leander G., Meier W., Todo Y., Wang Q.: Modeling for three-subset division property without unknown subset - improved cube attacks against Trivium and Grain-128AEAD. In: Canteaut A., Ishai Y. (eds.) EUROCRYPT 2020, Part I, vol. 12105, pp. 466–495. LNCS. Springer, Berlin (2020).


  17. Hu K., Sun S., Todo Y., Wang M., Wang Q.: Massive superpoly recovery with nested monomial predictions. In: Tibouchi M., Wang H. (eds.) ASIACRYPT 2021, Part I, vol. 13090, pp. 392–421. LNCS. Springer, Berlin (2021).


  18. Huang S., Wang X., Xu G., Wang M., Zhao J.: Conditional cube attack on reduced-round Keccak sponge function. In: Coron J., Nielsen J.B. (eds.) EUROCRYPT 2017, Part II. LNCS, vol. 10211, pp. 259–288 (2017).

  19. Matsui M.: Linear cryptanalysis method for DES cipher. In: Helleseth T. (ed.) EUROCRYPT'93. LNCS, vol. 765, pp. 386–397. Springer, Berlin (1994).

  20. Maximov A., Johansson T.: Fast computation of large distributions and its cryptographic applications. In: Roy B. (ed.) Advances in Cryptology - ASIACRYPT 2005, pp. 313–332. Springer, Berlin (2005).


  21. Mouha N., Wang Q., Gu D., Preneel B.: Differential and linear cryptanalysis using mixed-integer linear programming. In: Wu C., Yung M., Lin D. (eds.) Inscrypt 2011, vol. 7537, pp. 57–76. LNCS. Springer, Berlin (2011).


  22. Nyberg K.: Correlation theorems in cryptanalysis. Discret. Appl. Math. 111(1), 177–188 (2001). https://doi.org/10.1016/S0166-218X(00)00351-6.


  23. Nyberg K., Wallén J.: Improved linear distinguishers for SNOW 2.0. In: Robshaw M.J.B. (ed.) FSE 2006. LNCS, vol. 4047, pp. 144–162. Springer, Berlin (2006).

  24. ETSI/SAGE: Specification of the 3GPP confidentiality and integrity algorithms UEA2 & UIA2, document 2: SNOW 3G specification, v1.1 (2006).

  25. Shi Z., Jin C., Zhang J., Cui T., Ding L., Jin Y.: A correlation attack on full SNOW-V and SNOW-Vi. In: EUROCRYPT 2022. Springer (2022).

  26. Sun L., Wang W., Liu R., Wang M.: MILP-aided bit-based division property for ARX ciphers. Sci. China Inf. Sci. 61(11), 118102:1–118102:3 (2018).

  27. Sun S., Hu L., Wang P., Qiao K., Ma X., Song L.: Automatic security evaluation and (related-key) differential characteristic search: application to SIMON, PRESENT, LBLOCK, DES(L) and other bit-oriented block ciphers. In: Sarkar P., Iwata T. (eds.) Advances in Cryptology - ASIACRYPT 2014, pp. 158–178. Springer, Berlin (2014).


  28. Sun Y.: Towards the least inequalities for describing a subset in \(\mathbb {Z}_2^n\). Cryptology ePrint Archive, Report 2021/1084 (2021).

  29. Todo Y., Isobe T., Hao Y., Meier W.: Cube attacks on non-blackbox polynomials based on division property. IEEE Trans. Comput. 67(12), 1720–1736 (2018).


  30. Todo Y., Isobe T., Meier W., Aoki K., Zhang B.: Fast correlation attack revisited - cryptanalysis on full Grain-128a, Grain-128, and Grain-v1. In: Shacham H., Boldyreva A. (eds.) CRYPTO 2018, Part II, vol. 10992, pp. 129–159. LNCS. Springer, Berlin (2018).


  31. Udovenko A.: MILP modeling of boolean functions by minimum number of inequalities. Cryptology ePrint Archive, Report 2021/1099 (2021).

  32. Wagner D.: A generalized birthday problem. In: Yung M. (ed.) Advances in Cryptology - CRYPTO 2002, pp. 288–304. Springer, Berlin (2002).


  33. Wang Q., Hao Y., Todo Y., Li C., Isobe T., Meier W.: Improved division property based cube attacks exploiting algebraic properties of superpoly. In: Shacham H., Boldyreva A. (eds.) CRYPTO 2018, Part I, vol. 10991, pp. 275–305. LNCS. Springer, Berlin (2018).


  34. Watanabe D., Biryukov A., Cannière C.D.: A distinguishing attack of SNOW 2.0 with linear masking method. In: Matsui M., Zuccherato R.J. (eds.) SAC 2003. LNCS, vol. 3006, pp. 222–233. Springer, Berlin (2003).

  35. Xiang Z., Zhang W., Bao Z., Lin D.: Applying MILP method to searching integral distinguishers based on division property for 6 lightweight block ciphers. In: Cheon J.H., Takagi T. (eds.) ASIACRYPT 2016, Part I. LNCS, vol. 10031, pp. 648–678 (2016).

  36. Yang J., Johansson T., Maximov A.: Vectorized linear approximations for attacks on SNOW 3G. IACR Trans. Symmetric Cryptol. 2019(4), 249–271 (2019).


  37. Yang J., Johansson T., Maximov A.: Improved guess-and-determine and distinguishing attacks on SNOW-V. IACR Trans. Symmetric Cryptol. 2021(3), 54–83 (2021).


  38. Zhang B., Xu C., Meier W.: Fast correlation attacks over extension fields, large-unit linear approximation and cryptanalysis of SNOW 2.0. In: Gennaro R., Robshaw M. (eds.) CRYPTO 2015, Part I. LNCS, vol. 9215, pp. 643–662. Springer, Berlin (2015).

  39. Zhou Z., Feng D., Zhang B.: Efficient and extensive search for precise linear approximations with high correlations of full SNOW-V. Des. Codes Cryptogr. 90(10), 2449–2479 (2022). https://doi.org/10.1007/s10623-022-01090-8.



Acknowledgements

We wish to thank the anonymous reviewers and the editors for their insightful and instructive suggestions that improved the technical as well as editorial quality of this paper. This work is supported by the National Natural Science Foundation of China (Grant Nos. 62202062, 62002024).

Corresponding author

Correspondence to Yonglin Hao.

Communicated by M. Naya-Plasencia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Overall schematic of SNOW-V/Vi, SNOW 3G and SNOW 2.0

Fig. 6: The keystream generation phase of the SNOW-V stream cipher

Fig. 7: The keystream generation phase of SNOW 3G

Fig. 8: The keystream generation phase of SNOW 2.0

MILP model construction of common operations

According to Sect. 2.4, basic linear operations can be perfectly described with MILP model constraints. For \(x_0, x_1, y\in \mathbb {F}_2\), an XOR operation \((x_0, x_1)\rightarrow y=x_0\oplus x_1\) can be represented by the linear constraints in Eq. (B1)

$$\begin{aligned} \mathcal {M}.\texttt{con}\leftarrow \left\{ \begin{aligned} x_0 + x_1 - y \ge 0\\ x_0 - x_1 + y \ge 0\\ -x_0 + x_1 + y \ge 0\\ x_0 + x_1 + y \le 2 \end{aligned} \right. \end{aligned}$$
(B1)

and we may simplify the representation of such linear constraints as Eq. (B2).

$$\begin{aligned} \mathcal {M}.\texttt{con}\leftarrow y=x_0 \oplus x_1 \end{aligned}$$
(B2)
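As a quick sanity check, the feasible 0-1 points of the four inequalities in Eq. (B1) can be enumerated and compared with the XOR relation. The following is a minimal illustrative Python sketch, not part of the paper's tooling:

```python
from itertools import product

# Feasible 0-1 points of the four linear inequalities in Eq. (B1)
feasible = {
    (x0, x1, y)
    for x0, x1, y in product((0, 1), repeat=3)
    if x0 + x1 - y >= 0
    and x0 - x1 + y >= 0
    and -x0 + x1 + y >= 0
    and x0 + x1 + y <= 2
}

# The XOR relation y = x0 XOR x1
xor_points = {(x0, x1, x0 ^ x1) for x0, x1 in product((0, 1), repeat=2)}

assert feasible == xor_points  # the four inequalities describe XOR exactly
```

The check confirms that the inequalities admit exactly the four XOR-consistent triples and no others, which is why Eq. (B2) is a faithful shorthand.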

For a 0-1 matrix \(M\in \mathbb {F}_2^{m\times n}\), the (column) vectors \((\varvec{x}, \varvec{y})\in \mathbb {F}_2^n \times \mathbb {F}_2^m\) satisfying \(\varvec{y} =M\varvec{x}\) can be regarded as a composition of XORs, which is described with the MILP model \(\mathcal {M}\) in the form of Eq. (B3).

$$\begin{aligned} \left\{ \begin{aligned}&\mathcal {M}.\texttt{var} \leftarrow \varvec{x}, \varvec{y} \text { as binaries}\\&\mathcal {M}.\texttt{con} \leftarrow \varvec{y}=M\vert _{\oplus }\varvec{x} \end{aligned} \right. \end{aligned}$$
(B3)
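The row-wise view of \(\varvec{y}=M\varvec{x}\) as a composition of XORs can be sketched as follows; the helper name `apply_gf2` and the toy matrix are our own illustration, not from the paper:

```python
def apply_gf2(M, x):
    """y = M x over GF(2): each output bit is the XOR (parity) of the
    input bits selected by the corresponding row of M, as in Eq. (B3)."""
    return [sum(m & xi for m, xi in zip(row, x)) % 2 for row in M]

# toy 0-1 matrix M in F_2^{2x3}
M = [[1, 0, 1],
     [1, 1, 0]]

assert apply_gf2(M, [1, 1, 1]) == [0, 0]  # each row XORs to 0
assert apply_gf2(M, [1, 0, 0]) == [1, 1]  # first input bit feeds both rows
```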

1.1 Bitwise linear propagation rules

For the XOR operation \(\varvec{y}=\varvec{x}_0 \oplus \varvec{x}_1\), the available linear masks satisfy \( \Gamma _{\varvec{y}}=\Gamma _{\varvec{x}_0} = \Gamma _{\varvec{x}_1} \), which is a straightforward linear constraint in the MILP model as well. For the branch operation \(\varvec{x}\xrightarrow {\texttt{branch}}(\varvec{y}_0, \varvec{y}_1)=(\varvec{x}, \varvec{x})\), the corresponding linear masks \((\Gamma _{\varvec{x}}, \Gamma _{\varvec{y}_0}, \Gamma _{\varvec{y}_1})\) satisfy Eq. (B4). According to Eqs. (B1) and (B2), Eq. (B4) is also a straightforward linear constraint in the MILP model.

$$\begin{aligned} \Gamma _{\varvec{x}}=\Gamma _{\varvec{y}_0} \oplus \Gamma _{\varvec{y}_1} \end{aligned}$$
(B4)

For linear \(\mathbb {F}_2^n\rightarrow \mathbb {F}_2^m\) transformation \(\varvec{y} = M\varvec{x}\), the available linear masks \((\Gamma _{\varvec{x}}, \Gamma _{\varvec{y}})\) satisfy \( \Gamma _{\varvec{x}} =M^T \Gamma _{\varvec{y}} \) which, according to Eq. (B3), can be captured by the MILP model constraint

$$\begin{aligned} \left\{ \begin{aligned}&\mathcal {M}.\texttt{var} \leftarrow \Gamma _{\varvec{x}}, \Gamma _{\varvec{y}} \text { as binaries}\\&\mathcal {M}.\texttt{con} \leftarrow \Gamma _{\varvec{x}} =M^T\vert _{\oplus } \Gamma _{\varvec{y}} \end{aligned} \right. \end{aligned}$$
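The mask-propagation rule \(\Gamma _{\varvec{x}} = M^T \Gamma _{\varvec{y}}\) can be verified exhaustively on a toy matrix: for every output mask, the transposed-matrix input mask yields a correlation-1 approximation. A minimal illustrative sketch (the matrix and helpers are ours):

```python
from itertools import product

def dot(a, b):
    # inner product over GF(2)
    return sum(ai & bi for ai, bi in zip(a, b)) % 2

def mat_vec(M, x):
    # y = M x over GF(2)
    return [dot(row, x) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

# toy linear map y = M x with M in F_2^{2x3}
M = [[1, 0, 1],
     [0, 1, 1]]

# For every output mask G_y, the input mask G_x = M^T G_y satisfies
# G_y . (M x) = G_x . x for all x (correlation 1)
for gy in product((0, 1), repeat=2):
    gx = mat_vec(transpose(M), list(gy))
    for x in product((0, 1), repeat=3):
        assert dot(list(gy), mat_vec(M, list(x))) == dot(gx, list(x))
```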

For 8-bit S-boxes, according to Sect. 2.4, the input–output masks \((\Gamma _i, \Gamma _o)\) simply share the same truncated linear masks: \(T_i=T_o\). We define Algorithm 11 as the MILP description of the input–output masks of S-boxes.
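Since each 8-bit S-box is a permutation, a linear approximation with a zero mask on exactly one side has correlation zero, which is why the input and output share the same truncated (activity) pattern. A small illustration with a toy 3-bit permutation; the S-box `S` below is an arbitrary example of ours, not one used in SNOW:

```python
def corr(sbox, gin, gout, n=3):
    """Correlation of the linear approximation gin.x = gout.S(x)."""
    def dot(a, x):
        return bin(a & x).count("1") % 2  # parity of masked bits
    s = sum(1 if dot(gin, x) == dot(gout, sbox[x]) else -1
            for x in range(2 ** n))
    return s / 2 ** n

# toy bijective 3-bit S-box (a permutation of 0..7)
S = [3, 6, 0, 5, 7, 1, 4, 2]

# For a permutation, a zero mask on one side forces zero correlation unless
# both masks are zero -- hence the truncated masks satisfy T_i = T_o
for g in range(1, 8):
    assert corr(S, 0, g) == 0.0
    assert corr(S, g, 0) == 0.0
assert corr(S, 0, 0) == 1.0
```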

The \(\texttt{S}_1\) and \(\texttt{S}_2\) operations used in the SNOW 3G (and SNOW 2.0) FSM updating functions can each be regarded as an S-box layer (four 8-bit S-boxes applied in parallel) followed by a linear diffusion using the 0-1 matrices \(M_1, M_2 \in \mathbb {F}_2^{32\times 32}\). With \(M_1\) and \(M_2\) defined in Eqs. (B5) and (B6), we can construct the MILP model for \(\texttt{S}_i\) (\(i=1,2\)) by calling \(\texttt{siModel}\) as in Algorithm 9.

$$\begin{aligned} M_1= \left( \begin{array}{cccccccccccccccccccccccccccccccc} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0\\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 
0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1\\ 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 1\\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1\\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 
0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1\\ 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0\\ 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1\\ 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1\\ 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0\\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0 \end{array} \right) \end{aligned}$$
(B5)
$$\begin{aligned} M_2= \left( \begin{array}{cccccccccccccccccccccccccccccccc} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1 \\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0 \\ 
0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1 \\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1 \\ 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1 \\ 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 1 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1 \\ 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 
1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1 \\ 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 1 \\ 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0 \\ 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 1 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 1 \\ 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0&{} 0&{} 0&{} 0&{} 0&{} 0&{} 1&{} 0 \end{array} \right) \end{aligned}$$
(B6)

\(AES^R(\cdot ) = \texttt{MC} \circ \texttt{SR} \circ \texttt{SB}(\cdot )\), used in the SNOW-V/Vi FSM updating functions, is a permutation on \(\mathbb {F}_2^{128}\). It can also be regarded as an S-box layer \(\texttt{SB}\) followed by the linear layer \(\texttt{MC} \circ \texttt{SR}\): the nonlinear layer \(\texttt{SB}\) applies 16 8-bit S-boxes in parallel, and \(\texttt{MC}\) can be regarded as the \(M_1\) diffusion carried out in parallel. Therefore, the MILP model for \(AES^R\) can be constructed as in Algorithm 8.

Algorithm 8: Model construction for \(AES^R\) (\(\texttt{aesModel}\))

Algorithm 9: Model construction for \(\texttt{S}_i\) (\(\texttt{siModel}\))

Algorithm 10: Model construction of the truncated linear symbol (\(\texttt{actSym}\))

Algorithm 11: Model construction for the 8-bit S-box (\(\texttt{sbox}\))

1.2 Truncated linear propagation rules

It is obvious that \(\Gamma _{\varvec{x}}=\Gamma _{\varvec{y}}\Rightarrow T_{\varvec{x}}=T_{\varvec{y}}\). Therefore, for \(\varvec{y} = \varvec{x}_0 \oplus \varvec{x}_1\), the corresponding truncated linear masks \((T_{\varvec{x}_0}, T_{\varvec{x}_1}, T_{\varvec{y}})\) always satisfy \(T_{\varvec{x}_0}= T_{\varvec{x}_1}= T_{\varvec{y}}\). Let \(\varvec{x}\) be an 8-bit word. The available input–output truncated linear mask of the branch operation \(\varvec{x}\xrightarrow {\texttt{branch}}(\varvec{y}_0, \varvec{y}_1)=(\varvec{x}, \varvec{x})\) consists of the 3 bits \((T_{\varvec{x}}, T_{\varvec{y}_0}, T_{\varvec{y}_1})\). Since \(\Gamma _{\varvec{x}}=\Gamma _{\varvec{y}_0}\oplus \Gamma _{\varvec{y}_1}\), a triple with exactly one active position is impossible. Defining the set \( \mathcal {S}=\{(0,0,0), (1,0,1), (0,1,1), (1,1,0), (1,1,1)\} \subseteq \mathbb {F}_2^3 \), we have \((T_{\varvec{x}}, T_{\varvec{y}_0}, T_{\varvec{y}_1})\in \mathcal {S}\), which can be described by the MILP model constraints in Eq. (B7).

$$\begin{aligned} \mathcal {M}.\texttt{con}\leftarrow \left\{ \begin{aligned} T_{\varvec{x}}+ T_{\varvec{y}_0}- T_{\varvec{y}_1}\ge 0\\ T_{\varvec{x}}- T_{\varvec{y}_0}+ T_{\varvec{y}_1}\ge 0\\ -T_{\varvec{x}}+ T_{\varvec{y}_0}+ T_{\varvec{y}_1}\ge 0 \end{aligned} \right. \end{aligned}$$
(B7)

We simplify Eq. (B7) as Eq. (B8)

$$\begin{aligned} \mathcal {M}.\texttt{con}\leftarrow T_{\varvec{x}}=T_{\varvec{y}_0} \oplus _T T_{\varvec{y}_1} \end{aligned}$$
(B8)
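The three inequalities of Eq. (B7) can be checked to cut off exactly the triples with a single active position, as a small enumeration sketch (illustrative, not from the paper's code) shows:

```python
from itertools import product

# Feasible binary triples (T_x, T_y0, T_y1) of the inequalities in Eq. (B7)
feasible = {
    (tx, t0, t1)
    for tx, t0, t1 in product((0, 1), repeat=3)
    if tx + t0 - t1 >= 0 and tx - t0 + t1 >= 0 and -tx + t0 + t1 >= 0
}

# All triples except those with exactly one nonzero entry: a truncated mask
# cannot be active in a single one of the three branch positions
expected = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)}
assert feasible == expected
```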

Therefore, for \(\varvec{x}\xrightarrow {\texttt{branch}}( \varvec{y}_0, \varvec{y}_1)\) of an arbitrary length, the corresponding truncated linear MILP model can be constructed as Algorithm 12.

Algorithm 12: Truncated linear model of the branch operation (\(\texttt{branchTrunc}\))

For \(\texttt{S}_1\), \(\texttt{S}_2\) and \(AES^R\), the S-box layer does not affect the truncated linear masks, and the effect of the linear layer can be modeled simply with its branch number. Since both \(M_1\) and \(M_2\) have branch number 5, the truncated linear models for \(\texttt{S}_i\) (\(i=1,2\)) and \(AES^R\) can be described with Algorithms 13 and 14 respectively.
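The branch-number condition above admits a compact truncated-mask feasibility check: a linear layer with branch number \(B\) either has all bytes inactive or at least \(B\) active input/output bytes in total. A minimal sketch (the function name `branch_number_ok` is our own, not from the paper):

```python
def branch_number_ok(t_in, t_out, B=5):
    """Feasibility of a truncated mask pair through a linear layer with
    branch number B: either every byte position is inactive, or the total
    number of active input and output bytes is at least B."""
    weight = sum(t_in) + sum(t_out)
    return weight == 0 or weight >= B

# M_1 and M_2 both have branch number 5 on 4-byte words:
assert branch_number_ok([0, 0, 0, 0], [0, 0, 0, 0])      # all inactive
assert branch_number_ok([1, 0, 0, 0], [1, 1, 1, 1])      # 5 active bytes
assert not branch_number_ok([1, 0, 0, 0], [0, 1, 0, 0])  # only 2 active
```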

Algorithm 13: Truncated linear model of \(\texttt{S}_i\) (\(i=1,2\)) (\(\texttt{siTruncModel}\))

Algorithm 14: Truncated linear model of \(AES^R\) (\(\texttt{aesTruncModel}\))

MILP model construction of bitwise breakdown functions

This part describes the model construction process for the \(h_a\), \(f_a\), \(f_1\) and \(f_2\) functions. All MILP model constraints are deduced with the H-representation method of [27] from the available input–output linear masks.

The available input–output linear masks for \(h_a\) and \(f_a\) are given in Tables 2 and 3. The MILP models are constructed with Algorithms 15 and 16.

There are 98 available linear masks \((\Gamma _x,\Gamma _y,\Gamma _w,\Gamma _{ic},\Gamma _z,\Gamma _{oc},\Gamma _{od})\) for \(f_1\) (Table 5), and the corresponding MILP model is constructed as Algorithm 17. For \(f_2\), the 122 available masks \((\Gamma _x,\Gamma _y,\Gamma _w,\Gamma _{ic},\Gamma _{id},\Gamma _z,\Gamma _{oc},\Gamma _{od})\) and the corresponding MILP model are given in Table 6 and Algorithm 18 respectively.

Algorithm 15: Model construction for \(h_a\) (\(\texttt{haModel}\))

Algorithm 16: Model construction for \(f_a\) (\(\texttt{faModel}\))

Algorithm 17: Model construction for \(f_1\) (\(\texttt{f1Model}\))

Algorithm 18: Model construction for \(f_2\) (\(\texttt{f2Model}\))

Table 5 The available linear masks of \(f_1\). We define \(\varvec{\Gamma }=(\Gamma _x,\Gamma _y,\Gamma _w,\Gamma _{ic},\Gamma _z,\Gamma _{oc},\Gamma _{od})\)
Table 6 The available linear masks of \(f_2\). We define \(\varvec{\Gamma }=(\Gamma _x,\Gamma _y,\Gamma _w,\Gamma _{ic},\Gamma _{id},\Gamma _z,\Gamma _{oc},\Gamma _{od})\)
$$\begin{aligned} \underline{A}=\left( \begin{array}{ccccccccc} 0&{}0&{}0&{}0&{}0&{}0&{}1&{}-1&{}0\\ 0&{}0&{}0&{}0&{}0&{}-1&{}0&{}0&{}1\\ 0&{}0&{}0&{}0&{}0&{}1&{}-1&{}1&{}0\\ 0&{}0&{}0&{}0&{}0&{}1&{}0&{}1&{}-1\\ 1&{}1&{}-4&{}1&{}1&{}-1&{}0&{}-1&{}4\\ -1&{}-1&{}4&{}-1&{}-1&{}-1&{}0&{}-1&{}4\\ 2&{}-1&{}2&{}-1&{}-1&{}0&{}-2&{}2&{}3\\ -2&{}1&{}-2&{}1&{}1&{}0&{}-2&{}2&{}3\\ -1&{}-1&{}-1&{}-1&{}-1&{}-1&{}0&{}-2&{}-1\\ 1&{}1&{}1&{}1&{}1&{}-1&{}0&{}-2&{}-1\\ 2&{}2&{}-1&{}-1&{}-1&{}0&{}-2&{}2&{}3\\ -1&{}-1&{}2&{}2&{}-1&{}0&{}-2&{}2&{}3\\ 1&{}-2&{}-2&{}1&{}1&{}0&{}-2&{}2&{}3\\ -2&{}1&{}1&{}-2&{}1&{}0&{}-2&{}2&{}3 \end{array} \right) , \quad \varvec{\underline{\alpha }}= \left( \begin{array}{c} 0\\ 0\\ 0\\ 0\\ 1\\ 1\\ 0\\ 1\\ 7\\ 2\\ 0\\ 0\\ 1\\ 1 \end{array} \right) \end{aligned}$$
(C13)
$$\begin{aligned} \overline{A}= \left( \begin{array}{ccccccccc} 0&{}0&{}0&{}0&{}0&{}0&{}1&{}-1&{}0\\ -1&{}-1&{}-1&{}-1&{}4&{}-1&{}-6&{}5&{}-1\\ 1&{}1&{}1&{}1&{}-4&{}-1&{}-6&{}5&{}-1\\ -2&{}2&{}0&{}2&{}-2&{}1&{}0&{}1&{}3\\ 0&{}-2&{}2&{}-2&{}2&{}1&{}0&{}1&{}3\\ 0&{}0&{}0&{}0&{}0&{}1&{}0&{}1&{}-1\\ 2&{}0&{}-2&{}-2&{}2&{}1&{}0&{}1&{}3\\ 2&{}-2&{}-2&{}2&{}0&{}1&{}0&{}1&{}3\\ 2&{}2&{}2&{}-4&{}-4&{}1&{}0&{}1&{}5\\ -1&{}-1&{}-1&{}-1&{}-1&{}-1&{}-3&{}1&{}-2\\ 1&{}1&{}1&{}1&{}1&{}-1&{}-3&{}1&{}-2\\ -2&{}-2&{}1&{}1&{}1&{}0&{}-1&{}1&{}2\\ -1&{}2&{}-1&{}-1&{}2&{}0&{}-1&{}1&{}2\\ 4&{}-2&{}-2&{}4&{}-2&{}1&{}0&{}1&{}5\\ 0&{}0&{}0&{}0&{}0&{}1&{}0&{}-1&{}1\\ 0&{}0&{}0&{}0&{}0&{}-1&{}0&{}1&{}1 \end{array} \right) , \quad \varvec{\overline{\alpha }}= \left( \begin{array}{c} 0\\ 6\\ 6\\ 0\\ 0\\ 0\\ 0\\ 0\\ 2\\ 8\\ 3\\ 2\\ 1\\ 0\\ 0\\ 0 \end{array}\right) \end{aligned}$$
(C14)
$$\begin{aligned} \underline{B}= \left( \begin{array}{cccccccccc} 5&{}-1&{}-1&{}-1&{}-1&{}-1&{}0&{}1&{}-1&{}4\\ -5&{}1&{}1&{}1&{}1&{}1&{}0&{}1&{}-1&{}4\\ 0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}0&{}-1\\ 0&{}0&{}0&{}0&{}0&{}0&{}0&{}1&{}-1&{}0\\ -1&{}-1&{}3&{}-1&{}3&{}-1&{}0&{}-3&{}3&{}4\\ 1&{}1&{}-3&{}1&{}-3&{}1&{}0&{}-3&{}3&{}4\\ -1&{}-1&{}3&{}3&{}-1&{}-1&{}0&{}-3&{}3&{}4\\ -1&{}-1&{}-1&{}3&{}3&{}-1&{}0&{}-3&{}3&{}4\\ 1&{}1&{}1&{}1&{}1&{}1&{}0&{}-1&{}-1&{}-2\\ 1&{}1&{}1&{}-3&{}-3&{}1&{}0&{}-3&{}3&{}4\\ 1&{}1&{}-3&{}-3&{}1&{}1&{}0&{}-3&{}3&{}4\\ -1&{}-1&{}-1&{}-1&{}-1&{}-1&{}0&{}-1&{}-1&{}-2\\ 1&{}-1&{}1&{}-1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}1&{}-1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ 1&{}-1&{}-1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}-1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}-1&{}1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}-1&{}-1&{}1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}1&{}-1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ -1&{}-1&{}1&{}1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}-1&{}1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}1&{}-1&{}-1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ 1&{}1&{}-1&{}1&{}-1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}-1&{}-1&{}1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}-1&{}-1&{}-1&{}1&{}1&{}0&{}-1&{}1&{}2\\ -1&{}-1&{}1&{}-1&{}1&{}1&{}0&{}-1&{}1&{}2\\ -1&{}1&{}1&{}1&{}-1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}-1&{}1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}2\\ -1&{}-1&{}-1&{}1&{}1&{}1&{}0&{}-1&{}1&{}2\\ 1&{}-1&{}1&{}1&{}-1&{}-1&{}0&{}-1&{}1&{}2\\ 1&{}1&{}1&{}-1&{}-1&{}-1&{}0&{}-1&{}1&{}2\\ 0&{}0&{}0&{}0&{}0&{}0&{}-1&{}0&{}1&{}1\\ 1&{}1&{}1&{}1&{}-1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}-1&{}1&{}1&{}1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}1&{}-1&{}1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}1&{}1&{}1&{}-1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}-1&{}1&{}1&{}1&{}0&{}1&{}-1&{}0\\ -1&{}-1&{}-1&{}-1&{}5&{}-1&{}0&{}1&{}-1&{}4\\ -1&{}5&{}-1&{}-1&{}-1&{}-1&{}0&{}1&{}-1&{}4\\ -1&{}-1&{}-1&{}-1&{}-1&{}5&{}0&{}1&{}-1&{}4\\ -1&{}-1&{}1&{}-1&{}-1&{}-1&{}0&{}1&{}-1&{}0\\ -1&{}-1&{}-1&{}5&{}-1&{}-1&{}0&{}1&{}-1&{}4 \end{array} \right) , \; 
\varvec{\underline{\beta }}= \left( \begin{array}{c} 0\\ 0\\ 0\\ 0\\ 0\\ 2\\ 0\\ 0\\ 2\\ 2\\ 2\\ 8\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 4\\ 0 \end{array} \right) \end{aligned}$$
(C15)
$$\begin{aligned} \overline{B}= \left( \begin{array}{cccccccccc} 0&{}0&{}0&{}0&{}0&{}0&{}-1&{}1&{}-1&{}1\\ -1&{}5&{}-1&{}-1&{}-1&{}-1&{}0&{}0&{}-1&{}5\\ 0&{}0&{}0&{}0&{}0&{}0&{}1&{}1&{}0&{}-1\\ 1&{}-5&{}1&{}1&{}1&{}1&{}0&{}0&{}-1&{}5\\ -1&{}-1&{}3&{}-1&{}3&{}-1&{}0&{}-3&{}3&{}1\\ 1&{}-3&{}-3&{}1&{}1&{}1&{}0&{}-3&{}3&{}1\\ 3&{}3&{}-1&{}-1&{}-1&{}-1&{}0&{}-3&{}3&{}1\\ -3&{}1&{}1&{}1&{}-3&{}1&{}0&{}-3&{}3&{}1\\ 1&{}1&{}1&{}1&{}1&{}1&{}0&{}-1&{}-1&{}-1\\ -1&{}-1&{}-1&{}-1&{}-1&{}-1&{}0&{}-1&{}-1&{}-1\\ 1&{}-1&{}1&{}1&{}-1&{}-1&{}0&{}-1&{}1&{}1\\ -1&{}1&{}-1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}1\\ 1&{}-1&{}1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}1\\ -1&{}1&{}-1&{}-1&{}1&{}1&{}0&{}-1&{}1&{}1\\ 1&{}1&{}-1&{}-1&{}1&{}-1&{}0&{}0&{}0&{}2\\ 1&{}1&{}-1&{}1&{}-1&{}-1&{}0&{}0&{}0&{}2\\ -1&{}-1&{}1&{}-1&{}1&{}1&{}0&{}0&{}0&{}2\\ 1&{}1&{}1&{}-1&{}-1&{}-1&{}0&{}0&{}0&{}2\\ -1&{}-1&{}1&{}1&{}-1&{}1&{}0&{}0&{}0&{}2\\ 1&{}-1&{}-1&{}-1&{}1&{}1&{}0&{}0&{}0&{}2\\ -1&{}1&{}1&{}1&{}-1&{}-1&{}0&{}-1&{}1&{}1\\ -1&{}-1&{}1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}1\\ 1&{}-1&{}-1&{}1&{}1&{}-1&{}0&{}-1&{}1&{}1\\ 1&{}-1&{}1&{}-1&{}1&{}-1&{}0&{}-1&{}1&{}1\\ -1&{}1&{}-1&{}1&{}-1&{}1&{}0&{}-1&{}1&{}1\\ -1&{}1&{}1&{}-1&{}1&{}-1&{}0&{}-1&{}1&{}1\\ -1&{}1&{}1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}1\\ 1&{}-1&{}-1&{}1&{}-1&{}1&{}0&{}-1&{}1&{}1\\ 1&{}1&{}-1&{}-1&{}-1&{}1&{}0&{}-1&{}1&{}1\\ -1&{}-1&{}-1&{}1&{}1&{}1&{}0&{}-1&{}1&{}1\\ -1&{}-1&{}-1&{}1&{}-1&{}-1&{}-1&{}1&{}-1&{}1\\ 1&{}-1&{}-1&{}-1&{}-1&{}-1&{}-1&{}1&{}-1&{}1\\ -1&{}-1&{}-1&{}-1&{}1&{}-1&{}-1&{}1&{}-1&{}1\\ -1&{}-1&{}-1&{}-1&{}-1&{}1&{}-1&{}1&{}-1&{}1\\ -1&{}1&{}-1&{}-1&{}-1&{}-1&{}-1&{}1&{}-1&{}1\\ -1&{}-1&{}1&{}-1&{}-1&{}-1&{}-1&{}1&{}-1&{}1\\ 1&{}1&{}1&{}1&{}1&{}-1&{}0&{}1&{}-1&{}0\\ -1&{}1&{}1&{}1&{}1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}-1&{}1&{}1&{}1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}1&{}1&{}-1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}-1&{}1&{}1&{}1&{}0&{}1&{}-1&{}0\\ 1&{}1&{}1&{}-1&{}1&{}1&{}-1&{}1&{}-1&{}1 \end{array} \right) , \; \varvec{\overline{\beta }}= \left( \begin{array}{c} 0\\ 0\\ 0\\ 
0\\ 3\\ 5\\ 3\\ 5\\ 1\\ 7\\ 1\\ 1\\ 1\\ 1\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 4\\ 4\\ 4\\ 4\\ 4\\ 4\\ 0\\ 0\\ 0\\ 0\\ 0\\ 0 \end{array} \right) \end{aligned}$$
(C16)

Truncated linear MILP model construction of bytewise breakdown functions

This part describes the truncated linear propagation model construction for the \(h_b\), \(f_b\), \(h_{b2}\) and \(f_{b2}\) functions.

The available input–output truncated linear masks for \(h_b\) and \(f_b\) are given in Tables 7 and 8. The MILP models are constructed accordingly as Algorithms 21 and 22. Note that the entries in Table 2 for \(h_{a}\) are identical to those in Table 7 for \(h_{b}\); therefore, we can directly use Algorithm 15 as a subroutine of Algorithm 21.

For \(h_{b2}\), all 114 available \((T_{{x}},T_{{y}},T_{{w}}, T_{{z}}, T_{oc}, T_{od_0}, T_{od_1})\)’s are listed in Table 9 and the MILP model can be constructed with Algorithm 19. As to \(f_{b2}\), the 905 \((T_{{x}},T_{{y}},T_{ic}, T_{id_0}, T_{id_1}, T_{{z}}, T_{oc}, T_{od_0}, T_{od_1})\)’s in Table 10 can be modeled with Algorithm 20.

Table 7 The available input–output truncated linear masks for \(h_{b}\)
Table 8 The available input–output truncated linear masks for \(f_{b}\)
Algorithm 19

Truncated Linear Model construction for \(h_{b2}\) (\(\texttt {hb2Model}\))

Algorithm 20

Truncated Linear Model construction for \(f_{b2}\) (\(\texttt {fb2Model}\))

Table 9 The available input–output truncated linear masks for \(h_{b2}\). We define \(\varvec{T}=(T_{{x}},T_{{y}},T_{{w}}, T_{{z}}, T_{oc}, T_{od_0}, T_{od_1})\) and \(\vert \log \underline{\texttt{Cor}}\vert =\max \{ \underline{p}+2\underline{q}\}\)
Table 10 The available input–output truncated linear masks for \(f_{b2}\) and the logarithm of the correlation lower bounds. The lines in bold satisfy the \((T_{oc}, T_{od_0}, T_{od_1})=(0,0,0)\): this is required by Proposition 2 and should be satisfied by the final \(f_{b2}\) call in Eq. (16). We define \(\varvec{T}=(T_{x},T_{y},T_{ic}, T_{id_0}, T_{id_1}, T_{z}, T_{oc}, T_{od_0}, T_{od_1})\) and \(\vert \log \underline{\texttt{Cor}}\vert =\max \{ \underline{p}+2\underline{q}\}\)
Algorithm 21

Truncated Linear Model construction for \(h_b\) (\(\texttt {hbModel}\))

Algorithm 22

Truncated Linear Model construction for \(f_b\) (\(\texttt {fbModel}\))

Details for accomplishing candidate search

When constructing bitwise model \(\mathcal {M}\)’s, we may define parameters \((\underline{P},\overline{P})\) as lower and upper bounds of \(b_{obj}\). By adding the MILP model constraint

$$\begin{aligned} \mathcal {M}.\texttt{con}\leftarrow \underline{P}\le b_{obj}\le \overline{P} \end{aligned}$$

we can quickly acquire the \((\Gamma _{\varvec{\ell }}, \Gamma _{\varvec{z}})\) candidates with \(b_{obj}\in \left[ \underline{P},\overline{P}\right] \). The values of \((\underline{P},\overline{P})\) are set based on the global optimal value of \(\min b_{obj}\), denoted \(P_{\min }\), and on the power of the MILP solver: if \(\overline{P}\) is too large, there may be too many solutions for the solver to terminate in feasible time.
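The effect of such a bound window can be illustrated with a toy brute-force search in plain Python (no MILP solver involved; the objective `b_obj` below is a hypothetical stand-in, not the actual objective of the cipher models):

```python
def b_obj(mask):
    # hypothetical stand-in objective: Hamming weight of an 8-bit mask
    return bin(mask).count("1")

def candidates(p_low, p_high, nbits=8):
    """All non-zero masks whose objective lies in the window [p_low, p_high]."""
    return [m for m in range(1, 1 << nbits) if p_low <= b_obj(m) <= p_high]

# global optimum over non-zero masks, mimicking P_min = min b_obj
p_min = min(b_obj(m) for m in range(1, 1 << 8))
# widening the window admits sub-optimal candidates, just as raising P_bar does
near_optimal = candidates(p_min, p_min + 1)
```

Enumerating a narrow window first (here \([p_{\min }, p_{\min }+1]\)) and widening it afterwards mirrors the strategy of raising \(\overline{P}\) only as far as the solver can still exhaust all solutions.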

The parameter \(\lambda \) in Algorithm 2 is set to \(\lambda =0\) by default.

1.1 Details for SNOW 3G

Truncated and bitwise linear masks of intermediate states satisfy Eq. (E20): the constraints in \(\mathcal {M}_T\) are listed on the left while those for \(\mathcal {M}\) are on the right.

$$\begin{aligned} \left\{ \begin{aligned}&T_{\varvec{u}_0}=T_{\varvec{u}_1}, \, T_{\varvec{v}_0}=T_{\varvec{v}_1}, \, T_{\varvec{w}_0}=T_{\varvec{w}_1}\\&T_{\varvec{u}_1} \xrightarrow {\texttt{S}_1} T_{\varvec{a}}, \, T_{\varvec{a}}\xrightarrow {\texttt{branch}}(T_{\varvec{a}_0}, T_{\varvec{a}_1}) \\&T_{\varvec{z}_{t-1}}=T_{\varvec{v}_0},\, T_{\varvec{z}_{t-1}}=T_{-1}, \\&(T_{14}, T_{\varvec{u}_0})\xrightarrow {\boxplus } T_{\varvec{z}_{t-1}}\\&T_{\varvec{z}_{t}}=T_{0},\, T_{\varvec{z}_{t}}=T_{\varvec{a}_0},\, (T_{15}, T_{\varvec{w}_0})\xrightarrow {\boxplus } T_{\varvec{z}_{t}}\\&T_{\varvec{z}_{t+1}}=T_1, \, T_{\varvec{w}_1}\xrightarrow {\texttt{S}_1} T_{\varvec{z}_{t+1}}, \,T_{\varvec{v}_1}\xrightarrow {\texttt{S}_2} T_5\\&(T_{16}, T_{5}, T_{\varvec{a}_1}) \xrightarrow {\boxplus ^2} T_{\varvec{z}_{t+1}} \end{aligned} \right. \quad \left\{ \begin{aligned}&\Gamma _{\varvec{u}_0}=\Gamma _{\varvec{u}_1}, \Gamma _{\varvec{v}_0}=\Gamma _{\varvec{v}_1}, \Gamma _{\varvec{w}_0}=\Gamma _{\varvec{w}_1}\\&\Gamma _{\varvec{u}_1} \xrightarrow {\texttt{S}_1} \Gamma _{\varvec{a}}, \Gamma _{\varvec{a}}\xrightarrow {\texttt{branch}}(\Gamma _{\varvec{a}_0}, \Gamma _{\varvec{a}_1}) \\&\Gamma _{\varvec{z}_{t-1}}=\Gamma _{\varvec{v}_0}, \Gamma _{\varvec{z}_{t-1}}=\Gamma _{-1},\\&(\Gamma _{14}, \Gamma _{\varvec{u}_0})\xrightarrow {\boxplus } \Gamma _{\varvec{z}_{t-1}}\\&\Gamma _{\varvec{z}_{t}}=\Gamma _{0},\, \Gamma _{\varvec{z}_{t}}=\Gamma _{\varvec{a}_0},\, (\Gamma _{15}, \Gamma _{\varvec{w}_0})\xrightarrow {\boxplus } \Gamma _{\varvec{z}_{t}}\\&\Gamma _{\varvec{z}_{t+1}}=\Gamma _1, \Gamma _{\varvec{w}_1}\xrightarrow {\texttt{S}_1} \Gamma _{\varvec{z}_{t+1}}, \Gamma _{\varvec{v}_1}\xrightarrow {\texttt{S}_2} \Gamma _5\\&(\Gamma _{16}, \Gamma _{5}, \Gamma _{\varvec{a}_1}) \xrightarrow {\boxplus ^2} \Gamma _{\varvec{z}_{t+1}} \end{aligned} \right. \end{aligned}$$
(E20)

For each \(\boxplus ^2\), the \(\texttt{conModAdd}\) call in Algorithm 2 returns \((\varvec{p},\varvec{q})\). According to Sect. 4.1, its contribution to the correlation can be evaluated as \(2^{-(\vert \varvec{p}\vert +2\vert \varvec{q}\vert )}\). We use \(b_{obj}\) in Eq. (E21) to define \(\mathcal {M}.\texttt{obj}\).

$$\begin{aligned} b_{obj}= \sum _{\forall \boxplus } \vert \Gamma _{\varvec{oc}}\vert + \sum _{\forall \boxplus ^2} (\vert \varvec{p}\vert +2\vert \varvec{q}\vert ) + 6(\vert T_{\varvec{u}_1}\vert +\vert T_{\varvec{v}_1}\vert +\vert T_{\varvec{w}_1}\vert ) \end{aligned}$$
(E21)

For \(\mathcal {M}_T\), each \(\boxplus \) and \(\boxplus ^2\) model construction call (Algorithms 3 and 4) returns a \(\varvec{\tau }=(\tau _0,\ldots , \tau _3)\) vector. According to Sect. 4.2, we can define \(t_{obj}\) as Eq. (E22) and set \(\mathcal {M}_T.\texttt{obj}\leftarrow \min t_{obj}\) for searching optimal \((T_{\varvec{\ell }}, T_{\varvec{z}})\)’s.

$$\begin{aligned} t_{obj}= 8\sum _{\forall \boxplus } \sum _{i=0}^3\tau _i + \sum _{\forall \boxplus ^2} (19\tau _0+21\tau _1+21\tau _2+20\tau _3) + 6(\vert T_{\varvec{u}_1}\vert +\vert T_{\varvec{v}_1}\vert +\vert T_{\varvec{w}_1}\vert ) \end{aligned}$$
(E22)

There is only one optimal truncated mask solution for \(\mathcal {M}_T\), with \(t_{obj}=53\). With \(b_{obj}\) in Eq. (E21), we have \(P_{\min }=35\), and the Gurobi solver can exhaust all solutions with \(\overline{P}\le 37\). We can further set \(\underline{P}\) to values \(\ge 38\) so as to acquire more candidates. The best masks in Table 11 are covered with \(\overline{P}=38\).

Table 11 The linear masks for SNOW 3G when \(\Gamma _{\textbf{z}_{2}}=\Gamma _{5}=\texttt {0x1014190f}\) such that \(\vert \texttt{Cor}\vert \ge 2^{-21}\)

1.2 Details for SNOW 2.0

For SNOW 2.0, we can deduce the bitwise MILP model \(\mathcal {M}\) and acquire the \((\Gamma _{\varvec{\ell }}, \Gamma _{\varvec{z}})\)’s directly: \(\mathcal {M}\) is deduced from the intermediate state linear masks satisfying Eq. (E23).

$$\begin{aligned} \left\{ \begin{aligned}&\Gamma _{\varvec{u}_0}=\Gamma _{\varvec{u}_1}, \, \Gamma _{\varvec{v}_0}=\Gamma _{\varvec{v}_1}&\\&\Gamma _{\varvec{z}_{t}}=\Gamma _{\varvec{v}_0},\, \Gamma _{\varvec{z}_{t}}=\Gamma _{0},\, (\Gamma _{15}, \Gamma _{\varvec{u}_0})\xrightarrow {\boxplus } \Gamma _{\varvec{z}_{t}}\\&\Gamma _{\varvec{z}_{t+1}}=\Gamma _1, \, \Gamma _{\varvec{u}_1} \xrightarrow {\texttt{S}_1} \Gamma _{\varvec{z}_{t+1}}, \, (\Gamma _{16}, \Gamma _{5}, \Gamma _{\varvec{v}_1}) \xrightarrow {\boxplus ^2} \Gamma _{\varvec{z}_{t+1}} \end{aligned} \right. \end{aligned}$$
(E23)

Similar to SNOW 3G, \(\mathcal {M}\) for SNOW 2.0 has the objective \(\mathcal {M}.\texttt{obj}\leftarrow \min b_{obj}\) with \(b_{obj}\) defined in Eq. (E24).

$$\begin{aligned} b_{obj}= \sum _{\forall \boxplus } \vert \Gamma _{\varvec{oc}}\vert + \sum _{\forall \boxplus ^2} (\vert \varvec{p}\vert +2\vert \varvec{q}\vert ) + 6\vert T_{\varvec{u}_1}\vert \end{aligned}$$
(E24)

Therefore, the candidate search for SNOW 2.0 is accomplished with a simplified Algorithm 7 that skips all \(\mathcal {M}_T\)-related steps (Steps 2, 3, and 5). According to the solutions of \(\mathcal {M}\), we have \(P_{\min }=21\), and the solver enables us to acquire all solutions with \(b_{obj}\le 24\). We also acquire millions of further candidates by setting \(\underline{P}=25\) and \(\overline{P}\ge 26\). In fact, the best \((\Gamma _{\varvec{\ell }}, \Gamma _{\varvec{z}})\)’s in Table 12 are acquired with \(\overline{P}\le 26\).

Table 12 The linear masks for SNOW 2.0 such that \(\vert \texttt{Cor}\vert \ge 2^{-15}\)

Proof of Theorem 3

Proof

The function A is defined as

$$\begin{aligned} A(Sc_0)=\sum _{Sd_0}\rho _0(Sc_0,Sd_0), \end{aligned}$$

where

$$\begin{aligned} \rho _0(Sc_0,Sd_0)=\prod \nolimits _{j=0}^3 \varvec{U}^{(\varvec{a}^0_j,\varvec{u}^j_0,\varvec{v}^j_0,\varvec{w}^0_j)}[c^j_0\vert d^0_j][0\vert d^0_{j-1}]. \end{aligned}$$

From the definition of the matrices \(\varvec{U}^{(\varvec{a}^0_j,\varvec{u}^j_0,\varvec{v}^j_0,\varvec{w}^0_j)}\) for \(j=0,1,2,3\), we deduce that for any fixed \((\sigma ^0_0,\sigma ^1_0,\sigma ^2_0,\sigma ^3_0)\in {\{0,1\}}^4\), we have

$$\begin{aligned} \begin{aligned} A(\sigma ^0_0,\sigma ^1_0,\sigma ^2_0,\sigma ^3_0)&= \Pr \left\{ \! \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^0_j,\varvec{u}^j_0,\varvec{v}^j_0,\varvec{w}^0_j)}(\cdot )=0,c^0_0=\sigma ^0_0,c^1_0=\sigma ^1_0,c^2_0=\sigma ^2_0,c^3_0=\sigma ^3_0\right\} \\&\quad - \Pr \left\{ \! \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^0_j,\varvec{u}^j_0,\varvec{v}^j_0,\varvec{w}^0_j)}(\cdot )=1,c^0_0=\sigma ^0_0,c^1_0=\sigma ^1_0,c^2_0=\sigma ^2_0,c^3_0=\sigma ^3_0\right\} . \end{aligned} \end{aligned}$$

The function B is defined as

$$\begin{aligned} B(Sc_1)=\sum _{Sd_1}\sum _{Sc_0} A(Sc_0)\cdot \rho _1(Sc_0,Sc_1,Sd_1), \end{aligned}$$

where

$$\begin{aligned} \rho _1(Sc_0,Sc_1,Sd_1)=\prod \nolimits _{j=0}^3 \varvec{U}^{(\varvec{a}^1_j,\varvec{u}^j_1,\varvec{v}^j_1,\varvec{w}^1_j)}[c^j_1\vert d^1_j][c^j_0\vert d^1_{j-1}]. \end{aligned}$$

Similarly we deduce that for any fixed \((\sigma ^0_1,\sigma ^1_1,\sigma ^2_1,\sigma ^3_1)\in {\{0,1\}}^4\),

$$\begin{aligned}\begin{aligned} B(\sigma ^0_1,\sigma ^1_1,\sigma ^2_1,\sigma ^3_1)&=\Pr \left\{ \! \bigoplus _{k=0}^1 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=0,c^0_1=\sigma ^0_1,c^1_1=\sigma ^1_1,c^2_1=\sigma ^2_1,c^3_1=\sigma ^3_1\right\} \\&\quad - \Pr \left\{ \bigoplus _{k=0}^1 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=1,c^0_1=\sigma ^0_1,c^1_1=\sigma ^1_1,c^2_1=\sigma ^2_1,c^3_1=\sigma ^3_1\right\} \end{aligned} \end{aligned}$$

Similarly, from the definitions of the functions C and D respectively, we can deduce that

$$\begin{aligned} \begin{aligned} C(\sigma ^0_2,\sigma ^1_2,\sigma ^2_2,\sigma ^3_2)&=\Pr \left\{ \! \bigoplus _{k=0}^2 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=0,c^0_2=\sigma ^0_2,c^1_2=\sigma ^1_2,c^2_2=\sigma ^2_2,c^3_2=\sigma ^3_2\right\} \\&\quad -\Pr \left\{ \bigoplus _{k=0}^2 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=1,c^0_2=\sigma ^0_2,c^1_2=\sigma ^1_2,c^2_2=\sigma ^2_2,c^3_2=\sigma ^3_2\right\} , \end{aligned} \end{aligned}$$

for any fixed \((\sigma ^0_2,\sigma ^1_2,\sigma ^2_2,\sigma ^3_2)\in {\{0,1\}}^4\), and

$$\begin{aligned} \begin{aligned} D(\sigma ^0_3,\sigma ^1_3,\sigma ^2_3,\sigma ^3_3)&=\Pr \left\{ \! \bigoplus _{k=0}^3 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=0,c^0_3=\sigma ^0_3,c^1_3=\sigma ^1_3,c^2_3=\sigma ^2_3,c^3_3=\sigma ^3_3\right\} \\&\quad -\Pr \left\{ \bigoplus _{k=0}^3 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)} (\cdot )=1,c^0_3=\sigma ^0_3,c^1_3=\sigma ^1_3,c^2_3=\sigma ^2_3,c^3_3=\sigma ^3_3\right\} , \end{aligned} \end{aligned}$$

for any fixed \((\sigma ^0_3,\sigma ^1_3,\sigma ^2_3,\sigma ^3_3)\in {\{0,1\}}^4\).

Finally, we derive

$$\begin{aligned} \begin{aligned} \sum _{Sc_3} D(Sc_3)&=\sum _{\sigma ^0_3\in \{0,1\}} \sum _{\sigma ^1_3\in \{0,1\}} \sum _{\sigma ^2_3\in \{0,1\}} \sum _{\sigma ^3_3\in \{0,1\}} D(\sigma ^0_3,\sigma ^1_3,\sigma ^2_3,\sigma ^3_3)\\&=\Pr \left\{ \!\bigoplus _{k=0}^3 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)}(\cdot )=0 \right\} -\Pr \left\{ \!\bigoplus _{k=0}^3 \bigoplus _{j=0}^3 \bar{f}^{(\varvec{a}^k_j,\varvec{u}^j_k,\varvec{v}^j_k,\varvec{w}^k_j)}(\cdot )=1 \right\} \\&=\texttt{Cor}( (\varvec{U},\varvec{V},\varvec{W})\xrightarrow {\mathcal {F}} \varvec{A} ). \end{aligned} \end{aligned}$$

Thus we complete the proof. \(\square \)

Detailed process for carrying out Step 1 to Step 5

In the following part, we describe our strategy for carrying out Step 1 to Step 4 in turn; the final Step 5 is the summation over \(Sc_3\) described at the end of this part.

  • Step 1: Let us recall that the expression for \(A(Sc_0)\) is

    $$\begin{aligned} A(Sc_0)=\sum _{Sd_0}\rho _0(Sc_0,Sd_0), \end{aligned}$$

    where

    $$\begin{aligned} \rho _0(Sc_0,Sd_0)=\prod _{j=0}^3 \varvec{U}^{(\varvec{a}^0_j,\varvec{u}^j_0,\varvec{v}^j_0,\varvec{w}^0_j)}[c^j_0\vert d^0_j][0\vert d^0_{j-1}]. \end{aligned}$$

    To compute \(A(Sc_0)\) for all the \(2^4\) combinations of involved local carries in \(Sc_0\), we do the following:

    1.1

      Compute \(A_0(c^0_0,c^1_0,d^0_1)\triangleq \sum _{d^0_0}\varvec{U}^{(\varvec{a}^0_0,\varvec{u}^0_0,\varvec{v}^0_0,\varvec{w}^0_0)}[c^0_0\vert d^0_0][0\vert 0] \cdot \varvec{U}^{(\varvec{a}^0_1,\varvec{u}^1_0,\varvec{v}^1_0,\varvec{w}^0_1)}[c^1_0\vert d^0_1][0\vert d^0_0]\) for all the \(2^3\) choices of \((c^0_0,c^1_0,d^0_1)\). The time complexity is \(\mathcal {O}(2^4)\) and the memory complexity is \(\mathcal {O}(2^3)\).

    1.2

      Compute \(A_1(c^0_0,c^1_0,c^2_0,d^0_2)\triangleq \sum _{d^0_1} A_0(c^0_0,c^1_0,d^0_1) \cdot \varvec{U}^{(\varvec{a}^0_2,\varvec{u}^2_0,\varvec{v}^2_0,\varvec{w}^0_2)}[c^2_0\vert d^0_2][0\vert d^0_1]\) for all the \(2^4\) choices of \((c^0_0,c^1_0,c^2_0,d^0_2)\). The time complexity is \(\mathcal {O}(2^5)\) and the memory complexity is \(\mathcal {O}(2^4)\).

    1.3

      Compute \(A_2(c^0_0,c^1_0,c^2_0,c^3_0,d^0_3)\triangleq \sum _{d^0_2} A_1(c^0_0,c^1_0,c^2_0,d^0_2) \cdot \varvec{U}^{(\varvec{a}^0_3,\varvec{u}^3_0,\varvec{v}^3_0,\varvec{w}^0_3)}[c^3_0\vert d^0_3][0\vert d^0_2]\) for all the \(2^5\) choices of \((c^0_0,c^1_0,c^2_0,c^3_0,d^0_3)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    1.4

      Compute \(A(Sc_0)=\sum _{d^0_3} A_2(c^0_0,c^1_0,c^2_0,c^3_0,d^0_3)\) for all the \(2^4\) choices in \((Sc_0)\), i.e., \((c^0_0,c^1_0,c^2_0,c^3_0)\). The time complexity is \(\mathcal {O}(2^5)\) and the memory complexity is \(\mathcal {O}(2^4)\).

    • Complexity of Step 1: The total time complexity of Step 1 is around \(\mathcal {O}(2^{7.17})\).
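Steps 1.1–1.4 are simply a staged evaluation of the sum over \(Sd_0\), pushing each sum over \(d^0_j\) inward one factor at a time. The following sketch uses random stand-in \(4\times 4\) matrices (with the index \([c\vert d]\) encoded as \(2c+d\)) to check that the staged evaluation agrees with the direct enumeration of all carries:

```python
import random
from itertools import product

random.seed(1)
# four stand-in 4x4 matrices U_j, indexed as U[j][2*c + d][2*c' + d']
U = [[[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
     for _ in range(4)]

def A_direct(c):
    """Sum rho_0 over all 2^4 choices of (d0, d1, d2, d3), with d_{-1} = 0."""
    total = 0.0
    for d in product((0, 1), repeat=4):
        prod = U[0][2 * c[0] + d[0]][0]              # column index [0|0]
        for j in (1, 2, 3):
            prod *= U[j][2 * c[j] + d[j]][d[j - 1]]  # column index [0|d_{j-1}]
        total += prod
    return total

def A_staged(c):
    """Steps 1.1-1.4: carry a small table keyed by the last local carry d_j."""
    acc = {d0: U[0][2 * c[0] + d0][0] for d0 in (0, 1)}
    for j in (1, 2, 3):
        acc = {dj: sum(acc[dp] * U[j][2 * c[j] + dj][dp] for dp in (0, 1))
               for dj in (0, 1)}
    return sum(acc.values())  # Step 1.4: final sum over d_3
```

The staged version touches each matrix entry a constant number of times, which is where the \(\mathcal {O}(2^{7.17})\) total (instead of \(2^4\) full products per \(Sc_0\)) comes from.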

  • Step 2: The expression for \(B(Sc_1)\) is

    $$\begin{aligned} B(Sc_1)=\sum _{Sd_1}\sum _{Sc_0} A(Sc_0)\cdot \rho _1(Sc_0,Sc_1,Sd_1), \end{aligned}$$

    where

    $$\begin{aligned} \rho _1(Sc_0,Sc_1,Sd_1)=\prod _{j=0}^3 \varvec{U}^{(\varvec{a}^1_j,\varvec{u}^j_1,\varvec{v}^j_1,\varvec{w}^1_j)}[c^j_1\vert d^1_j][c^j_0\vert d^1_{j-1}]. \end{aligned}$$

    We describe the process for computing \(B(Sc_1)\) for all \(2^4\) combinations of involved local carries in \(Sc_1\) as follows.

    2.1

      Compute \(B_0(c^1_0,c^2_0,c^3_0,c^0_1,d^1_0)\triangleq \sum _{c^0_0} A(c^0_0,c^1_0,c^2_0,c^3_0)\cdot \varvec{U}^{(\varvec{a}^1_0,\varvec{u}^0_1,\varvec{v}^0_1,\varvec{w}^1_0)}[c^0_1\vert d^1_0][c^0_0\vert 0]\) for all the \(2^5\) choices of \((c^1_0,c^2_0,c^3_0,c^0_1,d^1_0)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    2.2

      Compute \(B_1(c^1_0,c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\!\triangleq \!\sum _{d^1_0} B_0(c^1_0,c^2_0,c^3_0,c^0_1,d^1_0)\!\cdot \! \varvec{U}^{(\varvec{a}^1_1,\varvec{u}^1_1,\varvec{v}^1_1,\varvec{w}^1_1)}[c^1_1\vert d^1_1][c^1_0\vert d^1_0]\) for all the \(2^6\) choices of \((c^1_0,c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    2.3

      Compute \(B_2(c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\triangleq \sum _{c^1_0} B_1(c^1_0,c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\) for all the \(2^5\) choices of \((c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    2.4

      Compute \(B_3(c^2_0,c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\!\triangleq \!\sum _{d^1_1} B_2(c^2_0,c^3_0,c^0_1,c^1_1,d^1_1)\!\cdot \! \varvec{U}^{(\varvec{a}^1_2,\varvec{u}^2_1,\varvec{v}^2_1,\varvec{w}^1_2)}[c^2_1\vert d^1_2][c^2_0\vert d^1_1]\) for all the \(2^6\) choices of \((c^2_0,c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    2.5

      Compute \(B_4(c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\triangleq \sum _{c^2_0} B_3(c^2_0,c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\) for all the \(2^5\) choices of \((c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    2.6

      Compute \(B_5(c^3_0,c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\!\triangleq \!\sum _{d^1_2} B_4(c^3_0,c^0_1,c^1_1,c^2_1,d^1_2)\!\cdot \! \varvec{U}^{(\varvec{a}^1_3,\varvec{u}^3_1,\varvec{v}^3_1,\varvec{w}^1_3)}[c^3_1\vert d^1_3][c^3_0\vert d^1_2]\) for all the \(2^6\) choices of \((c^3_0,c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    2.7

      Compute \(B_6(c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\!\triangleq \! \sum _{c^3_0} B_5(c^3_0,c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\) for all the \(2^5\) choices of \((c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    2.8

      Compute \(B(Sc_1)=\sum _{d^1_3} B_6(c^0_1,c^1_1,c^2_1,c^3_1,d^1_3)\) for all the \(2^4\) choices in \((Sc_1)\), i.e., \((c^0_1,c^1_1,c^2_1,c^3_1)\). The time complexity is \(\mathcal {O}(2^5)\) and the memory complexity is \(\mathcal {O}(2^4)\).

    • Complexity of Step 2: The total time complexity of Step 2 is around \(\mathcal {O}(2^{9.39})\).

  • Step 3: The expression for \(C(Sc_2)\) is

    $$\begin{aligned} C(Sc_2)=\sum _{Sd_2}\sum _{Sc_1} B(Sc_1)\cdot \rho _2(Sc_1,Sc_2,Sd_2), \end{aligned}$$

    where

    $$\begin{aligned} \rho _2(Sc_1,Sc_2,Sd_2)=\prod _{j=0}^3 \varvec{U}^{(\varvec{a}^2_j,\varvec{u}^j_2,\varvec{v}^j_2,\varvec{w}^2_j)}[c^j_2\vert d^2_j][c^j_1\vert d^2_{j-1}]. \end{aligned}$$

    The computation of \(C(Sc_2)\) for all \(2^4\) combinations of involved local carries in \(Sc_2\) is carried out according to the following steps.

    3.1

      Compute \(C_0(c^1_1,c^2_1,c^3_1,c^0_2,d^2_0)\triangleq \sum _{c^0_1} B(c^0_1, c^1_1,c^2_1,c^3_1)\cdot \varvec{U}^{(\varvec{a}^2_0,\varvec{u}^0_2,\varvec{v}^0_2,\varvec{w}^2_0)}[c^0_2\vert d^2_0][c^0_1\vert 0]\) for all the \(2^5\) choices of \((c^1_1,c^2_1,c^3_1,c^0_2,d^2_0)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    3.2

      Compute \(C_1(c^1_1,c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\!\triangleq \! \sum _{d^2_0} C_0(c^1_1,c^2_1,c^3_1,c^0_2,d^2_0)\!\cdot \! \varvec{U}^{(\varvec{a}^2_1,\varvec{u}^1_2,\varvec{v}^1_2,\varvec{w}^2_1)}[c^1_2\vert d^2_1][c^1_1\vert d^2_0]\) for all the \(2^6\) choices of \((c^1_1,c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    3.3

      Compute \(C_2(c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\triangleq \sum _{c^1_1} C_1(c^1_1,c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\) for all the \(2^5\) choices of \((c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    3.4

      Compute \(C_3(c^2_1,c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\!\triangleq \! \sum _{d^2_1} C_2(c^2_1,c^3_1,c^0_2,c^1_2,d^2_1)\!\cdot \! \varvec{U}^{(\varvec{a}^2_2,\varvec{u}^2_2,\varvec{v}^2_2,\varvec{w}^2_2)}[c^2_2\vert d^2_2][c^2_1\vert d^2_1]\) for all the \(2^6\) choices of \((c^2_1,c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    3.5

      Compute \(C_4(c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\triangleq \sum _{c^2_1} C_3(c^2_1,c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\) for all the \(2^5\) choices of \((c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    3.6

      Compute \(C_5(c^3_1,c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\!\triangleq \! \sum _{d^2_2} C_4(c^3_1,c^0_2,c^1_2,c^2_2,d^2_2)\!\cdot \! \varvec{U}^{(\varvec{a}^2_3,\varvec{u}^3_2,\varvec{v}^3_2,\varvec{w}^2_3)}[c^3_2\vert d^2_3][c^3_1\vert d^2_2]\) for all the \(2^6\) choices of \((c^3_1,c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    3.7

      Compute \(C_6(c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\triangleq \sum _{c^3_1} C_5(c^3_1,c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\) for all the \(2^5\) choices of \((c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    3.8

      Compute \(C(Sc_2)= \sum _{d^2_3} C_6(c^0_2,c^1_2,c^2_2,c^3_2,d^2_3)\) for all the \(2^4\) choices in \((Sc_2)\), i.e., \((c^0_2,c^1_2,c^2_2,c^3_2)\). The time complexity is \(\mathcal {O}(2^5)\) and the memory complexity is \(\mathcal {O}(2^4)\).

    • Complexity of Step 3: The total time complexity of Step 3 is around \(\mathcal {O}(2^{9.39})\).

  • Step 4: The expression for \(D(Sc_3)\) is

    $$\begin{aligned} D(Sc_3)=\sum _{Sd_3}\sum _{Sc_2} C(Sc_2)\cdot \rho _3(Sc_2,Sc_3,Sd_3), \end{aligned}$$

    where

    $$\begin{aligned} \rho _3(Sc_2,Sc_3,Sd_3)=\prod _{j=0}^3 \varvec{U}^{(\varvec{a}^3_j,\varvec{u}^j_3,\varvec{v}^j_3,\varvec{w}^3_j)}[c^j_3\vert d^3_j][c^j_2\vert d^3_{j-1}]. \end{aligned}$$

    The computation of \(D(Sc_3)\) for all \(2^4\) combinations of involved local carries in \(Sc_3\) is carried out according to the following steps.

    4.1

      Compute \(D_0(c^1_2,c^2_2,c^3_2,c^0_3,d^3_0)\triangleq \sum _{c^0_2} C(c^0_2, c^1_2,c^2_2,c^3_2)\cdot \varvec{U}^{(\varvec{a}^3_0,\varvec{u}^0_3,\varvec{v}^0_3,\varvec{w}^3_0)}[c^0_3\vert d^3_0][c^0_2\vert 0]\) for all the \(2^5\) choices of \((c^1_2,c^2_2,c^3_2,c^0_3,d^3_0)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    4.2

      Compute \(D_1(c^1_2,c^2_2,c^3_2,c^0_3,c^1_3,d^3_1)\!\triangleq \!\sum _{d^3_0} D_0(c^1_2,c^2_2,c^3_2,c^0_3,d^3_0)\cdot \varvec{U}^{(\varvec{a}^3_1,\varvec{u}^1_3,\varvec{v}^1_3,\varvec{w}^3_1)}[c^1_3\vert d^3_1][c^1_2\vert d^3_0]\) for all the \(2^6\) choices of \((c^1_2,c^2_2,c^3_2,c^0_3,c^1_3,d^3_1)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    4.3

      Compute \(D_2(c^2_2,c^3_2,c^0_3,c^1_3,d^3_1)\triangleq \sum _{c^1_2} D_1(c^1_2,c^2_2,c^3_2,c^0_3,c^1_3,d^3_1)\) for all the \(2^5\) choices of \((c^2_2,c^3_2,c^0_3,c^1_3,d^3_1)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    4.4

      Compute \(D_3(c^2_2,c^3_2,c^0_3,c^1_3,c^2_3,d^3_2)\!\triangleq \!\sum _{d^3_1} D_2(c^2_2,c^3_2,c^0_3,c^1_3,d^3_1) \cdot \varvec{U}^{(\varvec{a}^3_2,\varvec{u}^2_3,\varvec{v}^2_3,\varvec{w}^3_2)}[c^2_3\vert d^3_2][c^2_2\vert d^3_1]\) for all the \(2^6\) choices of \((c^2_2,c^3_2,c^0_3,c^1_3,c^2_3,d^3_2)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    4.5

      Compute \(D_4(c^3_2,c^0_3,c^1_3,c^2_3,d^3_2)\triangleq \sum _{c^2_2} D_3(c^2_2,c^3_2,c^0_3,c^1_3,c^2_3,d^3_2)\) for all the \(2^5\) choices of \((c^3_2,c^0_3,c^1_3,c^2_3,d^3_2)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    4.6

      Compute \(D_5(c^3_2,c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\!\triangleq \!\sum _{d^3_2} D_4(c^3_2,c^0_3,c^1_3,c^2_3,d^3_2) \cdot \varvec{U}^{(\varvec{a}^3_3,\varvec{u}^3_3,\varvec{v}^3_3,\varvec{w}^3_3)}[c^3_3\vert d^3_3][c^3_2\vert d^3_2]\) for all the \(2^6\) choices of \((c^3_2,c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\). The time complexity is \(\mathcal {O}(2^7)\) and the memory complexity is \(\mathcal {O}(2^6)\).

    4.7

      Compute \(D_6(c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\triangleq \sum _{c^3_2} D_5(c^3_2,c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\) for all the \(2^5\) choices of \((c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\). The time complexity is \(\mathcal {O}(2^6)\) and the memory complexity is \(\mathcal {O}(2^5)\).

    4.8

      Compute \(D(Sc_3)= \sum _{d^3_3} D_6(c^0_3,c^1_3,c^2_3,c^3_3,d^3_3)\) for all the \(2^4\) choices in \((Sc_3)\), i.e., \((c^0_3,c^1_3,c^2_3,c^3_3)\). The time complexity is \(\mathcal {O}(2^5)\) and the memory complexity is \(\mathcal {O}(2^4)\).

    • Complexity of Step 4: The total time complexity of Step 4 is around \(\mathcal {O}(2^{9.39})\).

After the values of \(D(Sc_3)\) for all the combinations of involved local carries in \(Sc_3\) are obtained, the exact value of \(\texttt{Cor}( (\varvec{U},\varvec{V},\varvec{W})\xrightarrow {\mathcal {F}} \varvec{A} )\) can be derived according to Theorem 3 as \(\texttt{Cor}( (\varvec{U},\varvec{V},\varvec{W})\xrightarrow {\mathcal {F}} \varvec{A} )=\sum _{Sc_3}D(Sc_3)\) with a time complexity of \(\mathcal {O}(2^4)\). To sum up, the total time complexity for computing \(\texttt{Cor}( (\varvec{U},\varvec{V},\varvec{W})\xrightarrow {\mathcal {F}} \varvec{A} )\) according to Step 1 to Step 5 is around \(2^{7.17}+2^{9.39}+2^{9.39}+2^{9.39}+2^4 \approx 2^{11.09}\), i.e., \(\mathcal {O}(2^{11})\).
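The quoted step totals can be reproduced by direct arithmetic over the per-substep complexities listed above:

```python
from math import log2

step1 = 2**4 + 2**5 + 2**6 + 2**5      # Steps 1.1-1.4
step_k = 4 * 2**6 + 3 * 2**7 + 2**5    # Steps k.1-k.8 for k = 2, 3, 4
total = step1 + 3 * step_k + 2**4      # plus the final sum over Sc_3

print(round(log2(step1), 2))   # Step 1 exponent
print(round(log2(step_k), 2))  # Steps 2-4 exponent
print(round(log2(total), 2))   # overall exponent
```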

Searching for 4-tuples of vectors

Here we present the method in [13, 14] for searching 4-tuples of column vectors of the generator matrix \(\varvec{G}\) which add to 0 on some bits.

Rewriting the matrix \(\varvec{G}\) in column vectors as \(\varvec{G}=({\textbf {g}}_1,{\textbf {g}}_2,\ldots ,{\textbf {g}}_N)\), we try to find XORs of the l-bit column vectors that vanish on some \(l-l'\) bits. Specifically, for SNOW 3G and SNOW 2.0, we look for a number of 4-tuples from \(\varvec{G}\) which add to 0 on their most significant \(l-l'\) bits. As stated in [13, 14], this can be solved using Wagner’s k-tree algorithm [32] combined with a small additional technique. Below we illustrate this process.

Let \(l_1\) and \(l_2\) be two positive integers such that \(l_1+l_2=l-l'\), and \(\texttt {high}_n({\textbf {a}})\) be the value of the vector \({\textbf {a}}\) on the most significant n bits. Collecting the N column vectors of \(\varvec{G}\) in one single list \(\varvec{L}\), we carry out the following two steps:

  • Create a new list \(\varvec{L}_1\) from the original list \(\varvec{L}\), composed of all the XORs of \({\textbf {g}}_{j_1}\) and \({\textbf {g}}_{j_2}\) with \({\textbf {g}}_{j_1}\ne {\textbf {g}}_{j_2}\), \({\textbf {g}}_{j_1}, {\textbf {g}}_{j_2}\in \varvec{L}\), such that \(\texttt {high}_{l_1}({\textbf {g}}_{j_1}\oplus {\textbf {g}}_{j_2})=0\). We say that \(l_1\) bits are eliminated. Regarding the column vectors \({\textbf {g}}_j\), \(j=1,2,\ldots ,N\), as random vectors, \(\varvec{L}_1\) has an expected size of \(m_1\triangleq \left( {\begin{matrix} N \\ 2 \end{matrix}}\right) 2^{-l_1}\thickapprox {N^2}{2^{-(l_1+1)}}\). This step is accomplished by a sort-and-merge procedure: first, sort the N vectors into \(2^{l_1}\) equivalence classes according to their values on the most significant \(l_1\) bits, so that any two vectors in the same class agree on these bits; then, look at each pair of vectors \(({\textbf {g}}_{j_1},{\textbf {g}}_{j_2})\) in each class to create \(\varvec{L}_1\).

  • Create a new list \(\varvec{L}_2\) from \(\varvec{L}_1\) by further eliminating \(l_2\) bits using the same sort-and-merge procedure as that in Step 1. That is, first sort the \(m_1\) vectors in \(\varvec{L}_1\) into \(2^{l_2}\) equivalence classes according to their values on the next most significant \(l_2\) bits, and then look at each pair of vectors in each equivalence class to create \(\varvec{L}_2\). Similarly, the expected number of elements in \(\varvec{L}_2\) is \(m_2\triangleq \left( {\begin{matrix} m_1 \\ 2 \end{matrix}}\right) 2^{-l_2}\thickapprox {m_1^2}{2^{-(l_2+1)}}\).

Following the above steps, we estimate that we obtain about \(m_2\) 4-tuples \(({\textbf {g}}_{j_1},{\textbf {g}}_{j_2},{\textbf {g}}_{j_3},{\textbf {g}}_{j_4})\) such that \(\texttt {high}_{l-l'}({\textbf {g}}_{j_1}\oplus {\textbf {g}}_{j_2}\oplus {\textbf {g}}_{j_3}\oplus {\textbf {g}}_{j_4})=0\), which correspond to \(m_2\) parity checks with correlation \(\alpha ^4\) involving only \(x_0,x_1,...,x_{l'-1}\). The time and memory complexities of the above procedure are essentially proportional to the sizes of the lists that are processed, which can be estimated as \(\mathcal {O}(N+m_1)\).
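The two sort-and-merge steps can be sketched in code. The following is a minimal illustrative Python implementation, not the authors' actual code: columns are modeled as random l-bit integers, the function and variable names (`high`, `find_4tuples`, etc.) are ours, and dictionary bucketing stands in for the sorting step.

```python
import random
from collections import defaultdict

def high(x, l, n):
    """Value of the l-bit vector x on its most significant n bits."""
    return x >> (l - n)

def find_4tuples(cols, l, l1, l2):
    """Two-step Wagner-style merge: return index 4-tuples (j1, j2, j3, j4)
    of distinct columns whose XOR vanishes on the most significant l1+l2 bits."""
    # Step 1: bucket columns by their top l1 bits; pairs within a bucket
    # have XORs that are 0 on those l1 bits.
    buckets = defaultdict(list)
    for j, g in enumerate(cols):
        buckets[high(g, l, l1)].append(j)
    L1 = []  # entries (xor value, (j1, j2))
    for idxs in buckets.values():
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                j1, j2 = idxs[a], idxs[b]
                L1.append((cols[j1] ^ cols[j2], (j1, j2)))
    # Step 2: bucket the pairwise XORs by the next l2 bits (the top l1 bits
    # are already 0, so high(v, l, l1+l2) isolates exactly those l2 bits).
    buckets2 = defaultdict(list)
    for v, pair in L1:
        buckets2[high(v, l, l1 + l2)].append((v, pair))
    tuples = []
    for items in buckets2.values():
        for a in range(len(items)):
            for b in range(a + 1, len(items)):
                (_, p1), (_, p2) = items[a], items[b]
                if set(p1).isdisjoint(p2):  # require four distinct columns
                    tuples.append(p1 + p2)
    return tuples

# Toy usage: l = 16, eliminate l1 + l2 = 8 bits from N = 64 random columns.
random.seed(1)
l, l1, l2 = 16, 4, 4
cols = [random.getrandbits(l) for _ in range(64)]
quads = find_4tuples(cols, l, l1, l2)
```

With these toy parameters the expected list sizes match the formulas above: \(m_1\approx 64^2\cdot 2^{-5}=128\) and \(m_2\approx m_1^2\cdot 2^{-5}\approx 512\), so a few hundred 4-tuples are typically found.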


Cite this article

Gong, X., Hao, Y. & Wang, Q. Combining MILP modeling with algebraic bias evaluation for linear mask search: improved fast correlation attacks on SNOW. Des. Codes Cryptogr. (2024). https://doi.org/10.1007/s10623-024-01362-5
