On Minimising Automata with Errors

  • Paweł Gawrychowski
  • Artur Jeż
  • Andreas Maletti
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6907)


The problem of k-minimisation for a DFA M is the computation of a smallest DFA N (where the size |M| of a DFA is the size of the domain of its transition function) such that L(M) Δ L(N) ⊆ Σ^{<k}, which means that their recognised languages differ only on words of length less than k. The previously best algorithm, which runs in time \(\mathcal{O}(|M| \log^{2} n)\) where n is the number of states, is extended to DFAs with partial transition functions. Moreover, a faster \(\mathcal{O}(|M| \log n)\) algorithm for DFAs that recognise finite languages is presented. In comparison to the previous algorithm for total DFAs, the new algorithm is much simpler and allows the computation of a k-minimal DFA for each k in parallel. Secondly, it is demonstrated that computing the least number of introduced errors is hard: given a DFA M and numbers k and m, it is NP-hard to decide whether there exists a k-minimal DFA N with |L(M) Δ L(N)| ≤ m. A similar result holds for hyper-minimisation of DFAs in general: given a DFA M and numbers s and m, it is NP-hard to decide whether there exists a DFA N with at most s states such that |L(M) Δ L(N)| ≤ m.
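The condition L(M) Δ L(N) ⊆ Σ^{<k} can be checked directly on the product of the two automata: the DFAs must agree on every word of length at least k. The sketch below (not the paper's algorithm, and far from its \(\mathcal{O}(|M| \log n)\) efficiency) illustrates the condition for total DFAs over a shared alphabet. It tracks the set of product states reachable by words of exactly length d; since the next level is a function of the current one, the level sequence is eventually periodic, so checking stops at the first repeated level at depth ≥ k. The tuple encoding of a DFA is an assumption for this sketch, not the paper's notation.

```python
def agree_beyond_k(M, N, k):
    """Return True iff L(M) Δ L(N) ⊆ Σ^{<k}, i.e. the two total DFAs
    agree on every word of length >= k.

    Each DFA is a toy triple (delta, initial, finals) where delta maps
    (state, symbol) -> state and is assumed total over a shared alphabet.
    """
    delta_m, init_m, fin_m = M
    delta_n, init_n, fin_n = N
    sigma = {a for (_, a) in delta_m}  # shared alphabet, read off delta_m

    def step(level):
        # Product states reachable by words one letter longer.
        return frozenset((delta_m[p, a], delta_n[q, a])
                         for p, q in level for a in sigma)

    # Skip levels 0 .. k-1: disagreements there are allowed.
    level = frozenset([(init_m, init_n)])
    for _ in range(k):
        level = step(level)

    # From depth k on, every reachable product state must agree on acceptance.
    # Levels repeat eventually, so a revisited level means all longer words
    # have already been covered.
    checked = set()
    while level not in checked:
        if any((p in fin_m) != (q in fin_n) for p, q in level):
            return False
        checked.add(level)
        level = step(level)
    return True


# Toy DFAs over Σ = {'a'}: M accepts every word, N accepts every non-empty word.
M = ({(0, 'a'): 0}, 0, {0})
N = ({(0, 'a'): 1, (1, 'a'): 1}, 0, {1})
print(agree_beyond_k(M, N, 1))  # True: the languages differ only on ε
```

Here N is 1-minimal relative to M only if no smaller DFA satisfies the same condition; the paper's contribution is computing such an N efficiently, which this brute-force check does not attempt.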


Keywords: finite automaton, minimisation, lossy compression





Copyright information

© Springer-Verlag GmbH Berlin Heidelberg 2011

Authors and Affiliations

  • Paweł Gawrychowski¹
  • Artur Jeż¹
  • Andreas Maletti²

  1. Institute of Computer Science, University of Wrocław, Wrocław, Poland
  2. Institute for Natural Language Processing, Universität Stuttgart, Stuttgart, Germany
