
On the Usability of Probably Approximately Correct Implication Bases

  • Daniel Borchmann
  • Tom Hanika
  • Sergei Obiedkov
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10308)

Abstract

We revisit the notion of probably approximately correct implication bases from the literature and present a first formulation in the language of formal concept analysis, with the goal of investigating whether such bases represent a suitable substitute for exact implication bases in practical use cases. To this end, we quantitatively examine the behavior of probably approximately correct implication bases on artificial and real-world data sets and compare their precision and recall with respect to their corresponding exact implication bases. Using a small example, we also provide evidence suggesting that implications from probably approximately correct bases can still represent meaningful knowledge from a given data set.
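To make the kind of comparison mentioned above concrete, the following Python sketch shows one way to compute precision and recall of an approximate implication base against an exact one. It is only an illustration under assumed definitions (precision as the fraction of implications in the approximate base that are valid in the formal context, recall as the fraction of exact-base implications entailed by the approximate base); these definitions, the data representation, and all function names are assumptions made here, not the paper's actual tooling, which relied on conexp-clj.

```python
def close(attrs, implications):
    """Close an attribute set under a list of implications (naive forward chaining)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in implications:
            if premise <= closed and not conclusion <= closed:
                closed |= conclusion
                changed = True
    return closed

def holds(implication, context):
    """Check whether an implication A -> B holds in a formal context,
    given here as a list of object intents (attribute sets)."""
    premise, conclusion = implication
    return all(conclusion <= intent for intent in context if premise <= intent)

def follows(implication, implications):
    """Check whether A -> B follows from a set of implications:
    B must be contained in the closure of A under those implications."""
    premise, conclusion = implication
    return conclusion <= close(premise, implications)

def precision_recall(approx_base, exact_base, context):
    """Precision: fraction of approximate-base implications valid in the context.
    Recall: fraction of exact-base implications entailed by the approximate base."""
    precision = sum(holds(imp, context) for imp in approx_base) / len(approx_base)
    recall = sum(follows(imp, approx_base) for imp in exact_base) / len(exact_base)
    return precision, recall

# Toy example with attributes a, b, c and two object intents.
context = [{"a", "b"}, {"a", "b", "c"}]
exact_base = [(frozenset({"a"}), frozenset({"b"})),
              (frozenset({"c"}), frozenset({"a", "b"}))]
approx_base = [(frozenset({"a"}), frozenset({"b"}))]
print(precision_recall(approx_base, exact_base, context))  # (1.0, 0.5)
```

In this toy example the approximate base contains only valid implications (precision 1.0) but entails just one of the two exact-base implications (recall 0.5).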

Keywords

Formal concept analysis · Implications · Query learning · PAC learning

Acknowledgments

Daniel Borchmann gratefully acknowledges support by the Cluster of Excellence “Center for Advancing Electronics Dresden” (cfAED). Sergei Obiedkov received support within the framework of the Basic Research Program at the National Research University Higher School of Economics (HSE) and within the framework of a subsidy by the Russian Academic Excellence Project ‘5-100’. The computations presented in this paper were conducted using conexp-clj, general-purpose software for formal concept analysis (https://github.com/exot/conexp-clj).


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Chair of Automata Theory, Technische Universität Dresden, Dresden, Germany
  2. Knowledge and Data Engineering Group, University of Kassel, Kassel, Germany
  3. Interdisciplinary Research Center for Information System Design, University of Kassel, Kassel, Germany
  4. Faculty of Computer Science, National Research University Higher School of Economics, Moscow, Russian Federation
