CrossMine: Efficient Classification Across Multiple Database Relations
Most of today’s structured data is stored in relational databases. Such a database consists of multiple relations that are linked together conceptually via entity-relationship links in the design of relational database schemas. Multi-relational classification can be applied in many disciplines, including financial decision making and medical research. However, most classification approaches work only on single “flat” data relations. It is usually difficult to convert multiple relations into a single flat relation without either introducing a huge “universal relation” or losing essential information. Previous work using Inductive Logic Programming approaches (recently also known as Relational Mining) has achieved high accuracy in multi-relational classification. Unfortunately, these approaches fail to achieve high scalability w.r.t. the number of relations in the database because they repeatedly join different relations to search for good literals.
In this paper we propose CrossMine, an efficient and scalable approach for multi-relational classification. CrossMine employs tuple ID propagation, a novel method for virtually joining relations, which enables flexible and efficient search among multiple relations. CrossMine also uses aggregated information to provide essential statistics for classification. A selective sampling method is used to achieve high scalability w.r.t. the number of tuples in the databases. Our comprehensive experiments on both real and synthetic databases demonstrate the high scalability and accuracy of CrossMine.
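The core idea behind tuple ID propagation, as described above, is to attach to each tuple of a non-target relation the set of target-tuple IDs (with their class labels) that reach it along a foreign-key link, so that literals on the non-target relation can be evaluated without physically joining it to the target relation. The sketch below illustrates this idea on a hypothetical two-relation schema (a `Loan` target relation joined to `Account` via `account_id`); the relation names, attributes, and helper functions are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

# Hypothetical mini-database: Loan is the target relation, with a class
# label per tuple; Account joins to Loan via account_id.
loans = [  # (loan_id, account_id, class_label)
    (1, 10, "+"), (2, 10, "-"), (3, 11, "+"),
]
accounts = [  # (account_id, frequency)
    (10, "monthly"), (11, "weekly"),
]

def propagate_ids(target):
    """Attach to each join-key value the set of target tuple IDs
    (with labels) that reach it -- a 'virtual join'."""
    ids_by_key = defaultdict(set)
    for tid, key, label in target:
        ids_by_key[key].add((tid, label))
    return ids_by_key

def evaluate_literal(relation, pred, ids_by_key):
    """Count positive/negative target tuples covered by a literal
    (a predicate on the non-target relation), using only the
    propagated IDs -- no physical join with the target relation."""
    covered = set()
    for key, attr in relation:
        if pred(attr):
            covered |= ids_by_key[key]
    pos = sum(1 for _, lbl in covered if lbl == "+")
    neg = sum(1 for _, lbl in covered if lbl == "-")
    return pos, neg

ids_by_account = propagate_ids(loans)
# Literal Account.frequency = "monthly" reaches loans 1 (+) and 2 (-):
print(evaluate_literal(accounts, lambda f: f == "monthly", ids_by_account))
# -> (1, 1)
```

Because only small sets of IDs are propagated and counted, many candidate literals can be scored against the same propagated IDs, which is the source of the flexibility and efficiency the abstract claims.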
Keywords: Class Label, Inductive Logic Programming, Target Relation, Multiple Relation, Clause Generation