Data Mining and Knowledge Discovery, Volume 4, Issue 2, pp 127-162

RainForest—A Framework for Fast Decision Tree Construction of Large Datasets

  • Johannes Gehrke, Department of Computer Sciences, University of Wisconsin-
  • Raghu Ramakrishnan, Department of Computer Sciences, University of Wisconsin-
  • Venkatesh Ganti, Department of Computer Sciences, University of Wisconsin-


Classification of large datasets is an important data mining problem. Many classification algorithms have been proposed in the literature, but studies have shown that so far no algorithm uniformly outperforms all other algorithms in terms of quality. In this paper, we present a unifying framework called RainForest for classification tree construction that separates the scalability aspects of algorithms for constructing a tree from the central features that determine the quality of the tree. The generic algorithm is easy to instantiate with specific split selection methods from the literature (including C4.5, CART, CHAID, FACT, ID3 and extensions, SLIQ, SPRINT and QUEST).

In addition to its generality, in that it yields scalable versions of a wide range of classification algorithms, our approach also offers performance improvements of over a factor of three over the SPRINT algorithm, the fastest scalable classification algorithm proposed previously. In contrast to SPRINT, however, our generic algorithm requires a certain minimum amount of main memory, proportional to the set of distinct values in a column of the input relation. Given current main memory costs, this requirement is readily met in most if not all workloads.
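The memory requirement described above (proportional to the number of distinct values per column) corresponds to maintaining, for each attribute, a compact table of (attribute value, class label) counts; split selection methods can then be evaluated against these tables instead of the full relation. A minimal, illustrative Python sketch of that idea, using the Gini index as one example of a pluggable split criterion (the variable names and the single-value categorical split are assumptions for illustration, not the paper's exact formulation):

```python
from collections import Counter, defaultdict

def build_count_tables(rows, n_attrs):
    # One (value -> class-label counts) table per attribute.
    # Memory is proportional to the number of distinct values per
    # column, matching the requirement stated in the abstract.
    tables = [defaultdict(Counter) for _ in range(n_attrs)]
    for row in rows:
        *attrs, label = row
        for i, v in enumerate(attrs):
            tables[i][v][label] += 1
    return tables

def gini(counts):
    # Gini impurity of a class-label count distribution.
    n = sum(counts.values())
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_single_value_split(table):
    # Illustrative split selection: try splitting off each single
    # attribute value and pick the lowest weighted impurity. Real
    # methods (CART, C4.5, ...) consider richer split families, but
    # all of them can be computed from the same count tables.
    total = Counter()
    for counts in table.values():
        total.update(counts)
    n = sum(total.values())
    best = None
    for value, left in table.items():
        right = total - left
        nl, nr = sum(left.values()), sum(right.values())
        if nl == 0 or nr == 0:
            continue
        score = (nl / n) * gini(left) + (nr / n) * gini(right)
        if best is None or score < best[1]:
            best = (value, score)
    return best

# Tiny example: attribute 0 perfectly predicts the class label,
# so splitting on it yields zero weighted impurity.
rows = [("a", "x", 0), ("a", "y", 0), ("b", "x", 1), ("b", "y", 1)]
tables = build_count_tables(rows, n_attrs=2)
print(best_single_value_split(tables[0]))  # -> ('a', 0.0)
```

The point of the sketch is that the count tables, not the raw tuples, are what a split selection method needs in memory, which is how the framework decouples scalability from the choice of splitting criterion.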

Keywords: data mining, decision trees, classification, scalability