Compact Prediction Tree: A Lossless Model for Accurate Sequence Prediction

  • Ted Gueniche
  • Philippe Fournier-Viger
  • Vincent S. Tseng
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8347)


Abstract

Predicting the next item of a sequence over a finite alphabet has important applications in many domains. In this paper, we present a novel prediction model named CPT (Compact Prediction Tree), which losslessly compresses the training data so that all relevant information is available for each prediction. Our approach is incremental, offers a low time complexity for its training phase, and is easily adaptable to different applications and contexts. We compared the performance of CPT with state-of-the-art techniques, namely PPM (Prediction by Partial Matching), DG (Dependency Graph) and All-K-th-Order Markov. Results show that CPT yields higher accuracy on most datasets (up to 12% more than the second-best approach), has a better training time than DG and PPM, and is considerably smaller than All-K-th-Order Markov.
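To make the abstract's idea concrete, the following is a minimal, illustrative sketch of a CPT-style next-item predictor, not the authors' implementation. It keeps the training data losslessly (here as a plain list rather than the paper's compact prediction tree) together with an inverted index from items to the training sequences containing them; to predict, it finds the training sequences "similar" to the target's last items and lets the items that follow them vote. The class name `CPTSketch` and all details below are assumptions for illustration.

```python
from collections import defaultdict, Counter

class CPTSketch:
    """Illustrative CPT-style predictor: lossless storage + inverted index.

    For brevity, sequences are stored verbatim in a list instead of the
    trie-based prediction tree described in the paper.
    """

    def __init__(self):
        self.sequences = []               # lossless store of training sequences
        self.inverted = defaultdict(set)  # item -> ids of sequences containing it

    def train(self, sequence):
        """Incremental training: add one sequence and index its items."""
        sid = len(self.sequences)
        self.sequences.append(list(sequence))
        for item in set(sequence):
            self.inverted[item].add(sid)

    def predict(self, suffix):
        """Predict the item most likely to follow `suffix` (last items of the target)."""
        if not suffix:
            return None
        # Similar sequences: those containing every item of the suffix.
        candidate_ids = set.intersection(
            *(self.inverted.get(item, set()) for item in suffix))
        votes = Counter()
        for sid in candidate_ids:
            seq = self.sequences[sid]
            # Items occurring after the last suffix item in the similar sequence
            last = max(i for i, it in enumerate(seq) if it in suffix)
            for item in seq[last + 1:]:
                if item not in suffix:    # count only novel consequent items
                    votes[item] += 1
        return votes.most_common(1)[0][0] if votes else None
```

For example, after training on `['a','b','c']`, `['a','b','c']` and `['a','b','d']`, calling `predict(['a','b'])` returns `'c'`, since `'c'` follows the matched suffix in two of the three similar sequences.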


Keywords: sequence prediction, next item prediction, accuracy, compression




References

  1. Arlitt, M., Jin, T.: A workload characterization study of the 1998 world cup web site. IEEE Network 14(3), 30–37 (2000)
  2. Cleary, J., Witten, I.: Data compression using adaptive coding and partial string matching. IEEE Trans. on Inform. Theory 24(4), 413–421 (1984)
  3. Deshpande, M., Karypis, G.: Selective Markov models for predicting Web page accesses. ACM Transactions on Internet Technology 4(2), 163–184 (2004)
  4. Fournier-Viger, P., Gueniche, T., Tseng, V.S.: Using Partially-Ordered Sequential Rules to Generate More Accurate Sequence Prediction. In: Zhou, S., Zhang, S., Karypis, G. (eds.) ADMA 2012. LNCS (LNAI), vol. 7713, pp. 431–442. Springer, Heidelberg (2012)
  5. Padmanabhan, V.N., Mogul, J.C.: Using Prefetching to Improve World Wide Web Latency. Computer Communications 16, 358–368 (1998)
  6. Domenech, J., de la Ossa, B., Sahuquillo, J., Gil, J.A., Pont, A.: A taxonomy of web prediction algorithms. Expert Systems with Applications (9) (2012)
  7. Papapetrou, P., Kollios, G., Sclaroff, S., Gunopulos, D.: Discovering Frequent Arrangements of Temporal Intervals. In: Proc. of the 5th IEEE International Conference on Data Mining, pp. 354–361 (2005)
  8. Pitkow, J., Pirolli, P.: Mining longest repeating subsequence to predict world wide web surfing. In: Proc. 2nd USENIX Symposium on Internet Technologies and Systems, Boulder, CO, pp. 13–25 (1999)
  9. Sun, R., Giles, C.L.: Sequence Learning: From Recognition and Prediction to Sequential Decision Making. IEEE Intelligent Systems 16(4), 67–70 (2001)
  10. Willems, F., Shtarkov, Y., Tjalkens, T.: The context-tree weighting method: Basic properties. IEEE Trans. on Information Theory 31(3), 653–664 (1995)
  11. Zheng, Z., Kohavi, R., Mason, L.: Real world performance of association rule algorithms. In: Proc. 7th ACM Intern. Conf. on KDD, pp. 401–406 (2001)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Ted Gueniche (1)
  • Philippe Fournier-Viger (1)
  • Vincent S. Tseng (2)
  1. Dept. of Computer Science, University of Moncton, Canada
  2. Dept. of Computer Science and Inf. Eng., National Cheng Kung University, Taiwan