Framework for Performance Comparison of Classifiers

Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 131)


Data stream mining is the process of extracting knowledge structures from continuous, rapid data records. Classification is one of the tasks in data stream mining: it maps data into predefined groups or classes. Most stream learning algorithms build decision models that evolve continuously over time, run in resource-aware environments, and detect and react to changes in the environment generating the data. A trained model will typically show high accuracy on its training data, but its performance on unseen data must be evaluated separately. Different classifiers can perform differently on the same task in the same environment, so a method is needed for selecting the classifier best suited to the task at hand. Performance comparison is more effective when a graphical interface is provided. There is a need for a user-friendly interface that supports selecting multiple classifiers for performance comparison, saving the experimental environment for future use, and plotting performance graphs of the classifiers. A framework providing performance measures such as true positive rate and true negative rate is a present-day requirement. The objective of this paper is to enhance existing software used for stream data analysis with the above-mentioned facilities.
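To make the evaluation idea concrete, the following is a minimal sketch (not the paper's implementation) of prequential, i.e. test-then-train, evaluation of two illustrative stream classifiers on a synthetic binary stream, reporting the true positive rate (TPR) and true negative rate (TNR) mentioned above. The classifier names and the synthetic stream are assumptions for illustration only.

```python
import random

class MajorityClass:
    """Illustrative baseline: predicts the most frequent label seen so far."""
    def __init__(self):
        self.counts = {0: 0, 1: 0}
    def predict(self, x):
        return max(self.counts, key=self.counts.get)
    def learn(self, x, y):
        self.counts[y] += 1

class RunningMeanThreshold:
    """Illustrative classifier: predicts 1 when the feature exceeds
    the running mean of all features seen so far."""
    def __init__(self):
        self.total = 0.0
        self.n = 0
    def predict(self, x):
        mean = self.total / self.n if self.n else 0.5
        return 1 if x > mean else 0
    def learn(self, x, y):
        self.total += x
        self.n += 1

def prequential_eval(clf, stream):
    """Test-then-train: each example is first used to test the current
    model, then to update it. Returns (TPR, TNR)."""
    tp = tn = fp = fn = 0
    for x, y in stream:
        pred = clf.predict(x)   # test first ...
        clf.learn(x, y)         # ... then train on the same example
        if y == 1:
            if pred == 1: tp += 1
            else:         fn += 1
        else:
            if pred == 0: tn += 1
            else:         fp += 1
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    tnr = tn / (tn + fp) if (tn + fp) else 0.0
    return tpr, tnr

# Synthetic stream: class-0 features cluster near 0.3, class-1 near 0.7.
random.seed(0)
stream = []
for _ in range(1000):
    y = random.randint(0, 1)
    x = random.gauss(0.7 if y else 0.3, 0.1)
    stream.append((x, y))

for clf in (MajorityClass(), RunningMeanThreshold()):
    tpr, tnr = prequential_eval(clf, stream)
    print(f"{type(clf).__name__}: TPR={tpr:.2f} TNR={tnr:.2f}")
```

In a framework of the kind the paper proposes, the loop over classifiers would be replaced by the user's multiple-classifier selection, and the per-classifier (TPR, TNR) series would feed the performance plot.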


Keywords: stream data, learning algorithm, performance evaluation, classifier, framework





Copyright information

© Springer India Pvt. Ltd. 2012

Authors and Affiliations

College of Engineering Pune, Pune, India
