
Classifying Continuous Classes with Reinforcement Learning RULES

  • Conference paper

Intelligent Information and Database Systems (ACIIDS 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 9012)
Abstract

Autonomous machines interest both researchers and the general public: everyone wants a self-controlled machine that does its work independently and copes with all kinds of problems. Supervised learning and classification have therefore become important for high-dimensional and complex problems. However, classification algorithms deal only with discrete classes, while practical, real-life applications often involve continuous labels. Although several statistical machine learning techniques have been applied to this problem, they act as black boxes whose decisions are difficult to justify. Covering algorithms (CA), by contrast, are a type of inductive learning that can build a simple yet powerful rule repository. Nevertheless, current CA approaches that handle continuous classes are biased, non-updatable, overspecialized and noise-sensitive, or time-consuming. Consequently, this paper proposes a novel non-discretization algorithm that deals with numeric classes while predicting discrete actions. It is a new member of the RULES family, called RULES-3C, which learns interactively and transfers experience by exploiting the properties of reinforcement learning. The paper investigates and assesses the performance of RULES-3C on different practical cases and against other algorithms, and applies the Friedman test to rank RULES-3C and measure the significance of the results.
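The core idea described above — covering rules whose quality is refined through reinforcement-style reward updates, so that a numeric class value never needs to be discretized up front — might be sketched roughly as follows. The `Rule` structure, the reward scheme, and every name below are hypothetical illustrations for intuition only, not the published RULES-3C procedure.

```python
# Hypothetical sketch (NOT the published RULES-3C algorithm): a covering
# rule mapping a region of attribute space to a discrete action, whose
# quality estimate q is refined with a Q-learning-style update.
from dataclasses import dataclass

@dataclass
class Rule:
    conditions: dict     # attribute index -> (low, high) interval the rule covers
    action_range: tuple  # continuous-class range this discrete action stands for
    q: float = 0.0       # learned quality of the rule (reinforcement signal)

    def covers(self, x):
        return all(lo <= x[i] <= hi for i, (lo, hi) in self.conditions.items())

def update(rule, x, y, alpha=0.1):
    """Reward +1 when the rule fires and the numeric label y falls inside
    its action range, -1 otherwise (an assumed, illustrative reward)."""
    if rule.covers(x):
        lo, hi = rule.action_range
        reward = 1.0 if lo <= y <= hi else -1.0
        rule.q += alpha * (reward - rule.q)  # move q toward the observed reward
    return rule.q

rule = Rule(conditions={0: (0.0, 0.5)}, action_range=(10.0, 20.0))
for x, y in [([0.3], 12.0), ([0.4], 15.0), ([0.2], 25.0)]:
    update(rule, x, y)
print(round(rule.q, 4))  # quality after two correct firings and one incorrect one
```

The incremental update is what makes such a learner updatable and noise-tolerant: a single mislabeled example nudges a rule's quality rather than invalidating it.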
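The ranking step mentioned at the end of the abstract follows the standard methodology of comparing classifiers over multiple datasets: compute each algorithm's average rank per dataset, then apply the Friedman test to check whether the rank differences are significant. A minimal sketch, using invented error rates for three unnamed algorithms (not the paper's results):

```python
# Friedman-test ranking of classifiers over multiple datasets.
# The error rates below are invented placeholders, not paper results.
import numpy as np

# rows = datasets, columns = competing algorithms (hypothetical errors)
errors = np.array([
    [0.12, 0.15, 0.18],
    [0.09, 0.11, 0.10],
    [0.20, 0.25, 0.22],
    [0.15, 0.14, 0.19],
    [0.08, 0.12, 0.11],
])
N, k = errors.shape

# Rank algorithms within each dataset (1 = best, i.e. lowest error).
# Double argsort yields the ranks; this simple form assumes no ties.
ranks = np.argsort(np.argsort(errors, axis=1), axis=1) + 1
avg_ranks = ranks.mean(axis=0)

# Friedman statistic: chi2 = 12N/(k(k+1)) * (sum_j R_j^2 - k(k+1)^2/4)
chi2 = 12 * N / (k * (k + 1)) * (np.sum(avg_ranks**2) - k * (k + 1) ** 2 / 4)
print("average ranks:", avg_ranks)
print("Friedman chi-square:", chi2)
```

Under the null hypothesis that all algorithms perform equally, the statistic follows a chi-square distribution with k-1 degrees of freedom, so a large value justifies rejecting equality and reading the average ranks as a performance ordering.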



Author information

Correspondence to Hebah ElGibreen.


Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

ElGibreen, H., Aksoy, M.S. (2015). Classifying Continuous Classes with Reinforcement Learning RULES. In: Nguyen, N., Trawiński, B., Kosala, R. (eds) Intelligent Information and Database Systems. ACIIDS 2015. Lecture Notes in Computer Science, vol 9012. Springer, Cham. https://doi.org/10.1007/978-3-319-15705-4_12

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-15705-4_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-15704-7

  • Online ISBN: 978-3-319-15705-4

  • eBook Packages: Computer Science (R0)
