Intelligent data structures selection using neural networks

  • Regular Paper
  • Published in: Knowledge and Information Systems

Abstract

It is well known that abstract data types form the core of any software application, and their proper use is an essential requirement for developing a robust and efficient system. Data structures are essential for obtaining efficient algorithms and play a major role in the software development process. Selecting and creating the appropriate data structure for implementing an abstract data type can greatly affect the performance and efficiency of a software system. This is not a trivial problem for a software developer, since it is hard to anticipate all the usage scenarios of the deployed application, and a static selection made before the system's execution is generally inaccurate. In this paper, we focus on the problem of dynamically selecting efficient data structures for implementing abstract data types using a supervised learning approach. A neural network is used to dynamically select the most suitable representation for an aggregate according to the software system's current execution context. We experimentally evaluate the proposed technique on a case study, emphasizing the advantages of the proposed model in comparison with existing similar approaches.
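To make the general idea concrete, the sketch below shows, under our own assumptions, how a run-time usage profile of an aggregate could be fed to a small classifier that picks a concrete representation. The class names, the feature set (insertion, indexed-read and membership-test rates), the candidate representations and the hand-written weights are all illustrative and are not taken from the paper; the actual feature model, network architecture and training procedure are those described in the article itself.

```java
import java.util.*;

/**
 * Hypothetical sketch (not the authors' implementation): dynamic selection of a
 * concrete collection for an abstract "aggregate", driven by a tiny one-layer
 * network over usage-profile features observed at run time.
 */
public class AdaptiveAggregateFactory {

    /** Usage profile of the aggregate in the current execution context. */
    public static class UsageProfile {
        double insertRate;      // fraction of operations that are insertions
        double indexedReadRate; // fraction that are positional reads
        double membershipRate;  // fraction that are contains() queries

        UsageProfile(double insert, double indexedRead, double membership) {
            this.insertRate = insert;
            this.indexedReadRate = indexedRead;
            this.membershipRate = membership;
        }

        double[] toVector() {
            return new double[] {insertRate, indexedReadRate, membershipRate};
        }
    }

    // Illustrative weights for three output classes (ArrayList, LinkedList,
    // HashSet); a real system would learn these offline from profiled runs.
    private static final double[][] W = {
        { 0.1,  2.0, -1.0},  // ArrayList favoured by indexed reads
        { 2.0, -1.0, -1.0},  // LinkedList favoured by frequent insertions
        {-0.5, -1.0,  2.5}   // HashSet favoured by membership queries
    };

    private static int argMax(double[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++) {
            if (scores[i] > scores[best]) best = i;
        }
        return best;
    }

    /** Score each candidate representation and return the highest-scoring one. */
    public static Collection<Object> select(UsageProfile profile) {
        double[] x = profile.toVector();
        double[] scores = new double[W.length];
        for (int c = 0; c < W.length; c++) {
            for (int j = 0; j < x.length; j++) {
                scores[c] += W[c][j] * x[j];
            }
        }
        switch (argMax(scores)) {
            case 0:  return new ArrayList<>();
            case 1:  return new LinkedList<>();
            default: return new HashSet<>();
        }
    }

    public static void main(String[] args) {
        // A context dominated by membership tests should map to a hash-based set.
        Collection<Object> chosen = select(new UsageProfile(0.1, 0.1, 0.8));
        System.out.println("Selected representation: " + chosen.getClass().getSimpleName());
    }
}
```

In a deployed system, the weights would be trained offline from profiled executions, and such a factory would be consulted whenever the aggregate's observed usage profile changes significantly, so that the representation can be switched during execution rather than fixed at design time.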

Author information

Corresponding author

Correspondence to Istvan Gergely Czibula.

About this article

Cite this article

Czibula, G., Czibula, I.G. & Găceanu, R.D. Intelligent data structures selection using neural networks. Knowl Inf Syst 34, 171–192 (2013). https://doi.org/10.1007/s10115-011-0468-3
