Neural Computing and Applications, Volume 23, Issue 7–8, pp 2481–2491

Predicting the performance measures of a message-passing multiprocessor architecture using artificial neural networks

  • Elrasheed Ismail Mohommoud Zayid
  • Mehmet Fatih Akay
Original Article

Abstract

In this paper, we develop multi-layer feed-forward artificial neural network (MFANN) models for predicting the performance measures of a message-passing multiprocessor architecture interconnected by the simultaneous optical multiprocessor exchange bus (SOME-Bus), a fiber-optic interconnection network. OPNET Modeler is used to simulate the SOME-Bus multiprocessor architecture and to create the training and test datasets. The performance of the MFANN prediction models is evaluated using the standard error of estimate (SEE) and the multiple correlation coefficient (R). In addition, the results of the MFANN models are compared with those obtained by generalized regression neural network (GRNN), support vector regression (SVR), and multiple linear regression (MLR) models. The MFANN models are shown to outperform the GRNN-based, SVR-based, and MLR-based models (i.e., lower SEE and higher R) in predicting the performance measures of a message-passing multiprocessor architecture.
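The abstract evaluates the prediction models with two standard metrics, SEE and R. As a hedged illustration (not the paper's code; the synthetic actual/predicted arrays below are invented for demonstration, and the paper's data come from OPNET simulations), one common way to compute these two quantities is:

```python
import numpy as np

def see(y_true, y_pred):
    # Standard error of estimate: root of the mean squared residual.
    # (One common formulation; some texts divide by n - 2 instead of n.)
    resid = y_true - y_pred
    return np.sqrt(np.mean(resid ** 2))

def multiple_r(y_true, y_pred):
    # Multiple correlation coefficient: Pearson correlation between
    # the actual and predicted values.
    return np.corrcoef(y_true, y_pred)[0, 1]

# Hypothetical actual vs. predicted values of a performance measure
# (e.g., average channel waiting time), purely for illustration.
y_true = np.array([0.42, 0.55, 0.61, 0.48, 0.70])
y_pred = np.array([0.40, 0.57, 0.59, 0.50, 0.68])

print(f"SEE = {see(y_true, y_pred):.4f}")  # lower is better
print(f"R   = {multiple_r(y_true, y_pred):.4f}")  # closer to 1 is better
```

Under this reading, "lower SEE and higher R" in the abstract means the MFANN predictions sit closer to the simulated values and track them more linearly than the competing models' predictions.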

Keywords

Artificial neural networks · Multiprocessor architectures · Message passing · Performance evaluation


Copyright information

© Springer-Verlag London 2012

Authors and Affiliations

  1. Elrasheed Ismail Mohommoud Zayid, Department of Electrical-Electronics Engineering, Cukurova University, Adana, Turkey
  2. Mehmet Fatih Akay, Department of Computer Engineering, Cukurova University, Adana, Turkey
