Cluster Computing

Volume 17, Issue 2, pp 349–358

Iridis-pi: a low-cost, compact demonstration cluster

  • Simon J. Cox
  • James T. Cox
  • Richard P. Boardman
  • Steven J. Johnston
  • Mark Scott
  • Neil S. O’Brien

Abstract

In this paper, we report on our “Iridis-Pi” cluster, which consists of 64 Raspberry Pi Model B nodes, each equipped with a 700 MHz ARM processor, 256 MiB of RAM and a 16 GiB SD card for local storage. The cluster has a number of advantages that are not shared with conventional data-centre-based clusters, including its low total power consumption, easy portability due to its small size and weight, affordability, and passive, ambient cooling. We propose that these attributes make Iridis-Pi ideally suited to educational applications, where it provides a low-cost starting point to inspire and enable students to understand and apply high-performance computing and data handling to tackle complex engineering and scientific challenges. We present the results of benchmarking both the computational power and the network performance of Iridis-Pi. We also argue that such systems should be considered in some additional specialist application areas where these unique attributes may prove advantageous. We believe that the choice of an ARM CPU foreshadows a trend towards the increasing adoption of low-power, non-PC-compatible architectures in high-performance clusters.
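As a flavour of the message-passing workloads such a cluster can run, the sketch below is a minimal MPI ping-pong test in C, measuring point-to-point round-trip latency between two ranks. It is illustrative only and not the authors' benchmarking code; the 1 KiB message size and the repetition count are arbitrary assumptions.

/*
 * Minimal MPI ping-pong latency sketch (illustrative, not the
 * paper's benchmark). Rank 0 and rank 1 exchange a small message
 * repeatedly; rank 0 reports the mean round-trip time.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "needs at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int reps = 1000;   /* number of round trips (assumed) */
    char buf[1024];          /* 1 KiB message (assumed size)    */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, (int)sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, (int)sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("mean round-trip time: %g us\n", (t1 - t0) / reps * 1e6);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched across two nodes (e.g. mpirun -np 2) under an MPI implementation such as MPICH, this gives a rough view of the inter-node latency that dominates communication-bound workloads on a cluster of this class.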

Keywords

Low-power cluster · MPI · ARM · Low cost · Education · Hadoop · HDFS · HPL

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Simon J. Cox (1)
  • James T. Cox (1)
  • Richard P. Boardman (1)
  • Steven J. Johnston (1)
  • Mark Scott (1)
  • Neil S. O’Brien (1)

  1. Faculty of Engineering and the Environment, University of Southampton, Southampton, UK
