Cluster Computing, Volume 16, Issue 1, pp 55–63

CHERUB: power consumption aware cluster resource management

Abstract

This paper presents an evaluation of ACPI energy-saving modes and, from it, derives the design and implementation of an energy-saving daemon for clusters called cherub. The cherub daemon has a modular, extensible design. Since its only requirement is a central approach to resource management, cherub is suited for Server Load Balancing (SLB) clusters managed by dispatchers such as the Linux Virtual Server (LVS), as well as for High Performance Computing (HPC) clusters. Our experimental results show that cherub's scheduling algorithm works well, i.e. it saves energy where possible and avoids state-flapping.

Keywords

Green computing · Cluster computing
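The abstract notes that cherub's scheduler saves energy where possible while avoiding state-flapping. As a rough, hypothetical illustration of that idea (not the algorithm described in the paper), the following Python sketch applies hysteresis: a node's power state only changes after a low-load or high-load condition has persisted for a grace period, so brief load spikes never toggle nodes on and off. All thresholds, the grace period, and the Node/schedule names are assumptions made for this sketch.

import time

# Hypothetical sketch (not cherub's published algorithm): a hysteresis-based
# decision loop that marks cluster nodes for power-down when the aggregate
# load stays low and for wake-up when it stays high. State-flapping is
# avoided by requiring a condition to persist for a grace period before any
# transition is made. A real scheduler would additionally keep a minimum
# number of nodes online and consider per-node job state.

POWER_DOWN_LOAD = 0.3   # assumed: below this utilisation, online nodes are shutdown candidates
POWER_UP_LOAD = 0.8     # assumed: above this utilisation, offline nodes are wake-up candidates
GRACE_PERIOD = 300      # assumed: seconds a condition must persist before acting


class Node:
    """Minimal per-node bookkeeping for this sketch."""
    def __init__(self, name):
        self.name = name
        self.online = True
        self.condition_since = None  # start of the current low/high-load streak

    def __repr__(self):
        return f"Node({self.name!r}, online={self.online})"


def schedule(nodes, cluster_load, now=None):
    """Return a list of (node, action) pairs derived from the cluster load."""
    now = time.time() if now is None else now
    actions = []
    for node in nodes:
        wants_down = node.online and cluster_load < POWER_DOWN_LOAD
        wants_up = not node.online and cluster_load > POWER_UP_LOAD
        if wants_down or wants_up:
            if node.condition_since is None:
                node.condition_since = now        # streak starts now
            elif now - node.condition_since >= GRACE_PERIOD:
                action = "suspend" if wants_down else "wake"
                actions.append((node, action))    # e.g. ACPI sleep state or Wake-on-LAN
                node.online = not node.online
                node.condition_since = None
        else:
            # Neutral load band or a short spike: reset the streak so brief
            # fluctuations never toggle a node's power state.
            node.condition_since = None
    return actions


if __name__ == "__main__":
    cluster = [Node("node01"), Node("node02")]
    # First low-load observation only starts the streak; nothing happens yet.
    print(schedule(cluster, cluster_load=0.1, now=0))
    # Once the grace period has elapsed, the idle nodes are suspended.
    print(schedule(cluster, cluster_load=0.1, now=GRACE_PERIOD))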

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

Institute of Computer Science, University of Potsdam, Potsdam, Germany
