Federating Advanced Cyberinfrastructures with Autonomic Capabilities

  • Javier Diaz-Montes
  • Ivan Rodero
  • Mengsong Zou
  • Manish Parashar
Chapter

Abstract

Cloud computing has emerged as a dominant paradigm that has been widely adopted by enterprises. Clouds provide on-demand access to computing utilities, an abstraction of unlimited computing resources, and support for on-demand scale up, scale down, and scale out. Clouds are also rapidly joining high performance computing systems, clusters, and grids as viable platforms for scientific exploration and discovery. Furthermore, dynamically federated Cloud-of-Clouds infrastructures can support heterogeneous and highly dynamic application requirements by composing appropriate (public and/or private) cloud services and capabilities. As a result, it is critical to provide scalable and robust mechanisms for federating distributed infrastructures and for handling application workflows that can effectively utilize them. In this chapter, we present a federation model that supports the dynamic federation of resources, along with autonomic management mechanisms that coordinate multiple workflows and drive their use of federated resources according to user objectives. We demonstrate the effectiveness of the proposed framework and autonomic mechanisms through an experimental evaluation of illustrative use case application scenarios, and from these experiences we argue that such a federation model can support new types of application formulations.
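The coordination substrate underlying this federation model is a Linda-like tuple space (cf. the Keywords below): a master inserts task tuples into a shared associative space, and workers on any federated site retrieve them by template matching, which decouples task producers from the heterogeneous resources that execute them. The following is a minimal, single-process sketch of that pattern in Python; it is not the CometCloud API, and the names TupleSpace, out, and in_ are hypothetical, modeled on the classic Linda operations.

    # Minimal, single-process sketch of Linda-style tuple-space coordination.
    # All names here (TupleSpace, out, in_) are hypothetical illustrations of
    # the pattern, not the CometCloud API.
    import threading
    from collections import deque

    class TupleSpace:
        """Thread-safe associative store; None in a template matches any field."""
        def __init__(self):
            self._tuples = deque()
            self._cond = threading.Condition()

        def out(self, tup):
            # Insert a tuple (e.g., a task descriptor) into the space.
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def in_(self, template):
            # Blocking, destructive read: remove and return a matching tuple.
            with self._cond:
                while True:
                    for t in self._tuples:
                        if len(t) == len(template) and all(
                            p is None or p == v for p, v in zip(template, t)):
                            self._tuples.remove(t)
                            return t
                    self._cond.wait()

    if __name__ == "__main__":
        space = TupleSpace()
        # The "master" inserts tasks; "workers" (threads standing in for
        # federated sites) pull any task tuple by template matching.
        for i in range(3):
            space.out(("task", i))

        def worker(wid):
            _, task_id = space.in_(("task", None))
            print(f"worker {wid} executed task {task_id}")

        threads = [threading.Thread(target=worker, args=(w,)) for w in range(3)]
        for th in threads:
            th.start()
        for th in threads:
            th.join()

In the federated setting described in the chapter, the space would be distributed across sites (e.g., over a structured peer-to-peer overlay), and the autonomic manager decides which sites' workers pull which tuples based on objectives such as deadline or budget.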

Keywords

Cloud computing · High performance computing · Public cloud · Autonomic manager · Tuple space

Acknowledgements

The research presented in this work is supported in part by the US National Science Foundation (NSF) via grant numbers OCI 1310283, OCI 1339036, DMS 1228203, and IIP 0758566; by the Director, Office of Advanced Scientific Computing Research, Office of Science, of the U.S. Department of Energy through the Scientific Discovery through Advanced Computing (SciDAC) Institute of Scalable Data Management, Analysis and Visualization (SDAV) under award number DE-SC0007455, the Advanced Scientific Computing Research and Fusion Energy Sciences Partnership for Edge Physics Simulations (EPSI) under award number DE-FG02-06ER54857, and the ExaCT Combustion Co-Design Center via subcontract number 4000110839 from UT Battelle; and by an IBM Faculty Award. We used resources provided by XSEDE (NSF OCI-1053575), FutureGrid (NSF OCI-0910812), and the NERSC Center (DOE DE-AC02-05CH11231). The research was conducted as part of the NSF Cloud and Autonomic Computing (CAC) Center at Rutgers University and the Rutgers Discovery Informatics Institute (RDI2). We would also like to acknowledge Hyunjoo Kim, Moustafa AbdelBaky, and Aditya Devarakonda for their contributions to the CometCloud project.

Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  • Javier Diaz-Montes (1)
  • Ivan Rodero (1)
  • Mengsong Zou (1)
  • Manish Parashar (1)

  1. Rutgers Discovery Informatics Institute, NSF Cloud and Autonomic Computing Center, Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ, USA