CD 2004: Component Deployment, pp. 35–49

Deploying CORBA Components on a Computational Grid: General Principles and Early Experiments Using the Globus Toolkit

  • Sébastien Lacour
  • Christian Pérez
  • Thierry Priol
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3083)

Abstract

The deployment of high-bandwidth wide-area networks has turned computational grids into a very powerful computing resource. This inherently distributed resource is particularly well suited to multiphysics applications. To cope with the complexity of such applications, software component technology appears to be a very adequate programming model. However, to exploit the computational power of grids, component-based applications should be deployed on them automatically. Building on the CORBA component specifications for deployment, which currently seem to be the most complete, this paper proposes a detailed process for deploying components on computational grids. It also reports on early experiments deploying CORBA components on a computational grid using the Globus Toolkit 2.4.
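In the Globus Toolkit 2.x targeted by the paper, launching a process on a remote grid node goes through a GRAM job request written in the Resource Specification Language (RSL). As an illustration only, a deployment tool could describe the launch of a CORBA component server with a request along the following lines; the executable path, library path, and ORB endpoint below are hypothetical, not taken from the paper:

```
& (executable = /opt/ccm/bin/ComponentServer)
  (arguments = "-ORBEndpoint" "iiop://0.0.0.0:2809")
  (count = 1)
  (environment = (LD_LIBRARY_PATH /opt/ccm/lib))
```

Such a request would typically be submitted to a GRAM gatekeeper on the chosen node (e.g. with the `globusrun` client), after which the deployment tool can connect to the freshly started component server through its ORB to install and configure components.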

Keywords

Computational Grid · Component Server · Grid Resource · Grid Environment · Object Management Group



Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Sébastien Lacour (1)
  • Christian Pérez (1)
  • Thierry Priol (1)
  1. IRISA / INRIA, Campus de Beaulieu, Rennes, France
