The Mobile Object Layer: A Run-Time Substrate for Mobile Adaptive Computations

  • Nikos Chrisochoides
  • Kevin Barker
  • Démian Nave
  • Chris Hawblitzel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1505)


We present a parallel runtime substrate that supports a global addressing scheme, object mobility, and automatic message forwarding, which are required for implementing adaptive applications on distributed-memory machines. Our approach is application-driven; the target applications are characterized by very large variations in time and length scales. Preliminary performance data from parallel unstructured adaptive mesh refinement on an SP2 suggest that the flexibility and general nature of the approach we follow do not cause undue overhead.
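To illustrate the mechanism the abstract describes, the sketch below models a mobile object layer in a single process: a "mobile pointer" names an object by (home node, index), the home node's directory always tracks the object's current location, and a message sent to a stale cached location is forwarded. This is a hypothetical illustration, not the authors' API; all class and method names (`MobileObjectLayer`, `alloc`, `migrate`, `send`) are invented for this sketch.

```python
class Node:
    """One node of a distributed-memory machine (simulated in-process)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.objects = {}    # index -> data for objects currently resident here
        self.directory = {}  # for objects homed here: index -> current node

class MobileObjectLayer:
    """Minimal sketch of global addressing, migration, and forwarding."""
    def __init__(self, num_nodes):
        self.nodes = [Node(i) for i in range(num_nodes)]
        self.forward_hops = 0  # messages that chased a moved object

    def alloc(self, home, index, data):
        """Create an object on its home node; return a mobile pointer."""
        self.nodes[home].objects[index] = data
        self.nodes[home].directory[index] = home
        return (home, index)  # globally valid name, stable under migration

    def migrate(self, mptr, dest):
        """Move the object to `dest` and update the home node's directory."""
        home, index = mptr
        src = self.nodes[home].directory[index]
        data = self.nodes[src].objects.pop(index)
        self.nodes[dest].objects[index] = data
        self.nodes[home].directory[index] = dest

    def send(self, mptr, cached_loc, visit):
        """Deliver `visit` to the object, starting from a possibly stale
        cached location; forward via the home directory on a miss."""
        home, index = mptr
        loc = cached_loc
        while index not in self.nodes[loc].objects:
            self.forward_hops += 1                   # stale cache: forward
            loc = self.nodes[home].directory[index]  # home always knows
        return visit(self.nodes[loc].objects[index])

# Usage: the mobile pointer stays valid across migration; one forward
# hop resolves a message sent to the object's old location.
mol = MobileObjectLayer(3)
p = mol.alloc(0, 42, {"refinement_level": 1})
mol.migrate(p, 2)
print(mol.send(p, 0, lambda d: d["refinement_level"]))  # delivered after 1 forward
```

The key design point, as in the paper, is that mobile pointers never dangle: migration updates only the home node's directory, so any node can reach the object at the cost of at most one extra forwarding hop.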


Keywords: Mobile Object, Runtime System, Home Node, Constrained Delaunay Triangulation, Target Processor




Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • Nikos Chrisochoides
  • Kevin Barker
  • Démian Nave
  • Chris Hawblitzel
  1. University of Notre Dame, Notre Dame, USA
