Delayed Evaluation, Self-optimising Software Components as a Programming Model

  • Peter Liniker
  • Olav Beckmann
  • Paul H. J. Kelly
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2400)

Abstract

We argue that delayed-evaluation, self-optimising scientific software components, which dynamically change their behaviour according to their calling context at runtime, offer a possible way of bridging the apparent conflict between the quality of scientific software and its performance. Rather than equipping scientific software components with a performance interface that allows the caller to supply the context information lost when building abstract software components, we propose to recapture this lost context information at runtime. This paper is accompanied by a public release of a parallel linear algebra library, with both C and C++ language interfaces, which implements this proposal. We demonstrate the usability of this library by showing that it can supply linear algebra component functionality to an existing external software package. We give preliminary performance figures and discuss avenues for future work.

Keywords

Software Component, Iterative Solver, Performance Interface, Language Interface, Basic Linear Algebra

Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Peter Liniker¹
  • Olav Beckmann¹
  • Paul H. J. Kelly¹

  1. Department of Computing, Imperial College, London, UK