Hands-On Training for Undergraduates in High-Performance Computing Using Java
In recent years, the object-oriented approach has emerged as a key technology for building highly complex scientific codes, as has the use of parallel computers for the solution of large-scale problems. We believe that the paradigm shift towards parallelism will continue and, therefore, that principles and techniques of writing parallel programs should be taught to students at an early stage of their education rather than as an advanced topic near the end of a curriculum. A certain understanding of the practical aspects of numerical modeling is also a useful facet of computer science education. The reason is that, in addition to their traditional prime rôle in computational science and engineering, numerical techniques are increasingly employed in seemingly non-numerical settings such as large-scale data mining and web searching. This paper describes a practical training course for undergraduates in which carefully selected problems of high-performance computing are solved using the programming language Java.
Keywords: Differential Equation, Practical Training, Data Parallelism, Vector Operation, Task Parallelism
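To make the data-parallelism theme concrete, here is a minimal sketch of the kind of exercise such a course might include: a data-parallel vector operation in Java. The class name `ParallelAxpy` and the use of Java's parallel streams are illustrative assumptions, not taken from the paper itself.

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelAxpy {
    // Data-parallel vector update y = alpha*x + y; each index is
    // independent, so the index range can be split across threads.
    static void axpy(double alpha, double[] x, double[] y) {
        IntStream.range(0, y.length)
                 .parallel()                      // distribute iterations over worker threads
                 .forEach(i -> y[i] += alpha * x[i]);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 4};
        double[] y = {10, 20, 30, 40};
        axpy(2.0, x, y);
        System.out.println(Arrays.toString(y)); // [12.0, 24.0, 36.0, 48.0]
    }
}
```

Because every iteration touches a distinct element of `y`, no synchronization is needed, which is exactly what makes such vector operations a natural first encounter with parallelism for undergraduates.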