Pattern Recognition and Image Analysis

Volume 17, Issue 3, pp 390–398

Partial evaluation technique for distributed image processing

  • A. Tchernykh
  • A. Cristóbal-Salas
  • V. Kober
  • I. A. Ovseevich
Image Processing, Analysis, Recognition, and Understanding

Abstract

In this paper, a partial evaluation technique for reducing the communication costs of distributed image processing is presented. It combines incomplete structures and partial evaluation with classical program optimizations such as constant propagation, loop unrolling, and dead-code elimination. Through a detailed performance analysis, we establish the conditions under which the technique is beneficial.
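The optimizations the abstract names can be illustrated with a minimal sketch of partial evaluation (an assumed illustration, not the authors' implementation; all function names here are hypothetical). Specializing a general `power(x, n)` to a statically known exponent `n` unrolls the loop, propagates the constant, and eliminates the loop machinery as dead code, leaving a residual program over the dynamic input `x` alone:

```python
def power(x, n):
    """General program: both inputs are dynamic."""
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    """Generate the residual program for a static exponent n.

    The loop over the static n is unrolled at specialization time
    (loop unrolling + constant propagation), so the residual program
    contains only multiplications by the dynamic x; the loop counter
    and range object are eliminated as dead code.
    """
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)  # compile the residual program
    return namespace[f"power_{n}"], src

# The residual program agrees with the general one on the dynamic input.
power_5, residual_source = specialize_power(5)
assert power_5(2) == power(2, 5) == 32
```

In the distributed setting the paper addresses, the same idea applies to communication: operations whose arguments are static can be resolved at specialization time instead of at run time, shrinking the residual program's message traffic.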

Keywords

Fast Fourier Transform · Residual Program · Partial Evaluation · Cache Line · Fast Fourier Transform Algorithm
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Pleiades Publishing, Ltd. 2007

Authors and Affiliations

  • A. Tchernykh (1)
  • A. Cristóbal-Salas (2)
  • V. Kober (1)
  • I. A. Ovseevich (3)
  1. Computer Science Department, CICESE Research Center, Ensenada, Mexico
  2. School of Chemistry Sciences and Engineering, University of Baja California, Tijuana, Mexico
  3. Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia