
Automatic Exploration of Potential Parallelism in Sequential Applications

  • Vladimir Subotic
  • Eduard Ayguadé
  • Jesus Labarta
  • Mateo Valero
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8488)

Abstract

The multicore era has increased the need for highly parallel software. Since automatic parallelization has proven ineffective for many production codes, the community hopes for tools that assist parallelization by providing hints to drive the parallelization process. In our previous work, we designed Tareador, a tool based on dynamic instrumentation that identifies the potential task-based parallelism inherent in an application, and we showed how a programmer can use Tareador to explore the potential of different parallelization strategies. In this paper, we build on that work by automating the exploration of parallelism. We have designed an environment that, given a sequential code and a configuration of the target parallel architecture, iteratively runs Tareador to find an efficient parallelization strategy. We propose an autonomous algorithm based on simple metrics and a cost function. The algorithm finds an efficient parallelization strategy and provides the programmer with sufficient information to turn it into an actual parallel program.
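The abstract describes an iterative search: run Tareador on a candidate task decomposition, score it with a cost function, refine, and stop when the cost no longer improves. The following is a minimal sketch of that loop, not the paper's actual algorithm: the real Tareador interface, metrics, and cost function are not given here, so `simulate`, `cost`, and `refine` are illustrative stand-ins under simplifying assumptions (independent tasks, list-scheduling, a fixed per-task overhead).

```python
def simulate(decomposition, num_cores):
    """Stand-in for a Tareador run: estimate the parallel execution time of a
    decomposition (a list of task durations) on num_cores, assuming
    independent tasks and longest-processing-time list scheduling."""
    loads = [0.0] * num_cores
    for t in sorted(decomposition, reverse=True):
        # Assign each task to the currently least-loaded core.
        loads[loads.index(min(loads))] += t
    return max(loads)

def cost(decomposition, num_cores, overhead=0.01):
    """Simple cost function: estimated runtime plus a per-task overhead
    term that penalizes overly fine-grained decompositions."""
    return simulate(decomposition, num_cores) + overhead * len(decomposition)

def refine(decomposition):
    """Candidate refinement: split every task in half. A real tool would
    instead split along code-section or function boundaries."""
    return [t / 2.0 for t in decomposition for _ in (0, 1)]

def explore(initial, num_cores, max_iters=20):
    """Iteratively refine the decomposition while the cost improves."""
    best = initial
    for _ in range(max_iters):
        candidate = refine(best)
        if cost(candidate, num_cores) >= cost(best, num_cores):
            break  # finer granularity no longer pays off
        best = candidate
    return best

# Example: a sequential region of 8 time units explored for 4 cores.
strategy = explore([8.0], num_cores=4)
print(len(strategy), round(simulate(strategy, 4), 2))  # prints: 4 2.0
```

The search stops at four tasks because splitting further leaves the estimated runtime at 2.0 while the overhead term keeps growing, which is the kind of granularity trade-off the cost function in the paper's environment is meant to capture.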

Keywords

automatic parallelization · potential parallelism · OmpSs



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Vladimir Subotic (1)
  • Eduard Ayguadé (1, 2)
  • Jesus Labarta (1, 2)
  • Mateo Valero (1, 2)
  1. Barcelona Supercomputing Center, Barcelona, Spain
  2. Universitat Politecnica de Catalunya, Barcelona, Spain
