European Conference on Parallel Processing

Euro-Par 2011: Parallel Processing Workshops, pp. 375–384


Extending a Highly Parallel Data Mining Algorithm to the Intel® Many Integrated Core Architecture

Alexander Heinecke¹, Michael Klemm³, Dirk Pflüger¹, Arndt Bode² & Hans-Joachim Bungartz²

Part of the Lecture Notes in Computer Science book series (LNTCS, volume 7156)

Abstract

Extracting knowledge from vast datasets is a major challenge in data-driven applications such as classification and regression, which are mostly compute bound. In this paper, we extend our SG++ algorithm to the Intel® Many Integrated Core Architecture (Intel® MIC Architecture). We show that porting an application to the Intel MIC Architecture requires little effort: existing SSE code carries over in a straightforward manner. We evaluate a prototype pre-release coprocessor board codenamed Intel® "Knights Ferry". We use the pragma-based offload programming model offered by the Intel® Composer XE for Intel MIC Architecture, which generates both the host and the coprocessor code from a single source. We compare the achieved performance with an NVIDIA C2050 accelerator and show that the pre-release Knights Ferry coprocessor delivers better performance than the C2050 and clearly exceeds it in terms of the productivity of implementing algorithms for the coprocessor.
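The pragma-based offload model mentioned above compiles a single C++ source for both the host and the coprocessor: functions and data needed on the card are marked with a target attribute, and an offload pragma ships the inputs to the board, runs the enclosed code there, and copies the results back. The following minimal sketch illustrates this mechanism; it is not taken from the SG++ code, and the kernel and data layout are hypothetical.

    // Minimal sketch of the pragma-based offload model of the Intel Composer XE
    // for Intel MIC Architecture. The kernel below is hypothetical and only
    // illustrates the mechanism; it is not the SG++ data-mining kernel.
    #include <cstdlib>

    // Mark the function so the compiler also emits coprocessor code for it.
    __attribute__((target(mic)))
    void scale_add(const double* x, const double* y, double* z,
                   double alpha, int n) {
      // Plain C++ loop; the compiler vectorizes it for the target ISA
      // (SSE on the host, the 512-bit vector units on Knights Ferry).
      #pragma omp parallel for
      for (int i = 0; i < n; ++i)
        z[i] = alpha * x[i] + y[i];
    }

    int main() {
      const int n = 1 << 20;
      double* x = static_cast<double*>(std::malloc(n * sizeof(double)));
      double* y = static_cast<double*>(std::malloc(n * sizeof(double)));
      double* z = static_cast<double*>(std::malloc(n * sizeof(double)));
      for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; z[i] = 0.0; }

      // The offload pragma transfers x and y to the coprocessor, executes the
      // call there, and copies z back to the host. If no coprocessor is
      // available, the statement falls back to host execution.
      #pragma offload target(mic) in(x, y : length(n)) out(z : length(n))
      scale_add(x, y, z, 0.5, n);

      std::free(x); std::free(y); std::free(z);
      return 0;
    }

Compiled with the Intel Composer XE for Intel MIC Architecture, the same translation unit yields host and coprocessor code; this single-source property is what makes porting an existing SSE-based implementation comparatively straightforward.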

Keywords

  • Intel® Many Integrated Core Architecture
  • Intel® MIC Architecture
  • Intel® Knights Ferry
  • NVIDIA Fermi*
  • GPGPU
  • accelerators
  • coprocessors
  • data mining
  • sparse grids



Author information

Authors and Affiliations

  1. Technische Universität München, Boltzmannstr. 3, D-85748, Garching, Germany

    Alexander Heinecke & Dirk Pflüger

  2. Leibniz-Rechenzentrum der Bayerischen Akademie der Wissenschaften, Boltzmannstr. 1, D-85748, Garching, Germany

    Arndt Bode & Hans-Joachim Bungartz

  3. Intel GmbH, Dornacher Str. 1, D-85622, Feldkirchen, Germany

    Michael Klemm


Editor information

Editors and Affiliations

  1. Scilytics, Koellnerhofgasse 3/15A, 1010, Vienna, Austria

    Michael Alexander

  2. ICAR-CNR, Via P. Castellino, 111, 80131, Napoli, Italy

    Pasqua D’Ambra

  3. University of Amsterdam, 1090, Amsterdam, Netherlands

    Adam Belloum

  4. Innovative Computing Laboratory, The University of Tennessee, USA

    George Bosilca

  5. Department of Experimental Medicine and Clinic, University Magna Græcia, 88100, Catanzaro, Italy

    Mario Cannataro

  6. Computer Science Department, University of Pisa, Italy

    Marco Danelutto

  7. Second University of Naples, Italy

    Beniamino Di Martino

  8. Technische Universität München, Boltzmannstr. 3, 85748, Garching, Germany

    Michael Gerndt

  9. Equipe Runtime, INRIA Bordeaux Sud-Ouest, 33405, Talence Cedex, France

    Emmanuel Jeannot & Raymond Namyst

  10. Equipe HIEPACS, INRIA Bordeaux Sud-Ouest, 33405, Talence Cedex, France

    Jean Roman

  11. Computer Science and Mathematics Division, Oak Ridge National Laboratory, 37831-6164, Oak Ridge, TN, USA

    Stephen L. Scott

  12. Department of Scientific Computing, University of Vienna, Nordbergstr. 15/3C, 1090, Vienna, Austria

    Jesper Larsson Traff

  13. Computer Science and Mathematics Division, Oak Ridge National Laboratory, 37831, Oak Ridge, TN, USA

    Geoffroy Vallée

  14. Technische Universität München, Germany

    Josef Weidendorfer


Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Heinecke, A., Klemm, M., Pflüger, D., Bode, A., Bungartz, H.-J. (2012). Extending a Highly Parallel Data Mining Algorithm to the Intel® Many Integrated Core Architecture. In: Alexander, M., et al. Euro-Par 2011: Parallel Processing Workshops. Euro-Par 2011. Lecture Notes in Computer Science, vol 7156. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29740-3_42

  • DOI: https://doi.org/10.1007/978-3-642-29740-3_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29739-7

  • Online ISBN: 978-3-642-29740-3

  • eBook Packages: Computer Science, Computer Science (R0)

