A Methodology for Fine-Grained Parallelism in JavaScript Applications

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 7146))

Abstract

JavaScript has long been the dominant language for client-side web development. The size and complexity of client-side JavaScript programs continue to grow and now include applications such as games, office suites, and image editing tools traditionally developed in high-performance languages. More recently, developers have been expanding the use of JavaScript with standards and implementations for server-side JavaScript. These trends are driving a need for high-performance JavaScript implementations. While the performance of JavaScript implementations is improving, support for creating parallel applications that can take advantage of now-ubiquitous parallel hardware remains primitive.

Pipeline, data, and task parallelism are ways of breaking a program into multiple units of work that can be executed concurrently by parallel hardware. These concepts are made explicit in the stream-processing model of parallelization. In the streaming model, an algorithm is divided into a set of small independent tasks called kernels that are linked together by first-in, first-out data channels. The advantage of this approach is that it allows a compiler to map computations effectively onto a variety of hardware while freeing programmers from the burden of synchronizing tasks or orchestrating communication between them.

In this paper we describe Sluice, a library-based method for specifying streaming constructs in JavaScript applications. While the use of such a library makes concurrency explicit, it does not by itself result in parallel execution. We show, however, that by taking advantage of the streaming model, we can dynamically recompile Sluice programs to target a high-performance, multi-threaded stream-processing runtime layer. The stream-processing layer executes computations in a separate process, and the offloaded tasks communicate with the original program through fast shared-memory buffers. We show that this methodology can yield significant performance improvements for compute-intensive workloads.
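The full Sluice API is not shown in this preview, so the sketch below is a minimal, hypothetical illustration of the streaming model the abstract describes: small independent kernels linked by first-in, first-out data channels. The names `Channel` and `kernel` are illustrative assumptions, not the library's actual interface, and the example runs the kernels sequentially in one thread; a streaming runtime of the kind the paper targets would schedule them concurrently.

```javascript
// A FIFO data channel linking two kernels.
class Channel {
  constructor() { this.buf = []; }
  push(v) { this.buf.push(v); }
  pop() { return this.buf.shift(); }
  get size() { return this.buf.length; }
}

// A kernel: a small independent task that drains its input channel,
// applies a function to each item, and pushes results downstream.
function kernel(fn, input, output) {
  return () => {
    while (input.size > 0) {
      const v = fn(input.pop());
      if (output) output.push(v);
    }
  };
}

// Build a two-stage pipeline: square each value, then accumulate a sum.
const source = new Channel();
const mid = new Channel();
const sink = new Channel();

const square = kernel(x => x * x, source, mid);
let total = 0;
const sum = kernel(x => { total += x; return total; }, mid, sink);

// Feed the pipeline and run each kernel to completion in turn. The FIFO
// channels make the data dependencies between stages explicit, which is
// what lets a runtime map the stages onto parallel hardware.
[1, 2, 3, 4].forEach(v => source.push(v));
square();
sum();

console.log(total); // 1 + 4 + 9 + 16 = 30
```

Because each kernel touches only its own channels, the stages have no shared mutable state beyond the queues themselves; this is the property that allows a streaming runtime to move a kernel into another thread or process and replace the in-memory queues with shared-memory buffers.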





Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fifield, J., Grunwald, D. (2013). A Methodology for Fine-Grained Parallelism in JavaScript Applications. In: Rajopadhye, S., Mills Strout, M. (eds) Languages and Compilers for Parallel Computing. LCPC 2011. Lecture Notes in Computer Science, vol 7146. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36036-7_2

  • DOI: https://doi.org/10.1007/978-3-642-36036-7_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-36035-0

  • Online ISBN: 978-3-642-36036-7

  • eBook Packages: Computer Science (R0)
