The Journal of Supercomputing, Volume 74, Issue 4, pp 1473–1484

Pardis: a process calculus for parallel and distributed programming in Haskell

  • Christopher Blöcker
  • Ulrich Hoffmann


Abstract

Parallel and distributed programming involve substantial amounts of boilerplate code for process management and data synchronisation. This boilerplate increases the potential for bugs and often leads to unintended non-deterministic program behaviour; moreover, it mixes algorithmic details with technical details of parallelisation and distribution. Process calculi are formal models of parallel and distributed programming, but they often leave details open, creating a gap between the formal model and its implementation. To address these problems, we propose a fully deterministic process calculus for parallel and distributed programming and implement it as a domain-specific language in Haskell. We eliminate boilerplate code by abstracting from the exact notion of parallelisation and encapsulating it in the implementation of our process combinators. Furthermore, Haskell's type system provides compile-time correctness guarantees for process composition. The result can be used as a high-level tool for implementing parallel and distributed programs.
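The abstract's two central claims, that parallelisation details can be hidden inside process combinators and that ill-formed process compositions can be rejected at compile time, can be illustrated with a small sketch. The names below (`Process`, `Lift`, `Seq`, `Par`, `run`) are hypothetical and not the authors' actual API; this is a minimal GADT encoding, assuming processes are indexed by their input and output types so that the type checker rules out mismatched compositions:

```haskell
{-# LANGUAGE GADTs #-}

-- Hypothetical sketch of typed process combinators (not the paper's API).
-- Each process is indexed by its input and output type, so composing a
-- process producing `b` with one consuming `c /= b` is a type error.
data Process a b where
  Lift :: (a -> b) -> Process a b                       -- wrap a pure function
  Seq  :: Process a b -> Process b c -> Process a c     -- sequential composition
  Par  :: Process a b -> Process c d
       -> Process (a, c) (b, d)                         -- independent parallel pair

-- A naive sequential interpreter; an actual implementation would
-- encapsulate the evaluation strategy here, e.g. running the two
-- branches of `Par` on separate threads or nodes.
run :: Process a b -> a -> b
run (Lift f)  x      = f x
run (Seq p q) x      = run q (run p x)
run (Par p q) (x, y) = (run p x, run q y)

main :: IO ()
main = print (run (Lift (+1) `Seq` Lift (*2)) (20 :: Int))  -- prints 42
```

Because the combinators carry the types of their endpoints, a composition such as ``Lift show `Seq` Lift (+1)`` is rejected by the compiler rather than failing at run time, which is the kind of compile-time guarantee the abstract refers to.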


Keywords: Process calculus · Parallel programming · Distributed programming · Domain-specific language · Haskell



Acknowledgements

We would like to thank Jan-Philip Loos and Uwe Schmidt for inspiring discussions and helpful feedback.



Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Computer Science, FH Wedel University of Applied Sciences, Wedel, Germany
