
Pardis: a process calculus for parallel and distributed programming in Haskell


Parallel and distributed programming involve substantial amounts of boilerplate code for process management and data synchronisation. This increases the potential for bugs and often results in unintended non-deterministic program behaviour. Moreover, algorithmic details become entangled with technical details of parallelisation and distribution. Process calculi are formal models for parallel and distributed programming, but they often leave implementation details open, causing a gap between formal model and implementation. To address these problems, we propose a fully deterministic process calculus for parallel and distributed programming and implement it as a domain-specific language in Haskell. We eliminate boilerplate code by abstracting from the exact notion of parallelisation and encapsulating it in the implementation of our process combinators. Furthermore, we achieve compile-time correctness guarantees for process composition through Haskell’s type system. The result can be used as a high-level tool to implement parallel and distributed programs.



Notes

  1.

    To be precise, there is an undefined value \(\bot _a\) for every type a, e.g. we have \(Bool = \left\{ \bot _{Bool}, false, true \right\} \). We omit the type index since it shall be clear from context.
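As a small illustration (ours, not from the paper), Haskell’s `undefined` plays the role of \(\bot \) and inhabits every type, including `Bool`; under lazy evaluation a \(\bot \) that is never demanded causes no error:

```haskell
-- `undefined` stands in for the bottom value ⊥ and inhabits every type,
-- including Bool. Lazy evaluation means a ⊥ that is never demanded
-- does no harm.
bot :: Bool
bot = undefined

main :: IO ()
main = print (True || bot)  -- (||) never demands bot here, so this prints True
```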

  2.

    NFData stands for normal form data and values of data types with an instance of NFData can be fully evaluated. Haskell’s evaluation strategy is lazy evaluation, i.e. values are only evaluated if they are needed. However, through NFData we can enforce full evaluation in parallel to benefit from parallelisation.
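A minimal sketch of the difference this makes, assuming the standard `deepseq` library (not code from the paper): plain `seq` evaluates a list only to its outermost constructor, while `force` uses the `NFData` instance to evaluate it fully.

```haskell
{-# LANGUAGE BangPatterns #-}
-- Control.DeepSeq comes from the deepseq package, a GHC boot library.
import Control.DeepSeq (force)

main :: IO ()
main = do
  let xs = map (* 2) [1 .. 5 :: Int]
      -- `seq` alone would only evaluate xs to weak head normal form;
      -- `force` evaluates the list and all of its elements.
      !ys = force xs
  print ys
```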

  3.

    Roughly speaking, a closure is a data structure that contains an executable computation together with inputs for that computation.
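For instance, a hand-rolled sketch (the names are ours): a returned function value that captures an input from its defining environment behaves as such a closure.

```haskell
-- makeAdder returns a function value that captures its input n
-- from the defining environment -- a closure in the rough sense above.
makeAdder :: Int -> (Int -> Int)
makeAdder n = \x -> x + n

main :: IO ()
main = print (makeAdder 40 2)  -- applies the captured n = 40 to 2
```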

  4.

    We assume there is a way to obtain a node but omit node management for brevity.

  5.

    Note that there is a difference between Pardis processes and Cloud Haskell processes.




Acknowledgements

We would like to thank Jan-Philip Loos and Uwe Schmidt for inspiring discussions and helpful feedback.

Author information

Correspondence to Christopher Blöcker.

Additional information

This work was done while Christopher Blöcker was working on his Master’s degree at FH Wedel.


About this article


Cite this article

Blöcker, C., Hoffmann, U. Pardis: a process calculus for parallel and distributed programming in Haskell. J Supercomput 74, 1473–1484 (2018).



Keywords

  • Process calculus
  • Parallel programming
  • Distributed programming
  • Domain-specific language
  • Haskell