
PatEC: Pattern-Based Equivalence Checking

  • Conference paper
Model Checking Software (SPIN 2021)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12864)


Abstract

Program parallelization is a common software engineering task in which parallel design patterns are applied. While the focus of parallelization is on performance, the functional behavior should be kept invariant, i.e., the sequential program and its parallelization should be functionally equivalent. Several verification techniques analyze properties of parallel programs, but only a few approaches inspect functional equivalence between a sequential program and its parallelization, and even fewer consider parallel design patterns when checking equivalence.

In this paper, we present PatEC, which checks equivalence between sequential programs and their OpenMP parallelizations. PatEC utilizes the knowledge about the applied parallel design pattern to split equivalence checking into smaller subtasks. Our experiments show that PatEC is effective, efficient, and often outperforms existing approaches.
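For readers unfamiliar with the setting, a minimal hypothetical example (ours, not taken from the paper) of the kind of task PatEC targets: a DoAll parallelization leaves the loop body unchanged and only adds an OpenMP pragma, and equivalence checking must confirm that both versions compute the same result.

```c
/* Sequential original: squares each element in place. */
void square_seq(int *a, int n) {
    for (int i = 0; i < n; i++)
        a[i] = a[i] * a[i];
}

/* DoAll parallelization: the iterations are independent, so the
 * loop body stays unchanged and only the pragma is added. */
void square_par(int *a, int n) {
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        a[i] = a[i] * a[i];
}
```

Because the loop bodies are syntactically identical and the iterations access disjoint array elements, a pattern-aware checker can reduce the equivalence question to per-iteration independence instead of analyzing all thread interleavings.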

This work was funded by the Hessian LOEWE initiative within the Software-Factory 4.0 project.


Notes

  1.

    We assume that sequential programs are deterministic. To deal with non-determinism caused by random methods or I/O functions, one can consider those inputs as part of the starting state.

  2.

    This is sufficient because we do not support aliasing.

  3.

    Nested pattern applications can be checked recursively when one is careful with the data-sharing attributes, but this is inefficient and misses equivalences.

  4.

    Note that we make this assumption because (i) it simplifies the sequential equivalence check and (ii) most parallelizations we have seen fulfill it. One can easily drop this assumption by letting the sequential check inspect \(P_\mathrm{seg}\) and \(P_\mathrm{par^{-1}}\).

  5.

    In our implementation, we rely on the unparse function of the AST to prevent formatting differences from influencing the check.

  6.

    There exist some corner cases in which those variables are not thread-local, e.g., static variables. However, we do not support these rare cases.

  7.

    Since it is sufficient that an array access is independent in one dimension, we only extract the complete accesses, e.g., from x[i][j] > 0 we collect x[i][j] but not x[i].
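As a hypothetical illustration of this note (the example is ours, not the paper's): accesses x[i][j] from different iterations of the outer loop already differ in the first dimension, so independence in that single dimension suffices to parallelize the outer loop even though the inner index ranges overlap.

```c
/* Outer-loop iterations write disjoint rows x[i][..]: the accesses
 * are independent in the first dimension, which is sufficient even
 * though the inner index j takes the same values in every row. */
void scale_rows(double x[][4], int rows) {
    #pragma omp parallel for
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < 4; j++)
            x[i][j] *= 2.0;
}
```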

  8.

    Note that our check additionally requires that a variable with attribute private or lastprivate must be modified. Since we only check variables that occur in the loop body, variables that are not used before must be modified.

  9.

    Also, one must use the modifier conditional to ensure that the last write is considered.
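For context, a hypothetical sketch (our example, not the paper's) of the conditional modifier introduced in OpenMP 5.0: with lastprivate(conditional: idx), the value copied out after the loop is the one from the sequentially last iteration that actually wrote idx, rather than whatever value the thread executing the last iteration happens to hold.

```c
/* Returns the index of the last positive element, or -1 if none.
 * The write to idx is guarded by a condition, so plain
 * lastprivate(idx) would copy out the (possibly unset) value of the
 * thread running the last iteration; lastprivate(conditional: idx)
 * selects the sequentially last actual write instead. */
int last_positive_index(const int *a, int n) {
    int idx = -1;
    #pragma omp parallel for lastprivate(conditional: idx)
    for (int i = 0; i < n; i++)
        if (a[i] > 0)
            idx = i;  /* conditional write */
    return idx;
}
```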

  10.

    Again, our implementation relies on the unparse function of the AST.

  11.

    In particular, note that all semantics we are aware of either do not support all data-sharing attributes [12] or do not cover the data aspect [3].

  12.

    https://git.rwth-aachen.de/svpsys-sw/FECheck, revision PatEC-SPIN2021.

  13.

    To limit PatEC’s checking to DoAll patterns, use -type=DOALL.

  14.

    https://asc.llnl.gov/coral-benchmarks.

  15.

    https://git.rwth-aachen.de/svpsys-sw/FECheck.

  16.

    An incorrectly detected equivalence is an inequivalent task reported as equivalent.

  17.

    We use the following command line: autoPar -rose:unparse_tokens -rose:autopar:no_aliasing -rose:autopar:enable_diff -fopenmp program.c.

  18.

    PatEC times out twice (2 × 300 s) and AutoPar only once (300 s).

  19.

    Note that the reported time for status TO in PEQcheck can differ significantly because PEQcheck may generate multiple verification tasks and we use a timeout per task instead of one global timeout for all tasks.

  20.

    The parsing problems occur in one of the MILCmk header files.

  21.

    Parallelizations without the reduction clause are classified as DoAll.
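As a hypothetical illustration of this distinction (the example is ours): a loop that accumulates into a shared variable is a reduction pattern and needs the reduction clause; omitting the clause would leave a data race on the accumulator, yet by the classification above such a parallelization would count as DoAll.

```c
/* Reduction pattern: every iteration updates the shared accumulator
 * sum, so the parallelization needs reduction(+: sum); each thread
 * then accumulates privately and the partial sums are combined. */
long sum_par(const int *a, int n) {
    long sum = 0;
    #pragma omp parallel for reduction(+: sum)
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}
```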

  22.

    For our experiments, we therefore classified it as inequivalent, too.

References

  1. Abadi, M., Keidar-Barner, S., Pidan, D., Veksler, T.: Verifying parallel code after refactoring using equivalence checking. International Journal of Parallel Programming 47(1), 59–73 (2019). https://doi.org/10.1007/s10766-017-0548-4


  2. Arab, M.N., Wolf, F., Jannesari, A.: Automatic construct selection and variable classification in OpenMP. In: Proceedings of ICS, pp. 330–341. ACM, New York (2019). https://doi.org/10.1145/3330345.3330375

  3. Atzeni, S., Gopalakrishnan, G.: An operational semantic basis for building an OpenMP data race checker. In: Proceedings of IPDPSW, pp. 395–404. IEEE (2018). https://doi.org/10.1109/IPDPSW.2018.00074

  4. Atzeni, S., et al.: ARCHER: effectively spotting data races in large OpenMP applications. In: Proceedings of IPDPS, pp. 53–62. IEEE (2016). https://doi.org/10.1109/IPDPS.2016.68

  5. Badihi, S., Akinotcho, F., Li, Y., Rubin, J.: ARDiff: scaling program equivalence checking via iterative abstraction and refinement of common code. In: Proceedings of FSE, pp. 13–24. ACM, New York (2020). https://doi.org/10.1145/3368089.3409757

  6. Barthe, G., Crespo, J.M., Kunz, C.: Relational verification using product programs. In: Proceedings of FM, LNCS, vol. 6664, pp. 200–214. Springer, Berlin (2011). https://doi.org/10.1007/978-3-642-21437-0_17


  7. Basupalli, V., Yuki, T., Rajopadhye, S.V., Morvan, A., Derrien, S., Quinton, P., Wonnacott, D.: ompVerify: polyhedral analysis for the OpenMP programmer. In: Proceedings of IWOMP, LNCS, vol. 6665, pp. 37–53. Springer, Berlin (2011). https://doi.org/10.1007/978-3-642-21487-5_4


  8. Beckert, B., Bingmann, T., Kiefer, M., Sanders, P., Ulbrich, M., Weigl, A.: Relational equivalence proofs between imperative and MapReduce algorithms. In: Proceedings of VSTTE, LNCS, vol. 11294, pp. 248–266. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-03592-1_14


  9. Blom, S., Darabi, S., Huisman, M.: Verification of loop parallelisations. In: Proceedings of FASE, LNCS, vol. 9033, pp. 202–217. Springer, Berlin (2015). https://doi.org/10.1007/978-3-662-46675-9_14


  10. Blom, S., Darabi, S., Huisman, M., Safari, M.: Correct program parallelisations. STTT (2021). https://doi.org/10.1007/s10009-020-00601-z


  11. Bora, U., Das, S., Kukreja, P., Joshi, S., Upadrasta, R., Rajopadhye, S.: LLOV: a fast static data-race checker for OpenMP programs. TACO 17(4), 1–26 (2020). https://doi.org/10.1145/3418597

  12. Bronevetsky, G., de Supinski, B.R.: Complete formal specification of the OpenMP memory model. International Journal of Parallel Programming 35(4), 335–392 (2007). https://doi.org/10.1007/s10766-007-0051-4


  13. Felsing, D., Grebing, S., Klebanov, V., Rümmer, P., Ulbrich, M.: Automating regression verification. In: Proceedings of ASE, pp. 349–360. ACM, New York (2014). https://doi.org/10.1145/2642937.2642987

  14. Godlin, B., Strichman, O.: Regression verification. In: Proceedings of DAC, pp. 466–471. ACM, New York (2009). https://doi.org/10.1145/1629911.1630034

  15. Goncalves, R., Amaris, M., Okada, T.K., Bruel, P., Goldman, A.: OpenMP is not as easy as it appears. In: Proceedings of HICSS, pp. 5742–5751. IEEE (2016). https://doi.org/10.1109/HICSS.2016.710

  16. Jakobs, M.C.: Replication package for article ‘PatEC: pattern-based equivalence checking’. In: SPIN 2021, Zenodo (2021). https://doi.org/10.5281/zenodo.4841071

  17. Jakobs, M.C.: PEQcheck: localized and context-aware checking of functional equivalence. In: Proceedings of FormaliSE, pp. 130–140. IEEE (2021). https://doi.org/10.1109/FormaliSE52586.2021.00019

  18. Lahiri, S.K., Hawblitzel, C., Kawaguchi, M., Rebêlo, H.: SYMDIFF: a language-agnostic semantic diff tool for imperative programs. In: Proceedings of CAV, LNCS, vol. 7358, pp. 712–717. Springer, Berlin (2012). https://doi.org/10.1007/978-3-642-31424-7_54


  19. Li, Z., Atre, R., Huda, Z.U., Jannesari, A., Wolf, F.: Unveiling parallelization opportunities in sequential programs. Journal of Systems and Software 117, 282–295 (2016). https://doi.org/10.1016/j.jss.2016.03.045


  20. Liao, C., Lin, P., Asplund, J., Schordan, M., Karlin, I.: DataRaceBench: a benchmark suite for systematic evaluation of data race detection tools. In: Proceedings of SC, pp. 11:1–11:14. ACM, New York (2017). https://doi.org/10.1145/3126908.3126958

  21. Liao, C., Quinlan, D.J., Willcock, J., Panas, T.: Extending automatic parallelization to optimize high-level abstractions for multicore. In: Proceedings of IWOMP, LNCS, vol. 5568, pp. 28–41. Springer, Berlin (2009). https://doi.org/10.1007/978-3-642-02303-3_3


  22. Lin, Y.: Static nonconcurrency analysis of OpenMP programs. In: Mueller, M.S., Chapman, B.M., de Supinski, B.R., Malony, A.D., Voss, M. (eds.) IWOMP 2005. LNCS, vol. 4315, pp. 36–50. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-68555-5_4


  23. Ma, H., Diersen, S., Wang, L., Liao, C., Quinlan, D.J., Yang, Z.: Symbolic analysis of concurrency errors in OpenMP programs. In: Proceedings of ICPP, pp. 510–516. IEEE (2013). https://doi.org/10.1109/ICPP.2013.63

  24. Mattson, T.G., Sanders, B.A., Massingill, B.L.: Patterns for Parallel Programming (4th print). Addison-Wesley, Boston (2008)


  25. McCool, M., Robison, A., Reinders, J.: Structured Parallel Programming: Patterns for Efficient Computation. Elsevier, Morgan Kaufman, Amsterdam (2012)


  26. Mendonca, G.S.D., Liao, C., Pereira, F.M.Q.: AutoParBench: a unified test framework for OpenMP-based parallelizers. In: Proceedings of ICS, pp. 28:1–28:10. ACM, New York (2020). https://doi.org/10.1145/3392717.3392744

  27. de Moura, L.M., Bjørner, N.: Z3: an efficient SMT solver. In: Proceedings of TACAS, LNCS, vol. 4963, pp. 337–340. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-78800-3_24


  28. Nielson, F., Nielson, H.R., Hankin, C.: Principles of Program Analysis. Springer, Berlin (1999). https://doi.org/10.1007/978-3-662-03811-6

  29. OpenMP: OpenMP application programming interface (version 5.1). Technical report, OpenMP Architecture Review Board (2020). https://www.openmp.org/specifications/

  30. Person, S., Dwyer, M.B., Elbaum, S.G., Pasareanu, C.S.: Differential symbolic execution. In: Proceedings of FSE, pp. 226–237. ACM, New York (2008). https://doi.org/10.1145/1453101.1453131

  31. Pugh, W.: A practical algorithm for exact array dependence analysis. Commun. ACM 35(8), 102–114 (1992). https://doi.org/10.1145/135226.135233

  32. Pugh, W., Wonnacott, D.: Going beyond integer programming with the Omega test to eliminate false data dependences. IEEE Trans. Parallel Distrib. Syst. 6(2), 204–211 (1995). https://doi.org/10.1109/71.342135

  33. Quinlan, D., Liao, C.: The ROSE source-to-source compiler infrastructure. In: Cetus Users and Compiler Infrastructure Workshop, vol. 2011, pp. 1–3. Citeseer (2011)


  34. Ramos, D.A., Engler, D.R.: Under-constrained symbolic execution: correctness checking for real code. In: USENIX Security Symposium, pp. 49–64. USENIX (2015). https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/ramos

  35. Royuela, S., Ferrer, R., Caballero, D., Martorell, X.: Compiler analysis for OpenMP tasks correctness. In: Proceedings of CF, pp. 7:1–7:8. ACM, New York (2015). https://doi.org/10.1145/2742854.2742882

  36. Saillard, E., Carribault, P., Barthou, D.: Static validation of barriers and worksharing constructs in OpenMP applications. In: Proceedings of IWOMP, LNCS, vol. 8766, pp. 73–86. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11454-5_6


  37. Siegel, S.F., et al.: CIVL: the concurrency intermediate verification language. In: Proceedings of SC, pp. 61:1–61:12. ACM, New York (2015). https://doi.org/10.1145/2807591.2807635

  38. Siegel, S.F., Zirkel, T.K.: FEVS: A functional equivalence verification suite for high-performance scientific computing. Mathematics in Computer Science 5(4), 427–435 (2011). https://doi.org/10.1007/s11786-011-0101-6


  39. Swain, B., Li, Y., Liu, P., Laguna, I., Georgakoudis, G., Huang, J.: OMPRacer: a scalable and precise static race detector for OpenMP programs. In: Proceedings of SC. IEEE (2020)


  40. Verma, G., Shi, Y., Liao, C., Chapman, B.M., Yan, Y.: Enhancing DataRaceBench for evaluating data race detection tools. In: Proceedings of Correctness@SC, pp. 20–30. IEEE (2020). https://doi.org/10.1109/Correctness51934.2020.00008

  41. Wiesner, M., Jakobs, M.C.: Verifying pipeline implementations in OpenMP. In: Laarman, A., Sokolova, A. (eds.) SPIN 2021. LNCS, vol. 12864, pp. 81–98. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-84629-9_5

  42. Yu, F., Yang, S., Wang, F., Chen, G., Chan, C.: Symbolic consistency checking of OpenMP parallel programs. In: Proceedings of LCTES, pp. 139–148. ACM, New York (2012). https://doi.org/10.1145/2248418.2248438

  43. Zaks, A., Pnueli, A.: CoVaC: compiler validation by program analysis of the cross-product. In: Proceedings of FM, LNCS, vol. 5014, pp. 35–51. Springer, Berlin (2008). https://doi.org/10.1007/978-3-540-68237-0_5


  44. Zhang, Y., Duesterwald, E., Gao, G.R.: Concurrency analysis for shared memory programs with textually unaligned barriers. In: Proceedings of LCPC, LNCS, vol. 5234, pp. 95–109. Springer, Berlin (2007). https://doi.org/10.1007/978-3-540-85261-2_7



Author information


Corresponding author

Correspondence to Marie-Christine Jakobs.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Jakobs, M.C. (2021). PatEC: Pattern-Based Equivalence Checking. In: Laarman, A., Sokolova, A. (eds.) Model Checking Software. SPIN 2021. Lecture Notes in Computer Science, vol. 12864. Springer, Cham. https://doi.org/10.1007/978-3-030-84629-9_7


  • DOI: https://doi.org/10.1007/978-3-030-84629-9_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-84628-2

  • Online ISBN: 978-3-030-84629-9

  • eBook Packages: Computer Science (R0)
