The Information Needed for Reproducing Shared Memory Experiments

Vincent Gramoli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10104)


Abstract

Reproducibility of experiments is key to research advances. Unfortunately, experiments involving concurrent programs are rarely reproducible. In this paper, we focus on multi-threaded executions in which threads synchronize to access shared memory, and we present a series of causes of performance variations that illustrate the difficulty of reproducing a concurrent experiment. As one might guess, our experimental results are not intended to be reproducible; rather, they illustrate conditions that affect the conclusions one can draw from concurrent experiments.


Keywords: Reproducibility · Synchrobench · Artifact · NUMA · cTDP · JIT · Pinning



Acknowledgments

Some of the observations reported here were presented at the Winter School organized by the ACM SIGOPS France in March 2016. I wish to thank Tim Harris for fruitful discussions on the topic of publishing experimental results of concurrent programs. This research was supported under the Australian Research Council's Discovery Projects funding scheme (project number 160104801), entitled "Data Structures for Multi-Core". Vincent Gramoli is the recipient of the Australian Research Council Discovery International Award.



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

Data61-CSIRO and University of Sydney, Sydney, Australia
