Scalable Software Infrastructure for Integrating Supercomputing with Volunteer Computing and Cloud Computing

  • Ritu Arora
  • Carlos Redondo
  • Gerald Joshua
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 964)


Volunteer Computing (VC) is a computing model that uses computing cycles donated on devices such as laptops, desktops, and tablets to do scientific computing. BOINC, the most popular software framework for VC, connects projects that need computing cycles with volunteers interested in donating cycles on their resources. It has already enabled projects with high societal impact to harness several PetaFLOPs of donated computing cycles. Given its potential for elastically augmenting the capacity of existing supercomputing resources for running High-Throughput Computing (HTC) jobs, we have extended the BOINC software infrastructure to make it amenable to integration with supercomputing and cloud computing environments. We have named this extension BOINC@TACC, and we are using it to route *qualified* HTC jobs from the supercomputers at the Texas Advanced Computing Center (TACC) not only to typically volunteered devices but also to cloud computing resources such as Jetstream and Chameleon. BOINC@TACC can be extremely useful for researchers and scholars who are running low on allocations of compute cycles on the supercomputers, or who want to reduce the turnaround time of their HTC jobs when the supercomputers are oversubscribed. We have also developed a web application through which TACC users can, from the convenience of their web browser, submit HTC jobs to run on the resources volunteered by the community. This paper presents an overview of the BOINC@TACC project. The BOINC@TACC software infrastructure is open source and can easily be adapted by other supercomputing centers interested in building a volunteer community and connecting it with researchers who need multi-petascale (and even exascale) computing power for their HTC jobs.
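The abstract describes routing only *qualified* HTC jobs from TACC supercomputers to volunteered devices or cloud resources. The following is a minimal, hypothetical sketch of what such a qualification/routing decision could look like; all field names, thresholds, and destinations are illustrative assumptions, not the actual BOINC@TACC implementation.

```python
# Hypothetical sketch of routing a containerized HTC job to a volunteer
# device, a cloud VM, or back to the supercomputer queue. Every field name
# and threshold below is an assumption for illustration only.

from dataclasses import dataclass


@dataclass
class HTCJob:
    docker_image: str          # assumed: jobs must be containerized to be portable
    est_runtime_hours: float   # estimated wall-clock time per task
    input_size_mb: float       # data a volunteer must download per task
    needs_gpu: bool = False


MAX_RUNTIME_HOURS = 12.0       # assumed cap for donated devices
MAX_INPUT_MB = 500.0           # assumed cap on per-task data transfer


def route(job: HTCJob) -> str:
    """Return a destination: 'volunteer', 'cloud', or 'supercomputer'."""
    if not job.docker_image:
        return "supercomputer"     # non-containerized jobs stay on the cluster
    if job.needs_gpu:
        return "cloud"             # assume cloud VMs, not volunteers, offer GPUs
    if (job.est_runtime_hours <= MAX_RUNTIME_HOURS
            and job.input_size_mb <= MAX_INPUT_MB):
        return "volunteer"         # small enough for a donated device
    return "cloud"                 # qualified, but too heavy for volunteers


if __name__ == "__main__":
    job = HTCJob("autodock-vina:latest", est_runtime_hours=2.0, input_size_mb=40.0)
    print(route(job))              # prints "volunteer"
```

In a real deployment the decision would presumably also consider the current supercomputer queue depth and the user's remaining allocation, since the paper cites both as motivations for off-loading jobs.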



The BOINC@TACC project is funded through National Science Foundation (NSF) award #1664022. We are grateful to XSEDE, TACC, and the Science Gateway Community Institute for providing the resources required for implementing this project. We thank David Anderson, Thomas Johnson, and Anubhaw Nand for contributing to the BOINC@TACC codebase and for their help in preparing this paper. Figure 1 was prepared by Thomas Johnson. Several results presented in this paper were obtained using the Chameleon testbed supported by the NSF, and we are grateful to the NSF for this support.



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Texas Advanced Computing Center, University of Texas at Austin, Austin, USA
  2. University of Texas at Austin, Austin, USA
