
Progress in global parallel computing research: a bibliometric approach


Abstract

This study adopts a bibliometric approach to analyze progress in global parallel computing research, drawing on the related literature indexed in the Science Citation Index Expanded database from 1958 to 2011. An analysis of annual publication output shows that parallel computing, after its first period of rapid development in the 1990s, has recently attracted renewed attention and that research in this field is entering a new phase. The distribution of publications identifies the seven major industrial countries (G7), led by the USA, as the most productive and influential in this domain. A comparative analysis of author keywords indicates that the focus of parallel computing research has shifted from hardware to software, with parallel applications and programming based on MPI, GPUs and multicore processors emerging as research trends; grid computing and cloud computing dominate the distributed computing area owing to their heterogeneous and scalable architectures; and the processors of parallel machines are developing in increasingly diverse directions. The citing-cited matrix reveals intense interactions among the disciplines of computer science, engineering, mathematics and physics; these mutual interactions have grown gradually and reflect the characteristic ways in which each subject influences the others.
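The citing-cited matrix mentioned above can be illustrated with a minimal sketch: rows are citing disciplines, columns are cited disciplines, and each cell counts citations from one discipline to another. The citation pairs below are invented for illustration, not data from the study.

```python
from collections import Counter

# The four disciplines analyzed in the study.
disciplines = ["Computer Science", "Engineering", "Mathematics", "Physics"]

# Hypothetical (citing, cited) pairs; in a real analysis each pair would
# come from classifying a citing paper and the paper it cites.
citation_pairs = [
    ("Computer Science", "Mathematics"),
    ("Computer Science", "Computer Science"),
    ("Engineering", "Computer Science"),
    ("Physics", "Mathematics"),
    ("Engineering", "Engineering"),
    ("Mathematics", "Computer Science"),
]

counts = Counter(citation_pairs)

# matrix[i][j] = number of citations from discipline i to discipline j.
matrix = [[counts[(src, dst)] for dst in disciplines] for src in disciplines]

for name, row in zip(disciplines, matrix):
    print(f"{name:17s} {row}")
```

Row sums give each discipline's outgoing citation activity, column sums its incoming influence; comparing the two is one simple way to read the interaction patterns the abstract describes.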


Figs. 1–5 (not shown in this preview)


Acknowledgments

This work was supported by the Ministry of Science and Technology of China under Grant No. 2011AA120304. The authors thank Liu Xingjian for technical discussions, and Su Shiliang for helpful advice on the methodology.

Author information

Correspondence to Yaolin Liu.


Cite this article

Liu, Z., Liu, Y., Guo, Y. et al. Progress in global parallel computing research: a bibliometric approach. Scientometrics 95, 967–983 (2013). https://doi.org/10.1007/s11192-012-0927-y


Keywords

  • Parallel computing
  • Bibliometric analysis
  • Citing-cited matrix
  • Research trends