The Journal of Supercomputing, Volume 64, Issue 1, pp 156–166

A Read-Copy Update based parallel server for distributed crowd simulations

  • Guillermo Vigueras
  • Juan M. Orduña
  • Miguel Lozano

Abstract

The Read-Copy Update (RCU) synchronization method was designed some years ago to address multiprocessor scalability, and it was included in the Linux kernel in October 2002. Recently, libraries providing user-space access to this method have been released, although they have not yet been used in complex applications.

In this paper, we evaluate the RCU synchronization method for two different use cases in a distributed system architecture for crowd simulations. We have compared the RCU implementation with a parallel implementation based on mutexes, the traditional locking method for avoiding race conditions among threads in parallel applications. The performance evaluation results show that the use of RCU significantly decreases the system response time and increases the system throughput, supporting a higher number of agents while providing the same latency levels. The reason for this behavior is that RCU allows read accesses to proceed in parallel with write accesses to dynamic data structures, avoiding the serialized access that a mutex imposes on these structures. In this way, it can better exploit the available processor cores. These results show the potential of this synchronization method for improving parallel and distributed applications.

Keywords

Read-Copy Update synchronization method; crowd simulations


Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Guillermo Vigueras (1)
  • Juan M. Orduña (1, 2)
  • Miguel Lozano (1, 2)
  1. Departamento de Informática, Universidad de Valencia, Valencia, Spain
  2. Avda. Universidad, Burjassot (Valencia), Spain
