Human–agent collaboration for disaster response

Autonomous Agents and Multi-Agent Systems

Abstract

In the aftermath of major disasters, first responders are typically overwhelmed by large numbers of spatially distributed search and rescue tasks, each with its own requirements. Moreover, responders have to operate in highly uncertain and dynamic environments where new tasks may appear and hazards may be spreading across the disaster space. Hence, rescue missions may need to be re-planned as new information comes in, tasks are completed, or new hazards are discovered. Finding an optimal allocation of resources to complete all the tasks is a major computational challenge. In this paper, we use decision-theoretic techniques to solve the task allocation problem posed by emergency response planning and then deploy our solution as part of an agent-based planning tool in real-world field trials. By so doing, we are able to study the interactional issues that arise when humans are guided by an agent. Specifically, we develop an algorithm based on a multi-agent Markov decision process representation of the task allocation problem and show that it outperforms standard baseline solutions. We then integrate the algorithm into a planning agent that responds to requests for tasks from participants in a mixed-reality location-based game, called AtomicOrchid, that simulates disaster response settings in the real world. We then run a number of trials of our planning agent and compare it against a purely human-driven system. Our analysis of these trials shows that human commanders adapt to the planning agent by taking on a more supervisory role, and that giving humans the flexibility to request plans from the agent allows them to perform more tasks more efficiently than allocating tasks through purely human interactions. We also discuss how such flexibility could lead to poor performance if left unchecked.


Notes

  1. http://bit.ly/1ebNYty.

  2. http://www.rescueglobal.org.

  3. http://www3.hants.gov.uk/emergencyplanning.htm.

  4. As access to emergency responders is either limited or costly for field trials, it was considered reasonable to hire volunteers who were taught to use the tools we gave them. The design of a fully-fledged training tool for disaster responders would be beyond the scope of this paper.

  5. Because radiation is invisible, it is possible to create a believable and challenging environment for the responders in our mixed-reality game (see Sect. 5).

  6. This assumption is not central to our problem and only serves to inform the decision making of the agent, as we show later. It is also possible to obtain similar information about radiation levels by fusing the responders’ Geiger counter readings, but this is beyond the scope of the paper.

  7. While some agencies may be trained to obey orders (e.g., military or fire-fighting), others (e.g., transport providers or medics) are not always trained to do so [23].

  8. Other methods such as sequential greedy assignment or swap-based hill climbing [42] may also be useful. However, they do not explore the policy space as well as MCTS [29].

  9. http://mapattack.org.

  10. http://www.dm.af.mil/library/angelthunder2013.asp.

  11. The EKF accommodates the nonlinearities in the radiation dynamics expressed through Eq. (4).

References

  1. Abbott, K. R. & Sarin, S. K. (1994). Experiences with workflow management: Issues for the next generation. In Proceedings of the 1994 ACM conference on computer supported cooperative work (CSCW) (pp. 113–120).

  2. Auer, P., Cesa-Bianchi, N., & Fischer, P. (2002). Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47(2–3), 235–256.

  3. Bader, T., Meissner, A., & Tscherney, R. (2008). Digital map table with Fovea-Tablett®: Smart furniture for emergency operation centers. In Proceedings of the 5th international conference on information systems for crisis response and management (pp. 679–688).

  4. Barto, A. G., Bradtke, S. J., & Singh, S. P. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72(1), 81–138.

  5. Benford, S., Magerkurth, C., & Ljungstrand, P. (2005). Bridging the physical and digital in pervasive gaming. Communications of the ACM, 48(3), 54.

  6. Bernstein, D. S., Givan, R., Immerman, N., & Zilberstein, S. (2002). The complexity of decentralized control of Markov decision processes. Mathematics of Operations Research, 27(4), 819–840.

  7. Boutilier, C. (1996). Planning, learning and coordination in multi-agent decision processes. Proceedings of TARK, 1996, 195–210.

  8. Boutilier, C., Dearden, R., & Goldszmidt, M. (2000). Stochastic dynamic programming with factored representations. Artificial Intelligence, 121(1), 49–107.

  9. Bowers, J., Button, G., & Sharrock, W. (1994). Workflow from within and without: Technology and cooperative work on the print industry shopfloor introduction: Workflow systems and work practice. In Fourth European conference on computer-supported cooperative work (pp. 51–66).

  10. Bradshaw, J. M., Feltovich, P., & Johnson, M. (2011). Human–agent interaction. In G. Boy (Ed.), Handbook of human–machine interaction (Chap. 13) (pp. 293–302). Surrey: Ashgate.

  11. Brown, B., Reeves, S., & Sherwood, S. (2011). Into the wild: Challenges and opportunities for field trial methods. In Proceedings of the SIGCHI conference on human factors in computing systems, CHI ’11 (pp. 1657–1666). New York, NY: ACM.

  12. Chapman, A., Micillo, R. A., Kota, R., & Jennings, N. R. (2009). Decentralised dynamic task allocation: A practical game-theoretic approach. Proceedings of AAMAS, 2009, 915–922.

  13. Chen, R., Sharman, R., Rao, H. R., & Upadhyaya, S. J. (2005). Design principles of coordinated multi-incident emergency response systems. Simulation, 3495, 177–202.

  14. Convertino, G., Mentis, H. M., Slavkovic, A., Rosson, M. B., & Carroll, J. M. (2011). Supporting common ground and awareness in emergency management planning. ACM Transactions on Computer–Human Interaction, 18(4), 1–34.

  15. Cooke, G. J. N., & Harry, B. B. P. (2006). Distributed mission environments: Effects of geographic distribution on team cognition, process, and performance. Towards a science of distributed learning and training. Washington, DC: American Psychological Association.

  16. Crabtree, A., Benford, S., Greenhalgh, C., Tennent, P., Chalmers, M., & Brown, B. (2006). Supporting ethnographic studies of ubiquitous computing in the wild. In Proceedings of the 6th ACM conference on designing interactive systems—DIS ’06 (p. 60). New York, NY: ACM.

  17. Drury, J., Cocking, C., & Reicher, S. (2009). Everyone for themselves? A comparative study of crowd solidarity among emergency survivors. The British Journal of Social Psychology/The British Psychological Society, 48(Pt 3), 487–506.

  18. Fischer, J. E., Jiang, W., Kerne, A., Greenhalgh, C., Ramchurn, S. D., Reece, S., Pantidi, N., & Rodden, T. (2014). Supporting team coordination on the ground: Requirements from a mixed reality game. In COOP 2014-proceedings of the 11th international conference on the design of cooperative systems, 27–30 May 2014, Nice (France) (pp. 49–67). Berlin: Springer.

  19. Fischer, J. E., Reeves, S., Rodden, T., Reece, S., Ramchurn, S. D., & Jones, D. (2015). Building a bird's eye view: Collaborative work in disaster response. In Proceedings of the SIGCHI conference on human factors in computing systems (CHI 2015), to appear.

  20. Guestrin, C., Koller, D., & Parr, R. (2001). Multiagent planning with factored MDPs. NIPS, 1, 1523–1530.

  21. Guestrin, C., Koller, D., Parr, R., & Venkataraman, S. (2003). Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19, 399–468.

  22. Hawe, G. I., Coates, G., Wilson, D. T., & Crouch, R. S. (2012). Agent-based simulation for large-scale emergency response. ACM Computing Surveys, 45(1), 1–51.

  23. Harvard Humanitarian Initiative (2010). Disaster relief 2.0: The future of information sharing in humanitarian emergencies. HHI; United Nations Foundation; OCHA; The Vodafone Foundation.

  24. Jennings, N. R., Moreau, L., Nicholson, D., Ramchurn, S. D., Roberts, S. J., Rodden, T., et al. (2014). On human–agent collectives. Communications of the ACM, 57(12), 80–88.

  25. Jiang, W., Fischer, J. E., Greenhalgh, C., Ramchurn, S. D., Wu, F., Jennings, N. R., & Rodden, T. (2014). Social implications of agent-based planning support for human teams. In International conference on collaboration technologies and systems (pp. 310–317).

  26. Khan, M. A., Turgut, D., & Bölöni, L. (2011). Optimizing coalition formation for tasks with dynamically evolving rewards and nondeterministic action effects. Journal of Autonomous Agents and Multi-agent Systems, 22(3), 415–438.

  27. Kitano, H., & Tadokoro, S. (2001). Robocup rescue: A grand challenge for multiagent and intelligent systems. AI Magazine, 22(1), 39–52.

  28. Kleiner, A., Farinelli, A., Ramchurn, S., Shi, B., Mafioletti, F., & Refatto, R. (2013). Rmasbench: A benchmarking system for multi-agent coordination in urban search and rescue. In International conference on autonomous agents and multi-agent systems (AAMAS 2013).

  29. Kocsis, L., & Szepesvári, C. (2006). Bandit based Monte-Carlo planning. Proceedings of ECML, 2006, 282–293.

  30. Koes, M., Nourbakhsh, I., & Sycara, K. (2006). Constraint optimization coordination architecture for search and rescue robotics. In Proceedings of IEEE international conference on robotics and automation (pp. 3977–3982). IEEE.

  31. Koller, D., & Parr, R. (2000). Policy iteration for factored MDPs. In Proceedings of the sixteenth conference on uncertainty in artificial intelligence (pp. 326–334). San Francisco, CA: Morgan Kaufmann Publishers Inc.

  32. Lee, Y. M., Ghosh, S., & Ettl, M. (2009, Dec). Simulating distribution of emergency relief supplies for disaster response operations. In Proceedings of the 2009 winter simulation conference (WSC) (pp. 2797–2808). IEEE.

  33. Lenox, T. L., Payne, T., Hahn, S., Lewis, M., & Sycara, K. (2000). Agent-based aiding for individual and team planning tasks. Proceedings of the human factors and Ergonomics Society Annual Meeting, 44(1), 65–68.

  34. Malone, T. W. & Crowston, K. (1990). What is coordination theory and how can it help design cooperative work systems? In Proceedings of the 1990 ACM conference on computer-supported cooperative work—CSCW ’90 (pp. 357–370). New York, NY: ACM.

  35. Mausam, & Kolobov, A. (2012). Planning with Markov decision processes: An AI perspective. Synthesis Lectures on AI and Machine Learning, 6(1), 1–210.

  36. Monares, A., Ochoa, S. F., Pino, J. A., Herskovic, V., Rodriguez-Covili, J., & Neyem, A. (2011). Mobile computing in urban emergency situations: Improving the support to firefighters in the field. Expert Systems with Applications, 38(2), 1255–1267.

  37. Moran, S., Pantidi, N., Bachour, K., Fischer, J. E., Flintham, M., Rodden, T., Evans, S., & Johnson, S. (2013). Team reactions to voiced agent instructions in a pervasive game. In Proceedings of the 2013 international conference on Intelligent user interfaces—IUI ’13 (p. 371).

  38. Murthy, S., Akkiraju, R., Rachlin, J., & Wu, F. (1997). Agent-based cooperative scheduling. In Proceedings of AAAI workshop on constraints and agents (pp. 112–117).

  39. Musliner, D. J., Durfee, E. H., Wu, J., Dolgov, D. A., Goldman, R. P., & Boddy, M. S. (2006). Coordinated plan management using multiagent MDPs. In AAAI spring symposium: Distributed plan and schedule management (pp. 73–80).

  40. Nakajima, Y., Shiina, H., Yamane, S., Ishida, T., & Yamaki, H. (2007, Jan). Disaster evacuation guide: Using a massively multiagent server and GPS mobile phones. In International symposium on applications and the internet, 2007. SAINT 2007 (p. 2).

  41. Padilha, R. P., Gomes, J. O., & Canós, J. H. (2010). The design of collaboration support between command and operation teams during emergency response. Current, 759–763.

  42. Proper, S., & Tadepalli, P. (2009). Solving multi-agent assignment Markov decision processes. Proceedings of AAMAS, 2009, 681–688.

  43. Pujol-Gonzalez, M., Cerquides, J., Farinelli, A., Meseguer, P., & Rodríguez-Aguilar, J. A. (2014). Binary max-sum for multi-team task allocation in robocup rescue. In Optimisation in multi-agent systems and distributed constraint reasoning (OptMAS-DCR), Paris, France, 05/05/2014.

  44. Pynadath, D. V., & Tambe, M. (2002). The communicative multiagent team decision problem: Analyzing teamwork theories and models. Journal of Artificial Intelligence Research, 16, 389–423.

  45. Ramchurn, S. D., Farinelli, A., Macarthur, K. S., & Jennings, N. R. (2010). Decentralized coordination in robocup rescue. The Computer Journal, 53(9), 1447–1461.

  46. Rasmussen, C. E., & Williams, C. K. I. (2006). Gaussian processes for machine learning. Cambridge, MA: MIT.

  47. Reece, S. & Roberts, S. (2010). An introduction to Gaussian processes for the Kalman filter expert. In Proceedings of International conference on information fusion (FUSION) (pp. 1–9). IEEE.

  48. Reece, S., Ghosh, S., Roberts, S., Rogers, A., & Jennings, N. R. (2014). Efficient state–space inference of periodic latent force models. Journal of Machine Learning Research, 15, 2337–2397.

  49. Robinson, C. & Brown, D. (2005). First responder information flow simulation: A tool for technology assessment. In Proceedings of the winter simulation conference, 2005 (pp. 919–925). IEEE.

  50. Scerri, P., Farinelli, A., Okamoto, S., & Tambe, M. (2005). Allocating tasks in extreme teams. In Proceedings of AAMAS (pp. 727–734). New York, NY: ACM.

  51. Scerri, P., Pynadath, D., Johnson, L., Rosenbloom, P., Si, M., Schurr, N., & Tambe, M. (2003). A prototype infrastructure for distributed robot-agent-person teams. In Proceedings of the second international joint conference on autonomous agents and multiagent systems, AAMAS ’03 (pp. 433–440). New York, NY: ACM.

  52. Scerri, P., Tambe, M., & Pynadath, D. V. (2002). Towards adjustable autonomy for the real-world. Journal of Artificial Intelligence Research, 17(1), 171–228.

  53. Schurr, N., Marecki, J., Lewis, J. P., Tambe, M., & Scerri, P. (2005). The defacto system: Training tool for incident commanders. In National conference on artificial intelligence (AAAI) (pp. 1555–1562).

  54. Searle, J. (1975). A taxonomy of illocutionary acts. In K. Günderson (Ed.), Language, mind, and knowledge (Vol. 7, pp. 344–369). Studies in the philosophy of science. Minneapolis: University of Minnesota Press.

  55. Simonović, S. P. (2010). Systems approach to management of disasters. Hoboken, NJ: Wiley.

  56. Skinner, C. & Ramchurn, S. D. (2010). The robocup rescue simulation platform. In AAMAS (pp. 1647–1648). IFAAMAS.

  57. Sukthankar, G., Sycara, K., Giampapa, J. A., & Burnett, C. (2009). Communications for agent-based human team support. In Handbook of research on multi-agent systems: Semantics and dynamics of organizational models (p. 285).

  58. Tambe, M. (2011). Security and game theory: Algorithms, deployed systems lessons learned (1st ed.). New York, NY: Cambridge University Press.

  59. Toups, Z. O., Kerne, A., & Hamilton, W. A. (2011). The team coordination game: Zero-fidelity simulation abstracted from fire emergency response practice. ACM Transactions on Computer–Human Interaction, 18(4), 1–37.

  60. Wagner, T., Phelps, J., Guralnik, V., & VanRiper, R. (2004). An application view of coordinators: Coordination managers for first responders. In AAAI.

  61. Wirz, M., Roggen, D., & Tröster, G. (2010). User acceptance study of a mobile system for assistance during emergency situations at large-scale events. In The 3rd international conference on human-centric computing.

Acknowledgments

This work was done as part of the EPSRC-funded ORCHID Project (EP/I011587/1). We also wish to thank Trung Dong Huynh for generating the traces of the player movements as well as Davide Zilli and Sebastian Stein for initial input on the Responder App. Finally we wish thank the anonymous reviewers for their constructive comments that helped improve the paper.

Corresponding author

Correspondence to Sarvapali D. Ramchurn.

Appendices

Appendix 1: Radiation cloud modelling

The radiation cloud diffusion process is modelled using the Smoluchowski drift-diffusion equation,

$$\begin{aligned} \frac{D \hbox {Rad}(\mathbf{z}, \tau )}{D \tau }=\kappa \triangledown ^2 \hbox {Rad}(\mathbf{z},\tau )-\hbox {Rad}(\mathbf{z},\tau )\triangledown \cdot \mathbf{w}(\mathbf{z},\tau )+\sigma (\mathbf{z},\tau ) \end{aligned}$$
(4)

where \(D/D\tau \) denotes the material derivative, \(\text {Rad}(\mathbf{z},\tau )\) is the radiation cloud intensity at location \(\mathbf{z}=(x,y)\) at time \(\tau \), \(\kappa \) is a fixed diffusion coefficient, and \(\sigma \) is the emission rate of the radiation source(s). The diffusion equation is solved on a regular grid defined across the environment with grid coordinates \(G\) (as defined in Sect. 3.1), at discrete time instances \(\tau \). The cloud is driven by stochastic wind forces which vary both spatially and temporally. These forces induce anisotropy into the cloud diffusion process proportional to the local average wind velocity, \(\mathbf{w}(\mathbf{z},\tau )\). The wind velocity is drawn from two independent Gaussian processes (GPs), one for each Cartesian component, \(w_i(\mathbf{z},\tau )\), of \(\mathbf{w}(\mathbf{z},\tau )\). The GP captures both the spatial distribution of the wind velocity and the dynamic process resulting from shifting wind patterns (e.g., short-term gusts and longer-term variations).
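The paper does not give its numerical scheme, but a minimal sketch of how Eq. (4) can be stepped forward on the grid is an explicit finite-difference update. The grid spacing, time step, and replicated-edge (zero-flux) boundaries below are our assumptions, not the paper's:

```python
import numpy as np

def diffusion_step(rad, wx, wy, sigma, kappa, dx, dt):
    """One explicit finite-difference step of the drift-diffusion equation
    d(Rad)/dt = kappa * laplacian(Rad) - Rad * div(w) + sigma - w . grad(Rad),
    i.e. Eq. (4) with the material derivative expanded. Central differences
    are used for the advection term (no upwinding; fine for a sketch)."""
    # Discrete Laplacian: 5-point stencil, zero-flux boundaries via edge padding
    p = np.pad(rad, 1, mode="edge")
    lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
           - 4.0 * rad) / dx**2
    # Divergence of the wind field (central differences)
    div_w = np.gradient(wx, dx, axis=0) + np.gradient(wy, dx, axis=1)
    # Transport of the cloud by the wind: w . grad(Rad)
    gx, gy = np.gradient(rad, dx)
    advect = wx * gx + wy * gy
    return rad + dt * (kappa * lap - rad * div_w + sigma - advect)
```

Note that the explicit step is only stable for \(\kappa \, dt / dx^2 \lesssim 0.25\); an implicit solver would relax this constraint.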

In our simulation, each spatial wind velocity component is modelled by an isotropic squared-exponential GP covariance function [46], \(K\), with fixed input and output scales, \(l\) and \(\mu \), respectively (although any covariance function can be substituted),

$$\begin{aligned} K(\mathbf{z},\mathbf{z}^\prime )=\mu ^2\exp \left( -(\mathbf{z}-\mathbf{z}^\prime )^T \mathbf{P}^{-1}(\mathbf{z}-\mathbf{z}^\prime )\right) \end{aligned}$$

where \(\mathbf{P}\) is a diagonal covariance matrix with diagonal elements \(l^2\). This choice of covariance function generates wind conditions which vary smoothly in both magnitude and direction across the terrain. Furthermore, as wind conditions may change over time we introduce a temporal correlation coefficient, \(\rho \), to the covariance function. Thus, for a single component, \(w_i\), of \(\mathbf{w}\), defined over grid \(G\) at times \(\tau \) and \(\tau ^\prime \), the wind process covariance function is, \(\hbox {Cov}(w_i(\mathbf{z},\tau ),w_i(\mathbf{z^\prime },\tau ^\prime ))=\rho (\tau ,\tau ^\prime ) K(\mathbf{z},\mathbf{z}^\prime )\). We note that, when \(\rho =1\) the wind velocities are time invariant (although spatially variant). Values of \(\rho <1\) model wind conditions that change over time.
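For concreteness, this covariance can be written as a small function. The sketch below assumes an exponential form \(\rho (\tau ,\tau ^\prime )=\rho _0^{|\tau -\tau ^\prime |}\) for the temporal correlation; the paper itself only requires \(\rho \le 1\), so this particular decay is our illustrative choice:

```python
import numpy as np

def wind_covariance(Z1, Z2, t1, t2, mu, l, rho0):
    """Covariance of one wind-velocity component between locations Z1 (n x 2)
    and Z2 (m x 2) at times t1 and t2:
        Cov = rho(t1, t2) * mu^2 * exp(-(z - z')^T P^{-1} (z - z'))
    with P = diag(l^2, l^2). rho0 = 1 recovers time-invariant (but still
    spatially varying) wind; rho0 < 1 models wind that changes over time."""
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1) / l**2
    K = mu**2 * np.exp(-d2)
    rho_t = rho0 ** abs(t1 - t2)  # assumed exponential temporal correlation
    return rho_t * K
```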

Using the above model, we are able to create a moving radiation cloud. This poses a real challenge both for the HQ (\(PA\) and \(H\)) and for the responders on the ground, as their predictions of where the cloud will move are prone to uncertainty in both the simulated wind speed and direction. While it is possible to use radiation readings provided by first responders as they move through the disaster space, in our trials we assumed that these readings come from sensors already embedded in the environment; this allows the team to focus on path planning for task allocation (the focus of this paper) rather than on radiation monitoring. Using such sensor readings, the prediction algorithm provided in Appendix 2 then provides estimates of the radiation levels across the disaster space during the game. These estimates are displayed as a heat map as described in Sect. 5.

Appendix 2: Predictive model of radiation cloud

Predictions of the cloud's location are performed using a latent force model (LFM) [47, 48]. The LFM is a Markov model that allows the future state of the cloud and wind conditions to be predicted efficiently from the current state. Predictions are computed using the extended Kalman filter (EKF), which has linear computational complexity in the time interval over which the dynamics are predicted forward (see Note 11). The EKF estimates provide both the mean and the variance of the state of the cloud and wind conditions. Figure 6 shows example cloud simulations for slowly varying (i.e., \(\rho =0.99\)) and gusty (i.e., \(\rho =0.90\)) wind conditions. The left panes in each subfigure show the ground-truth simulation obtained by sampling from the LFM. The middle panes show the mean of the cloud and wind conditions, and the right panes show the uncertainty in the conditions.

Fig. 6 Radiation and wind simulation ground truth and EKF estimates obtained using measurements from monitor agents (black dots). The left-most panes show the ground-truth radiation and wind conditions, the middle panes the corresponding estimates, and the right-most panes the state uncertainties, for a slowly varying and b gusty wind conditions. The radiation levels are normalised to the range \([0,\ 1]\).

The radiation is monitored using a number of sensors on the ground that collect readings of the radiation cloud intensity and, optionally, wind velocity every minute of the game. These monitor agents can be at fixed locations, or they can be mobile agents equipped with Geiger counters that inform the user and commander of the local radiation intensity. The measurements can be folded into the EKF, which refines the estimates of both the radiation cloud and the wind conditions across the grid. Figure 6 shows the impact of such measurements on the uncertainty of the cloud and wind conditions. The locations of two monitors are shown as black dots in the upper row of panes in both subfigures. The right-most panes show the relative uncertainty in both the cloud and wind conditions as a result of current and past measurements. Figure 6a shows slowly varying wind conditions, in which case the radiation cloud can be interpolated accurately using sparse sensor measurements and the LFM. In contrast, during gusty conditions the radiation cloud model is more uncertain far from the locations where recent measurements have been taken, as shown in Fig. 6b.
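The predict/fold-in cycle described here is the standard EKF recursion. The sketch below is illustrative only: the generic transition function `f` and observation function `h` stand in for the cloud and wind dynamics of Eq. (4) and the sensor model, which the paper does not spell out at this level:

```python
import numpy as np

def ekf_predict(x, P, f, F_jac, Q, n_steps):
    """Roll the EKF mean x and covariance P forward n_steps without
    measurements; the cost is linear in n_steps, as noted in the text.
    f: nonlinear transition function; F_jac: its Jacobian at x."""
    for _ in range(n_steps):
        F = F_jac(x)
        x = f(x)
        P = F @ P @ F.T + Q
    return x, P

def ekf_update(x, P, z, h, H_jac, R):
    """Fold a sensor reading z (e.g. a Geiger-counter measurement) into the
    state estimate; uncertainty shrinks where measurements are informative."""
    H = H_jac(x)
    y = z - h(x)                      # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```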

Appendix 3: Simulation results of MMDP solution

Before deploying our solution (as part of \(PA\)) to advise human responders, it is important to test its performance to ensure it returns efficient solutions on simulations of the real-world problem. Given that there is no extant solution that takes into account uncertainty in team coordination for emergency response, we compare our algorithm against a greedy and a myopic method to evaluate the benefits of coordination and lookahead. For each method, we use our path planning algorithm to compute the path for each responder. In the greedy method, the responders are uncoordinated and select the closest tasks they can do. In the myopic method, the responders are coordinated when selecting tasks but have no lookahead to future tasks (Line 8 in Algorithm 2). Table 3 shows the results for a problem with 17 tasks and 8 responders on a \(50\times 55\) grid. As can be seen, our MMDP algorithm completes more tasks than the myopic and greedy methods. More importantly, our algorithm guarantees the safety of the responders, whereas in the myopic method only 25% of the responders survive and in the greedy method all responders are killed by the radioactive cloud. More extensive evaluations are beyond the scope of this paper, as our focus here is on the use of the algorithm in a field deployment to test how humans take up advice computed by the planning agent \(PA\).
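The uncoordinated greedy baseline can be sketched in a few lines. This is our own illustration, not the paper's implementation; the Euclidean distance metric and position encoding are assumptions. Because responders do not coordinate, several of them may head for the same task, which is exactly the inefficiency the comparison exposes:

```python
import numpy as np

def greedy_allocation(responders, tasks):
    """Uncoordinated greedy baseline: each responder independently picks
    the task closest to it, ignoring the other responders' choices and
    any future tasks. responders: (n, 2) positions; tasks: (m, 2)
    positions. Returns one chosen task index per responder; duplicate
    choices are possible since there is no coordination."""
    choices = []
    for r in responders:
        dx, dy = (tasks - r).T            # offsets to every task
        dists = np.hypot(dx, dy)          # Euclidean distances
        choices.append(int(np.argmin(dists)))
    return choices
```

A coordinated myopic method would instead resolve such clashes (e.g. assigning each task to at most one responder), and the MMDP solution additionally looks ahead to future tasks and hazards.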

Table 3 Experimental results for the MMDP, myopic, greedy algorithms in simulation

Cite this article

Ramchurn, S.D., Wu, F., Jiang, W. et al. Human–agent collaboration for disaster response. Auton Agent Multi-Agent Syst 30, 82–111 (2016). https://doi.org/10.1007/s10458-015-9286-4
