
Distributed Algorithms for Multi-Robot Observation of Multiple Moving Targets


An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of observing the movements of targets navigating in a bounded area of interest. A key research issue in these problems is sensor placement: determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, multiple sensors that move dynamically over time are required. In this paper, we investigate the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. In related research, analytical techniques have been developed for solving this problem in complex geometrical environments. However, these previous approaches are very computationally expensive (at least exponential in the number of robots) and cannot be implemented on robots operating in real time. Thus, this paper reports on our studies of a simpler problem involving uncluttered environments: those with either no obstacles or with randomly distributed simple convex obstacles. We focus primarily on developing on-line distributed control strategies that allow the robot team to attempt to minimize the total time in which targets escape observation by some robot team member in the area of interest. This paper first formalizes the problem (which we term CMOMMT, for Cooperative Multi-Robot Observation of Multiple Moving Targets) and discusses related work. We then present a distributed heuristic approach (which we call A-CMOMMT) for solving the CMOMMT problem that uses weighted local force vector control. We analyze the effectiveness of the resulting weighted force vector approach by comparing it to three other approaches. We present the results of experiments, both in simulation and on physical robots, that demonstrate the superiority of the A-CMOMMT approach in situations where the ratio of targets to robots is greater than 1/2.
Finally, we conclude by proposing that the CMOMMT problem makes an excellent domain for studying multi-robot learning in inherently cooperative tasks. This approach is the first of its kind to solve the on-line cooperative observation problem and to be implemented on a physical robot team.
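The "weighted local force vector control" mentioned in the abstract can be illustrated with a minimal sketch: each robot sums attractive forces toward nearby targets and repulsive forces away from nearby teammates, then moves along the resultant. The piecewise magnitude profiles, the distance thresholds, and the per-target weight interface below are hypothetical tuning choices for illustration, not the parameters used in the paper.

```python
import math

# Hypothetical tuning constants (not from the paper): band edges for the
# target attraction profile and for teammate repulsion, in arbitrary units.
DO1, DO2, DO3 = 0.5, 1.0, 3.0   # target: back-off / ramp-up / sensing range
DR1, DR2 = 0.3, 1.5             # teammate: strong-repulse / fade-out range

def target_magnitude(d):
    """Attraction toward a target at distance d: slight repulsion when too
    close, full attraction in a middle band, zero beyond sensing range."""
    if d < DO1:
        return -1.0                      # too close: back off
    if d < DO2:
        return (d - DO1) / (DO2 - DO1)   # ramp up to full attraction
    if d < DO3:
        return 1.0                       # full attraction
    return 0.0                           # out of sensing range

def robot_magnitude(d):
    """Repulsion from a teammate at distance d, fading to zero."""
    if d < DR1:
        return -1.0
    if d < DR2:
        return -(DR2 - d) / (DR2 - DR1)
    return 0.0

def resultant_vector(my_pos, targets, robots, weights=None):
    """Sum the weighted local force vectors for one robot.

    `weights` is a hypothetical hook for down-weighting targets already
    observed by teammates; by default every target counts equally.
    """
    fx = fy = 0.0
    for i, (tx, ty) in enumerate(targets):
        dx, dy = tx - my_pos[0], ty - my_pos[1]
        d = math.hypot(dx, dy)
        if d == 0.0:
            continue
        w = 1.0 if weights is None else weights[i]
        m = w * target_magnitude(d)
        fx += m * dx / d                 # unit vector toward target, scaled
        fy += m * dy / d
    for (rx, ry) in robots:
        dx, dy = rx - my_pos[0], ry - my_pos[1]
        d = math.hypot(dx, dy)
        if d == 0.0:
            continue
        m = robot_magnitude(d)           # negative: pushes away from teammate
        fx += m * dx / d
        fy += m * dy / d
    return fx, fy
```

For example, a lone robot at the origin with one target at distance 2 is pulled straight toward it, while a teammate at distance 0.2 pushes it away; each robot recomputes this resultant from purely local sensing on every control cycle, which is what makes the approach distributed.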





Parker, L.E. Distributed Algorithms for Multi-Robot Observation of Multiple Moving Targets. Autonomous Robots 12, 231–255 (2002).
