Sensor Modeling Using Visual Object Relation in Multi Robot Object Tracking
In this paper we present a novel approach to estimating the position of objects tracked by a team of mobile robots. Moving objects are commonly modeled in a robot-centric coordinate frame because this information is sufficient for most low-level robot control and is independent of the quality of the robot's current localization. For multiple robots to cooperate and share information, however, they need to agree on a global, allocentric frame of reference. When the egocentric object model is transformed into a global one, it inherits the robot's localization error in addition to the error already present in the egocentric model.
We propose using the relation of objects detected in a camera image to other objects in the same image as the basis for estimating the object's position in a global coordinate system. The spatial relation of objects to stationary objects (e.g., landmarks) offers several advantages: (a) errors in feature detection within a single frame are correlated rather than independent, and the error in the relative positions of objects within one camera frame is comparably small; (b) the information is independent of robot localization and odometry; (c) as a consequence, it provides a highly efficient way to communicate information about a tracked object, and communication can be asynchronous.
We present experimental evidence that shows how two robots are able to infer the position of an object within a global frame of reference, even though they are not localized themselves.
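The core geometric idea can be sketched as follows: if a robot observes the ball and landmarks with known global positions in the same camera frame, the egocentric observations alone determine the ball's global position, without any self-localization. This is a minimal 2D sketch assuming two visible landmarks; the function name and interface are illustrative, and the approach described in the paper is a probabilistic estimator rather than this closed-form construction.

```python
import math

def ball_global_from_relations(p1, p2, g1, g2, ball_ego):
    """Estimate a ball's global position from one camera frame.

    p1, p2:   egocentric (robot-relative) positions of two landmarks
    g1, g2:   known global positions of those landmarks
    ball_ego: egocentric position of the ball in the same frame
    All points are (x, y) tuples.
    """
    # Rotation aligning the egocentric landmark baseline with the global one.
    theta = (math.atan2(g2[1] - g1[1], g2[0] - g1[0])
             - math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
    c, s = math.cos(theta), math.sin(theta)

    def rot(p):
        return (c * p[0] - s * p[1], s * p[0] + c * p[1])

    # Translation mapping the rotated first landmark onto its global position.
    r1 = rot(p1)
    t = (g1[0] - r1[0], g1[1] - r1[1])

    # Apply the same rigid transform to the ball observation.
    bx, by = rot(ball_ego)
    return (bx + t[0], by + t[1])

# Example: a robot at (1, 1) heading +y sees landmarks at global (0, 0) and
# (4, 0) egocentrically as (-1, 1) and (-1, -3); the ball appears at (1, -1).
print(ball_global_from_relations((-1, 1), (-1, -3), (0, 0), (4, 0), (1, -1)))
# → (2.0, 2.0), the ball's global position, recovered without localization
```

Note that the transform is fixed entirely by landmark observations from a single frame, so per-frame detection errors affect all objects in that frame coherently, which is exactly the correlation property the abstract exploits.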
Keywords: Mobile Robot, Camera Image, Object Relation, Ball Position, Percept Relation