Trust and Reputation Mechanisms for Multi-agent Robotic Systems
In this paper we analyze the operation of multi-agent robotic systems with decentralized control under destructive information influences from saboteur robots. We consider a class of hidden attacks that intercept messages, form and transmit misinformation to a group of robots, and perform other actions that leave no visible signs of intrusion into the group. We analyze existing trust-based information-security models for multi-agent systems, in which a measure of trust is computed in the course of agent interaction. We propose an information-security mechanism in which agent robots assign trust levels to one another by analyzing the situation at each step of an iterative algorithm, using onboard sensor devices. To improve the similarity metric for objects belonging to one category (“saboteur” or “legitimate agent”), we propose an algorithm that computes agent reputation as a measure of the opinion formed over time, within the group of legitimate agent robots, about the qualities of robots in the “saboteur” category. We show that inter-cluster distance can serve as a quality metric for trust models in multi-agent systems. An example demonstrates the use of the proposed mechanism to detect saboteurs in various situations while running the basic algorithm for target distribution in a group of robots.
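The trust-and-reputation idea summarized above can be sketched as follows. This is a minimal illustration, not the authors' published algorithm: it assumes that each legitimate agent reports per-step trust scores for its peers, that reputation is the time-averaged trust the group holds in an agent, and that the gap between the mean reputations of the two categories serves as the inter-cluster distance quality metric.

```python
# Illustrative sketch of trust accumulation, reputation, and inter-cluster
# distance for a multi-agent system. All names here are hypothetical.
from statistics import mean


def update_reputation(history: dict, agent: str, trust_scores: list) -> float:
    """Record this iteration's trust scores for `agent` and return its
    running reputation (time-averaged trust across all steps so far)."""
    history.setdefault(agent, []).extend(trust_scores)
    return mean(history[agent])


def intercluster_distance(reputations: dict, saboteurs: set) -> float:
    """Distance between the mean reputations of the 'saboteur' and
    'legitimate agent' clusters; larger values mean cleaner separation."""
    sab = [r for a, r in reputations.items() if a in saboteurs]
    leg = [r for a, r in reputations.items() if a not in saboteurs]
    return abs(mean(leg) - mean(sab))
```

Under this sketch, a trust model is "better" when saboteurs' averaged reputations sit far below those of legitimate agents, which is exactly what the inter-cluster distance measures.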
Keywords: Information security · Groups of robots · Multi-agent robotic systems · Attack · Vulnerability · Modeling