Trust and Reputation Mechanisms for Multi-agent Robotic Systems

  • Igor A. Zikratov
  • Ilya S. Lebedev
  • Andrei V. Gurtov
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8638)

Abstract

In this paper we analyze the operation of multi-agent robotic systems with decentralized control under destructive information influence from saboteur robots. We consider a class of hidden attacks based on intercepting messages, forming and transmitting misinformation to a group of robots, and performing other actions that leave no visible signs of intrusion into the group. We review existing information-security models for multi-agent systems that rely on a measure of trust computed during the interaction of agents. We propose an information-security mechanism in which agent robots derive trust levels for one another by analyzing, with their onboard sensor devices, the situation that develops at a given step of an iterative algorithm. To improve the similarity metric for objects belonging to the same category ("saboteur" or "legitimate agent"), we propose an algorithm for computing agent reputation as a measure of the public opinion formed over time, within the group of legitimate agent robots, about the qualities of robots in the "saboteur" category. We show that the inter-cluster distance can serve as a quality metric for trust models in multi-agent systems. An example demonstrates the use of the developed mechanism to detect saboteurs in different situations under the basic target-distribution algorithm for a group of robots.
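
The mechanism summarized above can be illustrated with a short, self-contained sketch. The snippet below is a minimal illustration under assumed formulas: trust on each step is taken as one minus the normalized discrepancy between a peer's reported value and the value measured by the local onboard sensors, reputation is the time average of those trust values, and model quality is measured as the distance between the mean reputations of the legitimate and saboteur clusters. All names (AgentOpinion, update_trust, inter_cluster_distance) and the concrete formulas are illustrative assumptions, not the authors' actual equations.

```python
# Minimal sketch of a trust/reputation scheme of the kind described in the
# abstract. The formulas and names are illustrative assumptions, not the
# paper's actual model.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class AgentOpinion:
    """Trust and reputation that one legitimate agent holds about a peer."""
    trust_history: list = field(default_factory=list)

    def update_trust(self, reported_value: float, sensed_value: float,
                     tolerance: float = 1.0) -> float:
        # Trust on this step: compare the peer's broadcast claim with what
        # the local onboard sensors actually observed (assumed scalars).
        error = abs(reported_value - sensed_value)
        trust = max(0.0, 1.0 - error / tolerance)
        self.trust_history.append(trust)
        return trust

    def reputation(self) -> float:
        # Reputation as "public opinion formed over time", modelled here
        # simply as the mean of all per-step trust values.
        return mean(self.trust_history) if self.trust_history else 0.5


def inter_cluster_distance(legit_reps: list, saboteur_reps: list) -> float:
    """Distance between the reputation clusters of legitimate agents and
    suspected saboteurs; larger values mean better class separation."""
    return abs(mean(legit_reps) - mean(saboteur_reps))


if __name__ == "__main__":
    # Legitimate peer: reports agree with local sensing.
    honest = AgentOpinion()
    for reported, sensed in [(2.0, 2.1), (3.0, 2.9), (5.0, 5.0)]:
        honest.update_trust(reported, sensed)

    # Saboteur: broadcasts misinformation that contradicts local sensing.
    saboteur = AgentOpinion()
    for reported, sensed in [(2.0, 4.5), (3.0, 0.5), (5.0, 8.0)]:
        saboteur.update_trust(reported, sensed)

    print(inter_cluster_distance([honest.reputation()],
                                 [saboteur.reputation()]))
```

In this toy setting a legitimate robot whose reports consistently match sensor observations accumulates trust values near 1 and a high reputation, while a saboteur injecting misinformation drifts toward 0; the gap returned by inter_cluster_distance then indicates how well the trust model separates the two classes.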

Keywords

Information security · groups of robots · multi-agent robotic systems · attack · vulnerability · modeling

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Igor A. Zikratov (1)
  • Ilya S. Lebedev (1)
  • Andrei V. Gurtov (2)
  1. ITMO University, Russia
  2. Helsinki Institute for Information Technology HIIT and Department of Computer Science and Engineering, Aalto University, Finland