Semantic Representation for Communication Between Human and Wireless Robot

  • Chuho Yi
  • Jungwon Cho
  • Il Hong Suh


We have developed a human-navigation-inspired semantic map-building method for communication between humans and wireless robots. Three human navigation strategies (dead reckoning, egocentric place recognition, and reorientation) are integrated into our proposed method as an egocentric-semantic topology representation, an egocentric measurement representation, and reorientation rules, respectively. In experiments, we successfully built a semantic representation and estimated the robot's location by converting inaccurate distance and angle measurements into approximate symbolic values with the proposed computing method. Semantic representation building took place in a 14 × 26.5 m corridor. The proposed method enables a robot to navigate effectively in a service environment using navigation schemes inspired by humans and their actions.


Keywords: Semantic mapping · Communication (HRI) · Wireless robot
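The core computing step described above, converting inaccurate metric readings into approximate symbolic measurements, can be sketched as a simple quantization. The bin boundaries and symbol names below are hypothetical, chosen only for illustration; the paper's actual thresholds are not reproduced here:

```python
# Illustrative sketch of the symbolizing step: raw metric readings are
# coarsened into discrete spatial-relation symbols. Bin edges are
# assumptions, not the authors' values.

def symbolize_distance(rho_m):
    """Map a measured distance rho (metres) to a coarse distance symbol."""
    if rho_m < 1.0:
        return "near"
    if rho_m < 4.0:
        return "middle"
    return "far"

def symbolize_angle(omega_deg):
    """Map a measured bearing omega (degrees) to a coarse direction symbol."""
    omega_deg = omega_deg % 360.0
    if omega_deg < 45.0 or omega_deg >= 315.0:
        return "front"
    if omega_deg < 135.0:
        return "left"
    if omega_deg < 225.0:
        return "back"
    return "right"

# Example: a landmark 2.3 m away at a bearing of 100 degrees
print(symbolize_distance(2.3), symbolize_angle(100.0))  # middle left
```

Because the symbols absorb measurement noise, two slightly different readings of the same landmark map to the same spatial relation, which is what makes the representation usable for human-robot communication.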

List of symbols

\({{\uprho }}, {{\upomega }}, {{\upzeta }}\)

Measured distance and angle to landmark and measured angle between landmarks

\(s^{\rho } , s^{\omega } , s^{\zeta }\)

Spatial landmark relations: distance and angle to landmarks and angle between landmarks transformed by the symbolizing function for the spatial relation

\({{ }^{i}}v\)

Node \(i\) as an egocentric-topological robot location

\({{ }^{i}}{S_{a,b}} = \left\{ {{{ }^{i}}s_{a}^{\rho } ,{{ }^{i}}s_{a}^{\omega } ,{{ }^{i}}s_{a}^{\zeta } } \right\}\)

Spatial landmark relations containing distance and angle to landmark and angle between landmarks in node \(i\)

\({ }^{i,j} S_{v} = \left\{ {{ }^{i,j} s_{v}^{\rho } ,{ }^{i,j} s_{v}^{\omega } } \right\}\)

Spatial node relations containing distance and angle between nodes \(i\) and \(j\)

\(^{i} {{\Theta }}_{oA} = {\texttt{localAxis}}({{ }^{i}}{\texttt{oA}}, {{ }^{i}}v)\)

Local coordinate system generated by reference landmark \({\texttt{oA}}\) at node \(i\)

\(^{i} {{\Gamma }} = \{^{i} {{\Theta }}_{{oA \in^{i} O}} ,^{i} {\text{S}}_{a,b} \}_{{a,b \in^{i} O}}\)

Egocentric-semantic topology of node \(i\), where \(^{i} O\) is the set of landmarks in node \(i\)

\(^{i,j} {{\Psi }} = \{ {{ }^{i}}{{\Theta }}_{{oA \in^{i} O}} ,{ }^{i,j} {\text{S}}_{v} \}_{i,j \in E}\)

Relative link in the topological representation

\({\text{M}} = \langle \{{{ }^{i}}\Gamma \}_{i \in V} ,\{{ }^{i,j} \Psi \}_{i,j \in E} \rangle\)

Semantic representation

\({{ }^{i}}X_{t} = \left[ {\begin{array}{lll} {{{ }^{i}}x_{t} } &\quad {{{ }^{i}}y_{t} } &\quad {{{ }^{i}}\theta_{t} } \\ \end{array} } \right]^{T}\)

Regional-measured robot location with respect to node \(i\) at time \(t\)

\({{ }^{i}}{{\Omega }}_{t} = \langle \begin{array}{ll} {{{ }^{i}}v} &\quad {{{ }^{i}}X_{t} } \\ \end{array} \rangle\)

Robot location on the egocentric-semantic representation
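Taken together, the symbols above define the semantic representation \(M\) as a graph: egocentric nodes \(\Gamma\) holding symbolic landmark relations, connected by relative links \(\Psi\). A minimal sketch of that structure (the class and field names are illustrative assumptions, not the authors' code):

```python
from dataclasses import dataclass, field

@dataclass
class LandmarkRelation:
    """Symbolic spatial relation S_{a,b} between landmarks a and b in a node."""
    s_rho: str      # symbolic distance to the landmark
    s_omega: str    # symbolic angle to the landmark
    s_zeta: str     # symbolic angle between the landmarks

@dataclass
class Node:
    """Egocentric-semantic topology node (Gamma)."""
    node_id: int                                    # v: topological location
    relations: dict = field(default_factory=dict)   # (a, b) -> LandmarkRelation

@dataclass
class Link:
    """Relative link (Psi) between two nodes in the topology."""
    src: int
    dst: int
    s_rho: str      # symbolic distance between the nodes
    s_omega: str    # symbolic angle between the nodes

@dataclass
class SemanticMap:
    """M = <{Gamma}_V, {Psi}_E>: nodes plus relative links."""
    nodes: dict = field(default_factory=dict)   # node_id -> Node
    links: list = field(default_factory=list)   # list of Link

# Build a tiny map: one node with one landmark relation, one outgoing link.
m = SemanticMap()
m.nodes[0] = Node(0, {("door", "window"): LandmarkRelation("near", "left", "wide")})
m.links.append(Link(0, 1, "middle", "front"))
print(len(m.nodes), len(m.links))  # 1 1
```

Because every quantity stored is a symbol rather than a metric coordinate, a node can be communicated to a human ("the door is near, on the left") without exposing raw sensor values.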




Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Electronics, Dong Seoul University, Seongnam, South Korea
  2. Department of Computer Education, Jeju National University, Jeju, South Korea
  3. Department of Electronics and Computer Engineering, Hanyang University, Seoul, South Korea
