# An Introduction to Hyperdimensional Computing for Robotics

• Technical Contribution

## Abstract

Hyperdimensional computing combines very high-dimensional vector spaces (e.g., 10,000-dimensional) with a set of carefully designed operators to perform symbolic computations with large numerical vectors. The goal is to exploit their representational power and noise robustness for a broad range of computational tasks. Although there are surprising and impressive results in the literature, applications to practical problems in robotics have so far been very limited. In this work, we aim to provide an accessible introduction to the underlying mathematical concepts and describe the existing computational implementations in the form of vector symbolic architectures (VSAs). This is accompanied by references to existing applications of VSAs in the literature. To bridge the gap to practical applications, we describe and experimentally demonstrate the application of VSAs to three different robotic tasks: viewpoint-invariant object recognition, place recognition, and learning of simple reactive behaviors. The paper closes with a discussion of current limitations and open questions.
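To make the operator set concrete, here is a minimal, self-contained sketch of the two core VSA operators, bundling and binding, on random bipolar vectors. This is illustrative code, not an implementation from the paper; the {-1, +1} encoding with elementwise multiplication as binding follows the common MAP-style architecture, and the dimensionality is chosen arbitrarily.

```python
import math
import random

random.seed(0)
D = 10_000  # dimensionality; random vectors in such spaces are almost orthogonal

def rand_vec():
    """Random bipolar hypervector with entries drawn from {-1, +1}."""
    return [random.choice((-1, 1)) for _ in range(D)]

def cos_sim(a, b):
    """Cosine similarity, the usual VSA similarity measure."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def bundle(a, b):
    """Bundling (+): the result remains similar to both inputs."""
    return [x + y for x, y in zip(a, b)]

def bind(a, b):
    """Binding (elementwise *): the result is dissimilar to both inputs;
    binding with b a second time recovers a, since b*b is the all-ones vector."""
    return [x * y for x, y in zip(a, b)]

a, b = rand_vec(), rand_vec()
print(round(cos_sim(a, b), 2))                    # near 0: almost orthogonal
print(round(cos_sim(bundle(a, b), a), 2))         # ~0.7: similar to its parts
print(round(cos_sim(bind(a, b), a), 2))           # near 0: dissimilar to its parts
print(round(cos_sim(bind(bind(a, b), b), a), 2))  # 1.0: exact unbinding
```

These four similarity values summarize why the operators are useful: bundling builds set-like representations that stay recognizable, while binding creates key-value associations that do not interfere with their inputs and can be inverted.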


## Notes

1. n-sphere: a hypersphere in the (n+1)-dimensional space. n-cap: portion of an n-sphere cut off by a hyperplane.

2. Please keep in mind that the surface area of an n-sphere is an n-dimensional measure, thus the unit along the vertical axis changes with n and the values along the curves are not directly comparable. Nevertheless, the fact that there is a local maximum of the surface area of the almost-orthogonal range is surprising. However, it is a direct consequence of the local maximum of the surface area of the whole unit n-sphere (which in turn becomes intuitive from the recursive expression for the surface area, $$A_{n+1} = A_{n-1} \cdot \frac{2\pi }{n}$$, since for $$n>2\pi$$ this factor becomes smaller than one).

3. Similar experiments can, e.g., be found in [31] and [2]; analytical results on VSA capacity can be found in [7].

4. Details: the red curve in the left plot evaluates vector similarities (the query image index q is known and we compare the similarity of $$I^k_x + I^k_y$$ and $$I^{q=k}_z$$); the red curve in the right plot evaluates the accuracy of a nearest-neighbor query (the query image index q is not known to the system and it returns the index k of the nearest neighbor to $$I^q_z$$ among all $$I^k_x + I^k_y$$, $$k\in \{1 \ldots 1000\}$$). x is fixed at viewing angle $$0^{\circ }$$; y varies from $$0^{\circ }$$ to $$350^{\circ }$$. The horizontal axis is the mean angular distance from z to x and y. As a reading example: in the left plot, the red curve evaluated at $$90^{\circ }$$ means that for $$x=0^{\circ }$$, $$y=180^{\circ }$$, $$z=90^{\circ }$$ (e.g. the images from Fig. 6), the average cosine distance between the bundle $$(I^k_0 + I^k_{180})$$ and $$I^k_{90}$$ is about 0.17, and the right plot tells us that for about 53% of the objects the query image was most similar to the correct bundle. For comparison without bundling, the blue curves in Fig. 7 show the results when comparing the query image to the individual images $$I^k_x$$ and $$I^k_y$$ (instead of their bundle). For the distance evaluation in the left plot, we use the closer of the two individual results for each query. For the query results in the right plot, all views $$I^k_x$$ and $$I^k_y$$ are stored in the database and a single query is made (the number of database entries, and thus comparisons, has now doubled compared to the bundling approach). The VSA approach not only reduces the number of comparisons, it also performs slightly better than the individual comparisons in both plots.

5. This work was previously presented at an IROS workshop, see [26] for details.
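The surface-area behavior discussed in note 2 is easy to verify numerically. The sketch below (an illustrative check, not code from the paper) uses the closed form $$A_n = 2\pi^{(n+1)/2}/\Gamma(\tfrac{n+1}{2})$$ for the unit n-sphere, following the convention of note 1 that the n-sphere lives in (n+1)-dimensional space, and confirms both the recursion and the location of the maximum.

```python
import math

def sphere_area(n):
    """Surface area A_n of the unit n-sphere (a hypersphere in (n+1)-dimensional
    space, as in note 1): A_n = 2 * pi^((n+1)/2) / Gamma((n+1)/2)."""
    return 2 * math.pi ** ((n + 1) / 2) / math.gamma((n + 1) / 2)

# Verify the recursion A_{n+1} = A_{n-1} * 2*pi/n; once n > 2*pi the factor
# drops below one, so the areas eventually shrink towards zero.
for n in range(2, 30):
    assert math.isclose(sphere_area(n + 1), sphere_area(n - 1) * 2 * math.pi / n)

# The maximum over integer n lies at n = 6, where A_6 = 16*pi^3/15 (about 33.07).
areas = {n: sphere_area(n) for n in range(1, 50)}
n_max = max(areas, key=areas.get)
print(n_max, round(areas[n_max], 2))
```

Sanity checks against familiar cases: the formula gives $$A_1 = 2\pi$$ (circle circumference) and $$A_2 = 4\pi$$ (surface of the ordinary sphere).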
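The bundling-based view recognition of note 4 can be sketched with synthetic data. Everything below is an illustrative assumption rather than the paper's actual pipeline: random Gaussian vectors stand in for CNN image descriptors, and viewpoint change is modeled as additive noise around a per-object identity vector, so that different views of the same object remain correlated. The bundle of two views of an object then stays the nearest database entry for a third, unseen view.

```python
import math
import random

random.seed(1)
D, K = 10_000, 50  # dimensionality and number of objects (illustrative sizes)

def rand_vec():
    return [random.gauss(0, 1) for _ in range(D)]

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def bundle(a, b):
    return [x + y for x, y in zip(a, b)]

# Toy stand-in for image descriptors: each view of object k is the object's
# identity vector plus view-dependent Gaussian noise.
objects = [rand_vec() for _ in range(K)]
def view(k):
    return [o + random.gauss(0, 1) for o in objects[k]]

# Database: a single bundled vector per object, built from two views
# (instead of storing both views separately).
database = [bundle(view(k), view(k)) for k in range(K)]

# Query with a third, previously unseen view and return the nearest bundle.
q = 7
query = view(q)
best = max(range(K), key=lambda k: cosine(query, database[k]))
print(best)  # index of the recognized object
```

As in the note, the bundled database needs only half as many comparisons per query as storing the individual views would.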

## References

1. Aggarwal CC, Hinneburg A, Keim DA (2001) On the surprising behavior of distance metrics in high dimensional space. In: Van den Bussche J, Vianu V (eds) Database theory—ICDT 2001. Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 420–434

2. Ahmad S, Hawkins J (2015) Properties of sparse distributed representations and their application to hierarchical temporal memory. CoRR arxiv:abs/1503.07469

3. Bellman RE (1961) Adaptive Control Processes: A Guided Tour. MIT Press, Cambridge

4. Beyer K, Goldstein J, Ramakrishnan R, Shaft U (1999) When Is nearest neighbor meaningful? In: Beeri C, Buneman P (eds) Database theory—ICDT’99. Springer, Berlin Heidelberg, Berlin, Heidelberg, pp 217–235

5. Danihelka I, Wayne G, Uria B, Kalchbrenner N, Graves A (2016) Associative long short-term memory. In: Balcan MF, Weinberger KQ (eds) Proceedings of the 33rd international conference on machine learning, proceedings of machine learning research, vol 48. PMLR, New York, pp 1986–1994. http://proceedings.mlr.press/v48/danihelka16.html

6. Eliasmith C, Stewart TC, Choo X, Bekolay T, DeWolf T, Tang Y, Rasmussen D (2012) A large-scale model of the functioning brain. Science 338(6111):1202–1205. https://doi.org/10.1126/science.1225266. http://science.sciencemag.org/content/338/6111/1202

7. Frady EP, Kleyko D, Sommer FT (2018) A theory of sequence indexing and working memory in recurrent neural networks. Neural Comput 30(6):1449–1513. https://doi.org/10.1162/neco_a_01084

8. Gayler RW (1998) Multiplicative binding, representation operators, and analogy. In: Advances in analogy research: integration of theory and data from the cognitive, computational, and neural sciences. Bulgaria

9. Gayler RW (2003) Vector symbolic architectures answer Jackendoff’s challenges for cognitive neuroscience. In: Proc. of ICCS/ASCS Int. Conf. on cognitive science, pp 133–138. Sydney, Australia

10. Geusebroek JM, Burghouts GJ, Smeulders AWM (2005) The Amsterdam library of object images. Int J Comput Vis 61(1):103–112

11. Hastie T, Tibshirani R, Friedman J (2009) The elements of statistical learning: data mining, inference and prediction, 2 edn. Springer. http://www-stat.stanford.edu/~tibs/ElemStatLearn/

12. Hawkins J, Ahmad S (2016) Why neurons have thousands of synapses, a theory of sequence memory in neocortex. Front Neural Circ 10:23. https://doi.org/10.3389/fncir.2016.00023

13. Jackendoff R (2002) Foundations of language (brain, meaning, grammar, evolution). Oxford University Press, Oxford

14. Joshi A, Halseth JT, Kanerva P (2017) Language geometry using random indexing. In: de Barros JA, Coecke B, Pothos E (eds) Quantum interaction. Springer International Publishing, Cham, pp 265–274

15. Kanerva P (1997) Fully distributed representation. In: Proc. of real world computing symposium, pp 358–365. Tokyo, Japan

16. Kanerva P (2009) Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cognit Comput 1(2):139–159

17. Kanerva P (2014) Computing with 10,000-bit words. In: 2014 52nd annual Allerton conference on communication, control, and computing (Allerton), pp 304–310. https://doi.org/10.1109/ALLERTON.2014.7028470

18. Kleyko D, Osipov E, Gayler RW, Khan AI, Dyer AG (2015) Imitation of honey bees’ concept learning processes using Vector Symbolic Architectures. Biol Inspired Cognit Arch 14:57–72. https://doi.org/10.1016/j.bica.2015.09.002

19. Kleyko D, Osipov E, Papakonstantinou N, Vyatkin V, Mousavi A (2015) Fault detection in the hyperspace: towards intelligent automation systems. In: 2015 IEEE 13th international conference on industrial informatics (INDIN), pp 1219–1224. https://doi.org/10.1109/INDIN.2015.7281909

20. Kleyko D, Rahimi A, Rachkovskij DA, Osipov E, Rabaey JM (2018) Classification and recall with binary hyperdimensional computing: tradeoffs in choice of density and mapping characteristics. IEEE Trans Neural Netw Learn Syst 29(12):5880–5898. https://doi.org/10.1109/TNNLS.2018.2814400

21. Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges C, Bottou L, Weinberger K (eds) Advances in neural information processing systems, vol 25. Curran Associates, Inc., pp 1097–1105. http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

22. Levy SD, Bajracharya S, Gayler RW (2013) Learning behavior hierarchies via high-dimensional sensor projection. In: Proc. of AAAI conference on learning rich representations from low-level sensors, AAAIWS’13–12, pp 25–27

23. Milford M, Wyeth GF (2012) SeqSLAM: visual route-based navigation for sunny summer days and stormy winter nights. In: Proceedings of the IEEE international conference on robotics and automation (ICRA)

24. Neubert P, Ahmad S, Protzel P (2018) A sequence-based neuronal model for mobile robot localization. In: Proc of KI: advances in artificial intelligence

25. Neubert P, Protzel P (2015) Local region detector + CNN based landmarks for practical place recognition in changing environments. In: Proceedings of the European conference on mobile robotics (ECMR)

26. Neubert P, Schubert S, Protzel P (2016) Learning vector symbolic architectures for reactive robot behaviours. In: Proc of Intl Conf on intelligent robots and systems (IROS) workshop on machine learning methods for high-level cognitive capabilities in robotics

27. Osipov E, Kleyko D, Legalov A (2017) Associative synthesis of finite state automata model of a controlled object with hyperdimensional computing. In: IECON 2017—43rd annual conference of the IEEE industrial electronics society, pp 3276–3281. https://doi.org/10.1109/IECON.2017.8216554

28. Plate TA (1994) Distributed representations and nested compositional structure. Ph.D. thesis, Toronto, Ont., Canada

29. Purdy S (2016) Encoding data for HTM systems. CoRR arxiv:abs/1602.05925

30. Rachkovskij DA, Slipchenko SV (2012) Similarity-based retrieval with structure-sensitive sparse binary distributed representations. Comput Intell 28(1):106–129. https://doi.org/10.1111/j.1467-8640.2011.00423.x

31. Rahimi A, Datta S, Kleyko D, Frady EP, Olshausen B, Kanerva P, Rabaey JM (2017) High-dimensional computing as a nanoscalable paradigm. IEEE Trans Circ Syst I Regular Pap 64(9):2508–2521. https://doi.org/10.1109/TCSI.2017.2705051

32. Smolensky P (1990) Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artif Intell 46(1–2):159–216

33. Sünderhauf N, Dayoub F, Shirazi S, Upcroft B, Milford M (2015) On the performance of ConvNet features for place recognition. CoRR arxiv:abs/1501.04158

34. Sünderhauf N, Neubert P, Protzel P (2013) Are we there yet? Challenging SeqSLAM on a 3000 km journey across all four seasons. In: Proceedings of the IEEE international conference on robotics and automation (ICRA), workshop on long-term autonomy

35. Sünderhauf N, Brock O, Scheirer W, Hadsell R, Fox D, Leitner J, Upcroft B, Abbeel P, Burgard W, Milford M, Corke P (2018) The limits and potentials of deep learning for robotics. Int J Robot Res 37(4–5):405–420. https://doi.org/10.1177/0278364918770733

36. Thrun S, Burgard W, Fox D (2005) Probabilistic robotics (intelligent robotics and autonomous agents). The MIT Press, Cambridge

37. Widdows D, Cohen T (2015) Reasoning with vectors: a continuous model for fast robust inference. Logic J IGPL/Interest Group Pure Appl Logics 2:141–173

38. Yerxa T, Anderson A, Weiss E (2018) The hyperdimensional stack machine. In: Proceedings of Cognitive Computing, Hannover, pp 1–2

## Author information


### Corresponding author

Correspondence to Peer Neubert.

## Cite this article

Neubert, P., Schubert, S. & Protzel, P. An Introduction to Hyperdimensional Computing for Robotics. Künstl Intell 33, 319–330 (2019). https://doi.org/10.1007/s13218-019-00623-z