A Robot-Based Cognitive Assessment Model Based on Visual Working Memory and Attention Level
Vocational assessment is the process of identifying and evaluating an individual’s level of functioning in relation to vocational preparation. In this research, we designed a framework to assess and train users’ visual working memory and attention level using a humanoid robot and a brain headband sensor. The humanoid robot generates a sequence of colors, and the user performs the task by arranging colored blocks in the same order. In addition, a task-switching paradigm is used to switch between tasks and colors, with the robot issuing a new instruction to the user. The humanoid robot provides guidance and error-detection information, observes the user’s performance during the assessment, and gives instructive feedback. This research describes the profile of cognitive and behavioral characteristics associated with visual working memory skills and selective attention, and ways of supporting the learning needs of workers affected by deficits in these areas. Finally, the research draws conclusions about the relationship between visual working memory and attention level across different levels of the assessment.
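The assessment procedure described above (the robot generates a color sequence, the user reproduces it with blocks, and a task-switching rule occasionally changes the instruction) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the color set, the difficulty-to-length mapping, and the "reverse" switch rule are all assumptions introduced here to make the task-switching paradigm concrete.

```python
import random

# Hypothetical color palette; the paper does not specify the block colors.
COLORS = ["red", "green", "blue", "yellow"]

def generate_sequence(length, rng=random):
    """Generate a random color sequence for the user to reproduce."""
    return [rng.choice(COLORS) for _ in range(length)]

def score_response(target, response):
    """Count positions where the user's block order matches the target."""
    return sum(t == r for t, r in zip(target, response))

def run_trial(level, switch_prob=0.3, rng=random):
    """One assessment trial: sequence length grows with the difficulty
    level, and with probability switch_prob the task rule switches
    (here, illustratively, to reproducing the sequence in reverse)."""
    seq = generate_sequence(2 + level, rng)
    task = "reverse" if rng.random() < switch_prob else "same"
    target = list(reversed(seq)) if task == "reverse" else seq
    return seq, task, target
```

A scoring function like `score_response` would let the robot compare the observed block arrangement against the target order and deliver the instructive feedback the framework describes.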
Keywords: Human-robot interaction · Visual working memory assessment · Computer vision · Sequence learning · Socially assistive robots
This work is supported in part by the National Science Foundation under Grant NSF-CNS 1338118. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.